diff --git a/docs/X/appendixes.md b/docs/en/appendixes.md similarity index 100% rename from docs/X/appendixes.md rename to docs/en/appendixes.md diff --git a/docs/X/appendixes.zh.md b/docs/en/appendixes.zh.md similarity index 100% rename from docs/X/appendixes.zh.md rename to docs/en/appendixes.zh.md diff --git a/docs/X/biblio.md b/docs/en/biblio.md similarity index 100% rename from docs/X/biblio.md rename to docs/en/biblio.md diff --git a/docs/X/biblio.zh.md b/docs/en/biblio.zh.md similarity index 100% rename from docs/X/biblio.zh.md rename to docs/en/biblio.zh.md diff --git a/docs/X/brin-builtin-opclasses.md b/docs/en/brin-builtin-opclasses.md similarity index 100% rename from docs/X/brin-builtin-opclasses.md rename to docs/en/brin-builtin-opclasses.md diff --git a/docs/X/brin-builtin-opclasses.zh.md b/docs/en/brin-builtin-opclasses.zh.md similarity index 100% rename from docs/X/brin-builtin-opclasses.zh.md rename to docs/en/brin-builtin-opclasses.zh.md diff --git a/docs/X/contrib-prog-server.md b/docs/en/contrib-prog-server.md similarity index 100% rename from docs/X/contrib-prog-server.md rename to docs/en/contrib-prog-server.md diff --git a/docs/X/contrib-prog-server.zh.md b/docs/en/contrib-prog-server.zh.md similarity index 100% rename from docs/X/contrib-prog-server.zh.md rename to docs/en/contrib-prog-server.zh.md diff --git a/docs/X/cube.md b/docs/en/cube.md similarity index 100% rename from docs/X/cube.md rename to docs/en/cube.md diff --git a/docs/X/cube.zh.md b/docs/en/cube.zh.md similarity index 100% rename from docs/X/cube.zh.md rename to docs/en/cube.zh.md diff --git a/docs/X/datatype-textsearch.md b/docs/en/datatype-textsearch.md similarity index 100% rename from docs/X/datatype-textsearch.md rename to docs/en/datatype-textsearch.md diff --git a/docs/X/datatype-textsearch.zh.md b/docs/en/datatype-textsearch.zh.md similarity index 100% rename from docs/X/datatype-textsearch.zh.md rename to docs/en/datatype-textsearch.zh.md diff --git a/docs/X/different-replication-solutions.md b/docs/en/different-replication-solutions.md similarity index 100% rename from docs/X/different-replication-solutions.md rename to docs/en/different-replication-solutions.md diff --git a/docs/X/different-replication-solutions.zh.md b/docs/en/different-replication-solutions.zh.md similarity index 100% rename from docs/X/different-replication-solutions.zh.md rename to docs/en/different-replication-solutions.zh.md diff --git a/docs/X/ecpg-descriptors.md b/docs/en/ecpg-descriptors.md similarity index 100% rename from docs/X/ecpg-descriptors.md rename to docs/en/ecpg-descriptors.md diff --git a/docs/X/ecpg-descriptors.zh.md b/docs/en/ecpg-descriptors.zh.md similarity index 100% rename from docs/X/ecpg-descriptors.zh.md rename to docs/en/ecpg-descriptors.zh.md diff --git a/docs/X/ecpg-pgtypes.md b/docs/en/ecpg-pgtypes.md similarity index 100% rename from docs/X/ecpg-pgtypes.md rename to docs/en/ecpg-pgtypes.md diff --git a/docs/en/ecpg-pgtypes.zh.md b/docs/en/ecpg-pgtypes.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..7e5b1c306cfc001f33ff40305f977ee68f38e252 --- /dev/null +++ b/docs/en/ecpg-pgtypes.zh.md @@ -0,0 +1,765 @@ +## 36.6. pgtypes Library + +[36.6.1. Character Strings](ecpg-pgtypes.html#ECPG-PGTYPES-CSTRINGS) + +[36.6.2. The numeric Type](ecpg-pgtypes.html#ECPG-PGTYPES-NUMERIC) + +[36.6.3. The date Type](ecpg-pgtypes.html#ECPG-PGTYPES-DATE) + +[36.6.4. The timestamp Type](ecpg-pgtypes.html#ECPG-PGTYPES-TIMESTAMP) + +[36.6.5. 
The interval Type](ecpg-pgtypes.html#ECPG-PGTYPES-INTERVAL)
+
+[36.6.6. The decimal Type](ecpg-pgtypes.html#ECPG-PGTYPES-DECIMAL)
+
+[36.6.7. errno Values of pgtypeslib](ecpg-pgtypes.html#ECPG-PGTYPES-ERRNO)
+
+[36.6.8. Special Constants of pgtypeslib](ecpg-pgtypes.html#ECPG-PGTYPES-CONSTANTS)
+
+The pgtypes library maps PostgreSQL database types to C equivalents that can be used in C programs. It also offers functions to do basic calculations with those types within C, i.e., without the help of the PostgreSQL server. See the following example:
+
+```
+EXEC SQL BEGIN DECLARE SECTION;
+   date date1;
+   timestamp ts1, tsout;
+   interval iv1;
+   char *out;
+EXEC SQL END DECLARE SECTION;
+
+PGTYPESdate_today(&date1);
+EXEC SQL SELECT started, duration INTO :ts1, :iv1 FROM datetbl WHERE d=:date1;
+PGTYPEStimestamp_add_interval(&ts1, &iv1, &tsout);
+out = PGTYPEStimestamp_to_asc(&tsout);
+printf("Started + duration: %s\n", out);
+PGTYPESchar_free(out);
+```
+
+### 36.6.1. Character Strings
+
+Some functions such as `PGTYPESnumeric_to_asc` return a pointer to a freshly allocated character string. These results should be freed with `PGTYPESchar_free` instead of `free`. (This is important only on Windows, where memory allocation and release sometimes need to be done by the same library.)
+
+### 36.6.2. The numeric Type
+
+The numeric type offers calculations with arbitrary precision. See [Section 8.1](datatype-numeric.html) for the equivalent type in the PostgreSQL server. Because of the arbitrary precision, this variable needs to be able to expand and shrink dynamically. That's why you can only create numeric variables on the heap, by means of the `PGTYPESnumeric_new` and `PGTYPESnumeric_free` functions. The decimal type, which is similar but limited in precision, can be created on the stack as well as on the heap.
+
+The following functions can be used to work with the numeric type:
+
+`PGTYPESnumeric_new`
+
+Request a pointer to a newly allocated numeric variable.
+
+```
+numeric *PGTYPESnumeric_new(void);
+```
+
+`PGTYPESnumeric_free`
+
+Free a numeric type, releasing all of its memory.
+
+```
+void PGTYPESnumeric_free(numeric *var);
+```
+
+`PGTYPESnumeric_from_asc`
+
+Parse a numeric type from its string notation.
+
+```
+numeric *PGTYPESnumeric_from_asc(char *str, char **endptr);
+```
+
+Valid formats are for example: `-2`, `.794`, `+3.44`, `592.49E07` or `-32.84e-4`. If the value could be parsed successfully, a valid pointer is returned, otherwise the NULL pointer. At the moment ECPG always parses the complete string, so it currently does not support storing the address of the first invalid character in `*endptr`. You can safely set `endptr` to NULL.
+
+`PGTYPESnumeric_to_asc`
+
+Returns a pointer to a string allocated by `malloc` that contains the string representation of the numeric type `num`.
+
+```
+char *PGTYPESnumeric_to_asc(numeric *num, int dscale);
+```
+
+The numeric value will be printed with `dscale` decimal digits, with rounding applied if necessary. The result must be freed with `PGTYPESchar_free()`.
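+
+As a minimal sketch of how these allocation rules fit together (assuming an ECPG installation that provides the `pgtypes.h` and `pgtypes_numeric.h` headers and linking with `-lpgtypes`), the following program parses a numeric value from text, prints it rounded to two decimal digits, and releases both the string and the variable:
+
+```
+#include <stdio.h>
+#include <pgtypes.h>
+#include <pgtypes_numeric.h>
+
+int
+main(void)
+{
+    char    *text;
+    numeric *num = PGTYPESnumeric_from_asc("592.49E07", NULL);
+
+    if (num == NULL)
+        return 1;                         /* the string could not be parsed */
+
+    text = PGTYPESnumeric_to_asc(num, 2); /* round to 2 decimal digits */
+    printf("parsed: %s\n", text);
+
+    PGTYPESchar_free(text);               /* not free(); see Section 36.6.1 */
+    PGTYPESnumeric_free(num);
+    return 0;
+}
+```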
+
+`PGTYPESnumeric_add`
+
+Add two numeric variables into a third one.
+
+```
+int PGTYPESnumeric_add(numeric *var1, numeric *var2, numeric *result);
+```
+
+The function adds the variables `var1` and `var2` into the result variable `result`. The function returns 0 on success and -1 in case of error.
+
+`PGTYPESnumeric_sub`
+
+Subtract two numeric variables and return the result in a third variable.
+
+```
+int PGTYPESnumeric_sub(numeric *var1, numeric *var2, numeric *result);
+```
+
+The function subtracts the variable `var2` from the variable `var1`. The result of the operation is stored in the variable `result`. The function returns 0 on success and -1 in case of error.
+
+`PGTYPESnumeric_mul`
+
+Multiply two numeric variables and return the result in a third variable.
+
+```
+int PGTYPESnumeric_mul(numeric *var1, numeric *var2, numeric *result);
+```
+
+The function multiplies the variables `var1` and `var2`. The result of the operation is stored in the variable `result`. The function returns 0 on success and -1 in case of error.
+
+`PGTYPESnumeric_div`
+
+Divide two numeric variables and return the result in a third variable.
+
+```
+int PGTYPESnumeric_div(numeric *var1, numeric *var2, numeric *result);
+```
+
+The function divides the variable `var1` by `var2`. The result of the operation is stored in the variable `result`. The function returns 0 on success and -1 in case of error.
+
+`PGTYPESnumeric_cmp`
+
+Compare two numeric variables.
+
+```
+int PGTYPESnumeric_cmp(numeric *var1, numeric *var2)
+```
+
+This function compares two numeric variables. In case of error, `INT_MAX` is returned. On success, the function returns one of three possible results:
+
+- 1, if `var1` is bigger than `var2`
+
+- -1, if `var1` is smaller than `var2`
+
+- 0, if `var1` and `var2` are equal
+
+`PGTYPESnumeric_from_int`
+
+Convert an int variable to a numeric variable.
+
+```
+int PGTYPESnumeric_from_int(signed int int_val, numeric *var);
+```
+
+This function accepts a variable of type signed int and stores it in the numeric variable `var`. It returns 0 on success and -1 in case of failure.
+
+`PGTYPESnumeric_from_long`
+
+Convert a long int variable to a numeric variable.
+
+```
+int PGTYPESnumeric_from_long(signed long int long_val, numeric *var);
+```
+
+This function accepts a variable of type signed long int and stores it in the numeric variable `var`. It returns 0 on success and -1 in case of failure.
+
+`PGTYPESnumeric_copy`
+
+Copy one numeric variable into another one.
+
+```
+int PGTYPESnumeric_copy(numeric *src, numeric *dst);
+```
+
+This function copies the value of the variable that `src` points to into the variable that `dst` points to. It returns 0 on success and -1 if an error occurs.
+
+`PGTYPESnumeric_from_double`
+
+Convert a variable of type double to a numeric.
+
+```
+int PGTYPESnumeric_from_double(double d, numeric *dst);
+```
+
+This function accepts a variable of type double and stores the result in the variable that `dst` points to. It returns 0 on success and -1 if an error occurs.
+
+`PGTYPESnumeric_to_double`
+
+Convert a variable of type numeric to double.
+
+```
+int PGTYPESnumeric_to_double(numeric *nv, double *dp)
+```
+
+The function converts the numeric value from the variable that `nv` points to into the double variable that `dp` points to. It returns 0 on success and -1 if an error occurs, including overflow. On overflow, the global variable `errno` will additionally be set to `PGTYPES_NUM_OVERFLOW`.
+
+`PGTYPESnumeric_to_int`
+
+Convert a variable of type numeric to int.
+
+```
+int PGTYPESnumeric_to_int(numeric *nv, int *ip);
+```
+
+The function converts the numeric value from the variable that `nv` points to into the integer variable that `ip` points to. It returns 0 on success and -1 if an error occurs, including overflow. On overflow, the global variable `errno` will additionally be set to `PGTYPES_NUM_OVERFLOW`.
+
+`PGTYPESnumeric_to_long`
+
+Convert a variable of type numeric to long.
+
+```
+int PGTYPESnumeric_to_long(numeric *nv, long *lp);
+```
+
+The function converts the numeric value from the variable that `nv` points to into the long integer variable that `lp` points to. It returns 0 on success and -1 if an error occurs, including overflow. On overflow, the global variable `errno` will additionally be set to `PGTYPES_NUM_OVERFLOW`.
+
+`PGTYPESnumeric_to_decimal`
+
+Convert a variable of type numeric to decimal.
+
+```
+int PGTYPESnumeric_to_decimal(numeric *src, decimal *dst);
+```
+
+The function converts the numeric value from the variable that `src` points to into the decimal variable that `dst` points to. It returns 0 on success and -1 if an error occurs, including overflow. On overflow, the global variable `errno` will additionally be set to `PGTYPES_NUM_OVERFLOW`.
+
+`PGTYPESnumeric_from_decimal`
+
+Convert a variable of type decimal to numeric.
+
+```
+int PGTYPESnumeric_from_decimal(decimal *src, numeric *dst);
+```
+
+The function converts the decimal value from the variable that `src` points to into the numeric variable that `dst` points to. It returns 0 on success and -1 if an error occurs. Since the decimal type is implemented as a limited version of the numeric type, overflow cannot occur with this conversion.
+
+### 36.6.3. The date Type
+
+The date type in C enables your programs to deal with data of the SQL type date. See [Section 8.5](datatype-datetime.html) for the equivalent type in the PostgreSQL server.
+
+The following functions can be used to work with the date type:
+
+`PGTYPESdate_from_timestamp`
+
+Extract the date part from a timestamp.
+
+```
+date PGTYPESdate_from_timestamp(timestamp dt);
+```
+
+The function receives a timestamp as its only argument and returns the extracted date part from this timestamp.
+
+`PGTYPESdate_from_asc`
+
+Parse a date from its textual representation.
+
+```
+date PGTYPESdate_from_asc(char *str, char **endptr);
+```
+
+The function receives a C char* string `str` and a pointer to a C char* string `endptr`. At the moment ECPG always parses the complete string, so it currently does not support storing the address of the first invalid character in `*endptr`. You can safely set `endptr` to NULL.
+
+Note that the function always assumes MDY-formatted dates and there is currently no variable to change that within ECPG.
+
+[Table 36.2](ecpg-pgtypes.html#ECPG-PGTYPESDATE-FROM-ASC-TABLE) shows the allowed input formats.
+
+**Table 36.2. Valid Input Formats for `PGTYPESdate_from_asc`**
+
+| Input | Result |
+| --- | --- |
+| `January 8, 1999` | `January 8, 1999` |
+| `1999-01-08` | `January 8, 1999` |
+| `1/8/1999` | 
`January 8, 1999` |
+| `1/18/1999` | `January 18, 1999` |
+| `01/02/03` | `February 1, 2003` |
+| `1999-Jan-08` | `January 8, 1999` |
+| `Jan-08-1999` | `January 8, 1999` |
+| `08-Jan-1999` | `January 8, 1999` |
+| `99-01-08` | `January 8, 1999` |
+| `08-01-99` | `January 8, 1999` |
+| `08-01-06` | `January 8, 2006` |
+| `Jan-08-99` | `January 8, 1999` |
+| `19990108` | `ISO 8601; January 8, 1999` |
+| `990108` | `ISO 8601; January 8, 1999` |
+| `1999.008` | `year and day of year` |
+| `J2451187` | `Julian day` |
+| `January 8, 99 BC` | `year 99 BC` |
+
+`PGTYPESdate_to_asc`
+
+Return the textual representation of a date variable.
+
+```
+char *PGTYPESdate_to_asc(date dDate);
+```
+
+The function receives the date `dDate` as its only parameter. It will output the date in the form `1999-01-18`, i.e., in the `YYYY-MM-DD` format. The result must be freed with `PGTYPESchar_free()`.
+
+`PGTYPESdate_julmdy`
+
+Extract the values for the day, the month and the year from a variable of type date.
+
+```
+void PGTYPESdate_julmdy(date d, int *mdy);
+```
+
+The function receives the date `d` and a pointer to an array of 3 integer values `mdy`. The variable name indicates the sequential order: `mdy[0]` will be set to contain the number of the month, `mdy[1]` will be set to the value of the day and `mdy[2]` will contain the year.
+
+`PGTYPESdate_mdyjul`
+
+Create a date value from an array of 3 integers that specify the day, the month and the year of the date.
+
+```
+void PGTYPESdate_mdyjul(int *mdy, date *jdate);
+```
+
+The function receives the array of the 3 integers (`mdy`) as its first argument and as its second argument a pointer to a variable of type date that should hold the result of the operation.
+
+`PGTYPESdate_dayofweek`
+
+Return a number representing the day of the week for a date value.
+
+```
+int PGTYPESdate_dayofweek(date d);
+```
+
+The function receives the date variable `d` as its only argument and returns an integer that indicates the day of the week for this date.
+
+- 0 - Sunday
+
+- 1 - Monday
+
+- 2 - Tuesday
+
+- 3 - Wednesday
+
+- 4 - Thursday
+
+- 5 - Friday
+
+- 6 - Saturday
+
+`PGTYPESdate_today`
+
+Get the current date.
+
+```
+void PGTYPESdate_today(date *d);
+```
+
+The function receives a pointer to a date variable (`d`) that it sets to the current date.
+
+`PGTYPESdate_fmt_asc`
+
+Convert a variable of type date to its textual representation using a format mask.
+
+```
+int PGTYPESdate_fmt_asc(date dDate, char *fmtstring, char *outbuf);
+```
+
+The function receives the date to convert (`dDate`), the format mask (`fmtstring`) and the string that will hold the textual representation of the date (`outbuf`).
+
+On success, 0 is returned and a negative value if an error occurred.
+
+The following literals are the field specifiers you can use:
+
+- `dd` - The number of the day of the month.
+
+- `mm` - The number of the month of the year.
+
+- `yy` - The number of the year as a two digit number.
+
+- `yyyy` - The number of the year as a four digit number.
+
+- `ddd` - The name of the day (abbreviated).
+
+- `mmm` - The name of the month (abbreviated).
+
+  All other characters are copied 1:1 to the output string.
+
+[Table 36.3](ecpg-pgtypes.html#ECPG-PGTYPESDATE-FMT-ASC-EXAMPLE-TABLE) indicates a few possible formats. This will give you an idea of how to use this function. All output lines are based on the same date: November 23, 1959.
+
+**Table 36.3. Valid Input Formats for `PGTYPESdate_fmt_asc`**
+
+| Format | Result |
+| ---- | --- |
+| `mmddyy` | `112359` |
+| `ddmmyy` | `231159` |
+| `yymmdd` | `591123` |
+| `yy/mm/dd` | `59/11/23` |
+| `yy mm dd` | `59 11 23` |
+| `yy.mm.dd` | `59.11.23` |
+| `.mm.yyyy.dd.` | `.11.1959.23.` |
+| `mmm. dd, yyyy` | `Nov. 23, 1959` |
+| `mmm dd yyyy` | `Nov 23 1959` |
+| `yyyy dd mm` | `1959 23 11` |
+| `ddd, mmm. dd, yyyy` | `Mon, Nov. 23, 1959` |
+| `(ddd) mmm. dd, yyyy` | `(Mon) Nov. 23, 1959` |
+
+`PGTYPESdate_defmt_asc`
+
+Use a format mask to convert a C `char*` string to a value of type date.
+
+```
+int PGTYPESdate_defmt_asc(date *d, char *fmt, char *str);
+```
+
+The function receives a pointer to the date value that should hold the result of the operation (`d`), the format mask to use for parsing the date (`fmt`) and the C char* string containing the textual representation of the date (`str`). The textual representation is expected to match the format mask. However, you do not need a 1:1 mapping of the string to the format mask. The function only analyzes the sequential order and looks for the literals `yy` or `yyyy` that indicate the position of the year, `mm` to indicate the position of the month and `dd` to indicate the position of the day.
+
+[Table 36.4](ecpg-pgtypes.html#ECPG-RDEFMTDATE-EXAMPLE-TABLE) indicates a few possible formats. This will give you an idea of how to use this function.
+
+**Table 36.4. Valid Input Formats for `rdefmtdate`**
+
+| Format | String | Result |
+| ---- | --- | --- |
+| `ddmmyy` | `21-2-54` | `1954-02-21` |
+| `ddmmyy` | `2-12-54` | `1954-12-02` |
+| `ddmmyy` | `20111954` | `1954-11-20` |
+| `ddmmyy` | `130464` | `1964-04-13` |
+| `mmm.dd.yyyy` | `MAR-12-1967` | `1967-03-12` |
+| `yy/mm/dd` | `1954, February 3rd` | `1954-02-03` |
+| `mmm.dd.yyyy` | `041269` | `1969-04-12` |
+| `yy/mm/dd` | `In the year 2525, in the month of July, mankind will be alive on the 28th day` | `2525-07-28` |
+| `dd-mm-yy` | `I said on the 28th of July in the year 2525` | `2525-07-28` |
+| `mmm.dd.yyyy` | `9/14/58` | `1958-09-14` |
+| `yy/mm/dd` | `47/03/29` | `1947-03-29` |
+| `mmm.dd.yyyy` | `oct 28 1975` | `1975-10-28` |
+| `mmddyy` | `Nov 14th, 1985` | `1985-11-14` |
+
+### 36.6.4. The timestamp Type
+
+The timestamp type in C enables your programs to deal with data of the SQL type timestamp. See [Section 8.5](datatype-datetime.html) for the equivalent type in the PostgreSQL server.
+
+The following functions can be used to work with the timestamp type:
+
+`PGTYPEStimestamp_from_asc`
+
+Parse a timestamp from its textual representation into a timestamp variable.
+
+```
+timestamp PGTYPEStimestamp_from_asc(char *str, char **endptr);
+```
+
+The function receives the string to parse (`str`) and a pointer to a C char* (`endptr`). At the moment ECPG always parses the complete string, so it currently does not support storing the address of the first invalid character in `*endptr`. You can safely set `endptr` to NULL.
+
+The function returns the parsed timestamp on success. On error, `PGTYPESInvalidTimestamp` is returned and `errno` is set to `PGTYPES_TS_BAD_TIMESTAMP`. See [`PGTYPESInvalidTimestamp`](ecpg-pgtypes.html#PGTYPESINVALIDTIMESTAMP) for important notes on this value.
+
+In general, the input string can contain any combination of an allowed date specification, a whitespace character and an allowed time specification. Note that time zones are not supported by ECPG. It can parse them but does not apply any calculation as the PostgreSQL server does for example. Timezone specifiers are silently discarded.
+
+[Table 36.5](ecpg-pgtypes.html#ECPG-PGTYPESTIMESTAMP-FROM-ASC-EXAMPLE-TABLE) contains a few examples for input strings.
+
+**Table 36.5. Valid Input Formats for `PGTYPEStimestamp_from_asc`**
+
+| Input | Result |
+| --- | --- |
+| `1999-01-08 04:05:06` | `1999-01-08 04:05:06` |
+| `January 8 04:05:06 1999 PST` | `1999-01-08 04:05:06` |
+| `1999-01-08 04:05:06.789-8` | `1999-01-08 04:05:06.789 (time zone specifier ignored)` |
+| `J2451187 04:05-08:00` | `1999-01-08 04:05:00 (time zone specifier ignored)` |
+
+`PGTYPEStimestamp_to_asc`
+
+Converts a timestamp to a C char* string.
+
+```
+char *PGTYPEStimestamp_to_asc(timestamp tstamp);
+```
+
+The function receives the timestamp `tstamp` as its only argument and returns an allocated string that contains the textual representation of the timestamp. The result must be freed with `PGTYPESchar_free()`.
+
+`PGTYPEStimestamp_current`
+
+Retrieve the current timestamp.
+
+```
+void PGTYPEStimestamp_current(timestamp *ts);
+```
+
+The function retrieves the current timestamp and saves it into the timestamp variable that `ts` points to.
+
+`PGTYPEStimestamp_fmt_asc`
+
+Convert a timestamp variable to a C char* using a format mask.
+
+```
+int PGTYPEStimestamp_fmt_asc(timestamp *ts, char *output, int str_len, char *fmtstr);
+```
+
+The function receives a pointer to the timestamp to convert as its first argument (`ts`), a pointer to the output buffer (`output`), the maximal length that has been allocated for the output buffer (`str_len`) and the format mask to use for the conversion (`fmtstr`).
+
+Upon success, the function returns 0 and a negative value if an error occurred.
+
+You can use the following format specifiers for the format mask. The format specifiers are the same ones that are used in the `strftime` function in libc. Any non-format specifier will be copied into the output buffer.
+
+- `%A` - is replaced by national representation of the full weekday name.
+
+- `%a` - is replaced by national representation of the abbreviated weekday name.
+
+- `%B` - is replaced by national representation of the full month name.
+
+- `%b` - is replaced by national representation of the abbreviated month name.
+
+- `%C` - is replaced by (year / 100) as a decimal number; single digits are preceded by a zero.
+
+- `%c` - is replaced by national representation of time and date.
+
+- `%D` - is equivalent to `%m/%d/%y`.
+
+- `%d` - is replaced by the day of the month as a decimal number (01–31).
+
+- `%E*` `%O*` - POSIX locale extensions. The sequences `%Ec` `%EC` `%Ex` `%EX` `%Ey` `%EY` `%Od` `%Oe` `%OH` `%OI` `%Om` `%OM` `%OS` `%Ou` `%OU` `%OV` `%Ow` `%OW` `%Oy` are supposed to provide alternative representations.
+
+  Additionally `%OB` is implemented to represent alternative month names (used standalone, without day mentioned).
+
+- `%e` - is replaced by the day of the month as a decimal number (1–31); single digits are preceded by a blank.
+
+- `%F` - is equivalent to `%Y-%m-%d`.
+
+- `%G` - is replaced by a year as a decimal number with century. This year is the one that contains the greater part of the week (Monday as the first day of the week).
+
+- `%g` - is replaced by the same year as in `%G`, but as a decimal number without century (00–99).
+
+- `%H` - is replaced by the hour (24-hour clock) as a decimal number (00–23).
+
+- `%h` - the same as `%b`.
+
+- `%I` - is replaced by the hour (12-hour clock) as a decimal number (01–12).
+
+- `%j` - is replaced by the day of the year as a decimal number (001–366).
+
+- `%k` - is replaced by the hour (24-hour clock) as a decimal number (0–23); single digits are preceded by a blank.
+
+- `%l` - is replaced by the hour (12-hour clock) as a decimal number (1–12); single digits are preceded by a blank.
+
+- `%M` - is replaced by the minute as a decimal number (00–59).
+
+- `%m` - is replaced by the month as a decimal number (01–12).
+
+- `%n` - is replaced by a newline.
+
+- `%O*` - the same as `%E*`.
+
+- `%p` - is replaced by national representation of either "ante meridiem" or "post meridiem" as appropriate.
+
+- `%R` - is equivalent to `%H:%M`.
+
+- `%r` - is equivalent to `%I:%M:%S %p`.
+
+- `%S` - is replaced by the second as a decimal number (00–60).
+
+- `%s` - is replaced by the number of seconds since the Epoch, UTC.
+
+- `%T` - is equivalent to `%H:%M:%S`
+
+- `%t` - is replaced by a tab.
+
+- `%U` - is replaced by the week number of the year (Sunday as the first day of the week) as a decimal number (00–53).
+
+- `%u` - is replaced by the weekday (Monday as the first day of the week) as a decimal number (1–7).
+
+- `%V` - is replaced by the week number of the year (Monday as the first day of the week) as a decimal number (01–53). If the week containing January 1 has four or more days in the new year, then it is week 1; otherwise it is the last week of the previous year, and the next week is week 1.
+
+- `%v` - is equivalent to `%e-%b-%Y`.
+
+- `%W` - is replaced by the week number of the year (Monday as the first day of the week) as a decimal number (00–53).
+
+- `%w` - is replaced by the weekday (Sunday as the first day of the week) as a decimal number (0–6).
+
+- `%X` - is replaced by national representation of the time.
+
+- `%x` - is replaced by national representation of the date.
+
+- `%Y` - is replaced by the year with century as a decimal number.
+
+- `%y` - is replaced by the year without century as a decimal number (00–99).
+
+- `%Z` - is replaced by the time zone name.
+
+- `%z` - is replaced by the time zone offset from UTC; a leading plus sign stands for east of UTC, a minus sign for west of UTC, hours and minutes follow with two digits each and no delimiter between them (common form for [RFC 822](https://tools.ietf.org/html/rfc822) date headers).
+
+- `%+` - is replaced by national representation of the date and time.
+
+- `%-*` - GNU libc extension. Do not do any padding when performing numerical outputs.
+
+- `%_*` - GNU libc extension. Explicitly specify space for padding.
+
+- `%0*` - GNU libc extension. Explicitly specify zero for padding.
+
+- `%%` - is replaced by `%`.
+
+`PGTYPEStimestamp_sub`
+
+Subtract one timestamp from another one and save the result in a variable of type interval.
+
+```
+int PGTYPEStimestamp_sub(timestamp *ts1, timestamp *ts2, interval *iv);
+```
+
+The function will subtract the timestamp variable that `ts2` points to from the timestamp variable that `ts1` points to and will store the result in the interval variable that `iv` points to.
+
+Upon success, the function returns 0 and a negative value if an error occurred.
+
+`PGTYPEStimestamp_defmt_asc`
+
+Parse a timestamp value from its textual representation using a format mask.
+
+```
+int PGTYPEStimestamp_defmt_asc(char *str, char *fmt, timestamp *d);
+```
+
+The function receives the textual representation of a timestamp in the variable `str` as well as the format mask to use in the variable `fmt`. The result will be stored in the variable that `d` points to.
+
+If the format mask `fmt` is NULL, the function will fall back to the default format mask, which is `%Y-%m-%d %H:%M:%S`.
+
+This is the reverse function to [`PGTYPEStimestamp_fmt_asc`](ecpg-pgtypes.html#PGTYPESTIMESTAMPFMTASC). See the documentation there in order to find out about the possible format mask entries.
+
+`PGTYPEStimestamp_add_interval`
+
+Add an interval variable to a timestamp variable.
+
+```
+int PGTYPEStimestamp_add_interval(timestamp *tin, interval *span, timestamp *tout);
+```
+
+The function receives a pointer to a timestamp variable `tin` and a pointer to an interval variable `span`. It adds the interval to the timestamp and saves the resulting timestamp in the variable that `tout` points to.
+
+Upon success, the function returns 0 and a negative value if an error occurred.
+
+`PGTYPEStimestamp_sub_interval`
+
+Subtract an interval variable from a timestamp variable.
+
+```
+int PGTYPEStimestamp_sub_interval(timestamp *tin, interval *span, timestamp *tout);
+```
+
+The function subtracts the interval variable that `span` points to from the timestamp variable that `tin` points to and saves the result into the variable that `tout` points to.
+
+Upon success, the function returns 0 and a negative value if an error occurred.
+
+### 36.6.5. The interval Type
+
+The interval type in C enables your programs to deal with data of the SQL type interval. See [Section 8.5](datatype-datetime.html) for the equivalent type in the PostgreSQL server.
+
+The following functions can be used to work with the interval type:
+
+`PGTYPESinterval_new`
+
+Return a pointer to a newly allocated interval variable.
+
+```
+interval *PGTYPESinterval_new(void);
+```
+
+`PGTYPESinterval_free`
+
+Release the memory of a previously allocated interval variable.
+
+```
+void PGTYPESinterval_free(interval *intvl);
+```
+
+`PGTYPESinterval_from_asc`
+
+Parse an interval from its textual representation.
+
+```
+interval *PGTYPESinterval_from_asc(char *str, char **endptr);
+```
+
+The function parses the input string `str` and returns a pointer to an allocated interval variable. At the moment ECPG always parses the complete string and so it currently does not support to store the address of the first invalid character in `*endptr`. You can safely set `endptr` to NULL.
+
+`PGTYPESinterval_to_asc`
+
+Convert a variable of type interval to its textual representation.
+
+```
+char *PGTYPESinterval_to_asc(interval *span);
+```
+
+The function converts the interval variable that `span` points to into a C char*. The output looks like this example: `@ 1 day 12 hours 59 mins 10 secs`. The result must be freed with `PGTYPESchar_free()`.
+
+`PGTYPESinterval_copy`
+
+Copy a variable of type interval.
+
+```
+int PGTYPESinterval_copy(interval *intvlsrc, interval *intvldest);
+```
+
+The function copies the interval variable that `intvlsrc` points to into the variable that `intvldest` points to. Note that you need to allocate the memory for the destination variable beforehand.
+
+### 36.6.6. The decimal Type
+
+The decimal type is similar to the numeric type. However, it is limited to a maximum precision of 30 significant digits. In contrast to the numeric type, which can be created on the heap only, the decimal type can be created either on the stack or on the heap (by means of the functions `PGTYPESdecimal_new` and `PGTYPESdecimal_free`). There are a lot of other functions that deal with the decimal type in the Informix compatibility mode described in [Section 36.15](ecpg-informix-compat.html).
+
+The following functions can be used to work with the decimal type and are not only contained in the `libcompat` library:
+
+`PGTYPESdecimal_new`
+
+Request a pointer to a newly allocated decimal variable.
+
+```
+decimal *PGTYPESdecimal_new(void);
+```
+
+`PGTYPESdecimal_free`
+
+Free a decimal type, releasing all of its memory.
+
+```
+void PGTYPESdecimal_free(decimal *var);
+```
+
+### 36.6.7. errno Values of pgtypeslib
+
+`PGTYPES_NUM_BAD_NUMERIC`
+
+An argument should contain a numeric variable (or point to a numeric variable) but in fact its in-memory representation was invalid.
+
+`PGTYPES_NUM_OVERFLOW`
+
+An overflow occurred. Since the numeric type can deal with almost arbitrary precision, converting a numeric variable into other types might cause overflow.
+
+`PGTYPES_NUM_UNDERFLOW`
+
+An underflow occurred. Since the numeric type can deal with almost arbitrary precision, converting a numeric variable into other types might cause underflow.
+
+`PGTYPES_NUM_DIVIDE_ZERO`
+
+A division by zero has been attempted.
+
+`PGTYPES_DATE_BAD_DATE`
+
+An invalid date string was passed to the `PGTYPESdate_from_asc` function.
+
+`PGTYPES_DATE_ERR_EARGS`
+
+Invalid arguments were passed to the `PGTYPESdate_defmt_asc` function.
+
+`PGTYPES_DATE_ERR_ENOSHORTDATE`
+
+An invalid token in the input string was found by the `PGTYPESdate_defmt_asc` function.
+
+`PGTYPES_INTVL_BAD_INTERVAL`
+
+An invalid interval string was passed to the `PGTYPESinterval_from_asc` function, or an invalid interval value was passed to the `PGTYPESinterval_to_asc` function.
+
+`PGTYPES_DATE_ERR_ENOTDMY`
+
+There was a mismatch in the day/month/year assignment in the `PGTYPESdate_defmt_asc` function.
+
+`PGTYPES_DATE_BAD_DAY`
+
+An invalid day of the month value was found by the `PGTYPESdate_defmt_asc` function.
+
+`PGTYPES_DATE_BAD_MONTH`
+
+An invalid month value was found by the `PGTYPESdate_defmt_asc` function.
+
+`PGTYPES_TS_BAD_TIMESTAMP`
+
+An invalid timestamp string was passed to the `PGTYPEStimestamp_from_asc` function, or an invalid timestamp value was passed to the `PGTYPEStimestamp_to_asc` function.
+
+`PGTYPES_TS_ERR_EINFTIME`
+
+An infinite timestamp value was encountered in a context that cannot handle it.
+
+### 36.6.8. Special Constants of pgtypeslib
+
+`PGTYPESInvalidTimestamp`
+
+A value of type timestamp representing an invalid timestamp. This is returned by the function `PGTYPEStimestamp_from_asc` on parse error. Note that due to the internal representation of the `timestamp` data type, `PGTYPESInvalidTimestamp` is also a valid timestamp at the same time. It is set to `1899-12-31 23:59:59`. In order to detect errors, make sure that your application does not only test for `PGTYPESInvalidTimestamp` but also for `errno != 0` after each call to `PGTYPEStimestamp_from_asc`.
diff --git a/docs/X/ecpg-preproc.md b/docs/en/ecpg-preproc.md
similarity index 100%
rename from docs/X/ecpg-preproc.md
rename to docs/en/ecpg-preproc.md
diff --git a/docs/en/ecpg-preproc.zh.md b/docs/en/ecpg-preproc.zh.md
new file mode 100644
index 0000000000000000000000000000000000000000..9a00392ce2e3ac98a9f035dee2124ca6201fe516
--- /dev/null
+++ b/docs/en/ecpg-preproc.zh.md
@@ -0,0 +1,121 @@
+## 36.9. Preprocessor Directives
+
+[36.9.1. Including Files](ecpg-preproc.html#ECPG-INCLUDE)
+
+[36.9.2. The define and undef Directives](ecpg-preproc.html#ECPG-DEFINE)
+
+[36.9.3. 
ifdef、ifndef、elif、else和endif指令](ecpg-preproc.html#ECPG-IFDEF) + +可以使用几个预处理器指令来修改`ecpg`预处理器解析并处理文件。 + +### 36.9.1.包括文件 + +要在嵌入式SQL程序中包含外部文件,请使用: + +``` +EXEC SQL INCLUDE filename; +EXEC SQL INCLUDE ; +EXEC SQL INCLUDE "filename"; +``` + +嵌入式SQL预处理器将查找名为`*`文件名`*h`,对其进行预处理,并将其包含在生成的C输出中。因此,可以正确处理包含文件中的嵌入式SQL语句。 + +这个`ecpg`预处理器将按以下顺序在多个目录中搜索文件: + +- 当前目录 +- `/usr/本地/包括` +- PostgreSQL包含目录,在构建时定义(例如。,`/usr/local/pgsql/include`) +- `/usr/包括` + + 但是什么时候`EXEC SQL INCLUDE“*`文件名`*"`则只搜索当前目录。 + + 在每个目录中,预处理器将首先查找给定的文件名,如果找不到,将追加`H`添加到文件名,然后重试(除非指定的文件名已具有该后缀)。 + + 注意`EXEC SQL包括`是*不*一样: + + +``` +#include +``` + +因为该文件不受SQL命令预处理的约束。当然,您可以继续使用C`#包括`指令以包含其他头文件。 + +### 笔记 + +include文件名区分大小写,尽管`EXEC SQL包括`命令遵循正常的SQL区分大小写规则。 + +### 36.9.2.define和undef指令 + +与指令类似`#定义`从C语言中可以看出,嵌入式SQL有一个类似的概念: + +``` +EXEC SQL DEFINE name; +EXEC SQL DEFINE name value; +``` + +所以你可以定义一个名字: + +``` +EXEC SQL DEFINE HAVE_FEATURE; +``` + +你也可以定义常数: + +``` +EXEC SQL DEFINE MYNUMBER 12; +EXEC SQL DEFINE MYSTRING 'abc'; +``` + +使用`未定义`要删除以前的定义,请执行以下操作: + +``` +EXEC SQL UNDEF MYNUMBER; +``` + +当然,您可以继续使用C版本`#定义`和`#未定义`在嵌入式SQL程序中。不同之处在于对定义的值进行评估的地方。如果你使用`EXEC SQL定义`然后`ecpg`预处理器评估定义并替换值。例如,如果你写: + +``` +EXEC SQL DEFINE MYNUMBER 12; +... +EXEC SQL UPDATE Tbl SET col = MYNUMBER; +``` + +然后`ecpg`将已经进行替换,并且您的C编译器将永远不会看到任何名称或标识符`我的号码`.请注意,您不能使用`#定义`对于将在嵌入式SQL查询中使用的常量,因为在这种情况下,嵌入式SQL预编译器无法看到此声明。 + +### 36.9.3.ifdef、ifndef、elif、else和endif指令 + +可以使用以下指令有条件地编译代码段: + +`execsqlifdef*`名称`*;` + +检查*`名称`*如果需要,则处理后续行*`名称`*已通过定义`EXEC SQL定义*`名称`*`. + +`EXEC SQL ifndef*`名称`*;` + +检查*`名称`*如果需要,则处理后续行*`名称`*有*不*已通过定义`EXEC SQL定义*`名称`*`. + +`execsqlelif*`名称`*;` + +在`execsqlifdef*`名称`*`或`EXEC SQL ifndef*`名称`*`指令。任何数量的`否则如果`部分可以出现。在一个`否则如果`将被处理,如果*`名称`*已经定义了*和*之前没有相同的章节`条件编译`/`如果未定义`...`恩迪夫`构造已被处理。 + +`EXEC SQL else;` + +开始一个可选的、最终的可选部分`execsqlifdef*`名称`*`或`EXEC SQL ifndef*`名称`*`指令。如果没有相同的前一部分,将处理后续行`条件编译`/`如果未定义`...`恩迪夫`构造已被处理。 + +`EXEC SQL endif;` + +结束`条件编译`/`如果未定义`...`恩迪夫`建筑后续行正常处理。 + +`条件编译`/`如果未定义`...`恩迪夫`构造可以嵌套,最深可达127层。 + +本例将编译三个示例中的一个`设定时区`命令: + +``` +EXEC SQL ifdef TZVAR; +EXEC SQL SET TIMEZONE TO TZVAR; +EXEC SQL elif TZNAME; +EXEC SQL SET TIMEZONE TO TZNAME; +EXEC SQL else; +EXEC SQL SET TIMEZONE TO 'GMT'; +EXEC SQL endif; +``` diff --git a/docs/X/event-log-registration.md b/docs/en/event-log-registration.md similarity index 100% rename from docs/X/event-log-registration.md rename to docs/en/event-log-registration.md diff --git a/docs/en/event-log-registration.zh.md b/docs/en/event-log-registration.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..d7f69543ff5e0596ce01b104ca2117ca668214e9 --- /dev/null +++ b/docs/en/event-log-registration.zh.md @@ -0,0 +1,27 @@ +## 19.12.在Windows上注册事件日志 + +[](<>) + +要在操作系统中注册Windows事件日志库,请发出以下命令: + +``` +regsvr32 pgsql_library_directory/pgevent.dll +``` + +这将在名为的默认事件源下创建事件查看器使用的注册表项`PostgreSQL`. + +指定不同的事件源名称(请参见[事件\_来源](runtime-config-logging.html#GUC-EVENT-SOURCE)),使用`/n`和`/我`选项: + +``` +regsvr32 /n /i:event_source_name pgsql_library_directory/pgevent.dll +``` + +要从操作系统中注销事件日志库,请发出以下命令: + +``` +regsvr32 /u [/i:event_source_name] pgsql_library_directory/pgevent.dll +``` + +### 笔记 + +要在数据库服务器中启用事件日志记录,请修改[日志\_目的地](runtime-config-logging.html#GUC-LOG-DESTINATION)包括`事件记录`在里面`postgresql。形态`. 
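+
+For illustration only, a minimal `postgresql.conf` fragment that enables event logging might look like this (a sketch; the `event_source` line is only needed if you registered a non-default source name with `regsvr32 /n /i`):
+
+```
+log_destination = 'stderr,eventlog'   # also send server log output to the Windows event log
+event_source = 'PostgreSQL'           # must match the registered event source name
+```
+
+Changing `log_destination` takes effect on a configuration reload, while `event_source` is only read at server start.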
diff --git a/docs/X/explicit-joins.md b/docs/en/explicit-joins.md similarity index 100% rename from docs/X/explicit-joins.md rename to docs/en/explicit-joins.md diff --git a/docs/en/explicit-joins.zh.md b/docs/en/explicit-joins.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..610308b50c5502cc756a794e38a5b6e8741e043b --- /dev/null +++ b/docs/en/explicit-joins.zh.md @@ -0,0 +1,72 @@ +## 14.3.用显式方法控制计划员`参加`条款 + +[](<>) + +通过使用显式`参加`语法。要了解这一点的重要性,我们首先需要一些背景知识。 + +在简单的联接查询中,例如: + +``` +SELECT * FROM a, b, c WHERE a.id = b.id AND b.ref = c.id; +``` + +计划者可以按任意顺序加入给定的表。例如,它可以使用`哪里`条件`a、 id=b.id`,然后使用另一个`哪里`条件或者它可以将B连接到C,然后将A连接到该结果。或者它可以把A和C连接起来,然后把它们和B连接起来——但这将是低效的,因为A和C的完全笛卡尔积必须形成,而在这个过程中没有适用的条件`哪里`子句以允许优化联接。(PostgreSQL executor中的所有连接都发生在两个输入表之间,因此有必要以其中一种方式建立结果。)重要的一点是,这些不同的连接可能性给出了语义上等价的结果,但可能会有巨大不同的执行成本。因此,计划者将对所有这些问题进行研究,试图找到最有效的查询计划。 + +当一个查询只涉及两个或三个表时,不需要担心太多的联接顺序。但是,随着表的数量增加,可能的连接顺序的数量呈指数增长。除了十个左右的输入表之外,对所有可能的表进行彻底的搜索已经不现实了,即使是六个或七个表,规划也可能需要很长时间。当输入表太多时,PostgreSQL planner将从穷举搜索切换到*遗传的*通过有限的可能性进行概率搜索。(切换阈值由[盖库\_门槛](runtime-config-query.html#GUC-GEQO-THRESHOLD)运行时参数。)基因搜索花费的时间更少,但不一定能找到最好的方案。 + +当查询涉及外部联接时,planner的自由度比普通(内部)联接要小。例如,考虑: + +``` +SELECT * FROM a LEFT JOIN (b JOIN c ON (b.ref = c.id)) ON (a.id = b.id); +``` + +虽然这个查询的限制表面上与前一个示例类似,但语义不同,因为在B和C的联接中,a的每一行都必须发出一行,而B和C的联接中没有匹配的行。因此,规划器在这里没有联接顺序的选择:它必须将B联接到C,然后将a联接到该结果。因此,与上一个查询相比,此查询计划所需的时间更少。在其他情况下,计划者可能能够确定多个联接顺序是安全的。例如,假设: + +``` +SELECT * FROM a LEFT JOIN b ON (a.bid = b.id) LEFT JOIN c ON (a.cid = c.id); +``` + +先将A加入B或C是有效的。目前只有`完全连接`完全约束联接顺序。大多数实际案例涉及`左连接`或`右键连接`可以在某种程度上重新安排。 + +显式内部联接语法(`内部连接`, `交叉连接`,还是朴素`参加`)语义上与在中列出输入关系相同`从…起`,因此它不约束联接顺序。 + +尽管大多数`参加`不要完全约束联接顺序,可以指示PostgreSQL查询计划器处理所有`参加`子句作为约束联接顺序的条件。例如,这三个查询在逻辑上是等价的: + +``` +SELECT * FROM a, b, c WHERE a.id = b.id AND b.ref = c.id; +SELECT * FROM a CROSS JOIN b CROSS JOIN c WHERE a.id = b.id AND b.ref = c.id; +SELECT * FROM a JOIN (b JOIN c ON (b.ref = c.id)) ON (a.id = b.id); +``` + +但如果我们告诉规划者尊重`参加`第二种和第三种方法比第一种方法花费的时间更少。这种影响不值得担心,因为只有三个表,但它可以挽救许多表。 + +强制规划者遵循explicit制定的联接顺序`参加`s、 设定[参加\_崩溃\_限度](runtime-config-query.html#GUC-JOIN-COLLAPSE-LIMIT)将运行时参数设置为1。(以下讨论了其他可能的值。) + +为了缩短搜索时间,不需要完全约束联接顺序,因为可以使用它`参加`在平面中的项目中的运算符`从…起`列表例如,考虑: + +``` +SELECT * FROM a CROSS JOIN b, c, d, e WHERE ...; +``` + +具有`加入_崩溃_极限`=1时,这会强制计划程序在将A连接到B之前将其连接到其他表,但不会以其他方式限制其选择。在本例中,可能的联接顺序的数量减少了5倍。 + +以这种方式约束计划者的搜索对于减少计划时间和引导计划者选择好的查询计划都是一种有用的技术。如果计划员默认选择了一个错误的连接顺序,您可以通过`参加`语法——假设你知道更好的顺序,也就是说。建议进行实验。 + +影响计划时间的一个密切相关的问题是将子查询分解为父查询。例如,考虑: + +``` +SELECT * +FROM x, y, + (SELECT * FROM a, b, c WHERE something) AS ss +WHERE somethingelse; +``` + +这种情况可能是因为使用了包含连接的视图;景色很美`选择`规则将被插入到视图引用的位置,生成一个类似于上述的查询。通常,计划者会尝试将子查询折叠到父查询中,从而产生: + +``` +SELECT * FROM x, y, a, b, c WHERE something AND somethingelse; +``` + +这通常会产生比单独规划子查询更好的计划。(例如,外部`哪里`条件可能是,将X连接到第一行可以消除A的许多行,从而避免形成子查询的完整逻辑输出。)但与此同时,我们增加了计划时间;在这里,我们有一个五路连接问题来代替两个单独的三路连接问题。由于可能性的数量呈指数级增长,这就产生了很大的不同。规划者试图避免陷入巨大的连接搜索问题,如果子查询超过`从_崩溃_极限` `从…起`项将导致父查询。您可以通过向上或向下调整此运行时参数来权衡计划时间和计划质量。 + +[从…起\_崩溃\_限度](runtime-config-query.html#GUC-FROM-COLLAPSE-LIMIT)和[参加\_崩溃\_限度](runtime-config-query.html#GUC-JOIN-COLLAPSE-LIMIT)它们的名称相似,因为它们做的事情几乎相同:一个控制规划器何时“展平”子查询,另一个控制规划器何时展平显式连接。通常你会`加入_崩溃_极限`相当于`从_崩溃_极限`(这样显式连接和子查询的作用类似)或`加入_崩溃_极限`到1(如果希望通过显式联接控制联接顺序)。但是,如果您试图微调计划时间和运行时间之间的权衡,则可能会对它们进行不同的设置。 diff --git a/docs/X/explicit-locking.md b/docs/en/explicit-locking.md similarity index 100% rename from docs/X/explicit-locking.md rename to docs/en/explicit-locking.md diff --git a/docs/en/explicit-locking.zh.md b/docs/en/explicit-locking.zh.md new file 
mode 100644 index 0000000000000000000000000000000000000000..212a60117302bc697a9473f2a0e197abd48824bd --- /dev/null +++ b/docs/en/explicit-locking.zh.md @@ -0,0 +1,191 @@ +## 13.3.显式锁定 + +[13.3.1. 桌面锁](explicit-locking.html#LOCKING-TABLES) + +[13.3.2. 行级锁](explicit-locking.html#LOCKING-ROWS) + +[13.3.3. 页面级锁](explicit-locking.html#LOCKING-PAGES) + +[13.3.4. 僵局](explicit-locking.html#LOCKING-DEADLOCKS) + +[13.3.5. 顾问锁](explicit-locking.html#ADVISORY-LOCKS) + +[](<>) + +PostgreSQL提供了各种锁模式来控制对表中数据的并发访问。在MVCC不能提供所需行为的情况下,这些模式可用于应用程序控制的锁定。此外,大多数PostgreSQL命令会自动获取适当模式的锁,以确保在命令执行时不会以不兼容的方式删除或修改引用的表。(例如,`截断`无法安全地与同一表上的其他操作同时执行,因此它会获得`访问独占`锁定桌子以强制执行。) + +要检查数据库服务器中当前未完成的锁的列表,请使用[`pg_锁`](view-pg-locks.html)系统视图。有关监控锁管理器子系统状态的更多信息,请参阅[第28章](monitoring.html). + +### 13.3.1.桌面锁 + +[](<>) + +下面的列表显示了可用的锁定模式以及PostgreSQL自动使用它们的上下文。您还可以使用命令显式获取这些锁中的任何一个[锁](sql-lock.html).记住,所有这些锁模式都是表级锁,即使名称中包含单词“row”;锁定模式的名称是历史记录。在某种程度上,这些名称反映了每个锁模式的典型用法——但语义都是相同的。一种锁模式和另一种锁模式之间唯一的真正区别是它们之间冲突的锁模式集(请参见[表13.2](explicit-locking.html#TABLE-LOCK-COMPATIBILITY)).两个事务不能同时在同一个表上持有冲突模式的锁。(然而,交易从不与自身发生冲突。例如,它可能获得`访问独占`锁定并稍后获取`访问共享`锁在同一张桌子上。)非冲突锁模式可以由多个事务同时持有。请特别注意,某些锁定模式是自冲突的(例如`访问独占`锁一次不能由多个事务持有),而其他事务不会自相矛盾(例如`访问共享`锁可以由多个事务持有)。 + +**表级锁定模式** + +`访问共享` + +与`访问独占`仅限锁定模式。 + +这个`选择`命令在引用的表上获取此模式的锁。一般来说,任何查询*读到*一个表,如果不修改它,将获得此锁定模式。 + +`行共享` + +与`独家`和`访问独占`锁定模式。 + +这个`选择更新`和`选择共享`命令在目标表上获取此模式的锁(除了`访问共享`锁定已引用但未选中的任何其他表`更新/分享`). + +`排他` + +与`共有`, `共享行独占`, `独家`和`访问独占`锁定模式。 + +命令`使现代化`, `删去`和`插入`在目标表上获取此锁定模式(除了`访问共享`锁定任何其他引用的表)。通常情况下,该锁定模式将由以下命令获取:*修改数据*在桌子上。 + +`共享更新独占` + +与`共享更新独占`, `共有`, `共享行独占`, `独家`和`访问独占`锁定模式。此模式保护表不受并发架构更改和`真空`跑。 + +获得`真空`(没有`满的`), `分析`, `同时创建索引`, `创建统计数据`, `评论`, `同时重新编制索引`,而且是肯定的[`改变索引`](sql-alterindex.html)和[`改变桌子`](sql-altertable.html)变体(有关详细信息,请参阅这些命令的文档)。 + +`共有` + +与`排他`, `共享更新独占`, `共享行独占`, `独家`和`访问独占`锁定模式。此模式保护表不受并发数据更改的影响。 + +获得`创建索引`(没有`同时`). + +`共享行独占` + +与`排他`, `共享更新独占`, `共有`, `共享行独占`, `独家`和`访问独占`锁定模式。此模式保护表不受并发数据更改的影响,并且是自排他的,因此一次只能有一个会话保存它。 + +获得`创建触发器`还有一些形式的[`改变桌子`](sql-altertable.html). + +`独家` + +与`行共享`, `排他`, `共享更新独占`, `共有`, `共享行独占`, `独家`和`访问独占`锁定模式。此模式只允许并发`访问共享`锁,也就是说,只有从表中读取数据才能与持有该锁模式的事务并行进行。 + +获得`同时刷新物化视图`. + +`访问独占` + +与所有模式的锁冲突(`访问共享`, `行共享`, `排他`, `共享更新独占`, `共有`, `共享行独占`, `独家`和`访问独占`).此模式保证持有人是以任何方式访问表的唯一交易。 + +被政府收购`升降台`, `截断`, `重新索引`, `簇`, `真空满`和`刷新物化视图`(没有`同时`)命令。多种形式的`改变索引`和`改变桌子`也可以在这个级别获得一个锁。这也是默认的锁定模式`锁桌`不显式指定模式的语句。 + +### 提示 + +只有一个`访问独占`锁块a`选择`(没有`更新/分享`)声明。 + +一旦获得,锁通常会一直保持到交易结束。但是,如果在建立保存点后获得了锁,则如果保存点回滚到,锁将立即释放。这符合以下原则:`回降`取消保存点之后命令的所有效果。PL/pgSQL异常块中获取的锁也是如此:从该块中的错误转义将释放在该块中获取的锁。 + +**表13.2。冲突锁模式** + +| 请求的锁定模式 | 现有锁定模式 | | | | | | | | +| ------- | :----: | --- | --- | --- | --- | --- | --- | --- | +| `访问共享` | `行共享` | `排不包括。` | `共享更新不包括。` | `共有` | `共享行不包括。` | `不包括。` | `访问不包括。` | | +| `访问共享` | | | | | | | | 十、 | +| `行共享` | | | | | | | 十、 | 十、 | +| `排不包括。` | | | | | 十、 | 十、 | 十、 | 十、 | +| `共享更新不包括。` | | | | 十、 | 十、 | 十、 | 十、 | 十、 | +| `共有` | | | 十、 | 十、 | | 十、 | 十、 | 十、 | +| `共享行不包括。` | | | 十、 | 十、 | 十、 | 十、 | 十、 | 十、 | +| `不包括。` | | 十、 | 十、 | 十、 | 十、 | X | X | X | +| `ACCESS EXCL.` | X | X | X | X | X | X | X | X | + +### 13.3.2. Row-Level Locks + +In addition to table-level locks, there are row-level locks, which are listed as below with the contexts in which they are used automatically by PostgreSQL. See[Table 13.3](explicit-locking.html#ROW-LOCK-COMPATIBILITY)for a complete table of row-level lock conflicts. 
Note that a transaction can hold conflicting locks on the same row, even in different subtransactions; but other than that, two transactions can never hold conflicting locks on the same row. Row-level locks do not affect data querying; they block only*writers and lockers*to the same row. Row-level locks are released at transaction end or during savepoint rollback, just like table-level locks. + +**Row-Level Lock Modes** + +`FOR UPDATE` + +`FOR UPDATE`causes the rows retrieved by the`SELECT`statement to be locked as though for update. This prevents them from being locked, modified or deleted by other transactions until the current transaction ends. That is, other transactions that attempt`UPDATE`,`DELETE`,`SELECT FOR UPDATE`, `选择无密钥更新`, `选择共享`或`选择密钥共享`在当前事务结束之前,这些行中的任何一行都将被阻止;相反地`选择更新`将等待在同一行上运行任何这些命令的并发事务,然后锁定并返回更新的行(如果行已删除,则不返回行)。在一分钟之内`可重复读取`或`可序列化`但是,如果要锁定的行在事务启动后发生了更改,则会抛出一个错误。有关进一步讨论,请参阅[第13.4节](applevel-consistency.html). + +这个`更新`锁定模式也可由任何`删去`一排,还有一个`使现代化`这会修改某些列的值。当前,考虑用于`使现代化`case是那些具有唯一的索引,可以在外键中使用的索引(因此不考虑部分索引和表达式索引),但这在将来可能会发生变化。 + +`无需密钥更新` + +行为类似于`更新`,但获取的锁较弱:此锁不会阻止`选择密钥共享`试图在同一行上获取锁的命令。该锁定模式也可由任何用户获取`使现代化`这并不意味着`更新`锁 + +`分享` + +行为类似于`无需密钥更新`,但它会在每个检索到的行上获取共享锁,而不是独占锁。共享锁会阻止其他事务执行`使现代化`, `删去`, `选择更新`或`选择无密钥更新`在这些行上,但这并不阻止它们执行`选择共享`或`选择密钥共享`. + +`关键份额` + +行为类似于`分享`,除了锁较弱外:`选择更新`被阻止了,但没有`选择无密钥更新`.密钥共享锁阻止其他事务执行`删去`或任何`使现代化`这会更改键值,但不会更改其他键值`使现代化`它也不能阻止`选择无密钥更新`, `选择共享`或`选择密钥共享`. + +PostgreSQL不记得有关内存中修改行的任何信息,因此一次锁定的行数没有限制。但是,锁定一行可能会导致磁盘写入,例如:。,`选择更新`修改选定行以将其标记为锁定,因此将导致磁盘写入。 + +**表13.3。行级锁冲突** + +| 请求的锁定模式 | 当前锁定模式 | | | | +| ------- | ------ | --- | --- | --- | +| 关键份额 | 分享 | 无需密钥更新 | 更新 | | +| 关键份额 | | | | 十、 | +| 分享 | | | 十、 | 十、 | +| 无需密钥更新 | | 十、 | 十、 | 十、 | +| 更新 | 十、 | 十、 | 十、 | 十、 | + +### 13.3.3.页面级锁 + +除了表锁和行锁之外,页级共享/排他锁还用于控制对共享缓冲池中表页的读/写访问。这些锁在获取或更新行后立即释放。应用程序开发人员通常不需要关心页面级锁,但为了完整性,这里提到了它们。 + +### 13.3.4.僵局 + +[](<>) + +使用显式锁定可以增加*僵局*,其中两个(或更多)事务各自持有对方想要的锁。例如,如果事务1获取了表A上的独占锁,然后尝试获取表B上的独占锁,而事务2已经独占锁定了表B,现在需要表A上的独占锁,那么两个事务都不能继续。PostgreSQL会自动检测死锁情况,并通过中止其中一个相关事务来解决死锁,允许其他事务完成。(准确地说,哪一笔交易将被中止很难预测,也不应该依赖。) + +请注意,死锁也可能是行级锁的结果(因此,即使不使用显式锁,死锁也可能发生)。考虑两个并发事务修改一个表的情况。第一个事务执行: + +``` +UPDATE accounts SET balance = balance + 100.00 WHERE acctnum = 11111; +``` + +这将在具有指定帐号的行上获取行级锁。然后,执行第二个事务: + +``` +UPDATE accounts SET balance = balance + 100.00 WHERE acctnum = 22222; +UPDATE accounts SET balance = balance - 100.00 WHERE acctnum = 11111; +``` + +第一个`更新`语句成功获取指定行的行级别锁,因此成功更新该行。然而,第二个`更新`语句发现它试图更新的行已经被锁定,因此它等待获取锁的事务完成。事务二现在正在等待事务1完成,然后再继续执行。现在,事务一执行: + +``` +UPDATE accounts SET balance = balance - 100.00 WHERE acctnum = 22222; +``` + +事务一试图在指定的行上获取行级别的锁,但不能:事务二已经持有这样的锁。所以它等待事务2完成。因此,事务一在事务2上被阻塞,事务二在事务一上被阻止:死锁条件。PostgreSQL将检测到这种情况并中止其中一个事务。 + +对于死锁的最好防御通常是确保所有使用数据库的应用程序都以一致的顺序获取多个对象上的锁,从而避免死锁。在上面的示例中,如果两个事务都以相同的顺序更新了行,则不会发生死锁。还应该确保在事务中获取的对象上获得的第一个锁是该对象所需的最严格的模式。如果事先验证这一点不可行,那么可以通过重试由于死锁而中止的事务来实时处理死锁。 + +只要未检测到死锁情况,寻求表级或行级锁的事务将无限期地等待释放冲突的锁。这意味着应用程序不应该长时间(例如,在等待用户输入的同时)保持事务打开。 + +### 13.3.5.咨询锁 + +[](<>)[](<>) + +PostgreSQL提供了一种创建具有应用程序定义含义的锁的方法。这些被称为*咨询锁*,因为系统没有强制使用它们——正确使用它们取决于应用程序。对于MVCC模型来说,咨询锁对于锁定策略非常有用。例如,咨询锁的一个常用方法是模拟所谓“平面文件”数据管理系统典型的悲观锁定策略。虽然存储在表中的标志可以用于相同的目的,但是咨询锁的速度更快,避免了表bloat,并且在会话结束时服务器会自动清理。 + +在PostgreSQL中获取咨询锁有两种方法:会话级或事务级。一旦在会话级别获得,咨询锁将一直保持到显式释放或会话结束为止。与标准锁请求不同,会话级咨询锁请求不遵守事务语义:在回滚之后,在随后回滚的事务期间获取的锁仍然会保持不变,同样,即使调用事务在以后失败,解锁也会有效。锁可以通过其拥有的进程多次获得;对于每个已完成的锁请求,在实际释放锁之前,必须有相应的解锁请求。另一方面,事务级锁请求的行为更像常规的锁请求:它们在事务结束时自动释放,并且没有显式的解锁操作。对于短期使用咨询锁,这种行为通常比会话级行为更方便。对于相同的咨询锁标识符,会话级别和事务级锁请求将以预期的方式相互阻止。如果会话已经持有给定的咨询锁,则即使其他会话正在等待锁,它的其他请求也将始终成功;无论现有锁保持和新请求处于会话级别还是事务级别,此语句都是真的。 + 
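+
+As a minimal illustration of the two acquisition levels described above (the key value `42` is arbitrary):
+
+```
+-- Session level: survives COMMIT and ROLLBACK, must be released explicitly.
+SELECT pg_advisory_lock(42);
+-- ... critical section ...
+SELECT pg_advisory_unlock(42);
+
+-- Transaction level: released automatically at COMMIT or ROLLBACK.
+BEGIN;
+SELECT pg_advisory_xact_lock(42);
+-- ... critical section ...
+COMMIT;
+```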
+与PostgreSQL中的所有锁一样,任何会话当前持有的一个完整的咨询锁列表可以在[`pg_锁`](view-pg-locks.html)系统视图。 + +建议锁和常规锁都存储在共享内存池中,其大小由配置变量定义[最大值\_锁\_每\_交易](runtime-config-locks.html#GUC-MAX-LOCKS-PER-TRANSACTION)和[最大值\_连接](runtime-config-connection.html#GUC-MAX-CONNECTIONS)。必须注意不要耗尽此内存,否则服务器将无法授予任何锁。这对服务器可授予的建议锁的数量施加了上限,通常为数十到几十万,具体取决于服务器的配置方式。 + +在某些情况下,使用建议锁定方法,尤其是在涉及显式排序和`限度`子句中,由于SQL表达式的求值顺序,必须小心控制获取的锁。例如: + +``` +SELECT pg_advisory_lock(id) FROM foo WHERE id = 12345; -- ok +SELECT pg_advisory_lock(id) FROM foo WHERE id > 12345 LIMIT 100; -- danger! +SELECT pg_advisory_lock(q.id) FROM +( + SELECT id FROM foo WHERE id > 12345 LIMIT 100 +) q; -- ok +``` + +在上面的查询中,第二种形式是危险的,因为`限度`不保证在执行锁定功能之前应用。这可能会导致获取一些应用程序不期望的锁,因此无法释放(直到会话结束)。从应用程序的角度来看,这样的锁将是悬挂的,尽管在应用程序中仍然可以看到`pg_锁`. + +中介绍了用于操作建议锁的功能[第9.27.10节](functions-admin.html#FUNCTIONS-ADVISORY-LOCKS). diff --git a/docs/X/external-admin-tools.md b/docs/en/external-admin-tools.md similarity index 100% rename from docs/X/external-admin-tools.md rename to docs/en/external-admin-tools.md diff --git a/docs/en/external-admin-tools.zh.md b/docs/en/external-admin-tools.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..059b998f403e52ea88543cf67e3d03cd4c1c71cc --- /dev/null +++ b/docs/en/external-admin-tools.zh.md @@ -0,0 +1,5 @@ +## H.2。管理工具 + +[](<>) + +PostgreSQL有几种可用的管理工具。最受欢迎的是[pgAdmin](https://www.pgadmin.org/),也有几种商用的。 diff --git a/docs/X/functions-admin.md b/docs/en/functions-admin.md similarity index 100% rename from docs/X/functions-admin.md rename to docs/en/functions-admin.md diff --git a/docs/en/functions-admin.zh.md b/docs/en/functions-admin.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..098969bf8e34b93bc724c88b381c813780014b0e --- /dev/null +++ b/docs/en/functions-admin.zh.md @@ -0,0 +1,285 @@ +## 9.27.系统管理功能 + +[9.27.1. 配置设置功能](functions-admin.html#FUNCTIONS-ADMIN-SET) + +[9.27.2. 服务器信令功能](functions-admin.html#FUNCTIONS-ADMIN-SIGNAL) + +[9.27.3. 备份控制功能](functions-admin.html#FUNCTIONS-ADMIN-BACKUP) + +[9.27.4. 恢复控制功能](functions-admin.html#FUNCTIONS-RECOVERY-CONTROL) + +[9.27.5. 快照同步功能](functions-admin.html#FUNCTIONS-SNAPSHOT-SYNCHRONIZATION) + +[9.27.6. 复制管理功能](functions-admin.html#FUNCTIONS-REPLICATION) + +[9.27.7. 数据库对象管理功能](functions-admin.html#FUNCTIONS-ADMIN-DBOBJECT) + +[9.27.8. 索引维护功能](functions-admin.html#FUNCTIONS-ADMIN-INDEX) + +[9.27.9. 通用文件访问功能](functions-admin.html#FUNCTIONS-ADMIN-GENFILE) + +[9.27.10. 咨询锁功能](functions-admin.html#FUNCTIONS-ADVISORY-LOCKS) + +本节介绍的功能用于控制和监视PostgreSQL安装。 + +### 9.27.1.配置设置功能 + +[](<>)[](<>)[](<>) + +[表9.85](functions-admin.html#FUNCTIONS-ADMIN-SET-TABLE)显示可用于查询和更改运行时配置参数的函数。 + +**表9.85。配置设置功能** + +| 作用

Description

Example |
+| -------------------------- |
+| [](<>) `current_setting` ( *`setting_name`* `text` [, *`missing_ok`* `boolean` ] ) → `text`

Returns the current value of the setting *`setting_name`*. If there is no such setting, `current_setting` throws an error unless *`missing_ok`* is supplied and is `true` (in which case NULL is returned). This function corresponds to the SQL command [SHOW](sql-show.html).

`current_setting('datestyle')` → `ISO, MDY` |
+| [](<>) `set_config` ( *`setting_name`* `text`, *`new_value`* `text`, *`is_local`* `boolean` ) → `text`

Sets the parameter *`setting_name`* to *`new_value`*, and returns that value. If *`is_local`* is `true`, the new value will only apply during the current transaction. If you want the new value to apply for the rest of the current session, use `false` instead. This function corresponds to the SQL command [SET](sql-set.html).

`set_config('log_statement_stats', 'off', false)` → `off` |
+
+### 9.27.2. Server Signaling Functions
+
+[](<>)
+
+The functions shown in [Table 9.86](functions-admin.html#FUNCTIONS-ADMIN-SIGNAL-TABLE) send control signals to other server processes. Use of these functions is restricted to superusers by default but access may be granted to others using `GRANT`, with noted exceptions.
+
+Each of these functions returns `true` if the signal was successfully sent and `false` if sending the signal failed.
+
+**Table 9.86. Server Signaling Functions**
+
+| Function

Description |
+| -------------- |
+| [](<>) `pg_cancel_backend` ( *`pid`* `integer` ) → `boolean`

Cancels the current query of the session whose backend process has the specified process ID. This is also allowed if the calling role is a member of the role whose backend is being canceled or the calling role has been granted `pg_signal_backend`, however only superusers can cancel superuser backends. |
+| [](<>) `pg_log_backend_memory_contexts` ( *`pid`* `integer` ) → `boolean`

Requests to log the memory contexts of the backend with the specified process ID. These memory contexts will be logged at `LOG` message level. They will appear in the server log based on the log configuration set (see [Section 20.8](runtime-config-logging.html) for more information), but will not be sent to the client regardless of [client_min_messages](runtime-config-client.html#GUC-CLIENT-MIN-MESSAGES). Only superusers can request to log the memory contexts. |
+| [](<>) `pg_reload_conf` () → `boolean`

Causes all processes of the PostgreSQL server to reload their configuration files. (This is initiated by sending a SIGHUP signal to the postmaster process, which in turn sends SIGHUP to each of its children.) You can use the [`pg_file_settings`](view-pg-file-settings.html) and [`pg_hba_file_rules`](view-pg-hba-file-rules.html) views to check the configuration files for possible errors, before reloading. |
+| [](<>) `pg_rotate_logfile` () → `boolean`

Signals the log-file manager to switch to a new output file immediately. This works only when the built-in log collector is running, since otherwise there is no log-file manager subprocess. |
+| [](<>) `pg_terminate_backend` ( *`pid`* `integer`, *`timeout`* `bigint` `DEFAULT` `0` ) → `boolean`

Terminates the session whose backend process has the specified process ID. This is also allowed if the calling role is a member of the role whose backend is being terminated or the calling role has been granted `pg_signal_backend`, however only superusers can terminate superuser backends.

If *`timeout`* is not specified or zero, this function returns `true` whether the process is actually terminated or not, indicating only that the signal was successfully sent. If the *`timeout`* is specified (in milliseconds) and greater than zero, the function waits until the process is actually terminated or until the given time has passed. If the process is terminated, the function returns `true`. On timeout, a warning is emitted and `false` is returned. |
+
+`pg_cancel_backend` and `pg_terminate_backend` send signals (SIGINT or SIGTERM respectively) to backend processes identified by process ID. The process ID of an active backend can be found from the `pid` column of the `pg_stat_activity` view, or by listing the `postgres` processes on the server (using ps on Unix or the Task Manager on Windows). The role of an active backend can be found from the `usename` column of the `pg_stat_activity` view.
+
+`pg_log_backend_memory_contexts` can be used to log the memory contexts of a backend process. For example:
+
+```
+postgres=# SELECT pg_log_backend_memory_contexts(pg_backend_pid());
+ pg_log_backend_memory_contexts
+--------------------------------
+
+(1 row)
+```
+
+### 9.27.3. Backup Control Functions
+
+[]()
+
+The functions shown in [Table 9.87](functions-admin.html#FUNCTIONS-ADMIN-BACKUP-TABLE) assist in making on-line backups. These functions cannot be executed during recovery (except non-exclusive `pg_start_backup`, non-exclusive `pg_stop_backup`, `pg_is_in_backup`, `pg_backup_start_time` and `pg_wal_lsn_diff`).
+
+For details about proper usage of these functions, see [Section 26.3](continuous-archiving.html).
+
+**Table 9.87. Backup Control Functions**
+
+| Function

Description | +|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| []() `pg_create_restore_point` ( *`name`* `text` ) → `pg_lsn`

Creates a named marker record in the write-ahead log that can later be used as a recovery target, and returns the corresponding write-ahead log location. The given name can then be used with [recovery\_target\_name](runtime-config-wal.html#GUC-RECOVERY-TARGET-NAME) to specify the point up to which recovery will proceed. Avoid creating multiple restore points with the same name, since recovery will stop at the first one whose name matches the recovery target.

This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function. | +| []() `pg_current_wal_flush_lsn` () → `pg_lsn`

Returns the current write-ahead log flush location (see notes below). | +| []() `pg_current_wal_insert_lsn` () → `pg_lsn`

Returns the current write-ahead log insert location (see notes below). | +| []() `pg_current_wal_lsn` () → `pg_lsn`

Returns the current write-ahead log write location (see notes below). | +| []() `pg_start_backup` ( *`label`* `text` [, *`fast`* `boolean` [, *`exclusive`* `boolean` ]] ) → `pg_lsn`

Prepares the server to begin an on-line backup. The only required parameter is an arbitrary user-defined label for the backup. (Typically this would be the name under which the backup dump file will be stored.) If the optional second parameter is given as `true`, it specifies executing `pg_start_backup` as quickly as possible. This forces an immediate checkpoint which will cause a spike in I/O operations, slowing any concurrently executing queries. The optional third parameter specifies whether to perform an exclusive or non-exclusive backup (default is exclusive).

When used in exclusive mode, this function writes a backup label file (`backup_label`) and, if there are any links in the `pg_tblspc/` directory, a tablespace map file (`tablespace_map`) into the database cluster's data directory, then performs a checkpoint, and then returns the backup's starting write-ahead log location. (The user can ignore this result value, but it is provided in case it is useful.) When used in non-exclusive mode, the contents of these files are instead returned by the `pg_stop_backup` function, and should be copied to the backup area by the user.

This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function. | +|[]() `pg_stop_backup` ( *`exclusive`* `boolean` [, *`wait_for_archive`* `boolean` ] ) → `setof record` ( *`lsn`* `pg_lsn`, *`labelfile`* `text`, *`spcmapfile`* `text` )

Finishes performing an exclusive or non-exclusive on-line backup. The *`exclusive`* parameter must match the previous `pg_start_backup` call. In an exclusive backup, `pg_stop_backup` removes the backup label file and, if it exists, the tablespace map file created by `pg_start_backup`. In a non-exclusive backup, the desired contents of these files are returned as part of the result of the function, and should be written to files in the backup area (not in the data directory).

There is an optional second parameter of type `boolean`. If false, the function will return immediately after the backup is completed, without waiting for WAL to be archived. This behavior is only useful with backup software that independently monitors WAL archiving. Otherwise, WAL required to make the backup consistent might be missing and make the backup useless. By default or when this parameter is true, `pg_stop_backup` will wait for WAL to be archived when archiving is enabled. (On a standby, this means that it will wait only when `archive_mode` = `always`. If write activity on the primary is low, it may be useful to run `pg_switch_wal` on the primary in order to trigger an immediate segment switch.)

When executed on a primary, this function also creates a backup history file in the write-ahead log archive area. The history file includes the label given to `pg_start_backup`, the starting and ending write-ahead log locations for the backup, and the starting and ending times of the backup. After recording the ending location, the current write-ahead log insertion point is automatically advanced to the next write-ahead log file, so that the ending write-ahead log file can be archived immediately to complete the backup.

The result of the function is a single record. The *`lsn`* column holds the backup's ending write-ahead log location (which again can be ignored). The second and third columns are `NULL` when ending an exclusive backup; after a non-exclusive backup they hold the desired contents of the label and tablespace map files.

This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function.| +| `pg_stop_backup` () → `pg_lsn`

Finishes performing an exclusive on-line backup. This simplified version is equivalent to `pg_stop_backup(true, true)`, except that it only returns the `pg_lsn` result.

This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function. | +| []() `pg_is_in_backup` () → `boolean`

Returns true if an on-line exclusive backup is in progress. | +| []() `pg_backup_start_time` () → `timestamp with time zone`

Returns the start time of the current on-line exclusive backup if one is in progress, otherwise `NULL`. | +| []() `pg_switch_wal` () → `pg_lsn`

Forces the server to switch to a new write-ahead log file, which allows the current file to be archived (assuming you are using continuous archiving). The result is the ending write-ahead log location plus 1 within the just-completed write-ahead log file. If there has been no write-ahead log activity since the last write-ahead log switch, `pg_switch_wal` does nothing and returns the start location of the write-ahead log file currently in use.

This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function. | +| []() `pg_walfile_name` ( *`lsn`* `pg_lsn` ) → `text`

Converts a write-ahead log location to the name of the WAL file holding that location. | +| []() `pg_walfile_name_offset` ( *`lsn`* `pg_lsn` ) → `record` ( *`file_name`* `text`, *`file_offset`* `integer` )

Converts a write-ahead log location to a WAL file name and byte offset within that file. | +| []() `pg_wal_lsn_diff` ( *`lsn1`* `pg_lsn`, *`lsn2`* `pg_lsn` ) → `numeric`

Calculates the difference in bytes (*`lsn1`* - *`lsn2`*) between two write-ahead log locations. This can be used with `pg_stat_replication` or some of the functions shown in [Table 9.87](functions-admin.html#FUNCTIONS-ADMIN-BACKUP-TABLE) to get the replication lag. |
+
+`pg_current_wal_lsn` displays the current write-ahead log write location in the same format used by the above functions. Similarly, `pg_current_wal_insert_lsn` displays the current write-ahead log insertion location and `pg_current_wal_flush_lsn` displays the current write-ahead log flush location. The insertion location is the “logical” end of the write-ahead log at any instant, while the write location is the end of what has actually been written out from the server's internal buffers, and the flush location is the last location known to be written to durable storage. The write location is the end of what can be examined from outside the server, and is usually what you want if you are interested in archiving partially-complete write-ahead log files. The insertion and flush locations are made available primarily for server debugging purposes. These are all read-only operations and do not require superuser permissions.
+
+You can use `pg_walfile_name_offset` to extract the corresponding write-ahead log file name and byte offset from a `pg_lsn` value. For example:
+
+```
+postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup());
+        file_name         | file_offset
+--------------------------+-------------
+ 00000001000000000000000D |     4039624
+(1 row)
+```
+
+### 9.27.4. Recovery Control Functions
+
+The functions shown in [Table 9.88](functions-admin.html#FUNCTIONS-RECOVERY-INFO-TABLE) provide information about the current status of a standby server. These functions may be executed both during recovery and in normal running.
+
+**Table 9.88. Recovery Information Functions**
+
+| Function

Description |
+| -------------- |
+| [](<>) `pg_is_in_recovery` () → `boolean`

Returns true if recovery is still in progress. |
+| [](<>) `pg_last_wal_receive_lsn` () → `pg_lsn`

Returns the last write-ahead log location that has been received and synced to disk by streaming replication. While streaming replication is in progress this will increase monotonically. If recovery has completed then this will remain static at the location of the last WAL record received and synced to disk during recovery. If streaming replication is disabled, or if it has not yet started, the function returns `NULL`. |
+| [](<>) `pg_last_wal_replay_lsn` () → `pg_lsn`

Returns the last write-ahead log location that has been replayed during recovery. If recovery is still in progress this will increase monotonically. If recovery has completed then this will remain static at the location of the last WAL record applied during recovery. When the server has been started normally without recovery, the function returns `NULL`. |
+| [](<>) `pg_last_xact_replay_timestamp` () → `timestamp with time zone`

Returns the time stamp of the last transaction replayed during recovery. This is the time at which the commit or abort WAL record for that transaction was generated on the primary. If no transactions have been replayed during recovery, the function returns `NULL`. Otherwise, if recovery is still in progress this will increase monotonically. If recovery has completed then this will remain static at the time of the last transaction applied during recovery. When the server has been started normally without recovery, the function returns `NULL`. |
+
+The functions shown in [Table 9.89](functions-admin.html#FUNCTIONS-RECOVERY-CONTROL-TABLE) control the progress of recovery. These functions may be executed only during recovery.
+
+**Table 9.89. Recovery Control Functions**
+
+| Function

Description |
+| -------------- |
+| [](<>) `pg_is_wal_replay_paused` () → `boolean`

Returns true if recovery pause is requested. |
+| [](<>) `pg_get_wal_replay_pause_state` () → `text`

Returns recovery pause state. The return values are `not paused` if pause is not requested, `pause requested` if pause is requested but recovery is not yet paused, and `paused` if the recovery is actually paused. |
+| [](<>) `pg_promote` ( *`wait`* `boolean` `DEFAULT` `true`, *`wait_seconds`* `integer` `DEFAULT` `60` ) → `boolean`

Promotes a standby server to primary status. With *`wait`* set to `true` (the default), the function waits until promotion is completed or *`wait_seconds`* seconds have passed, and returns `true` if promotion is successful and `false` otherwise. If *`wait`* is set to `false`, the function returns `true` immediately after sending a `SIGUSR1` signal to the postmaster to trigger promotion.

This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function. |
+| [](<>) `pg_wal_replay_pause` () → `void`

Request to pause recovery. A request doesn't mean that recovery stops right away. If you want a guarantee that recovery is actually paused, you need to check for the recovery pause state returned by `pg_get_wal_replay_pause_state()`. Note that `pg_is_wal_replay_paused()` returns whether a request is made. While recovery is paused, no further database changes are applied. If hot standby is active, all new queries will see the same consistent snapshot of the database, and no further query conflicts will be generated until recovery is resumed.

This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function. |
+| [](<>) `pg_wal_replay_resume` () → `void`

Restarts recovery if it was paused.

This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function. |
+
+`pg_wal_replay_pause` and `pg_wal_replay_resume` cannot be executed while a promotion is ongoing. If a promotion is triggered while recovery is paused, the paused state ends and promotion continues.
+
+If streaming replication is disabled, the paused state may continue indefinitely without a problem. If streaming replication is in progress then WAL records will continue to be received, which will eventually fill available disk space, depending upon the duration of the pause, the rate of WAL generation and available disk space.
+
+### 9.27.5. Snapshot Synchronization Functions
+
+PostgreSQL allows database sessions to synchronize their snapshots. A *snapshot* determines which data is visible to the transaction that is using the snapshot. Synchronized snapshots are necessary when two or more sessions need to see identical content in the database. If two sessions just start their transactions independently, there is always a possibility that some third transaction commits between the executions of the two `START TRANSACTION` commands, so that one session sees the effects of that transaction and the other does not.
+
+To solve this problem, PostgreSQL allows a transaction to *export* the snapshot it is using. As long as the exporting transaction remains open, other transactions can *import* its snapshot, and thereby be guaranteed that they see exactly the same view of the database that the first transaction sees. But note that any database changes made by any one of these transactions remain invisible to the other transactions, as is usual for changes made by uncommitted transactions. So the transactions are synchronized with respect to pre-existing data, but act normally for changes they make themselves.
+
+Snapshots are exported with the `pg_export_snapshot` function, shown in [Table 9.90](functions-admin.html#FUNCTIONS-SNAPSHOT-SYNCHRONIZATION-TABLE), and imported with the [SET TRANSACTION](sql-set-transaction.html) command.
+
+**Table 9.90. Snapshot Synchronization Functions**
+
+| Function

Description |
+| -------------- |
+| [](<>) `pg_export_snapshot` () → `text`

Saves the transaction's current snapshot and returns a `text` string identifying the snapshot. This string must be passed (outside the database) to clients that want to import the snapshot. The snapshot is available for import only until the end of the transaction that exported it.

如果需要,事务可以导出多个快照。请注意,这样做只在以下情况下有用:`阅读承诺`交易,从年开始`可重复读取`在更高的隔离级别下,事务在其整个生命周期中使用相同的快照。一旦事务导出了任何快照,就无法使用[准备交易](sql-prepare-transaction.html). | + +### 9.27.6.复制管理功能 + +中显示的功能[表9.91](functions-admin.html#FUNCTIONS-REPLICATION-TABLE)用于控制复制功能并与之交互。看见[第27.2.5节](warm-standby.html#STREAMING-REPLICATION), [第27.2.6节](warm-standby.html#STREAMING-REPLICATION-SLOTS)和[第50章](replication-origins.html)有关基本功能的信息。默认情况下,仅允许超级用户使用复制源的函数,但通过使用`授予`命令复制插槽的功能仅限于超级用户和具有`复制`特权 + +其中许多函数在复制协议中具有等效的命令;看见[第53.4节](protocol-replication.html). + +中描述的功能[第9.27.3节](functions-admin.html#FUNCTIONS-ADMIN-BACKUP), [第9.27.4节](functions-admin.html#FUNCTIONS-RECOVERY-CONTROL)和[第9.27.5节](functions-admin.html#FUNCTIONS-SNAPSHOT-SYNCHRONIZATION)也与复制相关。 + +**表9.91。复制管理功能** + +| 作用

描述 | +| -------------- | +| [](<>) `pg_创建_物理_复制_插槽` ( *`插槽名称`* `名称` [, *`立即保留_ `*布尔值` `,短暂的*` `*布尔值` `] ) → `记录` ( *`插槽名称`* `名称`, *`lsn`* `pg_lsn` )

创建名为的新物理复制插槽*`插槽名称`*.可选的第二个参数,当`符合事实的`,指定立即保留此复制插槽的LSN;否则,LSN将在流复制客户端的第一次连接时保留。只有使用流式复制协议才能从物理插槽进行流式更改-请参阅[第53.4节](protocol-replication.html).可选的第三个参数,*`短暂的`*,当设置为true时,指定插槽不应永久存储到磁盘,且仅用于当前会话。如果出现任何错误,也会释放临时插槽。此函数对应于复制协议命令`创建\u复制\u插槽。。。身体的`. | +| [](<>) `pg_drop_replication_slot` ( *`插槽名称`* `名称` ) → `无效的`

删除名为的物理或逻辑复制插槽*`插槽名称`*.与复制协议命令相同`删除\u复制\u插槽`。对于逻辑插槽,必须在连接到创建插槽所在的同一数据库时调用。 | +| [](<>) `pg_创建_逻辑_复制_插槽` ( *`插槽名称`* `名称`, *`插件`* `名称` [, *`短暂的`* `布尔值`, *`双相_ `*布尔值` `] ) → `记录` ( *`插槽名称`* `名称`, *`lsn`* `pg_lsn` )

创建名为的新逻辑(解码)复制插槽*`插槽名称`*使用输出插件*`插件`*.可选的第三个参数,*`短暂的`*,当设置为true时,指定插槽不应永久存储到磁盘,且仅用于当前会话。如果出现任何错误,也会释放临时插槽。可选的第四个参数,*`双相`*,当设置为true时,指定为此插槽启用已准备事务的解码。对该函数的调用与复制协议命令具有相同的效果`创建\u复制\u插槽。。。必然的`. | +| [](<>) `pg_复制_物理_复制_插槽` ( *`src_插槽_名称`* `名称`, *`dst_插槽_名称`* `名称` [, *`短暂的`* `布尔值` ] ) → `记录` ( *`插槽名称`* `名称`, *`lsn`* `pg_lsn` )

复制名为的现有物理复制插槽*`src_插槽_名称`*到名为*`dst_插槽_名称`*.复制的物理插槽开始从与源插槽相同的LSN保留WAL。*`短暂的`*是可选的。如果*`短暂的`*则使用与源插槽相同的值。 | +| [](<>) `pg_复制_逻辑_复制_插槽` ( *`src_插槽_名称`* `名称`, *`dst_插槽_名称`* `名称` \[, *`短暂的`* `布尔值` [, *`插件`* `名称` ]] ) → `记录` ( *`插槽名称`* `名称`, *`lsn`* `pg_lsn` )

复制名为的现有逻辑复制插槽*`src_插槽_名称`*到名为*`dst_插槽_名称`*,可以选择更改输出插件和持久性。复制的逻辑插槽从与源逻辑插槽相同的LSN开始。二者都*`短暂的`*和*`插件`*是可选的;如果省略,则使用源插槽的值。 | +| [](<>) `pg_逻辑_插槽_获取_更改` ( *`插槽名称`* `名称`, *`高达`* `pg_lsn`, *`直到改变`* `整数`, `可变的` *`选项`* `文本[]` ) → `一套记录` ( *`lsn`* `pg_lsn`, *`希德`* `希德`, *`数据`* `文本` )

返回插槽中的更改*`插槽名称`*,从上次使用更改的点开始。如果*`高达`*和*`直到改变`*如果为空,则逻辑解码将继续,直到WAL结束。如果*`高达`*如果为非空,则解码将仅包括在指定LSN之前提交的事务。如果*`直到改变`*为非空,则当解码产生的行数超过指定值时,解码将停止。但是,请注意,返回的实际行数可能更大,因为只有在添加解码每个新事务提交时生成的行之后,才会检查此限制。 | +| [](<>) `pg_逻辑_插槽_窥视_更改` ( *`插槽名称`* `名称`, *`高达`* `pg_lsn`, *`直到改变`* `整数`, `可变的` *`选项`* `文本[]` ) → `一套记录` ( *`lsn`* `pg_lsn`, *`希德`* `希德`, *`数据`* `文本` )

表现得就像`pg_逻辑_插槽_获取_更改()`功能,但不使用更改;也就是说,它们将在以后的通话中再次返回。 | +| [](<>) `pg_逻辑_插槽_获取_二进制_更改` ( *`插槽名称`* `名称`, *`高达`* `pg_lsn`, *`直到改变`* `整数`, `可变的` *`选项`* `文本[]`→`一套记录` ( *`lsn`* `pg_lsn`, *`希德`* `希德`, *`日期`* `二进制数据` )

需要就像`pg_逻辑_插槽_获取_更改()`函数,但更改将作为`二进制数据`. | +| [](<>) `pg_逻辑_插槽_窥视_二进制_更改` ( *`插槽名称`* `名称`, *`高达`* `pg_lsn`, *`直到改变`* `整数`, `可变的` *`选项`* `文本[]` ) → `一套记录` ( *`lsn`* `pg_lsn`, *`希德`* `希德`, *`数据`* `二进制数据` )

表现得就像`pg_逻辑_插槽_窥视_更改()`函数,但更改将作为`二进制数据`. | +| [](<>) `pg_复制_插槽_升级` ( *`插槽名称`* `名称`, *`高达`* `pg_lsn` ) → `记录` ( *`插槽名称`* `姓名`,*`end_lsn`* `pg_lsn`)

推进一个名为的复制槽的当前确认位置*`槽名`*.插槽不会向后移动,也不会移动到当前插入位置之外。返回槽的名称和它前进到的实际位置。如果进行了任何推进,则在下一个检查点写出更新的插槽位置信息。因此,在发生崩溃时,插槽可能会返回到较早的位置。 | +| [](<>) `pg_replication_origin_create`(*`节点名`* `文本`) →`样的`

Creates a replication origin with the given external name, and returns the internal ID assigned to it. |
| [](<>) `pg_replication_origin_drop` ( *`node_name`* `text` ) → `void`

Deletes a previously-created replication origin, including any associated replay progress. |
| [](<>) `pg_replication_origin_oid` ( *`node_name`* `text` ) → `oid`

Looks up a replication origin by name and returns the internal ID. If no such replication origin is found, `NULL` is returned. |
| [](<>) `pg_replication_origin_session_setup` ( *`node_name`* `text` ) → `void`

Marks the current session as replaying from the given origin, allowing replay progress to be tracked. Can only be used if no origin is currently selected. Use `pg_replication_origin_session_reset` to undo. |
| [](<>) `pg_replication_origin_session_reset` () → `void`

Cancels the effects of `pg_replication_origin_session_setup()`. |
| [](<>) `pg_replication_origin_session_is_setup` () → `boolean`

Returns true if a replication origin has been selected in the current session. |
| [](<>) `pg_replication_origin_session_progress` ( *`flush`* `boolean` ) → `pg_lsn`

Returns the replay position for the replication origin selected in the current session. The parameter *`flush`* determines whether the corresponding local transaction will be guaranteed to have been flushed to disk or not. |
| [](<>) `pg_replication_origin_xact_setup` ( *`origin_lsn`* `pg_lsn`, *`origin_timestamp`* `timestamp with time zone` ) → `void`

Marks the current transaction as replaying a transaction that has committed at the given LSN and timestamp. Can only be called when a replication origin has been selected using `pg_replication_origin_session_setup`. |
| [](<>) `pg_replication_origin_xact_reset` () → `void`

Cancels the effects of `pg_replication_origin_xact_setup()`. |
| [](<>) `pg_replication_origin_advance` ( *`node_name`* `text`, *`lsn`* `pg_lsn` ) → `void`

Sets replication progress for the given node to the given location. This is primarily useful for setting up the initial location, or setting a new location after configuration changes and similar. Be aware that careless use of this function can lead to inconsistently replicated data. |
| [](<>) `pg_replication_origin_progress` ( *`node_name`* `text`, *`flush`* `boolean` ) → `pg_lsn`

Returns the replay position for the given replication origin. The parameter *`flush`* determines whether the corresponding local transaction will be guaranteed to have been flushed to disk or not. |
| [](<>) `pg_logical_emit_message` ( *`transactional`* `boolean`, *`prefix`* `text`, *`content`* `text` ) → `pg_lsn`

`pg_logical_emit_message` ( *`transactional`* `boolean`, *`prefix`* `text`, *`content`* `bytea` ) → `pg_lsn`

Emits a logical decoding message. This can be used to pass generic messages to logical decoding plugins through WAL. The *`transactional`* parameter specifies whether the message should be part of the current transaction, or whether it should be written immediately and decoded as soon as the logical decoder reads the record. The *`prefix`* parameter is a textual prefix that can be used by logical decoding plugins to easily recognize messages that are interesting for them. The *`content`* parameter is the content of the message, given either in text or binary form. |
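To see several of these slot functions in combination, here is a sketch using the `test_decoding` output plugin shipped as a contrib module; the slot name is arbitrary, and the third argument makes the slot temporary so it disappears at session end:

```
-- Create a temporary logical slot using the test_decoding contrib plugin
SELECT * FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding', true);

-- After some data changes, inspect them without consuming (peek),
-- then consume them for real:
SELECT * FROM pg_logical_slot_peek_changes('regression_slot', NULL, NULL);
SELECT * FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL);
```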

### 9.27.7. Database Object Management Functions

The functions shown in [Table 9.92](functions-admin.html#FUNCTIONS-ADMIN-DBSIZE) calculate the disk space usage of database objects, or assist in presentation or understanding of usage results. `bigint` results are measured in bytes. If an OID that does not represent an existing object is passed to one of these functions, `NULL` is returned.

**Table 9.92. Database Object Size Functions**

| Function

Description |
| -------------- |
| [](<>) `pg_column_size` ( `"any"` ) → `integer`

Shows the number of bytes used to store any individual data value. If applied directly to a table column value, this reflects any compression that was done. |
| [](<>) `pg_column_compression` ( `"any"` ) → `text`

Shows the compression algorithm that was used to compress an individual variable-length value. Returns `NULL` if the value is not compressed. |
| [](<>) `pg_database_size` ( `name` ) → `bigint`

`pg_database_size` ( `oid` ) → `bigint`

Computes the total disk space used by the database with the specified name or OID. To use this function, you must have `CONNECT` privilege on the specified database (which is granted by default) or be a member of the `pg_read_all_stats` role. |
| [](<>) `pg_indexes_size` ( `regclass` ) → `bigint`

Computes the total disk space used by indexes attached to the specified table. |
| [](<>) `pg_relation_size` ( *`relation`* `regclass` [, *`fork`* `text` ] ) → `bigint`

Computes the disk space used by one "fork" of the specified relation. (Note that for most purposes it is more convenient to use the higher-level functions `pg_total_relation_size` or `pg_table_size`, which sum the sizes of all forks.) With one argument, this returns the size of the main data fork of the relation. The second argument can be provided to specify which fork to examine:

* `main` returns the size of the main data fork of the relation.

* `fsm` returns the size of the Free Space Map (see [Section 70.3](storage-fsm.html)) associated with the relation.

* `vm` returns the size of the Visibility Map (see [Section 70.4](storage-vm.html)) associated with the relation.

* `init` returns the size of the initialization fork, if any, associated with the relation. |
| [](<>) `pg_size_bytes` ( `text` ) → `bigint`

Converts a size in human-readable format (as returned by `pg_size_pretty`) into bytes. |
| [](<>) `pg_size_pretty` ( `bigint` ) → `text`

`pg_size_pretty` ( `numeric` ) → `text`

Converts a size in bytes into a more easily human-readable format with size units (bytes, kB, MB, GB or TB as appropriate). Note that the units are powers of 2 rather than powers of 10, so 1kB is 1024 bytes, 1MB is 1024^2 = 1048576 bytes, and so on. |
| [](<>) `pg_table_size` ( `regclass` ) → `bigint`

Computes the disk space used by the specified table, excluding indexes (but including its TOAST table if any, free space map, and visibility map). |
| [](<>) `pg_tablespace_size` ( `name` ) → `bigint`

`pg_tablespace_size` ( `oid` ) → `bigint`

Computes the total disk space used in the tablespace with the specified name or OID. To use this function, you must have `CREATE` privilege on the specified tablespace or be a member of the `pg_read_all_stats` role, unless it is the default tablespace for the current database. |
| [](<>) `pg_total_relation_size` ( `regclass` ) → `bigint`

Computes the total disk space used by the specified table, including all indexes and TOAST data. The result is equivalent to `pg_table_size` `+` `pg_indexes_size`. |

The functions above that operate on tables or indexes accept a `regclass` argument, which is simply the OID of the table or index in the `pg_class` system catalog. You do not have to look up the OID by hand, however, since the `regclass` data type's input converter will do the work for you. See [Section 8.19](datatype-oid.html) for details.
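Tying a few of these together, the following sketch reports the largest tables (including indexes and TOAST data) in the current database in human-readable form; it relies only on the system catalog `pg_class` and the functions above:

```
-- Report the ten largest ordinary tables in this database
SELECT relname,
       pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;
```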

The functions shown in [Table 9.93](functions-admin.html#FUNCTIONS-ADMIN-DBLOCATION) assist in identifying the specific disk files associated with database objects.

**Table 9.93. Database Object Location Functions**

| Function

Description |
| -------------- |
| [](<>) `pg_relation_filenode` ( *`relation`* `regclass` ) → `oid`

Returns the "filenode" number currently assigned to the specified relation. The filenode is the base component of the file name(s) used for the relation (see [Section 70.1](storage-file-layout.html) for more information). For most relations the result is the same as `pg_class`.`relfilenode`, but for certain system catalogs `relfilenode` is zero and this function must be used to get the correct value. The function returns NULL if passed a relation that does not have storage, such as a view. |
| [](<>) `pg_relation_filepath` ( *`relation`* `regclass` ) → `text`

Returns the entire file path name (relative to the database cluster's data directory, `PGDATA`) of the relation. |
| [](<>) `pg_filenode_relation` ( *`tablespace`* `oid`, *`filenode`* `oid` ) → `regclass`

Returns a relation's OID given the tablespace OID and filenode it is stored under. This is essentially the inverse mapping of `pg_relation_filepath`. For a relation in the database's default tablespace, the tablespace can be specified as zero. Returns `NULL` if no relation in the current database is associated with the given values. |

[Table 9.94](functions-admin.html#FUNCTIONS-ADMIN-COLLATION) lists functions used to manage collations.

**Table 9.94. Collation Management Functions**

| Function

Description |
| -------------- |
| [](<>) `pg_collation_actual_version` ( `oid` ) → `text`

Returns the actual version of the collation object as it is currently installed in the operating system. If this is different from the value in `pg_collation`.`collversion`, then objects depending on the collation might need to be rebuilt. See also [ALTER COLLATION](sql-altercollation.html). |
| [](<>) `pg_import_system_collations` ( *`schema`* `regnamespace` ) → `integer`

Adds collations to the system catalog `pg_collation` based on all the locales it finds in the operating system. This is what `initdb` uses; see [Section 24.2.2](collation.html#COLLATION-MANAGING) for more details. If additional locales are installed into the operating system later on, this function can be run again to add collations for the new locales. Locales that match existing entries in `pg_collation` will be skipped. (But collation objects based on locales that are no longer present in the operating system are not removed by this function.) The *`schema`* parameter would typically be `pg_catalog`, but that is not a requirement; the collations could be installed into some other schema as well. The function returns the number of new collation objects it created. Use of this function is restricted to superusers. |

[Table 9.95](functions-admin.html#FUNCTIONS-INFO-PARTITION) lists functions that provide information about the structure of partitioned tables.

**Table 9.95. Partitioning Information Functions**

| Function

Description |
| -------------- |
| [](<>) `pg_partition_tree` ( `regclass` ) → `setof record` ( *`relid`* `regclass`, *`parentrelid`* `regclass`, *`isleaf`* `boolean`, *`level`* `integer` )

Lists the tables or indexes in the partition tree of the given partitioned table or partitioned index, with one row for each partition. Information provided includes the OID of the partition, the OID of its immediate parent, a boolean value telling if the partition is a leaf, and an integer telling its level in the hierarchy. The level value is 0 for the input table or index, 1 for its immediate child partitions, 2 for their partitions, and so on. Returns no rows if the relation does not exist or is not a partition or partitioned table. |
| [](<>) `pg_partition_ancestors` ( `regclass` ) → `setof regclass`

Lists the ancestor relations of the given partition, including the relation itself. Returns no rows if the relation does not exist or is not a partition or partitioned table. |
| [](<>) `pg_partition_root` ( `regclass` ) → `regclass`

Returns the top-most parent of the partition tree to which the given relation belongs. Returns `NULL` if the relation does not exist or is not a partition or partitioned table. |

For example, to check the total size of the data contained in a partitioned table `measurement`, one could use the following query:

```
SELECT pg_size_pretty(sum(pg_relation_size(relid))) AS total_size
  FROM pg_partition_tree('measurement');
```

### 9.27.8. Index Maintenance Functions

[Table 9.96](functions-admin.html#FUNCTIONS-ADMIN-INDEX-TABLE) shows the functions available for index maintenance tasks. (Note that these maintenance tasks are normally done automatically by autovacuum; use of these functions is only required in special cases.) These functions cannot be executed during recovery. Use of these functions is restricted to superusers and the owner of the given index.

**Table 9.96. Index Maintenance Functions**

| Function

Description |
| -------------- |
| [](<>) `brin_summarize_new_values` ( *`index`* `regclass` ) → `integer`

Scans the specified BRIN index to find page ranges in the base table that are not currently summarized by the index; for any such range it creates a new summary index tuple by scanning those table pages. Returns the number of new page range summaries that were inserted into the index. |
| [](<>) `brin_summarize_range` ( *`index`* `regclass`, *`blockNumber`* `bigint` ) → `integer`

Summarizes the page range covering the given block, if not already summarized. This is like `brin_summarize_new_values` except that it only processes the page range that covers the given table block number. |
| [](<>) `brin_desummarize_range` ( *`index`* `regclass`, *`blockNumber`* `bigint` ) → `void`

Removes the BRIN index tuple that summarizes the page range covering the given table block, if there is one. |
| [](<>) `gin_clean_pending_list` ( *`index`* `regclass` ) → `bigint`

Cleans up the "pending" list of the specified GIN index by moving entries in it, in bulk, to the main GIN data structure. Returns the number of pages removed from the pending list. If the argument is a GIN index built with the `fastupdate` option disabled, no cleanup happens and the result is zero, because the index doesn't have a pending list. See [Section 67.4.1](gin-implementation.html#GIN-FAST-UPDATE) and [Section 67.5](gin-tips.html) for details about the pending list and `fastupdate` option. |
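For instance, after a bulk load one might force summarization or pending-list cleanup by hand rather than waiting for autovacuum; the index names below are hypothetical placeholders:

```
-- Summarize any not-yet-summarized page ranges of a BRIN index
SELECT brin_summarize_new_values('measurement_brin_idx');

-- Flush the pending list of a GIN index into the main structure
SELECT gin_clean_pending_list('docs_gin_idx');
```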

### 9.27.9. Generic File Access Functions

The functions shown in [Table 9.97](functions-admin.html#FUNCTIONS-ADMIN-GENFILE-TABLE) provide native access to files on the machine hosting the server. Only files within the database cluster directory and the `log_directory` can be accessed, unless the user is a superuser or is granted the role `pg_read_server_files`. Use a relative path for files in the cluster directory, and a path matching the `log_directory` configuration setting for log files.

Note that granting users the EXECUTE privilege on `pg_read_file()`, or related functions, allows them the ability to read any file on the server that the database server process can read; these functions bypass all in-database privilege checks. This means that, for example, a user with such access is able to read the contents of the `pg_authid` table where authentication information is stored, as well as read any table data in the database. Therefore, granting access to these functions should be carefully considered.

Some of these functions take an optional *`missing_ok`* parameter, which specifies the behavior when the file or directory does not exist. If `true`, the function returns `NULL` or an empty result set, as appropriate. If `false`, an error is raised. The default is `false`.

**Table 9.97. Generic File Access Functions**

| Function

Description |
| -------------- |
| [](<>) `pg_ls_dir` ( *`dirname`* `text` [, *`missing_ok`* `boolean`, *`include_dot_dirs`* `boolean` ] ) → `setof text`

Returns the names of all files (and directories and other special files) in the specified directory. The *`include_dot_dirs`* parameter indicates whether "." and ".." are to be included in the result set; the default is to exclude them. Including them can be useful when *`missing_ok`* is `true`, to distinguish an empty directory from a non-existent directory.

This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function. |
| [](<>) `pg_ls_logdir` () → `setof record` ( *`name`* `text`, *`size`* `bigint`, *`modification`* `timestamp with time zone` )

Returns the name, size, and last modification time (mtime) of each ordinary file in the server's log directory. Filenames beginning with a dot, directories, and other special files are excluded.

This function is restricted to superusers and members of the `pg_monitor` role by default, but other users can be granted EXECUTE to run the function. |
| [](<>) `pg_ls_waldir` () → `setof record` ( *`name`* `text`, *`size`* `bigint`, *`modification`* `timestamp with time zone` )

Returns the name, size, and last modification time (mtime) of each ordinary file in the server's write-ahead log (WAL) directory. Filenames beginning with a dot, directories, and other special files are excluded.

This function is restricted to superusers and members of the `pg_monitor` role by default, but other users can be granted EXECUTE to run the function. |
| [](<>) `pg_ls_archive_statusdir` () → `setof record` ( *`name`* `text`, *`size`* `bigint`, *`modification`* `timestamp with time zone` )

Returns the name, size, and last modification time (mtime) of each ordinary file in the server's WAL archive status directory (`pg_wal/archive_status`). Filenames beginning with a dot, directories, and other special files are excluded.

This function is restricted to superusers and members of the `pg_monitor` role by default, but other users can be granted EXECUTE to run the function. |
| [](<>) `pg_ls_tmpdir` ( [ *`tablespace`* `oid` ] ) → `setof record` ( *`name`* `text`, *`size`* `bigint`, *`modification`* `timestamp with time zone` )

Returns the name, size, and last modification time (mtime) of each ordinary file in the temporary file directory for the specified *`tablespace`*. If *`tablespace`* is not provided, the `pg_default` tablespace is examined. Filenames beginning with a dot, directories, and other special files are excluded.

This function is restricted to superusers and members of the `pg_monitor` role by default, but other users can be granted EXECUTE to run the function. |
| [](<>) `pg_read_file` ( *`filename`* `text` [, *`offset`* `bigint`, *`length`* `bigint` [, *`missing_ok`* `boolean` ]] ) → `text`

Returns all or part of a text file, starting at the given byte *`offset`*, returning at most *`length`* bytes (less if the end of file is reached first). If *`offset`* is negative, it is relative to the end of the file. If *`offset`* and *`length`* are omitted, the entire file is returned. The bytes read from the file are interpreted as a string in the database's encoding; an error is thrown if they are not valid in that encoding.

This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function. |
| [](<>) `pg_read_binary_file` ( *`filename`* `text` [, *`offset`* `bigint`, *`length`* `bigint` [, *`missing_ok`* `boolean` ]] ) → `bytea`

Returns all or part of a file. This function is identical to `pg_read_file` except that it can read arbitrary binary data, returning the result as `bytea` not `text`; accordingly, no encoding checks are performed.

This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function.

In combination with the `convert_from` function, this function can be used to read a text file in a specified encoding and convert to the database's encoding:

```
SELECT convert_from(pg_read_binary_file('file_in_utf8.txt'), 'UTF8');
```
|
| [](<>) `pg_stat_file` ( *`filename`* `text` [, *`missing_ok`* `boolean` ] ) → `record` ( *`size`* `bigint`, *`access`* `timestamp with time zone`, *`modification`* `timestamp with time zone`, *`change`* `timestamp with time zone`, *`creation`* `timestamp with time zone`, *`isdir`* `boolean` )

Returns a record containing the file's size, last access timestamp, last modification timestamp, last file status change timestamp (Unix platforms only), file creation timestamp (Windows only), and a flag indicating if it is a directory.

This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function. |
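As a usage sketch combining two of these functions (the log file name below is purely illustrative and depends on your `log_filename` setting):

```
-- List server log files, then read the last 2 kB of one of them;
-- a negative offset is interpreted relative to the end of the file.
SELECT name, size, modification FROM pg_ls_logdir();
SELECT pg_read_file('log/postgresql-2021-11-01_000000.log', -2048, 2048);
```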

### 9.27.10. Advisory Lock Functions

The functions shown in [Table 9.98](functions-admin.html#FUNCTIONS-ADVISORY-LOCKS-TABLE) manage advisory locks. For details about proper use of these functions, see [Section 13.3.5](explicit-locking.html#ADVISORY-LOCKS).

All these functions are intended to be used to lock application-defined resources, which can be identified either by a single 64-bit key value or two 32-bit key values (note that these two key spaces do not overlap). If another session already holds a conflicting lock on the same resource identifier, the functions will either wait until the resource becomes available, or return a `false` result, as appropriate for the function. Locks can be either shared or exclusive: a shared lock does not conflict with other shared locks on the same resource, only with exclusive locks. Locks can be taken at session level (so that they are held until released or the session ends) or at transaction level (so that they are held until the current transaction ends; there is no provision for manual release). Multiple session-level lock requests stack, so that if the same resource identifier is locked three times there must then be three unlock requests to release the resource in advance of session end.

**Table 9.98. Advisory Lock Functions**

| Function

Description |
| -------------- |
| [](<>) `pg_advisory_lock` ( *`key`* `bigint` ) → `void`

`pg_advisory_lock` ( *`key1`* `integer`, *`key2`* `integer` ) → `void`

Obtains an exclusive session-level advisory lock, waiting if necessary. |
| [](<>) `pg_advisory_lock_shared` ( *`key`* `bigint` ) → `void`

`pg_advisory_lock_shared` ( *`key1`* `integer`, *`key2`* `integer` ) → `void`

Obtains a shared session-level advisory lock, waiting if necessary. |
| [](<>) `pg_advisory_unlock` ( *`key`* `bigint` ) → `boolean`

`pg_advisory_unlock` ( *`key1`* `integer`, *`key2`* `integer` ) → `boolean`

Releases a previously-acquired exclusive session-level advisory lock. Returns `true` if the lock is successfully released. If the lock was not held, `false` is returned, and in addition, an SQL warning will be reported by the server. |
| [](<>) `pg_advisory_unlock_all` () → `void`

Releases all session-level advisory locks held by the current session. (This function is implicitly invoked at session end, even if the client disconnects ungracefully.) |
| [](<>) `pg_advisory_unlock_shared` ( *`key`* `bigint` ) → `boolean`

`pg_advisory_unlock_shared` ( *`key1`* `integer`, *`key2`* `integer` ) → `boolean`

Releases a previously-acquired shared session-level advisory lock. Returns `true` if the lock is successfully released. If the lock was not held, `false` is returned, and in addition, an SQL warning will be reported by the server. |
| [](<>) `pg_advisory_xact_lock` ( *`key`* `bigint` ) → `void`

`pg_advisory_xact_lock` ( *`key1`* `integer`, *`key2`* `integer` ) → `void`

Obtains an exclusive transaction-level advisory lock, waiting if necessary. |
| [](<>) `pg_advisory_xact_lock_shared` ( *`key`* `bigint` ) → `void`

`pg_advisory_xact_lock_shared` ( *`key1`* `integer`, *`key2`* `integer` ) → `void`

Obtains a shared transaction-level advisory lock, waiting if necessary. |
| [](<>) `pg_try_advisory_lock` ( *`key`* `bigint` ) → `boolean`

`pg_try_advisory_lock` ( *`key1`* `integer`, *`key2`* `integer` ) → `boolean`

Obtains an exclusive session-level advisory lock if available. This will either obtain the lock immediately and return `true`, or return `false` without waiting if the lock cannot be acquired immediately. |
| [](<>) `pg_try_advisory_lock_shared` ( *`key`* `bigint` ) → `boolean`

`pg_try_advisory_lock_shared` ( *`key1`* `integer`, *`key2`* `integer` ) → `boolean`

Obtains a shared session-level advisory lock if available. This will either obtain the lock immediately and return `true`, or return `false` without waiting if the lock cannot be acquired immediately. |
| [](<>) `pg_try_advisory_xact_lock` ( *`key`* `bigint` ) → `boolean`

`pg_try_advisory_xact_lock` ( *`key1`* `integer`, *`key2`* `integer` ) → `boolean`

Obtains an exclusive transaction-level advisory lock if available. This will either obtain the lock immediately and return `true`, or return `false` without waiting if the lock cannot be acquired immediately. |
| [](<>) `pg_try_advisory_xact_lock_shared` ( *`key`* `bigint` ) → `boolean`

`pg_try_advisory_xact_lock_shared` ( *`key1`* `integer`, *`key2`* `integer` ) → `boolean`

Obtains a shared transaction-level advisory lock if available. This will either obtain the lock immediately and return `true`, or return `false` without waiting if the lock cannot be acquired immediately. |
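A common pattern is to serialize a custom maintenance job across sessions; the key value 42 here is an arbitrary application-defined identifier:

```
-- Only one session at a time wins the lock; others get false immediately
SELECT pg_try_advisory_lock(42);

-- ... the winning session does its work here ...

SELECT pg_advisory_unlock(42);  -- release the session-level lock
```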

diff --git a/docs/X/functions-array.md b/docs/en/functions-array.md similarity index 100% rename from docs/X/functions-array.md rename to docs/en/functions-array.md diff --git a/docs/en/functions-array.zh.md b/docs/en/functions-array.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..24da37eb3f189281f91290f8d84adc1ba727e20a --- /dev/null +++ b/docs/en/functions-array.zh.md @@ -0,0 +1,47 @@

## 9.19. Array Functions and Operators

[Table 9.51](functions-array.html#ARRAY-OPERATORS-TABLE) shows the specialized operators available for array types. In addition to those, the usual comparison operators shown in [Table 9.1](functions-comparison.html#FUNCTIONS-COMPARISON-OP-TABLE) are available for arrays. The comparison operators compare the array contents element-by-element, using the default B-tree comparison function for the element data type, and sort based on the first difference. In multidimensional arrays the elements are visited in row-major order (last subscript varies most rapidly). If the contents of two arrays are equal but the dimensionality is different, the first difference in the dimensionality information determines the sort order.

**Table 9.51. Array Operators**

| Operator

Description

Example |
| ---------------------------- |
| `anyarray` `@>` `anyarray` → `boolean`

Does the first array contain the second, that is, does each element appearing in the second array equal some element of the first array? (Duplicates are not treated specially, thus `ARRAY[1]` and `ARRAY[1,1]` are each considered to contain the other.)

`ARRAY[1,4,3] @> ARRAY[3,1,3]` → `t` |
| `anyarray` `<@` `anyarray` → `boolean`

Is the first array contained by the second?
`ARRAY[2,2,7] <@ ARRAY[1,7,4,2,6]`→`t` | +| `anyarray` `&&` `anyarray`→`boolean`

Do the arrays overlap, that is, have any elements in common?

`ARRAY[1,4,3] && ARRAY[2,1]`→`t` | +| `anycompatiblearray` `||` `anycompatiblearray`→`anycompatiblearray`

Concatenates the two arrays. Concatenating a null or empty array is a no-op; otherwise the arrays must have the same number of dimensions (as illustrated by the first example) or differ in number of dimensions by one (as illustrated by the second). If the arrays are not of identical element types, they will be coerced to a common type (see[Section 10.5](typeconv-union-case.html)).

`ARRAY[1,2,3] || ARRAY[4,5,6,7]`→`{1,2,3,4,5,6,7}`

`ARRAY[1,2,3] || ARRAY[[4,5,6],[7,8,9.9]]`→ {`{1,2,3},{4,5,6},{7,8,9.9}`} | +| `anycompatible` `||` `anycompatiblearray`→`anycompatiblearray`

Concatenates an element onto the front of an array (which must be empty or one-dimensional).

`3 || ARRAY[4,5,6]`→`{3,4,5,6}` | +| `anycompatiblearray` `||` `anycompatible`→`anycompatiblearray`

Concatenates an element onto the end of an array (which must be empty or one-dimensional).

`ARRAY[4,5,6] || 7`→`{4,5,6,7}` | + +See[Section 8.15](arrays.html)for more details about array operator behavior. See[Section 11.2](indexes-types.html)for more details about which operators support indexed operations. + +[Table 9.52](functions-array.html#ARRAY-FUNCTIONS-TABLE)shows the functions available for use with array types. See[Section 8.15](arrays.html)for more information and examples of the use of these functions. + +**Table 9.52. Array Functions** + +| Function

Description

Example |
| -------------------------------- | --- | --- | --- | --- |
| [](<>) `array_append` ( `anycompatiblearray`, `anycompatible` ) → `anycompatiblearray`

Appends an element to the end of an array (same as the `anycompatiblearray` `||` `anycompatible` operator).

`array_append(ARRAY[1,2], 3)` → `{1,2,3}` | | | | |
| [](<>) `array_cat` ( `anycompatiblearray`, `anycompatiblearray` ) → `anycompatiblearray`

Concatenates two arrays (same as the `anycompatiblearray` `||` `anycompatiblearray` operator).

`array_cat(ARRAY[1,2,3], ARRAY[4,5])` → `{1,2,3,4,5}` | | | | |
| [](<>) `array_dims` ( `anyarray` ) → `text`

Returns a text representation of the array's dimensions.

`array_dims(ARRAY[[1,2,3], [4,5,6]])` → `[1:2][1:3]` | | | | |
| [](<>) `array_fill` ( `anyelement`, `integer[]` [, `integer[]` ] ) → `anyarray`

Returns an array filled with copies of the given value, having dimensions of the lengths specified by the second argument. The optional third argument supplies lower-bound values for each dimension (which default to all `1`).

`array_fill(11, ARRAY[2,3])` → `{{11,11,11},{11,11,11}}`

`array_fill(7, ARRAY[3], ARRAY[2])` → `[2:4]={7,7,7}` | | | | |
| [](<>) `array_length` ( `anyarray`, `integer` ) → `integer`

Returns the length of the requested array dimension.

`array_length(ARRAY[1,2,3], 1)` → `3` | | | | |
| [](<>) `array_lower` ( `anyarray`, `integer` ) → `integer`

Returns the lower bound of the requested array dimension.

`array_lower('[0:2]={1,2,3}'::integer[], 1)` → `0` | | | | |
| [](<>) `array_ndims` ( `anyarray` ) → `integer`

Returns the number of dimensions of the array.

`array_ndims(ARRAY[[1,2,3], [4,5,6]])` → `2` | | | | |
| [](<>) `array_position` ( `anycompatiblearray`, `anycompatible` [, `integer` ] ) → `integer`

Returns the subscript of the first occurrence of the second argument in the array, or `NULL` if it's not present. If the third argument is given, the search begins at that subscript. The array must be one-dimensional. Comparisons are done using `IS NOT DISTINCT FROM` semantics, so it is possible to search for `NULL`.

`array_position(ARRAY['sun', 'mon', 'tue', 'wed', 'thu', 'fri', 'sat'], 'mon')` → `2` | | | | |
| [](<>) `array_positions` ( `anycompatiblearray`, `anycompatible` ) → `integer[]`

Returns an array of the subscripts of all occurrences of the second argument in the array given as first argument. The array must be one-dimensional. Comparisons are done using `IS NOT DISTINCT FROM` semantics, so it is possible to search for `NULL`. `NULL` is returned only if the array is `NULL`; if the value is not found in the array, an empty array is returned.

`array_positions(ARRAY['A','A','B','A'], 'A')` → `{1,2,4}` | | | | |
| [](<>) `array_prepend` ( `anycompatible`, `anycompatiblearray` ) → `anycompatiblearray`

Prepends an element to the beginning of an array (same as the `anycompatible` `||` `anycompatiblearray` operator).

`array_prepend(1, ARRAY[2,3])` → `{1,2,3}` | | | | |
| [](<>) `array_remove` ( `anycompatiblearray`, `anycompatible` ) → `anycompatiblearray`

Removes all elements equal to the given value from the array. The array must be one-dimensional. Comparisons are done using `IS NOT DISTINCT FROM` semantics, so it is possible to remove `NULL`s.

`array_remove(ARRAY[1,2,3,2], 2)` → `{1,3}` | | | | |
| [](<>) `array_replace` ( `anycompatiblearray`, `anycompatible`, `anycompatible` ) → `anycompatiblearray`

Replaces each array element equal to the second argument with the third argument.

`array_replace(ARRAY[1,2,5,4], 5, 3)` → `{1,2,3,4}` | | | | |
| [](<>) `array_to_string` ( *`array`* `anyarray`, *`delimiter`* `text` [, *`null_string`* `text` ] ) → `text`

Converts each array element to its text representation, and concatenates those separated by the *`delimiter`* string. If *`null_string`* is given and is not `NULL`, then `NULL` array entries are represented by that string; otherwise, they are omitted.

`array_to_string(ARRAY[1, 2, 3, NULL, 5], ',', '*')` → `1,2,3,*,5` | | | | |
| [](<>) `array_upper` ( `anyarray`, `integer` ) → `integer`

Returns the upper bound of the requested array dimension.

`array_upper(ARRAY[1,8,3,7], 1)` → `4` | | | | |
| [](<>) `cardinality` ( `anyarray` ) → `integer`

Returns the total number of elements in the array, or 0 if the array is empty.

`cardinality(ARRAY[[1,2],[3,4]])` → `4` | | | | |
| [](<>) `trim_array` ( *`array`* `anyarray`, *`n`* `integer` ) → `anyarray`

Trims an array by removing the last *`n`* elements. If the array is multidimensional, only the first dimension is trimmed.

`trim_array(ARRAY[1,2,3,4,5,6], 2)` → `{1,2,3,4}` | | | | |
| [](<>) `unnest` ( `anyarray` ) → `setof anyelement`

Expands an array into a set of rows. The array's elements are read out in storage order.

`unnest(ARRAY[1,2])` →

```
 1
 2
```

`unnest(ARRAY[['foo','bar'],['baz','qux']])` →

```
 foo
 bar
 baz
 qux
```
| | | | |
| `unnest` ( `anyarray`, `anyarray` [, ... ] ) → `setof anyelement, anyelement [, ... ]`

Expands multiple arrays (possibly of different data types) into a set of rows. If the arrays are not all the same length then the shorter ones are padded with `NULL`s. This form is only allowed in a query's FROM clause; see [Section 7.2.1.4](queries-table-expressions.html#QUERIES-TABLEFUNCTIONS).

`select * from unnest(ARRAY[1,2], ARRAY['foo','bar','baz']) as x(a,b)` →

```
 a |  b
---+-----
 1 | foo
 2 | bar
   | baz
```
|

### Notes

There are two differences in the behavior of `string_to_array` from pre-9.1 versions of PostgreSQL. First, it will return an empty (zero-element) array rather than `NULL` when the input string is of zero length. Second, if the delimiter string is `NULL`, the function splits the input into individual characters, rather than returning `NULL` as before.

See also [Section 9.21](functions-aggregate.html) about the aggregate function `array_agg` for use with arrays.
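As a quick illustration of how `array_agg` and `unnest` round-trip between rows and arrays (using the built-in `generate_series` so the sketch is self-contained):

```
-- Collapse rows into an array ...
SELECT array_agg(x) FROM generate_series(1, 3) AS t(x);  -- {1,2,3}

-- ... and expand an array back into rows
SELECT unnest(ARRAY[1,2,3]);                             -- 1, 2, 3 as rows
```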

diff --git a/docs/X/functions-geometry.md b/docs/en/functions-geometry.md similarity index 100% rename from docs/X/functions-geometry.md rename to docs/en/functions-geometry.md diff --git a/docs/en/functions-geometry.zh.md b/docs/en/functions-geometry.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..7cf9f1188e4f28338a97ece017380655ead6d862 --- /dev/null +++ b/docs/en/functions-geometry.zh.md @@ -0,0 +1,97 @@

## 9.11. Geometric Functions and Operators

The geometric types `point`, `box`, `lseg`, `line`, `path`, `polygon`, and `circle` have a large set of native support functions and operators, shown in [Table 9.35](functions-geometry.html#FUNCTIONS-GEOMETRY-OP-TABLE), [Table 9.36](functions-geometry.html#FUNCTIONS-GEOMETRY-FUNC-TABLE), and [Table 9.37](functions-geometry.html#FUNCTIONS-GEOMETRY-CONV-TABLE).

**Table 9.35. Geometric Operators**

| Operator

Description

Example |
| ---------------------------- |
| *`geometric_type`* `+` `point` → `*`geometric_type`*`

Adds the coordinates of the second `point` to those of each point of the first argument, thus performing translation. Available for `point`, `box`, `path`, `circle`.

`box '(1,1),(0,0)' + point '(2,0)'` → `(3,1),(2,0)` |

连接两个打开的路径(如果任一路径关闭,则返回NULL)。

`路径“[(0,0),(1,1)]”+路径“[(2,2),(3,3),(4,4)]”` → `[(0,0),(1,1),(2,2),(3,3),(4,4)]` | +| *`几何_型`* `-` `指向` → `*`几何\_型`*`

减去第二个坐标`指向`从第一个论点的每一点出发,进行翻译。适用于`指向`, `盒`, `路径`, `圆圈`.

`方框'(1,1),(0,0)'-点'(2,0)'` → `(-1,1),(-2,0)` | +| *`几何_型`* `*` `指向` → `*`几何\_型`*`

将第一个参数的每个点乘以第二个参数`指向`(将点视为由实部和虚部表示的复数,并执行标准复数乘法)。如果一个人解释第二个`指向`作为向量,这相当于按向量长度缩放对象的大小和与原点的距离,并按向量与原点的角度逆时针旋转对象*`十、`*轴适用于`指向`, `盒`,[\[a\]](#ftn.FUNCTIONS-GEOMETRY-ROTATION-FN) `路径`, `圆圈`.

`路径“((0,0)、(1,0)、(1,1))”*点“(3.0,0)”` → `((0,0),(3,0),(3,3))`

`路径“((0,0)、(1,0)、(1,1))”*点(cosd(45),sind(45))` → `((0,0),​(0.7071067811865475,0.7071067811865475),​(0,1.414213562373095))` | +| *`几何_型`* `/` `指向` → `*`几何\_型`*`

将第一个论点的每一点除以第二个论点`指向`(将点视为由实部和虚部表示的复数,并执行标准复数除法)。如果一个人解释第二个`指向`作为向量,这相当于将对象的大小和与原点的距离向下缩放向量的长度,并将其围绕原点顺时针旋转向量与原点的角度*`十、`*轴适用于`指向`, `盒`,[\[a\]](functions-geometry.html#ftn.FUNCTIONS-GEOMETRY-ROTATION-FN) `路径`, `圆圈`.

`路径“((0,0),(1,0),(1,1))”/点“(2.0,0)”` → `((0,0),(0.5,0),(0.5,0.5))`

`路径“((0,0)、(1,0)、(1,1))”/点(cosd(45),sind(45))` → `((0,0),​(0.7071067811865476,-0.7071067811865476),​(1.4142135623730951,0))` | +| `@-@` *`几何_型`* → `双精度`

计算总长度。适用于`lseg`, `路径`.

`@-@路径“[(0,0)、(1,0)、(1,1)]”` → `2.` | +| `@@` *`几何_型`* → `指向`

计算中心点。适用于`盒`, `lseg`, `多边形`, `圆圈`.

`@@框'(2,2),(0,0)'` → `(1,1)` | +| `#` *`几何_型`* → `整数`

返回点数。适用于`路径`, `多边形`.

`#路径“((1,0),(0,1),(-1,0))”` → `3.` | +| *`几何_型`* `#` *`几何_型`* → `指向`

计算交点,如果没有交点,则为NULL。适用于`lseg`, `线`.

`lseg'[(0,0),(1,1)]“#lseg'[(1,0),(0,1)]”` → `(0.5,0.5)` | +| `盒` `#` `盒` → `盒`

计算两个框的交点,如果没有,则为NULL。

`盒子(2,2),(-1,-1)#盒子(1,1),(-2,-2)'` → `(1,1),(-1,-1)` | +| *`几何_型`* `##` *`几何_型`* → `指向`

计算第二个对象上距离第一个对象最近的点。适用于以下几对类型:(`指向`, `盒`), (`指向`, `lseg`), (`观点`,`线`), (`lseg`,`盒子`), (`lseg`,`lseg`), (`线`,`lseg`)。

`point '(0,0)' ## lseg '[(2,0),(0,2)]'`→`(1,1)` | +| *`几何类型`* `<->` *`几何类型`*→`双精度`

计算对象之间的距离。适用于所有几何类型,除了`多边形`, 对于所有的组合`观点`使用另一种几何类型,对于这些额外的类型对:(`盒子`,`lseg`), (`lseg`,`线`), (`多边形`,`圆圈`) (以及换向器案例)。

`圆'<(0,0),1>' <-> 圆'<(5,0),1>'`→`3` | +| *`几何类型`* `@>` *`几何类型`*→`布尔值`

第一个对象是否包含第二个?可用于这些类型对:(`盒子`,`观点`), (`盒子`,`盒子`), (`小路`,`观点`), (`多边形`,`观点`), (`多边形`,`多边形`), (`圆圈`,`观点`), (`圆圈`,`圆圈`)。

`圆'<(0,0),2>'@>点'(1,1)'`→`吨` | +| *`几何类型`* `<@` *`几何类型`*→`布尔值`

第一个对象是包含在第二个对象中还是包含在第二个对象中?可用于这些类型对:(`观点`,`盒子`), (`观点`, `lseg`), (`观点`, `线`), (`观点`, `小路`), (`观点`, `多边形`), (`观点`, `圆圈`), (`盒子`, `盒子`), (`lseg`, `盒子`), (`lseg`,`线`), (`多边形`,`多边形`), (`圆圈`,`圆圈`)。

`点 '(1,1)' <@ 圆圈 '<(0,0),2>'`→`吨` | +| *`几何类型`* `&&` *`几何类型`*→`布尔值`

这些对象是否重叠?(有一个共同点使这一点成为现实。)适用于`盒子`,`多边形`,`圆圈`.

`框'(1,1),(0,0)' && 框'(2,2),(0,0)'`→`吨` | +| *`几何类型`* `<<` *`几何类型`*→`布尔值`

第一个对象严格在第二个对象的左边吗?可以用来`观点`,`盒子`,`多边形`,`圆圈`.

`圆'<(0,0),1>' << 圆'<(5,0),1>'`→`吨` | +| *`几何类型`* `>>` *`几何类型`*→`布尔值`

第一个对象是否严格正确于第二个对象?可以用来`观点`,`盒子`,`多边形`,`圆圈`.

`圆'<(5,0),1>' >> 圆'<(0,0),1>'`→`吨` | +| *`几何类型`* `&<` *`几何类型`*→`布尔值`

第一个对象不会延伸到第二个对象的右侧吗?可以用来`盒子`,`多边形`,`圆圈`.

`框'(1,1),(0,0)' &< 框'(2,2),(0,0)'`→`吨` | +| *`几何类型`* `&>` *`几何类型`*→`布尔值`

第一个对象不会延伸到第二个对象的左侧吗?可以用来`盒子`,`多边形`,`圆圈`.

`框'(3,3),(0,0)&>框'(2,2),(0,0)'`→`t` | +| *`几何_型`* `<<|` *`几何_型`*→`布尔值`

第一个物体严格低于第二个吗?适用于`指向`,`盒`,`多边形`,`圆圈`.

`方框(3,3)、(0,0)<<方框(5,5)、(3,4)'`→`t` | +| *`几何_型`* `|>>` *`几何_型`*→`布尔值`

第一个物体严格高于第二个吗?适用于`指向`,`盒`, `多边形`, `圆圈`.

`盒子'(5,5),(3,4)|>>盒子'(3,3),(0,0)'` → `t` | +| *`几何_型`* `&<|` *`几何_型`* → `布尔值`

第一个物体不延伸到第二个物体之上吗?适用于`盒`, `多边形`, `圆圈`.

`盒子'(1,1),(0,0)&<|盒子'(2,2),(0,0)'` → `t` | +| *`几何_型`* `|&>` *`几何_型`* → `布尔值`

第一个物体不延伸到第二个物体以下吗?适用于`盒`, `多边形`, `圆圈`.

`框'(3,3),(0,0)|和>框'(2,2),(0,0)'` → `t` | +| `盒` `<^` `盒` → `布尔值`

第一个物体是否低于第二个(允许边缘接触)?

`方框“((1,1)、(0,0))”<^box“((2,2)、(1,1))”` → `t` | +| `盒` `>^` `盒` → `布尔值`

第一个物体是否在第二个物体之上(允许边缘接触)?

`框“((2,2)、(1,1))”>^box“((1,1)、(0,0))”` → `t` | +| *`几何_型`* `?#` *`几何_型`* → `布尔值`

这些物体相交吗?适用于以下几对类型:(`盒`, `盒`), (`lseg`, `盒`), (`lseg`, `lseg`), (`lseg`, `线`), (`线`, `盒`), (`线`, `线`), (`路径`, `路径`).

`lseg'[(-1,0),(1,0)]?#方框'(2,2),(-2,-2)'` → `t` | +| `?-` `线` → `布尔值`

`?-` `lseg` → `布尔值`

这条线是水平的吗?

`?- lseg'[(-1,0),(1,0)]'` → `t` | +| `指向` `?-` `指向` → `布尔值`

点是否水平对齐(即y坐标相同)?

`(1,0)点?-点'(0,0)'` → `t` | +| `?|` `线` → `布尔值`

`?|` `lseg` → `布尔值`

这条线是垂直的吗?

`?| lseg'[(-1,0),(1,0)]'` → `f` | +| `指向` `?|` `指向` → `布尔值`

点是否垂直对齐(即具有相同的x坐标)?

`(0,1)点?|点'(0,0)'` → `t` | +| `线` `?-|` `线` → `布尔值`

`lseg` `?-|` `lseg` → `布尔值`

直线垂直吗?

`lseg'[(0,0),(0,1)]'?-|lseg'[(0,0),(1,0)]'` → `t` | +| `线` `?||` `线` → `布尔值`

`lseg` `?||` `lseg` → `布尔值`

直线平行吗?

`lseg'[(-1,0),(1,0)]?||lseg'[(-1,2),(1,2)]'` → `t` | +| *`几何_型`* `~=` *`几何_型`* → `布尔值`

Are these objects the same? Available for `point`, `box`, `polygon`, `circle`.

`polygon '((0,0),(1,1))' ~= polygon '((1,1),(0,0))'` → `t` |
| [\[a\] ](#FUNCTIONS-GEOMETRY-ROTATION-FN)"Rotating" a box with these operators only moves its corner points: the box is still considered to have sides parallel to the axes. Hence the box's size is not preserved, as a true rotation would do. |

### Caution

Note that the "same as" operator, `~=`, represents the usual notion of equality for the `point`, `box`, `polygon`, and `circle` types. Some of the geometric types also have an `=` operator, but `=` compares for equal *areas* only. The other scalar comparison operators (`<=` and so on), where available for these types, likewise compare areas.

### Note

Before PostgreSQL 14, the point is strictly below/above comparison operators `point` `<<|` `point` and `point` `|>>` `point` were respectively called `<^` and `>^`. These names are still available, but are deprecated and will eventually be removed.

**Table 9.36. Geometric Functions**

| Function

描述

例子 | +| -------------------------- | +| [](<>) `地区` ( *`几何_型`* ) → `双精度`

计算面积。适用于`盒`, `路径`, `圆圈`A.`路径`输入必须关闭,否则返回NULL。此外,如果`路径`是自交的,结果可能毫无意义。

`面积(方框(2,2)、(0,0)’)` → `4.` | +| [](<>) `居中` ( *`几何_型`* ) → `指向`

计算中心点。适用于`盒`, `圆圈`.

`中心(1,2)、(0,0)框)` → `(0.5,1)` | +| [](<>) `斜线的` ( `盒` ) → `lseg`

将长方体的对角线提取为线段(与`lseg(盒子)`).

`对角线(方框’(1,2),(0,0)’)` → `[(1,2),(0,0)]` | +| [](<>) `直径` ( `圆圈` ) → `双精度`

计算圆的直径。

`直径(圆圈“<(0,0),2>”)` → `4.` | +| [](<>) `身高` ( `盒` ) → `双精度`

计算长方体的垂直大小。

`高度(1,2)、(0,0)框)` → `2.` | +| [](<>) `是封闭的` ( `路径` ) → `布尔值`

这条路封闭了吗?

`isclosed(路径“((0,0)、(1,1)、(2,0))”)` → `t` | +| [](<>) `伊索彭` ( `路径` ) → `布尔值`

这条路通吗?

`等参线(路径“[(0,0),(1,1),(2,0)]”` → `t` | +| [](<>) `长` ( *`几何_型`* ) → `双精度`

计算总长度。适用于`lseg`, `路径`.

`长度(路径“(-1,0),(1,0))”` → `4.` | +| [](<>) `n点` ( *`几何_型`* ) → `整数`

返回点数。适用于`路径`, `多边形`.

`n点(路径“[(0,0)、(1,1)、(2,0)]”` → `3.` | +| [](<>) `pclose` ( `路径` ) → `路径`

将路径转换为封闭形式。

`pclose(路径“[(0,0)、(1,1)、(2,0)]”` → `((0,0),(1,1),(2,0))` | +| [](<>) `波本` ( `路径` ) → `路径`

将路径转换为开放形式。

`popen(路径“((0,0)、(1,1)、(2,0))”)` → `[(0,0),(1,1),(2,0)]` | +| [](<>) `半径` ( `圆圈` ) → `双精度`

计算圆的半径。

`半径(圆“<(0,0),2>”)` → `2.` | +| [](<>) `斜坡` ( `指向`, `指向` ) → `双精度`

计算通过两点绘制的直线的坡度。

`坡度(点“(0,0)”,点“(2,1)”)` → `0.5` | +| [](<>) `宽度` ( `盒` ) → `双精度`

计算长方体的水平大小。

`宽度(框’(1,2),(0,0)’)` → `1.` | + +**表9.37。几何类型转换函数** + +| 作用

描述

例子 | +| -------------------------- | +| [](<>) `盒` ( `圆圈` ) → `盒`

计算圆内的内接框。

`框(圈“<(0,0),2>”)` → `(1.414213562373095,1.414213562373095),​(-1.414213562373095,-1.414213562373095)` | +| `盒` ( `指向` ) → `盒`

将点转换为空框。

`方框(1,0点)` → `(1,0),(1,0)` | +| `盒` ( `指向`, `指向` ) → `盒`

将任意两个角点转换为长方体。

`方框(点“(0,1)”,点“(1,0)”)` → `(1,1),(0,0)` | +| `盒` ( `多边形` ) → `盒`

计算多边形的边界框。

`长方体(多边形'((0,0),(1,1),(2,0)))` → `(2,1),(0,0)` | +| [](<>) `装订盒` ( `盒`, `盒` ) → `盒`

计算两个框的边界框。

`装订盒(盒’(1,1),(0,0)’,盒’(4,4),(3,3)’)` → `(4,4),(0,0)` | +| [](<>) `圆圈` ( `盒` ) → `圆圈`

计算包围盒的最小圆。

`圆圈(方框’(1,1)、(0,0)’)` → `<(0.5,0.5),0.7071067811865476>` | +| `圆圈` ( `指向`, `双精度` ) → `圆圈`

从圆心和半径构建圆。

`圆(点“(0,0)”,2.0)` → `<(0,0),2>` | +| `圆圈` ( `多边形` ) → `圆圈`

将多边形转换为圆。圆心是多边形点位置的平均值,半径是多边形点与圆心的平均距离。

`圆(多边形“((0,0)、(1,3)、(2,0))”)` → `<(1,1),1.6094757082487299>` | +| [](<>) `线` ( `指向`, `指向` ) → `线`

将两点转换为穿过它们的直线。

`直线(点'-1,0',点'-1,0')` → `{0,-1,0}` | +| [](<>) `lseg` ( `盒` ) → `lseg`

将长方体的对角线提取为线段。

`lseg(框(1,0),(-1,0)` → `[(1,0),(-1,0)]` | +| `lseg` ( `指向`, `指向` ) → `lseg`

从两个端点构造线段。

`lseg(点'-1,0',点'-1,0')` → `[(-1,0),(1,0)]` | +| [](<>) `路径` ( `多边形` ) → `路径`

将多边形转换为具有相同点列表的闭合路径。

`路径(多边形“((0,0)、(1,1)、(2,0))”)` → `((0,0),(1,1),(2,0))` | +| [](<>) `指向` ( `双精度`, `双精度` ) → `指向`

从其坐标构造点。

`第(23.4,-44.5)点` → `(23.4,-44.5)` | +| `指向` ( `盒` ) → `指向`

计算框的中心。

`点(框'(1,0),(-1,0)')` → `(0,0)` | +| `观点` ( `圆圈` ) → `观点`

计算圆心。

`点(圆圈'<(0,0),2>')` → `(0,0)` | +| `观点` ( `lseg` ) → `观点`

计算线段的中心。

`点(lseg'[(-1,0),(1,0)]')` → `(0,0)` | +| `观点` ( `多边形` ) → `观点`

计算多边形的中心(多边形点位置的平均值)。

`点(多边形'((0,0),(1,1),(2,0))')` → `(1,0.3333333333333333)` | +| [](<>) `多边形`(`盒`) →`多边形`

将长方体转换为4点多边形。

`多边形(框'(1,1),(0,0)')`→`((0,0),(0,1),(1,1),(1,0))` | +| `多边形`(`圆圈`) →`多边形`

将圆转换为12点多边形。

`多边形(圆“<(0,0),2>”)`→`((-2,0),​(-1.7320508075688774,0.9999999999999999),​(-1.0000000000000002,1.7320508075688772),​(-1.2246063538223773e-16,2),​(0.9999999999999996,1.7320508075688774),​(1.732050807568877,1.0000000000000007),​(2,2.4492127076447545e-16),​(1.7320508075688776,-0.9999999999999994),​(1.0000000000000009,-1.7320508075688767),​(3.673819061467132e-16,-2),​(-0.9999999999999987,-1.732050807568878),​(-1.7320508075688767,-1.0000000000000009))` | +| `多边形`(`整数`,`圆圈`) →`多边形`

将圆转换为*`n`*-点多边形。

`多边形(4,圆“<(3,0),1>”)` → `((2,0),​(3,1),​(4,1.2246063538223773e-16),​(3,-1))` | +| `多边形` ( `路径` ) → `多边形`

将闭合路径转换为具有相同点列表的多边形。

`多边形(路径“((0,0)、(1,1)、(2,0))”)` → `((0,0),(1,1),(2,0))` | + +可以访问a的两个部件号`指向`就像点是一个索引为0和1的数组。例如,如果`t、 p`是一个`指向`那么专栏呢`从t中选择p[0]`检索X坐标并`更新t集p[1]=。。。`更改Y坐标。同样,类型为`盒`或`lseg`可以被视为两个数组`指向`价值观 diff --git a/docs/X/functions-json.md b/docs/en/functions-json.md similarity index 100% rename from docs/X/functions-json.md rename to docs/en/functions-json.md diff --git a/docs/en/functions-json.zh.md b/docs/en/functions-json.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..059f7a94c9865682359c6311fe8c13c8eaeca4b4 --- /dev/null +++ b/docs/en/functions-json.zh.md @@ -0,0 +1,335 @@ +## 9.16.JSON函数和运算符 + +[9.16.1. 处理和创建JSON数据](functions-json.html#FUNCTIONS-JSON-PROCESSING) + +[9.16.2. SQL/JSON路径语言](functions-json.html#FUNCTIONS-SQLJSON-PATH) + +[](<>) + +本节介绍: + +- 用于处理和创建JSON数据的函数和运算符 + +- SQL/JSON路径语言 + + 要了解有关SQL/JSON标准的更多信息,请参阅[\[sqltr-19075-6\]](biblio.html#SQLTR-19075-6)。有关PostgreSQL支持的JSON类型的详细信息,请参阅[第8.14节](datatype-json.html). + +### 9.16.1.处理和创建JSON数据 + +[表9.44](functions-json.html#FUNCTIONS-JSON-OP-TABLE)显示可用于JSON数据类型的运算符(请参阅[第8.14节](datatype-json.html))。此外,中所示的常用比较运算符[表9.1](functions-comparison.html#FUNCTIONS-COMPARISON-OP-TABLE)可供`jsonb`,但不是为了`json`.比较运算符遵循中概述的B树操作的排序规则[第8.14.4节](datatype-json.html#JSON-INDEXING). + +**表9.44。`json`和`jsonb`操作员** + +| 操作人员
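A brief sketch of that subscripting in action; the table `t` here is a hypothetical example, not anything predefined:

```
-- Hypothetical table with a point column
CREATE TABLE t (p point);
INSERT INTO t VALUES (point '(1.5, 2.5)');

SELECT p[0] AS x, p[1] AS y FROM t;  -- returns 1.5 and 2.5
UPDATE t SET p[1] = 9.0;             -- changes only the Y coordinate
```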

diff --git a/docs/X/functions-json.md b/docs/en/functions-json.md similarity index 100% rename from docs/X/functions-json.md rename to docs/en/functions-json.md diff --git a/docs/en/functions-json.zh.md b/docs/en/functions-json.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..059f7a94c9865682359c6311fe8c13c8eaeca4b4 --- /dev/null +++ b/docs/en/functions-json.zh.md @@ -0,0 +1,335 @@

## 9.16. JSON Functions and Operators

[9.16.1. Processing and Creating JSON Data](functions-json.html#FUNCTIONS-JSON-PROCESSING)

[9.16.2. The SQL/JSON Path Language](functions-json.html#FUNCTIONS-SQLJSON-PATH)

[](<>)

This section describes:

- functions and operators for processing and creating JSON data

- the SQL/JSON path language

To learn more about the SQL/JSON standard, see [[sqltr-19075-6]](biblio.html#SQLTR-19075-6). For details on JSON types supported in PostgreSQL, see [Section 8.14](datatype-json.html).

### 9.16.1. Processing and Creating JSON Data

[Table 9.44](functions-json.html#FUNCTIONS-JSON-OP-TABLE) shows the operators that are available for use with JSON data types (see [Section 8.14](datatype-json.html)). In addition, the usual comparison operators shown in [Table 9.1](functions-comparison.html#FUNCTIONS-COMPARISON-OP-TABLE) are available for `jsonb`, though not for `json`. The comparison operators follow the ordering rules for B-tree operations outlined in [Section 8.14.4](datatype-json.html#JSON-INDEXING).

**Table 9.44. `json` and `jsonb` Operators**

| Operator

Description

Example |
| ---------------------------- |
| `json` `->` `integer` → `json`

`jsonb` `->` `整数` → `jsonb`

摘录*`n`*JSON数组的第个元素(数组元素从零开始索引,但负整数从末尾开始计数)。

`“[{a:“foo”},{b:“bar”},{c:“baz”}]::json` → `{“c”:“baz”}`

`“[{a:“foo”},{b:“bar”},{c:“baz”}]::json->-3` → `{“a”:“foo”}` | +| `杰森` `->` `文本` → `杰森`

`jsonb` `->` `文本` → `jsonb`

使用给定的键提取JSON对象字段。

`“{a:{b:“foo”}}”::json->“a”` → `{“b”:“foo”}` | +| `json` `->>` `整数` → `文本`

`jsonb` `->>` `整数` → `文本`

摘录*`n`*'JSON数组的第个元素,如`文本`.

`[1,2,3]::json->2` → `3.` | +| `json` `->>` `文本` → `文本`

`jsonb` `->>` `文本` → `文本`

使用给定的键提取JSON对象字段,如下所示:`文本`.

`“{”a:1,“b:2}”::json->“b”` → `2.` | +| `json` `#>` `文本[]` → `json`

`jsonb` `#>` `文本[]` → `jsonb`

在指定路径提取JSON子对象,其中路径元素可以是字段键或数组索引。

`“{a:{b:[“foo”,“bar”]}}”:` → `“酒吧”` | +| `json` `#>>` `文本[]` → `文本`

`jsonb` `#>>` `文本[]` → `文本`

在指定路径提取JSON子对象,如下所示:`文本`.

`“{a:{b:[“foo”,“bar”]}}”::json#>>“{a,b,1}”` → `酒吧` | + +### 笔记 + +如果JSON输入没有与请求匹配的正确结构,字段/元素/路径提取运算符将返回NULL,而不是失败;例如,如果不存在这样的键或数组元素。 + +还有一些运营商只是为了`jsonb`,如中所示[表9.45](functions-json.html#FUNCTIONS-JSONB-OP-TABLE). [第8.14.4节](datatype-json.html#JSON-INDEXING)描述如何使用这些运算符有效搜索索引`jsonb`数据 + +**表9.45。额外的`jsonb`操作员** + +| 操作人员

描述

例子 | +| ---------------------------- | +| `jsonb` `@>` `jsonb` → `布尔值`

第一个JSON值是否包含第二个JSON值?(见[第8.14.3节](datatype-json.html#JSON-CONTAINMENT)有关遏制的详细信息。)

`“{”a:1,“b:2}”::jsonb@>“{”b:2}”::jsonb` → `t` | +| `jsonb` `<@` `jsonb` → `布尔值`

第一个JSON值是否包含在第二个JSON值中?

`“{b:2}”:jsonb<@{a:1,“b:2}”:jsonb` → `t` | +| `jsonb` `?` `文本` → `布尔值`

文本字符串在JSON值中是否作为顶级键或数组元素存在?

`“{a:1,“b:2}”::jsonb?”b'` → `t`

`“[“a”、“b”、“c”]::jsonb?”b'` → `t` | +| `jsonb` `?|` `文本[]` → `布尔值`

文本数组中是否有任何字符串作为顶级键或数组元素存在?

`“{a:1,b:2,c:3}”:数组['b','d']` → `t` | +| `jsonb` `?&` `文本[]` → `布尔值`

文本数组中的所有字符串都作为顶级键或数组元素存在吗?

`“[“a”、“b”、“c”]::jsonb?&数组['a','b']` → `t` | +| `jsonb` `||` `jsonb` → `jsonb`

连接两个`jsonb`价值观连接两个数组会生成一个包含每个输入的所有元素的数组。连接两个对象将生成一个包含其关键帧并集的对象,当存在重复关键帧时,将获取第二个对象的值。所有其他情况都是通过将非数组输入转换为单个元素数组来处理的,然后继续处理两个数组。不递归操作:只合并顶级数组或对象结构。

`“[“a”,“b”]::jsonb | |”[“a”,“d”]::jsonb` → `[“a”、“b”、“a”、“d”]`

`“{a:“b”}::jsonb | |”{c:“d”}::jsonb` → `{a:“b”,“c:“d”}`

`[1,2]::jsonb |'3'::jsonb` → `[1, 2, 3]`

`{a:“b”}::jsonb |'42'::jsonb` → `[{“a”:“b”},42]`

要将一个数组作为单个条目附加到另一个数组中,请将其包装到另一个数组层中,例如:

`[1,2]::jsonb | | jsonb_build_array(“[3,4]”::jsonb)` → `[1, 2, [3, 4]]` | +| `jsonb` `-对。` `文本` → `jsonb`

从JSON对象中删除键(及其值),或从JSON数组中删除匹配的字符串值。

`{“a”:“b”,“c”:“d”}'::jsonb` → `{“c”:“d”}`

`“[“a”、“b”、“c”、“b”]::jsonb` → `[“a”,“c”]` | +| `jsonb` `-对。` `文本[]` → `jsonb`

从左操作数中删除所有匹配的键或数组元素。

`“{a:“b”,“c:“d”}”::jsonb j'{a,c}'::text[]` → `{}` | +| `jsonb` `-` `整数` → `jsonb`

删除具有指定索引的数组元素(负整数从末尾开始计数)。如果JSON值不是数组,则引发错误。

`“[“a”,“b”]::jsonb-1` → `[“a”]` | +| `jsonb` `#-` `文本[]` → `jsonb`

删除指定路径上的字段或数组元素,其中路径元素可以是字段键或数组索引。

`“[“a”,“b”:1}]::jsonb#-“{1,b}”` → `[“a”,{}]` | +| `jsonb` `@?` `jsonpath` → `布尔值`

JSON路径是否为指定的JSON值返回任何项?

`“{a:[1,2,3,4,5]}::jsonb@?”$。a[*]?(@ > 2)'` → `t` | +| `jsonb` `@@` `jsonpath` → `布尔值`

Returns the result of a JSON path predicate check for the specified JSON value. Only the first item of the result is taken into account. If the result is not Boolean, then `NULL` is returned.

`'{"a":[1,2,3,4,5]}'::jsonb @@ '$.a[*] > 2'` → `t` |

### Note

The `jsonpath` operators `@?` and `@@` suppress the following errors: missing object field or array element, unexpected JSON item type, datetime and numeric errors. The `jsonpath`-related functions described below can also be told to suppress these types of errors. This behavior might be helpful when searching JSON document collections of varying structure.
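For instance, a typical search with `@?` might look like the following sketch; the table `orders` and column `jdoc` are hypothetical placeholders:

```
-- Find rows whose JSON document has any "a" array element greater than 2;
-- with a GIN index on jdoc, this operator can use the index.
SELECT jdoc
FROM orders
WHERE jdoc @? '$.a[*] ? (@ > 2)';
```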

[Table 9.46](functions-json.html#FUNCTIONS-JSON-CREATION-TABLE) shows the functions that are available for constructing `json` and `jsonb` values.

**Table 9.46. JSON Creation Functions**

| Function

Description

Example |
| -------------------------- |
| [](<>) `to_json` ( `anyelement` ) → `json`

[](<>) `to_jsonb` ( `anyelement` ) → `jsonb`

Converts any SQL value to `json` or `jsonb`. Arrays and composites are converted recursively to arrays and objects (multidimensional arrays become arrays of arrays in JSON). Otherwise, if there is a cast from the SQL data type to `json`, the cast function will be used to perform the conversion;[\[a\]](#ftn.id-1.5.8.22.5.9.2.2.1.1.3.4) otherwise, a scalar JSON value is produced. For any scalar other than a number, a Boolean, or a null value, the text representation will be used, with escaping as necessary to make it a valid JSON string value.

`to_json('Fred said "Hi."'::text)` → `"Fred said \"Hi.\""`

`to_jsonb(row(42, 'Fred said "Hi."'::text))` → `{"f1": 42, "f2": "Fred said \"Hi.\""}` |
| [](<>) `array_to_json` ( `anyarray` [, `boolean` ] ) → `json`

将SQL数组转换为JSON数组。行为与`给_json`但是,如果可选的布尔参数为true,则会在顶级数组元素之间添加换行符。

`数组到json` ( `{1,5},{99100}::int[]` ) → `[[1,5],[99,100]]` | +| [](<>) `第_行至第_行` ( `记录` [, `布尔值` ] ) → `json`

将SQL复合值转换为JSON对象。行为与`给_json`但是,如果可选布尔参数为true,则将在顶级元素之间添加换行符。

`第_行到第json行(第(1行,'foo'))` → `{“f1”:1,“f2”:“foo”}` | +| [](<>) `json_构建_数组` ( `可变的` `“任何”` ) → `json`

[](<>) `jsonb_构建_阵列` ( `可变的` `“任何”` ) → `jsonb`

从可变参数列表中构建一个可能是异构类型的JSON数组。每个参数都按照`给_json`或`给_jsonb`.

`json_构建_数组(1,2,'foo',4,5)` → `[1,2,“foo”,4,5]` | +| [](<>) `json_build_对象` ( `可变的` `“任何”` ) → `json`

[](<>) `jsonb_构建_对象` ( `可变的` `“任何”` ) → `jsonb`

从可变参数列表中构建JSON对象。按照惯例,参数列表由交替的键和值组成。关键论点被强制转换成文本;值参数按照`给_json`或`给_jsonb`.

`json_build_对象('foo',1,2,row(3,'bar'))` → `{“foo”:1,2:{“f1”:3,f2:“bar”}` | +| [](<>) `json_对象` ( `文本[]` ) → `json`

[](<>) `jsonb_对象` ( `文本[]` ) → `jsonb`

从文本数组中构建JSON对象。数组必须只有一个维度的成员数为偶数,在这种情况下,它们被视为交替的键/值对,或者只有两个维度的成员数为偶数,这样每个内部数组就只有两个元素,它们被视为键/值对。所有值都转换为JSON字符串。

`json_对象(“{a,1,b,“def”,c,3.5}”)` → `{“a”:“1”,“b”:“def”,“c”:“3.5”}`

`json_对象(“{a,1},{b,def},{c,3.5}”)` → `{“a”:“1”,“b”:“def”,“c”:“3.5”}` | +| `json_对象` ( *`钥匙`* `文本[]`, *`价值观`* `文本[]` ) → `json`

`jsonb_对象` ( *`钥匙`* `文本[]`, *`价值观`* `文本[]` ) → `jsonb`

这种形式的`json_对象`从单独的文本数组中成对获取键和值。否则,它与单参数形式相同。

`json_对象({a,b},{1,2})` → `{“a”:“1”,“b”:“2”}` | +| [\[a\] ](#id-1.5.8.22.5.9.2.2.1.1.3.4)例如[商店](hstore.html)extension的演员阵容来自`商店`到`json`因此`商店`通过JSON创建函数转换的值将表示为JSON对象,而不是原始字符串值。 | + +[表9.47](functions-json.html#FUNCTIONS-JSON-PROCESSING-TABLE)显示可用于处理的函数`json`和`jsonb`价值观 + +**表9.47。JSON处理函数** + +| 作用

描述

例子 | | | | | | | | | +| -------------------------- | --- | --- | --- | --- | --- | --- | --- | --- | +| [](<>) `json_数组_元素` ( `json` ) → `json集合`

[](<>) `jsonb_数组_元素` ( `jsonb` ) → `jsonb集合`

将顶级JSON数组扩展为一组JSON值。

`从json_数组_元素中选择*` → ``

`

------------
1

[2,假]

` | | | | | | | | | +| [](<>) `json_数组_元素_文本` ( `json` ) → `文本集`

[](<>) `jsonb_数组_元素_文本` ( `jsonb` ) → `文本集`

将顶级JSON数组扩展为一组`文本`价值观

`从json_数组_元素_文本(“[“foo”,“bar”]”)中选择*` → ``

`
value
------------
foo
bar

` | | | | | | | | | +| [](<>) `json_数组_长度` ( `json` ) → `整数`

[](<>) `jsonb_数组_长度` ( `jsonb` ) → `整数`

返回顶级JSON数组中的元素数。

`json_数组_长度('[1,2,3,{f1:1,f2:[5,6]},4]'))` → `5.` | | | | | | | | | +| [](<>) `每个人` ( `json` ) → `一套记录` ( *`钥匙`* `文本`, *`价值`* `json` )

[](<>) `各就各位` ( `jsonb` ) → `一套记录` ( *`钥匙`* `文本`, *`价值`* `jsonb` )

将顶级JSON对象扩展为一组键/值对。

`分别从json_中选择*(“{”a:“foo”,“b:“bar”}”)` → ``

\```
钥匙 | 价值
-----+-------
A. | “福”
b | “酒吧”

\``` | | | | | | +| [](<>) `json_每个_文本` ( `json` ) → `一套记录` ( *`钥匙`* `文本`(笑声)*`价值`* `文本` )

[](<>) `jsonb_每个_文本`(咯咯笑)`jsonb`→`一套记录`(咯咯笑)*`钥匙`* `文本`(笑声)*`价值`* `文本` )

将顶级JSON对象扩展为一组键/值对。孩子们回来了*`价值`*这将是一种`文本`.

`(从json_中)选择每个_文本(“{”a:“foo”,“b:“bar”}”)` → ``

"``
钥匙 | 价值
---+-----
和 | 福
b | 酒吧

"`` | | | | | | +| [](<>) `json_提取_路径`(咯咯笑)*`来自_json`* `杰森`(笑声)`可变的` *`路径元素`* `文本[]`→`杰森`

[](<>) `jsonb_提取_路径`(咯咯笑)*`来自_json`* `jsonb`(笑声)`可变的` *`路径元素`* `文本[]`→`jsonb`

在指定路径提取JSON子对象。(这在功能上等同于`#>`运算符,但在某些情况下,将路径输出写入变量列表可能更方便。)

`json_extract_path(“{”f2“:{”f3“:1}”,f4“:{”f5“:99,“f6“:“foo”},“,”f4“,”f6“)` → `“福”` | | | | | | | | | +| [](<>) `json_提取_路径_文本`(咯咯笑)*`来自_json`* `杰森`(笑声)`可变的` *`路径元素`* `文本[]`→`文本`

[](<>) `jsonb_提取路径_文本`(咯咯笑)*`来自_json`* `jsonb`(笑声)`可变的` *`路径元素`* `文本[]`→`文本`

在指定路径提取JSON子对象,如下所示:`文本`(这在功能上等同于`#>>`接线员。)

`json_extract_path_text(“{”f2“:{”f3:1},“f4“:{”f5:99,“f6“:“foo”},“,”f4“,”f6“)` → `福` | | | | | | | | | +| [](<>) `json_对象_键`(咯咯笑)`杰森`→`文本集`

[](<>) `jsonb_对象_键`(咯咯笑)`jsonb`→`文本集`

返回顶级JSON对象中的键集。

`从json_对象_键(“{”f1:“abc”,“f2:{”f3:“a”,“f4:“b”}”)中选择` → ``

`
json_object_keys
---------------------------f1
f2

` | | | | | | | | | +| [](<>) `人口记录`(咯咯笑)*`基础`* `任何元素`(笑声)*`来自_json`* `杰森`→`任何元素`

[](<>) `jsonb_人口记录`(咯咯笑)*`基础`* `任何元素`(笑声)*`来自_json`* `jsonb`→`任何元素`

将顶级JSON对象扩展为具有*`基础`*论点JSON对象将被扫描,以查找名称与输出行类型的列名匹配的字段,并将其值插入到输出的这些列中。(与任何输出列名不对应的字段将被忽略。)在典型的使用中*`基础`*只是`无效的`,这意味着任何与任何对象字段不匹配的输出列都将被空值填充。然而,如果*`基础`*不是吗`无效的`然后,它包含的值将用于不匹配的列。

要将JSON值转换为输出列的SQL类型,请按顺序应用以下规则:

*JSON空值在所有情况下都会转换为SQL空值。

*如果输出列的类型为`json`或`jsonb`,JSON值正好被复制。

*如果输出列是复合(行)类型,而JSON值是JSON对象,则通过递归应用这些规则,将对象的字段转换为输出行类型的列。

*同样,如果输出列是数组类型,JSON值是JSON数组,则通过递归应用这些规则,将JSON数组的元素转换为输出数组的元素。

*否则,如果JSON值是一个字符串,则该字符串的内容将提供给列数据类型的输入转换函数。

*否则,JSON值的普通文本表示形式将提供给列数据类型的输入转换函数。

虽然下面的示例使用一个常量JSON值,但典型的用法是引用`json`或`jsonb`从查询的`从…起`条款写`json_填充_记录`在`从…起`子句是一种很好的做法,因为所有提取的列都可以使用,而无需重复的函数调用。

`创建类型子类型为(d int,e text);` `创建类型myrowtype为(a int,b text[],c subrowtype);`

`从json填充记录中选择*` → ``

\```
A. | b | c
---+-----------+-------------
1. | {2,“a b”} | (4,“a b c”)

\``` | | | | | +| [](<>) `json_填充_记录集` ( *`基础`* `任何元素`, *`来自_json`* `json` ) → `任意元素集`

[](<>) `jsonb_填充_记录集` ( *`基础`* `任何元素`, *`来自_json`* `jsonb` ) → `任意元素集`

将对象的顶级JSON数组扩展为一组具有*`基础`*论点JSON数组中的每个元素都按照上述步骤进行处理`json[b]_填充_记录`.

`将类型twoint创建为(a int,b int);`

`从json_populate_记录集(null::twoints,[{a:1,b:2},{a:3,b:4}])中选择*` → ``

\```
A. | b
---+---
1. | 2.
3. | 4.

\``` | | | | | | +| [](<>) `json_to_记录` ( `json` ) → `记录`

[](<>) `jsonb_to_记录` ( `jsonb` ) → `记录`

将顶级JSON对象扩展到具有由`像`条款(与所有函数一样`记录`,调用查询必须使用`像`条款。)输出记录是从JSON对象的字段中填充的,其填充方式与前面为`json[b]_填充_记录`。由于没有输入记录值,因此不匹配的列总是用空值填充。

`创建类型myrowtype为(a int,b text);`

`选择*从json_到_记录('{a:1,b:[1,2,3],“c:[1,2,3],“e:[1,2,3],“e:[1,2,3],“bar”,“r:{a:123,b:[ABC”}}])作为x(a int,b text,c int[],d text,r myrowtype)` → ``

\```
A. | b | c | d | r
---+---------+---------+---+---------------
1. | [1,2,3] | {1,2,3} | | (123,“a b c”)

\``` | +| [](<>) `json_到_记录集` ( `json` ) → `一套记录`

[](<>) `jsonb_to_记录集`(`jsonb`) →`setof record`

Expands the top-level JSON array of objects to a set of rows having the composite type defined by an`AS`clause. (As with all functions returning`record`, the calling query must explicitly define the structure of the record with an`AS`clause.) Each element of the JSON array is processed as described above for`json[b]_populate_record`.

`select * from json_to_recordset('[{"a":1,"b":"foo"}, {"a":"2","c":"bar"}]') as x(a int, b text)`→``

\```
a | b
---+-----
1 | foo
2 |

\``` | | | | | | +| [](<>) `jsonb_set`(*`target`* `jsonb`,*`path`* `text[]`, *`新价值`* `jsonb` [, *`如果缺少,请创建_ _布尔值`* `] ) → `jsonb`

退换商品*`目标`*与指定的项目*`路径`*取而代之的*`新价值`*,或与*`新价值`*添加如果*`如果缺少,请创建`*为true(这是默认值),并且由*`路径`*不存在。路径中的所有早期步骤都必须存在,或者*`目标`*返回时保持不变。与面向路径的运算符一样,出现在*`路径`*从JSON数组的末尾开始计数。如果最后一个路径步骤是超出范围的数组索引,并且*`如果缺少,请创建`*如果为true,则如果索引为负,则在数组开头添加新值,如果索引为正,则在数组末尾添加新值。

`jsonb_集(“[{f1:1,f2:null},2,null,3]”,“{0,f1},“[2,3,4]”,false)` → `[{“f1”:[2,3,4],“f2”:null},2,null,3]`

`jsonb_set('[{"f1":1,"f2":null},2]', '{0,f3}', '[2,3,4]')`→`[{"f1": 1, "f2": null, "f3": [2, 3, 4]}, 2]` | | | | | | | | | +| [](<>) `jsonb_set_lax`(*`目标`* `jsonb`,*`小路`* `文本[]`,*`新值`* `jsonb`\[,*`create_if_missing`* `布尔值` [,null*`value_treatment_ `*文本` `]] ) →`jsonb`

如果*`新值`*不是`空值`, 行为与`jsonb_set`.否则按照*`null_value_treatment`*必须是其中之一`'raise_exception'`,`'use_json_null'`,`'删除键'`, 要么`'return_target'`.默认是`'use_json_null'`.

`jsonb_set_lax('[{"f1":1,"f2":null},2,null,3]', '{0,f1}', null)`→`[{"f1":null,"f2":null},2,null,3]`

`jsonb_set_lax('[{"f1":99,"f2":null},2]', '{0,f3}', null, true, 'return_target')`→`[{“f1”:99,“f2”:空},2]` | | | | | | | | | +| [](<>) `jsonb_insert`(*`目标`* `jsonb`,*`小路`* `文本[]`,*`新值`* `jsonb` [,插入后*` _布尔值`* `]) →`jsonb`

退货*`目标`*和*`新值`*插入。如果指定的项目*`小路`*是一个数组元素,*`新值`*将插入该项目之前,如果*`插入后`*为假(这是默认值),或者在它之后如果*`插入后`*是真的。如果指定的项目*`小路`*是一个对象场,*`新值`*仅当对象尚未包含该键时才会插入。路径中的所有早期步骤都必须存在,否则*`目标`*原样返回。与面向路径的运算符一样,出现在*`小路`*从 JSON 数组的末尾开始计数。如果最后一个路径步骤是超出范围的数组索引,则如果索引为负,则将新值添加到数组的开头,如果索引为正,则将新值添加到数组的末尾。

`jsonb_insert('{"a": [0,1,2]}', '{a, 1}', '"new_value"')`→`{"a": [0, "new_value", 1, 2]}`

`jsonb_insert('{"a": [0,1,2]}', '{a, 1}', '"new_value"', true)`→`{"a": [0, 1, "new_value", 2]}` | | | | | | | | | +| [](<>) `json_strip_nulls`(`json`) →`json`

[](<>) `jsonb_strip_nulls`(`jsonb`) →`jsonb`

从给定的 JSON 值中递归删除所有具有空值的对象字段。不是对象字段的空值保持不变。

`json_strip_nulls('[{"f1":1, "f2":null}, 2, null, 3]')`→`[{"f1":1},2,null,3]` | | | | | | | | | +| [](<>) `jsonb_path_exists`(*`目标`* `jsonb`,*`小路`* `json路径`\[,*`变量`* `jsonb` [,沉默的*` `*布尔值` `]] ) →`布尔值`

检查 JSON 路径是否返回指定 JSON 值的任何项目。如果*`变量`*参数被指定,它必须是一个 JSON 对象,并且它的字段提供命名值以替换到`json路径`表达。如果*`无声`*参数被指定并且是`真的`,该函数抑制与`@?`和`@@`运营商做。

`jsonb_path_exists('{"a":[1,2,3,4,5]}', '$.a[*] ? (@ >= $min && @ <= $max)', '{"min":2,“最大”:4}')`→`吨` | | | | | | | | | +| [](<>) `jsonb_path_match`(*`目标`* `jsonb`,*`小路`* `json路径`\[,*`变量`* `jsonb` [,无声*` `*布尔值` `]] ) →`布尔值`

返回指定 JSON 值的 JSON 路径谓词检查的结果。只考虑结果的第一项。如果结果不是布尔值,则`空值`被退回。可选的*`变量`*和*`无声`*参数的作用与 for 相同`jsonb_path_exists`.

`jsonb_path_match('{"a":[1,2,3,4,5]}', 'exists($.a[*] ? (@ >= $min && @ <= $max))', '{“最小”:2,“最大”:4}')`→`吨` | | | | | | | | | +| [](<>) `jsonb_path_query`(*`目标`* `jsonb`,*`小路`* `json路径`\[,*`变量`* `jsonb` [,无声*` `*布尔值` `]] ) →`jsonb 集合`

返回指定 JSON 值的 JSON 路径返回的所有 JSON 项。可选的*`变量`*和*`无声`*参数的作用与 for 相同`jsonb_path_exists`.

`select * from jsonb_path_query('{"a":[1,2,3,4,5]}', '$.a[*] ? (@ >= $min && @ <= $max)', '{“最小”:2,“最大”:4}')`→``

`
jsonb_path_query
------------------
2
3
4

` | | | | | | | | | +| [](<>) `jsonb_path_query_array`(*`目标`* `jsonb`,*`小路`* `json路径`\[,*`变量`* `jsonb` [,沉默的*` `*布尔值` `]] ) →`jsonb`

以 JSON 数组的形式返回指定 JSON 值的 JSON 路径返回的所有 JSON 项。可选的*`变量`*和*`沉默的`*参数的作用与 for 相同`jsonb_path_exists`.

`jsonb_path_query_array('{"a":[1,2,3,4,5]}', '$.a[*] ? (@ >= $min && @ <= $max)', '{"min":2,“最大”:4}')`→`[2, 3, 4]` | | | | | | | | | +| [](<>) `jsonb_path_query_first`(*`目标`* `jsonb`,*`小路`* `json路径`\[,*`变量`* `jsonb` [,无声*` `*布尔值` `]] ) →`jsonb`

返回指定 JSON 值的 JSON 路径返回的第一个 JSON 项。退货`空值`如果没有结果。可选的*`变量`*和*`无声`*参数的作用与 for 相同`jsonb_path_exists`.

`jsonb_path_query_first('{"a":[1,2,3,4,5]}', '$.a[*] ? (@ >= $min && @ <= $max)', '{"min":2,“最大”:4}')`→`2` | | | | | | | | | +| [](<>) `jsonb_path_exists_tz`(*`目标`* `jsonb`, *`小路`* `json路径`\[,*`变量`* `jsonb` [, *`无声`* `布尔值` ]] ) → `布尔值`

[](<>) `jsonb_path_match_tz` ( *`目标`* `jsonb`, *`小路`* `json路径`\[,*`变量`* `jsonb` [, *`无声`* `布尔值` ]] ) → `布尔值`

[](<>) `jsonb_path_query_tz` ( *`目标`* `jsonb`, *`小路`* `json路径`\[,*`变量`* `jsonb` [, *`无声`* `布尔值` ]] ) → `jsonb 集合`

[](<>) `jsonb_path_query_array_tz` ( *`目标`* `jsonb`, *`小路`* `json路径`\[,*`变量`* `jsonb` [, *`无声`* `布尔值` ]] ) → `jsonb`

[](<>) `jsonb_path_query_first_tz` ( *`目标`* `jsonb`(笑声)*`路径`* `jsonpath`\[,*`瓦尔斯`* `jsonb` [(笑声),沉默的*` `*布尔值` `]→`jsonb`

这些函数与上面描述的对应函数一样,没有`_tz`后缀,但这些函数支持需要时区识别转换的日期/时间值的比较。下面的示例需要解释仅日期值`2015-08-02`作为带有时区的时间戳,因此结果取决于当前[时区](runtime-config-client.html#GUC-TIMEZONE)背景。由于这种依赖关系,这些函数被标记为稳定的,这意味着这些函数不能在索引中使用。它们的对应项是不可变的,因此可以在索引中使用;但如果被要求进行这样的比较,他们会出错。

`jsonb_路径_存在(“[”2015-08-01 12:00:00-05“]),“$[*]?(@.datetime()<”2015-08-02)。datetime())`→`t` | | | | | | | | | +| [](<>) `杰森布`(咯咯笑)`jsonb`→`文本`

将给定的JSON值转换为打印精美的缩进文本。

`jsonb_pretty(“[{”f1:1,“f2:null},2]”)`→``

`
[
{
"f1": 1,
"f2": null
},
2
]

` | | | | | | | | | +| [](<>) `json_typeof`(`json`) →`text`

[](<>) `jsonb_typeof`(`jsonb`) →`text`

Returns the type of the top-level JSON value as a text string. Possible types are`object`,`array`,`string`,`number`,`boolean`, and`null`. (The`null`result should not be confused with an SQL NULL; see the examples.)

`json_typeof('-123.4')`→`number`

`json_typeof('null'::json)` → `无效的`

`json_typeof(NULL::json) IS NULL` → `t` | | | | | | | | |

See also [Section 9.21](functions-aggregate.html) for the aggregate function `json_agg` which aggregates record values as JSON, the aggregate function `json_object_agg` which aggregates pairs of values into a JSON object, and their `jsonb` equivalents, `jsonb_agg` and `jsonb_object_agg`.

### 9.16.2. The SQL/JSON Path Language

[](<>)

SQL/JSON path expressions specify the items to be retrieved from JSON data, similar to XPath expressions used for SQL access to XML. In PostgreSQL, path expressions are implemented as the `jsonpath` data type and can use any elements described in [Section 8.14.7](datatype-json.html#DATATYPE-JSONPATH).

JSON query functions and operators pass the provided path expression to the *path engine* for evaluation. If the expression matches the queried JSON data, the corresponding JSON item, or set of items, is returned. Path expressions are written in the SQL/JSON path language and can include arithmetic expressions and functions.

A path expression consists of a sequence of elements allowed by the `jsonpath` data type. The path expression is normally evaluated from left to right, but you can use parentheses to change the order of operations. If the evaluation is successful, a sequence of JSON items is produced, and the evaluation result is returned to the JSON query function that completes the specified computation.

To refer to the JSON value being queried (the *context item*), use the `$` variable in the path expression. It can be followed by one or more [accessor operators](datatype-json.html#TYPE-JSONPATH-ACCESSORS), which go down the JSON structure level by level to retrieve sub-items of the context item. Each operator that follows deals with the result of the previous evaluation step.

For example, suppose you have some JSON data from a GPS tracker that you would like to parse, such as:

```
{
  "track": {
    "segments": [
      {
        "location": [ 47.763, 13.4034 ],
        "start time": "2018-10-14 10:05:14",
        "HR": 73
      },
      {
        "location": [ 47.706, 13.2635 ],
        "start time": "2018-10-14 10:39:21",
        "HR": 135
      }
    ]
  }
}
```

To retrieve the available track segments, you need to use the `.`*`key`* accessor operator to descend through surrounding JSON objects:

```
$.track.segments
```

To retrieve the contents of an array, you typically use the `[*]` operator. For example, the following path will return the location coordinates for all the available track segments:

```
$.track.segments[*].location
```

To return the coordinates of the first segment only, you can specify the corresponding subscript in the `[]` accessor operator. Recall that JSON array indexes are 0-relative:

```
$.track.segments[0].location
```

The result of each path evaluation step can be processed by one or more `jsonpath` operators and methods listed in [Section 9.16.2.2](functions-json.html#FUNCTIONS-SQLJSON-PATH-OPERATORS). Each method name must be preceded by a dot. For example, you can get the size of an array:

```
$.track.segments.size()
```

More examples of using `jsonpath` operators and methods within path expressions appear below in [Section 9.16.2.2](functions-json.html#FUNCTIONS-SQLJSON-PATH-OPERATORS).

When defining a path, you can also use one or more *filter expressions* that work similarly to the `WHERE` clause in SQL. A filter expression begins with a question mark and provides a condition in parentheses:

```
? (condition)
```

Filter expressions must be written just after the path evaluation step to which they should apply. The result of that step is filtered to include only those items that satisfy the provided condition. SQL/JSON defines three-valued logic, so the condition can be `true`, `false`, or `unknown`. The `unknown` value plays the same role as SQL `NULL` and can be tested for with the `is unknown` predicate. Further path evaluation steps use only those items for which the filter expression returned `true`.

The functions and operators that can be used in filter expressions are listed in [Table 9.49](functions-json.html#FUNCTIONS-SQLJSON-FILTER-EX-TABLE). Within a filter expression, the `@` variable denotes the value being filtered (i.e., one result of the preceding path step). Accessor operators can be written after `@` to retrieve component items.

For example, suppose you would like to retrieve all heart rate values higher than 130. You can achieve this using the following expression:

```
$.track.segments[*].HR ? (@ > 130)
```

To get the start times of segments with such values, you have to filter out irrelevant segments before returning the start times, so the filter expression is applied to the previous step, and the path used in the condition is different:

```
$.track.segments[*] ? (@.HR > 130)."start time"
```

You can use several filter expressions in sequence, if required. For example, the following expression selects start times of all segments that contain locations with relevant coordinates and high heart rate values:

```
$.track.segments[*] ? (@.location[1] < 13.4) ? (@.HR > 130)."start time"
```

Using filter expressions at different nesting levels is also allowed. The following example first filters all segments by location, and then returns high heart rate values for these segments, if available:

```
$.track.segments[*] ? (@.location[1] < 13.4).HR ? (@ > 130)
```

You can also nest filter expressions within each other:

```
$.track ? (exists(@.segments[*] ? (@.HR > 130))).segments.size()
```

This expression returns the size of the track if it contains any segments with high heart rate values, or an empty sequence otherwise.

PostgreSQL's implementation of the SQL/JSON path language has the following deviations from the SQL/JSON standard:

- A path expression can be a Boolean predicate, although the SQL/JSON standard allows predicates only in filters. This is necessary for implementation of the `@@` operator. For example, the following `jsonpath` expression is valid in PostgreSQL:

  ```
  $.track.segments[*].HR < 70
  ```

- There are minor differences in the interpretation of regular expression patterns used in `like_regex` filters, as described in [Section 9.16.2.3](functions-json.html#JSONPATH-REGULAR-EXPRESSIONS).
+ +#### 9.16.2.1.严与松 + +查询JSON数据时,路径表达式可能与实际的JSON数据结构不匹配。试图访问对象或数组元素中不存在的成员会导致结构错误。SQL/JSON路径表达式有两种处理结构错误的模式: + +- lax(默认)-路径引擎隐式地将查询的数据调整到指定的路径。任何剩余的结构错误都将被抑制并转换为空的SQL/JSON序列。 + +- 严格-如果发生结构错误,则会引发错误。 + + 如果JSON数据不符合预期的模式,lax模式有助于JSON文档结构和路径表达式的匹配。如果操作数与特定操作的要求不匹配,则可以在执行此操作之前将其自动包装为SQL/JSON数组,或通过将其元素转换为SQL/JSON序列将其展开。此外,比较运算符会在lax模式下自动展开其操作数,因此您可以直接比较SQL/JSON数组。大小为1的数组被视为等于其唯一元素。只有在以下情况下,才会执行自动展开: + +- 路径表达式包含`类型()`或`大小()`方法,分别返回数组中元素的类型和数量。 + +- 查询的JSON数据包含嵌套数组。在这种情况下,只有最外层的数组被展开,而所有内部数组保持不变。因此,隐式展开在每个路径计算步骤中只能向下一级。 + + 例如,当查询上面列出的GPS数据时,您可以从使用lax模式时它存储了一组段的事实中提取: + + +``` +lax $.track.segments.location +``` + +在严格模式下,指定的路径必须与查询的JSON文档的结构完全匹配,才能返回SQL/JSON项,因此使用此路径表达式将导致错误。要获得与lax模式相同的结果,必须显式打开`部分`数组: + +``` +strict $.track.segments[*].location +``` + +这个`.**`当使用lax模式时,访问器可能会导致令人惊讶的结果。例如,下面的查询选择每个`人力资源`价值两倍: + +``` +lax $.**.HR +``` + +这是因为`.**`访问器选择两个`部分`数组及其每个元素,而`.人力资源`当使用lax模式时,accessor会自动打开阵列。为了避免意外的结果,我们建议使用`.**`仅在严格模式下访问器。下面的查询将选择每个`人力资源`价值只有一次: + +``` +strict $.**.HR +``` + +#### 9.16.2.2.SQL/JSON路径运算符和方法 + +[表9.48](functions-json.html#FUNCTIONS-SQLJSON-OP-TABLE)显示中可用的运算符和方法`jsonpath`。请注意,虽然一元运算符和方法可以应用于前面路径步骤产生的多个值,但二元运算符(加法等)只能应用于单个值。 + +**表9.48。 `jsonpath`算子和方法** + +| 操作员/方法

#### 9.16.2.2. SQL/JSON Path Operators and Methods

[Table 9.48](functions-json.html#FUNCTIONS-SQLJSON-OP-TABLE) shows the operators and methods available in `jsonpath`. Note that while the unary operators and methods can be applied to multiple values resulting from a preceding path step, the binary operators (addition etc.) can only be applied to single values.

**Table 9.48. `jsonpath` Operators and Methods**

| Operator/Method

Description

Example |
| ------------------------------ |
| *`number`* `+` *`number`* → `*`number`*`

附加

`jsonb_路径_查询(“[2]”,“$[0]+3”)` → `5.` | +| `+` *`数字`* → `*`数字`*`

一元加号(无操作);与加法不同,它可以迭代多个值

`jsonb_路径_查询_数组('{“x”:[2,3,4]}','+$.x')` → `[2, 3, 4]` | +| *`数字`* `-` *`数字`* → `*`数字`*`

扣除

`jsonb_路径_查询(“[2]”,“[7-$[0]”)` → `5.` | +| `-` *`数字`* → `*`数字`*`

反面与减法不同,它可以迭代多个值

`jsonb_路径_查询_数组('{“x”:[2,3,4]}','-$.x')` → `[-2, -3, -4]` | +| *`数字`* `*` *`数字`* → `*`数字`*`

乘法

`jsonb_路径_查询(“[4]”,“[2*$[0]”)` → `8.` | +| *`数字`* `/` *`数字`* → `*`数字`*`

分开

`jsonb_路径_查询(“[8.5]”,“$[0]/2”)` → `4.2500000000000000` | +| *`数字`* `%` *`数字`* → `*`数字`*`

模(余数)

`jsonb_路径_查询(“[32]”,“$[0]%10”)` → `2.` | +| *`价值`* `.` `类型()` → `*`一串`*`

JSON项的类型(请参阅`json_类型`)

`jsonb_路径_查询_数组('[1,2',{}]','$[*].type()')` → `[“数字”、“字符串”、“对象”]` | +| *`价值`* `.` `大小()` → `*`数字`*`

JSON项的大小(数组元素数,如果不是数组,则为1)

`jsonb_路径_查询(“{”m:[11,15]},“$.m.size()”)` → `2.` | +| *`价值`* `.` `双()` → `*`数字`*`

从JSON数字或字符串转换而来的近似浮点数

`jsonb_路径_查询('{“len”:“1.9”}','$.len.double()*2')` → `3.8` | +| *`数字`* `.` `天花板()` → `*`数字`*`

大于或等于给定数字的最近整数

`jsonb_路径_查询(“{”h“:1.3},“$.h.天花板()”)` → `2.` | +| *`数字`* `.` `楼层()` → `*`数字`*`

小于或等于给定数字的最近整数

`jsonb_路径_查询(“{”h“:1.7},“$.h.floor()”)` → `1.` | +| *`数字`* `.` `abs()` → `*`数字`*`

给定数字的绝对值

`jsonb_路径_查询('{“z”:-0.3}','$.z.abs()')` → `0.3` | +| *`一串`* `.` `日期时间()` → `*`datetime\_类型`*`(见注)

从字符串转换的日期/时间值

`jsonb_路径_查询('[“2015-8-1”,“2015-08-12”],'$[*]?(@.datetime()<“2015-08-2”)。datetime())')` → `"2015-8-1"` | +| *`一串`* `.` `约会时间(*`样板`*)` → `*`datetime\_类型`*`(见注)

使用指定的`到时间戳`样板

`jsonb_路径_查询_数组('[“12:30”,“18:40”],'$[*].datetime(“HH24:MI”))` → `["12:30:00", "18:40:00"]` | +| *`对象`* `.` `keyvalue()` → `*`大堆`*`

对象的键值对,表示为包含三个字段的对象数组:`“钥匙”`, `“价值”`和`“身份证”`; `“身份证”`是键值对所属对象的唯一标识符

`jsonb_path_query_array('{"x": "20", "y": 32}', '$.keyvalue()')` → `[{"id": 0, "key": "x", "value": "20"}, {"id": 0, "key": "y", "value": 32}]` |

### Note

The result type of the `datetime()` and `datetime(template)` methods can be `date`, `timetz`, `time`, `timestamptz`, or `timestamp`. Both methods determine their result type dynamically.

The `datetime()` method sequentially tries to match its input string to the ISO formats for `date`, `timetz`, `time`, `timestamptz`, and `timestamp`. It stops on the first matching format and emits the corresponding data type.

The `datetime(template)` method determines the result type according to the fields used in the provided template string.

The `datetime()` and `datetime(template)` methods use the same parsing rules as the `to_timestamp` SQL function does (see [Section 9.8](functions-formatting.html)), with three exceptions. First, these methods don't allow unmatched template patterns. Second, only the following separators are allowed in the template string: minus sign, period, solidus (slash), comma, apostrophe, semicolon, colon and space. Third, separators in the template string must exactly match the input string.

If different date/time types need to be compared, an implicit cast is applied. A `date` value can be cast to `timestamp` or `timestamptz`, `timestamp` can be cast to `timestamptz`, and `time` to `timetz`. However, all but the first of these conversions depend on the current [TimeZone](runtime-config-client.html#GUC-TIMEZONE) setting, and thus can only be performed within timezone-aware `jsonpath` functions.

[Table 9.49](functions-json.html#FUNCTIONS-SQLJSON-FILTER-EX-TABLE) shows the available filter expression elements.

**Table 9.49. `jsonpath` Filter Expression Elements**

| Predicate/Value

描述

例子 | +| ---------------------------- | +| *`价值`* `==` *`价值`* → `布尔值`

相等比较(此运算符和其他比较运算符适用于所有JSON标量值)

`jsonb_路径_查询_数组(“[1,“a”,1,3]”,“$[*]?(@==1)”` → `[1, 1]`

`jsonb_路径_查询_数组('[1,“a”,1,3]','$[*]?(@==“a”))` → `[“a”]` | +| *`价值`* `!=` *`价值`* → `布尔值`

*`价值`* `<>` *`价值`* → `布尔值`

不平等比较

`jsonb_路径_查询_数组(“[1,2,1,3]”,“$[*]?(@!=1)”` → `[2, 3]`

`jsonb_路径_查询_数组('[“a”、“b”、“c”],'$[*]?(@<>“b”))` → `[“a”,“c”]` | +| *`价值`* `<` *`价值`* → `布尔值`

少于比较

`jsonb_路径_查询_数组(“[1,2,3]”,“$[*]?(@<2)”` → `[1]` | +| *`价值`* `<=` *`价值`* → `布尔值`

小于或等于比较

`jsonb_路径_查询_数组('[“a”、“b”、“c”]'、'$[*]?(@<=“b”))` → `[“a”,“b”]` | +| *`价值`* `>` *`价值`* → `布尔值`

大于比较

`jsonb_路径_查询_数组(“[1,2,3]”,“$[*]?(@>2)”` → `[3]` | +| *`价值`* `>=` *`价值`* → `布尔值`

大于或等于比较

`jsonb_路径_查询_数组(“[1,2,3]”,“$[*]?(@>=2)”` → `[2, 3]` | +| `符合事实的` → `布尔值`

JSON常量`符合事实的`

`jsonb_路径_查询(“[{”name:“John”,“parent”:false},“{”name:“Chris”,“parent”:true}]”,“$[*]?(@.parent==true)”` → `{“name”:“Chris”,“parent”:true}` | +| `错误的` → `布尔值`

JSON常量`错误的`

`jsonb_路径_查询(“[{”name:“John”,“parent”:false},“{”name:“Chris”,“parent”:true}]”,“$[*]?(@.parent==false)”` → `{“name”:“John”,“parent”:false}` | +| `无效的` → `*`价值`*`

JSON常量`无效的`(请注意,与SQL不同的是`无效的`(正常工作)

`jsonb_路径_查询('[{“name”:“Mary”,“job”:null},{“name”:“Michael”,“job”:“driver”}],'$[*]?(@.job==null)。名称')` → `“玛丽”` | +| *`布尔值`* `&&` *`布尔值`* → `布尔值`

布尔与

`jsonb_路径_查询(“[1,3,7]”,“$[*]?(@>1&&@<5)”` → `3` | +| *`布尔值`* `||` *`布尔值`* → `布尔值`

布尔或

`jsonb_path_query('[1, 3, 7]', '$[*] ? (@ < 1 || @ > 5)')` → `7` | +| `!` *`布尔值`* → `布尔值`

布尔非

`jsonb_path_query('[1, 3, 7]', '$[*] ? (!(@ < 5))')` → `7` | +| *`布尔值`* `未知` → `布尔值`

测试布尔条件是否为`未知`.

`jsonb_path_query('[-1, 2, 7, "foo"]', '$[*] ? ((@ > 0) 未知)')` → `“富”` | +| *`细绳`* `like_regex` *`细绳`* [ `旗帜` *`细绳`* ] → `布尔值`

测试第一个操作数是否与第二个操作数给出的正则表达式匹配,可选地用字符串描述的修改`旗帜`字符(见[第 9.16.2.3 节](functions-json.html#JSONPATH-REGULAR-EXPRESSIONS))。

`jsonb_path_query_array('["abc", "abd", "aBdC", "abdacb", "babc"]', '$[*] ? (@like_regex "^ab.*c")')` → `[“abc”,“abdacb”]`

`jsonb_path_query_array('["abc", "abd", "aBdC", "abdacb", "babc"]', '$[*] ? (@like_regex "^ab.*c" flag "i")')` → `[“abc”,“aBdC”,“abdacb”]` | +| *`细绳`* `以。。开始` *`细绳`* → `布尔值`

测试第二个操作数是否是第一个操作数的初始子字符串。

`jsonb_path_query('["John Smith", "Mary Stone", "Bob Johnson"]', '$[*] ? (@以 "John" 开头)')` → `“约翰·史密斯”` | +| `存在` `(` *`路径表达式`* `)`→`布尔值`

测试路径表达式是否至少与一个SQL/JSON项匹配。退换商品`未知的`如果路径表达式会导致错误;第二个示例使用此选项来避免在严格模式下出现无此类密钥错误。

`jsonb_路径_查询(“{”x:[1,2],“y:[2,4]}”,“strict$.*?(存在(@?(@[*]>2)))`→`[2, 4]`

`jsonb_path_query_array('{"value": 41}', 'strict $ ? (exists (@.name)) .name')` → `[]` |

#### 9.16.2.3. SQL/JSON Regular Expressions

[](<>)

SQL/JSON path expressions allow matching text to a regular expression with the `like_regex` filter. For example, the following SQL/JSON path query would case-insensitively match all strings in an array that start with an English vowel:

```
$[*] ? (@ like_regex "^[aeiou]" flag "i")
```

The optional `flag` string may include one or more of the characters `i` for case-insensitive match, `m` to allow `^` and `$` to match at newlines, `s` to allow `.` to match a newline, and `q` to quote the whole pattern (reducing the behavior to a simple substring match).

The SQL/JSON standard borrows its definition for regular expressions from the `LIKE_REGEX` operator, which in turn uses the XQuery standard. PostgreSQL does not currently support the `LIKE_REGEX` operator. Therefore, the `like_regex` filter is implemented using the POSIX regular expression engine described in [Section 9.7.3](functions-matching.html#FUNCTIONS-POSIX-REGEXP). This leads to various minor discrepancies from standard SQL/JSON behavior, which are cataloged in [Section 9.7.3.8](functions-matching.html#POSIX-VS-XQUERY). Note, however, that the flag-letter incompatibilities described there do not apply to SQL/JSON, as it translates the XQuery flag letters to match what the POSIX engine expects.

Keep in mind that the pattern argument of `like_regex` is a JSON path string literal, written according to the rules given in [Section 8.14.7](datatype-json.html#DATATYPE-JSONPATH). This means in particular that any backslashes you want to use in the regular expression must be doubled. For example, to match string values of the root document that contain only digits:

```
$.* ? (@ like_regex "^\\d+$")
```

diff --git a/docs/X/functions-logical.md b/docs/en/functions-logical.md similarity index 100% rename from docs/X/functions-logical.md rename to docs/en/functions-logical.md diff --git a/docs/en/functions-logical.zh.md b/docs/en/functions-logical.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..97378a0e5b5651cc60e0b4d3c0628c611aa676d6 --- /dev/null +++ b/docs/en/functions-logical.zh.md @@ -0,0 +1,30 @@

## 9.1. Logical Operators

[](<>)[](<>)

The usual logical operators are available: [](<>) [](<>) [](<>) [](<>) [](<>) [](<>)

```
boolean AND boolean → boolean
boolean OR boolean → boolean
NOT boolean → boolean
```

SQL uses a three-valued logic system with true, false, and `NULL`, which represents "unknown". Observe the following truth tables:

| *`a`* | *`b`* | *`a`* AND *`b`* | *`a`* OR *`b`* |
| ----- | ----- | --------------- | -------------- |
| TRUE | TRUE | TRUE | TRUE |
| TRUE | FALSE | FALSE | TRUE |
| TRUE | NULL | NULL | TRUE |
| FALSE | FALSE | FALSE | FALSE |
| FALSE | NULL | FALSE | NULL |
| NULL | NULL | NULL | NULL |

| *`a`* | NOT *`a`* |
| ----- | --------- |
| TRUE | FALSE |
| FALSE | TRUE |
| NULL | NULL |

The operators `AND` and `OR` are commutative, that is, you can switch the left and right operands without affecting the result. (However, it is not guaranteed that the left operand is evaluated before the right operand. See [Section 4.2.14](sql-expressions.html#SYNTAX-EXPRESS-EVAL) for more information about the order of evaluation of subexpressions.)

diff --git a/docs/X/functions-net.md b/docs/en/functions-net.md similarity index 100% rename from docs/X/functions-net.md rename to docs/en/functions-net.md diff --git a/docs/en/functions-net.zh.md b/docs/en/functions-net.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..a3c51fd2a878c5aa5eabf23e8431812ff465ddbe --- /dev/null +++ b/docs/en/functions-net.zh.md @@ -0,0 +1,55 @@

## 9.12. Network Address Functions and Operators

The IP network address types, `cidr` and `inet`, support the usual comparison operators shown in [Table 9.1](functions-comparison.html#FUNCTIONS-COMPARISON-OP-TABLE) as well as the specialized operators and functions shown in [Table 9.38](functions-net.html#CIDR-INET-OPERATORS-TABLE) and [Table 9.39](functions-net.html#CIDR-INET-FUNCTIONS-TABLE).

Any `cidr` value can be cast to `inet` implicitly; therefore, the operators and functions shown below as operating on `inet` also work on `cidr` values. (Where there are separate functions for `inet` and `cidr`, it is because the behavior should be different for the two cases.) Also, it is permitted to cast an `inet` value to `cidr`. When this is done, any bits to the right of the netmask are silently zeroed to create a valid `cidr` value.

**Table 9.38. IP Address Operators**

| Operator

Description

Example |
| ---------------------------- |
| `inet` `<<` `inet` → `boolean`

Is subnet strictly contained by subnet? This operator, and the next four, test for subnet inclusion. They consider only the network parts of the two addresses (ignoring any bits to the right of the netmasks) and determine whether one network is identical to or a subnet of the other.

`inet '192.168.1.5' << inet '192.168.1/24'` → `t`

`inet '192.168.0.5' << inet '192.168.1/24'` → `f`

`inet '192.168.1/24' << inet '192.168.1/24'` → `f` |
| `inet` `<<=` `inet` → `boolean`

Is subnet contained by or equal to subnet?

`inet '192.168.1/24' <<= inet '192.168.1/24'` → `t` |

子网是否严格包含子网?

`inet '192.168.1/24' >> inet '192.168.1.5'` → `吨` | +| `网络` `>>=` `网络` → `布尔值`

子网是否包含或等于子网?

`inet '192.168.1/24' >>= inet '192.168.1/24'` → `吨` | +| `网络` `&&` `网络` → `布尔值`

任一子网是否包含或等于另一个子网?

`inet '192.168.1/24' && inet '192.168.1.80/28'` → `吨`

`inet '192.168.1/24' && inet '192.168.2.0/28'` → `f` | +| `~` `网络` → `网络`

按位计算 NOT。

`〜净'192.168.1.6'` → `63.87.254.249` | +| `网络` `&` `网络` → `网络`

计算按位与。

`inet '192.168.1.6' & inet '0.0.0.255'` → `0.0.0.6` | +| `网络` `|` `网络` → `网络`

计算按位或。

`inet '192.168.1.6' | inet '0.0.0.255'`→`192.168.1.255` | +| `inet` `+` `bigint`→`inet`

Adds an offset to an address.

`inet '192.168.1.6' + 25`→`192.168.1.31` | +| `bigint` `+` `inet`→`inet`

Adds an offset to an address.

`200 + inet '::ffff:fff0:1'`→`::ffff:255.240.0.201` | +| `inet` `-` `bigint`→`inet`

Subtracts an offset from an address.

`inet '192.168.1.43' - 36`→`192.168.1.7` | +| `内特` `-` `内特`→`比基特`

计算两个地址的差。

`inet'192.168.1.43'-inet'192.168.1.19'`→`24`

`inet'::1'-inet'::ffff:1'`→`-4294901760` | + +**表9.39。IP地址功能** + +| 作用

Description

Example(s) |
| -------------------------- |
| [](<>) `abbrev` ( `inet` ) → `text`

Creates an abbreviated display format as text. (The result is the same as the `inet` output function produces; it is "abbreviated" only in comparison to the result of an explicit cast to `text`, which for historical reasons will never suppress the netmask part.)

`abbrev(inet '10.1.0.0/32')` → `10.1.0.0` |
| `abbrev` ( `cidr` ) → `text`

Creates an abbreviated display format as text. (The abbreviation consists of dropping all-zero octets to the right of the netmask; more examples are in [Table 8.22](datatype-net-types.html#DATATYPE-NET-CIDR-TABLE).)

`abbrev(cidr '10.1.0.0/16')` → `10.1/16` |
| [](<>) `broadcast` ( `inet` ) → `inet`

Computes the broadcast address for the address's network.

`broadcast(inet '192.168.1.5/24')` → `192.168.1.255/24` |
| [](<>) `family` ( `inet` ) → `integer`

Returns the address's family: `4` for IPv4, `6` for IPv6.

`family(inet '::1')` → `6` |
| [](<>) `host` ( `inet` ) → `text`

Returns the IP address as text, ignoring the netmask.

`host(inet '192.168.1.0/24')` → `192.168.1.0` |
| [](<>) `hostmask` ( `inet` ) → `inet`

Computes the host mask for the address's network.

`hostmask(inet '192.168.23.20/30')` → `0.0.0.3` |
| [](<>) `inet_merge` ( `inet`, `inet` ) → `cidr`

Computes the smallest network that includes both of the given networks.

`inet_merge(inet '192.168.1.5/24', inet '192.168.2.5/24')` → `192.168.0.0/22` |
| [](<>) `inet_same_family` ( `inet`, `inet` ) → `boolean`

Tests whether the addresses belong to the same IP family.

`inet_same_family(inet '192.168.1.5/24', inet '::1')` → `f` |
| [](<>) `masklen` ( `inet` ) → `integer`

Returns the netmask length in bits.

`masklen(inet '192.168.1.5/24')` → `24` |
| [](<>) `netmask` ( `inet` ) → `inet`

Computes the network mask for the address's network.

`netmask(inet '192.168.1.5/24')` → `255.255.255.0` |
| [](<>) `network` ( `inet` ) → `cidr`

Returns the network part of the address, zeroing out whatever is to the right of the netmask. (This is equivalent to casting the value to `cidr`.)

`network(inet '192.168.1.5/24')` → `192.168.1.0/24` |
| [](<>) `set_masklen` ( `inet`, `integer` ) → `inet`

Sets the netmask length for an `inet` value. The address part does not change.

`set_masklen(inet '192.168.1.5/24', 16)` → `192.168.1.5/16` |
| `set_masklen` ( `cidr`, `integer` ) → `cidr`

Sets the netmask length for a `cidr` value. Address bits to the right of the new netmask are set to zero.

`set_masklen(cidr '192.168.1.0/24', 16)` → `192.168.0.0/16` |
| [](<>) `text` ( `inet` ) → `text`

Returns the unabbreviated IP address and netmask length as text. (This has the same result as an explicit cast to `text`.)

`text(inet '192.168.1.5')` → `192.168.1.5/32` |

### Tip

The `abbrev`, `host`, and `text` functions are primarily intended to offer alternative display formats for IP addresses.

The MAC address types, `macaddr` and `macaddr8`, support the usual comparison operators shown in [Table 9.1](functions-comparison.html#FUNCTIONS-COMPARISON-OP-TABLE) as well as the specialized functions shown in [Table 9.40](functions-net.html#MACADDR-FUNCTIONS-TABLE). In addition, they support the bitwise logical operators `~`, `&` and `|` (NOT, AND and OR), just as shown above for IP addresses.

**Table 9.40. MAC Address Functions**

| Function

Description

Example(s) |
| -------------------------- |
| [](<>) `trunc` ( `macaddr` ) → `macaddr`

Sets the last 3 bytes of the address to zero. The remaining prefix can be associated with a particular manufacturer (using data not included in PostgreSQL).

`trunc(macaddr '12:34:56:78:90:ab')` → `12:34:56:00:00:00` |
| `trunc` ( `macaddr8` ) → `macaddr8`

Sets the last 5 bytes of the address to zero. The remaining prefix can be associated with a particular manufacturer (using data not included in PostgreSQL).

`trunc(macaddr8 '12:34:56:78:90:ab:cd:ef')` → `12:34:56:00:00:00:00:00` |
| [](<>) `macaddr8_set7bit` ( `macaddr8` ) → `macaddr8`

Sets the 7th bit of the address to one, creating what is known as modified EUI-64, for inclusion in an IPv6 address.

`macaddr8_set7bit(macaddr8 '00:34:56:ab:cd:ef')` → `02:34:56:ff:fe:ab:cd:ef` |

diff --git a/docs/X/functions-subquery.md b/docs/en/functions-subquery.md similarity index 100% rename from docs/X/functions-subquery.md rename to docs/en/functions-subquery.md diff --git a/docs/en/functions-subquery.zh.md b/docs/en/functions-subquery.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..a929ca933f7edd6c39aa6bd2bad657259ed4776e --- /dev/null +++ b/docs/en/functions-subquery.zh.md @@ -0,0 +1,135 @@

## 9.23. Subquery Expressions

[9.23.1. `EXISTS`](functions-subquery.html#FUNCTIONS-SUBQUERY-EXISTS)

[9.23.2. `IN`](functions-subquery.html#FUNCTIONS-SUBQUERY-IN)

[9.23.3. `NOT IN`](functions-subquery.html#FUNCTIONS-SUBQUERY-NOTIN)

[9.23.4. `ANY`/`SOME`](functions-subquery.html#FUNCTIONS-SUBQUERY-ANY-SOME)

[9.23.5. `ALL`](functions-subquery.html#FUNCTIONS-SUBQUERY-ALL)

[9.23.6. Single-Row Comparison](functions-subquery.html#id-1.5.8.29.15)

[](<>)[](<>)[](<>)[](<>)[](<>)[](<>)[](<>)

This section describes the SQL-compliant subquery expressions available in PostgreSQL. All of the expression forms documented in this section return Boolean (true/false) results.

### 9.23.1. `EXISTS`

```
EXISTS (subquery)
```

The argument of `EXISTS` is an arbitrary `SELECT` statement, or *subquery*. The subquery is evaluated to determine whether it returns any rows. If it returns at least one row, the result of `EXISTS` is "true"; if the subquery returns no rows, the result of `EXISTS` is "false".

The subquery can refer to variables from the surrounding query, which will act as constants during any one evaluation of the subquery.

The subquery will generally only be executed long enough to determine whether at least one row is returned, not all the way to completion. It is unwise to write a subquery that has side effects (such as calling sequence functions); whether the side effects occur might be unpredictable.

Since the result depends only on whether any rows are returned, and not on the contents of those rows, the output list of the subquery is normally unimportant. A common coding convention is to write all `EXISTS` tests in the form `EXISTS(SELECT 1 WHERE ...)`. There are exceptions to this rule however, such as subqueries that use `INTERSECT`.

This simple example is like an inner join on `col2`, but it produces at most one output row for each `tab1` row, even if there are multiple matching `tab2` rows:

```
SELECT col1
FROM tab1
WHERE EXISTS (SELECT 1 FROM tab2 WHERE col2 = tab1.col2);
```

### 9.23.2. `IN`

```
expression IN (subquery)
```

The right-hand side is a parenthesized subquery, which must return exactly one column. The left-hand expression is evaluated and compared to each row of the subquery result. The result of `IN` is "true" if any equal subquery row is found. The result is "false" if no equal row is found (including the case where the subquery returns no rows).

Note that if the left-hand expression yields null, or if there are no equal right-hand values and at least one right-hand row yields null, the result of the `IN` construct will be null, not false. This is in accordance with SQL's normal rules for Boolean combinations of null values.

As with `EXISTS`, it's unwise to assume that the subquery will be evaluated completely.

```
row_constructor IN (subquery)
```

The left-hand side of this form of `IN` is a row constructor, as described in [Section 4.2.13](sql-expressions.html#SQL-SYNTAX-ROW-CONSTRUCTORS). The right-hand side is a parenthesized subquery, which must return exactly as many columns as there are expressions in the left-hand row. The left-hand expressions are evaluated and compared row-wise to each row of the subquery result. The result of `IN` is "true" if any equal subquery row is found. The result is "false" if no equal row is found (including the case where the subquery returns no rows).

As usual, null values in the rows are combined per the normal rules of SQL Boolean expressions. Two rows are considered equal if all their corresponding members are non-null and equal; the rows are unequal if any corresponding members are non-null and unequal; otherwise the result of that row comparison is unknown (null). If all the per-row results are either unequal or null, with at least one null, then the result of `IN` is null.

### 9.23.3. `NOT IN`

```
expression NOT IN (subquery)
```

The right-hand side is a parenthesized subquery, which must return exactly one column. The left-hand expression is evaluated and compared to each row of the subquery result. The result of `NOT IN` is "true" if only unequal subquery rows are found (including the case where the subquery returns no rows). The result is "false" if any equal row is found.

Note that if the left-hand expression yields null, or if there are no equal right-hand values and at least one right-hand row yields null, the result of the `NOT IN` construct will be null, not true. This is in accordance with SQL's normal rules for Boolean combinations of null values.

As with `EXISTS`, it's unwise to assume that the subquery will be evaluated completely.

```
row_constructor NOT IN (subquery)
```

The left-hand side of this form of `NOT IN` is a row constructor, as described in [Section 4.2.13](sql-expressions.html#SQL-SYNTAX-ROW-CONSTRUCTORS). The right-hand side is a parenthesized subquery, which must return exactly as many columns as there are expressions in the left-hand row. The left-hand expressions are evaluated and compared row-wise to each row of the subquery result. The result of `NOT IN` is "true" if only unequal subquery rows are found (including the case where the subquery returns no rows). The result is "false" if any equal row is found.

As usual, null values in the rows are combined per the normal rules of SQL Boolean expressions. Two rows are considered equal if all their corresponding members are non-null and equal; the rows are unequal if any corresponding members are non-null and unequal; otherwise the result of that row comparison is unknown (null). If all the per-row results are either unequal or null, with at least one null, then the result of `NOT IN` is null.

### 9.23.4. `ANY`/`SOME`

```
expression operator ANY (subquery)
expression operator SOME (subquery)
```

The right-hand side is a parenthesized subquery, which must return exactly one column. The left-hand expression is evaluated and compared to each row of the subquery result using the given *`operator`*, which must yield a Boolean result. The result of `ANY` is "true" if any true result is obtained. The result is "false" if no true result is found (including the case where the subquery returns no rows).

`SOME` is a synonym for `ANY`. `IN` is equivalent to `= ANY`.
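As a quick illustration of that equivalence, the two queries below should return the same rows; this is a minimal sketch only, reusing the hypothetical `tab1`/`tab2` tables from the `EXISTS` example above:

```
-- rows of tab1 whose col2 has a match in tab2, written with IN
SELECT col1 FROM tab1 WHERE col2 IN (SELECT col2 FROM tab2);

-- the same condition written with = ANY
SELECT col1 FROM tab1 WHERE col2 = ANY (SELECT col2 FROM tab2);
```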
Note that if there are no successes and at least one right-hand row yields null for the operator's result, the result of the `ANY` construct will be null, not false. This is in accordance with SQL's normal rules for Boolean combinations of null values.

As with `EXISTS`, it's unwise to assume that the subquery will be evaluated completely.

```
row_constructor operator ANY (subquery)
row_constructor operator SOME (subquery)
```

The left-hand side of this form of `ANY` is a row constructor, as described in [Section 4.2.13](sql-expressions.html#SQL-SYNTAX-ROW-CONSTRUCTORS). The right-hand side is a parenthesized subquery, which must return exactly as many columns as there are expressions in the left-hand row. The left-hand expressions are evaluated and compared row-wise to each row of the subquery result, using the given *`operator`*. The result of `ANY` is "true" if the comparison returns true for any subquery row. The result is "false" if the comparison returns false for every subquery row (including the case where the subquery returns no rows). The result is NULL if no comparison with a subquery row returns true, and at least one comparison returns NULL.

See [Section 9.24.5](functions-comparisons.html#ROW-WISE-COMPARISON) for details about the meaning of a row constructor comparison.

### 9.23.5. `ALL`

```
expression operator ALL (subquery)
```

The right-hand side is a parenthesized subquery, which must return exactly one column. The left-hand expression is evaluated and compared to each row of the subquery result using the given *`operator`*, which must yield a Boolean result. The result of `ALL` is "true" if all rows yield true (including the case where the subquery returns no rows). The result is "false" if any false result is found. The result is NULL if no comparison with a subquery row returns false, and at least one comparison returns NULL.

`NOT IN` is equivalent to `<> ALL`.

As with `EXISTS`, it's unwise to assume that the subquery will be evaluated completely.

```
row_constructor operator ALL (subquery)
```

The left-hand side of this form of `ALL` is a row constructor, as described in [Section 4.2.13](sql-expressions.html#SQL-SYNTAX-ROW-CONSTRUCTORS). The right-hand side is a parenthesized subquery, which must return exactly as many columns as there are expressions in the left-hand row. The left-hand expressions are evaluated and compared row-wise to each row of the subquery result, using the given *`operator`*. The result of `ALL` is "true" if the comparison returns true for all subquery rows (including the case where the subquery returns no rows). The result is "false" if the comparison returns false for any subquery row. The result is NULL if no comparison with a subquery row returns false, and at least one comparison returns NULL.

See [Section 9.24.5](functions-comparisons.html#ROW-WISE-COMPARISON) for details about the meaning of a row constructor comparison.

### 9.23.6. Single-Row Comparison

[](<>)

```
row_constructor operator (subquery)
```

The left-hand side is a row constructor, as described in [Section 4.2.13](sql-expressions.html#SQL-SYNTAX-ROW-CONSTRUCTORS). The right-hand side is a parenthesized subquery, which must return exactly as many columns as there are expressions in the left-hand row. Furthermore, the subquery cannot return more than one row. (If it returns zero rows, the result is taken to be null.) The left-hand side is evaluated and compared row-wise to the single subquery result row.

See [Section 9.24.5](functions-comparisons.html#ROW-WISE-COMPARISON) for details about the meaning of a row constructor comparison. diff --git a/docs/X/functions-textsearch.md b/docs/en/functions-textsearch.md similarity index 100% rename from docs/X/functions-textsearch.md rename to docs/en/functions-textsearch.md diff --git a/docs/en/functions-textsearch.zh.md b/docs/en/functions-textsearch.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..a8ea96c823f8854fd64321a7e0d1979eacc8b7c6 --- /dev/null +++ b/docs/en/functions-textsearch.zh.md @@ -0,0 +1,73 @@

## 9.13. Text Search Functions and Operators

[](<>)[](<>)

[Table 9.41](functions-textsearch.html#TEXTSEARCH-OPERATORS-TABLE), [Table 9.42](functions-textsearch.html#TEXTSEARCH-FUNCTIONS-TABLE) and [Table 9.43](functions-textsearch.html#TEXTSEARCH-FUNCTIONS-DEBUG-TABLE) summarize the functions and operators that are provided for full text searching. See [Chapter 12](textsearch.html) for a detailed explanation of PostgreSQL's text search facility.

**Table 9.41. Text Search Operators**

| Operator

Description

Example(s) |
| ---------------------------- |
| `tsvector` `@@` `tsquery` → `boolean`

`tsquery` `@@` `tsvector` → `boolean`

Does `tsvector` match `tsquery`? (The arguments can be given in either order.)

`to_tsvector('fat cats ate rats') @@ to_tsquery('cat & rat')` → `t` |
| `text` `@@` `tsquery` → `boolean`

Does text string, after implicit invocation of `to_tsvector()`, match `tsquery`?

`'fat cats ate rats' @@ to_tsquery('cat & rat')` → `t` |
| `tsvector` `@@@` `tsquery` → `boolean`

`tsquery` `@@@` `tsvector` → `boolean`

This is a deprecated synonym for `@@`.

`to_tsvector('fat cats ate rats') @@@ to_tsquery('cat & rat')` → `t` |
| `tsvector` `||` `tsvector` → `tsvector`

Concatenates two `tsvector`s. If both inputs contain lexeme positions, the second input's positions are adjusted accordingly.

`'a:1 b:2'::tsvector || 'c:1 d:2 b:3'::tsvector` → `'a':1 'b':2,5 'c':3 'd':4` |
| `tsquery` `&&` `tsquery` → `tsquery`

ANDs two `tsquery`s together, producing a query that matches documents that match both input queries.

`'fat | rat'::tsquery && 'cat'::tsquery` → `( 'fat' | 'rat' ) & 'cat'` |
| `tsquery` `||` `tsquery` → `tsquery`

ORs two `tsquery`s together, producing a query that matches documents that match either input query.

`'fat | rat'::tsquery || 'cat'::tsquery` → `'fat' | 'rat' | 'cat'` |
| `!!` `tsquery` → `tsquery`

Negates a `tsquery`, producing a query that matches documents that do not match the input query.

`!! 'cat'::tsquery` → `!'cat'` |
| `tsquery` `<->` `tsquery` → `tsquery`

Constructs a phrase query, which matches if the two input queries match at successive lexemes.

`to_tsquery('fat') <-> to_tsquery('rat')` → `'fat' <-> 'rat'` |
| `tsquery` `@>` `tsquery` → `boolean`

Does first `tsquery` contain the second? (This considers only whether all the lexemes appearing in one query appear in the other, ignoring the combining operators.)

`'cat'::tsquery @> 'cat & rat'::tsquery` → `f` |
| `tsquery` `<@` `tsquery` → `boolean`

Is first `tsquery` contained in the second? (This considers only whether all the lexemes appearing in one query appear in the other, ignoring the combining operators.)

`'cat'::tsquery <@ 'cat & rat'::tsquery` → `t`

`'cat'::tsquery <@ '!cat & rat'::tsquery` → `t` |

In addition to these specialized operators, the usual comparison operators shown in [Table 9.1](functions-comparison.html#FUNCTIONS-COMPARISON-OP-TABLE) are available for types `tsvector` and `tsquery`. These are not very useful for text searching but allow, for example, unique indexes to be built on columns of these types.

**Table 9.42. Text Search Functions**

| Function

Description

Example(s) |
| ------------------------------------------------- |
| [](<>) `array_to_tsvector` ( `text[]` ) → `tsvector`

Converts an array of lexemes to a `tsvector`. The given strings are used as-is without further processing.

`array_to_tsvector('{fat,cat,rat}'::text[])` → `'cat' 'fat' 'rat'` |
| [](<>) `get_current_ts_config` ( ) → `regconfig`

Returns the OID of the current default text search configuration (as set by [default_text_search_config](runtime-config-client.html#GUC-DEFAULT-TEXT-SEARCH-CONFIG)).

`get_current_ts_config()` → `english` |
| [](<>) `length` ( `tsvector` ) → `integer`

Returns the number of lexemes in the `tsvector`.

`length('fat:2,4 cat:3 rat:5A'::tsvector)` → `3` |
| [](<>) `numnode` ( `tsquery` ) → `integer`

Returns the number of lexemes plus operators in the `tsquery`.

`numnode('(fat & rat) | cat'::tsquery)` → `5` |
| [](<>) `plainto_tsquery` ( [ *`config`* `regconfig`, ] *`query`* `text` ) → `tsquery`

Converts text to a `tsquery`, normalizing words according to the specified or default configuration. Any punctuation in the string is ignored (it does not determine query operators). The resulting query matches documents containing all non-stopwords in the text.

`plainto_tsquery('english', 'The Fat Rats')` → `'fat' & 'rat'` |
| [](<>) `phraseto_tsquery` ( [ *`config`* `regconfig`, ] *`query`* `text` ) → `tsquery`

Converts text to a `tsquery`, normalizing words according to the specified or default configuration. Any punctuation in the string is ignored (it does not determine query operators). The resulting query matches phrases containing all non-stopwords in the text.

`phraseto_tsquery('english', 'The Fat Rats')` → `'fat' <-> 'rat'`

`phraseto_tsquery('english', 'The Cat and Rats')` → `'cat' <2> 'rat'` |
| [](<>) `websearch_to_tsquery` ( [ *`config`* `regconfig`, ] *`query`* `text` ) → `tsquery`

Converts text to a `tsquery`, normalizing words according to the specified or default configuration. Quoted word sequences are converted to phrase tests. The word "or" is understood as producing an OR operator, and a dash produces a NOT operator; other punctuation is ignored. This approximates the behavior of some common web search tools.

`websearch_to_tsquery('english', '"fat rat" or cat dog')` → `'fat' <-> 'rat' | 'cat' & 'dog'` |
| [](<>) `querytree` ( `tsquery` ) → `text`

Produces a representation of the indexable portion of a `tsquery`. A result that is empty or just `T` indicates a non-indexable query.

`querytree('foo & ! bar'::tsquery)` → `'foo'` |
| [](<>) `setweight` ( *`vector`* `tsvector`, *`weight`* `"char"` ) → `tsvector`

Assigns the specified *`weight`* to each element of the *`vector`*.

`setweight('fat:2,4 cat:3 rat:5B'::tsvector, 'A')` → `'cat':3A 'fat':2A,4A 'rat':5A` |
| [](<>) `setweight` ( *`vector`* `tsvector`, *`weight`* `"char"`, *`lexemes`* `text[]` ) → `tsvector`

Assigns the specified *`weight`* to elements of the *`vector`* that are listed in *`lexemes`*.

`setweight('fat:2,4 cat:3 rat:5,6B'::tsvector, 'A', '{cat,rat}')` → `'cat':3A 'fat':2,4 'rat':5A,6A` |
| [](<>) `strip` ( `tsvector` ) → `tsvector`

Removes positions and weights from the `tsvector`.

`strip('fat:2,4 cat:3 rat:5A'::tsvector)` → `'cat' 'fat' 'rat'` |
| [](<>) `to_tsquery` ( [ *`config`* `regconfig`, ] *`query`* `text` ) → `tsquery`

Converts text to a `tsquery`, normalizing words according to the specified or default configuration. The words must be combined by valid `tsquery` operators.

`to_tsquery('english', 'The & Fat & Rats')` → `'fat' & 'rat'` |
| [](<>) `to_tsvector` ( [ *`config`* `regconfig`, ] *`document`* `text` ) → `tsvector`

Converts text to a `tsvector`, normalizing words according to the specified or default configuration. Position information is included in the result.

`to_tsvector('english', 'The Fat Rats')` → `'fat':2 'rat':3` |
| `to_tsvector` ( [ *`config`* `regconfig`, ] *`document`* `json` ) → `tsvector`

`to_tsvector` ( [ *`config`* `regconfig`, ] *`document`* `jsonb` ) → `tsvector`

Converts each string value in the JSON document to a `tsvector`, normalizing words according to the specified or default configuration. The results are then concatenated in document order to produce the output. Position information is generated as though one stopword exists between each pair of string values. (Beware that "document order" of the fields of a JSON object is implementation-dependent when the input is `jsonb`; observe the difference in the examples.)

`to_tsvector('english', '{"aa": "The Fat Rats", "b": "dog"}'::json)` → `'dog':5 'fat':2 'rat':3`

`to_tsvector('english', '{"aa": "The Fat Rats", "b": "dog"}'::jsonb)` → `'dog':1 'fat':4 'rat':5` |
| [](<>) `json_to_tsvector` ( [ *`config`* `regconfig`, ] *`document`* `json`, *`filter`* `jsonb` ) → `tsvector`

[](<>) `jsonb_to_tsvector` ( [ *`config`* `regconfig`, ] *`document`* `jsonb`, *`filter`* `jsonb` ) → `tsvector`

Selects each item in the JSON document that is requested by the *`filter`* and converts each one to a `tsvector`, normalizing words according to the specified or default configuration. The results are then concatenated in document order to produce the output. Position information is generated as though one stopword exists between each pair of selected items. (Beware that "document order" of the fields of a JSON object is implementation-dependent when the input is `jsonb`.) The *`filter`* must be a `jsonb` array containing zero or more of these keywords: `"string"` (to include all string values), `"numeric"` (to include all numeric values), `"boolean"` (to include all boolean values), `"key"` (to include all keys), or `"all"` (to include all of the above). As a special case, the *`filter`* can also be a simple JSON value that is one of these keywords.

`json_to_tsvector('english', '{"a": "The Fat Rats", "b": 123}'::json, '["string", "numeric"]')` → `'123':5 'fat':2 'rat':3`

`json_to_tsvector('english', '{"cat": "The Fat Rats", "dog": 123}'::json, '"all"')` → `'123':9 'cat':1 'dog':7 'fat':4 'rat':5` |
| [](<>) `ts_delete` ( *`vector`* `tsvector`, *`lexeme`* `text` ) → `tsvector`

Removes any occurrence of the given *`lexeme`* from the *`vector`*.

`ts_delete('fat:2,4 cat:3 rat:5A'::tsvector, 'fat')` → `'cat':3 'rat':5A` |
| `ts_delete` ( *`vector`* `tsvector`, *`lexemes`* `text[]` ) → `tsvector`

Removes any occurrences of the lexemes in *`lexemes`* from the *`vector`*.

`ts_delete('fat:2,4 cat:3 rat:5A'::tsvector, ARRAY['fat','rat'])` → `'cat':3` |
| [](<>) `ts_filter` ( *`vector`* `tsvector`, *`weights`* `"char"[]` ) → `tsvector`

Selects only elements with the given *`weights`* from the *`vector`*.

`ts_filter('fat:2,4 cat:3b,7c rat:5A'::tsvector, '{a,b}')` → `'cat':3B 'rat':5A` |
| [](<>) `ts_headline` ( [ *`config`* `regconfig`, ] *`document`* `text`, *`query`* `tsquery` [, *`options`* `text` ] ) → `text`

Displays, in an abbreviated form, the match(es) for the *`query`* in the *`document`*, which must be raw text, not a `tsvector`. Words in the document are normalized according to the specified or default configuration before matching to the query. Use of this function is discussed in [Section 12.3.4](textsearch-controls.html#TEXTSEARCH-HEADLINE), which also describes the available *`options`*.

`ts_headline('The fat cat ate the rat.', 'cat')` → `The fat <b>cat</b> ate the rat.` |
| `ts_headline` ( [ *`config`* `regconfig`, ] *`document`* `json`, *`query`* `tsquery` [, *`options`* `text` ] ) → `text`

`ts_headline` ( [ *`config`* `regconfig`, ] *`document`* `jsonb`, *`query`* `tsquery` [, *`options`* `text` ] ) → `text`

Displays, in an abbreviated form, match(es) for the *`query`* that occur in string values within the JSON *`document`*. See [Section 12.3.4](textsearch-controls.html#TEXTSEARCH-HEADLINE) for more details.

`ts_headline('{"cat": "raining cats and dogs"}'::jsonb, 'cat')` → `{"cat": "raining <b>cats</b> and dogs"}` |
| [](<>) `ts_rank` ( [ *`weights`* `real[]`, ] *`vector`* `tsvector`, *`query`* `tsquery` [, *`normalization`* `integer` ] ) → `real`

Computes a score showing how well the *`vector`* matches the *`query`*. See [Section 12.3.3](textsearch-controls.html#TEXTSEARCH-RANKING) for details.

`ts_rank(to_tsvector('raining cats and dogs'), 'cat')` → `0.06079271` |
| [](<>) `ts_rank_cd` ( [ *`weights`* `real[]`, ] *`vector`* `tsvector`, *`query`* `tsquery` [, *`normalization`* `integer` ] ) → `real`

Computes a score showing how well the *`vector`* matches the *`query`*, using a cover density algorithm. See [Section 12.3.3](textsearch-controls.html#TEXTSEARCH-RANKING) for details.

`ts_rank_cd(to_tsvector('raining cats and dogs'), 'cat')` → `0.1` |
| [](<>) `ts_rewrite` ( *`query`* `tsquery`, *`target`* `tsquery`, *`substitute`* `tsquery` ) → `tsquery`

Replaces occurrences of *`target`* with *`substitute`* within the *`query`*. See [Section 12.4.2.1](textsearch-features.html#TEXTSEARCH-QUERY-REWRITING) for details.

`ts_rewrite('a & b'::tsquery, 'a'::tsquery, 'foo|bar'::tsquery)` → `'b' & ( 'foo' | 'bar' )` |
| `ts_rewrite` ( *`query`* `tsquery`, *`select`* `text` ) → `tsquery`

Replaces portions of the *`query`* according to target(s) and substitute(s) obtained by executing a `SELECT` command. See [Section 12.4.2.1](textsearch-features.html#TEXTSEARCH-QUERY-REWRITING) for details.

`SELECT ts_rewrite('a & b'::tsquery, 'SELECT t,s FROM aliases')` → `'b' & ( 'foo' | 'bar' )` |
| [](<>) `tsquery_phrase` ( *`query1`* `tsquery`, *`query2`* `tsquery` ) → `tsquery`

Constructs a phrase query that searches for matches of *`query1`* and *`query2`* at successive lexemes (same as the `<->` operator).

`tsquery_phrase(to_tsquery('fat'), to_tsquery('cat'))` → `'fat' <-> 'cat'` |
| `tsquery_phrase` ( *`query1`* `tsquery`, *`query2`* `tsquery`, *`distance`* `integer` ) → `tsquery`

Constructs a phrase query that searches for matches of *`query1`* and *`query2`* that occur exactly *`distance`* lexemes apart.

`tsquery_phrase(to_tsquery('fat'), to_tsquery('cat'), 10)` → `'fat' <10> 'cat'` |
| [](<>) `tsvector_to_array` ( `tsvector` ) → `text[]`

Converts a `tsvector` to an array of lexemes.

`tsvector_to_array('fat:2,4 cat:3 rat:5A'::tsvector)` → `{cat,fat,rat}` |
| [](<>) `unnest` ( `tsvector` ) → `setof record` ( *`lexeme`* `text`, *`positions`* `smallint[]`, *`weights`* `text` )

Expands a `tsvector` into a set of rows, one per lexeme.

`SELECT * FROM unnest('cat:3 fat:2,4 rat:5A'::tsvector)` → ``

\```
 lexeme | positions | weights
--------+-----------+---------
 cat    | {3}       | {D}
 fat    | {2,4}     | {D,D}
 rat    | {5}       | {A}

\``` |

### Note

All the text search functions that accept an optional `regconfig` argument will use the configuration specified by [default_text_search_config](runtime-config-client.html#GUC-DEFAULT-TEXT-SEARCH-CONFIG) when that argument is omitted.

The functions in [Table 9.43](functions-textsearch.html#TEXTSEARCH-FUNCTIONS-DEBUG-TABLE) are listed separately because they are not usually used in everyday text searching operations. They are primarily helpful for development and debugging of new text search configurations.

**Table 9.43. Text Search Debugging Functions**

| Function

Description

Example(s) |
| -------------------------- |
| [](<>) `ts_debug` ( [ *`config`* `regconfig`, ] *`document`* `text` ) → `setof record` ( *`alias`* `text`, *`description`* `text`, *`token`* `text`, *`dictionaries`* `regdictionary[]`, *`dictionary`* `regdictionary`, *`lexemes`* `text[]` )

Extracts and normalizes tokens from the *`document`* according to the specified or default text search configuration, and returns information about how each token was processed. See [Section 12.8.1](textsearch-debugging.html#TEXTSEARCH-CONFIGURATION-TESTING) for details.

`ts_debug('english', 'The Brightest supernovaes')` → `(asciiword,"Word, all ASCII",The,{english_stem},english_stem,{}) ...` |
| [](<>) `ts_lexize` ( *`dict`* `regdictionary`, *`token`* `text` ) → `text[]`

Returns an array of replacement lexemes if the input token is known to the dictionary, or an empty array if the token is known to the dictionary but it is a stop word, or NULL if it is not a known word. See [Section 12.8.3](textsearch-debugging.html#TEXTSEARCH-DICTIONARY-TESTING) for details.

`ts_lexize('english_stem', 'stars')` → `{star}` |
| [](<>) `ts_parse` ( *`parser_name`* `text`, *`document`* `text` ) → `setof record` ( *`tokid`* `integer`, *`token`* `text` )

Extracts tokens from the *`document`* using the named parser. See [Section 12.8.2](textsearch-debugging.html#TEXTSEARCH-PARSER-TESTING) for details.

`ts_parse('default', 'foo - bar')` → `(1,foo) ...` |
| `ts_parse` ( *`parser_oid`* `oid`, *`document`* `text` ) → `setof record` ( *`tokid`* `integer`, *`token`* `text` )

Extracts tokens from the *`document`* using a parser specified by OID. See [Section 12.8.2](textsearch-debugging.html#TEXTSEARCH-PARSER-TESTING) for details.

`ts_parse(3722, 'foo - bar')` → `(1,foo) ...` |
| [](<>) `ts_token_type` ( *`parser_name`* `text` ) → `setof record` ( *`tokid`* `integer`, *`alias`* `text`, *`description`* `text` )

Returns a table that describes each type of token the named parser can recognize. See [Section 12.8.2](textsearch-debugging.html#TEXTSEARCH-PARSER-TESTING) for details.

`ts_token_type('default')` → `(1,asciiword,"Word, all ASCII") ...` |
| `ts_token_type` ( *`parser_oid`* `oid` ) → `setof record` ( *`tokid`* `integer`, *`alias`* `text`, *`description`* `text` )

Returns a table that describes each type of token a parser specified by OID can recognize. See [Section 12.8.2](textsearch-debugging.html#TEXTSEARCH-PARSER-TESTING) for details.

`ts_token_type(3722)` → `(1,asciiword,"Word, all ASCII") ...` |
| [](<>) `ts_stat` ( *`sqlquery`* `text` [, *`weights`* `text` ] ) → `setof record` ( *`word`* `text`, *`ndoc`* `integer`, *`nentry`* `integer` )

Executes the *`sqlquery`*, which must return a single `tsvector` column, and returns statistics about each distinct lexeme contained in the data. See [Section 12.4.4](textsearch-features.html#TEXTSEARCH-STATISTICS) for details.

`ts_stat('从 apod 中选择向量')`→`(foo,10,15) ...` | diff --git a/docs/X/functions-xml.md b/docs/en/functions-xml.md similarity index 100% rename from docs/X/functions-xml.md rename to docs/en/functions-xml.md diff --git a/docs/en/functions-xml.zh.md b/docs/en/functions-xml.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..b6a61e6526e889ca87f25844623b50422f11ea53 --- /dev/null +++ b/docs/en/functions-xml.zh.md @@ -0,0 +1,496 @@ +## 9.15.XML函数 + +[9.15.1. 生成XML内容](functions-xml.html#FUNCTIONS-PRODUCING-XML) + +[9.15.2. XML谓词](functions-xml.html#FUNCTIONS-XML-PREDICATES) + +[9.15.3. 处理XML](functions-xml.html#FUNCTIONS-XML-PROCESSING) + +[9.15.4. 将表映射到XML](functions-xml.html#FUNCTIONS-XML-MAPPING) + +[](<>) + +本节中描述的函数和类似函数的表达式对类型为`xml`看见[第8.13节](datatype-xml.html)有关`xml`类型函数式表达式`xmlparse`和`xmlserialize`用于转换类型和从类型转换`xml`记录在那里,而不是本节中。 + +使用这些函数中的大多数都需要使用PostgreSQL构建`配置--使用libxml`. + +### 9.15.1.生成XML内容 + +一组函数和类似函数的表达式可用于从SQL数据生成XML内容。因此,它们特别适合将查询结果格式化为XML文档,以便在客户端应用程序中进行处理。 + +#### 9.15.1.1.`xmlcomment` + +[](<>) + +``` +xmlcomment ( text ) → xml +``` + +功能`xmlcomment`创建一个包含XML注释的XML值,该注释的内容为指定的文本。文本不能包含“`--`“或者以`-`,否则生成的构造将不是有效的XML注释。如果参数为null,则结果为null。 + +例子: + +``` +SELECT xmlcomment('hello'); + + xmlcomment +#### 9.15.1.2. `xmlconcat` + +[]() +``` + +xmlcontat(xml)[, ...] ) → xml + +``` + The function `xmlconcat` concatenates a list of individual XML values to create a single value containing an XML content fragment. Null values are omitted; the result is only null if there are no nonnull arguments. + + Example: +``` + +选择xmlconcat('', ''); + +``` + xmlconcat +``` + +#### 9.15.1.3. `xmlelement` + +[](<>) + +``` +xmlelement ( NAME name [, XMLATTRIBUTES ( attvalue [ AS attname ] [, ...] ) ] [, content [, ...]] ) → xml +``` + +这个`xmlelement`表达式生成具有给定名称、属性和内容的XML元素。这个*`名称`*和*`阿特名`*语法中显示的项是简单的标识符,而不是值。这个*`attvalue`*和*`所容纳之物`*项是表达式,可以生成任何PostgreSQL数据类型。其中的论点`XMLATTRIBUTES`生成XML元素的属性;这个*`所容纳之物`*值被连接以形成其内容。 + +例如: + +``` +SELECT xmlelement(name foo); + + xmlelement +#### 9.15.1.4. `xmlforest` + +[]() +``` + +xmlforest(内容)[作为名字][, ...] ) → xml + +``` + The `xmlforest` expression produces an XML forest (sequence) of elements using the given names and content. As for `xmlelement`, each *`name`* must be a simple identifier, while the *`content`* expressions can have any data type. + + Examples: +``` + +选择xmlforest(“abc”作为foo,123作为bar); + +``` + xmlforest +``` + +#### 9.15.1.5. `xmlpi` + +[](<>) + +``` +xmlpi ( NAME name [, content ] ) → xml +``` + +这个`xmlpi`表达式创建XML处理指令。至于`xmlelement`这个*`名称`*必须是简单标识符,而*`所容纳之物`*表达式可以有任何数据类型。这个*`所容纳之物`*,如果存在,则不能包含字符序列`?>`. + +例子: + +``` +SELECT xmlpi(name php, 'echo "hello world";'); + + xmlpi +#### 9.15.1.6. `xmlroot` + +[]() +``` + +xmlroot(xml,版本{text | NO VALUE}[,独立,YES { NO | NO VALUE|] ) → xml + +``` + The `xmlroot` expression alters the properties of the root node of an XML value. If a version is specified, it replaces the value in the root node's version declaration; if a standalone setting is specified, it replaces the value in the root node's standalone declaration. +``` + +选择xmlroot(xmlparse(文档)abc,版本“1.0”,是); + +``` + xmlroot +``` + +#### 9.15.1.7. `xmlagg` + +[](<>) + +``` +xmlagg ( xml ) → xml +``` + +功能`xmlagg`与这里描述的其他函数不同,它是一个聚合函数。它将输入值连接到聚合函数调用,就像`xmlconcat`除了连接发生在行之间,而不是发生在单行中的表达式之间。看见[第9.21节](functions-aggregate.html)有关聚合函数的更多信息。 + +例子: + +``` +CREATE TABLE test (y int, x xml); +INSERT INTO test VALUES (1, 'abc'); +INSERT INTO test VALUES (2, ''); +SELECT xmlagg(x) FROM test; + xmlagg +### 9.15.2. 
XML Predicates + + The expressions described in this section check properties of `xml` values. + +#### 9.15.2.1. `IS DOCUMENT` + +[]() +``` + +xml是一种文档→ 布尔值 + +``` + The expression `IS DOCUMENT` returns true if the argument XML value is a proper XML document, false if it is not (that is, it is a content fragment), or null if the argument is null. See [Section 8.13](datatype-xml.html) about the difference between documents and content fragments. + +#### 9.15.2.2. `IS NOT DOCUMENT` + +[]() +``` + +xml不是文档→ 布尔值 + +``` + The expression `IS NOT DOCUMENT` returns false if the argument XML value is a proper XML document, true if it is not (that is, it is a content fragment), or null if the argument is null. + +#### 9.15.2.3. `XMLEXISTS` + +[]() +``` + +XMLEXISTS(文本传递)[通过{REF | VALUE}] xml [通过{REF | VALUE}]) → 布尔值 + +``` + The function `xmlexists` evaluates an XPath 1.0 expression (the first argument), with the passed XML value as its context item. The function returns false if the result of that evaluation yields an empty node-set, true if it yields any other value. The function returns null if any argument is null. A nonnull value passed as the context item must be an XML document, not a content fragment or any non-XML value. + + Example: +``` + +选择xmlexists('//town)[text()()“多伦多”=]“按价值传递”多伦多渥太华'); + + xmlexists + +#### 9.15.2.4. `xml格式良好` + +[](<>)[](<>)[](<>) + +``` +xml_is_well_formed ( text ) → boolean +xml_is_well_formed_document ( text ) → boolean +xml_is_well_formed_content ( text ) → boolean +``` + +这些函数检查`文本`字符串表示格式良好的XML,返回布尔结果。`xml是格式良好的文档`检查格式良好的文档,而`xml是格式良好的内容`检查格式良好的内容。`xml格式良好`前者是不是[xmloption](runtime-config-client.html#GUC-XMLOPTION)配置参数设置为`文件`,如果设置为`所容纳之物`.这意味着`xml格式良好`用于查看简单的强制转换是否需要键入`xml`将成功,而其他两个函数对于查看`XMLPARSE`他会成功的。 + +例如: + +``` +SET xmloption TO DOCUMENT; +SELECT xml_is_well_formed('<>'); + xml_is_well_formed +### 9.15.3. Processing XML + + To process values of data type `xml`, PostgreSQL offers the functions `xpath` and `xpath_exists`, which evaluate XPath 1.0 expressions, and the `XMLTABLE` table function. + +#### 9.15.3.1. `xpath` + +[]() +``` + +xpath(xpath文本、xml\[、nsarray文本\[])→ xml\[] + +``` + The function `xpath` evaluates the XPath 1.0 expression *`xpath`* (given as text) against the XML value *`xml`*. It returns an array of XML values corresponding to the node-set produced by the XPath expression. If the XPath expression returns a scalar value rather than a node-set, a single-element array is returned. + + The second argument must be a well formed XML document. In particular, it must have a single root node element. + + The optional third argument of the function is an array of namespace mappings. This array should be a two-dimensional `text` array with the length of the second axis being equal to 2 (i.e., it should be an array of arrays, each of which consists of exactly 2 elements). The first element of each array entry is the namespace name (alias), the second the namespace URI. It is not required that aliases provided in this array be the same as those being used in the XML document itself (in other words, both in the XML document and in the `xpath` function context, aliases are *local*). + + Example: +``` + +选择xpath(“/my:a/text()”,“\测试](http://example.com">test)\,数组\[ARRAY][“我的”http'example',com']]); + + xpath + +#### 9.15.3.2. 
`xpath_存在` + +[](<>) + +``` +xpath_exists ( xpath text, xml xml [, nsarray text[] ] ) → boolean +``` + +功能`xpath_存在`是一种特殊形式的`xpath`作用该函数不返回满足XPath 1.0表达式的单个XML值,而是返回一个布尔值,指示查询是否满足(具体来说,它是否生成除空节点集以外的任何值)。此函数相当于`XMLEXISTS`谓词,但它还提供对命名空间映射参数的支持。 + +例子: + +``` +SELECT xpath_exists('/my:a/text()', 'test', + ARRAY[ARRAY['my', 'http://example.com']]); + + xpath_exists +#### 9.15.3.3. `xmltable` + +[]()[]() +``` + +XMLTABLE(\[XMLNAMESPACES(namespace_uri作为namespace_name[, ...]),]行\_表达式传递[通过{REF | VALUE}]文档\_表达式[通过{REF | VALUE}]列名称{type[路径列\_表达式][default default_expression] [非空|空]|对于一般性}[, ...]) → 一套记录 + +``` + The `xmltable` expression produces a table based on an XML value, an XPath filter to extract rows, and a set of column definitions. Although it syntactically resembles a function, it can only appear as a table in a query's `FROM` clause. + + The optional `XMLNAMESPACES` clause gives a comma-separated list of namespace definitions, where each *`namespace_uri`* is a `text` expression and each *`namespace_name`* is a simple identifier. It specifies the XML namespaces used in the document and their aliases. A default namespace specification is not currently supported. + + The required *`row_expression`* argument is an XPath 1.0 expression (given as `text`) that is evaluated, passing the XML value *`document_expression`* as its context item, to obtain a set of XML nodes. These nodes are what `xmltable` transforms into output rows. No rows will be produced if the *`document_expression`* is null, nor if the *`row_expression`* produces an empty node-set or any value other than a node-set. + +*`document_expression`* provides the context item for the *`row_expression`*. It must be a well-formed XML document; fragments/forests are not accepted. The `BY REF` and `BY VALUE` clauses are accepted but ignored, as discussed in [Section D.3.2](xml-limits-conformance.html#FUNCTIONS-XML-LIMITS-POSTGRESQL). + + In the SQL standard, the `xmltable` function evaluates expressions in the XML Query language, but PostgreSQL allows only XPath 1.0 expressions, as discussed in [Section D.3.1](xml-limits-conformance.html#FUNCTIONS-XML-LIMITS-XPATH1). + + The required `COLUMNS` clause specifies the column(s) that will be produced in the output table. See the syntax summary above for the format. A name is required for each column, as is a data type (unless `FOR ORDINALITY` is specified, in which case type `integer` is implicit). The path, default and nullability clauses are optional. + + A column marked `FOR ORDINALITY` will be populated with row numbers, starting with 1, in the order of nodes retrieved from the *`row_expression`*'s result node-set. At most one column may be marked `FOR ORDINALITY`. + +### Note + + XPath 1.0 does not specify an order for nodes in a node-set, so code that relies on a particular order of the results will be implementation-dependent. Details can be found in [Section D.3.1.2](xml-limits-conformance.html#XML-XPATH-1-SPECIFICS). + + The *`column_expression`* for a column is an XPath 1.0 expression that is evaluated for each row, with the current node from the *`row_expression`* result as its context item, to find the value of the column. If no *`column_expression`* is given, then the column name is used as an implicit path. + + If a column's XPath expression returns a non-XML value (which is limited to string, boolean, or double in XPath 1.0) and the column has a PostgreSQL type other than `xml`, the column will be set as if by assigning the value's string representation to the PostgreSQL type. 
(If the value is a boolean, its string representation is taken to be `1` or `0` if the output column's type category is numeric, otherwise `true` or `false`.) + + If a column's XPath expression returns a non-empty set of XML nodes and the column's PostgreSQL type is `xml`, the column will be assigned the expression result exactly, if it is of document or content form. [[8]](#ftn.id-1.5.8.21.7.5.15.2) + + A non-XML result assigned to an `xml` output column produces content, a single text node with the string value of the result. An XML result assigned to a column of any other type may not have more than one node, or an error is raised. If there is exactly one node, the column will be set as if by assigning the node's string value (as defined for the XPath 1.0 `string` function) to the PostgreSQL type. + + The string value of an XML element is the concatenation, in document order, of all text nodes contained in that element and its descendants. The string value of an element with no descendant text nodes is an empty string (not `NULL`). Any `xsi:nil` attributes are ignored. Note that the whitespace-only `text()` node between two non-text elements is preserved, and that leading whitespace on a `text()` node is not flattened. The XPath 1.0 `string` function may be consulted for the rules defining the string value of other XML node types and non-XML values. + + The conversion rules presented here are not exactly those of the SQL standard, as discussed in [Section D.3.1.3](xml-limits-conformance.html#FUNCTIONS-XML-LIMITS-CASTS). + + If the path expression returns an empty node-set (typically, when it does not match) for a given row, the column will be set to `NULL`, unless a *`default_expression`* is specified; then the value resulting from evaluating that expression is used. + + A *`default_expression`*, rather than being evaluated immediately when `xmltable` is called, is evaluated each time a default is needed for the column. If the expression qualifies as stable or immutable, the repeat evaluation may be skipped. This means that you can usefully use volatile functions like `nextval` in *`default_expression`*. + + Columns may be marked `NOT NULL`. If the *`column_expression`* for a `NOT NULL` column does not match anything and there is no `DEFAULT` or the *`default_expression`* also evaluates to null, an error is reported. + + Examples: +``` + +创建表xmldata作为SELECT xml$$ + \AU\\澳大利亚\ + \JP\\日本\\安倍晋三\145935 + + \SG\\新加坡\697 + +$$作为数据; + +选择xmltable.\*从xmldata,XMLTABLE(“//ROWS/ROW”传递数据列id int PATH'@id',序数表示序数,“COUNTRY_NAME”文本,COUNTRY_id文本路径'COUNTRY_id',size_sq_km浮动路径'size[@单位=“平方公里”"]'大小\_其他文本路径'concat(大小[@单位!="“平方公里”_]尺寸[@单位!="“平方公里”_]/@单元),premier_name文本路径“premier_name”默认值“未指定”); + +id |普通性|国家|名称|国家| id |大小|平方公里|大小|其他|总理|名称 + +### 9.15.4.将表映射到XML + +[](<>) + +以下函数将关系表的内容映射为XML值。它们可以被视为XML导出功能: + +``` +table_to_xml ( table regclass, nulls boolean, + tableforest boolean, targetns text ) → xml +query_to_xml ( query text, nulls boolean, + tableforest boolean, targetns text ) → xml +cursor_to_xml ( cursor refcursor, count integer, nulls boolean, + tableforest boolean, targetns text ) → xml +``` + +`表_to _xml`映射作为参数传递的命名表的内容*`桌子`*这个`regclass`类型接受使用常用符号标识表的字符串,包括可选的模式限定和双引号(请参阅[第8.19节](datatype-oid.html)详细信息)。`查询到xml`执行其文本作为参数传递的查询*`查询`*并映射结果集。`光标指向xml`从参数指定的游标中获取指定的行数*`光标`*。如果必须映射大型表,建议使用此变量,因为每个函数都会在内存中建立结果值。 + +如果*`台地森林`*如果为false,则生成的XML文档如下所示: + +``` + + + data + data + + + + ... + + + ... + +``` + +如果*`台地森林`*如果为true,则结果是一个XML内容片段,如下所示: + +``` + + data + data + + + + ... + + +... 
+``` + +如果没有可用的表名,即在映射查询或光标时,字符串`桌子`在第一种格式中使用,`一行`第二种格式。 + +这些格式之间的选择取决于用户。第一种格式是正确的XML文档,这在许多应用程序中都很重要。第二种格式在未来更有用`光标指向xml`如果以后要将结果值重新组合到一个文档中,则该函数将起作用。特别是上面讨论的用于生成XML内容的函数`xmlelement`,可用于根据口味改变结果。 + +数据值的映射方式与函数中描述的相同`xmlelement`在上面 + +参数*`空值`*确定输出中是否应包含空值。如果为true,则列中的空值表示为: + +``` + +``` + +哪里`南印第安湖`是XML架构实例的XML命名空间前缀。将向结果值中添加适当的命名空间声明。如果为false,则只需从输出中忽略包含null值的列。 + +参数*`目标`*指定结果所需的XML命名空间。如果不需要特定的名称空间,则应传递空字符串。 + +以下函数返回描述上述相应函数执行的映射的XML模式文档: + +``` +table_to_xmlschema ( table regclass, nulls boolean, + tableforest boolean, targetns text ) → xml +query_to_xmlschema ( query text, nulls boolean, + tableforest boolean, targetns text ) → xml +cursor_to_xmlschema ( cursor refcursor, nulls boolean, + tableforest boolean, targetns text ) → xml +``` + +为了获得匹配的XML数据映射和XML模式文档,必须传递相同的参数。 + +以下函数在一个文档(或林)中生成XML数据映射和相应的XML模式,并链接在一起。在需要自包含和自描述的结果时,它们可能很有用: + +``` +table_to_xml_and_xmlschema ( table regclass, nulls boolean, + tableforest boolean, targetns text ) → xml +query_to_xml_and_xmlschema ( query text, nulls boolean, + tableforest boolean, targetns text ) → xml +``` + +此外,以下函数可用于生成整个模式或整个当前数据库的类似映射: + +``` +schema_to_xml ( schema name, nulls boolean, + tableforest boolean, targetns text ) → xml +schema_to_xmlschema ( schema name, nulls boolean, + tableforest boolean, targetns text ) → xml +schema_to_xml_and_xmlschema ( schema name, nulls boolean, + tableforest boolean, targetns text ) → xml + +database_to_xml ( nulls boolean, + tableforest boolean, targetns text ) → xml +database_to_xmlschema ( nulls boolean, + tableforest boolean, targetns text ) → xml +database_to_xml_and_xmlschema ( nulls boolean, + tableforest boolean, targetns text ) → xml +``` + +这些函数忽略当前用户无法读取的表。此外,数据库范围的函数会忽略当前用户没有的模式`用法`(查找)的特权。 + +请注意,这些操作可能会产生大量数据,这些数据需要存储在内存中。当请求大型模式或数据库的内容映射时,可能值得考虑单独映射表,甚至可能通过光标映射。 + +架构内容映射的结果如下所示: + +``` + + +table1-mapping + +table2-mapping + +... + + +``` + +其中,表映射的格式取决于*`台地森林`*参数,如上所述。 + +数据库内容映射的结果如下所示: + +``` + + + + ... + + + + ... + + +... + + +``` + +其中模式映射如上所述。 + +作为使用这些函数产生的输出的示例,[例9.1](functions-xml.html#XSLT-XML-HTML)显示一个XSLT样式表,用于转换`表_to _xml _和_xmlschema`指向包含表格数据格式副本的HTML文档。以类似的方式,这些函数的结果可以转换为其他基于XML的格式。 + +**例9.1。用于将SQL/XML输出转换为HTML的XSLT样式表** + +``` + + + + + + + + + + + + + <xsl:value-of select="name(current())"/> + + + + + + + + + + + + + + + + +
+ + +
+ +
+``` diff --git a/docs/X/fuzzystrmatch.md b/docs/en/fuzzystrmatch.md similarity index 100% rename from docs/X/fuzzystrmatch.md rename to docs/en/fuzzystrmatch.md diff --git a/docs/en/fuzzystrmatch.zh.md b/docs/en/fuzzystrmatch.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..79115b96099269b46cd52aacc3db0979f5ec7b2c --- /dev/null +++ b/docs/en/fuzzystrmatch.zh.md @@ -0,0 +1,110 @@ +## F.15。模糊格式 + +[F.15.1。Soundex](fuzzystrmatch.html#id-1.11.7.24.6)[F.15.2。莱文斯坦](fuzzystrmatch.html#id-1.11.7.24.7)[F.15.3。变音](fuzzystrmatch.html#id-1.11.7.24.8)[F.15.4。双变音](fuzzystrmatch.html#id-1.11.7.24.9) + +[](<>) + +这个`模糊格式`模块提供了几个函数来确定字符串之间的相似性和距离。 + +### 小心 + +目前`soundex`,`变音`,`数字电话`和`dmetaphone_alt`函数不能很好地与多字节编码(如UTF-8)配合使用。 + +该模块被认为是“受信任的”,也就是说,它可以由拥有`创造`当前数据库的权限。 + +### F.15.1。Soundex + +Soundex系统是一种通过将相似的发音名称转换为相同代码来匹配它们的方法。它最初在1880年、1900年和1910年的美国人口普查中使用。请注意,Soundex对于非英语名称不是很有用。 + +这个`模糊格式`模块提供两个使用Soundex代码的功能: + +[](<>)[](<>) + +``` +soundex(text) returns text +difference(text, text) returns int +``` + +这个`soundex`函数将字符串转换为其Soundex代码。这个`差别`函数将两个字符串转换为它们的Soundex代码,然后报告匹配代码位置的数量。由于Soundex代码有四个字符,因此结果的范围从零到四,零表示不匹配,四表示完全匹配。(因此,该函数被错误命名为-`相似性`这是一个更好的名字。) + +以下是一些用法示例: + +``` +SELECT soundex('hello world!'); + +SELECT soundex('Anne'), soundex('Ann'), difference('Anne', 'Ann'); +SELECT soundex('Anne'), soundex('Andrew'), difference('Anne', 'Andrew'); +SELECT soundex('Anne'), soundex('Margaret'), difference('Anne', 'Margaret'); + +CREATE TABLE s (nm text); + +INSERT INTO s VALUES ('john'); +INSERT INTO s VALUES ('joan'); +INSERT INTO s VALUES ('wobbly'); +INSERT INTO s VALUES ('jack'); + +SELECT * FROM s WHERE soundex(nm) = soundex('john'); + +SELECT * FROM s WHERE difference(s.nm, 'john') > 2; +``` + +### F.15.2。莱文施坦 + +此函数用于计算两个字符串之间的Levenshtein距离: + +[](<>)[](<>) + +``` +levenshtein(text source, text target, int ins_cost, int del_cost, int sub_cost) returns int +levenshtein(text source, text target) returns int +levenshtein_less_equal(text source, text target, int ins_cost, int del_cost, int sub_cost, int max_d) returns int +levenshtein_less_equal(text source, text target, int max_d) returns int +``` + +二者都`来源`和`目标`可以是任何非空字符串,最多255个字符。成本参数分别指定字符插入、删除或替换的费用。您可以省略成本参数,就像在函数的第二个版本中一样;在这种情况下,它们都默认为1。 + +`levenshtein_less_equal`是Levenshtein函数的一个加速版本,仅适用于感兴趣的小距离。如果实际距离小于或等于`麦克斯`然后`levenshtein_less_equal`返回正确的距离;否则,它将返回大于`麦克斯`如果`麦克斯`如果是负面的,那么行为与`莱文施坦`. + +例如: + +``` +test=# SELECT levenshtein('GUMBO', 'GAMBOL'); + levenshtein +### F.15.3. Metaphone + + Metaphone, like Soundex, is based on the idea of constructing a representative code for an input string. Two strings are then deemed similar if they have the same codes. + + This function calculates the metaphone code of an input string: + +[]() +``` + +变音(文本源,int max_output_length)返回文本 + +``` +`source` has to be a non-null string with a maximum of 255 characters. `max_output_length` sets the maximum length of the output metaphone code; if longer, the output is truncated to this length. 
+ + Example: +``` + +测试=#选择变音('GUMBO',4);变音 + +### F.15.4。双变音 + +双变音系统为给定的输入字符串计算两个“听起来像”的字符串——“主”和“备用”。在大多数情况下,它们是相同的,但对于非英语名称,它们可能会有点不同,这取决于发音。这些函数计算主代码和备用代码: + +[](<>)[](<>) + +``` +dmetaphone(text source) returns text +dmetaphone_alt(text source) returns text +``` + +输入字符串没有长度限制。 + +例子: + +``` +test=# SELECT dmetaphone('gumbo'); + dmetaphone +``` diff --git a/docs/X/geqo-biblio.md b/docs/en/geqo-biblio.md similarity index 100% rename from docs/X/geqo-biblio.md rename to docs/en/geqo-biblio.md diff --git a/docs/en/geqo-biblio.zh.md b/docs/en/geqo-biblio.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..9f4b08c8c5a10412d1892518b9c3303c0f752a91 --- /dev/null +++ b/docs/en/geqo-biblio.zh.md @@ -0,0 +1,11 @@ +## 60.4.进一步阅读 + +以下资源包含有关遗传算法的其他信息: + +- [《搭便车者进化计算指南》](http://www.faqs.org/faqs/ai-faq/genetic/part1/),(常见问题解答) + +- [进化计算及其在艺术设计中的应用](https://www.red3d.com/cwr/evolve.html)克雷格·雷诺兹著 + +- [\[elma04\]](biblio.html#ELMA04) + +- [\[方\]](biblio.html#FONG) diff --git a/docs/X/geqo-intro.md b/docs/en/geqo-intro.md similarity index 100% rename from docs/X/geqo-intro.md rename to docs/en/geqo-intro.md diff --git a/docs/en/geqo-intro.zh.md b/docs/en/geqo-intro.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..1b9b0002eed581802f212a9d088b2cdec03a5caf --- /dev/null +++ b/docs/en/geqo-intro.zh.md @@ -0,0 +1,9 @@ +## 60.1.查询处理是一个复杂的优化问题 + +在所有关系运算符中,最难处理和优化的是*参加*。可能的查询计划数随查询中的联接数呈指数增长。进一步的优化工作是由各种*联接方法*(例如,PostgreSQL中的嵌套循环、散列联接、合并联接)来处理单个联接和各种*索引*(例如,PostgreSQL中的B树、哈希、GiST和GIN)作为关系的访问路径。 + +普通的PostgreSQL查询优化器执行*近乎穷尽的搜索*在替代战略的空间上。该算法首先在IBM的System R数据库中引入,生成一个接近最优的联接顺序,但当查询中的联接数量增加时,可能会占用大量的时间和内存空间。这使得普通的PostgreSQL查询优化器不适合连接大量表的查询。 + +在德国弗赖贝格的矿业和技术大学自动控制研究所遇到了一些问题,当它想使用PostgreSQL作为一个基于决策支持知识的系统维护电网的后端时。DBMS需要为基于知识的系统的推理机处理大型连接查询。使用普通查询优化器进行的这些查询中的连接数不可行。 + +下面我们将介绍*遗传算法*以对涉及大量联接的查询有效的方式解决联接排序问题。 diff --git a/docs/X/geqo-intro2.md b/docs/en/geqo-intro2.md similarity index 100% rename from docs/X/geqo-intro2.md rename to docs/en/geqo-intro2.md diff --git a/docs/en/geqo-intro2.zh.md b/docs/en/geqo-intro2.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..f531a977a3695942672942c4d2d424acc4ac59c2 --- /dev/null +++ b/docs/en/geqo-intro2.zh.md @@ -0,0 +1,11 @@ +## 60.2.遗传算法 + +遗传算法(GA)是一种通过随机搜索操作的启发式优化方法。优化问题的可能解集被视为*人口*属于*个人*.个人对环境的适应程度由其*健身*. + +搜索空间中个体的坐标由*染色体*,本质上是一组字符串。A.*基因*是染色体的一个分段,它编码被优化的单个参数的值。基因的典型编码可能是*二进制的*或*整数*. + +通过模拟进化操作*重组*,*突变*和*选择*新一代的搜索点显示出比他们的祖先更高的平均适合度。[图60.1](geqo-intro2.html#GEQO-FIGURE)说明了这些步骤。 + +**图60.1。遗传算法的结构** + +根据公司的说法。人工智能。遗传常见问题解答遗传算法不是对问题解决方案的纯随机搜索,这一点无论怎样强调都不过分。遗传算法使用随机过程,但结果明显是非随机的(优于随机)。 diff --git a/docs/X/geqo-pg-intro.md b/docs/en/geqo-pg-intro.md similarity index 100% rename from docs/X/geqo-pg-intro.md rename to docs/en/geqo-pg-intro.md diff --git a/docs/en/geqo-pg-intro.zh.md b/docs/en/geqo-pg-intro.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..c7e10002449571a3c12561d2b059371a0ef57ffc --- /dev/null +++ b/docs/en/geqo-pg-intro.zh.md @@ -0,0 +1,46 @@ +## 60.3.PostgreSQL中的遗传查询优化(GEQO) + +[60.3.1. 使用GEQO生成可能的计划](geqo-pg-intro.html#id-1.10.12.5.6) + +[60.3.2. 
PostgreSQL GEQO的未来实施任务](geqo-pg-intro.html#GEQO-FUTURE) + +GEQO模块处理查询优化问题时,就像处理著名的旅行商问题(TSP)一样。可能的查询计划被编码为整数字符串。每个字符串表示从一个查询关系到下一个查询关系的连接顺序。例如,连接树 + +``` + /\ + /\ 2 + /\ 3 +4 1 +``` + +由整数字符串“4-1-3-2”编码,这意味着首先连接关系“4”和“1”,然后是“3”,然后是“2”,其中1、2、3、4是PostgreSQL优化器中的关系ID。 + +PostgreSQL中GEQO实现的具体特征如下: + +- a的用法*稳态*GA(替换群体中最不合适的个体,而不是整代替换)允许快速收敛到改进的查询计划。这对于在合理的时间内处理查询至关重要; + +- 使用*边缘复合交叉*特别适用于通过遗传算法求解TSP时保持较低的边缘损失; + +- 不推荐使用突变作为遗传算子,因此不需要修复机制来生成合法的TSP旅行。 + + GEQO模块的部分内容改编自D.Whitley的Genitor算法。 + + GEQO模块允许PostgreSQL查询优化器通过非穷举搜索有效地支持大型连接查询。 + +### 60.3.1.使用GEQO生成可能的计划 + +GEQO规划流程使用标准规划器代码生成扫描个人关系的计划。然后使用遗传方法制定加入计划。如上所示,每个候选连接计划由一个连接基本关系的序列表示。在初始阶段,GEQO代码只是随机生成一些可能的连接序列。对于考虑的每个连接序列,将调用标准planner代码来估计使用该连接序列执行查询的成本。(对于连接序列的每个步骤,都会考虑所有三种可能的连接策略;并且所有最初确定的关系扫描计划都可用。估计成本是这些可能性中最便宜的。)估计成本较低的连接序列被认为比成本较高的连接序列“更适合”。遗传算法丢弃最不合适的候选。然后,通过组合更合适的候选基因来生成新的候选基因——也就是说,通过使用已知低成本连接序列的随机选择部分来创建新序列以供考虑。重复该过程,直到考虑到预设数量的连接序列;然后使用搜索过程中任何时候找到的最佳计划生成完成的计划。 + +这一过程本质上是不确定的,因为在最初的群体选择和随后的最佳候选者“突变”过程中都会进行随机选择。为了避免所选计划发生意外变化,每次运行GEQO算法时,都会用当前值重新启动随机数生成器[盖库\_种子](runtime-config-query.html#GUC-GEQO-SEED)参数设置。只要`蛤蟆籽`如果其他GEQO参数保持不变,那么将为给定的查询生成相同的计划(以及其他计划器输入,如统计数据)。要尝试不同的搜索路径,请尝试更改`蛤蟆籽`. + +### 60.3.2.PostgreSQL GEQO的未来实施任务 + +改进遗传算法参数设置仍需努力。存档`src/backend/optimizer/geqo/geqo_main。C`例行公事`给我泳池大小`和`给我多少代`,我们必须为参数设置找到折衷方案,以满足两个相互竞争的要求: + +- 查询计划的最优性 + +- 计算时间 + + 在当前的实现中,通过从头开始运行standard planner的连接选择和成本估算代码来估计每个候选连接序列的适合度。由于不同的候选者使用相似的连接子序列,大量工作将被重复。通过保留子联接的成本估算,这可以大大加快速度。问题是要避免在保持这种状态时花费不合理的内存量。 + + 在更基本的层面上,目前尚不清楚使用为TSP设计的GA算法来解决查询优化是否合适。在TSP的情况下,与任何子字符串(部分遍历)相关的成本独立于遍历的其余部分,但对于查询优化来说,这肯定不是真的。因此,边缘重组交叉是否是最有效的变异过程值得怀疑。 diff --git a/docs/X/gin-builtin-opclasses.md b/docs/en/gin-builtin-opclasses.md similarity index 100% rename from docs/X/gin-builtin-opclasses.md rename to docs/en/gin-builtin-opclasses.md diff --git a/docs/en/gin-builtin-opclasses.zh.md b/docs/en/gin-builtin-opclasses.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..fd2b9a84a9433f4fc68aa24dc65c546416913154 --- /dev/null +++ b/docs/en/gin-builtin-opclasses.zh.md @@ -0,0 +1,25 @@ +## 67.2.内置运算符类 + +PostgreSQL核心发行版包括中所示的GIN运算符类[表67.1](gin-builtin-opclasses.html#GIN-BUILTIN-OPCLASSES-TABLE)(中介绍的一些可选模块)[附录F](contrib.html)提供额外的轧棉机操作员课程。) + +**表67.1。内置的GIN操作符类** + +| 名称 | 可转位算子 | +| --- | ----- | +| `阵列运算` | `&&(任意数组,任意数组)` | +| `@>(任意数组,任意数组)` | | +| `<@(任意数组,任意数组)` | | +| `=(任意数组,任意数组)` | | +| `jsonb_ops` | `@>(jsonb,jsonb)` | +| `@? (jsonb,jsonpath)` | | +| `@@(jsonb,jsonpath)` | | +| `? (jsonb,文本)` | | +| `?|(jsonb,文本[])` | | +| ` ;(jsonb,文本[])` | | +| `jsonb_路径_操作` | `@>(jsonb,jsonb)` | +| `@?(jsonb,jsonpath)` | | +| `@@(jsonb,jsonpath)` | | +| `Tsu ops` | `@@(tsvector,tsquery)` | +| `@@(tsvector,tsquery)` | | + +类型的两个运算符类的`jsonb`(笑声)`jsonb_ops`是默认值。`jsonb_路径_操作`只支持少数运营商,但为这些运营商提供了更好的性能。湖[第8.14.4节](datatype-json.html#JSON-INDEXING)详细信息。 diff --git a/docs/X/gin-implementation.md b/docs/en/gin-implementation.md similarity index 100% rename from docs/X/gin-implementation.md rename to docs/en/gin-implementation.md diff --git a/docs/en/gin-implementation.zh.md b/docs/en/gin-implementation.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..0a06b5b9a1a999e071291a2d7c185a99c2541089 --- /dev/null +++ b/docs/en/gin-implementation.zh.md @@ -0,0 +1,25 @@ +## 67.4.实施 + +[67.4.1. 快速更新技术](gin-implementation.html#GIN-FAST-UPDATE) + +[67.4.2. 
部分匹配算法](gin-implementation.html#GIN-PARTIAL-MATCH) + +在内部,GIN索引包含在键上构造的B树索引,其中每个键都是一个或多个索引项(例如,数组的一个成员)的元素,叶页中的每个元组都包含指向堆指针的B树的指针(“发布树”),或者一个简单的堆指针列表(“发布列表”),当列表足够小,可以与键值一起放入一个索引元组时。[图67.1](gin-implementation.html#GIN-INTERNALS-FIGURE)说明了GIN索引的这些组件。 + +从PostgreSQL 9.1开始,索引中可以包含空键值。此外,占位符null也包含在索引中,用于索引为null或不包含键的索引项`提取值`。这允许搜索应该查找空项目的内容。 + +多列GIN索引是通过在复合值(列号、键值)上构建单个B树来实现的。不同列的键值可以是不同的类型。 + +**图67.1。杜松子酒** + +### 67.4.1.快速更新技术 + +由于反向索引的本质,更新GIN索引的速度往往很慢:插入或更新一个堆行可能会导致多次插入索引(从索引项中提取的每个键对应一个)。GIN可以通过将新元组插入到一个临时的、未排序的待处理项列表中来推迟大部分工作。当表格被抽真空或自动分析时,或者`杜松子酒清洁待处理清单`函数,或者如果挂起列表大于[杜松子酒\_悬而未决的\_列表\_限度](runtime-config-client.html#GUC-GIN-PENDING-LIST-LIMIT),使用初始索引创建期间使用的相同批量插入技术将条目移动到主GIN数据结构。这大大提高了GIN索引的更新速度,甚至计算了额外的真空开销。此外,开销工作可以通过后台进程而不是前台查询处理来完成。 + +这种方法的主要缺点是,除了搜索常规索引外,搜索还必须扫描待处理项列表,因此大量待处理项列表将显著降低搜索速度。另一个缺点是,虽然大多数更新都很快,但导致挂起列表变得“太大”的更新会导致立即的清理周期,因此比其他更新慢得多。正确使用自动真空可以最大限度地减少这两个问题。 + +如果一致的响应时间比更新速度更重要,可以通过关闭`快速更新`GIN索引的存储参数。看见[创建索引](sql-createindex.html)详细信息。 + +### 67.4.2.部分匹配算法 + +GIN可以支持“部分匹配”查询,在这种查询中,查询不会确定一个或多个键的精确匹配,但可能的匹配属于合理狭窄的键值范围(在`比较`支持方法)。这个`提取查询`方法,而不是返回要精确匹配的键值,而是返回一个键值,该键值是要搜索的范围的下限,并设置`P匹配`这是真的。然后使用`比较部分`方法`比较部分`对于匹配的索引键,必须返回零;对于仍在要搜索的范围内的非匹配项,必须返回小于零;如果索引键超出了可以匹配的范围,则必须返回大于零。 diff --git a/docs/X/gin-limit.md b/docs/en/gin-limit.md similarity index 100% rename from docs/X/gin-limit.md rename to docs/en/gin-limit.md diff --git a/docs/en/gin-limit.zh.md b/docs/en/gin-limit.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..e4d1abc12ad7bac6b77b21c07c07da08a9a951b7 --- /dev/null +++ b/docs/en/gin-limit.zh.md @@ -0,0 +1,3 @@ +## 67.6.局限性 + +GIN假设可转位运算符是严格的。这意味着`提取值`不会对空项值调用(而是自动创建占位符索引项),并且`提取查询`也不会对空查询值调用(相反,该查询被假定为不可满足)。但是请注意,支持包含在非null复合项或查询值中的null键值。 diff --git a/docs/X/gist-builtin-opclasses.md b/docs/en/gist-builtin-opclasses.md similarity index 100% rename from docs/X/gist-builtin-opclasses.md rename to docs/en/gist-builtin-opclasses.md diff --git a/docs/en/gist-builtin-opclasses.zh.md b/docs/en/gist-builtin-opclasses.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..5f212185b16bf270f8a885bd67b8dca3d5dd1fca --- /dev/null +++ b/docs/en/gist-builtin-opclasses.zh.md @@ -0,0 +1,114 @@ +## 65.2.内置运算符类 + +PostgreSQL核心发行版包括中所示的GiST运算符类[表65.1](gist-builtin-opclasses.html#GIST-BUILTIN-OPCLASSES-TABLE)(中介绍的一些可选模块)[附录F](contrib.html)提供其他GiST运算符类。) + +**表65.1。内置GiST运算符类** + +| 名称 | 可转位算子 | 排序运算符 | +| --- | ----- | ----- | +| `箱子操作` | `<<(盒子,盒子)` | `<->(框、点)` | +| `&<(盒子,盒子)` | | | +| `&&(盒子,盒子)` | | | +| `&>(盒子,盒子)` | | | +| `>>(盒子,盒子)` | | | +| `~=(盒子,盒子)` | | | +| `@>(盒子,盒子)` | | | +| `<@(盒子,盒子)` | | | +| `&<|(盒子,盒子)` | | | +| `<<|(盒子,盒子)` | | | +| `|>>(盒子,盒子)` | | | +| `|&>(盒子,盒子)` | | | +| `(盒子,盒子)` | | | +| `@(盒子,盒子)` | | | +| `圆圈行动` | `<<(圆圈,圆圈)` | `<->(圆,点)` | +| `&<(圆圈,圆圈)` | | | +| `&>(圈,圈)` | | | +| `>>(圈,圈)` | | | +| `<@(圆圈,圆圈)` | | | +| `@>(圈,圈)` | | | +| `~=(圆圈,圆圈)` | | | +| `&&(圈,圈)` | | | +| `|>>(圈,圈)` | | | +| `<<|(圆圈,圆圈)` | | | +| `&<|(圆圈,圆圈)` | | | +| `|&>(圈,圈)` | | | +| `@(圈,圈)` | | | +| `(圆圈,圆圈)` | | | +| `内特奥普酒店` | `<(inet,inet)` | | +| `<<=(inet,inet)` | | | +| `>>(inet,inet)` | | | +| `>>=(inet,inet)` | | | +| `=(inet,inet)` | | | +| `<>(inet,inet)` | | | +| `<(inet,inet)` | | | +| `<=(inet,inet)` | | | +| `>(inet,inet)` | | | +| `>=(inet,inet)` | | | +| `&&(inet,inet)` | | | +| `multirange_ops` | `= (anymultirange, anymultirange)` | | +| `&& (anymultirange, anymultirange)` | | | +| `&& (anymultirange, anyrange)` | | | +| `@> (anymultirange, anyelement)` | | | +| `@> (anymultirange, anymultirange)` | | | +| `@> 
(anymultirange, anyrange)` | | | +| `<@ (anymultirange, anymultirange)` | | | +| `<@ (anymultirange, anyrange)` | | | +| `<< (anymultirange, anymultirange)` | | | +| `<< (anymultirange, anyrange)` | | | +| `>> (anymultirange, anymultirange)` | | | +| `>> (anymultirange, anyrange)` | | | +| `&< (anymultirange, anymultirange)` | | | +| `&< (anymultirange, anyrange)` | | | +| `&> (anymultirange, anymultirange)` | | | +| `&> (anymultirange, anyrange)` | | | +| `-|- (anymultirange, anymultirange)` | | | +| `-|- (anymultirange, anyrange)` | | | +| `point_ops` | `|>> (point, point)` | `<-> (point, point)` | +| `<< (point, point)` | | | +| `>> (point, point)` | | | +| `<<| (point, point)` | | | +| `~= (point, point)` | | | +| `<@ (point, box)` | | | +| `<@(点、多边形)` | | | +| `<@(点、圆)` | | | +| `保利奥普斯酒店` | `<<(多边形,多边形)` | `<->(多边形,点)` | +| `&<(多边形,多边形)` | | | +| `&>(多边形,多边形)` | | | +| `>>(多边形,多边形)` | | | +| `<@(多边形,多边形)` | | | +| `@>(多边形,多边形)` | | | +| `~=(多边形,多边形)` | | | +| `&&(多边形,多边形)` | | | +| `<<|(多边形,多边形)` | | | +| `&<|(多边形,多边形)` | | | +| `|&>(多边形,多边形)` | | | +| `|>>(多边形,多边形)` | | | +| `@(多边形,多边形)` | | | +| `(多边形,多边形)` | | | +| `射程行动` | `=(任意范围,任意范围)` | | +| `&&(任意范围,任意范围)` | | | +| `&&(任意范围,任意多范围)` | | | +| `@>(任意范围,任意元素)` | | | +| `@>(任意范围,任意范围)` | | | +| `@>(任意范围,任意多范围)` | | | +| `<@(任意范围,任意范围)` | | | +| `<@(任意范围,任意多范围)` | | | +| `<<(鹿角,鹿角)` | | | +| `<<(鹿角,鹿角)` | | | +| `>>(鹿角,鹿角)` | | | +| `>>(鹿角,鹿角)` | | | +| `&<(鹿角,鹿角)` | | | +| `&<(鹿角,鹿角)` | | | +| `&>(鹿角,鹿角)` | | | +| `&>(鹿角,鹿角)` | | | +| `-|-(鹿角,鹿角)` | | | +| `-|-(鹿角,鹿角)` | | | +| `Tsu_ops` | `<@(tsquery,tsquery)` | | +| `@>(tsquery,tsquery)` | | | +| `Tsu ops` | `@@(tsvector,tsquery)` | | + +出于历史原因`内特奥普酒店`运算符类不是类型的默认类`内特`和`苹果酒`.要使用它,请在`创建索引`例如 + +``` +CREATE INDEX ON my_table USING GIST (my_inet_column inet_ops); +``` diff --git a/docs/X/gist-extensibility.md b/docs/en/gist-extensibility.md similarity index 100% rename from docs/X/gist-extensibility.md rename to docs/en/gist-extensibility.md diff --git a/docs/en/gist-extensibility.zh.md b/docs/en/gist-extensibility.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..a34cf70c971e5aef5c11037dad1bf411dc0247e4 --- /dev/null +++ b/docs/en/gist-extensibility.zh.md @@ -0,0 +1,618 @@ +## 65.3.可扩展性 + +传统上,实现一种新的索引访问方法意味着很多困难的工作。有必要了解数据库的内部工作原理,例如锁管理器和提前写入日志。GiST接口具有很高的抽象级别,只需要访问方法实现者实现所访问的数据类型的语义。GiST层本身负责并发、记录和搜索树结构。 + +这种可扩展性不应与其他标准搜索树在可处理数据方面的可扩展性相混淆。例如,PostgreSQL支持可扩展的B树和散列索引。这意味着您可以使用PostgreSQL在任何想要的数据类型上构建B树或散列。但是B-树只支持范围谓词(`<`,`=`,`>`),哈希索引只支持相等查询。 + +因此,如果使用PostgreSQL B树为图像集合编制索引,则只能发出诸如“imagex是否等于imagey”、“imagex是否小于imagey”和“imagex是否大于imagey”之类的查询。根据您在本文中如何定义“等于”、“小于”和“大于”,这可能会很有用。然而,通过使用基于要点的索引,您可以创建方法来询问特定领域的问题,可能是“查找所有马的图像”或“查找所有暴露的图像”。 + +启动并运行GiST访问方法只需实现几个用户定义的方法,这些方法定义树中键的行为。当然,这些方法必须非常花哨,才能支持花哨的查询,但对于所有标准查询(B树、R树等),它们都相对简单。简而言之,GiST结合了可扩展性、通用性、代码重用和干净的接口。 + +GiST的索引运算符类必须提供五种方法,六种是可选的。索引的正确性通过正确执行`相同的`,`一致的`和`协会`方法,而索引的效率(大小和速度)将取决于`处罚`和`皮克斯普利特`方法。有两种可选方法:`压紧`和`减压`,它允许索引具有与其索引的数据不同类型的内部树数据。叶子应该是索引数据类型,而其他树节点可以是任何C结构(但您仍然必须遵循这里的PostgreSQL数据类型规则,请参阅关于)`变长`用于可变大小的数据)。如果树的内部数据类型存在于SQL级别,则`存储`选择`创建操作符类`可以使用命令。可选的第八种方法是`距离`,如果operator类希望支持有序扫描(最近邻搜索),则需要该选项。可选的第九种方法`取来`如果operator类希望支持仅索引扫描,则需要`压紧`方法被省略。可选的第十种方法`选项`如果运算符类具有用户指定的参数,则需要。可选的第十一种方法`sortsupport`用于加快建立要点索引。 + +`一致的` + +给定一个索引项`p`和一个查询值`q`,此函数确定索引项是否与查询“一致”;也就是说,谓词可以“*`索引列`* *`可转位算子`* `q`“对于由索引项表示的任何行,是否为真?”?对于叶索引条目,这相当于测试可索引条件,而对于内部树节点,这决定是否需要扫描树节点表示的索引的子树。当结果是`符合事实的`A.`复查`国旗也必须归还。这表明谓词是肯定为真还是仅可能为真。如果`复查` = `错误的`然后索引准确地测试了谓词条件,而如果`复查` = 
`符合事实的`这一排只是一场候选赛。在这种情况下,系统将自动评估*`可转位算子`*对照实际行值,查看它是否真的匹配。这种约定允许GiST同时支持无损和有损索引结构。 + +函数的SQL声明必须如下所示: + +``` +CREATE OR REPLACE FUNCTION my_consistent(internal, data_type, smallint, oid, internal) +RETURNS bool +AS 'MODULE_PATHNAME' +LANGUAGE C STRICT; +``` + +C模块中的匹配代码可以遵循以下框架: + +``` +PG_FUNCTION_INFO_V1(my_consistent); + +Datum +my_consistent(PG_FUNCTION_ARGS) +{ + GISTENTRY *entry = (GISTENTRY *) PG_GETARG_POINTER(0); + data_type *query = PG_GETARG_DATA_TYPE_P(1); + StrategyNumber strategy = (StrategyNumber) PG_GETARG_UINT16(2); + /* Oid subtype = PG_GETARG_OID(3); */ + bool *recheck = (bool *) PG_GETARG_POINTER(4); + data_type *key = DatumGetDataType(entry->key); + bool retval; + + /* + * determine return value as a function of strategy, key and query. + * + * Use GIST_LEAF(entry) to know where you're called in the index tree, + * which comes handy when supporting the = operator for example (you could + * check for non empty union() in non-leaf nodes and equality in leaf + * nodes). + */ + + *recheck = true; /* or false if check is exact */ + + PG_RETURN_BOOL(retval); +} +``` + +在这里`钥匙`是索引中的一个元素,并且`查询`在索引中查找的值。这个`战略数字`参数指示要应用的运算符类中的哪个运算符-它与`创建操作符类`命令 + +根据类中包含的运算符,类的数据类型`查询`可能因运算符而异,因为它将是运算符右侧的任何类型,这可能不同于左侧显示的索引数据类型。(上面的代码框架假设只有一种类型是可能的;如果不是,则获取`查询`参数值必须取决于运算符。)建议`一致的`函数将opclass的索引数据类型用于`查询`参数,即使实际类型可能是其他类型,具体取决于运算符。 + +`协会` + +此方法整合树中的信息。给定一组条目,此函数将生成一个表示所有给定条目的新索引条目。 + +函数的SQL声明必须如下所示: + +``` +CREATE OR REPLACE FUNCTION my_union(internal, internal) +RETURNS storage_type +AS 'MODULE_PATHNAME' +LANGUAGE C STRICT; +``` + +C模块中的匹配代码可以遵循以下框架: + +``` +PG_FUNCTION_INFO_V1(my_union); + +Datum +my_union(PG_FUNCTION_ARGS) +{ + GistEntryVector *entryvec = (GistEntryVector *) PG_GETARG_POINTER(0); + GISTENTRY *ent = entryvec->vector; + data_type *out, + *tmp, + *old; + int numranges, + i = 0; + + numranges = entryvec->n; + tmp = DatumGetDataType(ent[0].key); + out = tmp; + + if (numranges == 1) + { + out = data_type_deep_copy(tmp); + + PG_RETURN_DATA_TYPE_P(out); + } + + for (i = 1; i < numranges; i++) + { + old = out; + tmp = DatumGetDataType(ent[i].key); + out = my_union_implementation(out, tmp); + } + + PG_RETURN_DATA_TYPE_P(out); +} +``` + +如您所见,在这个框架中,我们处理的是一种数据类型`并集(X,Y,Z)=并集(并集(X,Y,Z)`。通过在这个GiST支持方法中实现适当的并集算法,在不支持这种情况的情况下,支持数据类型是很容易的。 + +调查的结果`协会`函数必须是索引的存储类型的值,不管它是什么(它可能不同于索引列的类型,也可能不同于索引列的类型)。这个`协会`函数应该返回一个指向新函数的指针`帕洛克()`我忘记了。即使没有类型更改,也不能按原样返回输入值。 + +如上图所示`协会`功能第一`内部的`争论实际上是一场争论`GistEntryVector`指针。第二个参数是指向整数变量的指针,可以忽略它。(过去要求`协会`函数将其结果值的大小存储到该变量中,但这不再是必需的。) + +`压紧` + +将数据项转换为适合索引页中物理存储的格式。如果`压紧`方法,数据项存储在索引中,无需修改。 + +函数的SQL声明必须如下所示: + +``` +CREATE OR REPLACE FUNCTION my_compress(internal) +RETURNS internal +AS 'MODULE_PATHNAME' +LANGUAGE C STRICT; +``` + +C模块中的匹配代码可以遵循以下框架: + +``` +PG_FUNCTION_INFO_V1(my_compress); + +Datum +my_compress(PG_FUNCTION_ARGS) +{ + GISTENTRY *entry = (GISTENTRY *) PG_GETARG_POINTER(0); + GISTENTRY *retval; + + if (entry->leafkey) + { + /* replace entry->key with a compressed version */ + compressed_data_type *compressed_data = palloc(sizeof(compressed_data_type)); + + /* fill *compressed_data from entry->key ... 
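         * (a hypothetical illustration, not part of the skeleton proper:
         * for, say, a polygon opclass one would compute the value's
         * bounding box here and store that in *compressed_data as the
         * lossy key)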
*/ + + retval = palloc(sizeof(GISTENTRY)); + gistentryinit(*retval, PointerGetDatum(compressed_data), + entry->rel, entry->page, entry->offset, FALSE); + } + else + { + /* typically we needn't do anything with non-leaf entries */ + retval = entry; + } + + PG_RETURN_POINTER(retval); +} +``` + +你必须适应*`压缩数据类型`*当然,为了压缩叶节点,需要转换到特定的类型。 + +`减压` + +将存储的数据项表示形式转换为可由运算符类中的其他GiST方法操作的格式。如果`减压`方法,假定其他GiST方法可以直接处理存储的数据格式。(`减压`不一定是相反的`压紧`方法特别是,如果`压紧`是有损的那就不可能了`减压`准确地重建原始数据。`减压`不一定等同于`取来`,因为其他GiST方法可能不需要完全重建数据。) + +函数的SQL声明必须如下所示: + +``` +CREATE OR REPLACE FUNCTION my_decompress(internal) +RETURNS internal +AS 'MODULE_PATHNAME' +LANGUAGE C STRICT; +``` + +C模块中的匹配代码可以遵循以下框架: + +``` +PG_FUNCTION_INFO_V1(my_decompress); + +Datum +my_decompress(PG_FUNCTION_ARGS) +{ + PG_RETURN_POINTER(PG_GETARG_POINTER(0)); +} +``` + +上述骨架适用于不需要减压的情况。(当然,完全省略该方法更容易,在这种情况下建议这样做。) + +`处罚` + +返回一个值,该值指示将新条目插入树的特定分支的“成本”。项目将沿最短路径插入`处罚`在树上。返回的值`处罚`应该是非负的。如果返回负值,它将被视为零。 + +函数的SQL声明必须如下所示: + +``` +CREATE OR REPLACE FUNCTION my_penalty(internal, internal, internal) +RETURNS internal +AS 'MODULE_PATHNAME' +LANGUAGE C STRICT; -- in some cases penalty functions need not be strict +``` + +C模块中的匹配代码可以遵循以下框架: + +``` +PG_FUNCTION_INFO_V1(my_penalty); + +Datum +my_penalty(PG_FUNCTION_ARGS) +{ + GISTENTRY *origentry = (GISTENTRY *) PG_GETARG_POINTER(0); + GISTENTRY *newentry = (GISTENTRY *) PG_GETARG_POINTER(1); + float *penalty = (float *) PG_GETARG_POINTER(2); + data_type *orig = DatumGetDataType(origentry->key); + data_type *new = DatumGetDataType(newentry->key); + + *penalty = my_penalty_implementation(orig, new); + PG_RETURN_POINTER(penalty); +} +``` + +出于历史原因`处罚`函数不仅仅返回`浮动`后果相反,它必须将值存储在第三个参数指示的位置。返回值本身被忽略,尽管通常会传回该参数的地址。 + +这个`处罚`函数对于索引的良好性能至关重要。它将在插入时用于确定在树中选择新条目的添加位置时遵循哪个分支。在查询时,索引越平衡,查找速度就越快。 + +`皮克斯普利特` + +当需要拆分索引页时,此函数决定页上的哪些条目将保留在旧页上,哪些条目将移动到新页。 + +函数的SQL声明必须如下所示: + +``` +CREATE OR REPLACE FUNCTION my_picksplit(internal, internal) +RETURNS internal +AS 'MODULE_PATHNAME' +LANGUAGE C STRICT; +``` + +C模块中的匹配代码可以遵循以下框架: + +``` +PG_FUNCTION_INFO_V1(my_picksplit); + +Datum +my_picksplit(PG_FUNCTION_ARGS) +{ + GistEntryVector *entryvec = (GistEntryVector *) PG_GETARG_POINTER(0); + GIST_SPLITVEC *v = (GIST_SPLITVEC *) PG_GETARG_POINTER(1); + OffsetNumber maxoff = entryvec->n - 1; + GISTENTRY *ent = entryvec->vector; + int i, + nbytes; + OffsetNumber *left, + *right; + data_type *tmp_union; + data_type *unionL; + data_type *unionR; + GISTENTRY **raw_entryvec; + + maxoff = entryvec->n - 1; + nbytes = (maxoff + 1) * sizeof(OffsetNumber); + + v->spl_left = (OffsetNumber *) palloc(nbytes); + left = v->spl_left; + v->spl_nleft = 0; + + v->spl_right = (OffsetNumber *) palloc(nbytes); + right = v->spl_right; + v->spl_nright = 0; + + unionL = NULL; + unionR = NULL; + + /* Initialize the raw entry vector. */ + raw_entryvec = (GISTENTRY **) malloc(entryvec->n * sizeof(void *)); + for (i = FirstOffsetNumber; i <= maxoff; i = OffsetNumberNext(i)) + raw_entryvec[i] = &(entryvec->vector[i]); + + for (i = FirstOffsetNumber; i <= maxoff; i = OffsetNumberNext(i)) + { + int real_index = raw_entryvec[i] - entryvec->vector; + + tmp_union = DatumGetDataType(entryvec->vector[real_index].key); + Assert(tmp_union != NULL); + + /* + * Choose where to put the index entries and update unionL and unionR + * accordingly. Append the entries to either v->spl_left or + * v->spl_right, and care about the counters. 
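         * (one simple heuristic, sketched by the hypothetical
         * my_choice_is_left() below: compare how much each side's union
         * key would have to grow to absorb the entry, and put the entry
         * on the side that grows less, which is the same idea the
         * penalty function expresses)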
+ */ + + if (my_choice_is_left(unionL, curl, unionR, curr)) + { + if (unionL == NULL) + unionL = tmp_union; + else + unionL = my_union_implementation(unionL, tmp_union); + + *left = real_index; + ++left; + ++(v->spl_nleft); + } + else + { + /* + * Same on the right + */ + } + } + + v->spl_ldatum = DataTypeGetDatum(unionL); + v->spl_rdatum = DataTypeGetDatum(unionR); + PG_RETURN_POINTER(v); +} +``` + +请注意`皮克斯普利特`函数的结果是通过修改传入的`五、`结构返回值本身被忽略,尽管传回`五、`. + +喜欢`处罚`这个`皮克斯普利特`函数对于索引的良好性能至关重要。设计合适的`处罚`和`皮克斯普利特`实现是实现性能良好的GiST索引的挑战所在。 + +`相同的` + +如果两个索引项相同,则返回true,否则返回false。(索引项是索引存储类型的值,不一定是原始索引列的类型。) + +函数的SQL声明必须如下所示: + +``` +CREATE OR REPLACE FUNCTION my_same(storage_type, storage_type, internal) +RETURNS internal +AS 'MODULE_PATHNAME' +LANGUAGE C STRICT; +``` + +C模块中的匹配代码可以遵循以下框架: + +``` +PG_FUNCTION_INFO_V1(my_same); + +Datum +my_same(PG_FUNCTION_ARGS) +{ + prefix_range *v1 = PG_GETARG_PREFIX_RANGE_P(0); + prefix_range *v2 = PG_GETARG_PREFIX_RANGE_P(1); + bool *result = (bool *) PG_GETARG_POINTER(2); + + *result = my_eq(v1, v2); + PG_RETURN_POINTER(result); +} +``` + +出于历史原因`相同的`函数不仅返回布尔结果;相反,它必须将标志存储在第三个参数指示的位置。返回值本身被忽略,尽管通常会传回该参数的地址。 + +`距离` + +给定一个索引项`p`和一个查询值`q`,此函数确定索引项与查询值的“距离”。如果运算符类包含任何排序运算符,则必须提供此函数。使用排序运算符的查询将通过首先返回具有最小“距离”值的索引项来实现,因此结果必须与运算符的语义一致。对于叶索引项,结果仅表示到索引项的距离;对于内部树节点,结果必须是任何子条目可能具有的最小距离。 + +函数的SQL声明必须如下所示: + +``` +CREATE OR REPLACE FUNCTION my_distance(internal, data_type, smallint, oid, internal) +RETURNS float8 +AS 'MODULE_PATHNAME' +LANGUAGE C STRICT; +``` + +C模块中的匹配代码可以遵循以下框架: + +``` +PG_FUNCTION_INFO_V1(my_distance); + +Datum +my_distance(PG_FUNCTION_ARGS) +{ + GISTENTRY *entry = (GISTENTRY *) PG_GETARG_POINTER(0); + data_type *query = PG_GETARG_DATA_TYPE_P(1); + StrategyNumber strategy = (StrategyNumber) PG_GETARG_UINT16(2); + /* Oid subtype = PG_GETARG_OID(3); */ + /* bool *recheck = (bool *) PG_GETARG_POINTER(4); */ + data_type *key = DatumGetDataType(entry->key); + double retval; + + /* + * determine return value as a function of strategy, key and query. + */ + + PG_RETURN_FLOAT8(retval); +} +``` + +争论的焦点`距离`函数的参数与`一致的`作用 + +在确定距离时,只要结果永远不大于条目的实际距离,就允许使用某种近似值。因此,例如,在几何应用中,到边界框的距离通常是足够的。对于内部树节点,返回的距离不得大于到任何子节点的距离。如果返回的距离不准确,则必须设置该函数`*复查`这是真的。(对于内部树节点,这不是必需的;对于它们,计算总是假定不精确。)在这种情况下,执行器将在从堆中获取元组后计算准确的距离,并在必要时对元组重新排序。 + +如果距离函数返回`*重新检查=正确`对于任何叶节点,原始排序运算符的返回类型必须为`浮动8`或`浮动4`,距离函数的结果值必须与原始排序运算符的结果值相比较,因为执行器将使用距离函数结果和重新计算的排序运算符结果进行排序。否则,距离函数的结果值可以是任意有限的`浮动8`值,只要结果值的相对顺序与排序运算符返回的顺序匹配。(无穷大和负无穷大在内部用于处理null等情况,因此不建议`距离`函数返回这些值。) + +`取来` + +将数据项的压缩索引表示形式转换为原始数据类型(仅用于索引扫描)。返回的数据必须是原始索引值的准确无损副本。 + +函数的SQL声明必须如下所示: + +``` +CREATE OR REPLACE FUNCTION my_fetch(internal) +RETURNS internal +AS 'MODULE_PATHNAME' +LANGUAGE C STRICT; +``` + +该参数是指向`GISTENTRY`结构。进入时,其`钥匙`字段包含压缩形式的非空叶数据。返回值是另一个`GISTENTRY`结构,谁的`钥匙`字段包含原始未压缩形式的相同数据。如果opclass的compress函数对叶条目不做任何处理,则`取来`方法可以按原样返回参数。或者,如果opclass没有压缩函数,则`取来`方法也可以省略,因为它必然是不可操作的。 + +C模块中的匹配代码可以遵循以下框架: + +``` +PG_FUNCTION_INFO_V1(my_fetch); + +Datum +my_fetch(PG_FUNCTION_ARGS) +{ + GISTENTRY *entry = (GISTENTRY *) PG_GETARG_POINTER(0); + input_data_type *in = DatumGetPointer(entry->key); + fetched_data_type *fetched_data; + GISTENTRY *retval; + + retval = palloc(sizeof(GISTENTRY)); + fetched_data = palloc(sizeof(fetched_data_type)); + + /* + * Convert 'fetched_data' into the a Datum of the original datatype. + */ + + /* fill *retval from fetched_data. 
*/ + gistentryinit(*retval, PointerGetDatum(converted_datum), + entry->rel, entry->page, entry->offset, FALSE); + + PG_RETURN_POINTER(retval); +} +``` + +如果compress方法对叶条目有损,则operator类不能支持仅索引扫描,并且不能定义`取来`作用 + +`选项` + +允许定义控制运算符类行为的用户可见参数。 + +函数的SQL声明必须如下所示: + +``` +CREATE OR REPLACE FUNCTION my_options(internal) +RETURNS void +AS 'MODULE_PATHNAME' +LANGUAGE C STRICT; +``` + +函数被传递一个指向`本地重新选择`struct,它需要填充一组特定于运算符类的选项。可以使用从其他支持功能访问这些选项`PG_有_OPCLASS_选项()`和`PG_GET_OPCLASS_OPTIONS()`宏。 + +我的\_其他支持函数使用的选项()和参数如下所示: + +``` +typedef enum MyEnumType +{ + MY_ENUM_ON, + MY_ENUM_OFF, + MY_ENUM_AUTO +} MyEnumType; + +typedef struct +{ + int32 vl_len_; /* varlena header (do not touch directly!) */ + int int_param; /* integer parameter */ + double real_param; /* real parameter */ + MyEnumType enum_param; /* enum parameter */ + int str_param; /* string parameter */ +} MyOptionsStruct; + +/* String representation of enum values */ +static relopt_enum_elt_def myEnumValues[] = +{ + {"on", MY_ENUM_ON}, + {"off", MY_ENUM_OFF}, + {"auto", MY_ENUM_AUTO}, + {(const char *) NULL} /* list terminator */ +}; + +static char *str_param_default = "default"; + +/* + * Sample validator: checks that string is not longer than 8 bytes. + */ +static void +validate_my_string_relopt(const char *value) +{ + if (strlen(value) > 8) + ereport(ERROR, + (errcode(ERRCODE_INVALID_PARAMETER_VALUE), + errmsg("str_param must be at most 8 bytes"))); +} + +/* + * Sample filler: switches characters to lower case. + */ +static Size +fill_my_string_relopt(const char *value, void *ptr) +{ + char *tmp = str_tolower(value, strlen(value), DEFAULT_COLLATION_OID); + int len = strlen(tmp); + + if (ptr) + strcpy((char *) ptr, tmp); + + pfree(tmp); + return len + 1; +} + +PG_FUNCTION_INFO_V1(my_options); + +Datum +my_options(PG_FUNCTION_ARGS) +{ + local_relopts *relopts = (local_relopts *) PG_GETARG_POINTER(0); + + init_local_reloptions(relopts, sizeof(MyOptionsStruct)); + add_local_int_reloption(relopts, "int_param", "integer parameter", + 100, 0, 1000000, + offsetof(MyOptionsStruct, int_param)); + add_local_real_reloption(relopts, "real_param", "real parameter", + 1.0, 0.0, 1000000.0, + offsetof(MyOptionsStruct, real_param)); + add_local_enum_reloption(relopts, "enum_param", "enum parameter", + myEnumValues, MY_ENUM_ON, + "Valid values are: \"on\", \"off\" and \"auto\".", + offsetof(MyOptionsStruct, enum_param)); + add_local_string_reloption(relopts, "str_param", "string parameter", + str_param_default, + &validate_my_string_relopt, + &fill_my_string_relopt, + offsetof(MyOptionsStruct, str_param)); + + PG_RETURN_VOID(); +} + +PG_FUNCTION_INFO_V1(my_compress); + +Datum +my_compress(PG_FUNCTION_ARGS) +{ + int int_param = 100; + double real_param = 1.0; + MyEnumType enum_param = MY_ENUM_ON; + char *str_param = str_param_default; + + /* + * Normally, when opclass contains 'options' method, then options are always + * passed to support functions. However, if you add 'options' method to + * existing opclass, previously defined indexes have no options, so the + * check is required. 
+ */ + if (PG_HAS_OPCLASS_OPTIONS()) + { + MyOptionsStruct *options = (MyOptionsStruct *) PG_GET_OPCLASS_OPTIONS(); + + int_param = options->int_param; + real_param = options->real_param; + enum_param = options->enum_param; + str_param = GET_STRING_RELOPTION(options, str_param); + } + + /* the rest implementation of support function */ +} +``` + +由于GiST中键的表示是灵活的,它可能取决于用户指定的参数。例如,可以指定密钥签名的长度。看见`gtsvector_选项()`例如 + +`sortsupport` + +返回一个比较器函数,以保留局部性的方式对数据进行排序。它是由`创建索引`和`重新索引`命令。创建的索引的质量取决于比较器函数确定的排序顺序在多大程度上保留了输入的局部性。 + +这个`sortsupport`方法是可选的。如果没有提供,`创建索引`通过使用`处罚`和`皮克斯普利特`功能,这要慢得多。 + +函数的SQL声明必须如下所示: + +``` +CREATE OR REPLACE FUNCTION my_sortsupport(internal) +RETURNS void +AS 'MODULE_PATHNAME' +LANGUAGE C STRICT; +``` + +该参数是指向`SortSupport`结构。至少,函数必须填写其comparator字段。比较器有三个参数:两个要比较的基准和一个指向`SortSupport`结构。基准是两个索引值,其格式与索引中存储的格式相同;也就是说,以`压紧`方法完整的API在中定义`src/include/utils/sortsupport。H`. + +C模块中的匹配代码可以遵循以下框架: + +``` +PG_FUNCTION_INFO_V1(my_sortsupport); + +static int +my_fastcmp(Datum x, Datum y, SortSupport ssup) +{ + /* establish order between x and y by computing some sorting value z */ + + int z1 = ComputeSpatialCode(x); + int z2 = ComputeSpatialCode(y); + + return z1 == z2 ? 0 : z1 > z2 ? 1 : -1; +} + +Datum +my_sortsupport(PG_FUNCTION_ARGS) +{ + SortSupport ssup = (SortSupport) PG_GETARG_POINTER(0); + + ssup->comparator = my_fastcmp; + PG_RETURN_VOID(); +} +``` + +所有的GiST支持方法通常在短期记忆环境中调用;就是,`CurrentMemoryContext`将在处理每个元组后重置。因此,担心自己失去的一切并不重要。然而,在某些情况下,支持方法在重复调用中缓存数据是有用的。要做到这一点,请在中分配寿命较长的数据`fcinfo->flinfo->fn_mcxt`,并在中保留指向它的指针`fcinfo->flinfo->fn_额外`。此类数据将在索引操作的整个生命周期内(例如,单个GiST索引扫描、索引构建或索引元组插入)保持有效。更换时,请小心释放以前的值`fn_额外费用`值,否则泄漏将在操作期间累积。 diff --git a/docs/X/infoschema-domains.md b/docs/en/infoschema-domains.md similarity index 100% rename from docs/X/infoschema-domains.md rename to docs/en/infoschema-domains.md diff --git a/docs/en/infoschema-domains.zh.md b/docs/en/infoschema-domains.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..1bd5b9ab6b85b2887e42d44dab16c645b3942371 --- /dev/null +++ b/docs/en/infoschema-domains.zh.md @@ -0,0 +1,35 @@ +## 37.23。`域` + +风景`域`包含当前数据库中定义的所有域。仅显示当前用户有权访问的那些域(通过成为所有者或具有某些特权)。 + +**表 37.21。`域`列** + +| 列类型

Description |
| --------------- |
| `domain_catalog` `sql_identifier`

Name of the database that contains the domain (always the current database) |
| `domain_schema` `sql_identifier`

Name of the schema that contains the domain |
| `domain_name` `sql_identifier`

Name of the domain |
| `data_type` `character_data`

Data type of the domain, if it is a built-in type, or `ARRAY` if it is some array (in that case, see the view `element_types`), else `USER-DEFINED` (in that case, the type is identified in `udt_name` and associated columns). |
| `character_maximum_length` `cardinal_number`

If the domain has a character or bit string type, the declared maximum length; null for all other data types or if no maximum length was declared. |
| `character_octet_length` `cardinal_number`

If the domain has a character type, the maximum possible length in octets (bytes) of a datum; null for all other data types. The maximum octet length depends on the declared character maximum length (see above) and the server encoding. |
| `character_set_catalog` `sql_identifier`

Applies to a feature not available in PostgreSQL |
| `character_set_schema` `sql_identifier`

Applies to a feature not available in PostgreSQL |
| `character_set_name` `sql_identifier`

Applies to a feature not available in PostgreSQL |
| `collation_catalog` `sql_identifier`

Name of the database containing the collation of the domain (always the current database), null if default or the data type of the domain is not collatable |
| `collation_schema` `sql_identifier`

Name of the schema containing the collation of the domain, null if default or the data type of the domain is not collatable |
| `collation_name` `sql_identifier`

Name of the collation of the domain, null if default or the data type of the domain is not collatable |
| `numeric_precision` `cardinal_number`

If the domain has a numeric type, this column contains the (declared or implicit) precision of the type for this domain. The precision indicates the number of significant digits. It can be expressed in decimal (base 10) or binary (base 2) terms, as specified in the column `numeric_precision_radix`. For all other data types, this column is null. |
| `numeric_precision_radix` `cardinal_number`

If the domain has a numeric type, this column indicates in which base the values in the columns `numeric_precision` and `numeric_scale` are expressed. The value is either 2 or 10. For all other data types, this column is null. |
| `numeric_scale` `cardinal_number`

If the domain has an exact numeric type, this column contains the (declared or implicit) scale of the type for this domain. The scale indicates the number of significant digits to the right of the decimal point. It can be expressed in decimal (base 10) or binary (base 2) terms, as specified in the column `numeric_precision_radix`. For all other data types, this column is null. |
| `datetime_precision` `cardinal_number`

If `data_type` identifies a date, time, timestamp, or interval type, this column contains the (declared or implicit) fractional seconds precision of the type for this domain, that is, the number of decimal digits maintained following the decimal point in the seconds value. For all other data types, this column is null. |
| `interval_type` `character_data`

If `data_type` identifies an interval type, this column contains the specification which fields the intervals include for this domain, e.g., `YEAR TO MONTH`, `DAY TO SECOND`, etc. If no field restrictions were specified (that is, the interval accepts all fields), and for all other data types, this field is null. |
| `interval_precision` `cardinal_number`

Applies to a feature not available in PostgreSQL (see `datetime_precision` for the fractional seconds precision of interval type domains) |
| `domain_default` `character_data`

Default expression of the domain |
| `udt_catalog` `sql_identifier`

Name of the database that the domain data type is defined in (always the current database) |
| `udt_schema` `sql_identifier`

Name of the schema that the domain data type is defined in |
| `udt_name` `sql_identifier`

Name of the domain data type |
| `scope_catalog` `sql_identifier`

Applies to a feature not available in PostgreSQL |
| `scope_schema` `sql_identifier`

Applies to a feature not available in PostgreSQL |
| `scope_name` `sql_identifier`

Applies to a feature not available in PostgreSQL |
| `maximum_cardinality` `cardinal_number`

Always null, because arrays always have unlimited maximum cardinality in PostgreSQL |
| `dtd_identifier` `sql_identifier`

域的数据类型描述符的标识符,在与该域有关的数据类型描述符中是唯一的(这很简单,因为一个域只包含一个数据类型描述符)。这主要用于与此类标识符的其他实例连接。(未定义标识符的具体格式,也不保证在未来版本中保持不变。) | diff --git a/docs/X/install-post.md b/docs/en/install-post.md similarity index 100% rename from docs/X/install-post.md rename to docs/en/install-post.md diff --git a/docs/en/install-post.zh.md b/docs/en/install-post.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..b79d7eafeb36482be4947fa9e2e7283b38f4696a --- /dev/null +++ b/docs/en/install-post.zh.md @@ -0,0 +1,81 @@ +## 17.5.安装后设置 + +[17.5.1. 共享库](install-post.html#INSTALL-POST-SHLIBS) + +[17.5.2. 环境变量](install-post.html#id-1.6.4.9.3) + +### 17.5.1.共享库 + +[](<>) + +在某些具有共享库的系统上,您需要告诉系统如何查找新安装的共享库。它所在的系统*不*必要的软件包括FreeBSD、HP-UX、Linux、NetBSD、OpenBSD和Solaris。 + +设置共享库搜索路径的方法因平台而异,但最广泛使用的方法是设置环境变量`图书馆路`就像这样:在伯恩贝壳里(`嘘`,`ksh`,`猛击`,`zsh`): + +``` +LD_LIBRARY_PATH=/usr/local/pgsql/lib +export LD_LIBRARY_PATH +``` + +或者在`csh`或`tcsh`: + +``` +setenv LD_LIBRARY_PATH /usr/local/pgsql/lib +``` + +代替`/usr/local/pgsql/lib`无论你设定了什么`--利伯迪尔`加入[第一步](install-procedure.html#CONFIGURE)。您应该将这些命令放入shell启动文件中,例如`/等/简介`或`~/.bash_简介`。有关此方法的注意事项,请访问[http://xahlee.info/UnixResource\_迪尔/\_/ldpath。html](http://xahlee.info/UnixResource_dir/_/ldpath.html). + +在某些系统上,最好设置环境变量`LD_RUN_PATH` *之前*建筑物 + +在Cygwin上,将库目录放入`路径`或者移动`.dll`将文件放入`箱子`目录 + +如果有疑问,请参阅系统的手册页(可能是`劳埃德。所以`或`rld`).如果您稍后收到如下消息: + +``` +psql: error in loading shared libraries +libpq.so.2.1: cannot open shared object file: No such file or directory +``` + +那么这一步是必要的。那就好好照顾它吧。 + +[](<>)如果您在Linux上,并且具有root访问权限,则可以运行: + +``` +/sbin/ldconfig /usr/local/pgsql/lib +``` + +(或等效目录),使运行时链接器能够更快地找到共享库。请参阅第页的手册`ldconfig`了解更多信息。在FreeBSD、NetBSD和OpenBSD上,命令是: + +``` +/sbin/ldconfig -m /usr/local/pgsql/lib +``` + +相反目前还不知道其他系统是否有类似的命令。 + +### 17.5.2.环境变量 + +[](<>) + +如果你安装到`/usr/local/pgsql`或者默认情况下未搜索程序的其他位置,您应该添加`/usr/local/pgsql/bin`(或者你设定的任何东西)`--宾迪尔`加入[第一步](install-procedure.html#CONFIGURE))进入你的`路径`.严格来说,这是没有必要的,但它将使PostgreSQL的使用更加方便。 + +为此,请将以下内容添加到shell启动文件中,例如`~/.bash_简介`(或`/等/简介`,如果希望它影响所有用户): + +``` +PATH=/usr/local/pgsql/bin:$PATH +export PATH +``` + +如果你正在使用`csh`或`tcsh`,然后使用以下命令: + +``` +set path = ( /usr/local/pgsql/bin $path ) +``` + +[](<>)为了使系统能够找到man文档,除非安装到默认搜索的位置,否则需要在shell启动文件中添加以下行: + +``` +MANPATH=/usr/local/pgsql/share/man:$MANPATH +export MANPATH +``` + +环境变量`PGHOST`和`PGPORT`为客户端应用程序指定数据库服务器的主机和端口,覆盖默认编译的。如果要远程运行客户端应用程序,那么如果每个计划使用数据库集的用户`PGHOST`。不过,这不是必需的;这些设置可以通过命令行选项与大多数客户端程序通信。 diff --git a/docs/X/install-procedure.md b/docs/en/install-procedure.md similarity index 100% rename from docs/X/install-procedure.md rename to docs/en/install-procedure.md diff --git a/docs/en/install-procedure.zh.md b/docs/en/install-procedure.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..718b3eb7304eccc0e54e56c8d2ebad33b283a6cd --- /dev/null +++ b/docs/en/install-procedure.zh.md @@ -0,0 +1,558 @@ +## 17.4.安装程序 + +[17.4.1.`配置`选项](install-procedure.html#CONFIGURE-OPTIONS) + +[17.4.2.`配置`环境变量](install-procedure.html#CONFIGURE-ENVVARS) + +1. 
**配置** + + [](<>) + + 安装过程的第一步是为系统配置源代码树,并选择所需的选项。这是通过运行`配置`剧本对于默认安装,只需输入: + + ``` + ./configure + ``` + + 该脚本将运行大量测试,以确定各种系统因变量的值,并检测操作系统的任何怪癖,最后将在构建树中创建几个文件来记录发现的内容。 + + 你也可以跑步`配置`如果希望将生成目录与原始源文件分开,请在源代码树外的目录中,然后在那里生成。这个过程叫做[](<>)*虚拟路径*建筑以下是方法: + + ``` + mkdir build_dir + cd build_dir + /path/to/source/tree/configure [options go here] + make + ``` + + 默认配置将构建服务器和实用程序,以及所有只需要C编译器的客户端应用程序和接口。所有文件都将安装在`/usr/local/pgsql`默认情况下。 + + 通过向用户提供一个或多个命令行选项,可以自定义生成和安装过程`配置`。通常,您会自定义安装位置或生成的一组可选功能。`配置`有大量选项,如中所述[第17.4.1节](install-procedure.html#CONFIGURE-OPTIONS). + + 而且`配置`响应某些环境变量,如中所述[第17.4.2节](install-procedure.html#CONFIGURE-ENVVARS)。这些提供了自定义配置的其他方法。 + +2. **建筑** + + 要启动生成,请键入以下任一项: + + ``` + make + make all + ``` + + (记住使用GNU make。)构建将需要几分钟时间,具体取决于您的硬件。 + + 如果您想构建所有可以构建的东西,包括文档(HTML和手册页)和其他模块(`contrib`),改为键入: + + ``` + make world + ``` + + 如果你想构建所有可以构建的东西,包括附加模块(`contrib`),但如果没有文档,请键入: + + ``` + make world-bin + ``` + + 如果希望从另一个makefile而不是手动调用构建,则必须取消设置`MAKELEVEL`或者将其设置为零,例如: + + ``` + build-postgresql: + $(MAKE) -C postgresql MAKELEVEL=0 all + ``` + + 否则可能会导致奇怪的错误消息,通常是关于缺少头文件。 + +3. **回归测试** + + [](<>) + + 如果要在安装新构建的服务器之前对其进行测试,可以在此时运行回归测试。回归测试是一个测试套件,用于验证PostgreSQL是否以开发人员预期的方式在您的机器上运行。类型: + + ``` + make check + ``` + + (这不能作为root用户使用;请作为非特权用户使用。)看见[第33章](regress.html)有关解释测试结果的详细信息。您可以在以后的任何时候通过发出相同的命令来重复此测试。 + +4. **安装文件** + + ### 笔记 + + 如果要升级现有系统,请务必阅读[第19.6节](upgrading.html),其中包含有关升级群集的说明。 + + 要安装PostgreSQL,请输入: + + ``` + make install + ``` + + 这将把文件安装到中指定的目录中[第一步](install-procedure.html#CONFIGURE).确保你有适当的权限写入该区域。通常,您需要以root用户身份执行此步骤。或者,您可以提前创建目标目录,并安排授予适当的权限。 + + 要安装文档(HTML和手册页),请输入: + + ``` + make install-docs + ``` + + 如果您在上面构建了世界,请键入: + + ``` + make install-world + ``` + + 这也会安装文档。 + + 如果您构建的世界没有上述文档,请键入: + + ``` + make install-world-bin + ``` + + 你可以用`制作安装条`而不是`制作安装`在安装可执行文件和库时剥离它们。这将节省一些空间。如果使用调试支持构建,剥离将有效地删除调试支持,因此仅当不再需要调试时才应进行剥离。`安装密封条`试图做一个合理的工作来节省空间,但它并不完全了解如何从可执行文件中删除所有不需要的字节,因此,如果你想尽可能地节省所有磁盘空间,就必须进行手动操作。 + + 标准安装提供了客户端应用程序开发以及服务器端程序开发所需的所有头文件,例如用C编写的自定义函数或数据类型。 + + **仅客户端安装:**如果只想安装客户端应用程序和接口库,则可以使用以下命令: + + ``` + make -C src/bin install + make -C src/include install + make -C src/interfaces install + make -C doc install + ``` + + `src/bin`有一些二进制文件只供服务器使用,但它们很小。 + +**卸载:**要撤消安装,请使用以下命令`进行卸载`。但是,这不会删除任何已创建的目录。 + +**打扫:**安装完成后,可以通过以下命令从源代码树中删除生成的文件来释放磁盘空间`打扫干净`。这将保留`配置`程序,这样你就可以用`制作`过后要将源树重置为其分发时的状态,请使用`打扫卫生`。如果要在同一个源代码树中为多个平台构建,则必须执行此操作,并为每个平台重新配置。(或者,为每个平台使用单独的构建树,以便源树保持不变。) + +如果您执行构建,然后发现`配置`选择是错误的,或者如果你改变了什么`配置`调查(例如,软件升级),然后这是一个好主意`打扫卫生`在重新配置和重建之前。如果没有这一点,配置选择中的更改可能不会传播到需要的任何地方。 + +### 17.4.1. `配置`选项 + +[](<>) + +`配置`的命令行选项解释如下。此列表并不详尽(请使用`配置-帮助`要得到一个是)。此处未涵盖的选项适用于交叉编译等高级用例,并记录在标准Autoconf文档中。 + +#### 17.4.1.1.安装位置 + +这些选项可以控制在哪里`制作安装`我会把文件放进去。这个`--前缀`在大多数情况下,该选项已足够。如果您有特殊需要,可以使用本节介绍的其他选项自定义安装子目录。但是,请注意,更改不同子目录的相对位置可能会导致安装不可重新定位,这意味着您将无法在安装后移动它。(小标题)`成年男子`和`医生`地点不受此限制。)对于可重新定位的安装,您可能需要使用`--禁用rpath`选项将在后面介绍。 + +`--前缀=*`前缀`*` + +安装目录下的所有文件*`前缀`*而不是`/usr/local/pgsql`.实际文件将安装到各个子目录中;不会将任何文件直接安装到*`前缀`*目录 + +`--执行前缀=*`EXEC-PREFIX`*` + +可以在不同的前缀下安装与体系结构相关的文件,*`EXEC-PREFIX`*,比什么*`前缀`*设置为。这对于在主机之间共享与体系结构无关的文件非常有用。如果你忽略了这个,那么*`EXEC-PREFIX`*设置为*`前缀`*依赖于体系结构的文件和独立文件都将安装在同一棵树下,这可能是您想要的。 + +`--宾迪尔=*`目录`*` + +指定可执行程序的目录。默认值是`*`EXEC-PREFIX`*/垃圾箱`,这通常意味着`/usr/local/pgsql/bin`. + +`--sysconfdir=*`目录`*` + +设置各种配置文件的目录,`*`前缀`*/等等`默认情况下。 + +`--利伯迪尔=*`目录`*` + +设置安装库和可动态加载模块的位置。默认值是`*`EXEC-PREFIX`*/解放党`. + +`--includedir=*`目录`*` + +设置安装C和C++头文件的目录。默认值是`*`前缀`*/包括`. + +`--datarootdir=*`目录`*` + +为各种类型的只读数据文件设置根目录。这仅为以下一些选项设置默认值。默认值是`*`前缀`*/分享`. 
+ +`--数据目录=*`目录`*` + +设置已安装程序使用的只读数据文件的目录。默认值是`*`DATAROOTDIR`*`。请注意,这与数据库文件的放置位置无关。 + +`--localedir=*`目录`*` + +设置用于安装区域设置数据的目录,尤其是消息翻译目录文件。默认值是`*`DATAROOTDIR`*/地点`. + +`--曼迪尔=*`目录`*` + +PostgreSQL附带的手册页将安装在此目录下,并在各自的目录中`老兄*`十、`*`子目录。默认值是`*`DATAROOTDIR`*/老兄`. + +`--多克迪尔=*`目录`*` + +设置安装文档文件的根目录,但“手册”页除外。这仅设置以下选项的默认值。此选项的默认值为`*`DATAROOTDIR`*/doc/postgresql`. + +`--htmldir=*`目录`*` + +PostgreSQL的HTML格式文档将安装在此目录下。默认值是`*`DATAROOTDIR`*`. + +### 笔记 + +已经注意到可以将PostgreSQL安装到共享的安装位置(例如`/usr/本地/包括`)不干扰系统其余部分的名称空间。首先,字符串“`/postgresql`“将自动附加到`数据目录`, `sysconfdir`和`多克迪尔`,除非完全展开的目录名已包含字符串“`博士后`“或者”`pgsql`”. 例如,如果你选择`/usr/本地`作为前缀,文档将安装在`/usr/local/doc/postgresql`,但如果前缀是`/opt/postgres`,那么它就会在`/opt/postgres/doc`客户端接口的公共C头文件安装到`includedir`而且是干净的。内部头文件和服务器头文件安装在`includedir`.有关如何访问其头文件的信息,请参阅每个接口的文档。最后,如果合适,还将在下面创建一个私有子目录`利伯迪尔`用于动态加载模块。 + +#### 17.4.1.2.PostgreSQL特性 + +本节中描述的选项允许构建默认情况下未构建的各种PostgreSQL功能。其中大多数是非默认的,只是因为它们需要额外的软件,如中所述[第17.2节](install-requirements.html). + +`--启用nls[=*`语言`*]` + +启用本机语言支持(NLS),即以英语以外的语言显示程序消息的能力。*`语言`*是一个可选的空格分隔列表,其中列出了您希望支持的语言的代码,例如`--启用nls='de fr'`(列表和实际提供的翻译集之间的交集将自动计算。)如果未指定列表,则会安装所有可用的翻译。 + +要使用此选项,您将需要Gettext API的实现。 + +`--使用perl` + +构建PL/Perl服务器端语言。 + +`--用python` + +构建PL/Python服务器端语言。 + +`--与tcl` + +构建PL/Tcl服务器端语言。 + +`--使用tclconfig=*`目录`*` + +Tcl安装该文件`tclConfig。嘘`,其中包含构建与Tcl接口的模块所需的配置信息。该文件通常会在已知位置自动找到,但如果您想使用不同版本的Tcl,可以指定要查找的目录`tclConfig。嘘`. + +`--在重症监护室` + +为ICU提供支持[](<>)库,支持使用ICU排序功能(请参阅[第24.2节](collation.html))。这需要安装ICU4C软件包。ICU4C的最低要求版本目前为4.2。 + +默认情况下,pkg config[](<>)将用于查找所需的编译选项。ICU4C版本4.6及更高版本支持这一点。对于旧版本,或者如果pkg config不可用,则变量`重症监护病房`和`ICU_LIBS`可以指定为`配置`,如本例所示: + +``` +./configure ... --with-icu ICU_CFLAGS='-I/some/where/include' ICU_LIBS='-L/some/where/lib -licui18n -licuuc -licudata' +``` + +(如果ICU4C位于编译器的默认搜索路径中,那么仍然需要指定非空字符串,以避免使用pkg config,例如,`ICU_CFLAGS=“”`.) + +`--与llvm` + +构建时支持基于LLVM的JIT编译(请参阅[第32章](jit.html))。这需要安装LLVM库。LLVM的最低要求版本目前为3.9。 + +`llvm配置`[](<>)将用于查找所需的编译选项。`llvm配置`,然后`llvm配置-$major-$minor`对于所有受支持的版本,将在`路径`.如果无法生成所需的程序,请使用`LLVM_配置`指定到正确路径的步骤`llvm配置`例如 + +``` +./configure ... --with-llvm LLVM_CONFIG='/path/to/llvm/bin/llvm-config' +``` + +LLVM支持需要兼容的`叮当声`编译器(如有必要,使用`叮当声`环境变量)和一个工作C++编译器(如果需要的话,使用`CXX`环境变量)。 + +`--使用-lz4` + +使用LZ4压缩支持构建。这允许使用LZ4压缩表数据。 + +`--使用ssl=*`图书馆`*` [](<>) + +构建时支持SSL(加密)连接。唯一的*`图书馆`*支持的是`openssl`。这需要安装OpenSSL包。`配置`在继续之前,将检查所需的头文件和库,以确保您的OpenSSL安装足够。 + +`--使用openssl` + +过时的相当于`--使用ssl=openssl`. + +`--和gssapi` + +使用对GSSAPI身份验证的支持进行构建。在许多系统上,GSSAPI系统(通常是Kerberos安装的一部分)未安装在默认搜索的位置(例如。,`/usr/包括`, `/usr/lib`),所以您必须使用这些选项`--包括`和`--图书馆`除了这个选项。`配置`在继续之前,将检查所需的头文件和库,以确保GSSAPI安装足够。 + +`--使用ldap` + +使用LDAP构建[](<>)支持身份验证和连接参数查找(请参阅[第34.18节](libpq-ldap.html)和[第21.10节](auth-ldap.html)更多信息)。在Unix上,这需要安装OpenLDAP包。在Windows上,使用默认的WinLDAP库。`配置`在继续之前,将检查所需的头文件和库,以确保OpenLDAP安装足够。 + +`--和帕姆` + +与PAM一起构建[](<>)(可插拔认证模块)支持。 + +`--用bsd认证` + +使用BSD身份验证支持构建。(BSD身份验证框架目前仅在OpenBSD上可用。) + +`--用systemd` + +通过对systemd的支持进行构建[](<>)服务通知。如果服务器是在systemd下启动的,这将提高集成度,但在其他方面没有影响;看见[第19.3节](server-start.html)了解更多信息。要使用此选项,需要安装libsystemd和相关的头文件。 + +`--你好` + +使用对Bonjour自动服务发现的支持进行构建。这需要在操作系统中提供Bonjour支持。建议在macOS上使用。 + +`--带uuid=*`图书馆`*` + +建造[uuid ossp](uuid-ossp.html)模块(提供生成UUID的函数),使用指定的UUID库。[](<>) *`图书馆`*必须是以下之一: + +- `bsd`使用FreeBSD、NetBSD和其他一些BSD派生系统中的UUID函数 + +- `e2fs`使用`e2fsprogs`项目该库存在于大多数Linux系统和macOS中,也可用于其他平台 + +- `ossp`使用[OSSP UUID库](http://www.ossp.org/pkg/lib/uuid/) + +`--带ossp uuid` + +过时的相当于`--uuid=ossp时`. 
+ +`--使用libxml` + +使用libxml2构建,支持SQL/XML。此功能需要Libxml2版本2.6.23或更高版本。 + +要检测所需的编译器和链接器选项,PostgreSQL将查询`包装配置`,如果已安装并了解libxml2。否则,该计划将失败`xml2配置`,由libxml2安装,如果找到它,将使用它。使用`包装配置`是首选,因为它可以更好地处理多体系结构安装。 + +要使用位于不寻常位置的libxml2安装,可以设置`包装配置`-相关的环境变量(参见其文档),或设置环境变量`XML2_配置`指`xml2配置`属于libxml2安装的程序,或设置变量`XML2_CFLAGS`和`XML2_LIBS`(如果`包装配置`则要覆盖libxml2所在位置的概念,必须`XML2_配置`或者两者兼而有之`XML2_CFLAGS`和`XML2_LIBS`到非空字符串。) + +`--使用libxslt` + +使用libxslt构建,启用[xml2](xml2.html)模块来执行XML的XSL转换。`--使用libxml`也必须指定。 + +#### 17.4.1.3.反特征 + +本节介绍的选项允许禁用某些默认生成的PostgreSQL功能,但如果所需的软件或系统功能不可用,则可能需要关闭这些功能。除非确有必要,否则不建议使用这些选项。 + +`--没有读线` + +防止使用Readline库(以及libedit)。此选项禁用psql中的命令行编辑和历史记录。 + +`--有libedit优先` + +支持使用BSD许可的libedit库,而不是GPL许可的Readline。只有安装了两个库时,此选项才有效;这种情况下的默认设置是使用Readline。 + +`--没有zlib` + +[](<>)防止使用Zlib库。这将禁用对pg中压缩档案的支持\_转储和pg\_恢复 + +`--禁用旋转锁` + +即使PostgreSQL没有对平台的CPU自旋锁支持,也允许构建成功。缺少自旋锁支持将导致性能非常差;因此,只有当构建中止并通知您平台缺少自旋锁支持时,才应使用此选项。如果在您的平台上构建PostgreSQL需要此选项,请向PostgreSQL开发人员报告问题。 + +`--禁用原子` + +禁用CPU原子操作的使用。在缺乏此类操作的平台上,此选项不起任何作用。在有它们的平台上,这将导致性能不佳。此选项仅在调试或进行性能比较时有用。 + +`--禁用线程安全` + +禁用客户端库的线程安全。这会阻止libpq和ECPG程序中的并发线程安全地控制它们的私有连接句柄。仅在线程支持不足的平台上使用此选项。 + +#### 17.4.1.4.构建过程细节 + +`--包括=*`目录`*` + +*`目录`*是一个以冒号分隔的目录列表,将添加到编译器搜索头文件的列表中。如果在非标准位置安装了可选软件包(如GNU Readline),则必须使用此选项,可能还需要使用相应的`--图书馆`选项 + +例子:`--with includes=/opt/gnu/include:/usr/sup/include`. + +`--图书馆=*`目录`*` + +*`目录`*是一个以冒号分隔的目录列表,用于搜索库。您可能必须使用此选项(以及相应的`--包括`选项)如果在非标准位置安装了软件包。 + +例子:`--带库=/opt/gnu/lib:/usr/sup/lib`. + +`--使用系统数据=*`目录`*` [](<>) + +PostgreSQL包含自己的时区数据库,用于日期和时间操作。这个时区数据库实际上与许多操作系统(如FreeBSD、Linux和Solaris)提供的IANA时区数据库兼容,因此再次安装它是多余的。使用此选项时,系统将在数据库中提供时区*`目录`*使用,而不是PostgreSQL源代码发行版中包含的。*`目录`*必须指定为绝对路径。`/usr/share/zoneinfo`可能是某些操作系统上的目录。请注意,安装例程不会检测不匹配或错误的时区数据。如果您使用此选项,建议您运行回归测试,以验证您所指向的时区数据是否能正确用于PostgreSQL。 + +[](<>) + +此选项主要针对熟悉目标操作系统的二进制软件包分销商。使用此选项的主要优点是,每当许多本地夏时制规则发生变化时,PostgreSQL包都不需要升级。另一个优点是,如果在安装过程中不需要构建时区数据库文件,PostgreSQL可以更直接地进行交叉编译。 + +`--有额外版本=*`一串`*` + +追加*`一串`*到PostgreSQL版本号。例如,您可以使用它来标记从未发布的Git快照生成的二进制文件,或包含带有额外版本字符串的自定义修补程序,例如`git描述`标识符或分发包发布号。 + +`--禁用rpath` + +不要标记PostgreSQL的可执行文件,以指示它们应该在安装的库目录中搜索共享库(请参阅`--利伯迪尔`)。在大多数平台上,此标记使用指向库目录的绝对路径,因此,如果您以后重新定位安装,它将毫无帮助。但是,您需要为可执行文件提供一些其他方法来查找共享库。通常这需要配置操作系统的动态链接器来搜索库目录;看见[第17.5.1节](install-post.html#INSTALL-POST-SHLIBS)更多细节。 + +#### 17.4.1.5.杂 + +使用`--使用pgport`.本节中的其他选项仅建议高级用户使用。 + +`--使用pgport=*`数字`*` + +设置*`数字`*作为服务器和客户端的默认端口号。默认值为5432。这个端口以后总是可以更改的,但是如果您在这里指定它,那么服务器和客户机都将使用相同的默认值进行编译,这非常方便。通常,选择非默认值的唯一好理由是,如果要在同一台机器上运行多个PostgreSQL服务器。 + +`--与krb srvnam合作=*`名称`*` + +GSSAPI使用的Kerberos服务主体的默认名称。`博士后`这是默认值。除非是为Windows环境构建,否则通常没有理由更改此设置,在这种情况下,必须将其设置为大写`博士后`. + +`--使用segsize=*`赛格斯`*` + +设定*段大小*,以千兆字节为单位。大型表被划分为多个操作系统文件,每个文件的大小等于段大小。这避免了许多平台上存在的文件大小限制问题。默认段大小为1G,在所有受支持的平台上都是安全的。如果你的操作系统支持“大文件”(现在大多数都支持),你可以使用更大的段大小。这有助于减少在处理非常大的表时使用的文件描述符的数量。但请注意,不要选择大于您的平台和您打算使用的文件系统支持的值。您可能希望使用的其他工具,例如tar,也可以设置可用文件大小的限制。虽然不是绝对要求,但建议该值为2的幂。请注意,更改此值会破坏磁盘数据库兼容性,这意味着您无法使用`pg_升级`升级到具有不同段大小的版本。 + +`--大块头=*`块状大小`*` + +设定*块大小*,以千字节为单位。这是表中存储和I/O的单位。默认值为8KB,适用于大多数情况;但在特殊情况下,其他值可能有用。该值必须是1到32(千字节)之间的2的幂。请注意,更改此值会破坏磁盘数据库兼容性,这意味着您无法使用`pg_升级`升级到具有不同块大小的生成。 + +`--大块头=*`块状大小`*` + +设定*墙块大小*,以千字节为单位。这是WAL日志中的存储和I/O单元。默认值为8KB,适用于大多数情况;但在特殊情况下,其他值可能有用。该值必须是1到64(千字节)之间的2的幂。请注意,更改此值会破坏磁盘数据库兼容性,这意味着您无法使用`pg_升级`升级到具有不同墙块大小的建筑。 + +#### 17.4.1.6.开发者选项 + +本节中的大多数选项仅适用于开发或调试PostgreSQL。不建议将其用于生产构建,但以下情况除外:`--启用调试`,这对于在遇到错误的不幸事件中启用详细的错误报告非常有用。在支持DTrace的平台上,`--启用dtrace`也可合理用于生产。 + +在构建用于在服务器内部开发代码的安装时,建议至少使用以下选项:`--启用调试`和`--启用卡塞特`. 
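For example, a development build might be configured as follows; the installation prefix is only an illustration, and `--enable-tap-tests` is optional but convenient for running the Perl test suites:

```
./configure --prefix=$HOME/pg-dev --enable-debug --enable-cassert --enable-tap-tests
```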
+ +`--启用调试` + +编译带有调试符号的所有程序和库。这意味着您可以在调试器中运行程序来分析问题。这会极大地扩大已安装可执行文件的大小,在非GCC编译器上,它通常还会禁用编译器优化,导致速度减慢。然而,拥有可用的符号对于处理可能出现的任何问题非常有帮助。目前,只有在使用GCC的情况下,才建议将此选项用于生产安装。但如果你正在做开发工作或运行测试版,你应该一直打开它。 + +`--启用卡塞特` + +使能够*断言*检查服务器,这会测试许多“不可能发生”的情况。这对于代码开发来说是非常宝贵的,但是测试会显著降低服务器的速度。此外,打开测试并不一定会提高服务器的稳定性!断言检查没有根据严重性进行分类,因此如果触发断言失败,相对无害的错误仍然会导致服务器重新启动。不建议在生产环境中使用此选项,但在开发工作或运行测试版时,应将其打开。 + +`--启用抽头测试` + +使用Perl TAP工具启用测试。这需要Perl安装和Perl模块`IPC::运行`看见[第33.4节](regress-tap.html)了解更多信息。 + +`--启用依赖` + +启用自动依赖项跟踪。使用此选项,可以设置makefiles,以便在更改任何头文件时重建所有受影响的对象文件。如果您正在进行开发工作,这是有用的,但如果您只打算编译一次并安装,这只是浪费了开销。目前,该选项仅适用于GCC。 + +`--启用覆盖范围` + +如果使用GCC,所有程序和库都会使用代码覆盖率测试工具进行编译。运行时,它们会在构建目录中生成包含代码覆盖率指标的文件。看见[第33.5节](regress-coverage.html)了解更多信息。此选项仅用于GCC和进行开发工作时。 + +`--启用分析` + +如果使用GCC,所有程序和库都会被编译,以便对它们进行分析。在后端退出时,将创建一个子目录,其中包含`格蒙。出来`包含配置文件数据的文件。此选项仅用于GCC和进行开发工作时。 + +`--启用dtrace` + +[](<>)编译支持动态跟踪工具DTrace的PostgreSQL。看见[第28.5节](dynamic-trace.html)了解更多信息。 + +指`dtrace`程序,环境变量`DTRACE`可以设定。这通常是必要的,因为`dtrace`通常安装在`/usr/sbin`,这可能不在你的`路径`. + +额外的命令行选项`dtrace`程序可以在环境变量中指定`DTRACEFLAGS`。在Solaris上,要在64位二进制文件中包含DTrace支持,必须指定`DTRACEFLAGS=“-64”`。例如,使用GCC编译器: + +``` +./configure CC='gcc -m64' --enable-dtrace DTRACEFLAGS='-64' ... +``` + +使用Sun的编译器: + +``` +./configure CC='/opt/SUNWspro/bin/cc -xtarget=native64' --enable-dtrace DTRACEFLAGS='-64' ... +``` + +### 17.4.2. `配置`环境变量 + +[](<>) + +除了上面描述的普通命令行选项,`配置`响应多个环境变量。可以在上指定环境变量`配置`命令行,例如: + +``` +./configure CC=/opt/bin/gcc CFLAGS='-O2 -pipe' +``` + +在这种用法中,环境变量与命令行选项几乎没有区别。您也可以事先设置这些变量: + +``` +export CC=/opt/bin/gcc +export CFLAGS='-O2 -pipe' +./configure +``` + +这种用法很方便,因为许多程序的配置脚本以类似的方式响应这些变量。 + +这些环境变量中最常用的是`科科斯群岛`和`CFLAGS`.如果你喜欢不同于`配置`选择,您可以设置变量`科科斯群岛`你选择的节目。默认情况下,`配置`会挑选`gcc`如果可用,则为平台的默认值(通常为`复写的副本`)。类似地,如果需要,可以使用`CFLAGS`变量 + +以下是可以通过这种方式设置的重要变量列表: + +`野牛` + +野牛计划 + +`科科斯群岛` + +C编译器 + +`CFLAGS` + +传递给C编译器的选项 + +`叮当声` + +通往`叮当声`使用编译时用于处理内联源代码的程序`--与llvm` + +`CPP` + +C预处理器 + +`CPPFLAGS` + +传递给C预处理器的选项 + +`CXX` + +C++编译器 + +`CXXFLAGS` + +传递给C++编译器的选项 + +`DTRACE` + +地点`dtrace`程序 + +`DTRACEFLAGS` + +将选项传递给`dtrace`程序 + +`弯曲` + +Flex程序 + +`LDFLAGS` + +链接可执行文件或共享库时使用的选项 + +`LDU-EX` + +仅用于链接可执行文件的其他选项 + +`LDU SL` + +仅用于链接共享库的其他选项 + +`LLVM_配置` + +`llvm配置`用于定位LLVM安装的程序 + +`MSGFMT` + +`msgfmt`母语支持计划 + +`PERL` + +Perl解释器程序。这将用于确定构建PL/Perl的依赖关系。默认值是`perl`. + +`PYTHON` + +Python解释器程序。这将用于确定构建PL/Python的依赖关系。此外,这里是否指定了Python2或Python3(或者隐式选择了Python2或Python3)决定了可用的PL/Python语言变体。看见[第46.1节](plpython-python23.html)了解更多信息。如果未设置,则按此顺序探测以下内容:`蟒蛇蟒蛇3蟒蛇2`. + +`TCLSH` + +Tcl解释器程序。这将用于确定构建PL/Tcl的依赖关系。如果未设置,则按此顺序探测以下内容:`tclsh tcl tclsh8。6 tclsh86 tclsh8。5 tclsh85 tclsh8。4 tclsh84`. + +`XML2_配置` + +`xml2配置`用于定位libxml2安装的程序 + +有时,将事实之后的编译器标志添加到用户选择的集合中是有用的`配置`一个重要的例子是gcc的`-沃罗`选项不能包含在列表中`CFLAGS`传给`配置`,因为它会打破许多`配置`的内置测试。要添加此类标志,请将其包含在`科普特`运行时的环境变量`制作`.文件的内容`科普特`都添加到了`CFLAGS`和`LDFLAGS`由设置的选项`配置`.例如,你可以 + +``` +make COPT='-Werror' +``` + +或 + +``` +export COPT='-Werror' +make +``` + +### 笔记 + +如果使用GCC,最好以至少为的优化级别进行构建`-O1`,因为没有使用优化(`-O0`)禁用一些重要的编译器警告(例如使用未初始化的变量)。然而,非零优化级别可能会使调试复杂化,因为单步执行编译代码通常不会与源代码行一一匹配。如果您在调试优化代码时感到困惑,请使用重新编译感兴趣的特定文件`-O0`。一个简单的方法是通过传递选项来实现:`make PROFILE=-O0文件。o`. 
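In concrete terms, assuming the file of interest were `xlog.c` (the name is purely illustrative), one could rebuild just that object file without optimization like this:

```
make PROFILE=-O0 xlog.o
```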
+ +这个`科普特`和`轮廓`环境变量实际上由PostgreSQL makefiles以相同的方式处理。使用哪个是一个偏好问题,但开发人员的一个共同习惯是使用哪个`轮廓`用于一次性调整标志,而`科普特`可能一直都是固定的。 diff --git a/docs/X/isn.md b/docs/en/isn.md similarity index 100% rename from docs/X/isn.md rename to docs/en/isn.md diff --git a/docs/en/isn.zh.md b/docs/en/isn.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..4e30bb1f525ffacc614d6c3b91eed0c17b914a7d --- /dev/null +++ b/docs/en/isn.zh.md @@ -0,0 +1,169 @@ +## F.19。不是吗 + +[F.19.1。数据类型](isn.html#id-1.11.7.28.5)[F.19.2。铸造](isn.html#id-1.11.7.28.6)[F.19.3。函数和运算符](isn.html#id-1.11.7.28.7)[F.19.4。例子](isn.html#id-1.11.7.28.8)[F.19.5。参考文献](isn.html#id-1.11.7.28.9)[F.19.6。著者](isn.html#id-1.11.7.28.10) + +[](<>) + +这个`不是吗`模块为以下国际产品编号标准提供数据类型:EAN13、UPC、ISBN(图书)、ISMN(音乐)和ISSN(系列)。根据前缀的硬编码列表,在输入时验证数字;此前缀列表还用于在输出时对数字进行连字符。由于新前缀会不时被分配,前缀列表可能已经过时。希望该模块的未来版本将从一个或多个用户可以根据需要轻松更新的表中获取前缀列表;然而,目前只能通过修改源代码和重新编译来更新列表。或者,该模块的未来版本可能会放弃前缀验证和断字支持。 + +该模块被认为是“受信任的”,也就是说,它可以由拥有`创造`当前数据库的权限。 + +### F.19.1。数据类型 + +[表F.11](isn.html#ISN-DATATYPES)显示由提供的数据类型`不是吗`单元 + +**表F.11。`不是吗`数据类型** + +| 数据类型 | 描述 | +| ---- | --- | +| `EAN13` | 欧洲商品编号,始终以EAN13显示格式显示 | +| `ISBN13` | 以新的EAN13显示格式显示的国际标准书号 | +| `ISMN13` | 以新的EAN13显示格式显示的国际标准音乐号码 | +| `ISSN13` | 以新的EAN13显示格式显示的国际标准序列号 | +| `ISBN` | 以旧的短显示格式显示的国际标准书号 | +| `ISMN` | 以旧的短显示格式显示的国际标准音乐号码 | +| `伊森` | 以旧的短显示格式显示的国际标准序列号 | +| `UPC` | 通用产品代码 | + +注意: + +1. ISBN13、ISMN13、ISSN13号都是EAN13号。 + +2. EAN13数字并不总是ISBN13、ISMN13或ISSN13(有些是)。 + +3. 一些ISBN13数字可以显示为ISBN。 + +4. 一些ISMN13数字可以显示为ISMN。 + +5. 一些ISSN13数字可以显示为ISSN。 + +6. UPC编号是EAN13编号的一个子集(基本上是没有第一个编号的EAN13)`0`数字)。 + +7. 所有UPC、ISBN、ISMN和ISSN编号都可以表示为EAN13编号。 + + 在内部,所有这些类型都使用相同的表示(64位整数),并且都可以互换。提供了多种类型来控制显示格式,并允许更严格的输入有效性检查,该输入应表示一种特定类型的数字。 + + 这个`ISBN`, `ISMN`和`伊森`只要可能,类型将显示数字的短版本(ISxN 10),对于不适合短版本的数字,类型将显示ISxN 13格式。这个`EAN13`, `ISBN13`, `ISMN13`和`ISSN13`类型将始终显示ISxN(EAN13)的长版本。 + +### F.19.2。铸造 + +这个`不是吗`模块提供以下类型转换对: + +- ISBN13\\\<=> EAN13 + +- ISMN13\\\<=> EAN13 + +- ISSN13\\\<=> EAN13 + +- ISBN\\\<=> EAN13 + +- ISMN \\\<=>EAN13 + +- ISSN \\\<=>EAN13 + +- UPC \\\<=>EAN13 + +- ISBN \\\<=>ISBN13 + +- ISMN \\\<=>ISMN13 + +- ISSN \\\<=>ISSN13 + + When casting from`EAN13`to another type, there is a run-time check that the value is within the domain of the other type, and an error is thrown if not. The other casts are simply relabelings that will always succeed. + +### F.19.3. Functions and Operators + +The`isn`module provides the standard comparison operators, plus B-tree and hash indexing support for all these data types. In addition there are several specialized functions; shown in[Table F.12](isn.html#ISN-FUNCTIONS). In this table,`不是吗`指模块的任何一种数据类型。 + +**表F.12。 `不是吗`功能** + +| 作用

Description |
| -------------- |
| [](<>) `isn_weak` ( `boolean` ) → `boolean`

Sets the weak input mode, and returns the new setting. |
| `isn_weak` () → `boolean`

Returns the current status of the weak mode. |
| [](<>) `make_valid` ( `isn` ) → `isn`

Validates an invalid number (clears the invalid flag). |
| [](<>) `is_valid` ( `isn` ) → `boolean`

检查是否存在无效标志。 | + +*虚弱的*模式用于将无效数据插入表中。无效表示校验位错误,而不是缺少数字。 + +为什么要使用弱模式?嗯,可能是因为你收集了大量的ISBN号,而且其中有太多的ISBN号,出于奇怪的原因,一些ISBN号的校验位是错误的(可能是从打印的列表中扫描的数字,OCR得到的数字是错误的,可能是手动捕获的数字……谁知道呢)。不管怎么说,关键是你可能想收拾残局,但你仍然希望能够在数据库中拥有所有的数字,并且可能使用外部工具来定位数据库中的无效数字,这样你就可以更容易地验证信息和验证它;例如,你需要选择表中所有的无效数字。 + +当使用弱模式在表格中插入无效数字时,数字将与更正的校验位一起插入,但它将以感叹号显示(`!`)例如,在最后`0-11-000322-5!`。此无效标记可通过`_有效吗`函数并用`使你有效`作用 + +您还可以通过添加`!`数字末尾的字符。 + +另一个特点是,在输入过程中,您可以编写`?`替换校验位,并自动插入正确的校验位。 + +### F.19.4。例子 + +``` +--Using the types directly: +SELECT isbn('978-0-393-04002-9'); +SELECT isbn13('0901690546'); +SELECT issn('1436-4522'); + +--Casting types: +-- note that you can only cast from ean13 to another type when the +-- number would be valid in the realm of the target type; +-- thus, the following will NOT work: select isbn(ean13('0220356483481')); +-- but these will: +SELECT upc(ean13('0220356483481')); +SELECT ean13(upc('220356483481')); + +--Create a table with a single column to hold ISBN numbers: +CREATE TABLE test (id isbn); +INSERT INTO test VALUES('9780393040029'); + +--Automatically calculate check digits (observe the '?'): +INSERT INTO test VALUES('220500896?'); +INSERT INTO test VALUES('978055215372?'); + +SELECT issn('3251231?'); +SELECT ismn('979047213542?'); + +--Using the weak mode: +SELECT isn_weak(true); +INSERT INTO test VALUES('978-0-11-000533-4'); +INSERT INTO test VALUES('9780141219307'); +INSERT INTO test VALUES('2-205-00876-X'); +SELECT isn_weak(false); + +SELECT id FROM test WHERE NOT is_valid(id); +UPDATE test SET id = make_valid(id) WHERE id = '2-205-00876-X!'; + +SELECT * FROM test; + +SELECT isbn13(id) FROM test; +``` + +### F.19.5。参考书目 + +实施本模块的信息来自多个网站,包括: + +- + +- + +- + +- + + 用于连字号的前缀也来自: + +- + +- [https://en.wikipedia.org/wiki/List\_属于\_ISBN\_标识符\_组](https://en.wikipedia.org/wiki/List_of_ISBN_identifier_groups) + +- + +- [https://en.wikipedia.org/wiki/International\_标准\_音乐\_数字](https://en.wikipedia.org/wiki/International_Standard_Music_Number) + +- + + 在创建算法的过程中非常小心,并根据ISBN、ISMN、ISSN用户手册中建议的算法进行了仔细验证。 + +### F.19.6。作者 + +German Méndez Bravo(克朗),2004-2006年 + +本模块的灵感来自加勒特·A·沃尔曼的`国际标准书号`密码 diff --git a/docs/X/legalnotice.md b/docs/en/legalnotice.md similarity index 100% rename from docs/X/legalnotice.md rename to docs/en/legalnotice.md diff --git a/docs/en/legalnotice.zh.md b/docs/en/legalnotice.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/docs/X/libpq-async.md b/docs/en/libpq-async.md similarity index 100% rename from docs/X/libpq-async.md rename to docs/en/libpq-async.md diff --git a/docs/en/libpq-async.zh.md b/docs/en/libpq-async.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..f04a000e77bc047494a0f8a01902775e1bf8acfd --- /dev/null +++ b/docs/en/libpq-async.zh.md @@ -0,0 +1,178 @@ +## 34.4.异步命令处理 + +[](<>) + +这个[`PQexec`](libpq-exec.html#LIBPQ-PQEXEC)该函数足以在正常的同步应用程序中提交命令。然而,它有一些对某些用户很重要的缺陷: + +- [`PQexec`](libpq-exec.html#LIBPQ-PQEXEC)等待命令完成。应用程序可能还有其他工作要做(比如维护用户界面),在这种情况下,它不想阻止等待响应。 + +- 由于客户端应用程序在等待结果时暂停执行,因此应用程序很难决定是否要尝试取消正在执行的命令。(这可以通过信号处理器完成,但不能通过其他方式完成。) + +- [`PQexec`](libpq-exec.html#LIBPQ-PQEXEC)只能返回一个`PGresult`结构如果提交的命令字符串包含多个SQL命令,则除最后一个命令外`PGresult`被丢弃[`PQexec`](libpq-exec.html#LIBPQ-PQEXEC). 
+ +- [`PQexec`](libpq-exec.html#LIBPQ-PQEXEC)始终收集命令的整个结果,并将其缓冲在单个`PGresult`。虽然这简化了应用程序的错误处理逻辑,但对于包含多行的结果来说可能不切实际。 + + 不喜欢这些限制的应用程序可以使用[`PQexec`](libpq-exec.html#LIBPQ-PQEXEC)它由以下部分构成:[`PQsendQuery`](libpq-async.html#LIBPQ-PQSENDQUERY)和[`PQgetResult`](libpq-async.html#LIBPQ-PQGETRESULT).还有[`PQsendQueryParams`](libpq-async.html#LIBPQ-PQSENDQUERYPARAMS),[`PQsendPrepare`](libpq-async.html#LIBPQ-PQSENDPREPARE),[`PQsendQueryPrepared`](libpq-async.html#LIBPQ-PQSENDQUERYPREPARED),[`PQsendDescribePrepared`](libpq-async.html#LIBPQ-PQSENDDESCRIBEPREPARED), 和[`PQsendDescribePortal`](libpq-async.html#LIBPQ-PQSENDDESCRIBEPORTAL), 它可以与[`PQgetResult`](libpq-async.html#LIBPQ-PQGETRESULT)复制的功能[`PQexec 参数`](libpq-exec.html#LIBPQ-PQEXECPARAMS),[`PQprepare`](libpq-exec.html#LIBPQ-PQPREPARE),[`PQexec 准备`](libpq-exec.html#LIBPQ-PQEXECPREPARED),[`PQdescribePrepared`](libpq-exec.html#LIBPQ-PQDESCRIBEPREPARED), 和[`PQdescribePortal`](libpq-exec.html#LIBPQ-PQDESCRIBEPORTAL)分别。 + +`PQsendQuery`[](<>) + +向服务器提交命令而不等待结果。如果命令成功发送则返回 1,否则返回 0(在这种情况下,使用[`PQerrorMessage`](libpq-status.html#LIBPQ-PQERRORMESSAGE)以获取有关故障的更多信息)。 + +``` +int PQsendQuery(PGconn *conn, const char *command); +``` + +调用成功后[`PQsendQuery`](libpq-async.html#LIBPQ-PQSENDQUERY), 称呼[`PQgetResult`](libpq-async.html#LIBPQ-PQGETRESULT)一次或多次获得结果。[`PQsendQuery`](libpq-async.html#LIBPQ-PQSENDQUERY)不能再次调用(在同一连接上),直到[`PQgetResult`](libpq-async.html#LIBPQ-PQGETRESULT)返回了一个空指针,表示命令执行完毕。 + +在管道模式下,不允许包含多个 SQL 命令的命令字符串。 + +`PQsendQueryParams`[](<>) + +向服务器提交命令和单独的参数,而不等待结果。 + +``` +int PQsendQueryParams(PGconn *conn, + const char *command, + int nParams, + const Oid *paramTypes, + const char * const *paramValues, + const int *paramLengths, + const int *paramFormats, + int resultFormat); +``` + +这相当于[`PQsendQuery`](libpq-async.html#LIBPQ-PQSENDQUERY)除了查询参数可以与查询字符串分开指定。该函数的参数的处理方式与[`PQexec 参数`](libpq-exec.html#LIBPQ-PQEXECPARAMS).喜欢[`PQexec 参数`](libpq-exec.html#LIBPQ-PQEXECPARAMS),它只允许查询字符串中有一个命令。 + +`PQsendPrepare`[](<>) + +发送请求以使用给定参数创建准备好的语句,而无需等待完成。 + +``` +int PQsendPrepare(PGconn *conn, + const char *stmtName, + const char *query, + int nParams, + const Oid *paramTypes); +``` + +这是一个异步版本[`PQprepare`](libpq-exec.html#LIBPQ-PQPREPARE):如果能够发送请求,则返回 1,否则返回 0。调用成功后,调用[`PQgetResult`](libpq-async.html#LIBPQ-PQGETRESULT)判断服务器是否成功创建了prepared statement。该函数的参数的处理方式与[`PQprepare`](libpq-exec.html#LIBPQ-PQPREPARE). + +`PQsendQueryPrepared`[](<>) + +发送请求以执行具有给定参数的准备好的语句,而无需等待结果。 + +``` +int PQsendQueryPrepared(PGconn *conn, + const char *stmtName, + int nParams, + const char * const *paramValues, + const int *paramLengths, + const int *paramFormats, + int resultFormat); +``` + +这类似于[`PQsendQueryParams`](libpq-async.html#LIBPQ-PQSENDQUERYPARAMS),但要执行的命令是通过命名先前准备的语句来指定的,而不是给出查询字符串。该函数的参数的处理方式与[`PQexec 准备`](libpq-exec.html#LIBPQ-PQEXECPREPARED). + +`PQsendDescribePrepared`[](<>) + +提交请求以获取有关指定预准备语句的信息,而无需等待完成。 + +``` +int PQsendDescribePrepared(PGconn *conn, const char *stmtName); +``` + +这是一个异步版本[`PQdescribePrepared`](libpq-exec.html#LIBPQ-PQDESCRIBEPREPARED):如果能够发送请求,则返回 1,否则返回 0。调用成功后,调用[`PQgetResult`](libpq-async.html#LIBPQ-PQGETRESULT)来获得结果。该函数的参数的处理方式与[`PQdescribePrepared`](libpq-exec.html#LIBPQ-PQDESCRIBEPREPARED). 
+ +`PQsendDescribePortal`[](<>) + +提交请求以获取有关指定门户的信息,而无需等待完成。 + +``` +int PQsendDescribePortal(PGconn *conn, const char *portalName); +``` + +这是一个异步版本[`PQdescribePortal`](libpq-exec.html#LIBPQ-PQDESCRIBEPORTAL):如果能够发送请求,则返回 1,否则返回 0。调用成功后,调用[`PQgetResult`](libpq-async.html#LIBPQ-PQGETRESULT)来获得结果。该函数的参数的处理方式与[`PQdescribePortal`](libpq-exec.html#LIBPQ-PQDESCRIBEPORTAL). + +`PQgetResult`[](<>) + +等待前一个结果的下一个结果[`PQsendQuery`](libpq-async.html#LIBPQ-PQSENDQUERY),[`PQsendQueryParams`](libpq-async.html#LIBPQ-PQSENDQUERYPARAMS),[`PQsendPrepare`](libpq-async.html#LIBPQ-PQSENDPREPARE),[`PQsendQueryPrepared`](libpq-async.html#LIBPQ-PQSENDQUERYPREPARED),[`PQsendDescribePrepared`](libpq-async.html#LIBPQ-PQSENDDESCRIBEPREPARED),[`PQsendDescribePortal`](libpq-async.html#LIBPQ-PQSENDDESCRIBEPORTAL), 或者[`PQpipelineSync`](libpq-pipeline-mode.html#LIBPQ-PQPIPELINESYNC)调用,并返回它。命令完成时返回一个空指针,不会再有结果。 + +``` +PGresult *PQgetResult(PGconn *conn); +``` + +[`PQgetResult`](libpq-async.html#LIBPQ-PQGETRESULT)必须重复调用,直到它返回一个空指针,表示命令完成。(如果在没有命令处于活动状态时调用,[`PQgetResult`](libpq-async.html#LIBPQ-PQGETRESULT)将立即返回一个空指针。)每个非空结果来自[`PQgetResult`](libpq-async.html#LIBPQ-PQGETRESULT)应该使用相同的处理`PG结果`前面描述的访问器函数。不要忘记释放每个结果对象[`PQclear`](libpq-exec.html#LIBPQ-PQCLEAR)完成后。注意[`PQgetResult`](libpq-async.html#LIBPQ-PQGETRESULT)仅当命令处于活动状态且必要的响应数据尚未被读取时才会阻塞[`PQconsume输入` ](libpq-async.html#LIBPQ-PQCONSUMEINPUT). + +在流水线模式下,`PQgetResult`除非发生错误,否则将正常返回;对于在导致错误的查询之后发送的任何后续查询,直到(并且不包括)下一个同步点,类型的特殊结果`PGRES_PIPELINE_ABORTED`将被返回,并在其后返回一个空指针。当达到管道同步点时,类型为`PGRES_PIPELINE_SYNC`将被退回。紧跟在同步点之后的下一个查询的结果(即同步点之后不返回空指针)。 + +### 笔记 + +即使当[`PQresult状态`](libpq-exec.html#LIBPQ-PQRESULTSTATUS)表示致命错误,[`PQgetResult`](libpq-async.html#LIBPQ-PQGETRESULT)应该调用直到它返回一个空指针,以允许 libpq 完全处理错误信息。 + +使用[`PQsendQuery`](libpq-async.html#LIBPQ-PQSENDQUERY)和[`PQgetResult`](libpq-async.html#LIBPQ-PQGETRESULT)解决其中之一[`执行程序`](libpq-exec.html#LIBPQ-PQEXEC)的问题:如果一个命令字符串包含多个SQL命令,则可以单独获取这些命令的结果。(顺便说一下,这允许一种简单形式的重叠处理:客户端可以处理一个命令的结果,而服务器仍在处理同一命令字符串中的后续查询。) + +另一个经常需要的功能,可以通过[`PQsendQuery`](libpq-async.html#LIBPQ-PQSENDQUERY)和[`PQgetResult`](libpq-async.html#LIBPQ-PQGETRESULT)一次检索大查询结果一行。这在[第 34.6 节](libpq-single-row-mode.html). 
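As an illustration, here is a minimal sketch of that pattern. The query text is a placeholder and error handling is abbreviated; note that `PQgetResult` itself can still block, which the next paragraph addresses:

```
#include <stdio.h>
#include <libpq-fe.h>

/*
 * A minimal sketch of the PQsendQuery/PQgetResult pattern.  PQgetResult
 * must be called repeatedly until it returns NULL, even after an error,
 * so that libpq can finish processing the command.
 */
static void
run_query_async(PGconn *conn)
{
    PGresult *res;

    if (!PQsendQuery(conn, "SELECT * FROM mytable"))
    {
        fprintf(stderr, "PQsendQuery failed: %s", PQerrorMessage(conn));
        return;
    }

    while ((res = PQgetResult(conn)) != NULL)
    {
        if (PQresultStatus(res) == PGRES_TUPLES_OK)
            printf("%d rows returned\n", PQntuples(res));
        else if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "query failed: %s", PQresultErrorMessage(res));
        PQclear(res);           /* free every result object */
    }
}
```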
+ +本身,调用[`PQgetResult`](libpq-async.html#LIBPQ-PQGETRESULT)仍然会导致客户端阻塞,直到服务器完成下一个 SQL 命令。这可以通过正确使用另外两个功能来避免: + +`PQconsume输入`[](<>) + +如果可以从服务器获得输入,则使用它。 + +``` +int PQconsumeInput(PGconn *conn); +``` + +[`PQconsume输入` ](libpq-async.html#LIBPQ-PQCONSUMEINPUT)通常返回 1 表示“没有错误”,但如果出现某种故障则返回 0(在这种情况下[`PQerrorMessage`](libpq-status.html#LIBPQ-PQERRORMESSAGE)可以咨询)。请注意,结果并未说明是否实际收集了任何输入数据。打电话后[`PQconsume输入` ](libpq-async.html#LIBPQ-PQCONSUMEINPUT),应用程序可以检查[`PQis忙碌`](libpq-async.html#LIBPQ-PQISBUSY)和/或`PQ 通知`看看他们的状态是否发生了变化。 + +[`PQconsume输入` ](libpq-async.html#LIBPQ-PQCONSUMEINPUT)即使应用程序还没有准备好处理结果或通知,也可以调用。该函数将读取可用数据并将其保存在缓冲区中,从而导致`选择()`阅读就绪指示离开。该应用程序因此可以使用[`PQconsume输入` ](libpq-async.html#LIBPQ-PQCONSUMEINPUT)清除`选择()`立即状况,然后在闲暇时检查结果。 + +`PQis忙碌`[](<>) + +如果命令忙,则返回 1,即[`PQgetResult`](libpq-async.html#LIBPQ-PQGETRESULT)会阻塞等待输入。返回 0 表示[`PQgetResult`](libpq-async.html#LIBPQ-PQGETRESULT)可以在保证不阻塞的情况下调用。 + +``` +int PQisBusy(PGconn *conn); +``` + +[`PQis忙碌`](libpq-async.html#LIBPQ-PQISBUSY)本身不会尝试从服务器读取数据;所以[`PQconsume输入` ](libpq-async.html#LIBPQ-PQCONSUMEINPUT)必须先调用,否则忙碌状态永远不会结束。 + +使用这些函数的典型应用程序将有一个主循环,它使用`选择()`要么`轮询()`等待它必须响应的所有条件。条件之一将从服务器输入,根据`选择()`表示文件描述符上的可读数据[`PQsocket`](libpq-status.html#LIBPQ-PQSOCKET).当主循环检测到输入就绪时,它应该调用[`PQconsume输入` ](libpq-async.html#LIBPQ-PQCONSUMEINPUT)读取输入。然后它可以调用[`PQis忙碌`](libpq-async.html#LIBPQ-PQISBUSY), 其次是[`PQgetResult`](libpq-async.html#LIBPQ-PQGETRESULT)如果[`PQis忙碌`](libpq-async.html#LIBPQ-PQISBUSY)返回假 (0)。它也可以调用`PQ 通知`检测`通知`消息(见[第 34.9 节](libpq-notify.html))。 + +使用的客户端[`PQsendQuery`](libpq-async.html#LIBPQ-PQSENDQUERY)/[`PQgetResult`](libpq-async.html#LIBPQ-PQGETRESULT)还可以尝试取消服务器仍在处理的命令;看[第 34.7 节](libpq-cancel.html).但不管[`取消`](libpq-cancel.html#LIBPQ-PQCANCEL),应用程序必须使用[`PQgetResult`](libpq-async.html#LIBPQ-PQGETRESULT)。成功取消只会导致命令提前终止。 + +通过使用上述功能,可以避免在等待来自数据库服务器的输入时发生阻塞。但是,应用程序仍有可能阻止等待向服务器发送输出。这种情况相对少见,但如果发送很长的SQL命令或数据值,就会发生这种情况。(如果应用程序通过`抄送`(然而)为了防止这种可能性并实现完全无阻塞的数据库操作,可以使用以下附加功能。 + +`PQsetnonblocking`[](<>) + +设置连接的非阻塞状态。 + +``` +int PQsetnonblocking(PGconn *conn, int arg); +``` + +如果需要,将连接状态设置为非阻塞*`阿格`*是1,如果*`阿格`*是0。如果正常,则返回0;如果错误,则返回1。 + +在非阻塞状态下,调用[`PQsendQuery`](libpq-async.html#LIBPQ-PQSENDQUERY),[`PQputline`](libpq-copy.html#LIBPQ-PQPUTLINE),[`PQputnbytes`](libpq-copy.html#LIBPQ-PQPUTNBYTES),[`PQputCopyData`](libpq-copy.html#LIBPQ-PQPUTCOPYDATA)和[`PQendcopy`](libpq-copy.html#LIBPQ-PQENDCOPY)如果需要再次调用,则不会阻止,而是返回错误。 + +注意[`PQexec`](libpq-exec.html#LIBPQ-PQEXEC)不支持非阻塞模式;如果它被调用,它将以阻塞方式运行。 + +`PQIS非阻塞`[](<>) + +返回数据库连接的阻塞状态。 + +``` +int PQisnonblocking(const PGconn *conn); +``` + +如果连接设置为非阻塞模式,则返回1;如果连接设置为阻塞模式,则返回0。 + +`PQflush`[](<>) + +尝试将任何排队的输出数据刷新到服务器。如果成功(或如果发送队列为空),则返回0;如果由于某种原因失败,则返回-1;如果无法发送发送队列中的所有数据,则返回1(这种情况仅在连接未阻塞时发生)。 + +``` +int PQflush(PGconn *conn); +``` + +在非阻塞连接上发送任何命令或数据后,调用[`PQflush`](libpq-async.html#LIBPQ-PQFLUSH).如果返回1,则等待套接字变为读写就绪。如果已准备好写入,请致电[`PQflush`](libpq-async.html#LIBPQ-PQFLUSH)再一次如果它已准备就绪,请致电[`pqconsumer输入` ](libpq-async.html#LIBPQ-PQCONSUMEINPUT),然后打电话[`PQflush`](libpq-async.html#LIBPQ-PQFLUSH)再一次重复直到[`PQflush`](libpq-async.html#LIBPQ-PQFLUSH)返回0。(有必要检查read ready(读取准备就绪)并使用[`pqconsumer输入` ](libpq-async.html#LIBPQ-PQCONSUMEINPUT),因为服务器可以阻止向我们发送数据,例如通知消息,并且在我们读取数据之前不会读取我们的数据。)一旦[`PQflush`](libpq-async.html#LIBPQ-PQFLUSH)返回0,等待套接字读取就绪,然后如上所述读取响应。 diff --git a/docs/X/libpq-build.md b/docs/en/libpq-build.md similarity index 100% rename from docs/X/libpq-build.md rename to docs/en/libpq-build.md diff --git a/docs/en/libpq-build.zh.md b/docs/en/libpq-build.zh.md new file mode 100644 index 
0000000000000000000000000000000000000000..f2b86bf59c5a259ef5e63ba99cb016dda8a888a7 --- /dev/null +++ b/docs/en/libpq-build.zh.md @@ -0,0 +1,95 @@ +## 34.21.构建libpq程序 + +[](<>) + +要使用libpq构建(即编译和链接)程序,您需要执行以下所有操作: + +- 包括`libpq-fe。H`头文件: + + ``` + #include + ``` + + 如果您未能做到这一点,那么您通常会从编译器中收到类似以下内容的错误消息: + + ``` + foo.c: In function `main': + foo.c:34: `PGconn' undeclared (first use in this function) + foo.c:35: `PGresult' undeclared (first use in this function) + foo.c:54: `CONNECTION_BAD' undeclared (first use in this function) + foo.c:68: `PGRES_COMMAND_OK' undeclared (first use in this function) + foo.c:95: `PGRES_TUPLES_OK' undeclared (first use in this function) + ``` + +- 通过提供`-我*`目录`*`选项添加到编译器中。(在某些情况下,编译器会在默认情况下查看相关目录,因此您可以忽略此选项。)例如,compile命令行可能如下所示: + + ``` + cc -c -I/usr/local/pgsql/include testprog.c + ``` + + 如果您使用的是makefiles,那么将该选项添加到`CPPFLAGS`变量: + + ``` + CPPFLAGS += -I/usr/local/pgsql/include + ``` + + 如果你的程序有可能被其他用户编译,那么你不应该像这样硬编码目录位置。相反,您可以运行该实用程序`pg_配置`[](<>)要了解头文件在本地系统上的位置,请执行以下操作: + + ``` + $ pg_config --includedir + /usr/local/include + ``` + + 如果你有`包装配置`[](<>)安装后,您可以改为运行: + + ``` + $ pkg-config --cflags libpq + -I/usr/local/include + ``` + + 请注意,这已经包括`-我`在小路前面。 + + 未能为编译器指定正确的选项将导致错误消息,例如: + + ``` + testlibpq.c:8:22: libpq-fe.h: No such file or directory + ``` + +- 链接最终程序时,请指定选项`-lpq`这样libpq库和选项`-L*`目录`*`将编译器指向libpq库所在的目录。(同样,默认情况下,编译器将搜索一些目录。)为了实现最大的可移植性,请将`-L`选择权`-lpq`选项例如: + + ``` + cc -o testprog testprog1.o testprog2.o -L/usr/local/pgsql/lib -lpq + ``` + + 您可以使用`pg_配置`也: + + ``` + $ pg_config --libdir + /usr/local/pgsql/lib + ``` + + 或者再次使用`包装配置`: + + ``` + $ pkg-config --libs libpq + -L/usr/local/pgsql/lib -lpq + ``` + + 再次注意,这将打印完整选项,而不仅仅是路径。 + + 指向此区域问题的错误消息可能如下所示: + + ``` + testlibpq.o: In function `main': + testlibpq.o(.text+0x60): undefined reference to `PQsetdbLogin' + testlibpq.o(.text+0x71): undefined reference to `PQstatus' + testlibpq.o(.text+0xa4): undefined reference to `PQerrorMessage' + ``` + + 这意味着你忘了`-lpq`. + + ``` + /usr/bin/ld: cannot find -lpq + ``` + + 这意味着你忘了`-L`选项或未指定正确的目录。 diff --git a/docs/X/libpq-connect.md b/docs/en/libpq-connect.md similarity index 100% rename from docs/X/libpq-connect.md rename to docs/en/libpq-connect.md diff --git a/docs/en/libpq-connect.zh.md b/docs/en/libpq-connect.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..de9eafc326ca778dda9a5794028c03b5da3ca6e6 --- /dev/null +++ b/docs/en/libpq-connect.zh.md @@ -0,0 +1,699 @@ +## 34.1.数据库连接控制功能 + +[34.1.1. 连结字符串](libpq-connect.html#LIBPQ-CONNSTRING) + +[34.1.2. 
参数关键字](libpq-connect.html#LIBPQ-PARAMKEYWORDS) + +以下函数处理与PostgreSQL后端服务器的连接。一个应用程序可以同时打开多个后端连接。(这样做的一个原因是访问多个数据库。)每个连接都由一个`PGconn`[](<>)对象,它是从函数中获取的[`PQconnectdb`](libpq-connect.html#LIBPQ-PQCONNECTDB),[`PQconnectdbParams`](libpq-connect.html#LIBPQ-PQCONNECTDBPARAMS)或[`PQsetdbLogin`](libpq-connect.html#LIBPQ-PQSETDBLOGIN)。请注意,这些函数将始终返回非空对象指针,除非内存太少,甚至无法分配`PGconn`对象这个[`PQ状态`](libpq-status.html#LIBPQ-PQSTATUS)在通过连接对象发送查询之前,应该调用函数来检查成功连接的返回值。 + +### 警告 + +如果不受信任的用户可以访问未采用[安全模式使用模式](ddl-schemas.html#DDL-SCHEMAS-PATTERNS),通过从中删除可公开写入的架构开始每个会话`搜索路径`.可以设置参数关键字`选项`重视`-csearch_path=`.或者,你可以发布`PQexec(*`康涅狄格州`*,“选择pg_目录。设置_配置('search_path','',false)”)`连接后。这种考虑并不是libpq特有的;它适用于执行任意SQL命令的每个接口。 + +### 警告 + +在Unix上,用开放的libpq连接分叉进程可能会导致不可预测的结果,因为父进程和子进程共享相同的套接字和操作系统资源。因此,不建议使用这种用法,尽管`执行官`从子进程加载新的可执行文件是安全的。 + +`PQconnectdbParams`[](<>) + +与数据库服务器建立新连接。 + +``` +PGconn *PQconnectdbParams(const char * const *keywords, + const char * const *values, + int expand_dbname); +``` + +此函数使用从两个数据库中获取的参数打开一个新的数据库连接`无效的`-端接阵列。第一个,`关键词`,定义为字符串数组,每个字符串都是一个关键字。第二个,`价值观`,给出每个关键字的值。不像[`PQsetdbLogin`](libpq-connect.html#LIBPQ-PQSETDBLOGIN)下面,参数集可以在不更改函数签名的情况下进行扩展,因此使用此函数(或其非阻塞类似物[`连接星图`](libpq-connect.html#LIBPQ-PQCONNECTSTARTPARAMS)和`民意测验`)是新应用程序编程的首选。 + +中列出了当前识别的参数关键字[第34.1.2节](libpq-connect.html#LIBPQ-PARAMKEYWORDS). + +传递的数组可以为空以使用所有默认参数,也可以包含一个或多个参数设置。它们的长度必须匹配。处理将首先停止`无效的`进入`关键词`大堆此外,如果`价值观`与非-`无效的` `关键词`参赛作品是`无效的`或空字符串,则忽略该条目,并继续处理下一对数组条目。 + +什么时候`展开_dbname`为非零,第一个的值*`库名`*检查关键字是否为*连接字符串*。如果是,则将其“扩展”为从字符串中提取的各个连接参数。如果该值包含等号,则该值被视为连接字符串,而不仅仅是数据库名称(`=`)或者它以URI模式指示符开头。(有关连接字符串格式的更多详细信息,请参见[第34.1.1节](libpq-connect.html#LIBPQ-CONNSTRING))只有第一次出现*`库名`*被这样对待;任何后续的*`库名`*参数作为普通数据库名称处理。 + +通常,参数数组从头到尾都会被处理。如果有任何关键字被重复,最后一个值(即`无效的`或空)使用。当在连接字符串中找到的关键字与在连接字符串中出现的关键字冲突时,此规则尤其适用`关键词`大堆因此,程序员可以确定数组项是否可以覆盖或被从连接字符串中获取的值覆盖。在展开的*`库名`*条目可以被连接字符串的字段覆盖,而这些字段又被后面出现的数组条目覆盖*`库名`*(但同样,仅当这些条目提供非空值时)。 + +在处理所有数组项和任何扩展的连接字符串后,任何未设置的连接参数都将用默认值填充。如果未设置参数对应的环境变量(请参见[第34.15节](libpq-envars.html))设置,则使用其值。如果环境变量也未设置,则使用参数的内置默认值。 + +`PQconnectdb`[](<>) + +与数据库服务器建立新连接。 + +``` +PGconn *PQconnectdb(const char *conninfo); +``` + +此函数使用从字符串中获取的参数打开一个新的数据库连接`康宁`. + +传递的字符串可以为空以使用所有默认参数,也可以包含一个或多个由空格分隔的参数设置,或者包含URI。看见[第34.1.1节](libpq-connect.html#LIBPQ-CONNSTRING)详细信息。 + +`PQsetdbLogin`[](<>) + +与数据库服务器建立新连接。 + +``` +PGconn *PQsetdbLogin(const char *pghost, + const char *pgport, + const char *pgoptions, + const char *pgtty, + const char *dbName, + const char *login, + const char *pwd); +``` + +这是它的前身[`PQconnectdb`](libpq-connect.html#LIBPQ-PQCONNECTDB)有一组固定的参数。它具有相同的功能,只是缺少的参数将始终采用默认值。写`无效的`或者为任何一个要默认的固定参数设置一个空字符串。 + +如果*`数据库名`*包含一个`=`签名或具有有效的连接URI前缀,则将其视为*`康宁`*字符串的方式与传递给[`PQconnectdb`](libpq-connect.html#LIBPQ-PQCONNECTDB),剩下的参数将按照[`PQconnectdbParams`](libpq-connect.html#LIBPQ-PQCONNECTDBPARAMS). 
+ +`pgtty`不再使用,传递的任何值都将被忽略。 + +`PQsetdb`[](<>) + +与数据库服务器建立新连接。 + +``` +PGconn *PQsetdb(char *pghost, + char *pgport, + char *pgoptions, + char *pgtty, + char *dbName); +``` + +这是一个调用[`PQsetdbLogin`](libpq-connect.html#LIBPQ-PQSETDBLOGIN)对于*`登录`*和*`pwd`*参数。它是为了向后兼容非常旧的程序而提供的。 + +`连接星图`[](<>)\ +`连接开始`[](<>)\ +`民意测验`[](<>) + +[](<>)以非阻塞方式连接到数据库服务器。 + +``` +PGconn *PQconnectStartParams(const char * const *keywords, + const char * const *values, + int expand_dbname); + +PGconn *PQconnectStart(const char *conninfo); + +PostgresPollingStatusType PQconnectPoll(PGconn *conn); +``` + +这三个函数用于打开与数据库服务器的连接,以便在执行时不会在远程I/O上阻塞应用程序的执行线程。这种方法的要点是,等待I/O完成可以发生在应用程序的主循环中,而不是在内部[`PQconnectdbParams`](libpq-connect.html#LIBPQ-PQCONNECTDBPARAMS)或[`PQconnectdb`](libpq-connect.html#LIBPQ-PQCONNECTDB),因此应用程序可以与其他活动并行管理此操作。 + +具有[`连接星图`](libpq-connect.html#LIBPQ-PQCONNECTSTARTPARAMS),使用从数据库中获取的参数进行数据库连接`关键词`和`价值观`数组,并由`展开_dbname`,如上所述[`PQconnectdbParams`](libpq-connect.html#LIBPQ-PQCONNECTDBPARAMS). + +具有`连接开始`,使用从字符串中获取的参数建立数据库连接`康宁`如上所述[`PQconnectdb`](libpq-connect.html#LIBPQ-PQCONNECTDB). + +也不[`连接星图`](libpq-connect.html#LIBPQ-PQCONNECTSTARTPARAMS)也没有`连接开始`也没有`民意测验`将被阻止,只要满足一些限制: + +- 这个`霍斯塔德酒店`参数必须适当使用,以防止进行DNS查询。请参阅中此参数的文档[第34.1.2节](libpq-connect.html#LIBPQ-PARAMKEYWORDS)详细信息。 + +- 如果你打电话[`PQtrace`](libpq-control.html#LIBPQ-PQTRACE),确保跟踪到的流对象不会被阻止。 + +- 在调用之前,必须确保套接字处于适当的状态`民意测验`,如下所述。 + + 要开始非阻塞连接请求,请调用`连接开始`或[`连接星图`](libpq-connect.html#LIBPQ-PQCONNECTSTARTPARAMS)。如果结果为空,则libpq无法分配新的`PGconn`结构否则,一个有效的`PGconn`返回指针(尽管尚未表示到数据库的有效连接)。下一个电话`PQstatus(康涅狄格州)`.如果结果是`连接不好`,连接尝试已失败,通常是因为连接参数无效。 + + 如果`连接开始`或[`连接星图`](libpq-connect.html#LIBPQ-PQCONNECTSTARTPARAMS)如果成功,下一步是轮询libpq,以便继续连接序列。使用`电源插座(康涅狄格州)`获取数据库连接基础套接字的描述符。(注意:不要假设插座在整个过程中保持不变。)`民意测验`电话。)这样循环:如果`康涅狄格州`上次回来`PGRES_轮询_读取`,等待套接字准备好读取(如所示)`选择()`, `投票`,或类似的系统功能)。然后打电话`康涅狄格州`再一次相反,如果`康涅狄格州`上次回来`PGRES_POLLING_写作`,等待套接字准备好写入,然后调用`康涅狄格州`再一次在第一次迭代中,即如果您尚未调用`民意测验`,就好像它最后一次回来一样`PGRES_POLLING_写作`.继续这个循环直到`康涅狄格州`返回`PGRES_轮询_失败`,表示连接过程失败,或`PGRES_POLLING_OK`,表示已成功建立连接。 + + 在连接过程中的任何时候,都可以通过调用[`PQ状态`](libpq-status.html#LIBPQ-PQSTATUS).如果这个电话回来`连接不好`,则连接过程失败;如果电话回音`好的`,则连接已准备就绪。这两种状态都可以从`民意测验`,如上所述。其他状态也可能在异步连接过程期间(并且仅在异步连接过程期间)发生。这些指示连接过程的当前阶段,例如,可能有助于向用户提供反馈。这些状态是: + +`连接已启动` + +正在等待连接。 + +`连接!` + +连接正常;等待发送。 + +`连接等待响应` + +正在等待服务器的响应。 + +`连接_验证_确定` + +接收认证;正在等待后端启动完成。 + +`连接\u SSL\u启动` + +协商SSL加密。 + +`连接_SETENV` + +协商环境驱动的参数设置。 + +`连接检查可写` + +检查连接是否能够处理写事务。 + +`连接消耗` + +正在使用连接上的所有剩余响应消息。 + +请注意,尽管这些常量将保持不变(为了保持兼容性),但应用程序永远不应该依赖于这些常量以特定顺序出现,或者根本不应该依赖于这些常量,或者依赖于状态始终是这些记录的值之一。应用程序可能会执行以下操作: + +``` +switch(PQstatus(conn)) +{ + case CONNECTION_STARTED: + feedback = "Connecting..."; + break; + + case CONNECTION_MADE: + feedback = "Connected to server..."; + break; +. +. +. + default: + feedback = "Connecting..."; +} +``` + +这个`连接超时`使用时忽略连接参数`民意测验`; 应用程序有责任决定是否经过了过长的时间。否则`连接开始`然后是`民意测验`循环相当于[`PQconnectdb`](libpq-connect.html#LIBPQ-PQCONNECTDB). + +注意,当`连接开始`或[`连接星图`](libpq-connect.html#LIBPQ-PQCONNECTSTARTPARAMS)返回非空指针,必须调用[`PQfinish`](libpq-connect.html#LIBPQ-PQFINISH)当你完成它,以便处置该结构和任何相关的内存块。即使连接尝试失败或放弃,也必须这样做。 + +`pqconn默认值`[](<>) + +返回默认连接选项。 + +``` +PQconninfoOption *PQconndefaults(void); + +typedef struct +{ + char *keyword; /* The keyword of the option */ + char *envvar; /* Fallback environment variable name */ + char *compiled; /* Fallback compiled in default value */ + char *val; /* Option's current value, or NULL */ + char *label; /* Label for field in connect dialog */ + char *dispchar; /* Indicates how to display this field + in a connect dialog. 
Values are: + "" Display entered value as is + "*" Password field - hide value + "D" Debug option - don't show by default */ + int dispsize; /* Field size in characters for dialog */ +} PQconninfoOption; +``` + +返回连接选项数组。这可以用来确定所有可能的[`PQconnectdb`](libpq-connect.html#LIBPQ-PQCONNECTDB)选项及其当前默认值。返回值指向`pQConInfoOption`结构,该结构以具有null的条目结尾`关键词`指针。如果无法分配内存,则返回空指针。请注意,当前的默认值(`瓦尔`字段)将取决于环境变量和其他上下文。丢失或无效的服务文件将被静默忽略。呼叫者必须将连接选项数据视为只读。 + +处理完选项数组后,将其传递给[`免费的`](libpq-misc.html#LIBPQ-PQCONNINFOFREE)。如果不这样做,则每次调用[`pqconn默认值`](libpq-connect.html#LIBPQ-PQCONNDEFAULTS). + +`PQconninfo`[](<>) + +返回实时连接使用的连接选项。 + +``` +PQconninfoOption *PQconninfo(PGconn *conn); +``` + +返回连接选项数组。这可以用来确定所有可能的[`PQconnectdb`](libpq-connect.html#LIBPQ-PQCONNECTDB)选项和用于连接到服务器的值。返回值指向`pQConInfoOption`结构,该结构以具有null的条目结尾`关键词`指针。以上所有注释仅供参考[`pqconn默认值`](libpq-connect.html#LIBPQ-PQCONNDEFAULTS)也适用于以下结果:[`PQconninfo`](libpq-connect.html#LIBPQ-PQCONNINFO). + +`pqconnsporse`[](<>) + +从提供的连接字符串返回已解析的连接选项。 + +``` +PQconninfoOption *PQconninfoParse(const char *conninfo, char **errmsg); +``` + +解析连接字符串并将结果选项作为数组返回;或返回`无效的`如果连接字符串有问题。此函数可用于提取[`PQconnectdb`](libpq-connect.html#LIBPQ-PQCONNECTDB)提供的连接字符串中的选项。返回值指向`pQConInfoOption`结构,该结构以具有null的条目结尾`关键词`指针。 + +所有合法选项都将出现在结果数组中,但`pQConInfoOption`对于连接字符串中不存在的任何选项`瓦尔`开始`无效的`; 不会插入默认值。 + +如果`嗯`不是`无效的`然后`*嗯`即将`无效的`在成功的时候,否则就要失败`马洛克`“d解释问题的错误字符串。”。(也有可能是`*嗯`着手`无效的`以及返回的函数`无效的`; 这表示内存不足。) + +处理完选项数组后,将其传递给[`免费的`](libpq-misc.html#LIBPQ-PQCONNINFOFREE)。如果不这样做,则每次调用[`pqconnsporse`](libpq-connect.html#LIBPQ-PQCONNINFOPARSE).相反,如果发生错误`嗯`不是`无效的`,请确保使用释放错误字符串[`PQfreemem`](libpq-misc.html#LIBPQ-PQFREEMEM). + +`PQfinish`[](<>) + +关闭与服务器的连接。还可以释放用户使用的内存`PGconn`对象 + +``` +void PQfinish(PGconn *conn); +``` + +请注意,即使服务器连接尝试失败(如[`PQ状态`](libpq-status.html#LIBPQ-PQSTATUS)),应用程序应调用[`PQfinish`](libpq-connect.html#LIBPQ-PQFINISH)释放服务器使用的内存`PGconn`对象这个`PGconn`之后不得再次使用指针[`PQfinish`](libpq-connect.html#LIBPQ-PQFINISH)已经打过电话了。 + +`PQreset`[](<>) + +重置与服务器的通信通道。 + +``` +void PQreset(PGconn *conn); +``` + +此功能将关闭与服务器的连接,并尝试使用之前使用的所有相同参数建立新连接。如果工作连接丢失,这可能有助于错误恢复。 + +`重新启动`[](<>)\ +`民意测验`[](<>) + +以非阻塞方式重置与服务器的通信通道。 + +``` +int PQresetStart(PGconn *conn); + +PostgresPollingStatusType PQresetPoll(PGconn *conn); +``` + +这些功能将关闭与服务器的连接,并尝试使用之前使用的所有相同参数建立新连接。如果工作连接丢失,这对于错误恢复非常有用。它们不同于[`PQreset`](libpq-connect.html#LIBPQ-PQRESET)(上图)因为它们以非阻塞的方式行动。这些功能受到的限制与[`连接星图`](libpq-connect.html#LIBPQ-PQCONNECTSTARTPARAMS), `连接开始`和`民意测验`. + +要启动连接重置,请致电[`重新启动`](libpq-connect.html#LIBPQ-PQRESETSTART).如果返回0,则重置失败。如果返回1,则使用`民意测验`与使用创建连接的方式完全相同`民意测验`. + +`PQpingParams`[](<>) + +[`PQpingParams`](libpq-connect.html#LIBPQ-PQPINGPARAMS)报告服务器的状态。它接受与的连接参数相同的连接参数[`PQconnectdbParams`](libpq-connect.html#LIBPQ-PQCONNECTDBPARAMS),如上所述。获取服务器状态不需要提供正确的用户名、密码或数据库名称值;但是,如果提供的值不正确,服务器将记录失败的连接尝试。 + +``` +PGPing PQpingParams(const char * const *keywords, + const char * const *values, + int expand_dbname); +``` + +该函数返回以下值之一: + +`好的` + +服务器正在运行,似乎正在接受连接。 + +`PQPING_拒绝` + +服务器正在运行,但处于不允许连接的状态(启动、关闭或崩溃恢复)。 + +`PQPING_无响应` + +无法联系服务器。这可能表明服务器没有运行,或者给定的连接参数有问题(例如,错误的端口号),或者存在网络连接问题(例如,防火墙阻止连接请求)。 + +`PQPING_无尝试` + +没有尝试联系服务器,因为提供的参数明显不正确,或者存在一些客户端问题(例如,内存不足)。 + +`PQping`[](<>) + +[`PQping`](libpq-connect.html#LIBPQ-PQPING)报告服务器的状态。它接受与的连接参数相同的连接参数[`PQconnectdb`](libpq-connect.html#LIBPQ-PQCONNECTDB),如上所述。获取服务器状态不需要提供正确的用户名、密码或数据库名称值;但是,如果提供的值不正确,服务器将记录失败的连接尝试。 + +``` +PGPing PQping(const char *conninfo); +``` + +返回值与的相同[`PQpingParams`](libpq-connect.html#LIBPQ-PQPINGPARAMS). 
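As an illustration of how these functions combine, the following sketch probes the server with `PQping` before connecting; every connection parameter shown is a placeholder. Note that `PQping` checks reachability only and validates no credentials:

```
#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    /* All connection parameters below are placeholders. */
    const char *conninfo = "host=localhost port=5432 dbname=mydb connect_timeout=10";
    PGconn     *conn;

    /* Cheap reachability probe; no credentials are validated here. */
    if (PQping(conninfo) != PQPING_OK)
    {
        fprintf(stderr, "server is not accepting connections\n");
        return 1;
    }

    conn = PQconnectdb(conninfo);
    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);         /* PQfinish is required even on failure */
        return 1;
    }

    /* ... use the connection ... */

    PQfinish(conn);
    return 0;
}
```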
+ +`PQsetSSLKeyPassHook_OpenSSL`[](<>) + +`PQsetSSLKeyPassHook_OpenSSL`允许应用程序覆盖libpq的[加密客户端证书密钥文件的默认处理](libpq-ssl.html#LIBPQ-SSL-CLIENTCERT)使用[sslpassword](libpq-connect.html#LIBPQ-CONNECT-SSLPASSWORD)或者互动提示。 + +``` +void PQsetSSLKeyPassHook_OpenSSL(PQsslKeyPassHook_OpenSSL_type hook); +``` + +应用程序将指针传递给带有签名的回调函数: + +``` +int callback_fn(char *buf, int size, PGconn *conn); +``` + +然后哪个libpq将调用*而不是*它的默认`PQdefaultSSLKeyPassHook_OpenSSL`汉德勒。回调函数应该确定密钥的密码,并将其复制到结果缓冲区*`缓冲器`*大小*`大小`*.绳子*`缓冲器`*必须以null结尾。回调必须返回存储在中的密码长度*`缓冲器`*不包括空终止符。失败时,应设置回调`buf[0]='\0'`然后返回0。看见`PQdefaultSSLKeyPassHook_OpenSSL`以libpq的源代码为例。 + +如果用户指定了显式密钥位置,其路径将为`康涅狄格州->斯尔基`当调用回调时。如果正在使用默认密钥路径,则该值将为空。对于作为引擎说明符的密钥,由引擎实现决定它们是使用OpenSSL密码回调还是定义自己的处理。 + +应用回调可以选择将未处理的案例委托给`PQdefaultSSLKeyPassHook_OpenSSL`,或者先调用它,如果它返回0,则尝试其他操作,或者完全覆盖它。 + +回拨*不能*例外情况下,脱离正常流量控制,`longjmp(…)`等。它必须正常返回。 + +`PQgetSSLKeyPassHook_OpenSSL`[](<>) + +`PQgetSSLKeyPassHook_OpenSSL`返回当前客户端证书密钥密码挂钩,或`无效的`如果没有设置。 + +``` +PQsslKeyPassHook_OpenSSL_type PQgetSSLKeyPassHook_OpenSSL(void); +``` + +### 34.1.1.连接字符串 + +[](<>)[](<>) + +几个libpq函数解析用户指定的字符串以获取连接参数。这些字符串有两种公认的格式:普通关键字/值字符串和URI。URI通常遵循[RFC 3986](https://tools.ietf.org/html/rfc3986),但允许使用多主机连接字符串,如下所述。 + +#### 34.1.1.1.关键字/值连接字符串 + +在关键字/值格式中,每个参数设置的格式为*`关键词`* `=` *`价值`*,设置之间有空格。设置等号周围的空格是可选的。例如,要写入空值或包含空格的值,请将其用单引号括起来`关键字='a value'`.值中的单引号和反斜杠必须用反斜杠转义,即。,`\'`和`\\`. + +例子: + +``` +host=localhost port=5432 dbname=mydb connect_timeout=10 +``` + +中列出了识别的参数关键字[第34.1.2节](libpq-connect.html#LIBPQ-PARAMKEYWORDS). + +#### 34.1.1.2.连接URI + +连接URI的一般形式是: + +``` +postgresql://[userspec@][hostspec][/dbname][?paramspec] + +where userspec is: + +user[:password] + +and hostspec is: + +[host][:port][,...] + +and paramspec is: + +name=value[&...] +``` + +URI方案指示符可以是`postgresql://`或`博士后://`。剩余的每个URI部分都是可选的。以下示例说明了有效的URI语法: + +``` +postgresql:// +postgresql://localhost +postgresql://localhost:5433 +postgresql://localhost/mydb +postgresql://user@localhost +postgresql://user:secret@localhost +postgresql://other@localhost/otherdb?connect_timeout=10&application_name=myapp +postgresql://host1:123,host2:456/somedb?target_session_attrs=any&application_name=myapp +``` + +通常出现在URI层次结构部分的值也可以作为命名参数给出。例如: + +``` +postgresql:///mydb?host=localhost&port=5433 +``` + +所有命名参数必须与中列出的关键字匹配[第34.1.2节](libpq-connect.html#LIBPQ-PARAMKEYWORDS),除了为了与JDBC连接URI兼容之外`ssl=true`翻译成`sslmode=require`. 
+ +连接URI需要使用[百分比编码](https://tools.ietf.org/html/rfc3986#section-2.1)如果其任何部分包含具有特殊含义的符号。下面是一个例子,其中等号(`=`)被替换为`%3D`还有空间特征`%20`: + +``` +postgresql://user@localhost:5433/mydb?options=-c%20synchronous_commit%3Doff +``` + +主机部分可以是主机名或IP地址。要指定IPv6地址,请将其括在方括号中: + +``` +postgresql://[2001:db8::1234]/database +``` + +主体零件的解释如参数所述[主办](libpq-connect.html#LIBPQ-CONNECT-HOST)。特别是,如果主机部分为空或看起来像绝对路径名,则选择Unix域套接字连接,否则会启动TCP/IP连接。但是,请注意,斜杠是URI层次结构部分中的保留字符。因此,要指定非标准的Unix域套接字目录,可以省略URI的主机部分并将主机指定为命名参数,或者对URI的主机部分的路径进行百分比编码: + +``` +postgresql:///dbname?host=/var/lib/postgresql +postgresql://%2Fvar%2Flib%2Fpostgresql/dbname +``` + +可以在一个URI中指定多个主机组件,每个组件都有一个可选的端口组件。表单的URI`Postgresql://host1:port1,host2:port2,host3:port3/`相当于以下形式的连接字符串`主机=主机1,主机2,主机3端口=端口1,端口2,端口3`。如下文所述,将依次尝试每个主机,直到成功建立连接。 + +#### 34.1.1.3.指定多个主机 + +可以指定要连接的多个主机,以便按给定顺序尝试它们。在关键字/值格式中`主办`, `霍斯塔德酒店`和`港口城市`选项接受逗号分隔的值列表。在指定的每个选项中必须给出相同数量的元素,例如,第一个`霍斯塔德酒店`对应于第一个主机名,第二个`霍斯塔德酒店`对应于第二个主机名,依此类推。如果只有一个例外的话`港口城市`如果指定,则它适用于所有主机。 + +在连接URI格式中,可以列出多个`主持人:波特`表中以逗号分隔的对`主办`URI的组件。 + +在这两种格式中,一个主机名都可以转换为多个网络地址。一个常见的例子是同时具有IPv4和IPv6地址的主机。 + +当指定了多个主机时,或者当一个主机名被转换为多个地址时,将按顺序尝试所有主机和地址,直到其中一个成功。如果无法联系到任何主机,则连接将失败。如果成功建立连接,但身份验证失败,则不会尝试列表中的其余主机。 + +如果使用密码文件,您可以为不同的主机使用不同的密码。列表中的每个主机的所有其他连接选项都相同;例如,不可能为不同的主机指定不同的用户名。 + +### 34.1.2.参数关键字 + +当前识别的参数关键字为: + +`主办` + +要连接到的主机的名称。[](<>)如果主机名看起来像一个绝对路径名,则它指定Unix域通信,而不是TCP/IP通信;该值是存储套接字文件的目录的名称。(在Unix上,绝对路径名以斜杠开头。在Windows上,也可以识别以驱动器号开头的路径。)如果主机名以`@`,它被作为抽象名称空间中的Unix域套接字(当前Linux和Windows支持)。当`主办`未指定或为空,是为了连接到Unix域套接字[](<>)在里面`/tmp`(或者构建PostgreSQL时指定的任何套接字目录)。在Windows和没有Unix域套接字的计算机上,默认设置是连接到`本地服务器`. + +也接受以逗号分隔的主机名列表,在这种情况下,列表中的每个主机名都会按顺序进行尝试;列表中的一个空项将选择如上所述的默认行为。看见[第34.1.1.3节](libpq-connect.html#LIBPQ-MULTIPLE-HOSTS)详细信息。 + +`霍斯塔德酒店` + +要连接的主机的数字IP地址。这应该是标准的IPv4地址格式,例如。,`172.28.40.9`。如果您的计算机支持IPv6,您也可以使用这些地址。当为此参数指定非空字符串时,始终使用TCP/IP通信。如果未指定此参数,则`主办`将被查找以找到相应的IP地址——或者,如果`主办`指定一个IP地址,该值将直接使用。 + +使用`霍斯塔德酒店`允许应用程序避免主机名查找,这在有时间限制的应用程序中可能很重要。但是,GSSAPI或SSPI身份验证方法以及`验证完整`SSL证书验证。使用以下规则: + +- 如果`主办`没有指定`霍斯塔德酒店`,将进行主机名查找。(使用时)`民意测验`,当`民意测验`首先考虑这个主机名,它可能会导致`民意测验`要在相当长的时间内阻塞。) + +- 如果`霍斯塔德酒店`没有指定`主办`,为`霍斯塔德酒店`提供服务器网络地址。如果身份验证方法需要主机名,则连接尝试将失败。 + +- 如果两者都有`主办`和`霍斯塔德酒店`如果已指定,则`霍斯塔德酒店`提供服务器网络地址。价值`主办`被忽略,除非身份验证方法需要它,在这种情况下,它将被用作主机名。 + + 请注意,如果`主办`不是网络地址处的服务器名称`霍斯塔德酒店`.而且,当两者都`主办`和`霍斯塔德酒店`指定的,`主办`用于在密码文件中标识连接(请参阅[第34.16节](libpq-pgpass.html)). + + 以逗号分隔的列表`霍斯塔德酒店`值也被接受,在这种情况下,列表中的每个主机都会按顺序进行尝试。列表中的空项会导致使用相应的主机名,如果该主机名为空,则使用默认主机名。看见[第34.1.1.3节](libpq-connect.html#LIBPQ-MULTIPLE-HOSTS)详细信息。 + + 没有主机名或主机地址,libpq将使用本地Unix域套接字进行连接;或者在Windows和没有Unix域套接字的计算机上,它将尝试连接到`本地服务器`. + +`港口城市` + +服务器主机上要连接的端口号,或Unix域连接的套接字文件扩展名。[](<>)如果系统中提供了多个主机`主办`或`霍斯塔德酒店`参数,此参数可以指定与主机列表长度相同的端口的逗号分隔列表,也可以指定用于所有主机的单个端口号。空字符串或逗号分隔列表中的空项指定生成PostgreSQL时建立的默认端口号。 + +`库名` + +数据库名称。默认值与用户名相同。在某些情况下,会检查扩展格式的值;看见[第34.1.1节](libpq-connect.html#LIBPQ-CONNSTRING)更多关于这些的细节。 + +`使用者` + +连接为的PostgreSQL用户名。默认值与运行应用程序的用户的操作系统名称相同。 + +`暗语` + +如果服务器要求密码身份验证,则使用密码。 + +`密码文件` + +指定用于存储密码的文件名(请参见[第34.16节](libpq-pgpass.html)).默认为`~/.pgpass`或`%APPDATA%\postgresql\pgpass。形态`在微软Windows上。(如果此文件不存在,则不会报告错误。) + +`通道绑定` + +此选项控制客户端对通道绑定的使用。一套`要求`表示连接必须使用通道绑定,`更喜欢`意味着客户端将选择通道绑定(如果可用),以及`使残废`防止使用通道绑定。默认值是`更喜欢`如果PostgreSQL是使用SSL支持编译的;否则默认为`使残废`. 
+
+Channel binding is a method for the server to authenticate itself to the client. It is only supported over SSL connections with PostgreSQL 11 or later servers using the SCRAM authentication method.
+
+`connect_timeout`
+
+Maximum time to wait while connecting, in seconds (write as a decimal integer, e.g., `10`). Zero, negative, or not specified means wait indefinitely. The minimum allowed timeout is 2 seconds, therefore a value of `1` is interpreted as `2`. This timeout applies separately to each host name or IP address. For example, if you specify two hosts and `connect_timeout` is 5, each host will time out if no connection is made within 5 seconds, so the total time spent waiting for a connection might be up to 10 seconds.
+
+`client_encoding`
+
+This sets the `client_encoding` configuration parameter for this connection. In addition to the values accepted by the corresponding server option, you can use `auto` to determine the right encoding from the current locale in the client (the `LC_CTYPE` environment variable on Unix systems).
+
+`options`
+
+Specifies command-line options to send to the server at connection start. For example, setting this to `-c geqo=off` sets the session's value of the `geqo` parameter to `off`. Spaces within this string are considered to separate command-line arguments, unless escaped with a backslash (`\`); write `\\` to represent a literal backslash. For a detailed discussion of the available options, consult [Chapter 20](runtime-config.html).
+
+`application_name`
+
+Specifies a value for the [application_name](runtime-config-logging.html#GUC-APPLICATION-NAME) configuration parameter.
+
+`fallback_application_name`
+
+Specifies a fallback value for the [application_name](runtime-config-logging.html#GUC-APPLICATION-NAME) configuration parameter. This value will be used if no value has been given for `application_name` via a connection parameter or the `PGAPPNAME` environment variable. Specifying a fallback name is useful in generic utility programs that wish to set a default application name but allow it to be overridden by the user.
+
+`keepalives`
+
+Controls whether client-side TCP keepalives are used. The default value is 1, meaning on, but you can change this to 0, meaning off, if keepalives are not wanted. This parameter is ignored for connections made via a Unix-domain socket.
+
+`keepalives_idle`
+
+Controls the number of seconds of inactivity after which TCP should send a keepalive message to the server. A value of zero uses the system default. This parameter is ignored for connections made via a Unix-domain socket, or if keepalives are disabled. It is only supported on systems where `TCP_KEEPIDLE` or an equivalent socket option is available, and on Windows; on other systems, it has no effect.
+
+`keepalives_interval`
+
+Controls the number of seconds after which a TCP keepalive message that is not acknowledged by the server should be retransmitted. A value of zero uses the system default. This parameter is ignored for connections made via a Unix-domain socket, or if keepalives are disabled. It is only supported on systems where `TCP_KEEPINTVL` or an equivalent socket option is available, and on Windows; on other systems, it has no effect.
+
+`keepalives_count`
+
+Controls the number of TCP keepalives that can be lost before the client's connection to the server is considered dead. A value of zero uses the system default. This parameter is ignored for connections made via a Unix-domain socket, or if keepalives are disabled. It is only supported on systems where `TCP_KEEPCNT` or an equivalent socket option is available; on other systems, it has no effect.
+
+`tcp_user_timeout`
+
+Controls the number of milliseconds that transmitted data may remain unacknowledged before a connection is forcibly closed. A value of zero uses the system default. This parameter is ignored for connections made via a Unix-domain socket. It is only supported on systems where `TCP_USER_TIMEOUT` is available; on other systems, it has no effect.
+
+`tty`
+
+Ignored (formerly, this specified where to send server debug output).
+
+`replication`
+
+This option determines whether the connection should use the replication protocol instead of the normal protocol. This is what PostgreSQL replication connections as well as tools such as pg_basebackup use internally, but it can also be used by third-party applications. For a description of the replication protocol, consult [Section 53.4](protocol-replication.html).
+
+The following values, which are case-insensitive, are supported:
+
+`true`, `on`, `yes`, `1`
+
+The connection goes into physical replication mode.
+
+`database`
+
+The connection goes into logical replication mode, connecting to the database specified in the `dbname` parameter.
+
+`false`, `off`, `no`, `0`
+
+The connection is a regular one, which is the default behavior.
+
+In physical or logical replication mode, only the simple query protocol can be used.
+
+`gssencmode`
+
+This option determines whether or with what priority a secure GSS TCP/IP connection will be negotiated with the server. There are three modes:
+
+`disable`
+
+Only try a non-GSSAPI-encrypted connection
+
+`prefer` (default)
+
+If there are GSSAPI credentials present (i.e., in a credential cache), first try a GSSAPI-encrypted connection; if that fails or there are no credentials, try a non-GSSAPI-encrypted connection. This is the default when PostgreSQL has been compiled with GSSAPI support.
+
+`require`
+
+Only try a GSSAPI-encrypted connection
+
+`gssencmode` is ignored for Unix-domain socket communication. If PostgreSQL is compiled without GSSAPI support, using the `require` option will cause an error, while `prefer` will be accepted but libpq will not actually attempt a GSSAPI-encrypted connection.[](<>)
+
+`sslmode`
+
+This option determines whether or with what priority a secure SSL TCP/IP connection will be negotiated with the server. There are six modes:
+
+`disable`
+
+Only try a non-SSL connection
+
+`allow`
+
+First try a non-SSL connection; if that fails, try an SSL connection
+
+`prefer` (default)
+
+First try an SSL connection; if that fails, try a non-SSL connection
+
+`require`
+
+Only try an SSL connection. If a root CA file is present, verify the certificate in the same way as if `verify-ca` was specified
+
+`verify-ca`
+
+Only try an SSL connection, and verify that the server certificate is issued by a trusted certificate authority (CA)
+
+`verify-full`
+
+Only try an SSL connection, verify that the server certificate is issued by a trusted CA and that the requested server host name matches that in the certificate
+
+See [Section 34.19](libpq-ssl.html) for a detailed description of how these options work.
+
+`sslmode` is ignored for Unix-domain socket communication. If PostgreSQL is compiled without SSL support, using options `require`, `verify-ca`, or `verify-full` will cause an error, while options `allow` and `prefer` will be accepted but libpq will not actually attempt an SSL connection.[](<>)
+
+Note that if GSSAPI encryption is possible, that will be used in preference to SSL encryption, regardless of the value of `sslmode`. To force use of SSL encryption in an environment that has working GSSAPI infrastructure (such as a Kerberos server), also set `gssencmode` to `disable`.
+
+`requiressl`
+
+This option is deprecated in favor of the `sslmode` setting.
+
+If set to 1, an SSL connection to the server is required (this is equivalent to `sslmode` `require`). libpq will then refuse to connect if the server does not accept an SSL connection. If set to 0 (default), libpq will negotiate the connection type with the server (equivalent to `sslmode` `prefer`). This option is only available if PostgreSQL is compiled with SSL support.
+
+`sslcompression`
+
+If set to 1, data sent over SSL connections will be compressed. If set to 0, compression will be disabled. The default is 0.
+This parameter is ignored if a connection without SSL is made.
+
+SSL compression is nowadays considered insecure and its use is no longer recommended. OpenSSL 1.1.0 disables compression by default, and many operating system distributions disable it in prior versions as well, so setting this parameter to on will not have any effect if the server does not accept compression. PostgreSQL 14 disables compression completely in the backend.
+
+If security is not a primary concern, compression can improve throughput if the network is the bottleneck. Disabling compression can improve response time and throughput if CPU performance is the limiting factor.
+
+`sslcert`
+
+This parameter specifies the file name of the client SSL certificate, replacing the default `~/.postgresql/postgresql.crt`. This parameter is ignored if an SSL connection is not made.
+
+`sslkey`
+
+This parameter specifies the location for the secret key used for the client certificate. It can either specify a file name that will be used instead of the default `~/.postgresql/postgresql.key`, or it can specify a key obtained from an external “engine” (engines are OpenSSL loadable modules). An external engine specification should consist of a colon-separated engine name and an engine-specific key identifier. This parameter is ignored if an SSL connection is not made.
+
+`sslpassword`
+
+This parameter specifies the password for the secret key specified in `sslkey`, allowing client certificate private keys to be stored in encrypted form on disk even when interactive passphrase input is not practical.
+
+Specifying this parameter with any non-empty value suppresses the `Enter PEM pass phrase:` prompt that OpenSSL will emit by default when an encrypted client certificate key is provided to libpq.
+
+If the key is not encrypted this parameter is ignored. The parameter has no effect on keys specified by OpenSSL engines unless the engine uses the OpenSSL password callback mechanism for prompts.
+
+There is no environment variable equivalent to this option, and no facility for looking it up in `.pgpass`. It can be used in a service file connection definition. Users with more sophisticated uses should consider using OpenSSL engines and tools like PKCS#11 or USB crypto offload devices.
+
+`sslrootcert`
+
+This parameter specifies the name of a file containing SSL certificate authority (CA) certificate(s). If the file exists, the server's certificate will be verified to be signed by one of these authorities. The default is `~/.postgresql/root.crt`.
+
+`sslcrl`
+
+This parameter specifies the file name of the SSL server certificate revocation list (CRL). Certificates listed in this file, if it exists, will be rejected while attempting to authenticate the server's certificate. If neither [sslcrl](libpq-connect.html#LIBPQ-CONNECT-SSLCRL) nor [sslcrldir](libpq-connect.html#LIBPQ-CONNECT-SSLCRLDIR) is set, this setting is taken as `~/.postgresql/root.crl`.
+
+`sslcrldir`
+
+This parameter specifies the directory name of the SSL server certificate revocation list (CRL). Certificates listed in the files in this directory, if it exists, will be rejected while attempting to authenticate the server's certificate.
+
+The directory needs to be prepared with the OpenSSL command `openssl rehash` or `c_rehash`. See its documentation for details.
+
+Both `sslcrl` and `sslcrldir` can be specified together.
+
+`sslsni`[](<>)
+
+If set to 1 (default), libpq sets the TLS extension “Server Name Indication” (SNI) on SSL-enabled connections. By setting this parameter to 0, this is turned off.
+
+The Server Name Indication can be used by SSL-aware proxies to route connections without having to decrypt the SSL stream. (Note that this requires a proxy that is aware of the PostgreSQL protocol handshake, not just any SSL proxy.) However, SNI makes the destination host name appear in cleartext in the network traffic, so it might be undesirable in some cases.
+
+`requirepeer`
+
+This parameter specifies the operating-system user name of the server, for example `requirepeer=postgres`. When making a Unix-domain socket connection, if this parameter is set, the client checks at the beginning of the connection that the server process is running under the specified user name; if it is not, the connection is aborted with an error. This parameter can be used to provide server authentication similar to that available with SSL certificates on TCP/IP connections. (Note that if the Unix-domain socket is in `/tmp` or another publicly writable location, any user could start a server listening there. Use this parameter to ensure that you are connected to a server run by a trusted user.) This option is only supported on platforms for which the `peer` authentication method is implemented; see [Section 21.9](auth-peer.html).
+
+`ssl_min_protocol_version`
+
+This parameter specifies the minimum SSL/TLS protocol version to allow for the connection. Valid values are `TLSv1`, `TLSv1.1`, `TLSv1.2` and `TLSv1.3`. The supported protocols depend on the version of OpenSSL used, older versions not supporting the most modern protocol versions. If not specified, the default is `TLSv1.2`, which satisfies industry best practices as of this writing.
+
+`ssl_max_protocol_version`
+
+This parameter specifies the maximum SSL/TLS protocol version to allow for the connection. Valid values are `TLSv1`, `TLSv1.1`, `TLSv1.2` and `TLSv1.3`. The supported protocols depend on the version of OpenSSL used, older versions not supporting the most modern protocol versions. If not set, this parameter is ignored and the connection will use the maximum bound defined by the backend, if set. Setting the maximum protocol version is mainly useful for testing, or if some component has issues working with a newer protocol.
+
+`krbsrvname`
+
+Kerberos service name to use when authenticating with GSSAPI. This must match the service name specified in the server configuration for Kerberos authentication to succeed. (See also [Section 21.6](gssapi-auth.html).)
+The default value is normally `postgres`, but that can be changed when building PostgreSQL via the `--with-krb-srvnam` option of configure. In most environments, this parameter never needs to be changed. Some Kerberos implementations might require a different service name, such as Microsoft Active Directory, which requires the service name to be in upper case (`POSTGRES`).
+
+`gsslib`
+
+GSS library to use for GSSAPI authentication. Currently this is disregarded except on Windows builds that include both GSSAPI and SSPI support. In that case, set this to `gssapi` to cause libpq to use the GSSAPI library for authentication instead of the default SSPI.
+
+`service`
+
+Service name to use for additional parameters. It specifies a service name in `pg_service.conf` that holds additional connection parameters. This allows applications to specify only a service name so connection parameters can be centrally maintained. See [Section 34.17](libpq-pgservice.html).
+
+`target_session_attrs`
+
+This option determines whether the session must have certain properties to be acceptable. It is typically used in combination with multiple host names to select the first acceptable alternative among several hosts. There are six modes:
+
+`any` (default)
+
+any successful connection is acceptable
+
+`read-write`
+
+session must accept read-write transactions by default (that is, the server must not be in hot standby mode and the `default_transaction_read_only` parameter must be `off`)
+
+`read-only`
+
+session must not accept read-write transactions by default (the converse)
+
+`primary`
+
+server must not be in hot standby mode
+
+`standby`
+
+server must be in hot standby mode
+
+`prefer-standby`
+
+first try to find a standby server, but if none of the listed hosts is a standby server, try again in `any` mode
diff --git a/docs/X/libpq-copy.md b/docs/en/libpq-copy.md
similarity index 100%
rename from docs/X/libpq-copy.md
rename to docs/en/libpq-copy.md
diff --git a/docs/en/libpq-copy.zh.md b/docs/en/libpq-copy.zh.md
new file mode 100644
index 0000000000000000000000000000000000000000..bf9d8b388a502f0acef8ceb1a288418b36671efc
--- /dev/null
+++ b/docs/en/libpq-copy.zh.md
@@ -0,0 +1,163 @@
+## 34.10. Functions Associated with the `COPY` Command
+
+[34.10.1. Functions for Sending `COPY` Data](libpq-copy.html#LIBPQ-COPY-SEND)
+
+[34.10.2. Functions for Receiving `COPY` Data](libpq-copy.html#LIBPQ-COPY-RECEIVE)
+
+[34.10.3. Obsolete Functions for `COPY`](libpq-copy.html#LIBPQ-COPY-DEPRECATED)
+
+[](<>)
+
+The `COPY` command in PostgreSQL has options to read from or write to the network connection used by libpq. The functions described in this section allow applications to take advantage of this capability by supplying or consuming copied data.
+
+The overall process is that the application first issues the SQL `COPY` command via [`PQexec`](libpq-exec.html#LIBPQ-PQEXEC) or one of the equivalent functions. The response to this (if there is no error in the command) will be a `PGresult` object bearing a status code of `PGRES_COPY_OUT` or `PGRES_COPY_IN` (depending on the specified copy direction). The application should then use the functions of this section to receive or transmit data rows. When the data transfer is complete, another `PGresult` object is returned to indicate success or failure of the transfer. Its status will be `PGRES_COMMAND_OK` for success or `PGRES_FATAL_ERROR` if some problem was encountered. At this point further SQL commands can be issued via [`PQexec`](libpq-exec.html#LIBPQ-PQEXEC). (It is not possible to execute other SQL commands using the same connection while a `COPY` operation is in progress.)
+
+If a `COPY` command is issued via [`PQexec`](libpq-exec.html#LIBPQ-PQEXEC) in a string that could contain additional commands, the application must continue fetching results via [`PQgetResult`](libpq-async.html#LIBPQ-PQGETRESULT) after completing the `COPY` sequence. Only when [`PQgetResult`](libpq-async.html#LIBPQ-PQGETRESULT) returns `NULL` is it certain that the [`PQexec`](libpq-exec.html#LIBPQ-PQEXEC) command string is done and it is safe to issue more commands.
+
+The functions of this section should be executed only after obtaining a result status of `PGRES_COPY_OUT` or `PGRES_COPY_IN` from [`PQexec`](libpq-exec.html#LIBPQ-PQEXEC) or [`PQgetResult`](libpq-async.html#LIBPQ-PQGETRESULT).
+
+A `PGresult` object bearing one of these status values carries some additional data about the `COPY` operation that is starting. This additional data is available using functions that are also used in connection with query results:
+
+`PQnfields`[](<>)
+
+Returns the number of columns (fields) to be copied.
+
+`PQbinaryTuples`[](<>)
+
+0 indicates the overall copy format is textual (rows separated by newlines, columns separated by separator characters, etc). 1 indicates the overall copy format is binary. See [COPY](sql-copy.html) for more information.
+
+`PQfformat`[](<>)
+
+Returns the format code (0 for text, 1 for binary) associated with each column of the copy operation.
+The per-column format codes will always be zero when the overall copy format is textual, but the binary format can support both text and binary columns. (However, as of the current implementation of `COPY`, only binary columns appear in a binary copy; so the per-column formats always match the overall format at present.)
+
+### 34.10.1. Functions for Sending `COPY` Data
+
+These functions are used to send data during `COPY FROM STDIN`. They will fail if called when the connection is not in `COPY_IN` state.
+
+`PQputCopyData`[](<>)
+
+Sends data to the server during `COPY_IN` state.
+
+```
+int PQputCopyData(PGconn *conn,
+                  const char *buffer,
+                  int nbytes);
+```
+
+Transmits the `COPY` data in the specified *`buffer`*, of length *`nbytes`*, to the server. The result is 1 if the data was queued, zero if it was not queued because of full buffers (this will only happen in nonblocking mode), or -1 if an error occurred. (Use [`PQerrorMessage`](libpq-status.html#LIBPQ-PQERRORMESSAGE) to retrieve details if the return value is -1. If the value is zero, wait for write-ready and try again.)
+
+The application can divide the `COPY` data stream into buffer loads of any convenient size. Buffer-load boundaries have no semantic significance when sending. The contents of the data stream must match the data format expected by the `COPY` command; see [COPY](sql-copy.html) for details.
+
+`PQputCopyEnd`[](<>)
+
+Sends end-of-data indication to the server during `COPY_IN` state.
+
+```
+int PQputCopyEnd(PGconn *conn,
+                 const char *errormsg);
+```
+
+Ends the `COPY_IN` operation successfully if *`errormsg`* is `NULL`. If *`errormsg`* is not `NULL` then the `COPY` is forced to fail, with the string pointed to by *`errormsg`* used as the error message. (One should not assume that this exact error message will come back from the server, however, as the server might have already failed the `COPY` for its own reasons.)
+
+The result is 1 if the termination message was sent; or in nonblocking mode, this may only indicate that the termination message was successfully queued. (In nonblocking mode, to be certain that the data has been sent, you should next wait for write-ready and call [`PQflush`](libpq-async.html#LIBPQ-PQFLUSH), repeating until it returns zero.) Zero indicates that the function could not queue the termination message because of full buffers; this will only happen in nonblocking mode. (In this case, wait for write-ready and try the [`PQputCopyEnd`](libpq-copy.html#LIBPQ-PQPUTCOPYEND) call again.) If a hard error occurs, -1 is returned; you can use [`PQerrorMessage`](libpq-status.html#LIBPQ-PQERRORMESSAGE) to retrieve details.
+
+After successfully calling [`PQputCopyEnd`](libpq-copy.html#LIBPQ-PQPUTCOPYEND), call [`PQgetResult`](libpq-async.html#LIBPQ-PQGETRESULT) to obtain the final result status of the `COPY` command. One can wait for this result to be available in the usual way. Then return to normal operation.
+
+### 34.10.2. Functions for Receiving `COPY` Data
+
+These functions are used to receive data during `COPY TO STDOUT`. They will fail if called when the connection is not in `COPY_OUT` state.
+
+`PQgetCopyData`[](<>)
+
+Receives data from the server during `COPY_OUT` state.
+
+```
+int PQgetCopyData(PGconn *conn,
+                  char **buffer,
+                  int async);
+```
+
+Attempts to obtain another row of data from the server during a `COPY`. Data is always returned one data row at a time; if only a partial row is available, it is not returned. Successful return of a data row involves allocating a chunk of memory to hold the data. The *`buffer`* parameter must be non-`NULL`. *`*buffer`* is set to point to the allocated memory, or to `NULL` in cases where no buffer is returned. A non-`NULL` result buffer should be freed using [`PQfreemem`](libpq-misc.html#LIBPQ-PQFREEMEM) when no longer needed.
+
+When a row is successfully returned, the return value is the number of data bytes in the row (this will always be greater than zero). The returned string is always null-terminated, though this is probably only useful for textual `COPY`. A result of zero indicates that the `COPY` is still in progress, but no row is yet available (this is only possible when *`async`* is true). A result of -1 indicates that the `COPY` is done. A result of -2 indicates that an error occurred (consult [`PQerrorMessage`](libpq-status.html#LIBPQ-PQERRORMESSAGE) for the reason).
+
+When *`async`* is true (not zero), [`PQgetCopyData`](libpq-copy.html#LIBPQ-PQGETCOPYDATA) will not block waiting for input; it will return zero if the `COPY` is still in progress but no complete row is available. (In this case wait for read-ready and then call [`PQconsumeInput`](libpq-async.html#LIBPQ-PQCONSUMEINPUT) before calling [`PQgetCopyData`](libpq-copy.html#LIBPQ-PQGETCOPYDATA) again.) When *`async`* is false (zero), [`PQgetCopyData`](libpq-copy.html#LIBPQ-PQGETCOPYDATA) will block until data is available or the operation completes.
+
+After [`PQgetCopyData`](libpq-copy.html#LIBPQ-PQGETCOPYDATA) returns -1, call [`PQgetResult`](libpq-async.html#LIBPQ-PQGETRESULT) to obtain the final result status of the `COPY` command. One can wait for this result to be available in the usual way. Then return to normal operation.
+
+### 34.10.3. Obsolete Functions for `COPY`
+
+These functions represent older methods of handling `COPY`. Although they still work, they are deprecated due to poor error handling, inconvenient methods of detecting end-of-data, and lack of support for binary or nonblocking transfers.
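+
+Before turning to those obsolete functions, here is a minimal sketch of the modern interface just described (the table `mytab` and its columns are hypothetical; an open `PGconn *conn` plus `<stdio.h>` and `<string.h>` are assumed), streaming one text-format row via `COPY FROM STDIN`:
+
+```
+PGresult *res = PQexec(conn, "COPY mytab (a, b) FROM STDIN");
+
+if (PQresultStatus(res) == PGRES_COPY_IN)
+{
+    const char *row = "first\tsecond\n";   /* one text-format COPY row */
+
+    PQclear(res);
+    if (PQputCopyData(conn, row, (int) strlen(row)) != 1 ||
+        PQputCopyEnd(conn, NULL) != 1)
+        fprintf(stderr, "%s", PQerrorMessage(conn));
+
+    /* obtain the final result of the COPY command */
+    res = PQgetResult(conn);
+    if (PQresultStatus(res) != PGRES_COMMAND_OK)
+        fprintf(stderr, "%s", PQerrorMessage(conn));
+}
+PQclear(res);
+```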
+
+`PQgetline`[](<>)
+
+Reads a newline-terminated line of characters (transmitted by the server) into a buffer string of size *`length`*.
+
+```
+int PQgetline(PGconn *conn,
+              char *buffer,
+              int length);
+```
+
+This function copies up to *`length`*-1 characters into the buffer and converts the terminating newline into a zero byte. [`PQgetline`](libpq-copy.html#LIBPQ-PQGETLINE) returns `EOF` at the end of input, 0 if the entire line has been read, and 1 if the buffer is full but the terminating newline has not yet been read.
+
+Note that the application must check to see if a new line consists of the two characters `\.`, which indicates that the server has finished sending the results of the `COPY` command. If the application might receive lines that are more than *`length`*-1 characters long, care is needed to be sure it recognizes the `\.` line correctly (and does not, for example, mistake the end of a long data line for a terminator line).
+
+`PQgetlineAsync`[](<>)
+
+Reads a row of `COPY` data (transmitted by the server) into a buffer without blocking.
+
+```
+int PQgetlineAsync(PGconn *conn,
+                   char *buffer,
+                   int bufsize);
+```
+
+This function is similar to [`PQgetline`](libpq-copy.html#LIBPQ-PQGETLINE), but it can be used by applications that must read `COPY` data asynchronously, that is, without blocking. Having issued the `COPY` command and gotten a `PGRES_COPY_OUT` response, the application should call [`PQconsumeInput`](libpq-async.html#LIBPQ-PQCONSUMEINPUT) and [`PQgetlineAsync`](libpq-copy.html#LIBPQ-PQGETLINEASYNC) until the end-of-data signal is detected.
+
+Unlike [`PQgetline`](libpq-copy.html#LIBPQ-PQGETLINE), this function takes responsibility for detecting end-of-data.
+
+On each call, [`PQgetlineAsync`](libpq-copy.html#LIBPQ-PQGETLINEASYNC) will return data if a complete data row is available in libpq's input buffer. Otherwise, no data is returned until the rest of the row arrives. The function returns -1 if the end-of-copy-data marker has been recognized, or 0 if no data is available, or a positive number giving the number of bytes of data returned. If -1 is returned, the caller must next call [`PQendcopy`](libpq-copy.html#LIBPQ-PQENDCOPY), and then return to normal processing.
+
+The data returned will not extend beyond a data-row boundary. If possible a whole row will be returned at one time. But if the buffer offered by the caller is too small to hold a row sent by the server, then a partial data row will be returned. With textual data this can be detected by testing whether the last returned byte is `\n` or not. (In a binary `COPY`, actual parsing of the `COPY` data format will be needed to make the equivalent determination.) The returned string is not null-terminated. (If you want to add a terminating null, be sure to pass a *`bufsize`* one smaller than the room actually available.)
+
+`PQputline`[](<>)
+
+Sends a null-terminated string to the server. Returns 0 if OK and `EOF` if unable to send the string.
+
+```
+int PQputline(PGconn *conn,
+              const char *string);
+```
+
+The `COPY` data stream sent by a series of calls to [`PQputline`](libpq-copy.html#LIBPQ-PQPUTLINE) has the same format as that returned by [`PQgetlineAsync`](libpq-copy.html#LIBPQ-PQGETLINEASYNC), except that applications are not obliged to send exactly one data row per [`PQputline`](libpq-copy.html#LIBPQ-PQPUTLINE) call; it is okay to send a partial line or multiple lines per call.
+
+### Note
+
+Before PostgreSQL protocol 3.0 it was necessary for the application to explicitly send the two characters `\.` as a final line to indicate to the server that it had finished sending `COPY` data. While this still works, it is deprecated and the special meaning of `\.` can be expected to be removed in a future release. It is sufficient to call [`PQendcopy`](libpq-copy.html#LIBPQ-PQENDCOPY) after having sent the actual data.
+
+`PQputnbytes`[](<>)
+
+Sends a non-null-terminated string to the server. Returns 0 if OK and `EOF` if unable to send the string.
+
+```
+int PQputnbytes(PGconn *conn,
+                const char *buffer,
+                int nbytes);
+```
+
+This is exactly like [`PQputline`](libpq-copy.html#LIBPQ-PQPUTLINE), except that the data buffer need not be null-terminated since the number of bytes to send is specified directly. Use this procedure when sending binary data.
+
+`PQendcopy`[](<>)
+
+Synchronizes with the server.
+
+```
+int PQendcopy(PGconn *conn);
+```
+
+This function waits until the server has finished the copying. It should be issued either when the last string has been sent to the server using [`PQputline`](libpq-copy.html#LIBPQ-PQPUTLINE) or when the last string has been received from the server using `PQgetline`. It must be issued or else the server will get “out of sync” with the client. Upon return from this function, the server is ready to receive the next SQL command. The return value is 0 on successful completion, nonzero otherwise. (Use [`PQerrorMessage`](libpq-status.html#LIBPQ-PQERRORMESSAGE) to retrieve details if the return value is nonzero.)
+
+When using [`PQgetResult`](libpq-async.html#LIBPQ-PQGETRESULT), the application should respond to a `PGRES_COPY_OUT` result by executing [`PQgetline`](libpq-copy.html#LIBPQ-PQGETLINE) repeatedly, followed by [`PQendcopy`](libpq-copy.html#LIBPQ-PQENDCOPY) after the terminator line is seen. It should then return to the [`PQgetResult`](libpq-async.html#LIBPQ-PQGETRESULT) loop until [`PQgetResult`](libpq-async.html#LIBPQ-PQGETRESULT) returns a null pointer. Similarly a `PGRES_COPY_IN` result is processed by a series of [`PQputline`](libpq-copy.html#LIBPQ-PQPUTLINE) calls followed by [`PQendcopy`](libpq-copy.html#LIBPQ-PQENDCOPY), then return to the [`PQgetResult`](libpq-async.html#LIBPQ-PQGETRESULT) loop. This arrangement will ensure that a `COPY` command embedded in a series of SQL commands will be executed correctly.
+
+Older applications are likely to submit a `COPY` via [`PQexec`](libpq-exec.html#LIBPQ-PQEXEC) and assume that the transaction is done after [`PQendcopy`](libpq-copy.html#LIBPQ-PQENDCOPY). This will work correctly only if the `COPY` is the only SQL command in the command string.
diff --git a/docs/X/libpq-envars.md b/docs/en/libpq-envars.md
similarity index 100%
rename from docs/X/libpq-envars.md
rename to docs/en/libpq-envars.md
diff --git a/docs/en/libpq-envars.zh.md b/docs/en/libpq-envars.zh.md
new file mode 100644
index 0000000000000000000000000000000000000000..5bf8df310fcfedb3eb7ff6f1cde34549e18d3644
--- /dev/null
+++ b/docs/en/libpq-envars.zh.md
@@ -0,0 +1,81 @@
+## 34.15. Environment Variables
+
+[](<>)
+
+The following environment variables can be used to select default connection parameter values, which will be used by [`PQconnectdb`](libpq-connect.html#LIBPQ-PQCONNECTDB), [`PQsetdbLogin`](libpq-connect.html#LIBPQ-PQSETDBLOGIN) and [`PQsetdb`](libpq-connect.html#LIBPQ-PQSETDB) if no value is directly specified by the calling code. These are useful to avoid hard-coding database connection information into simple client applications, for example.
+
+- [](<>) `PGHOST` behaves the same as the [host](libpq-connect.html#LIBPQ-CONNECT-HOST) connection parameter.
+
+- [](<>) `PGHOSTADDR` behaves the same as the [hostaddr](libpq-connect.html#LIBPQ-CONNECT-HOSTADDR) connection parameter. This can be set instead of or in addition to `PGHOST` to avoid DNS lookup overhead.
+
+- [](<>) `PGPORT` behaves the same as the [port](libpq-connect.html#LIBPQ-CONNECT-PORT) connection parameter.
+
+- [](<>) `PGDATABASE` behaves the same as the [dbname](libpq-connect.html#LIBPQ-CONNECT-DBNAME) connection parameter.
+
+- [](<>) `PGUSER` behaves the same as the [user](libpq-connect.html#LIBPQ-CONNECT-USER) connection parameter.
+
+- [](<>) `PGPASSWORD` behaves the same as the [password](libpq-connect.html#LIBPQ-CONNECT-PASSWORD) connection parameter. Use of this environment variable is not recommended for security reasons, as some operating systems allow non-root users to see process environment variables via ps; instead consider using a password file (see [Section 34.16](libpq-pgpass.html)).
+
+- [](<>) `PGPASSFILE` behaves the same as the [passfile](libpq-connect.html#LIBPQ-CONNECT-PASSFILE) connection parameter.
+
+- [](<>) `PGCHANNELBINDING` behaves the same as the [channel_binding](libpq-connect.html#LIBPQ-CONNECT-CHANNEL-BINDING) connection parameter.
+
+- [](<>) `PGSERVICE` behaves the same as the [service](libpq-connect.html#LIBPQ-CONNECT-SERVICE) connection parameter.
+
+- [](<>) `PGSERVICEFILE` specifies the name of the per-user connection service file. If not set, it defaults to `~/.pg_service.conf` (see [Section 34.17](libpq-pgservice.html)).
+
+- [](<>) `PGOPTIONS` behaves the same as the [options](libpq-connect.html#LIBPQ-CONNECT-OPTIONS) connection parameter.
+
+- [](<>) `PGAPPNAME` behaves the same as the [application_name](libpq-connect.html#LIBPQ-CONNECT-APPLICATION-NAME) connection parameter.
+
+- [](<>) `PGSSLMODE` behaves the same as the [sslmode](libpq-connect.html#LIBPQ-CONNECT-SSLMODE) connection parameter.
+
+- [](<>) `PGREQUIRESSL` behaves the same as the [requiressl](libpq-connect.html#LIBPQ-CONNECT-REQUIRESSL) connection parameter. This environment variable is deprecated in favor of the `PGSSLMODE` variable; setting both variables suppresses the effect of this one.
+
+- [](<>) `PGSSLCOMPRESSION` behaves the same as the [sslcompression](libpq-connect.html#LIBPQ-CONNECT-SSLCOMPRESSION) connection parameter.
+
+- [](<>) `PGSSLCERT` behaves the same as the [sslcert](libpq-connect.html#LIBPQ-CONNECT-SSLCERT) connection parameter.
+
+- [](<>) `PGSSLKEY` behaves the same as the [sslkey](libpq-connect.html#LIBPQ-CONNECT-SSLKEY) connection parameter.
+
+- [](<>) `PGSSLROOTCERT` behaves the same as the [sslrootcert](libpq-connect.html#LIBPQ-CONNECT-SSLROOTCERT) connection parameter.
+
+- [](<>) `PGSSLCRL` behaves the same as the [sslcrl](libpq-connect.html#LIBPQ-CONNECT-SSLCRL) connection parameter.
+
+- [](<>) `PGSSLCRLDIR` behaves the same as the [sslcrldir](libpq-connect.html#LIBPQ-CONNECT-SSLCRLDIR) connection parameter.
+
+- [](<>) `PGSSLSNI` behaves the same as the [sslsni](libpq-connect.html#LIBPQ-CONNECT-SSLSNI) connection parameter.
+
+- [](<>) `PGREQUIREPEER` behaves the same as the [requirepeer](libpq-connect.html#LIBPQ-CONNECT-REQUIREPEER) connection parameter.
+
+- [](<>) `PGSSLMINPROTOCOLVERSION` behaves the same as the [ssl_min_protocol_version](libpq-connect.html#LIBPQ-CONNECT-SSL-MIN-PROTOCOL-VERSION) connection parameter.
+
+- [](<>) `PGSSLMAXPROTOCOLVERSION` behaves the same as the [ssl_max_protocol_version](libpq-connect.html#LIBPQ-CONNECT-SSL-MAX-PROTOCOL-VERSION) connection parameter.
+
+- [](<>) `PGGSSENCMODE` behaves the same as the [gssencmode](libpq-connect.html#LIBPQ-CONNECT-GSSENCMODE) connection parameter.
+
+- [](<>) `PGKRBSRVNAME` behaves the same as the [krbsrvname](libpq-connect.html#LIBPQ-CONNECT-KRBSRVNAME) connection parameter.
+
+- [](<>) `PGGSSLIB` behaves the same as the [gsslib](libpq-connect.html#LIBPQ-CONNECT-GSSLIB) connection parameter.
+
+- [](<>) `PGCONNECT_TIMEOUT` behaves the same as the [connect_timeout](libpq-connect.html#LIBPQ-CONNECT-CONNECT-TIMEOUT) connection parameter.
+
+- [](<>) `PGCLIENTENCODING` behaves the same as the [client_encoding](libpq-connect.html#LIBPQ-CONNECT-CLIENT-ENCODING) connection parameter.
+
+- [](<>) `PGTARGETSESSIONATTRS` behaves the same as the [target_session_attrs](libpq-connect.html#LIBPQ-CONNECT-TARGET-SESSION-ATTRS) connection parameter.
+
+ The following environment variables can be used to specify default behavior for each PostgreSQL session. (See also the [ALTER ROLE](sql-alterrole.html) and [ALTER DATABASE](sql-alterdatabase.html) commands for ways to set default behavior on a per-user or per-database basis.)
+
+- [](<>) `PGDATESTYLE` sets the default style of date/time representation. (Equivalent to `SET datestyle TO ...`.)
+
+- [](<>) `PGTZ` sets the default time zone. (Equivalent to `SET timezone TO ...`.)
+
+- [](<>) `PGGEQO` sets the default mode for the genetic query optimizer. (Equivalent to `SET geqo TO ...`.)
+
+ Refer to the SQL command [SET](sql-set.html) for information on correct values for these environment variables.
+
+ The following environment variables determine internal behavior of libpq; they override compiled-in defaults.
+
+- [](<>) `PGSYSCONFDIR` sets the directory containing the `pg_service.conf` file and possibly other system-wide configuration files in future versions.
+
+- [](<>) `PGLOCALEDIR` sets the directory containing the `locale` files for message localization.
diff --git a/docs/X/libpq-events.md b/docs/en/libpq-events.md
similarity index 100%
rename from docs/X/libpq-events.md
rename to docs/en/libpq-events.md
diff --git a/docs/en/libpq-events.zh.md b/docs/en/libpq-events.zh.md
new file mode 100644
index 0000000000000000000000000000000000000000..53e2cf698570a89902fd9c3c9792807d89f44019
--- /dev/null
+++ b/docs/en/libpq-events.zh.md
@@ -0,0 +1,313 @@
+## 34.14. Event System
+
+[34.14.1. Event Types](libpq-events.html#LIBPQ-EVENTS-TYPES)
+
+[34.14.2. Event Callback Procedure](libpq-events.html#LIBPQ-EVENTS-PROC)
+
+[34.14.3. Event Support Functions](libpq-events.html#LIBPQ-EVENTS-FUNCS)
+
+[34.14.4. Event Example](libpq-events.html#LIBPQ-EVENTS-EXAMPLE)
+
+libpq's event system is designed to notify registered event handlers about interesting libpq events, such as the creation or destruction of `PGconn` and `PGresult` objects. A principal use case is that this allows applications to associate their own data with a `PGconn` or `PGresult` and ensure that that data is freed at an appropriate time.
+
+Each registered event handler is associated with two pieces of data, known to libpq only as opaque `void *` pointers. There is a *pass-through* pointer that is provided by the application when the event handler is registered with a `PGconn`. The pass-through pointer never changes for the life of the `PGconn` and all `PGresult`s generated from it; so if used, it must point to long-lived data. In addition there is an *instance data* pointer, which starts out `NULL` in every `PGconn` and `PGresult`. This pointer can be manipulated using the [`PQinstanceData`](libpq-events.html#LIBPQ-PQINSTANCEDATA), [`PQsetInstanceData`](libpq-events.html#LIBPQ-PQSETINSTANCEDATA), [`PQresultInstanceData`](libpq-events.html#LIBPQ-PQRESULTINSTANCEDATA) and `PQsetResultInstanceData` functions. Note that unlike the pass-through pointer, instance data of a `PGconn` is not automatically inherited by `PGresult`s created from it. libpq does not know what pass-through and instance data pointers point to (if anything) and will never attempt to free them — that is the responsibility of the event handler.
+
+### 34.14.1. Event Types
+
+The enum `PGEventId` names the types of events handled by the event system. All its values have names beginning with `PGEVT`. For each event type, there is a corresponding event info structure that carries the parameters passed to the event handlers. The event types are:
+
+`PGEVT_REGISTER`
+
+The register event occurs when [`PQregisterEventProc`](libpq-events.html#LIBPQ-PQREGISTEREVENTPROC) is called.
+It is the ideal time to initialize any `instanceData` an event procedure may need. Only one register event will be fired per event handler per connection. If the event procedure fails, the registration is aborted.
+
+```
+typedef struct
+{
+    PGconn *conn;
+} PGEventRegister;
+```
+
+When a `PGEVT_REGISTER` event is received, the *`evtInfo`* pointer should be cast to a `PGEventRegister *`. This structure contains a `PGconn` that should be in the `CONNECTION_OK` status; guaranteed if one calls [`PQregisterEventProc`](libpq-events.html#LIBPQ-PQREGISTEREVENTPROC) right after obtaining a good `PGconn`. When returning a failure code, all cleanup must be performed as no `PGEVT_CONNDESTROY` event will be sent.
+
+`PGEVT_CONNRESET`
+
+The connection reset event is fired on completion of [`PQreset`](libpq-connect.html#LIBPQ-PQRESET) or `PQresetPoll`. In both cases, the event is only fired when the reset was successful. If the event procedure fails, the entire connection reset will fail; the `PGconn` is put into `CONNECTION_BAD` status and `PQresetPoll` will return `PGRES_POLLING_FAILED`.
+
+```
+typedef struct
+{
+    PGconn *conn;
+} PGEventConnReset;
+```
+
+When a `PGEVT_CONNRESET` event is received, the *`evtInfo`* pointer should be cast to a `PGEventConnReset *`. Although the contained `PGconn` was just reset, all event data remains unchanged. This event should be used to reset/reload/requery any associated `instanceData`. Note that even if the event procedure fails to process `PGEVT_CONNRESET`, it will still receive a `PGEVT_CONNDESTROY` event when the connection is closed.
+
+`PGEVT_CONNDESTROY`
+
+The connection destroy event is fired in response to [`PQfinish`](libpq-connect.html#LIBPQ-PQFINISH). It is the event procedure's responsibility to properly clean up its event data as libpq has no ability to manage this memory. Failure to clean up will lead to memory leaks.
+
+```
+typedef struct
+{
+    PGconn *conn;
+} PGEventConnDestroy;
+```
+
+When a `PGEVT_CONNDESTROY` event is received, the *`evtInfo`* pointer should be cast to a `PGEventConnDestroy *`. This event is fired prior to [`PQfinish`](libpq-connect.html#LIBPQ-PQFINISH) performing any other cleanup. The return value of the event procedure is ignored since there is no way of indicating a failure from [`PQfinish`](libpq-connect.html#LIBPQ-PQFINISH). Also, an event procedure failure should not abort the process of cleaning up unwanted memory.
+
+`PGEVT_RESULTCREATE`
+
+The result creation event is fired in response to any query execution function that generates a result, including [`PQgetResult`](libpq-async.html#LIBPQ-PQGETRESULT). This event will only be fired after the result was successfully created.
+
+```
+typedef struct
+{
+    PGconn *conn;
+    PGresult *result;
+} PGEventResultCreate;
+```
+
+When a `PGEVT_RESULTCREATE` event is received, the *`evtInfo`* pointer should be cast to a `PGEventResultCreate *`. The *`conn`* is the connection used to generate the result. This is the ideal place to initialize any `instanceData` that needs to be associated with the result. If the event procedure fails, the result will be cleared and the failure will be propagated. The event procedure must not try to [`PQclear`](libpq-exec.html#LIBPQ-PQCLEAR) the result object for itself. When returning a failure code, all cleanup must be performed as no `PGEVT_RESULTDESTROY` event will be sent.
+
+`PGEVT_RESULTCOPY`
+
+The result copy event is fired in response to [`PQcopyResult`](libpq-misc.html#LIBPQ-PQCOPYRESULT). This event will only be fired after the copy is complete. Only event procedures that have successfully handled the `PGEVT_RESULTCREATE` or `PGEVT_RESULTCOPY` event for the source result will receive `PGEVT_RESULTCOPY` events.
+
+```
+typedef struct
+{
+    const PGresult *src;
+    PGresult *dest;
+} PGEventResultCopy;
+```
+
+When a `PGEVT_RESULTCOPY` event is received, the *`evtInfo`* pointer should be cast to a `PGEventResultCopy *`. The *`src`* result is what was copied while the *`dest`* result is the copy destination. This event can be used to provide a deep copy of `instanceData`, since `PQcopyResult` cannot do that. If the event procedure fails, the entire copy operation will fail and the *`dest`* result will be cleared. When returning a failure code, all cleanup must be performed as no `PGEVT_RESULTDESTROY` event will be sent for the destination result.
+
+`PGEVT_RESULTDESTROY`
+
+The result destroy event is fired in response to [`PQclear`](libpq-exec.html#LIBPQ-PQCLEAR). It is the event procedure's responsibility to properly clean up its event data as libpq has no ability to manage this memory. Failure to clean up will lead to memory leaks.
+
+```
+typedef struct
+{
+    PGresult *result;
+} PGEventResultDestroy;
+```
+
+When a `PGEVT_RESULTDESTROY` event is received, the *`evtInfo`* pointer should be cast to a `PGEventResultDestroy *`. This event is fired prior to [`PQclear`](libpq-exec.html#LIBPQ-PQCLEAR) performing any other cleanup. The return value of the event procedure is ignored since there is no way of indicating a failure from [`PQclear`](libpq-exec.html#LIBPQ-PQCLEAR). Also, an event procedure failure should not abort the process of cleaning up unwanted memory.
+
+### 34.14.2. Event Callback Procedure
+
+`PGEventProc`[](<>)
+
+`PGEventProc` is a typedef for a pointer to an event procedure, that is, the user callback function that receives events from libpq. The signature of an event procedure must be
+
+```
+int eventproc(PGEventId evtId, void *evtInfo, void *passThrough)
+```
+
+The *`evtId`* parameter indicates which `PGEVT` event occurred. The *`evtInfo`* pointer must be cast to the appropriate structure type to obtain further information about the event. The *`passThrough`* parameter is the pointer provided to [`PQregisterEventProc`](libpq-events.html#LIBPQ-PQREGISTEREVENTPROC) when the event procedure was registered. The function should return a non-zero value if it succeeds and zero if it fails.
+
+A particular event procedure can be registered only once in any `PGconn`. This is because the address of the procedure is used as a lookup key to identify the associated instance data.
+
+### Caution
+
+On Windows, functions can have two different addresses: one visible from outside a DLL and another visible from inside the DLL. One should be careful that only one of these addresses is used with libpq's event-procedure functions, else confusion will result.
+The simplest rule for writing code that will work is to ensure that event procedures are declared `static`. If the procedure's address must be available outside its own source file, expose a separate function to return the address.
+
+### 34.14.3. Event Support Functions
+
+`PQregisterEventProc`[](<>)
+
+Registers an event callback procedure with libpq.
+
+```
+int PQregisterEventProc(PGconn *conn, PGEventProc proc,
+                        const char *name, void *passThrough);
+```
+
+An event procedure must be registered once on each `PGconn` you want to receive events about. There is no limit, other than memory, on the number of event procedures that can be registered with a connection. The function returns a non-zero value if it succeeds and zero if it fails.
+
+The *`proc`* argument will be called when a libpq event is fired. Its memory address is also used to lookup `instanceData`. The *`name`* argument is used to refer to the event procedure in error messages. This value cannot be `NULL` or a zero-length string. The name string is copied into the `PGconn`, so what is passed need not be long-lived. The *`passThrough`* pointer is passed to the *`proc`* whenever an event occurs. This argument can be `NULL`.
+
+`PQsetInstanceData`[](<>)
+
+Sets the connection *`conn`*'s `instanceData` for procedure *`proc`* to *`data`*. This returns non-zero for success and zero for failure. (Failure is only possible if *`proc`* has not been properly registered in *`conn`*.)
+
+```
+int PQsetInstanceData(PGconn *conn, PGEventProc proc, void *data);
+```
+
+`PQinstanceData`[](<>)
+
+Returns the connection *`conn`*'s `instanceData` associated with procedure *`proc`*, or `NULL` if there is none.
+
+```
+void *PQinstanceData(const PGconn *conn, PGEventProc proc);
+```
+
+`PQresultSetInstanceData`[](<>)
+
+Sets the result's `instanceData` for *`proc`* to *`data`*. This returns non-zero for success and zero for failure. (Failure is only possible if *`proc`* has not been properly registered in the result.)
+
+```
+int PQresultSetInstanceData(PGresult *res, PGEventProc proc, void *data);
+```
+
+Beware that any storage represented by *`data`* will not be accounted for by [`PQresultMemorySize`](libpq-misc.html#LIBPQ-PQRESULTMEMORYSIZE), unless it is allocated using [`PQresultAlloc`](libpq-misc.html#LIBPQ-PQRESULTALLOC). (Doing so is recommendable because it eliminates the need to free such storage explicitly when the result is destroyed.)
+
+`PQresultInstanceData`[](<>)
+
+Returns the result's `instanceData` associated with *`proc`*, or `NULL` if there is none.
+
+```
+void *PQresultInstanceData(const PGresult *res, PGEventProc proc);
+```
+
+### 34.14.4. Event Example
+
+Here is a skeleton example of managing private data associated with libpq connections and results.
+
+```
+#include <stdio.h>
+
+/* required header for libpq events (note: includes libpq-fe.h) */
+#include <libpq-events.h>
+
+/* The instanceData */
+typedef struct
+{
+    int n;
+    char *str;
+} mydata;
+
+/* PGEventProc */
+static int myEventProc(PGEventId evtId, void *evtInfo, void *passThrough);
+
+int
+main(void)
+{
+    mydata *data;
+    PGresult *res;
+    PGresult *res_copy;     /* receives the copy made by PQcopyResult below */
+    PGconn *conn =
+        PQconnectdb("dbname=postgres options=-csearch_path=");
+
+    if (PQstatus(conn) != CONNECTION_OK)
+    {
+        /* PQerrorMessage's result includes a trailing newline */
+        fprintf(stderr, "%s", PQerrorMessage(conn));
+        PQfinish(conn);
+        return 1;
+    }
+
+    /* called once on any connection that should receive events.
+     * Sends a PGEVT_REGISTER to myEventProc.
+ */ + if (!PQregisterEventProc(conn, myEventProc, "mydata_proc", NULL)) + { + fprintf(stderr, "Cannot register PGEventProc\n"); + PQfinish(conn); + return 1; + } + + /* conn instanceData is available */ + data = PQinstanceData(conn, myEventProc); + + /* Sends a PGEVT_RESULTCREATE to myEventProc */ + res = PQexec(conn, "SELECT 1 + 1"); + + /* result instanceData is available */ + data = PQresultInstanceData(res, myEventProc); + + /* If PG_COPYRES_EVENTS is used, sends a PGEVT_RESULTCOPY to myEventProc */ + res_copy = PQcopyResult(res, PG_COPYRES_TUPLES | PG_COPYRES_EVENTS); + + /* result instanceData is available if PG_COPYRES_EVENTS was + * used during the PQcopyResult call. + */ + data = PQresultInstanceData(res_copy, myEventProc); + + /* Both clears send a PGEVT_RESULTDESTROY to myEventProc */ + PQclear(res); + PQclear(res_copy); + + /* Sends a PGEVT_CONNDESTROY to myEventProc */ + PQfinish(conn); + + return 0; +} + +static int +myEventProc(PGEventId evtId, void *evtInfo, void *passThrough) +{ + switch (evtId) + { + case PGEVT_REGISTER: + { + PGEventRegister *e = (PGEventRegister *)evtInfo; + mydata *data = get_mydata(e->conn); + + /* associate app specific data with connection */ + PQsetInstanceData(e->conn, myEventProc, data); + break; + } + + case PGEVT_CONNRESET: + { + PGEventConnReset *e = (PGEventConnReset *)evtInfo; + mydata *data = PQinstanceData(e->conn, myEventProc); + + if (data) + memset(data, 0, sizeof(mydata)); + break; + } + + case PGEVT_CONNDESTROY: + { + PGEventConnDestroy *e = (PGEventConnDestroy *)evtInfo; + mydata *data = PQinstanceData(e->conn, myEventProc); + + /* free instance data because the conn is being destroyed */ + if (data) + free_mydata(data); + break; + } + + case PGEVT_RESULTCREATE: + { + PGEventResultCreate *e = (PGEventResultCreate *)evtInfo; + mydata *conn_data = PQinstanceData(e->conn, myEventProc); + mydata *res_data = dup_mydata(conn_data); + + /* associate app specific data with result (copy it from conn) */ + PQsetResultInstanceData(e->result, myEventProc, res_data); + break; + } + + case PGEVT_RESULTCOPY: + { + PGEventResultCopy *e = (PGEventResultCopy *)evtInfo; + mydata *src_data = PQresultInstanceData(e->src, myEventProc); + mydata *dest_data = dup_mydata(src_data); + + /* associate app specific data with result (copy it from a result) */ + PQsetResultInstanceData(e->dest, myEventProc, dest_data); + break; + } + + case PGEVT_RESULTDESTROY: + { + PGEventResultDestroy *e = (PGEventResultDestroy *)evtInfo; + mydata *data = PQresultInstanceData(e->result, myEventProc); + + /* free instance data because the result is being destroyed */ + if (data) + free_mydata(data); + break; + } + + /* unknown event ID, just return true. */ + default: + break; + } + + return true; /* event processing succeeded */ +} +``` diff --git a/docs/X/libpq-exec.md b/docs/en/libpq-exec.md similarity index 100% rename from docs/X/libpq-exec.md rename to docs/en/libpq-exec.md diff --git a/docs/en/libpq-exec.zh.md b/docs/en/libpq-exec.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..a3dc6d8dc9f2dcc43623c07d04f074caaafd8cdf --- /dev/null +++ b/docs/en/libpq-exec.zh.md @@ -0,0 +1,727 @@ +## 34.3. Command Execution Functions + +[34.3.1. Main Functions](libpq-exec.html#LIBPQ-EXEC-MAIN) + +[34.3.2. Retrieving Query Result Information](libpq-exec.html#LIBPQ-EXEC-SELECT-INFO) + +[34.3.3. Retrieving Other Result Information](libpq-exec.html#LIBPQ-EXEC-NONSELECT) + +[34.3.4. 
Escaping Strings for Inclusion in SQL Commands](libpq-exec.html#LIBPQ-EXEC-ESCAPE-STRING)
+
+Once a connection to a database server has been successfully established, the functions described here are used to perform SQL queries and commands.
+
+### 34.3.1. Main Functions
+
+`PQexec`[](<>)
+
+Submits a command to the server and waits for the result.
+
+```
+PGresult *PQexec(PGconn *conn, const char *command);
+```
+
+Returns a `PGresult` pointer or possibly a null pointer. A non-null pointer will generally be returned except in out-of-memory conditions or serious errors such as inability to send the command to the server. The [`PQresultStatus`](libpq-exec.html#LIBPQ-PQRESULTSTATUS) function should be called to check the return value for any errors (including the value of a null pointer, in which case it will return `PGRES_FATAL_ERROR`). Use [`PQerrorMessage`](libpq-status.html#LIBPQ-PQERRORMESSAGE) to get more information about such errors.
+
+The command string can include multiple SQL commands (separated by semicolons). Multiple queries sent in a single [`PQexec`](libpq-exec.html#LIBPQ-PQEXEC) call are processed in a single transaction, unless there are explicit `BEGIN`/`COMMIT` commands included in the query string to divide it into multiple transactions. (See [Section 53.2.2.1](protocol-flow.html#PROTOCOL-FLOW-MULTI-STATEMENT) for more details about how the server handles multi-query strings.) Note however that the returned `PGresult` structure describes only the result of the last command executed from the string. Should one of the commands fail, processing of the string stops with it and the returned `PGresult` describes the error condition.
+
+`PQexecParams`[](<>)
+
+Submits a command to the server and waits for the result, with the ability to pass parameters separately from the SQL command text.
+
+```
+PGresult *PQexecParams(PGconn *conn,
+                       const char *command,
+                       int nParams,
+                       const Oid *paramTypes,
+                       const char * const *paramValues,
+                       const int *paramLengths,
+                       const int *paramFormats,
+                       int resultFormat);
+```
+
+[`PQexecParams`](libpq-exec.html#LIBPQ-PQEXECPARAMS) is like [`PQexec`](libpq-exec.html#LIBPQ-PQEXEC), but offers additional functionality: parameter values can be specified separately from the command string proper, and query results can be requested in either text or binary format.
+
+The function arguments are:
+
+*`conn`*
+
+The connection object to send the command through.
+
+*`command`*
+
+The SQL command string to be executed. If parameters are used, they are referred to in the command string as `$1`, `$2`, etc.
+
+*`nParams`*
+
+The number of parameters supplied; it is the length of the arrays *`paramTypes[]`*, *`paramValues[]`*, *`paramLengths[]`*, and *`paramFormats[]`*. (The array pointers can be `NULL` when *`nParams`* is zero.)
+
+*`paramTypes[]`*
+
+Specifies, by OID, the data types to be assigned to the parameter symbols. If *`paramTypes`* is `NULL`, or any particular element in the array is zero, the server infers a data type for the parameter symbol in the same way it would do for an untyped literal string.
+
+*`paramValues[]`*
+
+Specifies the actual values of the parameters. A null pointer in this array means the corresponding parameter is null; otherwise the pointer points to a zero-terminated text string (for text format) or binary data in the format expected by the server (for binary format).
+
+*`paramLengths[]`*
+
+Specifies the actual data lengths of binary-format parameters. It is ignored for null parameters and text-format parameters. The array pointer can be null when there are no binary parameters.
+
+*`paramFormats[]`*
+
+Specifies whether parameters are text (put a zero in the array entry for the corresponding parameter) or binary (put a one in the array entry for the corresponding parameter). If the array pointer is null then all parameters are presumed to be text strings.
+
+Values passed in binary format require knowledge of the internal representation expected by the backend. For example, integers must be passed in network byte order. Passing `numeric` values requires knowledge of the server storage format, as implemented in `src/backend/utils/adt/numeric.c::numeric_send()` and `src/backend/utils/adt/numeric.c::numeric_recv()`.
+
+*`resultFormat`*
+
+Specify zero to obtain results in text format, or one to obtain results in binary format. (There is not currently a provision to obtain different result columns in different formats, although that is possible in the underlying protocol.)
+
+The primary advantage of [`PQexecParams`](libpq-exec.html#LIBPQ-PQEXECPARAMS) over [`PQexec`](libpq-exec.html#LIBPQ-PQEXEC) is that parameter values can be separated from the command string, thus avoiding the need for tedious and error-prone quoting and escaping.
+
+Unlike [`PQexec`](libpq-exec.html#LIBPQ-PQEXEC), [`PQexecParams`](libpq-exec.html#LIBPQ-PQEXECPARAMS) allows at most one SQL command in the given string. (There can be semicolons in it, but not more than one nonempty command.) This is a limitation of the underlying protocol, but has some usefulness as an extra defense against SQL-injection attacks.
+
+### Tip
+
+Specifying parameter types via OIDs is tedious, particularly if you prefer not to hard-wire particular OID values into your program.
+However, you can avoid doing so even in cases where the server by itself cannot determine the type of the parameter, or chooses a different type than you want. In the SQL command text, attach an explicit cast to the parameter symbol to show what data type you will send. For example:
+
+```
+SELECT * FROM mytable WHERE x = $1::bigint;
+```
+
+This forces parameter `$1` to be treated as `bigint`, whereas by default it would be assigned the same type as `x`. Forcing the parameter type decision, either this way or by specifying a numeric type OID, is strongly recommended when sending parameter values in binary format, because binary format has less redundancy than text format and so there is less chance that the server will detect a type mismatch mistake for you.
+
+`PQprepare`[](<>)
+
+Submits a request to create a prepared statement with the given parameters, and waits for completion.
+
+```
+PGresult *PQprepare(PGconn *conn,
+                    const char *stmtName,
+                    const char *query,
+                    int nParams,
+                    const Oid *paramTypes);
+```
+
+[`PQprepare`](libpq-exec.html#LIBPQ-PQPREPARE) creates a prepared statement for later execution with [`PQexecPrepared`](libpq-exec.html#LIBPQ-PQEXECPREPARED). This feature allows commands to be executed repeatedly without being parsed and planned each time; see [PREPARE](sql-prepare.html) for details.
+
+The function creates a prepared statement named *`stmtName`* from the *`query`* string, which must contain a single SQL command. *`stmtName`* can be `""` to create an unnamed statement, in which case any pre-existing unnamed statement is automatically replaced; otherwise it is an error if the statement name is already defined in the current session. If any parameters are used, they are referred to in the query as `$1`, `$2`, etc. *`nParams`* is the number of parameters for which types are pre-specified in the array *`paramTypes[]`*. (The array pointer can be `NULL` when *`nParams`* is zero.) *`paramTypes[]`* specifies, by OID, the data types to be assigned to the parameter symbols. If *`paramTypes`* is `NULL`, or any particular element in the array is zero, the server assigns a data type to the parameter symbol in the same way it would do for an untyped literal string. Also, the query can use parameter symbols with numbers higher than *`nParams`*; data types will be inferred for these symbols as well. (See [`PQdescribePrepared`](libpq-exec.html#LIBPQ-PQDESCRIBEPREPARED) for a means to find out what data types were inferred.)
+
+As with [`PQexec`](libpq-exec.html#LIBPQ-PQEXEC), the result is normally a `PGresult` object whose contents indicate server-side success or failure. A null result indicates out-of-memory or inability to send the command at all. Use [`PQerrorMessage`](libpq-status.html#LIBPQ-PQERRORMESSAGE) to get more information about such errors.
+
+Prepared statements for use with [`PQexecPrepared`](libpq-exec.html#LIBPQ-PQEXECPREPARED) can also be created by executing SQL [PREPARE](sql-prepare.html) statements. Also, although there is no libpq function for deleting a prepared statement, the SQL [DEALLOCATE](sql-deallocate.html) statement can be used for that purpose.
+
+`PQexecPrepared`[](<>)
+
+Sends a request to execute a prepared statement with given parameters, and waits for the result.
+
+```
+PGresult *PQexecPrepared(PGconn *conn,
+                         const char *stmtName,
+                         int nParams,
+                         const char * const *paramValues,
+                         const int *paramLengths,
+                         const int *paramFormats,
+                         int resultFormat);
+```
+
+[`PQexecPrepared`](libpq-exec.html#LIBPQ-PQEXECPREPARED) is like [`PQexecParams`](libpq-exec.html#LIBPQ-PQEXECPARAMS), but the command to be executed is specified by naming a previously-prepared statement, instead of giving a query string. This feature allows commands that will be used repeatedly to be parsed and planned just once, rather than each time they are executed. The statement must have been prepared previously in the current session.
+
+The parameters are identical to [`PQexecParams`](libpq-exec.html#LIBPQ-PQEXECPARAMS), except that the name of a prepared statement is given instead of a query string, and the *`paramTypes[]`* parameter is not present (it is not needed since the prepared statement's parameter types were determined when it was created).
+
+`PQdescribePrepared`[](<>)
+
+Submits a request to obtain information about the specified prepared statement, and waits for completion.
+
+```
+PGresult *PQdescribePrepared(PGconn *conn, const char *stmtName);
+```
+
+[`PQdescribePrepared`](libpq-exec.html#LIBPQ-PQDESCRIBEPREPARED) allows an application to obtain information about a previously prepared statement.
+
+*`stmtName`* can be `""` or `NULL` to reference the unnamed statement, otherwise it must be the name of an existing prepared statement. On success, a `PGresult` with status `PGRES_COMMAND_OK` is returned. The functions [`PQnparams`](libpq-exec.html#LIBPQ-PQNPARAMS) and [`PQparamtype`](libpq-exec.html#LIBPQ-PQPARAMTYPE) can be applied to this `PGresult` to obtain information about the parameters of the prepared statement, and the functions [`PQnfields`](libpq-exec.html#LIBPQ-PQNFIELDS), [`PQfname`](libpq-exec.html#LIBPQ-PQFNAME), [`PQftype`](libpq-exec.html#LIBPQ-PQFTYPE), etc. provide information about the result columns (if any) of the statement.
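+
+As a concrete sketch of the functions above (the table `mytable` and the statement name `fetch_by_x` are hypothetical; an open `PGconn *conn` and `<stdio.h>` are assumed), the same parameterized query can be run directly with `PQexecParams`, or prepared once and executed repeatedly:
+
+```
+const char *paramValues[1] = { "42" };   /* text-format value for $1 */
+PGresult   *res;
+
+/* one-shot execution with out-of-line parameters */
+res = PQexecParams(conn,
+                   "SELECT * FROM mytable WHERE x = $1::bigint",
+                   1,            /* one parameter */
+                   NULL,         /* let the server infer its type */
+                   paramValues,
+                   NULL, NULL,   /* text parameters need no lengths/formats */
+                   0);           /* ask for text-format results */
+if (PQresultStatus(res) != PGRES_TUPLES_OK)
+    fprintf(stderr, "%s", PQerrorMessage(conn));
+PQclear(res);
+
+/* or: parse and plan once ... */
+res = PQprepare(conn, "fetch_by_x",
+                "SELECT * FROM mytable WHERE x = $1::bigint", 1, NULL);
+if (PQresultStatus(res) != PGRES_COMMAND_OK)
+    fprintf(stderr, "%s", PQerrorMessage(conn));
+PQclear(res);
+
+/* ... then execute it as often as needed with different values */
+res = PQexecPrepared(conn, "fetch_by_x", 1, paramValues, NULL, NULL, 0);
+if (PQresultStatus(res) != PGRES_TUPLES_OK)
+    fprintf(stderr, "%s", PQerrorMessage(conn));
+PQclear(res);
+```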
+
+`PQdescribePortal`[](<>)
+
+Submits a request to obtain information about the specified portal, and waits for completion.
+
+```
+PGresult *PQdescribePortal(PGconn *conn, const char *portalName);
+```
+
+[`PQdescribePortal`](libpq-exec.html#LIBPQ-PQDESCRIBEPORTAL) allows an application to obtain information about a previously created portal. (libpq does not provide any direct access to portals, but you can use this function to inspect the properties of a cursor created with a `DECLARE CURSOR` SQL command.)
+
+*`portalName`* can be `""` or `NULL` to reference the unnamed portal, otherwise it must be the name of an existing portal. On success, a `PGresult` with status `PGRES_COMMAND_OK` is returned. The functions [`PQnfields`](libpq-exec.html#LIBPQ-PQNFIELDS), [`PQfname`](libpq-exec.html#LIBPQ-PQFNAME), [`PQftype`](libpq-exec.html#LIBPQ-PQFTYPE), etc. can be applied to the `PGresult` to obtain information about the result columns (if any) of the portal.
+
+The `PGresult`[](<>) structure encapsulates the result returned by the server. libpq application programmers should be careful to maintain the `PGresult` abstraction. Use the accessor functions below to get at the contents of `PGresult`. Avoid directly referencing the fields of the `PGresult` structure because they are subject to change in the future.
+
+`PQresultStatus`[](<>)
+
+Returns the result status of the command.
+
+```
+ExecStatusType PQresultStatus(const PGresult *res);
+```
+
+[`PQresultStatus`](libpq-exec.html#LIBPQ-PQRESULTSTATUS) can return one of the following values:
+
+`PGRES_EMPTY_QUERY`
+
+The string sent to the server was empty.
+
+`PGRES_COMMAND_OK`
+
+Successful completion of a command returning no data.
+
+`PGRES_TUPLES_OK`
+
+Successful completion of a command returning data (such as a `SELECT` or `SHOW`).
+
+`PGRES_COPY_OUT`
+
+Copy Out (from server) data transfer started.
+
+`PGRES_COPY_IN`
+
+Copy In (to server) data transfer started.
+
+`PGRES_BAD_RESPONSE`
+
+The server's response was not understood.
+
+`PGRES_NONFATAL_ERROR`
+
+A nonfatal error (a notice or warning) occurred.
+
+`PGRES_FATAL_ERROR`
+
+A fatal error occurred.
+
+`PGRES_COPY_BOTH`
+
+Copy In/Out (to and from server) data transfer started. This feature is currently used only for streaming replication, so this status should not occur in ordinary applications.
+
+`PGRES_SINGLE_TUPLE`
+
+The `PGresult` contains a single result tuple from the current command. This status occurs only when single-row mode has been selected for the query (see [Section 34.6](libpq-single-row-mode.html)).
+
+`PGRES_PIPELINE_SYNC`
+
+The `PGresult` represents a synchronization point in pipeline mode, requested by [`PQpipelineSync`](libpq-pipeline-mode.html#LIBPQ-PQPIPELINESYNC). This status occurs only when pipeline mode has been selected.
+
+`PGRES_PIPELINE_ABORTED`
+
+The `PGresult` represents a pipeline that has received an error from the server. `PQgetResult` must be called repeatedly, and each time it will return this status code until the end of the current pipeline, at which point it will return `PGRES_PIPELINE_SYNC` and normal processing can resume.
+
+If the result status is `PGRES_TUPLES_OK` or `PGRES_SINGLE_TUPLE`, then the functions described below can be used to retrieve the rows returned by the query. Note that a `SELECT` command that happens to retrieve zero rows still shows `PGRES_TUPLES_OK`. `PGRES_COMMAND_OK` is for commands that can never return rows (`INSERT` or `UPDATE` without a `RETURNING` clause, etc.). A response of `PGRES_EMPTY_QUERY` might indicate a bug in the client software.
+
+A result of status `PGRES_NONFATAL_ERROR` will never be returned directly by [`PQexec`](libpq-exec.html#LIBPQ-PQEXEC) or other query execution functions; results of this kind are instead passed to the notice processor (see [Section 34.13](libpq-notice-processing.html)).
+
+`PQresStatus`[](<>)
+
+Converts the enumerated type returned by [`PQresultStatus`](libpq-exec.html#LIBPQ-PQRESULTSTATUS) into a string constant describing the status code. The caller should not free the result.
+
+```
+char *PQresStatus(ExecStatusType status);
+```
+
+`PQresultErrorMessage`[](<>)
+
+Returns the error message associated with the command, or an empty string if there was no error.
+
+```
+char *PQresultErrorMessage(const PGresult *res);
+```
+
+If there was an error, the returned string will include a trailing newline. The caller should not free the result directly. It will be freed when the associated `PGresult` handle is passed to [`PQclear`](libpq-exec.html#LIBPQ-PQCLEAR).
+
+Immediately following a [`PQexec`](libpq-exec.html#LIBPQ-PQEXEC) or [`PQgetResult`](libpq-async.html#LIBPQ-PQGETRESULT) call, [`PQerrorMessage`](libpq-status.html#LIBPQ-PQERRORMESSAGE) (on the connection) will return the same string as [`PQresultErrorMessage`](libpq-exec.html#LIBPQ-PQRESULTERRORMESSAGE) (on the result). However, a `PGresult` will retain its error message until destroyed, whereas the connection's error message will change when subsequent operations are done. Use [`PQresultErrorMessage`](libpq-exec.html#LIBPQ-PQRESULTERRORMESSAGE) when you want to know the status associated with a particular `PGresult`; use [`PQerrorMessage`](libpq-status.html#LIBPQ-PQERRORMESSAGE) when you want to know the status from the latest operation on the connection.
+
+`PQresultVerboseErrorMessage`[](<>)
+
+Returns a reformatted version of the error message associated with a `PGresult` object.
+
+```
+char *PQresultVerboseErrorMessage(const PGresult *res,
+                                  PGVerbosity verbosity,
+                                  PGContextVisibility show_context);
+```
+
+In some situations a client might wish to obtain a more detailed version of a previously-reported error. [`PQresultVerboseErrorMessage`](libpq-exec.html#LIBPQ-PQRESULTVERBOSEERRORMESSAGE) addresses this need, by computing the message that would have been produced by [`PQresultErrorMessage`](libpq-exec.html#LIBPQ-PQRESULTERRORMESSAGE) if the specified verbosity settings had been in effect for the connection when the given `PGresult` was generated. If the `PGresult` is not an error result, “PGresult is not an error result” is reported instead. The returned string includes a trailing newline.
+
+Unlike most other functions for extracting data from a `PGresult`, the result of this function is a freshly allocated string. The caller must free it using `PQfreemem()` when the string is no longer needed.
+
+A NULL return is possible if there is insufficient memory.
+
+`PQresultErrorField`[](<>)
+
+Returns an individual field of an error report.
+
+```
+char *PQresultErrorField(const PGresult *res, int fieldcode);
+```
+
+*`fieldcode`* is an error field identifier; see the symbols listed below. `NULL` is returned if the `PGresult` is not an error or warning result, or does not include the specified field. Field values will normally not include a trailing newline. The caller should not free the result directly. It will be freed when the associated `PGresult` handle is passed to [`PQclear`](libpq-exec.html#LIBPQ-PQCLEAR).
+
+The following field codes are available:
+
+`PG_DIAG_SEVERITY`
+
+The severity; the field contents are `ERROR`, `FATAL`, or `PANIC` (in an error message), or `WARNING`, `NOTICE`, `DEBUG`, `INFO`, or `LOG` (in a notice message), or a localized translation of one of these. Always present.
+
+`PG_DIAG_SEVERITY_NONLOCALIZED`
+
+The severity; the field contents are `ERROR`, `FATAL`, or `PANIC` (in an error message), or `WARNING`, `NOTICE`, `DEBUG`, `INFO`, or `LOG` (in a notice message). This is identical to the `PG_DIAG_SEVERITY` field except that the contents are never localized. This is present only in reports generated by PostgreSQL versions 9.6 and later.
+
+`PG_DIAG_SQLSTATE`[](<>)
+
+The SQLSTATE code for the error. The SQLSTATE code identifies the type of error that has occurred; it can be used by front-end applications to perform specific operations (such as error handling) in response to a particular database error. For a list of the possible SQLSTATE codes, see [Appendix A](errcodes-appendix.html). This field is not localizable, and is always present.
+
+`PG_DIAG_MESSAGE_PRIMARY`
+
+The primary human-readable error message (typically one line). Always present.
+
+`PG_DIAG_MESSAGE_DETAIL`
+
+Detail: an optional secondary error message carrying more detail about the problem. Might run to multiple lines.
+
+`PG_DIAG_MESSAGE_HINT`
+
+Hint: an optional suggestion what to do about the problem. This is intended to differ from detail in that it offers advice (potentially inappropriate) rather than hard facts. Might run to multiple lines.
+
+`PG_DIAG_STATEMENT_POSITION`
+
+A string containing a decimal integer indicating an error cursor position as an index into the original statement string. The first character has index 1, and positions are measured in characters not bytes.
+
+`PG_DIAG_INTERNAL_POSITION`
+
+This is defined the same as the `PG_DIAG_STATEMENT_POSITION` field, but it is used when the cursor position refers to an internally generated command rather than the one submitted by the client. The `PG_DIAG_INTERNAL_QUERY` field will always appear when this field appears.
+
+`PG_DIAG_INTERNAL_QUERY`
+
+The text of a failed internally-generated command. This could be, for example, an SQL query issued by a PL/pgSQL function.
+
+`PG_DIAG_CONTEXT`
+
+An indication of the context in which the error occurred. Presently this includes a call stack traceback of active procedural language functions and internally-generated queries. The trace is one entry per line, most recent first.
+
+`PG_DIAG_SCHEMA_NAME`
+
+If the error was associated with a specific database object, the name of the schema containing that object, if any.
+
+`PG_DIAG_TABLE_NAME`
+
+If the error was associated with a specific table, the name of the table. (Refer to the schema name field for the name of the table's schema.)
+
+`PG_DIAG_COLUMN_NAME`
+
+If the error was associated with a specific table column, the name of the column. (Refer to the schema and table name fields to identify the table.)
+
+`PG_DIAG_DATATYPE_NAME`
+
+If the error was associated with a specific data type, the name of the data type. (Refer to the schema name field for the name of the data type's schema.)
+
+`PG_DIAG_CONSTRAINT_NAME`
+
+If the error was associated with a specific constraint, the name of the constraint. Refer to fields listed above for the associated table or domain. (For this purpose, indexes are treated as constraints, even if they weren't created with constraint syntax.)
+
+`PG_DIAG_SOURCE_FILE`
+
+The file name of the source-code location where the error was reported.
+
+`PG_DIAG_SOURCE_LINE`
+
+The line number of the source-code location where the error was reported.
+
+`PG_DIAG_SOURCE_FUNCTION`
+
+The name of the source-code function reporting the error.
+
+### Note
+
+The fields for schema name, table name, column name, data type name, and constraint name are supplied only for a limited number of error types; see [Appendix A](errcodes-appendix.html). Do not assume that the presence of any of these fields guarantees the presence of another field. Core error sources observe the interrelationships noted above, but user-defined functions may use these fields in other ways. In the same vein, do not assume that these fields denote contemporary objects in the current database.
+
+The client is responsible for formatting displayed information to meet its needs; in particular it should break long lines as needed. Newline characters appearing in the error message fields should be treated as paragraph breaks, not line breaks.
+
+Errors generated internally by libpq will have severity and primary message, but typically no other fields.
+
+Note that error fields are only available from `PGresult` objects, not `PGconn` objects; there is no `PQerrorField` function.
+
+`PQclear`[](<>)
+
+Frees the storage associated with a `PGresult`. Every command result should be freed via [`PQclear`](libpq-exec.html#LIBPQ-PQCLEAR) when it is no longer needed.
+
+```
+void PQclear(PGresult *res);
+```
+
+You can keep a `PGresult` object around for as long as you need it; it does not go away when you issue a new command, nor even if you close the connection. To get rid of it, you must call [`PQclear`](libpq-exec.html#LIBPQ-PQCLEAR). Failure to do this will result in memory leaks in your application.
+
+### 34.3.2. Retrieving Query Result Information
+
+These functions are used to extract information from a `PGresult` object that represents a successful query result (that is, one that has status `PGRES_TUPLES_OK` or `PGRES_SINGLE_TUPLE`). They can also be used to extract information from a successful Describe operation: a Describe's result has all the same column information that actual execution of the query would provide, but it has zero rows.
+
+For objects with other status values, these functions will act as though the result has zero rows and zero columns.
+
+`PQntuples`[](<>)
+
+Returns the number of rows (tuples) in the query result. (Note that `PGresult` objects are limited to no more than `INT_MAX` rows, so an `int` result is sufficient.)
+
+```
+int PQntuples(const PGresult *res);
+```
+
+`PQnfields`[](<>)
+
+Returns the number of columns (fields) in each row of the query result.
+
+```
+int PQnfields(const PGresult *res);
+```
+
+`PQfname`[](<>)
+
+Returns the column name associated with the given column number. Column numbers start at 0. The caller should not free the result directly. It will be freed when the associated `PGresult` handle is passed to [`PQclear`](libpq-exec.html#LIBPQ-PQCLEAR).
+
+```
+char *PQfname(const PGresult *res,
+              int column_number);
+```
+
+`NULL` is returned if the column number is out of range.
+
+`PQfnumber`[](<>)
+
+Returns the column number associated with the given column name.
+
+```
+int PQfnumber(const PGresult *res,
+              const char *column_name);
+```
+
+-1 is returned if the given name does not match any column.
+
+The given name is treated like an identifier in an SQL command, that is, it is downcased unless double-quoted. For example, given a query result generated from the SQL command:
+
+```
+SELECT 1 AS FOO, 2 AS "BAR";
+```
+
+we would have the results:
+
+```
+PQfname(res, 0)              foo
+PQfname(res, 1)              BAR
+PQfnumber(res, "FOO")        0
+PQfnumber(res, "foo")        0
+PQfnumber(res, "BAR")        -1
+PQfnumber(res, "\"BAR\"")    1
+```
+
+`PQftable`[](<>)
+
+Returns the OID of the table from which the given column was fetched. Column numbers start at 0.
+
+```
+Oid PQftable(const PGresult *res,
+             int column_number);
+```
+
+`InvalidOid` is returned if the column number is out of range, or if the specified column is not a simple reference to a table column. You can query the system table `pg_class` to determine exactly which table is referenced.
+
+The type `Oid` and the constant `InvalidOid` will be defined when you include the libpq header file. They will both be some integer type.
+
+`PQftablecol`[](<>)
+
+Returns the column number (within its table) of the column making up the specified query result column. Query-result column numbers start at 0, but table columns have nonzero numbers.
+
+```
+int PQftablecol(const PGresult *res,
+                int column_number);
+```
+
+Zero is returned if the column number is out of range, or if the specified column is not a simple reference to a table column.
+
+`PQfformat`[](<>)
+
+Returns the format code indicating the format of the given column. Column numbers start at 0.
+
+```
+int PQfformat(const PGresult *res,
+              int column_number);
+```
+
+Format code zero indicates textual data representation, while format code one indicates binary representation. (Other codes are reserved for future definition.)
+
+`PQftype`[](<>)
+
+Returns the data type associated with the given column number. The integer returned is the internal OID number of the type. Column numbers start at 0.
+
+```
+Oid PQftype(const PGresult *res,
+            int column_number);
+```
+
+You can query the system table `pg_type` to obtain the names and properties of the various data types. The OIDs of the built-in data types are defined in the file `catalog/pg_type_d.h` in the PostgreSQL installation's `include` directory.
+
+`PQfmod`[](<>)
+
+Returns the type modifier of the column associated with the given column number. Column numbers start at 0.
+
+```
+int PQfmod(const PGresult *res,
+           int column_number);
+```
+
+The interpretation of modifier values is type-specific; they typically indicate precision or size limits. The value -1 is used to indicate “no information available”. Most data types do not use modifiers, in which case the value is always -1.
+
+`PQfsize`[](<>)
+
+Returns the size in bytes of the column associated with the given column number. Column numbers start at 0.
+
+```
+int PQfsize(const PGresult *res,
+            int column_number);
+```
+
+[`PQfsize`](libpq-exec.html#LIBPQ-PQFSIZE) returns the space allocated for this column in a database row, in other words the size of the server's internal representation of the data type. (Accordingly, it is not really very useful to clients.) A negative value indicates the data type is variable-length.
+
+`PQbinaryTuples`[](<>)
+
+Returns 1 if the `PGresult` contains binary data and 0 if it contains text data.
+
+```
+int PQbinaryTuples(const PGresult *res);
+```
+
+This function is deprecated (except for its use in connection with `COPY`), because it is possible for a single `PGresult` to contain text data in some columns and binary data in others. [`PQfformat`](libpq-exec.html#LIBPQ-PQFFORMAT) is preferred. [`PQbinaryTuples`](libpq-exec.html#LIBPQ-PQBINARYTUPLES) returns 1 only if all columns of the result are binary (format 1).
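+
+Tying these accessors together, here is a minimal sketch (an open `PGconn *conn` and `<stdio.h>` are assumed; `PQgetvalue` and `PQgetisnull` are described just below) that prints every field of a query result:
+
+```
+PGresult *res = PQexec(conn, "SELECT 1 AS one, NULL AS two");
+
+if (PQresultStatus(res) == PGRES_TUPLES_OK)
+{
+    int nrows = PQntuples(res);
+    int ncols = PQnfields(res);
+
+    for (int i = 0; i < nrows; i++)
+        for (int j = 0; j < ncols; j++)
+            printf("%s = %s\n",
+                   PQfname(res, j),
+                   PQgetisnull(res, i, j) ? "(null)" : PQgetvalue(res, i, j));
+}
+PQclear(res);
+```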
+
+`PQgetvalue`[](<>)
+
+Returns a single field value of one row of a `PGresult`. Row and column numbers start at 0. The caller should not free the result directly. It will be freed when the associated `PGresult` handle is passed to [`PQclear`](libpq-exec.html#LIBPQ-PQCLEAR).
+
+```
+char *PQgetvalue(const PGresult *res,
+                 int row_number,
+                 int column_number);
+```
+
+For data in text format, the value returned by [`PQgetvalue`](libpq-exec.html#LIBPQ-PQGETVALUE) is a null-terminated character string representation of the field value. For data in binary format, the value is in the binary representation determined by the data type's `typsend` and `typreceive` functions. (The value is actually followed by a zero byte in this case too, but that is not ordinarily useful, since the value is likely to contain embedded nulls.)
+
+An empty string is returned if the field value is null. See [`PQgetisnull`](libpq-exec.html#LIBPQ-PQGETISNULL) to distinguish null values from empty-string values.
+
+The pointer returned by [`PQgetvalue`](libpq-exec.html#LIBPQ-PQGETVALUE) points to storage that is part of the `PGresult` structure. One should not modify the data it points to, and one must explicitly copy the data into other storage if it is to be used past the lifetime of the `PGresult` structure itself.
+
+`PQgetisnull`[](<>)[](<>)
+
+Tests a field for a null value. Row and column numbers start at 0.
+
+```
+int PQgetisnull(const PGresult *res,
+                int row_number,
+                int column_number);
+```
+
+This function returns 1 if the field is null and 0 if it contains a non-null value. (Note that [`PQgetvalue`](libpq-exec.html#LIBPQ-PQGETVALUE) will return an empty string, not a null pointer, for a null field.)
+
+`PQgetlength`[](<>)
+
+Returns the actual length of a field value in bytes. Row and column numbers start at 0.
+
+```
+int PQgetlength(const PGresult *res,
+                int row_number,
+                int column_number);
+```
+
+This is the actual data length for the particular data value, that is, the size of the object pointed to by [`PQgetvalue`](libpq-exec.html#LIBPQ-PQGETVALUE). For text data format this is the same as `strlen()`. For binary format this is essential information. Note that one should *not* rely on [`PQfsize`](libpq-exec.html#LIBPQ-PQFSIZE) to obtain the actual data length.
+
+`PQnparams`[](<>)
+
+Returns the number of parameters of a prepared statement.
+
+```
+int PQnparams(const PGresult *res);
+```
+
+This function is only useful when inspecting the result of [`PQdescribePrepared`](libpq-exec.html#LIBPQ-PQDESCRIBEPREPARED). For other types of queries it will return zero.
+
+`PQparamtype`[](<>)
+
+Returns the data type of the indicated statement parameter. Parameter numbers start at 0.
+
+```
+Oid PQparamtype(const PGresult *res, int param_number);
+```
+
+This function is only useful when inspecting the result of [`PQdescribePrepared`](libpq-exec.html#LIBPQ-PQDESCRIBEPREPARED). For other types of queries it will return zero.
+
+`PQprint`[](<>)
+
+Prints out all the rows and, optionally, the column names to the specified output stream.
+
+```
+void PQprint(FILE *fout,      /* output stream */
+             const PGresult *res,
+             const PQprintOpt *po);
+typedef struct
+{
+    pqbool  header;      /* print output field headings and row count */
+    pqbool  align;       /* fill align the fields */
+    pqbool  standard;    /* old brain dead format */
+    pqbool  html3;       /* output HTML tables */
+    pqbool  expanded;    /* expand tables */
+    pqbool  pager;       /* use pager for output if needed */
+    char    *fieldSep;   /* field separator */
+    char    *tableOpt;   /* attributes for HTML table element */
+    char    *caption;    /* HTML table caption */
+    char    **fieldName; /* null-terminated array of replacement field names */
+} PQprintOpt;
+```
+
+This function was formerly used by psql to print query results, but this is no longer the case. Note that it assumes all the data is in text format.
+
+### 34.3.3. Retrieving Other Result Information
+
+These functions are used to extract other information from `PGresult` objects.
+
+`PQcmdStatus`[](<>)
+
+Returns the command status tag from the SQL command that generated the `PGresult`.
+
+```
+char *PQcmdStatus(PGresult *res);
+```
+
+Commonly this is just the name of the command, but it might include additional data such as the number of rows processed. The caller should not free the result directly. It will be freed when the associated `PGresult` handle is passed to [`PQclear`](libpq-exec.html#LIBPQ-PQCLEAR).
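+
+For example, a short sketch (the table is hypothetical; an open `PGconn *conn` and `<stdio.h>` are assumed) combining `PQcmdStatus` with `PQcmdTuples`, which is described next:
+
+```
+PGresult *res = PQexec(conn, "UPDATE mytable SET y = 0 WHERE x = 42");
+
+if (PQresultStatus(res) == PGRES_COMMAND_OK)
+    printf("%s: %s row(s)\n",
+           PQcmdStatus(res),    /* command tag, e.g., "UPDATE 1" */
+           PQcmdTuples(res));   /* affected-row count, e.g., "1" */
+PQclear(res);
+```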
+
+`PQcmdTuples`[](<>)
+
+Returns the number of rows affected by the SQL command.
+
+```
+char *PQcmdTuples(PGresult *res);
+```
+
+This function returns a string containing the number of rows affected by the SQL statement that generated the `PGresult`. This function can only be used following the execution of a `SELECT`, `CREATE TABLE AS`, `INSERT`, `UPDATE`, `DELETE`, `MOVE`, `FETCH`, or `COPY` statement, or an `EXECUTE` of a prepared query that contains an `INSERT`, `UPDATE`, or `DELETE` statement. If the command that generated the `PGresult` was anything else, [`PQcmdTuples`](libpq-exec.html#LIBPQ-PQCMDTUPLES) returns an empty string. The caller should not free the return value directly. It will be freed when the associated `PGresult` handle is passed to [`PQclear`](libpq-exec.html#LIBPQ-PQCLEAR).
+
+`PQoidValue`[](<>)
+
+Returns the OID[](<>) of the inserted row, if the SQL command was an `INSERT` that inserted exactly one row into a table that has OIDs, or a `EXECUTE` of a prepared query containing a suitable `INSERT` statement. Otherwise, this function returns `InvalidOid`. This function will also return `InvalidOid` if the table affected by the `INSERT` statement does not contain OIDs.
+
+```
+Oid PQoidValue(const PGresult *res);
+```
+
+`PQoidStatus`[](<>)
+
+This function is deprecated in favor of [`PQoidValue`](libpq-exec.html#LIBPQ-PQOIDVALUE) and is not thread-safe. It returns a string with the OID of the inserted row, while [`PQoidValue`](libpq-exec.html#LIBPQ-PQOIDVALUE) returns the OID value.
+
+```
+char *PQoidStatus(const PGresult *res);
+```
+
+### 34.3.4. Escaping Strings for Inclusion in SQL Commands
+
+[](<>)
+
+`PQescapeLiteral`[](<>)
+
+```
+char *PQescapeLiteral(PGconn *conn, const char *str, size_t length);
+```
+
+[`PQescapeLiteral`](libpq-exec.html#LIBPQ-PQESCAPELITERAL) escapes a string for use within an SQL command. This is useful when inserting data values as literal constants in SQL commands. Certain characters (such as quotes and backslashes) must be escaped to prevent them from being interpreted specially by the SQL parser. [`PQescapeLiteral`](libpq-exec.html#LIBPQ-PQESCAPELITERAL) performs this operation.
+
+[`PQescapeLiteral`](libpq-exec.html#LIBPQ-PQESCAPELITERAL) returns an escaped version of the *`str`* parameter in memory allocated with `malloc()`. This memory should be freed using `PQfreemem()` when the result is no longer needed. A terminating zero byte is not required, and should not be counted in *`length`*. (If a terminating zero byte is found before *`length`* bytes are processed, [`PQescapeLiteral`](libpq-exec.html#LIBPQ-PQESCAPELITERAL) stops at the zero; the behavior is thus rather like `strncpy`.) The return string has all special characters replaced so that they can be properly processed by the PostgreSQL string literal parser. A terminating zero byte is also added. The single quotes that must surround PostgreSQL string literals are included in the result string.
+
+On error, [`PQescapeLiteral`](libpq-exec.html#LIBPQ-PQESCAPELITERAL) returns `NULL` and a suitable message is stored in the *`conn`* object.
+
+### Tip
+
+It is especially important to do proper escaping when handling strings that were received from an untrustworthy source. Otherwise there is a security risk: you are vulnerable to “SQL injection” attacks wherein unwanted SQL commands are fed to your database.
+
+Note that it is neither necessary nor correct to do escaping when a data value is passed as a separate parameter in [`PQexecParams`](libpq-exec.html#LIBPQ-PQEXECPARAMS) or its sibling routines.
+
+`PQescapeIdentifier`[](<>)
+
+```
+char *PQescapeIdentifier(PGconn *conn, const char *str, size_t length);
+```
+
+[`PQescapeIdentifier`](libpq-exec.html#LIBPQ-PQESCAPEIDENTIFIER) escapes a string for use as an SQL identifier, such as a table, column, or function name. This is useful when a user-supplied identifier might contain special characters that would otherwise not be interpreted as part of the identifier by the SQL parser, or when the identifier might contain upper case characters whose case should be preserved.
+
+[`PQescapeIdentifier`](libpq-exec.html#LIBPQ-PQESCAPEIDENTIFIER) returns a version of the *`str`* parameter escaped as an SQL identifier in memory allocated with `malloc()`. This memory must be freed using `PQfreemem()` when the result is no longer needed. A terminating zero byte is not required, and should not be counted in *`length`*. (If a terminating zero byte is found before *`length`* bytes are processed, [`PQescapeIdentifier`](libpq-exec.html#LIBPQ-PQESCAPEIDENTIFIER) stops at the zero; the behavior is thus rather like `strncpy`.)
+The return string has all special characters replaced so that it will be properly processed as an SQL identifier. A terminating zero byte is also added. The return string will also be surrounded by double quotes.
+
+On error, [`PQescapeIdentifier`](libpq-exec.html#LIBPQ-PQESCAPEIDENTIFIER) returns `NULL` and a suitable message is stored in the *`conn`* object.
+
+### Tip
+
+As with string literals, to prevent SQL injection attacks, SQL identifiers must be escaped when they are received from an untrustworthy source.
+
+`PQescapeStringConn`[](<>)
+
+```
+size_t PQescapeStringConn(PGconn *conn,
+                          char *to, const char *from, size_t length,
+                          int *error);
+```
+
+[`PQescapeStringConn`](libpq-exec.html#LIBPQ-PQESCAPESTRINGCONN) escapes string literals, much like [`PQescapeLiteral`](libpq-exec.html#LIBPQ-PQESCAPELITERAL). Unlike [`PQescapeLiteral`](libpq-exec.html#LIBPQ-PQESCAPELITERAL), the caller is responsible for providing an appropriately sized buffer. Furthermore, [`PQescapeStringConn`](libpq-exec.html#LIBPQ-PQESCAPESTRINGCONN) does not generate the single quotes that must surround PostgreSQL string literals; they should be provided in the SQL command that the result is inserted into. The parameter *`from`* points to the first character of the string that is to be escaped, and the *`length`* parameter gives the number of bytes in this string. A terminating zero byte is not required, and should not be counted in *`length`*. (If a terminating zero byte is found before *`length`* bytes are processed, [`PQescapeStringConn`](libpq-exec.html#LIBPQ-PQESCAPESTRINGCONN) stops at the zero; the behavior is thus rather like `strncpy`.) *`to`* shall point to a buffer that is able to hold at least one more byte than twice the value of *`length`*, otherwise the behavior is undefined. Behavior is likewise undefined if the *`to`* and *`from`* strings overlap.
+
+If the *`error`* parameter is not `NULL`, then `*error` is set to zero on success, nonzero on error. Presently the only possible error conditions involve invalid multibyte encoding in the source string. The output string is still generated on error, but it can be expected that the server will reject it as malformed. On error, a suitable message is stored in the *`conn`* object, whether or not *`error`* is `NULL`.
+
+[`PQescapeStringConn`](libpq-exec.html#LIBPQ-PQESCAPESTRINGCONN) returns the number of bytes written to *`to`*, not including the terminating zero byte.
+
+`PQescapeString`[](<>)
+
+[`PQescapeString`](libpq-exec.html#LIBPQ-PQESCAPESTRING) is an older, deprecated version of [`PQescapeStringConn`](libpq-exec.html#LIBPQ-PQESCAPESTRINGCONN).
+
+```
+size_t PQescapeString (char *to, const char *from, size_t length);
+```
+
+The only difference from [`PQescapeStringConn`](libpq-exec.html#LIBPQ-PQESCAPESTRINGCONN) is that [`PQescapeString`](libpq-exec.html#LIBPQ-PQESCAPESTRING) does not take `PGconn` or *`error`* parameters. Because of this, it cannot adjust its behavior depending on the connection properties (such as character encoding) and therefore *it might give the wrong results*. Also, it has no way to report error conditions.
+
+[`PQescapeString`](libpq-exec.html#LIBPQ-PQESCAPESTRING) can be used safely in client programs that work with only one PostgreSQL connection at a time (in this case it can find out what it needs to know “behind the scenes”). In other contexts it is a security hazard and should be avoided in favor of [`PQescapeStringConn`](libpq-exec.html#LIBPQ-PQESCAPESTRINGCONN).
+
+`PQescapeByteaConn`[](<>)
+
+Escapes binary data for use within an SQL command with the type `bytea`. As with [`PQescapeStringConn`](libpq-exec.html#LIBPQ-PQESCAPESTRINGCONN), this is only used when inserting data directly into an SQL command string.
+
+```
+unsigned char *PQescapeByteaConn(PGconn *conn,
+                                 const unsigned char *from,
+                                 size_t from_length,
+                                 size_t *to_length);
+```
+
+Certain byte values must be escaped when used as part of a `bytea` literal in an SQL statement. [`PQescapeByteaConn`](libpq-exec.html#LIBPQ-PQESCAPEBYTEACONN) escapes bytes using either hex encoding or backslash escaping. See [Section 8.4](datatype-binary.html) for more information.
+
+The *`from`* parameter points to the first byte of the string that is to be escaped, and the *`from_length`* parameter gives the number of bytes in this binary string. (A terminating zero byte is neither necessary nor counted.) The *`to_length`* parameter points to a variable that will hold the resultant escaped string length. This result string length includes the terminating zero byte of the result.
+
+[`PQescapeByteaConn`](libpq-exec.html#LIBPQ-PQESCAPEBYTEACONN) returns an escaped version of the *`from`* parameter binary string in memory allocated with `malloc()`. This memory should be freed using `PQfreemem()` when the result is no longer needed. The return string has all special characters replaced so that they can be properly processed by the PostgreSQL string literal parser, and the `bytea` input function. A terminating zero byte is also added. The single quotes that must surround PostgreSQL string literals are not part of the result string.
+
+On error, a null pointer is returned, and a suitable error message is stored in the *`conn`* object. Currently, the only possible error is insufficient memory for the result string.
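+
+As a brief sketch of the literal-escaping workflow above (the `notes` table and the input string are hypothetical; an open `PGconn *conn` plus `<stdio.h>` and `<string.h>` are assumed):
+
+```
+const char *raw = "O'Reilly";   /* possibly untrusted input */
+char *lit = PQescapeLiteral(conn, raw, strlen(raw));
+
+if (lit != NULL)
+{
+    char query[256];
+
+    /* PQescapeLiteral already supplies the surrounding single quotes */
+    snprintf(query, sizeof(query),
+             "INSERT INTO notes (body) VALUES (%s)", lit);
+    PQfreemem(lit);
+
+    PGresult *res = PQexec(conn, query);
+    if (PQresultStatus(res) != PGRES_COMMAND_OK)
+        fprintf(stderr, "%s", PQerrorMessage(conn));
+    PQclear(res);
+}
+```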
+
+```
+unsigned char *PQescapeBytea(const unsigned char *from,
+                             size_t from_length,
+                             size_t *to_length);
+```
+
+The only difference from[`PQescapeByteaConn`](libpq-exec.html#LIBPQ-PQESCAPEBYTEACONN)is that[`PQescapeBytea`](libpq-exec.html#LIBPQ-PQESCAPEBYTEA)does not take a`PGconn`parameter. Because of this,[`PQescapeBytea`](libpq-exec.html#LIBPQ-PQESCAPEBYTEA)can only be used safely in client programs that use a single PostgreSQL connection at a time (in this case it can find out what it needs to know “behind the scenes”). It*might give the wrong results*if used in programs that use multiple database connections (use[`PQescapeByteaConn`](libpq-exec.html#LIBPQ-PQESCAPEBYTEACONN)in such cases).
+
+`PQunescapeBytea`[](<>)
+
+Converts a string representation of binary data into binary data, the reverse of[`PQescapeBytea`](libpq-exec.html#LIBPQ-PQESCAPEBYTEA). This is needed when retrieving`bytea`data in text format, but not when retrieving it in binary format.
+
+```
+unsigned char *PQunescapeBytea(const unsigned char *from, size_t *to_length);
+```
+
+The*`from`*parameter points to a string such as might be returned by[`PQgetvalue`](libpq-exec.html#LIBPQ-PQGETVALUE)when applied to a`bytea`column.[`PQunescapeBytea`](libpq-exec.html#LIBPQ-PQUNESCAPEBYTEA)converts this string representation into its binary representation. It returns a pointer to a buffer allocated with`malloc()`, or`NULL`on error, and puts the size of the buffer in*`to_length`*. The result must be freed using[`PQfreemem`](libpq-misc.html#LIBPQ-PQFREEMEM)when it is no longer needed.
+
+This conversion is not exactly the inverse of[`PQescapeBytea`](libpq-exec.html#LIBPQ-PQESCAPEBYTEA), because the string is not expected to be “escaped” when received from[`PQgetvalue`](libpq-exec.html#LIBPQ-PQGETVALUE). In particular this means there is no need for string quoting considerations, and hence no need for a`PGconn`parameter.
diff --git a/docs/X/libpq-fastpath.md b/docs/en/libpq-fastpath.md
similarity index 100%
rename from docs/X/libpq-fastpath.md
rename to docs/en/libpq-fastpath.md
diff --git a/docs/en/libpq-fastpath.zh.md b/docs/en/libpq-fastpath.zh.md
new file mode 100644
index 0000000000000000000000000000000000000000..b9987e8ba47368df3fc22f9feeb0aab4d7707a86
--- /dev/null
+++ b/docs/en/libpq-fastpath.zh.md
@@ -0,0 +1,42 @@
+## 34.8. The Fast-Path Interface
+
+[](<>)
+
+PostgreSQL provides a fast-path interface to send simple function calls to the server.
+
+### Tip
+
+This interface is somewhat obsolete, as one can achieve similar performance and greater functionality by setting up a prepared statement to define the function call. Then, executing the statement with binary transmission of parameters and results substitutes for a fast-path function call.
+
+The function`PQfn`[](<>)requests execution of a server function via the fast-path interface:
+
+```
+PGresult *PQfn(PGconn *conn,
+               int fnid,
+               int *result_buf,
+               int *result_len,
+               int result_is_int,
+               const PQArgBlock *args,
+               int nargs);
+
+typedef struct
+{
+    int len;
+    int isint;
+    union
+    {
+        int *ptr;
+        int integer;
+    } u;
+} PQArgBlock;
+```
+
+The*`fnid`*argument is the OID of the function to be executed.*`args`*and*`nargs`*define the parameters to be passed to the function; they must match the declared function argument list. When the*`isint`*field of a parameter structure is true, the*`u.integer`*value is sent to the server as an integer of the indicated length (this must be 2 or 4 bytes); proper byte-swapping occurs. When*`isint`*is false, the indicated number of bytes at*`u.ptr`*are sent with no processing; the data must be in the format expected by the server for binary transmission of the function's argument data type. (The declaration of*`u.ptr`*as being of type`int *`is historical; it would be better to consider it`void *`.)*`result_buf`*points to the buffer in which to place the function's return value. The caller must have allocated sufficient space to store the return value. (There is no check!) The actual result length in bytes will be returned in the integer pointed to by*`result_len`*. If a 2- or 4-byte integer result is expected, set*`result_is_int`*to 1, otherwise set it to 0.
+Setting*`result_is_int`*to 1 causes libpq to byte-swap the value if necessary, so that it is delivered as a proper`int`value for the client machine; note that a 4-byte integer is delivered into`*result_buf`for either allowed result size. When*`result_is_int`*is 0, the binary-format byte string sent by the server is returned unmodified. (In this case it's better to consider*`result_buf`*as being of type`void *`.)
+
+`PQfn`always returns a valid`PGresult`pointer, with status`PGRES_COMMAND_OK`for success or`PGRES_FATAL_ERROR`if some problem was encountered. The result status should be checked before the result is used. The caller is responsible for freeing the`PGresult`with[`PQclear`](libpq-exec.html#LIBPQ-PQCLEAR)when it is no longer needed.
+
+To pass a NULL argument to the function, set the*`len`*field of that parameter structure to`-1`; the*`isint`*and*`u`*fields are then irrelevant.
+
+If the function returns NULL,`*result_len`is set to`-1`, and`*result_buf`is not modified.
+
+Note that it is not possible to handle set-valued results when using this interface. Also, the function must be a plain function, not an aggregate, window function, or procedure.
diff --git a/docs/X/libpq-misc.md b/docs/en/libpq-misc.md
similarity index 100%
rename from docs/X/libpq-misc.md
rename to docs/en/libpq-misc.md
diff --git a/docs/en/libpq-misc.zh.md b/docs/en/libpq-misc.zh.md
new file mode 100644
index 0000000000000000000000000000000000000000..75e4d31bc3a309e8a0a8d956bfb541996d00bf7c
--- /dev/null
+++ b/docs/en/libpq-misc.zh.md
@@ -0,0 +1,141 @@
+## 34.12. Miscellaneous Functions
+
+As always, there are some functions that just don't fit anywhere.
+
+`PQfreemem`[](<>)
+
+Frees memory allocated by libpq.
+
+```
+void PQfreemem(void *ptr);
+```
+
+Frees memory allocated by libpq, particularly[`PQescapeByteaConn`](libpq-exec.html#LIBPQ-PQESCAPEBYTEACONN),[`PQescapeBytea`](libpq-exec.html#LIBPQ-PQESCAPEBYTEA),[`PQunescapeBytea`](libpq-exec.html#LIBPQ-PQUNESCAPEBYTEA), and`PQnotifies`. It is particularly important that this function, rather than`free()`, be used on Microsoft Windows. This is because allocating memory in a DLL and releasing it in the application works only if multithreaded/single-threaded, release/debug, and static/dynamic flags are the same for the DLL and the application. On non-Microsoft Windows platforms, this function is the same as the standard library function`free()`.
+
+`PQconninfoFree`[](<>)
+
+Frees the data structures allocated by[`PQconndefaults`](libpq-connect.html#LIBPQ-PQCONNDEFAULTS)or[`PQconninfoParse`](libpq-connect.html#LIBPQ-PQCONNINFOPARSE).
+
+```
+void PQconninfoFree(PQconninfoOption *connOptions);
+```
+
+A simple[`PQfreemem`](libpq-misc.html#LIBPQ-PQFREEMEM)will not do for this, since the array contains references to subsidiary strings.
+
+`PQencryptPasswordConn`[](<>)
+
+Prepares the encrypted form of a PostgreSQL password.
+
+```
+char *PQencryptPasswordConn(PGconn *conn, const char *passwd, const char *user, const char *algorithm);
+```
+
+This function is intended to be used by client applications that wish to send commands like`ALTER USER joe PASSWORD 'pwd'`. It is good practice not to send the original cleartext password in such a command, because it might be exposed in command logs, activity displays, and so on. Instead, use this function to convert the password to encrypted form before it is sent.
+
+The*`passwd`*and*`user`*arguments are the cleartext password, and the SQL name of the user it is for.*`algorithm`*specifies the encryption algorithm to use to encrypt the password. Currently supported algorithms are`md5`and`scram-sha-256`(`on`and`off`are also accepted as aliases for`md5`, for compatibility with older server versions). Note that support for`scram-sha-256`was introduced in PostgreSQL version 10, and will not work correctly with older server versions. If*`algorithm`*is`NULL`, this function will query the server for the current value of the[password_encryption](runtime-config-connection.html#GUC-PASSWORD-ENCRYPTION)setting. That can block, and will fail if the current transaction is aborted, or if the connection is busy executing another query. If you wish to use the default algorithm for the server but want to avoid blocking, query`password_encryption`yourself before calling[`PQencryptPasswordConn`](libpq-misc.html#LIBPQ-PQENCRYPTPASSWORDCONN), and pass that value as*`algorithm`*.
+
+The return value is a string allocated by`malloc`. The caller can assume the string doesn't contain any special characters that would require escaping. Use[`PQfreemem`](libpq-misc.html#LIBPQ-PQFREEMEM)to free the result when done with it. On error, returns`NULL`, and a suitable message is stored in the connection object.
+
+`PQencryptPassword`[](<>)
+
+Prepares the md5-encrypted form of a PostgreSQL password.
+
+```
+char *PQencryptPassword(const char *passwd, const char *user);
+```
+
+[`PQencryptPassword`](libpq-misc.html#LIBPQ-PQENCRYPTPASSWORD)is an older, deprecated version of[`PQencryptPasswordConn`](libpq-misc.html#LIBPQ-PQENCRYPTPASSWORDCONN). The difference is that[`PQencryptPassword`](libpq-misc.html#LIBPQ-PQENCRYPTPASSWORD)does not require a connection object, and`md5`is always used as the encryption algorithm.
+
+`PQmakeEmptyPGresult`[](<>)
+
+Constructs an empty`PGresult`object with the given status.
+
+```
+PGresult *PQmakeEmptyPGresult(PGconn *conn, ExecStatusType status);
+```
+
+This is libpq's internal function to allocate and initialize an empty`PGresult`object. This function returns`NULL`if memory could not be allocated. It is exported because some applications find it useful to generate result objects (particularly objects with error status) themselves. If*`conn`*is not null and*`status`*indicates an error, the current error message of the specified connection is copied into the`PGresult`. Also, if*`conn`*is not null, any event procedures registered in the connection are copied into the`PGresult`. (They do not get`PGEVT_RESULTCREATE`calls, but see[`PQfireResultCreateEvents`](libpq-misc.html#LIBPQ-PQFIRERESULTCREATEEVENTS).) Note that[`PQclear`](libpq-exec.html#LIBPQ-PQCLEAR)should eventually be called on the object, just as with a`PGresult`returned by libpq itself.
+
+`PQfireResultCreateEvents`[](<>)
+
+Fires a`PGEVT_RESULTCREATE`event (see[Section 34.14](libpq-events.html)) for each event procedure registered in the`PGresult`object. Returns nonzero for success, zero if any event procedure fails.
+
+```
+int PQfireResultCreateEvents(PGconn *conn, PGresult *res);
+```
+
+The`conn`argument is passed through to event procedures but not used directly. It can be`NULL`if the event procedures won't use it.
+
+Event procedures that have already received a`PGEVT_RESULTCREATE`or`PGEVT_RESULTCOPY`event for this object are not fired again.
+
+The main reason that this function is separate from[`PQmakeEmptyPGresult`](libpq-misc.html#LIBPQ-PQMAKEEMPTYPGRESULT)is that it is often appropriate to create a`PGresult`and fill it with data before invoking the event procedures.
+
+`PQcopyResult`[](<>)
+
+Makes a copy of a`PGresult`object. The copy is not linked to the source result in any way and[`PQclear`](libpq-exec.html#LIBPQ-PQCLEAR)must be called when the copy is no longer needed. If the function fails,`NULL`is returned.
+
+```
+PGresult *PQcopyResult(const PGresult *src, int flags);
+```
+
+This is not intended to make an exact copy. The returned result is always put into`PGRES_TUPLES_OK`status, and does not copy any error message in the source. (It does copy the command status string, however.) The*`flags`*argument determines what else is copied. It is a bitwise OR of several flags.`PG_COPYRES_ATTRS`specifies copying the source result's attributes (column definitions).`PG_COPYRES_TUPLES`specifies copying the source result's tuples. (This implies copying the attributes, too.)`PG_COPYRES_NOTICEHOOKS`specifies copying the source result's notice hooks.`PG_COPYRES_EVENTS`specifies copying the source result's events. (But any instance data associated with the source is not copied.)
+
+`PQsetResultAttrs`[](<>)
+
+Sets the attributes of a`PGresult`object.
+
+```
+int PQsetResultAttrs(PGresult *res, int numAttributes, PGresAttDesc *attDescs);
+```
+
+The provided*`attDescs`*are copied into the result. If the*`attDescs`*pointer is`NULL`or*`numAttributes`*is less than one, the request is ignored and the function succeeds. If*`res`*already contains attributes, the function will fail. If the function fails, the return value is zero. If the function succeeds, the return value is nonzero.
+
+`PQsetvalue`[](<>)
+
+Sets a tuple field value of a`PGresult`object.
+
+```
+int PQsetvalue(PGresult *res, int tup_num, int field_num, char *value, int len);
+```
+
+The function will automatically grow the result's internal tuples array as needed. However, the*`tup_num`*argument must be less than or equal to[`PQntuples`](libpq-exec.html#LIBPQ-PQNTUPLES), meaning this function can only grow the tuples array one tuple at a time. But any field of any existing tuple can be modified in any order. If a value at*`field_num`*already exists, it will be overwritten. If*`len`*is -1 or*`value`*is`NULL`, the field value will be set to an SQL null value. The*`value`*is copied into the result's private storage, thus is no longer needed after the function returns. If the function fails, the return value is zero. If the function succeeds, the return value is nonzero.
+
+`PQresultAlloc`[](<>)
+
+Allocate subsidiary storage for a`PGresult`object.
+
+```
+void *PQresultAlloc(PGresult *res, size_t nBytes);
+```
+
+Any memory allocated with this function will be freed when*`res`*is cleared. If the function fails, the return value is`NULL`. The result is guaranteed to be adequately aligned for any type of data, just as for`malloc`.
+
+`PQresultMemorySize`[](<>)
+
+Retrieves the number of bytes allocated for a`PGresult`object.
+
+```
+size_t PQresultMemorySize(const PGresult *res);
+```
+
+This value is the sum of all`malloc`requests associated with the`PGresult`object, that is, all the space that will be freed by[`PQclear`](libpq-exec.html#LIBPQ-PQCLEAR). This information can be useful for managing memory consumption.
+
+`PQlibVersion`[](<>)
+
+Return the version of libpq that is being used.
+
+```
+int PQlibVersion(void);
+```
+
+The result of this function can be used to determine, at run time, whether specific functionality is available in the currently loaded version of libpq. The function can be used, for example, to determine which connection options are available in[`PQconnectdb`](libpq-connect.html#LIBPQ-PQCONNECTDB).
+
+The result is formed by multiplying the library's major version number by 10000 and adding the minor version number. For example, version 10.1 will be returned as 100001, and version 11.0 will be returned as 110000.
+
+Prior to major version 10, PostgreSQL used three-part version numbers in which the first two parts together represented the major version. For those versions,[`PQlibVersion`](libpq-misc.html#LIBPQ-PQLIBVERSION)uses two digits for each part; for example version 9.1.5 will be returned as 90105, and version 9.2.0 will be returned as 90200.
+
+Therefore, for purposes of determining feature compatibility, applications should divide the result of[`PQlibVersion`](libpq-misc.html#LIBPQ-PQLIBVERSION)by 100 not 10000 to determine a logical major version number. In all release series, only the last two digits differ between minor releases (bug-fix releases).
+
+### Note
+
+This function appeared in PostgreSQL version 9.1, so it cannot be used to detect required functionality in earlier versions, since calling it would create a link dependency on version 9.1 or later.
diff --git a/docs/X/libpq-notice-processing.md b/docs/en/libpq-notice-processing.md
similarity index 100%
rename from docs/X/libpq-notice-processing.md
rename to docs/en/libpq-notice-processing.md
diff --git a/docs/en/libpq-notice-processing.zh.md b/docs/en/libpq-notice-processing.zh.md
new file mode 100644
index 0000000000000000000000000000000000000000..ad9fc6eef44fd2f470ec32eb27be706e99c8a3ac
--- /dev/null
+++ b/docs/en/libpq-notice-processing.zh.md
@@ -0,0 +1,45 @@
+## 34.13. Notice Processing
+
+[](<>)
+
+Notice and warning messages generated by the server are not returned by the query execution functions, since they do not imply failure of the query. Instead they are passed to a notice handling function, and execution continues normally after the handler returns. The default notice handling function prints the message on`stderr`, but the application can override this behavior by supplying its own handling function.
+
+For historical reasons, there are two levels of notice handling, called the notice receiver and notice processor. The default behavior is for the notice receiver to format the notice and pass a string to the notice processor for printing. However, an application that chooses to provide its own notice receiver will typically ignore the notice processor layer and just do all the work in the notice receiver.
+
+The function`PQsetNoticeReceiver` [](<>) [](<>)sets or examines the current notice receiver for a connection object. Similarly,`PQsetNoticeProcessor` [](<>) [](<>)sets or examines the current notice processor.
+
+```
+typedef void (*PQnoticeReceiver) (void *arg, const PGresult *res);
+
+PQnoticeReceiver
+PQsetNoticeReceiver(PGconn *conn,
+                    PQnoticeReceiver proc,
+                    void *arg);
+
+typedef void (*PQnoticeProcessor) (void *arg, const char *message);
+
+PQnoticeProcessor
+PQsetNoticeProcessor(PGconn *conn,
+                     PQnoticeProcessor proc,
+                     void *arg);
+```
+
+Each of these functions returns the previous notice receiver or processor function pointer, and sets the new value. If you supply a null function pointer, no action is taken, but the current pointer is returned.
+
+When a notice or warning message is received from the server, or generated internally by libpq, the notice receiver function is called. It is passed the message in the form of a`PGRES_NONFATAL_ERROR` `PGresult`. (This allows the receiver to extract individual fields using[`PQresultErrorField`](libpq-exec.html#LIBPQ-PQRESULTERRORFIELD), or obtain a complete preformatted message using[`PQresultErrorMessage`](libpq-exec.html#LIBPQ-PQRESULTERRORMESSAGE)or[`PQresultVerboseErrorMessage`](libpq-exec.html#LIBPQ-PQRESULTVERBOSEERRORMESSAGE).)
+The same void pointer passed to`PQsetNoticeReceiver`is also passed. (This pointer can be used to access application-specific state if needed.)
+
+The default notice receiver simply extracts the message (using[`PQresultErrorMessage`](libpq-exec.html#LIBPQ-PQRESULTERRORMESSAGE)) and passes it to the notice processor.
+
+The notice processor is responsible for handling a notice or warning message given in text form. It is passed the string text of the message (including a trailing newline), plus a void pointer that is the same one passed to`PQsetNoticeProcessor`. (This pointer can be used to access application-specific state if needed.)
+
+The default notice processor is simply:
+
+```
+static void
+defaultNoticeProcessor(void *arg, const char *message)
+{
+    fprintf(stderr, "%s", message);
+}
+```
+
+Once you have set a notice receiver or processor, you should expect that that function could be called as long as either the`PGconn`object or`PGresult`objects made from it exist. At creation of a`PGresult`, the`PGconn`'s current notice handling pointers are copied into the`PGresult`for possible use by functions like[`PQgetvalue`](libpq-exec.html#LIBPQ-PQGETVALUE).
diff --git a/docs/X/libpq-notify.md b/docs/en/libpq-notify.md
similarity index 100%
rename from docs/X/libpq-notify.md
rename to docs/en/libpq-notify.md
diff --git a/docs/en/libpq-notify.zh.md b/docs/en/libpq-notify.zh.md
new file mode 100644
index 0000000000000000000000000000000000000000..1b49314fdfd5cd2fad59587868b65f7873a843f6
--- /dev/null
+++ b/docs/en/libpq-notify.zh.md
@@ -0,0 +1,28 @@
+## 34.9. Asynchronous Notification
+
+[](<>)
+
+PostgreSQL offers asynchronous notification via the`LISTEN`and`NOTIFY`commands. A client session registers its interest in a particular notification channel with the`LISTEN`command (and can stop listening with the`UNLISTEN`command). All sessions listening on a particular channel will be notified asynchronously when a`NOTIFY`command with that channel name is executed by any session. A “payload” string can be passed to communicate additional data to the listeners.
+
+libpq applications submit`LISTEN`,`UNLISTEN`, and`NOTIFY`commands as ordinary SQL commands. The arrival of`NOTIFY`messages can subsequently be detected by calling`PQnotifies`.[](<>)
+
+The function`PQnotifies`returns the next notification from a list of unhandled notification messages received from the server. It returns a null pointer if there are no pending notifications. Once a notification is returned from`PQnotifies`, it is considered handled and will be removed from the list of notifications.
+
+```
+PGnotify *PQnotifies(PGconn *conn);
+
+typedef struct pgNotify
+{
+    char *relname;              /* notification channel name */
+    int  be_pid;                /* process ID of notifying server process */
+    char *extra;                /* notification payload string */
+} PGnotify;
+```
+
+After processing a`PGnotify`object returned by`PQnotifies`, be sure to free it with[`PQfreemem`](libpq-misc.html#LIBPQ-PQFREEMEM). It is sufficient to free the`PGnotify`pointer; the`relname`and`extra`fields do not represent separate allocations. (The names of these fields are historical; in particular, channel names need not have anything to do with relation names.)
+
+[Example 34.2](libpq-example.html#LIBPQ-EXAMPLE-2)gives a sample program that illustrates the use of asynchronous notification.
+
+`PQnotifies`does not actually read data from the server; it just returns messages previously absorbed by another libpq function. In ancient releases of libpq, the only way to ensure timely receipt of`NOTIFY`messages was to constantly submit commands, even empty ones, and then check`PQnotifies`after each[`PQexec`](libpq-exec.html#LIBPQ-PQEXEC). While this still works, it is deprecated as a waste of processing power.
+
+A better way to check for`NOTIFY`messages when you have no useful commands to execute is to call[`PQconsumeInput` ](libpq-async.html#LIBPQ-PQCONSUMEINPUT), then check`PQnotifies`. You can use`select()`to wait for data to arrive from the server, thereby using no CPU power unless there is something to do. (See[`PQsocket`](libpq-status.html#LIBPQ-PQSOCKET)to obtain the file descriptor number to use with`select()`.) Note that this will work OK whether you submit commands with[`PQsendQuery`](libpq-async.html#LIBPQ-PQSENDQUERY)/[`PQgetResult`](libpq-async.html#LIBPQ-PQGETRESULT)or simply use[`PQexec`](libpq-exec.html#LIBPQ-PQEXEC). You should, however, remember to check`PQnotifies`after each[`PQgetResult`](libpq-async.html#LIBPQ-PQGETRESULT)or[`PQexec`](libpq-exec.html#LIBPQ-PQEXEC), to see if any notifications came in during the processing of the command.
diff --git a/docs/X/libpq-pgpass.md b/docs/en/libpq-pgpass.md
similarity index 100%
rename from docs/X/libpq-pgpass.md
rename to docs/en/libpq-pgpass.md
diff --git a/docs/en/libpq-pgpass.zh.md b/docs/en/libpq-pgpass.zh.md
new file mode 100644
index 0000000000000000000000000000000000000000..cb44e6fb6bddb8100340c52b4f1557c8b4221a6e
--- /dev/null
+++ b/docs/en/libpq-pgpass.zh.md
@@ -0,0 +1,15 @@
+## 34.16. The Password File
+
+[](<>)[](<>)
+
+The file`.pgpass`in a user's home directory can contain passwords to be used if the connection requires a password (and no password has been specified otherwise). On Microsoft Windows the file is named`%APPDATA%\postgresql\pgpass.conf`(where`%APPDATA%`refers to the Application Data subdirectory in the user's profile). Alternatively, a password file can be specified using the connection parameter[passfile](libpq-connect.html#LIBPQ-CONNECT-PASSFILE)or the environment variable`PGPASSFILE`.
+
+This file should contain lines of the following format:
+
+```
+hostname:port:database:username:password
+```
+
+(You can add a reminder comment to the file by copying the line above and preceding it with`#`.) Each of the first four fields can be a literal value, or`*`, which matches anything. The password field from the first line that matches the current connection parameters will be used. (Therefore, put more-specific entries first when you are using wildcards.) If an entry needs to contain`:`or`\`, escape this character with`\`. The host name field is matched to the`host`connection parameter if that is specified, otherwise to the`hostaddr`parameter if that is specified; if neither are given, then the host name`localhost`is searched for. The host name`localhost`is also searched for when the connection is a Unix-domain socket connection and the`host`parameter matches libpq's default socket directory path. In a standby server, a database field of`replication`matches streaming replication connections made to the primary server. Otherwise, the database field is of limited usefulness, because users have the same password for all databases in the same cluster.
+
+On Unix systems, the permissions on a password file must disallow any access to world or group; achieve this by a command such as`chmod 0600 ~/.pgpass`. If the permissions are less strict than this, the file will be ignored. On Microsoft Windows, it is assumed that the file is stored in a directory that is secure, so no special permissions check is made.
diff --git a/docs/X/libpq-pipeline-mode.md b/docs/en/libpq-pipeline-mode.md
similarity index 100%
rename from docs/X/libpq-pipeline-mode.md
rename to docs/en/libpq-pipeline-mode.md
diff --git a/docs/en/libpq-pipeline-mode.zh.md b/docs/en/libpq-pipeline-mode.zh.md
new file mode 100644
index 0000000000000000000000000000000000000000..75b1787c618ecee6f4e0424d3649c9787fbc1206
--- /dev/null
+++ b/docs/en/libpq-pipeline-mode.zh.md
@@ -0,0 +1,160 @@
+## 34.5. Pipeline Mode
+
+[34.5.1. Using Pipeline Mode](libpq-pipeline-mode.html#LIBPQ-PIPELINE-USING)
+
+[34.5.2. Functions Associated with Pipeline Mode](libpq-pipeline-mode.html#LIBPQ-PIPELINE-FUNCTIONS)
+
+[34.5.3. When to Use Pipeline Mode](libpq-pipeline-mode.html#LIBPQ-PIPELINE-TIPS)
+
+[](<>)[](<>)[](<>)
+
+libpq pipeline mode allows applications to send a query without having to read the result of the previously sent query. Taking advantage of the pipeline mode, a client will wait less for the server, since multiple queries/results can be sent/received in a single network transaction.
+
+While pipeline mode provides a significant performance boost, writing clients using the pipeline mode is more complex because it involves managing a queue of pending queries and finding which result corresponds to which query in the queue.
+
+Pipeline mode also generally consumes more memory on both the client and server, though careful and aggressive management of the send/receive queue can mitigate this. This applies whether or not the connection is in blocking or non-blocking mode.
+
+While the pipeline API was introduced in PostgreSQL 14, it is a client-side feature which doesn't require special server support and works on any server that supports the v3 extended query protocol.
+
+### 34.5.1. Using Pipeline Mode
+
+To issue pipelines, the application must switch the connection into pipeline mode, which is done with[`PQenterPipelineMode`](libpq-pipeline-mode.html#LIBPQ-PQENTERPIPELINEMODE).[`PQpipelineStatus`](libpq-pipeline-mode.html#LIBPQ-PQPIPELINESTATUS)can be used to test whether pipeline mode is active.
+In pipeline mode, only[asynchronous operations](libpq-async.html)are permitted, command strings containing multiple SQL commands are disallowed, and so is`COPY`. Using synchronous command execution functions such as`PQfn`,`PQexec`,`PQexecParams`,`PQprepare`,`PQexecPrepared`,`PQdescribePrepared`,`PQdescribePortal`, is an error condition. Once all dispatched commands have had their results processed, and the end pipeline result has been consumed, the application may return to non-pipelined mode with[`PQexitPipelineMode`](libpq-pipeline-mode.html#LIBPQ-PQEXITPIPELINEMODE).
+
+### Note
+
+It is best to use pipeline mode with libpq in[non-blocking mode](libpq-async.html#LIBPQ-PQSETNONBLOCKING). If used in blocking mode it is possible for a client/server deadlock to occur.[\[15\]](#ftn.id-1.7.3.12.9.3.1.3)
+
+#### 34.5.1.1. Issuing Queries
+
+After entering pipeline mode, the application dispatches requests using[`PQsendQuery`](libpq-async.html#LIBPQ-PQSENDQUERY),[`PQsendQueryParams`](libpq-async.html#LIBPQ-PQSENDQUERYPARAMS), or its prepared-query sibling[`PQsendQueryPrepared`](libpq-async.html#LIBPQ-PQSENDQUERYPREPARED). These requests are queued on the client side until flushed to the server; this occurs when[`PQpipelineSync`](libpq-pipeline-mode.html#LIBPQ-PQPIPELINESYNC)is used to establish a synchronization point in the pipeline, or when[`PQflush`](libpq-async.html#LIBPQ-PQFLUSH)is called. The functions[`PQsendPrepare`](libpq-async.html#LIBPQ-PQSENDPREPARE),[`PQsendDescribePrepared`](libpq-async.html#LIBPQ-PQSENDDESCRIBEPREPARED), and[`PQsendDescribePortal`](libpq-async.html#LIBPQ-PQSENDDESCRIBEPORTAL)also work in pipeline mode. Result processing is described below.
+
+The server executes statements, and returns results, in the order the client sends them. The server will begin executing the commands in the pipeline immediately, not waiting for the end of the pipeline. Note that results are buffered on the server side; the server flushes that buffer when a synchronization point is established with`PQpipelineSync`, or when`PQsendFlushRequest`is called. If any statement encounters an error, the server aborts the current transaction and does not execute any subsequent command in the queue until the next synchronization point; a`PGRES_PIPELINE_ABORTED`result is produced for each such command. (This remains true even if the commands in the pipeline would roll back the transaction.) Query processing resumes after the synchronization point.
+
+It's fine for one operation to depend on the results of a prior one; for example, one query may define a table that the next query in the same pipeline uses. Similarly, an application may create a named prepared statement and execute it with later statements in the same pipeline.
+
+#### 34.5.1.2. Processing Results
+
+To process the result of one query in a pipeline, the application calls`PQgetResult`repeatedly and handles each result until`PQgetResult`returns null. The result from the next query in the pipeline may then be retrieved using`PQgetResult`again and the cycle repeated. The application handles individual statement results as normal. When the results of all the queries in the pipeline have been returned,`PQgetResult`returns a result containing the status value`PGRES_PIPELINE_SYNC`.
+
+The client may choose to defer result processing until the complete pipeline has been sent, or interleave that with sending further queries in the pipeline; see[Section 34.5.1.4](libpq-pipeline-mode.html#LIBPQ-PIPELINE-INTERLEAVE).
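+
+As a concrete illustration of this send-then-collect flow, the following minimal sketch (not taken from the official examples) dispatches two queries in one pipeline and reads the results back in order. The table names are placeholders, `conn` is assumed to be an established connection, and error handling is abbreviated.
+
+```
+/* Sketch: two queries in one pipeline, results read back in send order. */
+if (PQenterPipelineMode(conn) != 1)
+    fprintf(stderr, "could not enter pipeline mode\n");
+
+PQsendQueryParams(conn, "SELECT count(*) FROM t1", 0, NULL, NULL, NULL, NULL, 0);
+PQsendQueryParams(conn, "SELECT count(*) FROM t2", 0, NULL, NULL, NULL, NULL, 0);
+PQpipelineSync(conn);           /* flush the queue and mark a synchronization point */
+
+for (int i = 0; i < 2; i++)
+{
+    PGresult *res;
+
+    /* each query yields one or more results, terminated by a null return */
+    while ((res = PQgetResult(conn)) != NULL)
+    {
+        if (PQresultStatus(res) == PGRES_TUPLES_OK)
+            printf("count: %s\n", PQgetvalue(res, 0, 0));
+        PQclear(res);
+    }
+}
+
+/* finally, consume the result corresponding to the sync point itself */
+PGresult *sync_res = PQgetResult(conn);
+if (PQresultStatus(sync_res) != PGRES_PIPELINE_SYNC)
+    fprintf(stderr, "unexpected status: %s\n",
+            PQresStatus(PQresultStatus(sync_res)));
+PQclear(sync_res);
+
+PQexitPipelineMode(conn);
+```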
+
+To enter single-row mode, call`PQsetSingleRowMode`before retrieving results with`PQgetResult`. This mode selection is effective only for the query currently being processed. For more information on the use of`PQsetSingleRowMode`, refer to[Section 34.6](libpq-single-row-mode.html).
+
+`PQgetResult`behaves the same as for normal asynchronous processing except that it may contain the new`PGresult`types`PGRES_PIPELINE_SYNC`and`PGRES_PIPELINE_ABORTED`.`PGRES_PIPELINE_SYNC`is reported exactly once for each`PQpipelineSync`at the corresponding point in the pipeline.`PGRES_PIPELINE_ABORTED`is emitted in place of a normal query result for the first error and all subsequent results until the next`PGRES_PIPELINE_SYNC`; see[Section 34.5.1.3](libpq-pipeline-mode.html#LIBPQ-PIPELINE-ERRORS).
+
+`PQisBusy`,`PQconsumeInput`, etc operate as normal when processing pipeline results. In particular, a call to`PQisBusy`in the middle of a pipeline returns 0 if the results for all the queries issued so far have been consumed.
+
+libpq does not provide any information to the application about the query currently being processed (except that`PQgetResult`returns null to indicate that we start returning the results of the next query). The application must keep track of the order in which it sent queries, to associate them with their corresponding results. Applications will typically use a state machine or a FIFO queue for this.
+
+#### 34.5.1.3. Error Handling
+
+From the client's perspective, after`PQresultStatus`returns`PGRES_FATAL_ERROR`, the pipeline is flagged as aborted.`PQresultStatus`will report a`PGRES_PIPELINE_ABORTED`result for each remaining queued operation in an aborted pipeline. The result for`PQpipelineSync`is reported as`PGRES_PIPELINE_SYNC`to signal the end of the aborted pipeline and resumption of normal result processing.
+
+The client*must*process results with`PQgetResult`during error recovery.
+
+If the pipeline used an implicit transaction, then operations that have already executed are rolled back and operations that were queued to follow the failed operation are skipped entirely. The same behavior holds if the pipeline starts and commits a single explicit transaction (i.e., the first statement is`BEGIN`and the last is`COMMIT`) except that the session remains in an aborted transaction state at the end of the pipeline. If a pipeline contains*multiple explicit transactions*, all transactions that committed prior to the error remain committed, the currently in-progress transaction is aborted, and all subsequent operations are skipped completely, including subsequent transactions. If a pipeline synchronization point occurs with an explicit transaction block in aborted state, the next pipeline will become aborted immediately unless the next command puts the transaction in normal mode with`ROLLBACK`.
+
+### Note
+
+The client must not assume that work is committed when it*sends*a`COMMIT`, but only when the corresponding result is received to confirm the commit is complete. Because errors arrive asynchronously, the application needs to be able to restart from the last*received*committed change and resend work done after that point if something goes wrong.
+
+#### 34.5.1.4. Interleaving Result Processing and Query Dispatch
+
+To avoid deadlocks on large pipelines the client should be structured around a non-blocking event loop using operating system facilities such as`select`,`poll`,`WaitForMultipleObjectEx`, etc.
+
+The client application should generally maintain a queue of work remaining to be dispatched and a queue of work that has been dispatched but not yet had its results processed. When the socket is writable it should dispatch more work. When the socket is readable it should read results and process them, matching them up to the next entry in its corresponding results queue. Based on available memory, results from the socket should be read frequently: there's no need to wait until the pipeline end to read the results. Pipelines should be scoped to logical units of work, usually (but not necessarily) one transaction per pipeline. There's no need to exit pipeline mode and re-enter it between pipelines, or to wait for one pipeline to finish before sending the next.
+
+An example using`select()`and a simple state machine to track sent and received work is in`src/test/modules/libpq_pipeline/libpq_pipeline.c`in the PostgreSQL source distribution.
+
+### 34.5.2. Functions Associated with Pipeline Mode
+
+`PQpipelineStatus`[](<>)
+
+Returns the current pipeline mode status of the libpq connection.
+
+```
+PGpipelineStatus PQpipelineStatus(const PGconn *conn);
+```
+
+`PQpipelineStatus`can return one of the following values:
+
+`PQ_PIPELINE_ON`
+
+The libpq connection is in pipeline mode.
+
+`PQ_PIPELINE_OFF`
+
+The libpq connection is*not*in pipeline mode.
+
+`PQ_PIPELINE_ABORTED`
+
+The libpq connection is in pipeline mode and an error occurred while processing the current pipeline. The aborted flag is cleared when`PQgetResult`returns a result of type`PGRES_PIPELINE_SYNC`.
+
+`PQenterPipelineMode`[](<>)
+
+Causes a connection to enter pipeline mode if it is currently idle or already in pipeline mode.
+
+```
+int PQenterPipelineMode(PGconn *conn);
+```
+
+Returns 1 for success. Returns 0 and has no effect if the connection is not currently idle, i.e., it has a result ready, or it is waiting for more input from the server, etc. This function does not actually send anything to the server, it just changes the libpq connection state.
+
+`PQexitPipelineMode`[](<>)
+
+Causes a connection to exit pipeline mode if it is currently in pipeline mode with an empty queue and no pending results.
+
+```
+int PQexitPipelineMode(PGconn *conn);
+```
+
+Returns 1 for success. Returns 1 and takes no action if not in pipeline mode. If the current statement isn't finished processing, or`PQgetResult`has not been called to collect results from all previously sent queries, returns 0 (in which case, use[`PQerrorMessage`](libpq-status.html#LIBPQ-PQERRORMESSAGE)to get more information about the failure).
+
+`PQpipelineSync`[](<>)
+
+Marks a synchronization point in a pipeline by sending a[sync message](protocol-flow.html#PROTOCOL-FLOW-EXT-QUERY)and flushing the send buffer. This serves as the delimiter of an implicit transaction and an error recovery point; see[Section 34.5.1.3](libpq-pipeline-mode.html#LIBPQ-PIPELINE-ERRORS).
+
+```
+int PQpipelineSync(PGconn *conn);
+```
+
+Returns 1 for success. Returns 0 if the connection is not in pipeline mode or sending a[sync message](protocol-flow.html#PROTOCOL-FLOW-EXT-QUERY)failed.
+
+`PQsendFlushRequest`[](<>)
+
+Sends a request for the server to flush its output buffer.
+
+```
+int PQsendFlushRequest(PGconn *conn);
+```
+
+Returns 1 for success. Returns 0 on any failure.
+
+The server flushes its output buffer automatically as a result of`PQpipelineSync`being called, or on any request when not in pipeline mode; this function is useful to cause the server to flush its output buffer in pipeline mode without establishing a synchronization point. Note that the request is not itself flushed to the server automatically; use`PQflush`if necessary.
+
+### 34.5.3. When to Use Pipeline Mode
+
+Much like asynchronous query mode, there is no meaningful performance overhead when using pipeline mode. It increases client application complexity, and extra caution is required to prevent client/server deadlocks, but pipeline mode can offer considerable performance improvements, in exchange for increased memory usage from leaving state around longer.
+
+Pipeline mode is most useful when the server is distant, i.e., network latency (“ping time”) is high, and also when many small operations are being performed in rapid succession. There is usually less benefit in using pipelined commands when each query takes many multiples of the client/server round-trip time to execute.
A 100-statement operation run on a server 300 ms round-trip-time away would take 30 seconds in network latency alone without pipelining; with pipelining it may spend as little as 0.3 s waiting for results from the server. + +Use pipelined commands when your application does lots of small`INSERT`,`UPDATE`and`DELETE`operations that can't easily be transformed into operations on sets, or into a`COPY`operation. + +Pipeline mode is not useful when information from one operation is required by the client to produce the next operation. In such cases, the client would have to introduce a synchronization point and wait for a full client/server round-trip to get the results it needs. However, it's often possible to adjust the client design to exchange the required information server-side. Read-modify-write cycles are especially good candidates; for example: + +``` +BEGIN; +SELECT x FROM mytable WHERE id = 42 FOR UPDATE; +-- result: x=2 +-- client adds 1 to x: +UPDATE mytable SET x = 3 WHERE id = 42; +COMMIT; +``` + +could be much more efficiently done with: + +``` +UPDATE mytable SET x = x + 1 WHERE id = 42; +``` + +Pipelining is less useful, and more complex, when a single pipeline contains multiple transactions (see[Section 34.5.1.3](libpq-pipeline-mode.html#LIBPQ-PIPELINE-ERRORS)). diff --git a/docs/X/libpq-single-row-mode.md b/docs/en/libpq-single-row-mode.md similarity index 100% rename from docs/X/libpq-single-row-mode.md rename to docs/en/libpq-single-row-mode.md diff --git a/docs/en/libpq-single-row-mode.zh.md b/docs/en/libpq-single-row-mode.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..c3cc6d20e408e37da0aa794f10ec4a3e7bb85698 --- /dev/null +++ b/docs/en/libpq-single-row-mode.zh.md @@ -0,0 +1,23 @@ +## 34.6. Retrieving Query Results Row-by-Row + +[](<>) + +Ordinarily, libpq collects an SQL command's entire result and returns it to the application as a single`PGresult`. This can be unworkable for commands that return a large number of rows. For such cases, applications can use[`PQsendQuery`](libpq-async.html#LIBPQ-PQSENDQUERY)and[`PQgetResult`](libpq-async.html#LIBPQ-PQGETRESULT)in*single-row mode*. In this mode, the result row(s) are returned to the application one at a time, as they are received from the server. + +To enter single-row mode, call[`PQsetSingleRowMode`](libpq-single-row-mode.html#LIBPQ-PQSETSINGLEROWMODE)immediately after a successful call of[`PQsendQuery`](libpq-async.html#LIBPQ-PQSENDQUERY)(or a sibling function). This mode selection is effective only for the currently executing query. Then call[`PQgetResult`](libpq-async.html#LIBPQ-PQGETRESULT)repeatedly, until it returns null, as documented in[Section 34.4](libpq-async.html). If the query returns any rows, they are returned as individual`PGresult`objects, which look like normal query results except for having status code`PGRES_SINGLE_TUPLE`instead of`PGRES_TUPLES_OK`. After the last row, or immediately if the query returns zero rows, a zero-row object with status`PGRES_TUPLES_OK`is returned; this is the signal that no more rows will arrive. (But note that it is still necessary to continue calling[`PQgetResult`](libpq-async.html#LIBPQ-PQGETRESULT)until it returns null.) All of these`PGresult`objects will contain the same row description data (column names, types, etc) that an ordinary`PGresult`object for the query would have. Each object should be freed with[`PQclear`](libpq-exec.html#LIBPQ-PQCLEAR)as usual. 
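+
+Hypothetically, a retrieval loop in this mode might look like the following sketch. The query and column layout are placeholders, and `conn` is assumed to be an established connection; a real application would add fuller error handling.
+
+```
+/* Sketch: stream a large result set one row at a time. */
+if (!PQsendQuery(conn, "SELECT id, payload FROM big_table"))
+    fprintf(stderr, "dispatch failed: %s", PQerrorMessage(conn));
+else if (!PQsetSingleRowMode(conn))
+    fprintf(stderr, "could not activate single-row mode\n");
+
+PGresult *res;
+while ((res = PQgetResult(conn)) != NULL)
+{
+    switch (PQresultStatus(res))
+    {
+        case PGRES_SINGLE_TUPLE:
+            /* exactly one row per PGresult in this mode */
+            printf("%s: %s\n", PQgetvalue(res, 0, 0), PQgetvalue(res, 0, 1));
+            break;
+        case PGRES_TUPLES_OK:
+            /* zero-row terminator: the query completed normally */
+            break;
+        default:
+            /* the query failed part way through */
+            fprintf(stderr, "%s", PQerrorMessage(conn));
+            break;
+    }
+    PQclear(res);
+}
+```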
+ +When using pipeline mode, single-row mode needs to be activated for each query in the pipeline before retrieving results for that query with`PQgetResult`. See[Section 34.5](libpq-pipeline-mode.html)for more information. + +`PQsetSingleRowMode`[](<>) + +Select single-row mode for the currently-executing query. + +``` +int PQsetSingleRowMode(PGconn *conn); +``` + +This function can only be called immediately after[`PQsendQuery`](libpq-async.html#LIBPQ-PQSENDQUERY)or one of its sibling functions, before any other operation on the connection such as[`PQconsumeInput` ](libpq-async.html#LIBPQ-PQCONSUMEINPUT)or[`PQgetResult`](libpq-async.html#LIBPQ-PQGETRESULT). If called at the correct time, the function activates single-row mode for the current query and returns 1. Otherwise the mode stays unchanged and the function returns 0. In any case, the mode reverts to normal after completion of the current query. + +### Caution + +While processing a query, the server may return some rows and then encounter an error, causing the query to be aborted. Ordinarily, libpq discards any such rows and reports only the error. But in single-row mode, those rows will have already been returned to the application. Hence, the application will see some`PGRES_SINGLE_TUPLE` `PGresult`objects followed by a`PGRES_FATAL_ERROR`object. For proper transactional behavior, the application must be designed to discard or undo whatever has been done with the previously-processed rows, if the query ultimately fails. diff --git a/docs/X/libpq-ssl.md b/docs/en/libpq-ssl.md similarity index 100% rename from docs/X/libpq-ssl.md rename to docs/en/libpq-ssl.md diff --git a/docs/en/libpq-ssl.zh.md b/docs/en/libpq-ssl.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..73b1e78572340843e3a575a603419d7694acddad --- /dev/null +++ b/docs/en/libpq-ssl.zh.md @@ -0,0 +1,127 @@ +## 34.19. SSL Support + +[34.19.1. Client Verification of Server Certificates](libpq-ssl.html#LIBQ-SSL-CERTIFICATES) + +[34.19.2. Client Certificates](libpq-ssl.html#LIBPQ-SSL-CLIENTCERT) + +[34.19.3. Protection Provided in Different Modes](libpq-ssl.html#LIBPQ-SSL-PROTECTION) + +[34.19.4. SSL Client File Usage](libpq-ssl.html#LIBPQ-SSL-FILEUSAGE) + +[34.19.5. SSL Library Initialization](libpq-ssl.html#LIBPQ-SSL-INITIALIZE) + +[](<>) + +PostgreSQL has native support for using SSL connections to encrypt client/server communications for increased security. See[Section 19.9](ssl-tcp.html)for details about the server-side SSL functionality. + +libpq reads the system-wide OpenSSL configuration file. By default, this file is named`openssl.cnf`and is located in the directory reported by`openssl version -d`. This default can be overridden by setting environment variable`OPENSSL_CONF`to the name of the desired configuration file. + +### 34.19.1. Client Verification of Server Certificates + +By default, PostgreSQL will not perform any verification of the server certificate. This means that it is possible to spoof the server identity (for example by modifying a DNS record or by taking over the server IP address) without the client knowing. In order to prevent spoofing, the client must be able to verify the server's identity via a chain of trust. A chain of trust is established by placing a root (self-signed) certificate authority (CA) certificate on one computer and a leaf certificate*signed*by the root certificate on another computer. 
+It is also possible to use an “intermediate” certificate which is signed by the root certificate and signs leaf certificates.
+
+To allow the client to verify the identity of the server, place a root certificate on the client and a leaf certificate signed by the root certificate on the server. To allow the server to verify the identity of the client, place a root certificate on the server and a leaf certificate signed by the root certificate on the client. One or more intermediate certificates (usually stored with the leaf certificate) can also be used to link the leaf certificate to the root certificate.
+
+Once a chain of trust has been established, there are two ways for the client to validate the leaf certificate sent by the server. If the parameter`sslmode`is set to`verify-ca`, libpq will verify that the server is trustworthy by checking the certificate chain up to the root certificate stored on the client. If`sslmode`is set to`verify-full`, libpq will*also*verify that the server host name matches the name stored in the server certificate. The SSL connection will fail if the server certificate cannot be verified.`verify-full`is recommended in most security-sensitive environments.
+
+In`verify-full`mode, the host name is matched against the certificate's Subject Alternative Name attribute(s), or against the Common Name attribute if no Subject Alternative Name of type`dNSName`is present. If the certificate's name attribute starts with an asterisk (`*`), the asterisk will be treated as a wildcard, which will match all characters*except*a dot (`.`). This means the certificate will not match subdomains. If the connection is made using an IP address instead of a host name, the IP address will be matched (without doing any DNS lookups).
+
+To allow server certificate verification, one or more root certificates must be placed in the file`~/.postgresql/root.crt`in the user's home directory. (On Microsoft Windows the file is named`%APPDATA%\postgresql\root.crt`.) Intermediate certificates should also be added to the file if they are needed to link the certificate chain sent by the server to the root certificates stored on the client.
+
+Certificate Revocation List (CRL) entries are also checked if the file`~/.postgresql/root.crl`exists (`%APPDATA%\postgresql\root.crl`on Microsoft Windows).
+
+The location of the root certificate file and the CRL can be changed by setting the connection parameters`sslrootcert`and`sslcrl`or the environment variables`PGSSLROOTCERT`and`PGSSLCRL`.`sslcrldir`or the environment variable`PGSSLCRLDIR`can also be used to specify a directory containing CRL files.
+
+### Note
+
+For backwards compatibility with earlier versions of PostgreSQL, if a root CA file exists, the behavior of`sslmode`=`require`will be the same as that of`verify-ca`, meaning the server certificate is validated against the CA. Relying on this behavior is discouraged, and applications that need certificate validation should always use`verify-ca`or`verify-full`.
+
+### 34.19.2. Client Certificates
+
+If the server attempts to verify the identity of the client by requesting the client's leaf certificate, libpq will send the certificates stored in file`~/.postgresql/postgresql.crt`in the user's home directory. The certificates must chain to the root certificate trusted by the server. A matching private key file`~/.postgresql/postgresql.key`must also be present. The private key file must not allow any access to world or group; achieve this by the command`chmod 0600 ~/.postgresql/postgresql.key`. On Microsoft Windows these files are named`%APPDATA%\postgresql\postgresql.crt`and`%APPDATA%\postgresql\postgresql.key`, and there is no special permissions check, since the directory is presumed secure. The location of the certificate and key files can be overridden by the connection parameters`sslcert`and`sslkey`, or by the environment variables`PGSSLCERT`and`PGSSLKEY`.
+
+The first certificate in`postgresql.crt`must be the client's certificate because it must match the client's private key. “Intermediate” certificates can optionally be appended to the file; doing so avoids requiring storage of intermediate certificates on the server ([ssl_ca_file](runtime-config-connection.html#GUC-SSL-CA-FILE)).
+
+The certificates and keys can be in PEM or ASN.1 DER format.
+
+The key can be stored in cleartext or encrypted with a passphrase using any algorithm supported by OpenSSL, like AES-128. If the key is stored encrypted, then the passphrase can be provided in the[sslpassword](libpq-connect.html#LIBPQ-CONNECT-SSLPASSWORD)connection option. If an encrypted key is supplied and the`sslpassword`option is absent or blank, a password will be prompted for interactively by OpenSSL with an`Enter PEM pass phrase:`prompt if a TTY is available. Applications can override the client certificate prompt and the handling of the`sslpassword`parameter by supplying their own key password callback; see[`PQsetSSLKeyPassHook_OpenSSL`](libpq-connect.html#LIBPQ-PQSETSSLKEYPASSHOOK-OPENSSL).
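+
+As a usage sketch (the host and file paths below are hypothetical, not defaults), a client certificate and key in non-standard locations can be supplied directly in the connection string, together with full verification of the server certificate:
+
+```
+/* Sketch: connect using a client certificate, key, and explicit root
+ * certificate, with full verification of the server's identity. */
+PGconn *conn = PQconnectdb(
+    "host=db.example.com dbname=mydb user=alice "
+    "sslmode=verify-full "
+    "sslrootcert=/srv/tls/root.crt "
+    "sslcert=/srv/tls/alice.crt "
+    "sslkey=/srv/tls/alice.key");
+
+if (PQstatus(conn) != CONNECTION_OK)
+    fprintf(stderr, "%s", PQerrorMessage(conn));
+PQfinish(conn);
+```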
+ +For instructions on creating certificates, see[Section 19.9.5](ssl-tcp.html#SSL-CERTIFICATE-CREATION). + +### 34.19.3. Protection Provided in Different Modes + +The different values for the`sslmode`parameter provide different levels of protection. SSL can provide protection against three types of attacks: + +Eavesdropping + +If a third party can examine the network traffic between the client and the server, it can read both connection information (including the user name and password) and the data that is passed. SSL uses encryption to prevent this. + +Man-in-the-middle (MITM) + +If a third party can modify the data while passing between the client and server, it can pretend to be the server and therefore see and modify data*even if it is encrypted*. The third party can then forward the connection information and data to the original server, making it impossible to detect this attack. Common vectors to do this include DNS poisoning and address hijacking, whereby the client is directed to a different server than intended. There are also several other attack methods that can accomplish this. SSL uses certificate verification to prevent this, by authenticating the server to the client. + +Impersonation + +If a third party can pretend to be an authorized client, it can simply access data it should not have access to. Typically this can happen through insecure password management. SSL uses client certificates to prevent this, by making sure that only holders of valid certificates can access the server. + +For a connection to be known SSL-secured, SSL usage must be configured on*both the client and the server*before the connection is made. If it is only configured on the server, the client may end up sending sensitive information (e.g., passwords) before it knows that the server requires high security. In libpq, secure connections can be ensured by setting the`sslmode`parameter to`verify-full`or`verify-ca`, and providing the system with a root certificate to verify against. This is analogous to using an`https`URL for encrypted web browsing. + +Once the server has been authenticated, the client can pass sensitive data. This means that up until this point, the client does not need to know if certificates will be used for authentication, making it safe to specify that only in the server configuration. + +All SSL options carry overhead in the form of encryption and key-exchange, so there is a trade-off that has to be made between performance and security.[Table 34.1](libpq-ssl.html#LIBPQ-SSL-SSLMODE-STATEMENTS)illustrates the risks the different`sslmode`values protect against, and what statement they make about security and overhead. + +**Table 34.1. SSL Mode Descriptions** + +| `sslmode` | Eavesdropping protection | MITM protection | Statement | +| --------- | ------------------------ | --------------- | --------- | +| `disable` | No | No | I don't care about security, and I don't want to pay the overhead of encryption. | +| `allow` | Maybe | No | I don't care about security, but I will pay the overhead of encryption if the server insists on it. | +| `prefer` | Maybe | No | I don't care about encryption, but I wish to pay the overhead of encryption if the server supports it. | +| `require` | Yes | No | I want my data to be encrypted, and I accept the overhead. I trust that the network will make sure I always connect to the server I want. 
+| `verify-ca` | Yes | Depends on CA policy | I want my data encrypted, and I accept the overhead. I want to be sure that I connect to a server that I trust. |
+| `verify-full` | Yes | Yes | I want my data encrypted, and I accept the overhead. I want to be sure that I connect to a server I trust, and that it's the one I specify. |
+
+The difference between`verify-ca`and`verify-full`depends on the policy of the root CA. If a public CA is used,`verify-ca`allows connections to a server that*somebody else*may have registered with the CA. In this case,`verify-full`should always be used. If a local CA is used, or even a self-signed certificate, using`verify-ca`often provides enough protection.
+
+The default value for`sslmode`is`prefer`. As is shown in the table, this makes no sense from a security point of view, and it only promises performance overhead if possible. It is only provided as the default for backward compatibility, and is not recommended in secure deployments.
+
+### 34.19.4. SSL Client File Usage
+
+[Table 34.2](libpq-ssl.html#LIBPQ-SSL-FILE-USAGE)summarizes the files that are relevant to the SSL setup on the client.
+
+**Table 34.2. Libpq/Client SSL File Usage**
+
+| File | Contents | Effect |
+| ---- | -------- | ------ |
+| `~/.postgresql/postgresql.crt` | client certificate | sent to server |
+| `~/.postgresql/postgresql.key` | client private key | proves client certificate sent by owner; does not indicate certificate owner is trustworthy |
+| `~/.postgresql/root.crt` | trusted certificate authorities | checks that server certificate is signed by a trusted certificate authority |
+| `~/.postgresql/root.crl` | certificates revoked by certificate authorities | server certificate must not be on this list |
+
+### 34.19.5. SSL Library Initialization
+
+If your application initializes`libssl`and/or`libcrypto`libraries and libpq is built with SSL support, you should call[`PQinitOpenSSL`](libpq-ssl.html#LIBPQ-PQINITOPENSSL)to tell libpq that the`libssl`and/or`libcrypto`libraries have been initialized by your application, so that libpq will not also initialize those libraries.
+
+`PQinitOpenSSL`[](<>)
+
+Allows applications to select which security libraries to initialize.
+
+```
+void PQinitOpenSSL(int do_ssl, int do_crypto);
+```
+
+When*`do_ssl`*is nonzero, libpq will initialize the OpenSSL library before first opening a database connection. When*`do_crypto`*is nonzero, the`libcrypto`library will be initialized. By default (if[`PQinitOpenSSL`](libpq-ssl.html#LIBPQ-PQINITOPENSSL)is not called), both libraries are initialized. When SSL support is not compiled in, this function is present but does nothing.
+
+If your application uses and initializes either OpenSSL or its underlying`libcrypto`library, you*must*call this function with zeroes for the appropriate parameter(s) before first opening a database connection. Also be sure that you have done that initialization before opening a database connection.
+
+`PQinitSSL`[](<>)
+
+Allows applications to select which security libraries to initialize.
+
+```
+void PQinitSSL(int do_ssl);
+```
+
+This function is equivalent to`PQinitOpenSSL(do_ssl, do_ssl)`. It is sufficient for applications that initialize both or neither of OpenSSL and`libcrypto`.
+
+[`PQinitSSL`](libpq-ssl.html#LIBPQ-PQINITSSL)has been present since PostgreSQL 8.0, while[`PQinitOpenSSL`](libpq-ssl.html#LIBPQ-PQINITOPENSSL)was added in PostgreSQL 8.4, so[`PQinitSSL`](libpq-ssl.html#LIBPQ-PQINITSSL)might be preferable for applications that need to work with older versions of libpq.
diff --git a/docs/X/libpq-status.md b/docs/en/libpq-status.md
similarity index 100%
rename from docs/X/libpq-status.md
rename to docs/en/libpq-status.md
diff --git a/docs/en/libpq-status.zh.md b/docs/en/libpq-status.zh.md
new file mode 100644
index 0000000000000000000000000000000000000000..6972c779a0b6e5810d7141eed4c181e5e970aabb
--- /dev/null
+++ b/docs/en/libpq-status.zh.md
@@ -0,0 +1,291 @@
+## 34.2. Connection Status Functions
+
+These functions can be used to interrogate the status of an existing database connection object.
+
+### Tip
+
+[](<>) [](<>)libpq application programmers should be careful to maintain the`PGconn`abstraction. Use the accessor functions described below to get at the contents of`PGconn`. Reference to internal`PGconn`fields using`libpq-int.h`is not recommended because they are subject to change in the future.
+
+The following functions return parameter values established at connection. These values are fixed for the life of the connection. If a multi-host connection string is used, the values of[`PQhost`](libpq-status.html#LIBPQ-PQHOST),[`PQport`](libpq-status.html#LIBPQ-PQPORT), and[`PQpass`](libpq-status.html#LIBPQ-PQPASS)can change if a new connection is established using the same`PGconn`object. Other values are fixed for the lifetime of the`PGconn`object.
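+
+For instance, with a multi-host connection string, an application might report which server it actually reached once the connection succeeds. The following sketch uses hypothetical host names; `PQhost`, `PQport`, and `PQuser` are described below.
+
+```
+/* Sketch: report the host and port actually connected to when several
+ * candidate hosts were given in the connection string. */
+PGconn *conn = PQconnectdb("host=db1.example.com,db2.example.com "
+                           "port=5432,5433 dbname=mydb");
+
+if (PQstatus(conn) == CONNECTION_OK)
+    printf("connected to %s:%s as %s\n",
+           PQhost(conn), PQport(conn), PQuser(conn));
+else
+    fprintf(stderr, "%s", PQerrorMessage(conn));
+PQfinish(conn);
+```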
+
+`PQdb`[](<>)
+
+Returns the database name of the connection.
+
+```
+char *PQdb(const PGconn *conn);
+```
+
+`PQuser`[](<>)
+
+Returns the user name of the connection.
+
+```
+char *PQuser(const PGconn *conn);
+```
+
+`PQpass`[](<>)
+
+Returns the password of the connection.
+
+```
+char *PQpass(const PGconn *conn);
+```
+
+[`PQpass`](libpq-status.html#LIBPQ-PQPASS)will return either the password specified in the connection parameters, or, if there was none and the password was obtained from the[password file](libpq-pgpass.html), it will return that. In the latter case, if multiple hosts were specified in the connection parameters, it is not possible to rely on the result of[`PQpass`](libpq-status.html#LIBPQ-PQPASS)until the connection is established. The status of the connection can be checked using the function[`PQstatus`](libpq-status.html#LIBPQ-PQSTATUS).
+
+`PQhost`[](<>)
+
+Returns the server host name of the active connection. This can be a host name, an IP address, or a directory path if the connection is via Unix socket. (The path case can be distinguished because it will always be an absolute path, beginning with`/`.)
+
+```
+char *PQhost(const PGconn *conn);
+```
+
+If the connection parameters specified both`host`and`hostaddr`, then[`PQhost`](libpq-status.html#LIBPQ-PQHOST)will return the`host`information. If only`hostaddr`was specified, then that is returned. If multiple hosts were specified in the connection parameters,[`PQhost`](libpq-status.html#LIBPQ-PQHOST)returns the host actually connected to.
+
+[`PQhost`](libpq-status.html#LIBPQ-PQHOST)returns`NULL`if the*`conn`*argument is`NULL`. Otherwise, if there is an error producing the host information (perhaps if the connection has not been fully established or there was an error), it returns an empty string.
+
+If multiple hosts were specified in the connection parameters, it is not possible to rely on the result of[`PQhost`](libpq-status.html#LIBPQ-PQHOST)until the connection is established. The status of the connection can be checked using the function[`PQstatus`](libpq-status.html#LIBPQ-PQSTATUS).
+
+`PQhostaddr`[](<>)
+
+Returns the server IP address of the active connection. This can be the address that a host name resolved to, or an IP address provided through the`hostaddr`parameter.
+
+```
+char *PQhostaddr(const PGconn *conn);
+```
+
+[`PQhostaddr`](libpq-status.html#LIBPQ-PQHOSTADDR)returns`NULL`if the*`conn`*argument is`NULL`. Otherwise, if there is an error producing the host information (perhaps if the connection has not been fully established or there was an error), it returns an empty string.
+
+`PQport`[](<>)
+
+Returns the port of the active connection.
+
+```
+char *PQport(const PGconn *conn);
+```
+
+If multiple ports were specified in the connection parameters,[`PQport`](libpq-status.html#LIBPQ-PQPORT)returns the port actually connected to.
+
+[`PQport`](libpq-status.html#LIBPQ-PQPORT)returns`NULL`if the*`conn`*argument is`NULL`. Otherwise, if there is an error producing the port information (perhaps if the connection has not been fully established or there was an error), it returns an empty string.
+
+If multiple ports were specified in the connection parameters, it is not possible to rely on the result of[`PQport`](libpq-status.html#LIBPQ-PQPORT)until the connection is established. The status of the connection can be checked using the function[`PQstatus`](libpq-status.html#LIBPQ-PQSTATUS).
+
+`PQtty`[](<>)
+
+This function no longer does anything, but it remains for backwards compatibility. The function always returns an empty string, or`NULL`if the*`conn`*argument is`NULL`.
+
+```
+char *PQtty(const PGconn *conn);
+```
+
+`PQoptions`[](<>)
+
+Returns the command-line options passed in the connection request.
+
+```
+char *PQoptions(const PGconn *conn);
+```
+
+The following functions return status data that can change as operations are executed on the`PGconn`object.
+
+`PQstatus`[](<>)
+
+Returns the status of the connection.
+
+```
+ConnStatusType PQstatus(const PGconn *conn);
+```
+
+The status can be one of a number of values. However, only two of these are seen outside of an asynchronous connection procedure:`CONNECTION_OK`and`CONNECTION_BAD`. A good connection to the database has the status`CONNECTION_OK`. A failed connection attempt is signaled by status`CONNECTION_BAD`. Ordinarily, an OK status will remain so until[`PQfinish`](libpq-connect.html#LIBPQ-PQFINISH), but a communications failure might result in the status changing to`CONNECTION_BAD`prematurely. In that case the application could try to recover by calling[`PQreset`](libpq-connect.html#LIBPQ-PQRESET).
+
+See the entry for[`PQconnectStartParams`](libpq-connect.html#LIBPQ-PQCONNECTSTARTPARAMS),`PQconnectStart`and`PQconnectPoll`with regards to other status codes that might be returned.
+
+`PQtransactionStatus`[](<>)
+
+Returns the current in-transaction status of the server.
+
+```
+PGTransactionStatusType PQtransactionStatus(const PGconn *conn);
+```
+
+The status can be`PQTRANS_IDLE`(currently idle),`PQTRANS_ACTIVE`(a command is in progress),`PQTRANS_INTRANS`(idle, in a valid transaction block), or`PQTRANS_INERROR`(idle, in a failed transaction block).`PQTRANS_UNKNOWN`is reported if the connection is bad.`PQTRANS_ACTIVE`is reported only when a query has been sent to the server and not yet completed.
+
+`PQparameterStatus`[](<>)
+
+Looks up a current parameter setting of the server.
+
+```
+const char *PQparameterStatus(const PGconn *conn, const char *paramName);
+```
+
+Certain parameter values are reported by the server automatically at connection startup or whenever their values change.[`PQparameterStatus`](libpq-status.html#LIBPQ-PQPARAMETERSTATUS)can be used to interrogate these settings. It returns the current value of a parameter if known, or`NULL`if the parameter is not known.
+
+Parameters reported as of the current release include`server_version`,`server_encoding`,`client_encoding`,`application_name`,`default_transaction_read_only`,`in_hot_standby`,`is_superuser`,`session_authorization`,`DateStyle`,`IntervalStyle`,`TimeZone`,`integer_datetimes`, and`standard_conforming_strings`. (`server_encoding`,`TimeZone`, and`integer_datetimes`were not reported by releases before 8.0;`standard_conforming_strings`was not reported by releases before 8.1;`IntervalStyle`was not reported by releases before 8.4;`application_name`was not reported by releases before 9.0;`default_transaction_read_only`and`in_hot_standby`were not reported by releases before 14.) Note that`server_version`,`server_encoding`and`integer_datetimes`cannot change after startup.
+
+If no value for`standard_conforming_strings`is reported, applications can assume it is`off`, that is, backslashes are treated as escapes in string literals. Also, the presence of this parameter can be taken as an indication that the escape string syntax (`E'...'`) is accepted.
+
+Although the returned pointer is declared`const`, it in fact points to mutable storage associated with the`PGconn`structure. It is unwise to assume the pointer will remain valid across queries.
+
+`PQprotocolVersion`[](<>)
+
+Interrogates the frontend/backend protocol being used.
+
+```
+int PQprotocolVersion(const PGconn *conn);
+```
+
+Applications might wish to use this function to determine whether certain features are supported. Currently, the possible values are 3 (3.0 protocol), or zero (connection bad). The protocol version will not change after connection startup is complete, but it could theoretically change during a connection reset. The 3.0 protocol is supported by PostgreSQL server versions 7.4 and above.
+
+`PQserverVersion`[](<>)
+
+Returns an integer representing the server version.
+
+```
+int PQserverVersion(const PGconn *conn);
+```
+
+Applications might use this function to determine the version of the database server they are connected to. The result is formed by multiplying the server's major version number by 10000 and adding the minor version number. For example, version 10.1 will be returned as 100001, and version 11.0 will be returned as 110000. Zero is returned if the connection is bad.
+
+Prior to major version 10, PostgreSQL used three-part version numbers in which the first two parts together represented the major version. For those versions,[`PQserverVersion`](libpq-status.html#LIBPQ-PQSERVERVERSION)uses two digits for each part; for example version 9.1.5 will be returned as 90105, and version 9.2.0 will be returned as 90200.
+
+Therefore, for purposes of determining feature compatibility, applications should divide the result of[`PQserverVersion`](libpq-status.html#LIBPQ-PQSERVERVERSION)by 100 not 10000 to determine a logical major version number. In all release series, only the last two digits differ between minor releases (bug-fix releases).
+
+`PQerrorMessage`[](<>)
+
+[](<>)Returns the error message most recently generated by an operation on the connection.
+
+```
+char *PQerrorMessage(const PGconn *conn);
+```
+
+Nearly all libpq functions will set a message for[`PQerrorMessage`](libpq-status.html#LIBPQ-PQERRORMESSAGE)if they fail. Note that by libpq convention, a nonempty[`PQerrorMessage`](libpq-status.html#LIBPQ-PQERRORMESSAGE)result can consist of multiple lines, and will include a trailing newline. The caller should not free the result directly. It will be freed when the associated`PGconn`handle is passed to[`PQfinish`](libpq-connect.html#LIBPQ-PQFINISH). The result string should not be expected to remain the same across operations on the`PGconn`structure.
+
+`PQsocket`[](<>)
+
+Obtains the file descriptor number of the connection socket to the server. A valid descriptor will be greater than or equal to 0; a result of -1 indicates that no server connection is currently open. (This will not change during normal operation, but could change during connection setup or reset.)
+
+```
+int PQsocket(const PGconn *conn);
+```
+
+`PQbackendPID`[](<>)
+
+Returns the process ID (PID)[](<>)of the backend process handling this connection.
+
+```
+int PQbackendPID(const PGconn *conn);
+```
+
+The backend PID is useful for debugging purposes and for comparison to`NOTIFY`messages (which include the PID of the notifying backend process). Note that the PID belongs to a process executing on the database server host, not the local host!
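+
+As a usage sketch of the status queries above, an application might gate behavior on the server version and a reported parameter at run time (the version threshold below is only an example, and `conn` is assumed to be an open connection):
+
+```
+/* Sketch: check server capabilities at run time on an open connection. */
+int sver = PQserverVersion(conn);
+
+if (sver == 0)
+    fprintf(stderr, "connection is bad: %s", PQerrorMessage(conn));
+else if (sver >= 140000)
+    printf("server version %d is 14 or later\n", sver);    /* e.g., 140005 */
+
+/* reported parameters can be read without a server round trip */
+const char *scs = PQparameterStatus(conn, "standard_conforming_strings");
+printf("standard_conforming_strings: %s\n", scs ? scs : "off (assumed)");
+```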
+
+`PQconnectionNeedsPassword`[](<>)
+
+Returns true (1) if the connection authentication method required a password, but none was available. Returns false (0) if not.
+
+```
+int PQconnectionNeedsPassword(const PGconn *conn);
+```
+
+This function can be applied after a failed connection attempt to decide whether to prompt the user for a password.
+
+`PQconnectionUsedPassword`[](<>)
+
+Returns true (1) if the connection authentication method used a password. Returns false (0) if not.
+
+```
+int PQconnectionUsedPassword(const PGconn *conn);
+```
+
+This function can be applied after either a failed or successful connection attempt to detect whether the server demanded a password.
+
+The following functions return information related to SSL. This information usually doesn't change after a connection is established.
+
+`PQsslInUse`[](<>)
+
+Returns true (1) if the connection uses SSL, false (0) if not.
+
+```
+int PQsslInUse(const PGconn *conn);
+```
+
+`PQsslAttribute`[](<>)
+
+Returns SSL-related information about the connection.
+
+```
+const char *PQsslAttribute(const PGconn *conn, const char *attribute_name);
+```
+
+The list of available attributes varies depending on the SSL library being used and the type of connection. If an attribute is not available, NULL is returned.
+
+The following attributes are commonly available:
+
+`library`
+
+Name of the SSL implementation in use. (Currently, only`"OpenSSL"`is implemented)
+
+`protocol`
+
+SSL/TLS version in use. Common values are`"TLSv1"`,`"TLSv1.1"`and`"TLSv1.2"`, but an implementation may return other strings if some other protocol is used.
+
+`key_bits`
+
+Number of key bits used by the encryption algorithm.
+
+`cipher`
+
+A short name of the ciphersuite used, e.g.,`"DHE-RSA-DES-CBC3-SHA"`. The names are specific to each SSL implementation.
+
+`compression`
+
+Returns the name of the compression algorithm in use if SSL compression is in use, or "on" if compression is used but the algorithm is not known. If compression is not in use, returns "off".
+
+`PQsslAttributeNames`[](<>)
+
+Returns an array of SSL attribute names available. The array is terminated by a NULL pointer.
+
+```
+const char * const * PQsslAttributeNames(const PGconn *conn);
+```
+
+`PQsslStruct`[](<>)
+
+Returns a pointer to an SSL-implementation-specific object describing the connection.
+
+```
+void *PQsslStruct(const PGconn *conn, const char *struct_name);
+```
+
+The structs available depend on the SSL implementation in use. For OpenSSL, there is one struct, available under the name "OpenSSL", and it returns a pointer to OpenSSL's`SSL`struct. To use this function, code along the following lines could be used:
+
+```
+#include <libpq-fe.h>
+#include <openssl/ssl.h>
+
+...
+
+    SSL *ssl;
+
+    dbconn = PQconnectdb(...);
+    ...
+
+    ssl = PQsslStruct(dbconn, "OpenSSL");
+    if (ssl)
+    {
+        /* use OpenSSL functions to access ssl */
+    }
+```
+
+This structure can be used to verify encryption levels, check server certificates, and more. Refer to the OpenSSL documentation for information about this structure.
+
+`PQgetssl`[](<>)
+
+[](<>)Returns the SSL structure used in the connection, or null if SSL is not in use.
+
+```
+void *PQgetssl(const PGconn *conn);
+```
+
+This function is equivalent to`PQsslStruct(conn, "OpenSSL")`. It should not be used in new applications, because the returned struct is specific to OpenSSL and will not be available if another SSL implementation is used. To check whether a connection uses SSL, call[`PQsslInUse`](libpq-status.html#LIBPQ-PQSSLINUSE)instead, and for more details about the connection, use[`PQsslAttribute`](libpq-status.html#LIBPQ-PQSSLATTRIBUTE).
diff --git a/docs/X/limits.md b/docs/en/limits.md
similarity index 100%
rename from docs/X/limits.md
rename to docs/en/limits.md
diff --git a/docs/en/limits.zh.md b/docs/en/limits.zh.md
new file mode 100644
index 0000000000000000000000000000000000000000..8f1f3c4ee54ced0d6de57ef28dab832a83333b87
--- /dev/null
+++ b/docs/en/limits.zh.md
@@ -0,0 +1,23 @@
+## Appendix K. PostgreSQL Limits
+
+[Table K.1](limits.html#LIMITS-TABLE)describes various hard limits of PostgreSQL. However, practical limits, such as performance limitations or available disk space, may apply before absolute hard limits are reached.
+
+**Table K.1. PostgreSQL Limitations**
+
+| Item | Upper Limit | Comment |
+| ---- | ----------- | ------- |
+| database size | unlimited | |
+| number of databases | 4,294,950,911 | |
+| relations per database | 1,431,650,303 | |
+| relation size | 32 TB | with the default`BLCKSZ`of 8192 bytes |
+| rows per table | limited by the number of tuples that can fit onto 4,294,967,295 pages | |
+| columns per table | 1600 | further limited by tuple size fitting on a single page; see note below |
+| field size | 1 GB | |
+| identifier length | 63 bytes | can be increased by recompiling PostgreSQL |
+| indexes per table | unlimited | constrained by maximum relations per database |
+| columns per index | 32 | can be increased by recompiling PostgreSQL |
+| partition keys | 32 | can be increased by recompiling PostgreSQL |
+
+The maximum number of columns for a table is further reduced as the tuple being stored must fit in a single 8192-byte heap page. For example, excluding the tuple header, a tuple made up of 1600`int`columns would consume 6400 bytes and could be stored in a heap page, but a tuple of 1600`bigint`columns would consume 12800 bytes and would therefore not fit inside a heap page. Variable-length fields of types such as`text`,`varchar`, and`char`can have their values stored out of line in the table's TOAST table when the values are large enough to require it; only an 18-byte pointer must remain inside the tuple in the table's heap. For shorter-length variable-length fields, either a 4-byte or 1-byte field header is used and the value is stored inside the heap tuple.
+
+Columns that have been dropped from the table also contribute to the maximum column limit. Moreover, although the dropped column values for newly created tuples are internally marked as null in the tuple's null bitmap, the null bitmap also occupies space.
diff --git a/docs/X/lo-examplesect.md b/docs/en/lo-examplesect.md
similarity index 100%
rename from docs/X/lo-examplesect.md
rename to docs/en/lo-examplesect.md
diff --git a/docs/en/lo-examplesect.zh.md b/docs/en/lo-examplesect.zh.md
new file mode 100644
index 0000000000000000000000000000000000000000..8215353ea37e28cf0301667421fb22f94bb30079
--- /dev/null
+++ b/docs/en/lo-examplesect.zh.md
@@ -0,0 +1,277 @@
+## 35.5. Example Program
+
+[Example 35.1](lo-examplesect.html#LO-EXAMPLE)is a sample program which shows how the large object interface in libpq can be used. Parts of the program are commented out but are left in the source for the reader's benefit. This program can also be found in`src/test/examples/testlo.c`in the source distribution.
+
+**Example 35.1. Large Objects with libpq Example Program**
+
+```
+/*-----------------------------------------------------------------
+ *
+ * testlo.c
+ *    test using large objects with libpq
+ *
+ * Portions Copyright (c) 1996-2021, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ *
+ * IDENTIFICATION
+ *    src/test/examples/testlo.c
+ *
+ *-----------------------------------------------------------------
+ */
+#include <stdio.h>
+#include <stdlib.h>
+
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <unistd.h>
+
+#include "libpq-fe.h"
+#include "libpq/libpq-fs.h"
+
+#define BUFSIZE         1024
+
+/*
+ * importFile -
+ *    import file "in_filename" into database as large object "lobjOid"
+ *
+ */
+static Oid
+importFile(PGconn *conn, char *filename)
+{
+    Oid         lobjId;
+    int         lobj_fd;
+    char        buf[BUFSIZE];
+    int         nbytes,
+                tmp;
+    int         fd;
+
+    /*
+     * open the file to be read in
+     */
+    fd = open(filename, O_RDONLY, 0666);
+    if (fd < 0)
+    {                           /* error */
+        fprintf(stderr, "cannot open unix file\"%s\"\n", filename);
+    }
+
+    /*
+     * create the large object
+     */
+    lobjId = lo_creat(conn, INV_READ | INV_WRITE);
+    if (lobjId == 0)
+        fprintf(stderr, "cannot create large object");
+
+    lobj_fd = lo_open(conn, lobjId, INV_WRITE);
+
+    /*
+     * read in from the Unix file and write to the inversion file
+     */
+    while ((nbytes = read(fd, buf, BUFSIZE)) > 0)
+    {
+        tmp = lo_write(conn, lobj_fd, buf, nbytes);
+        if (tmp < nbytes)
+            fprintf(stderr, "error while reading \"%s\"", filename);
+    }
+
+    close(fd);
+    lo_close(conn, lobj_fd);
+
+    return lobjId;
+}
+
+static void
+pickout(PGconn *conn, Oid lobjId, int start, int len)
+{
+    int         lobj_fd;
+    char       *buf;
+    int         nbytes;
+    int         nread;
+
+    lobj_fd = lo_open(conn, lobjId, INV_READ);
+    if (lobj_fd < 0)
+        fprintf(stderr, "cannot open large object %u",
lobjId); + + lo_lseek(conn, lobj_fd, start, SEEK_SET); + buf = malloc(len + 1); + + nread = 0; + while (len - nread > 0) + { + nbytes = lo_read(conn, lobj_fd, buf, len - nread); + buf[nbytes] = '\0'; + fprintf(stderr, ">>> %s", buf); + nread += nbytes; + if (nbytes <= 0) + break; /* no more data? */ + } + free(buf); + fprintf(stderr, "\n"); + lo_close(conn, lobj_fd); +} + +static void +overwrite(PGconn *conn, Oid lobjId, int start, int len) +{ + int lobj_fd; + char *buf; + int nbytes; + int nwritten; + int i; + + lobj_fd = lo_open(conn, lobjId, INV_WRITE); + if (lobj_fd < 0) + fprintf(stderr, "cannot open large object %u", lobjId); + + lo_lseek(conn, lobj_fd, start, SEEK_SET); + buf = malloc(len + 1); + + for (i = 0; i < len; i++) + buf[i] = 'X'; + buf[i] = '\0'; + + nwritten = 0; + while (len - nwritten > 0) + { + nbytes = lo_write(conn, lobj_fd, buf + nwritten, len - nwritten); + nwritten += nbytes; + if (nbytes <= 0) + { + fprintf(stderr, "\nWRITE FAILED!\n"); + break; + } + } + free(buf); + fprintf(stderr, "\n"); + lo_close(conn, lobj_fd); +} + +/* + * exportFile - + * export large object "lobjOid" to file "out_filename" + * + */ +static void +exportFile(PGconn *conn, Oid lobjId, char *filename) +{ + int lobj_fd; + char buf[BUFSIZE]; + int nbytes, + tmp; + int fd; + + /* + * open the large object + */ + lobj_fd = lo_open(conn, lobjId, INV_READ); + if (lobj_fd < 0) + fprintf(stderr, "cannot open large object %u", lobjId); + + /* + * open the file to be written to + */ + fd = open(filename, O_CREAT | O_WRONLY | O_TRUNC, 0666); + if (fd < 0) + { /* error */ + fprintf(stderr, "cannot open unix file\"%s\"", + filename); + } + + /* + * read in from the inversion file and write to the Unix file + */ + while ((nbytes = lo_read(conn, lobj_fd, buf, BUFSIZE)) > 0) + { + tmp = write(fd, buf, nbytes); + if (tmp < nbytes) + { + fprintf(stderr, "error while writing \"%s\"", + filename); + } + } + + lo_close(conn, lobj_fd); + close(fd); +} + +static void +exit_nicely(PGconn *conn) +{ + PQfinish(conn); + exit(1); +} + +int +main(int argc, char **argv) +{ + char *in_filename, + *out_filename; + char *database; + Oid lobjOid; + PGconn *conn; + PGresult *res; + + if (argc != 4) + { + fprintf(stderr, "Usage: %s database_name in_filename out_filename\n", + argv[0]); + exit(1); + } + + database = argv[1]; + in_filename = argv[2]; + out_filename = argv[3]; + + /* + * set up the connection + */ + conn = PQsetdb(NULL, NULL, NULL, NULL, database); + + /* check to see that the backend connection was successfully made */ + if (PQstatus(conn) != CONNECTION_OK) + { + fprintf(stderr, "%s", PQerrorMessage(conn)); + exit_nicely(conn); + } + + /* Set always-secure search path, so malicious users can't take control. 
*/ + res = PQexec(conn, + "SELECT pg_catalog.set_config('search_path', '', false)"); + if (PQresultStatus(res) != PGRES_TUPLES_OK) + { + fprintf(stderr, "SET failed: %s", PQerrorMessage(conn)); + PQclear(res); + exit_nicely(conn); + } + PQclear(res); + + res = PQexec(conn, "begin"); + PQclear(res); + printf("importing file \"%s\" ...\n", in_filename); +/* lobjOid = importFile(conn, in_filename); */ + lobjOid = lo_import(conn, in_filename); + if (lobjOid == 0) + fprintf(stderr, "%s\n", PQerrorMessage(conn)); + else + { + printf("\tas large object %u.\n", lobjOid); + + printf("picking out bytes 1000-2000 of the large object\n"); + pickout(conn, lobjOid, 1000, 1000); + + printf("overwriting bytes 1000-2000 of the large object with X's\n"); + overwrite(conn, lobjOid, 1000, 1000); + + printf("exporting large object to file \"%s\" ...\n", out_filename); +/* exportFile(conn, lobjOid, out_filename); */ + if (lo_export(conn, lobjOid, out_filename) < 0) + fprintf(stderr, "%s\n", PQerrorMessage(conn)); + } + + res = PQexec(conn, "end"); + PQclear(res); + PQfinish(conn); + return 0; +} +``` diff --git a/docs/X/lo-implementation.md b/docs/en/lo-implementation.md similarity index 100% rename from docs/X/lo-implementation.md rename to docs/en/lo-implementation.md diff --git a/docs/en/lo-implementation.zh.md b/docs/en/lo-implementation.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..b19c3b9f4d36730ff202c0abc933abfbb387480d --- /dev/null +++ b/docs/en/lo-implementation.zh.md @@ -0,0 +1,7 @@ +## 35.2. Implementation Features + +The large object implementation breaks large objects up into “chunks” and stores the chunks in rows in the database. A B-tree index guarantees fast searches for the correct chunk number when doing random access reads and writes. + +The chunks stored for a large object do not have to be contiguous. For example, if an application opens a new large object, seeks to offset 1000000, and writes a few bytes there, this does not result in allocation of 1000000 bytes worth of storage; only of chunks covering the range of data bytes actually written. A read operation will, however, read out zeroes for any unallocated locations preceding the last existing chunk. This corresponds to the common behavior of “sparsely allocated” files in Unix file systems. + +As of PostgreSQL 9.0, large objects have an owner and a set of access permissions, which can be managed using[GRANT](sql-grant.html)and[REVOKE](sql-revoke.html).`SELECT`privileges are required to read a large object, and`UPDATE`privileges are required to write or truncate it. Only the large object's owner (or a database superuser) can delete, comment on, or change the owner of a large object. To adjust this behavior for compatibility with prior releases, see the[lo_compat_privileges](runtime-config-compatible.html#GUC-LO-COMPAT-PRIVILEGES)run-time parameter. diff --git a/docs/X/lo-interfaces.md b/docs/en/lo-interfaces.md similarity index 100% rename from docs/X/lo-interfaces.md rename to docs/en/lo-interfaces.md diff --git a/docs/en/lo-interfaces.zh.md b/docs/en/lo-interfaces.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..24e219f1500450ff1a1088a0a587b3fc1ef0a720 --- /dev/null +++ b/docs/en/lo-interfaces.zh.md @@ -0,0 +1,227 @@ +## 35.3. Client Interfaces + +[35.3.1. Creating a Large Object](lo-interfaces.html#LO-CREATE) + +[35.3.2. Importing a Large Object](lo-interfaces.html#LO-IMPORT) + +[35.3.3. Exporting a Large Object](lo-interfaces.html#LO-EXPORT) + +[35.3.4. 
Opening an Existing Large Object](lo-interfaces.html#LO-OPEN)

[35.3.5. Writing Data to a Large Object](lo-interfaces.html#LO-WRITE)

[35.3.6. Reading Data from a Large Object](lo-interfaces.html#LO-READ)

[35.3.7. Seeking in a Large Object](lo-interfaces.html#LO-SEEK)

[35.3.8. Obtaining the Seek Position of a Large Object](lo-interfaces.html#LO-TELL)

[35.3.9. Truncating a Large Object](lo-interfaces.html#LO-TRUNCATE)

[35.3.10. Closing a Large Object Descriptor](lo-interfaces.html#LO-CLOSE)

[35.3.11. Removing a Large Object](lo-interfaces.html#LO-UNLINK)

This section describes the facilities that PostgreSQL's libpq client interface library provides for accessing large objects. The PostgreSQL large object interface is modeled after the Unix file-system interface, with analogues of `open`, `read`, `write`, `lseek`, etc.

All large object manipulation using these functions *must* take place within an SQL transaction block, since large object file descriptors are only valid for the duration of a transaction.

If an error occurs while executing any one of these functions, the function will return an otherwise-impossible value, typically 0 or -1. A message describing the error is stored in the connection object and can be retrieved with [`PQerrorMessage`](libpq-status.html#LIBPQ-PQERRORMESSAGE).

Client applications that use these functions should include the header file `libpq/libpq-fs.h` and link with the libpq library.

Client applications cannot use these functions while a libpq connection is in pipeline mode.

### 35.3.1. Creating a Large Object

[](<>)The function

```
Oid lo_creat(PGconn *conn, int mode);
```

creates a new large object. The return value is the OID that was assigned to the new large object, or `InvalidOid` (zero) on failure. *`mode`* is unused and ignored as of PostgreSQL 8.1; however, for backward compatibility with earlier releases it is best to set it to `INV_READ`, `INV_WRITE`, or `INV_READ` `|` `INV_WRITE`. (These symbolic constants are defined in the header file `libpq/libpq-fs.h`.)

An example:

```
inv_oid = lo_creat(conn, INV_READ|INV_WRITE);
```

[](<>)The function

```
Oid lo_create(PGconn *conn, Oid lobjId);
```

also creates a new large object. The OID to be assigned can be specified by *`lobjId`*; if so, failure occurs if that OID is already in use for some large object. If *`lobjId`* is `InvalidOid` (zero) then `lo_create` assigns an unused OID (this is the same behavior as `lo_creat`). The return value is the OID that was assigned to the new large object, or `InvalidOid` (zero) on failure.

`lo_create` is new as of PostgreSQL 8.1; if this function is run against an older server version, it will fail and return `InvalidOid`.

An example:

```
inv_oid = lo_create(conn, desired_oid);
```

### 35.3.2. Importing a Large Object

[](<>)To import an operating system file as a large object, call

```
Oid lo_import(PGconn *conn, const char *filename);
```

*`filename`* specifies the operating system name of the file to be imported as a large object. The return value is the OID that was assigned to the new large object, or `InvalidOid` (zero) on failure. Note that the file is read by the client interface library, not by the server; so it must exist in the client file system and be readable by the client application.

[](<>)The function

```
Oid lo_import_with_oid(PGconn *conn, const char *filename, Oid lobjId);
```

also imports a new large object. The OID to be assigned can be specified by *`lobjId`*; if so, failure occurs if that OID is already in use for some large object. If *`lobjId`* is `InvalidOid` (zero) then `lo_import_with_oid` assigns an unused OID (this is the same behavior as `lo_import`). The return value is the OID that was assigned to the new large object, or `InvalidOid` (zero) on failure.

`lo_import_with_oid` is new as of PostgreSQL 8.4 and uses `lo_create` internally, which is new in 8.1; if this function is run against 8.0 or before, it will fail and return `InvalidOid`.

### 35.3.3.
Exporting a Large Object

[](<>)To export a large object into an operating system file, call

```
int lo_export(PGconn *conn, Oid lobjId, const char *filename);
```

The *`lobjId`* argument specifies the OID of the large object to export and the *`filename`* argument specifies the operating system name of the file. Note that the file is written by the client interface library, not by the server. Returns 1 on success, -1 on failure.

### 35.3.4. Opening an Existing Large Object

[](<>)To open an existing large object for reading or writing, call

```
int lo_open(PGconn *conn, Oid lobjId, int mode);
```

The *`lobjId`* argument specifies the OID of the large object to open. The *`mode`* bits control whether the object is opened for reading (`INV_READ`), writing (`INV_WRITE`), or both. (These symbolic constants are defined in the header file `libpq/libpq-fs.h`.) `lo_open` returns a (non-negative) large object descriptor for later use in `lo_read`, `lo_write`, `lo_lseek`, `lo_lseek64`, `lo_tell`, `lo_tell64`, `lo_truncate`, `lo_truncate64`, and `lo_close`. The descriptor is only valid for the duration of the current transaction. On failure, -1 is returned.

The server currently does not distinguish between modes `INV_WRITE` and `INV_READ` `|` `INV_WRITE`: you are allowed to read from the descriptor in either case. However, there is a significant difference between these modes and `INV_READ` alone: with `INV_READ` you cannot write on the descriptor, and the data read from it will reflect the contents of the large object at the time of the transaction snapshot that was active when `lo_open` was executed, regardless of later writes by this or other transactions. Reading from a descriptor opened with `INV_WRITE` returns data that reflects all writes of other committed transactions as well as writes of the current transaction. This is similar to the behavior of `REPEATABLE READ` versus `READ COMMITTED` transaction modes for ordinary SQL `SELECT` commands.

`lo_open` will fail if `SELECT` privilege is not available for the large object, or if `INV_WRITE` is specified and `UPDATE` privilege is not available. (Prior to PostgreSQL 11, these privilege checks were instead performed at the first actual read or write call using the descriptor.) These privilege checks can be disabled with the [lo_compat_privileges](runtime-config-compatible.html#GUC-LO-COMPAT-PRIVILEGES) run-time parameter.

An example:

```
inv_fd = lo_open(conn, inv_oid, INV_READ|INV_WRITE);
```

### 35.3.5. Writing Data to a Large Object

[](<>)The function

```
int lo_write(PGconn *conn, int fd, const char *buf, size_t len);
```

writes *`len`* bytes from *`buf`* (which must be of size *`len`*) to large object descriptor *`fd`*. The *`fd`* argument must have been returned by a previous `lo_open`. The number of bytes actually written is returned (in the current implementation, this will always equal *`len`* unless there is an error). In the event of an error, the return value is -1.

Although the *`len`* parameter is declared as `size_t`, this function will reject length values larger than `INT_MAX`. In practice, it's best to transfer data in chunks of at most a few megabytes anyway.

### 35.3.6. Reading Data from a Large Object

[](<>)The function

```
int lo_read(PGconn *conn, int fd, char *buf, size_t len);
```

reads up to *`len`* bytes from large object descriptor *`fd`* into *`buf`* (which must be of size *`len`*). The *`fd`* argument must have been returned by a previous `lo_open`. The number of bytes actually read is returned; this will be less than *`len`* if the end of the large object is reached first. In the event of an error, the return value is -1.

Although the *`len`* parameter is declared as `size_t`, this function will reject length values larger than `INT_MAX`. In practice, it's best to transfer data in chunks of at most a few megabytes anyway.

### 35.3.7. Seeking in a Large Object

[](<>)To change the current read or write location associated with a large object descriptor, call

```
int lo_lseek(PGconn *conn, int fd, int offset, int whence);
```

This function moves the current location pointer for the large object descriptor identified by *`fd`* to the new location specified by *`offset`*. The valid values for *`whence`* are `SEEK_SET` (seek from object start), `SEEK_CUR` (seek from current position), and `SEEK_END` (seek from object end).
The return value is the new location pointer, or -1 on error.

[](<>)When dealing with large objects that might exceed 2GB in size, instead use

```
pg_int64 lo_lseek64(PGconn *conn, int fd, pg_int64 offset, int whence);
```

This function has the same behavior as `lo_lseek`, but it can accept an *`offset`* larger than 2GB and/or deliver a result larger than 2GB. Note that `lo_lseek` will fail if the new location pointer would be greater than 2GB.

`lo_lseek64` is new as of PostgreSQL 9.3. If this function is run against an older server version, it will fail and return -1.

### 35.3.8. Obtaining the Seek Position of a Large Object

[](<>)To obtain the current read or write location of a large object descriptor, call

```
int lo_tell(PGconn *conn, int fd);
```

If there is an error, the return value is -1.

[](<>)When dealing with large objects that might exceed 2GB in size, instead use

```
pg_int64 lo_tell64(PGconn *conn, int fd);
```

This function has the same behavior as `lo_tell`, but it can deliver a result larger than 2GB. Note that `lo_tell` will fail if the current read/write location is greater than 2GB.

`lo_tell64` is new as of PostgreSQL 9.3. If this function is run against an older server version, it will fail and return -1.

### 35.3.9. Truncating a Large Object

[](<>)To truncate a large object to a given length, call

```
int lo_truncate(PGconn *conn, int fd, size_t len);
```

This function truncates the large object descriptor *`fd`* to length *`len`*. The *`fd`* argument must have been returned by a previous `lo_open`. If *`len`* is greater than the large object's current length, the large object is extended to the specified length with null bytes ('\0'). On success, `lo_truncate` returns zero. On error, the return value is -1.

The read/write location associated with the descriptor *`fd`* is not changed.

Although the *`len`* parameter is declared as `size_t`, `lo_truncate` will reject length values larger than `INT_MAX`.

[](<>)When dealing with large objects that might exceed 2GB in size, instead use

```
int lo_truncate64(PGconn *conn, int fd, pg_int64 len);
```

This function has the same behavior as `lo_truncate`, but it can accept a *`len`* value exceeding 2GB.

`lo_truncate` is new as of PostgreSQL 8.3; if this function is run against an older server version, it will fail and return -1.

`lo_truncate64` is new as of PostgreSQL 9.3; if this function is run against an older server version, it will fail and return -1.

### 35.3.10. Closing a Large Object Descriptor

[](<>)A large object descriptor can be closed by calling

```
int lo_close(PGconn *conn, int fd);
```

where *`fd`* is a large object descriptor returned by `lo_open`. On success, `lo_close` returns zero. On error, the return value is -1.

Any large object descriptors that remain open at the end of a transaction will be closed automatically.

### 35.3.11. Removing a Large Object

[](<>)To remove a large object from the database, call

```
int lo_unlink(PGconn *conn, Oid lobjId);
```

The *`lobjId`* argument specifies the OID of the large object to remove. Returns 1 if successful, -1 on failure.

diff --git a/docs/X/lo.md b/docs/en/lo.md similarity index 100% rename from docs/X/lo.md rename to docs/en/lo.md diff --git a/docs/en/lo.zh.md b/docs/en/lo.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..74f15c81d9e50cf85daa74dc541ebca7e9ff0e16 --- /dev/null +++ b/docs/en/lo.zh.md @@ -0,0 +1,48 @@ +## F.20. lo

[F.20.1.
Rationale](lo.html#id-1.11.7.29.5) [F.20.2. How to Use It](lo.html#id-1.11.7.29.6) [F.20.3. Limitations](lo.html#id-1.11.7.29.7) [F.20.4. Author](lo.html#id-1.11.7.29.8)

[](<>)

The `lo` module provides support for managing Large Objects (also called LOs or BLOBs). This includes a data type `lo` and a trigger `lo_manage`.

This module is considered “trusted”, that is, it can be installed by non-superusers who have `CREATE` privilege on the current database.

### F.20.1. Rationale

One of the problems with the JDBC driver (and this affects the ODBC driver also) is that the specification assumes that references to BLOBs (Binary Large OBjects) are stored within a table, and if that entry is changed, the associated BLOB is deleted from the database.

As PostgreSQL stands, this doesn't occur. Large objects are treated as objects in their own right; a table entry can reference a large object by OID, but there can be multiple table entries referencing the same large object OID, so the system doesn't delete the large object just because you change or remove one such entry.

Now this is fine for PostgreSQL-specific applications, but standard code using JDBC or ODBC won't delete the objects, resulting in orphan objects — objects that are not referenced by anything, and simply occupy disk space.

The `lo` module allows fixing this by attaching a trigger to tables that contain LO reference columns. The trigger essentially just does a `lo_unlink` whenever you delete or modify a value referencing a large object. When you use this trigger, you are assuming that there is only one database reference to any large object that is referenced in a trigger-controlled column!

The module also provides a data type `lo`, which is really just a domain of the `oid` type. This is useful for differentiating database columns that hold large object references from those that are OIDs of other things. You don't have to use the `lo` type to use the trigger, but it may be convenient to use it to keep track of which columns in your database represent large objects that you are managing with the trigger. It is also rumored that the ODBC driver gets confused if you don't use `lo` for BLOB columns.

### F.20.2. How to Use It

Here's a simple example of usage:

```
CREATE TABLE image (title text, raster lo);

CREATE TRIGGER t_raster BEFORE UPDATE OR DELETE ON image
    FOR EACH ROW EXECUTE FUNCTION lo_manage(raster);
```

For each column that will contain unique references to large objects, create a `BEFORE UPDATE OR DELETE` trigger, and give the column name as the sole trigger argument. You can also restrict the trigger to only execute on updates to the column by using `BEFORE UPDATE OF` *`column_name`*. If you need multiple `lo` columns in the same table, create a separate trigger for each one, remembering to give a different name to each trigger on the same table.

### F.20.3. Limitations

- Dropping a table will still orphan any objects it contains, as the trigger is not executed. You can avoid this by preceding the `DROP TABLE` with `DELETE FROM` *`table`*.

  `TRUNCATE` has the same hazard.

  If you already have, or suspect you have, orphaned large objects, see the [vacuumlo](vacuumlo.html) module to help you clean them up. It's a good idea to run vacuumlo occasionally as a back-stop to the `lo_manage` trigger.

- Some frontends may create their own tables, and will not create the associated trigger(s). Also, users may not remember (or know) to create the triggers.

### F.20.4.
Author

Peter Mount `<[peter@retep.org.uk](mailto:peter@retep.org.uk)>`

diff --git a/docs/X/locale.md b/docs/en/locale.md similarity index 100% rename from docs/X/locale.md rename to docs/en/locale.md diff --git a/docs/en/locale.zh.md b/docs/en/locale.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..d7b40dac43f1fcbe2aa1283b19bc77fe679358dc --- /dev/null +++ b/docs/en/locale.zh.md @@ -0,0 +1,81 @@ +## 24.1. Locale Support

[24.1.1. Overview](locale.html#id-1.6.11.3.4)

[24.1.2. Behavior](locale.html#id-1.6.11.3.5)

[24.1.3. Problems](locale.html#id-1.6.11.3.6)

[](<>)

*Locale* support refers to an application respecting cultural preferences regarding alphabets, sorting, number formatting, etc. PostgreSQL uses the standard ISO C and POSIX locale facilities provided by the server operating system. For additional information refer to the documentation of your system.

### 24.1.1. Overview

Locale support is automatically initialized when a database cluster is created using `initdb`. `initdb` will initialize the database cluster with the locale setting of its execution environment by default, so if your system is already set to use the locale that you want in your database cluster then there is nothing else you need to do. If you want to use a different locale (or you are not sure which locale your system is set to), you can instruct `initdb` exactly which locale to use by specifying the `--locale` option. For example:

```
initdb --locale=sv_SE
```

This example for Unix systems sets the locale to Swedish (`sv`) as spoken in Sweden (`SE`). Other possibilities might include `en_US` (U.S. English) and `fr_CA` (French Canadian). If more than one character set can be used for a locale then the specifications can take the form *`language_territory.codeset`*. For example, `fr_BE.UTF-8` represents the French language (fr) as spoken in Belgium (BE), with a UTF-8 character set encoding.

What locales are available on your system under what names depends on what was provided by the operating system vendor and what was installed. On most Unix systems, the command `locale -a` will provide a list of available locales. Windows uses more verbose locale names, such as `German_Germany` or `Swedish_Sweden.1252`, but the principles are the same.

Occasionally it is useful to mix rules from several locales, e.g., use English collation rules but Spanish messages. To support that, a set of locale subcategories exist that control only certain aspects of the localization rules:

| `LC_COLLATE` | String sort order |
| ------------ | ----------------- |
| `LC_CTYPE` | Character classification (What is a letter? Its upper-case equivalent?) |
| `LC_MESSAGES` | Language of messages |
| `LC_MONETARY` | Formatting of currency amounts |
| `LC_NUMERIC` | Formatting of numbers |
| `LC_TIME` | Formatting of dates and times |

The category names translate into names of `initdb` options to override the locale choice for a specific category. For instance, to set the locale to French Canadian, but use U.S. rules for formatting currency, use `initdb --locale=fr_CA --lc-monetary=en_US`.

If you want the system to behave as if it had no locale support, use the special locale name `C`, or equivalently `POSIX`.

Some locale categories must have their values fixed when the database is created. You can use different settings for different databases, but once a database is created, you cannot change them for that database anymore. `LC_COLLATE` and `LC_CTYPE` are these categories. They affect the sort order of indexes, so they must be kept fixed, or indexes on text columns would become corrupt. (But you can alleviate this restriction using collations, as discussed in [Section 24.2](collation.html).)
The default values of these categories are determined when `initdb` is run, and those values are used when new databases are created, unless specified otherwise in the `CREATE DATABASE` command.

The other locale categories can be changed whenever desired by setting the server configuration parameters that have the same name as the locale categories (see [Section 20.11.2](runtime-config-client.html#RUNTIME-CONFIG-CLIENT-FORMAT) for details). The values that are chosen by `initdb` are actually only written into the configuration file `postgresql.conf` to serve as defaults when the server is started. If you delete these assignments from `postgresql.conf` then the server will inherit the settings from its execution environment.

Note that the locale behavior of the server is determined by the environment variables seen by the server, not by the environment of any client. Therefore, be careful to configure the correct locale settings before starting the server. A consequence of this is that if client and server are set up in different locales, messages might appear in different languages depending on where they originated.

### Note

When we speak of inheriting the locale from the execution environment, this means the following on most operating systems: For a given locale category, say the collation, the following environment variables are consulted in this order until one is found to be set: `LC_ALL`, `LC_COLLATE` (or the variable corresponding to the respective category), `LANG`. If none of these environment variables are set then the locale defaults to `C`.

Some message localization libraries also look at the environment variable `LANGUAGE` which overrides all other locale settings for the purpose of setting the language of messages. If in doubt, please refer to the documentation of your operating system, in particular the documentation about gettext.

To enable messages to be translated to the user's preferred language, NLS must have been selected at build time (`configure --enable-nls`). All other locale support is built in automatically.

### 24.1.2. Behavior

The locale settings influence the following SQL features:

- Sort order in queries using `ORDER BY` or the standard comparison operators on textual data [](<>)

- The `upper`, `lower`, and `initcap` functions [](<>) [](<>)

- The pattern matching operators (`LIKE`, `SIMILAR TO`, and POSIX-style regular expressions); locales affect both case-insensitive matching and the classification of characters by character-class regular expressions [](<>) [](<>)

- The `to_char` family of functions [](<>)

- The ability to use indexes with `LIKE` clauses

The drawback of using locales other than `C` or `POSIX` in PostgreSQL is its performance impact. It slows character handling and prevents ordinary indexes from being used by `LIKE`. For this reason use locales only if you actually need them.

As a workaround to allow PostgreSQL to use indexes with `LIKE` clauses under a non-C locale, several custom operator classes exist. These allow the creation of an index that performs a strict character-by-character comparison, ignoring locale comparison rules. Refer to [Section 11.10](indexes-opclass.html) for more information. Another approach is to create indexes using the `C` collation, as discussed in [Section 24.2](collation.html).

### 24.1.3. Problems

If locale support doesn't work according to the explanation above, check that the locale support in your operating system is correctly configured. To check what locales are installed on your system, you can use the command `locale -a` if your operating system provides it.

Check that PostgreSQL is actually using the locale that you think it is. The `LC_COLLATE` and `LC_CTYPE` settings are determined when a database is created, and cannot be changed except by creating a new database. Other locale settings including `LC_MESSAGES` and `LC_MONETARY` are initially determined by the environment the server is started in, but can be changed on-the-fly. You can check the active locale settings using the `SHOW` command.

The directory `src/test/locale` in the source distribution contains a test suite for PostgreSQL's locale support.

Client applications that handle server-side errors by parsing the text of the error message will obviously have problems when the server's messages are in a different language. Authors of such applications are advised to make use of the error code scheme instead.

Maintaining catalogs of message translations requires the on-going efforts of many volunteers that want to see PostgreSQL speak their preferred language well. If messages in your language are currently not available or not fully translated, your assistance would be appreciated. If you want to help, refer to [Chapter 55](nls.html) or write to the developers' mailing list.
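As a quick check along the lines described under Problems above, the active settings can be inspected directly with `SHOW`; this is only an illustrative sketch, and the values returned depend entirely on how the cluster was initialized:

```
SHOW lc_collate;
SHOW lc_ctype;
SHOW lc_messages;
```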
diff --git a/docs/X/logical-replication-architecture.md b/docs/en/logical-replication-architecture.md similarity index 100% rename from docs/X/logical-replication-architecture.md rename to docs/en/logical-replication-architecture.md diff --git a/docs/en/logical-replication-architecture.zh.md b/docs/en/logical-replication-architecture.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..245ec46e7ade9d6233af23fbc052d53713401bf8 --- /dev/null +++ b/docs/en/logical-replication-architecture.zh.md @@ -0,0 +1,15 @@ +## 31.5. Architecture + +[31.5.1. Initial Snapshot](logical-replication-architecture.html#LOGICAL-REPLICATION-SNAPSHOT) + +Logical replication starts by copying a snapshot of the data on the publisher database. Once that is done, changes on the publisher are sent to the subscriber as they occur in real time. The subscriber applies data in the order in which commits were made on the publisher so that transactional consistency is guaranteed for the publications within any single subscription. + +Logical replication is built with an architecture similar to physical streaming replication (see[Section 27.2.5](warm-standby.html#STREAMING-REPLICATION)). It is implemented by “walsender” and “apply” processes. The walsender process starts logical decoding (described in[Chapter 49](logicaldecoding.html)) of the WAL and loads the standard logical decoding plugin (pgoutput). The plugin transforms the changes read from WAL to the logical replication protocol (see[Section 53.5](protocol-logical-replication.html)) and filters the data according to the publication specification. The data is then continuously transferred using the streaming replication protocol to the apply worker, which maps the data to local tables and applies the individual changes as they are received, in correct transactional order. + +The apply process on the subscriber database always runs with`session_replication_role`set to`replica`, which produces the usual effects on triggers and constraints. + +The logical replication apply process currently only fires row triggers, not statement triggers. The initial table synchronization, however, is implemented like a`COPY`command and thus fires both row and statement triggers for`INSERT`. + +### 31.5.1. Initial Snapshot + +The initial data in existing subscribed tables are snapshotted and copied in a parallel instance of a special kind of apply process. This process will create its own replication slot and copy the existing data. As soon as the copy is finished the table contents will become visible to other backends. Once existing data is copied, the worker enters synchronization mode, which ensures that the table is brought up to a synchronized state with the main apply process by streaming any changes that happened during the initial data copy using standard logical replication. During this synchronization phase, the changes are applied and committed in the same order as they happened on the publisher. Once synchronization is done, control of the replication of the table is given back to the main apply process where replication continues as normal. 
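The processes described in this section can also be observed from SQL. The following is only a minimal sketch (the columns selected are an arbitrary subset): walsender processes appear on the publisher in the `pg_stat_replication` view, while the subscription workers appear on the subscriber in `pg_stat_subscription`:

```
-- On the publisher: one row per walsender process
SELECT application_name, state FROM pg_stat_replication;

-- On the subscriber: one row per subscription worker
SELECT subname, received_lsn FROM pg_stat_subscription;
```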
diff --git a/docs/X/logical-replication-config.md b/docs/en/logical-replication-config.md similarity index 100% rename from docs/X/logical-replication-config.md rename to docs/en/logical-replication-config.md diff --git a/docs/en/logical-replication-config.zh.md b/docs/en/logical-replication-config.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..d2bca7cc3020cb40d82f1c61f1316cb1943c26c1 --- /dev/null +++ b/docs/en/logical-replication-config.zh.md @@ -0,0 +1,7 @@ +## 31.8. Configuration Settings + +Logical replication requires several configuration options to be set. + +On the publisher side,`wal_level`must be set to`logical`, and`max_replication_slots`must be set to at least the number of subscriptions expected to connect, plus some reserve for table synchronization. And`max_wal_senders`should be set to at least the same as`max_replication_slots`plus the number of physical replicas that are connected at the same time. + +`max_replication_slots`must also be set on the subscriber. It should be set to at least the number of subscriptions that will be added to the subscriber, plus some reserve for table synchronization.`max_logical_replication_workers`must be set to at least the number of subscriptions, again plus some reserve for the table synchronization. Additionally the`max_worker_processes`may need to be adjusted to accommodate for replication workers, at least (`max_logical_replication_workers`+`1`). Note that some extensions and parallel queries also take worker slots from`max_worker_processes`. diff --git a/docs/X/logical-replication-conflicts.md b/docs/en/logical-replication-conflicts.md similarity index 100% rename from docs/X/logical-replication-conflicts.md rename to docs/en/logical-replication-conflicts.md diff --git a/docs/en/logical-replication-conflicts.zh.md b/docs/en/logical-replication-conflicts.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..935a81e966b46991c2b4b2fefeca73a26105a408 --- /dev/null +++ b/docs/en/logical-replication-conflicts.zh.md @@ -0,0 +1,7 @@ +## 31.3. Conflicts + +Logical replication behaves similarly to normal DML operations in that the data will be updated even if it was changed locally on the subscriber node. If incoming data violates any constraints the replication will stop. This is referred to as a*conflict*. When replicating`UPDATE`or`DELETE`operations, missing data will not produce a conflict and such operations will simply be skipped. + +A conflict will produce an error and will stop the replication; it must be resolved manually by the user. Details about the conflict can be found in the subscriber's server log. + +The resolution can be done either by changing data on the subscriber so that it does not conflict with the incoming change or by skipping the transaction that conflicts with the existing data. The transaction can be skipped by calling the[`pg_replication_origin_advance()`](functions-admin.html#PG-REPLICATION-ORIGIN-ADVANCE)function with a*`node_name`*corresponding to the subscription name, and a position. The current position of origins can be seen in the[`pg_replication_origin_status`](view-pg-replication-origin-status.html)system view. 
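As a sketch of the skip procedure just described, suppose the subscription's replication origin turns out to be named `pg_16395` (both the origin name and the LSN below are placeholders; the real values come from the `pg_replication_origin_status` view and from the error reported in the subscriber's server log):

```
-- Inspect the current origin positions on the subscriber
SELECT * FROM pg_replication_origin_status;

-- Advance the origin position past the conflicting transaction
SELECT pg_replication_origin_advance('pg_16395', '0/1510CC8'::pg_lsn);
```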
diff --git a/docs/X/logical-replication-publication.md b/docs/en/logical-replication-publication.md similarity index 100% rename from docs/X/logical-replication-publication.md rename to docs/en/logical-replication-publication.md diff --git a/docs/en/logical-replication-publication.zh.md b/docs/en/logical-replication-publication.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..26f04cab962e48d55a10edc5a8126324e62cbd3d --- /dev/null +++ b/docs/en/logical-replication-publication.zh.md @@ -0,0 +1,15 @@ +## 31.1. Publication + +A*publication*can be defined on any physical replication primary. The node where a publication is defined is referred to as*publisher*. A publication is a set of changes generated from a table or a group of tables, and might also be described as a change set or replication set. Each publication exists in only one database. + +Publications are different from schemas and do not affect how the table is accessed. Each table can be added to multiple publications if needed. Publications may currently only contain tables. Objects must be added explicitly, except when a publication is created for`ALL TABLES`. + +Publications can choose to limit the changes they produce to any combination of`INSERT`,`UPDATE`,`DELETE`, and`TRUNCATE`, similar to how triggers are fired by particular event types. By default, all operation types are replicated. + +A published table must have a “replica identity” configured in order to be able to replicate`UPDATE`and`DELETE`operations, so that appropriate rows to update or delete can be identified on the subscriber side. By default, this is the primary key, if there is one. Another unique index (with certain additional requirements) can also be set to be the replica identity. If the table does not have any suitable key, then it can be set to replica identity “full”, which means the entire row becomes the key. This, however, is very inefficient and should only be used as a fallback if no other solution is possible. If a replica identity other than “full” is set on the publisher side, a replica identity comprising the same or fewer columns must also be set on the subscriber side. 
See [`REPLICA IDENTITY`](sql-altertable.html#SQL-ALTERTABLE-REPLICA-IDENTITY) for details on how to set the replica identity. If a table without a replica identity is added to a publication that replicates `UPDATE` or `DELETE` operations, subsequent `UPDATE` or `DELETE` operations will cause an error on the publisher. `INSERT` operations can proceed regardless of any replica identity.

Every publication can have multiple subscribers.

A publication is created using the [`CREATE PUBLICATION`](sql-createpublication.html) command and may later be altered or dropped using corresponding commands.

The individual tables can be added and removed dynamically using [`ALTER PUBLICATION`](sql-alterpublication.html). Both the `ADD TABLE` and `DROP TABLE` operations are transactional; so the table will start or stop replicating at the correct snapshot once the transaction has committed.

diff --git a/docs/X/logical-replication-quick-setup.md b/docs/en/logical-replication-quick-setup.md similarity index 100% rename from docs/X/logical-replication-quick-setup.md rename to docs/en/logical-replication-quick-setup.md diff --git a/docs/en/logical-replication-quick-setup.zh.md b/docs/en/logical-replication-quick-setup.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..5e619df78e42440546b7ec4c175bf408a735c939 --- /dev/null +++ b/docs/en/logical-replication-quick-setup.zh.md @@ -0,0 +1,29 @@ +## 31.9. Quick Setup

First set the configuration options in `postgresql.conf`:

```
wal_level = logical
```

The other required settings have default values that are sufficient for a basic setup.

`pg_hba.conf` needs to be adjusted to allow replication (the values here depend on your actual network configuration and the user you want to use for connecting):

```
host all repuser 0.0.0.0/0 md5
```

Then on the publisher database:

```
CREATE PUBLICATION mypub FOR TABLE users, departments;
```

And on the subscriber database:

```
CREATE SUBSCRIPTION mysub CONNECTION 'dbname=foo host=bar user=repuser' PUBLICATION mypub;
```

The above will start the replication process, which synchronizes the initial table contents of the tables `users` and `departments` and then starts replicating incremental changes to those tables.

diff --git a/docs/X/logical-replication-restrictions.md b/docs/en/logical-replication-restrictions.md similarity index 100% rename from docs/X/logical-replication-restrictions.md rename to docs/en/logical-replication-restrictions.md diff --git a/docs/en/logical-replication-restrictions.zh.md b/docs/en/logical-replication-restrictions.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..7128d45fad214e3b439d8732866943ee49f4d9e0 --- /dev/null +++ b/docs/en/logical-replication-restrictions.zh.md @@ -0,0 +1,15 @@ +## 31.4. Restrictions

Logical replication currently has the following restrictions or missing functionality. These might be addressed in future releases.

- The database schema and DDL commands are not replicated. The initial schema can be copied by hand using `pg_dump --schema-only`. Subsequent schema changes would need to be kept in sync manually. (Note, however, that there is no need for the schemas to be absolutely the same on both sides.) Logical replication is robust when schema definitions change in a live database: When the schema is changed on the publisher and replicated data starts arriving at the subscriber but does not fit into the table schema, replication will error until the schema is updated. In many cases, intermittent errors can be avoided by applying additive schema changes to the subscriber first.

- Sequence data is not replicated. The data in serial or identity columns backed by sequences will of course be replicated as part of the table, but the sequence itself would still show the start value on the subscriber. If the subscriber is used as a read-only database, then this should typically not be a problem. If, however, some kind of switchover or failover to the subscriber database is intended, then the sequences would need to be updated to the latest values, either by copying the current data from the publisher (perhaps using `pg_dump`) or by determining a sufficiently high value from the tables themselves.

- Replication of `TRUNCATE` commands is supported, but some care must be taken when truncating groups of tables connected by foreign keys.
When replicating a truncate action, the subscriber will truncate the same group of tables that was truncated on the publisher, either explicitly specified or implicitly collected via `CASCADE`, minus tables that are not part of the subscription. This will work correctly if all affected tables are part of the same subscription. But if some tables to be truncated on the subscriber have foreign-key links to tables that are not part of the same (or any) subscription, then the application of the truncate action on the subscriber will fail.

- Large objects (see [Chapter 35](largeobjects.html)) are not replicated. There is no workaround for that, other than storing data in normal tables.

- Replication is only supported by tables, including partitioned tables. Attempts to replicate other types of relations, such as views, materialized views, or foreign tables, will result in an error.

- When replicating between partitioned tables, the actual replication originates, by default, from the leaf partitions on the publisher, so partitions on the publisher must also exist on the subscriber as valid target tables. (They could either be leaf partitions themselves, or they could be further subpartitioned, or they could even be independent tables.) Publications can also specify that changes are to be replicated using the identity and schema of the partitioned root table instead of that of the individual leaf partitions in which the changes actually originate (see [`CREATE PUBLICATION`](sql-createpublication.html)).

diff --git a/docs/X/logical-replication-security.md b/docs/en/logical-replication-security.md similarity index 100% rename from docs/X/logical-replication-security.md rename to docs/en/logical-replication-security.md diff --git a/docs/en/logical-replication-security.zh.md b/docs/en/logical-replication-security.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..55726dcb0ea76f5de7f7642fe5184779d2d4acbb --- /dev/null +++ b/docs/en/logical-replication-security.zh.md @@ -0,0 +1,17 @@ +## 31.7. Security

A user able to modify the schema of subscriber-side tables can execute arbitrary code as a superuser. Limit ownership and `TRIGGER` privilege on such tables to roles that superusers trust. Moreover, if untrusted users can create tables, use only publications that list tables explicitly. That is to say, create a subscription `FOR ALL TABLES` only when superusers trust every user permitted to create a non-temp table on the publisher or the subscriber.

The role used for the replication connection must have the `REPLICATION` attribute (or be a superuser). If the role lacks `SUPERUSER` and `BYPASSRLS`, publisher row security policies can execute. If the role does not trust all table owners, include `options=-crow_security=off` in the connection string; if a table owner then adds a row security policy, that setting will cause replication to halt rather than execute the policy. Access for the role must be configured in `pg_hba.conf` and it must have the `LOGIN` attribute.

In order to be able to copy the initial table data, the role used for the replication connection must have the `SELECT` privilege on a published table (or be a superuser).

To create a publication, the user must have the `CREATE` privilege in the database.

To add tables to a publication, the user must have ownership rights on the table. To create a publication that publishes all tables automatically, the user must be a superuser.

To create a subscription, the user must be a superuser.

The subscription apply process will run in the local database with the privileges of a superuser.

Privileges are only checked once at the start of a replication connection. They are not re-checked as each change record is read from the publisher, nor are they re-checked for each change when it is applied.

diff --git a/docs/X/logicaldecoding-explanation.md b/docs/en/logicaldecoding-explanation.md similarity index 100% rename from docs/X/logicaldecoding-explanation.md rename to docs/en/logicaldecoding-explanation.md diff --git a/docs/en/logicaldecoding-explanation.zh.md b/docs/en/logicaldecoding-explanation.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..fd0dd4a3c71e96ec952d656d61943a3ef48eed2d --- /dev/null +++ b/docs/en/logicaldecoding-explanation.zh.md @@ -0,0 +1,49 @@ +## 49.2. Logical Decoding Concepts

[49.2.1. Logical Decoding](logicaldecoding-explanation.html#id-1.8.14.8.2)

[49.2.2. Replication Slots](logicaldecoding-explanation.html#LOGICALDECODING-REPLICATION-SLOTS)

[49.2.3. Output Plugins](logicaldecoding-explanation.html#id-1.8.14.8.4)

[49.2.4. Exported Snapshots](logicaldecoding-explanation.html#id-1.8.14.8.5)

### 49.2.1.
Logical Decoding + +[](<>) + +Logical decoding is the process of extracting all persistent changes to a database's tables into a coherent, easy to understand format which can be interpreted without detailed knowledge of the database's internal state. + +In PostgreSQL, logical decoding is implemented by decoding the contents of the[write-ahead log](wal.html), which describe changes on a storage level, into an application-specific form such as a stream of tuples or SQL statements. + +### 49.2.2. Replication Slots + +[](<>) + +In the context of logical replication, a slot represents a stream of changes that can be replayed to a client in the order they were made on the origin server. Each slot streams a sequence of changes from a single database. + +### Note + +PostgreSQL also has streaming replication slots (see[Section 27.2.5](warm-standby.html#STREAMING-REPLICATION)), but they are used somewhat differently there. + +A replication slot has an identifier that is unique across all databases in a PostgreSQL cluster. Slots persist independently of the connection using them and are crash-safe. + +A logical slot will emit each change just once in normal operation. The current position of each slot is persisted only at checkpoint, so in the case of a crash the slot may return to an earlier LSN, which will then cause recent changes to be sent again when the server restarts. Logical decoding clients are responsible for avoiding ill effects from handling the same message more than once. Clients may wish to record the last LSN they saw when decoding and skip over any repeated data or (when using the replication protocol) request that decoding start from that LSN rather than letting the server determine the start point. The Replication Progress Tracking feature is designed for this purpose, refer to[replication origins](replication-origins.html). + +Multiple independent slots may exist for a single database. Each slot has its own state, allowing different consumers to receive changes from different points in the database change stream. For most applications, a separate slot will be required for each consumer. + +A logical replication slot knows nothing about the state of the receiver(s). It's even possible to have multiple different receivers using the same slot at different times; they'll just get the changes following on from when the last receiver stopped consuming them. Only one receiver may consume changes from a slot at any given time. + +### Caution + +Replication slots persist across crashes and know nothing about the state of their consumer(s). They will prevent removal of required resources even when there is no connection using them. This consumes storage because neither required WAL nor required rows from the system catalogs can be removed by`VACUUM`as long as they are required by a replication slot. In extreme cases this could cause the database to shut down to prevent transaction ID wraparound (see[Section 25.1.5](routine-vacuuming.html#VACUUM-FOR-WRAPAROUND)). So if a slot is no longer required it should be dropped. + +### 49.2.3. Output Plugins + +Output plugins transform the data from the write-ahead log's internal representation into the format the consumer of a replication slot desires. + +### 49.2.4. 
Exported Snapshots

When a new replication slot is created using the streaming replication interface (see [CREATE_REPLICATION_SLOT](protocol-replication.html#PROTOCOL-REPLICATION-CREATE-SLOT)), a snapshot is exported (see [Section 9.27.5](functions-admin.html#FUNCTIONS-SNAPSHOT-SYNCHRONIZATION)), which will show exactly the state of the database after which all changes will be included in the change stream. This can be used to create a new replica by using [`SET TRANSACTION SNAPSHOT`](sql-set-transaction.html) to read the state of the database at the moment the slot was created. This transaction can then be used to dump the database's state at that point in time, which afterwards can be updated using the slot's contents without losing any changes.

Creation of a snapshot is not always possible. In particular, it will fail when connected to a hot standby. Applications that do not require snapshot export may suppress it with the `NOEXPORT_SNAPSHOT` option.

diff --git a/docs/X/monitoring-locks.md b/docs/en/monitoring-locks.md similarity index 100% rename from docs/X/monitoring-locks.md rename to docs/en/monitoring-locks.md diff --git a/docs/en/monitoring-locks.zh.md b/docs/en/monitoring-locks.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..3e795c38a4552a98c605981a07e36f1803c9749b --- /dev/null +++ b/docs/en/monitoring-locks.zh.md @@ -0,0 +1,13 @@ +## 28.3. Viewing Locks

[](<>)

Another useful tool for monitoring database activity is the `pg_locks` system table. It allows the database administrator to view information about the outstanding locks in the lock manager. For example, this capability can be used to:

- View all the locks currently outstanding, all the locks on relations in a particular database, all the locks on a particular relation, or all the locks held by a particular PostgreSQL session.

- Determine the relation in the current database with the most ungranted locks (which might be a source of contention among database clients).

- Determine the effect of lock contention on overall database performance, as well as the extent to which contention varies with overall database traffic.

Details of the `pg_locks` view appear in [Section 52.74](view-pg-locks.html). For more information on locking and managing concurrency with PostgreSQL, refer to [Chapter 13](mvcc.html).

diff --git a/docs/X/monitoring-stats.md b/docs/en/monitoring-stats.md similarity index 100% rename from docs/X/monitoring-stats.md rename to docs/en/monitoring-stats.md diff --git a/docs/en/monitoring-stats.zh.md b/docs/en/monitoring-stats.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..8543c7a8401f377d63005fe300f93712c53ecdd7 --- /dev/null +++ b/docs/en/monitoring-stats.zh.md @@ -0,0 +1,922 @@ +## 28.2. The Statistics Collector

[28.2.1. Statistics Collection Configuration](monitoring-stats.html#MONITORING-STATS-SETUP)

[28.2.2.
Viewing Statistics](monitoring-stats.html#MONITORING-STATS-VIEWS)

[28.2.3. `pg_stat_activity`](monitoring-stats.html#MONITORING-PG-STAT-ACTIVITY-VIEW)

[28.2.4. `pg_stat_replication`](monitoring-stats.html#MONITORING-PG-STAT-REPLICATION-VIEW)

[28.2.5. `pg_stat_replication_slots`](monitoring-stats.html#MONITORING-PG-STAT-REPLICATION-SLOTS-VIEW)

[28.2.6. `pg_stat_wal_receiver`](monitoring-stats.html#MONITORING-PG-STAT-WAL-RECEIVER-VIEW)

[28.2.7. `pg_stat_subscription`](monitoring-stats.html#MONITORING-PG-STAT-SUBSCRIPTION)

[28.2.8. `pg_stat_ssl`](monitoring-stats.html#MONITORING-PG-STAT-SSL-VIEW)

[28.2.9. `pg_stat_gssapi`](monitoring-stats.html#MONITORING-PG-STAT-GSSAPI-VIEW)

[28.2.10. `pg_stat_archiver`](monitoring-stats.html#MONITORING-PG-STAT-ARCHIVER-VIEW)

[28.2.11. `pg_stat_bgwriter`](monitoring-stats.html#MONITORING-PG-STAT-BGWRITER-VIEW)

[28.2.12. `pg_stat_wal`](monitoring-stats.html#MONITORING-PG-STAT-WAL-VIEW)

[28.2.13. `pg_stat_database`](monitoring-stats.html#MONITORING-PG-STAT-DATABASE-VIEW)

[28.2.14. `pg_stat_database_conflicts`](monitoring-stats.html#MONITORING-PG-STAT-DATABASE-CONFLICTS-VIEW)

[28.2.15. `pg_stat_all_tables`](monitoring-stats.html#MONITORING-PG-STAT-ALL-TABLES-VIEW)

[28.2.16. `pg_stat_all_indexes`](monitoring-stats.html#MONITORING-PG-STAT-ALL-INDEXES-VIEW)

[28.2.17. `pg_statio_all_tables`](monitoring-stats.html#MONITORING-PG-STATIO-ALL-TABLES-VIEW)

[28.2.18. `pg_statio_all_indexes`](monitoring-stats.html#MONITORING-PG-STATIO-ALL-INDEXES-VIEW)

[28.2.19. `pg_statio_all_sequences`](monitoring-stats.html#MONITORING-PG-STATIO-ALL-SEQUENCES-VIEW)

[28.2.20. `pg_stat_user_functions`](monitoring-stats.html#MONITORING-PG-STAT-USER-FUNCTIONS-VIEW)

[28.2.21. `pg_stat_slru`](monitoring-stats.html#MONITORING-PG-STAT-SLRU-VIEW)

[28.2.22. Statistics Functions](monitoring-stats.html#MONITORING-STATS-FUNCTIONS)

[](<>)

PostgreSQL's *statistics collector* is a subsystem that supports collection and reporting of information about server activity. Presently, the collector can count accesses to tables and indexes in both disk-block and individual-row terms. It also tracks the total number of rows in each table, and information about vacuum and analyze actions for each table. It can also count calls to user-defined functions and the total time spent in each one.

PostgreSQL also supports reporting dynamic information about exactly what is going on in the system right now, such as the exact command currently being executed by other server processes, and which other connections exist in the system. This facility is independent of the collector process.

### 28.2.1. Statistics Collection Configuration

Since collection of statistics adds some overhead to query execution, the system can be configured to collect or not collect information. This is controlled by configuration parameters that are normally set in `postgresql.conf`. (See [Chapter 20](runtime-config.html) for details about setting configuration parameters.)

The parameter [track_activities](runtime-config-statistics.html#GUC-TRACK-ACTIVITIES) enables monitoring of the current command being executed by any server process.
The parameter [track_counts](runtime-config-statistics.html#GUC-TRACK-COUNTS) controls whether statistics are collected about table and index accesses.

The parameter [track_functions](runtime-config-statistics.html#GUC-TRACK-FUNCTIONS) enables tracking of usage of user-defined functions.

The parameter [track_io_timing](runtime-config-statistics.html#GUC-TRACK-IO-TIMING) enables monitoring of block read and write times.

The parameter [track_wal_io_timing](runtime-config-statistics.html#GUC-TRACK-WAL-IO-TIMING) enables monitoring of WAL write times.

Normally these parameters are set in `postgresql.conf` so that they apply to all server processes, but it is possible to turn them on or off in individual sessions using the [SET](sql-set.html) command. (To prevent ordinary users from hiding their activity from the administrator, only superusers are allowed to change these parameters with `SET`.)

The statistics collector transmits the collected information to other PostgreSQL processes through temporary files. These files are stored in the directory named by the [stats_temp_directory](runtime-config-statistics.html#GUC-STATS-TEMP-DIRECTORY) parameter, `pg_stat_tmp` by default. For better performance, `stats_temp_directory` can be pointed at a RAM-based file system, decreasing physical I/O requirements. When the server shuts down cleanly, a permanent copy of the statistics data is stored in the `pg_stat` subdirectory, so that statistics can be retained across server restarts. When recovery is performed at server start (e.g., after immediate shutdown, server crash, and point-in-time recovery), all statistics counters are reset.

### 28.2.2. Viewing Statistics

Several predefined views, listed in [Table 28.1](monitoring-stats.html#MONITORING-STATS-DYNAMIC-VIEWS-TABLE), are available to show the current state of the system. There are also several other views, listed in [Table 28.2](monitoring-stats.html#MONITORING-STATS-VIEWS-TABLE), available to show the results of statistics collection. Alternatively, one can build custom views using the underlying statistics functions, as discussed in [Section 28.2.22](monitoring-stats.html#MONITORING-STATS-FUNCTIONS).

When using the statistics to monitor collected data, it is important to realize that the information does not update instantaneously. Each individual server process transmits new statistical counts to the collector just before going idle; so a query or transaction still in progress does not affect the displayed totals. Also, the collector itself emits a new report at most once per `PGSTAT_STAT_INTERVAL` milliseconds (500 ms unless altered while building the server). So the displayed information lags behind actual activity. However, current-query information collected by `track_activities` is always up-to-date.

Another important point is that when a server process is asked to display any of these statistics, it first fetches the most recent report emitted by the collector process and then continues to use this snapshot for all statistical views and functions until the end of its current transaction. So the statistics will show static information as long as you continue the current transaction. Similarly, information about the current queries of all sessions is collected when any such information is first requested within a transaction, and the same information will be displayed throughout the transaction. This is a feature, not a bug, because it allows you to perform several queries on the statistics and correlate the results without worrying that the numbers are changing underneath you. But if you want to see new results with each query, be sure to do the queries outside any transaction block. Alternatively, you can invoke `pg_stat_clear_snapshot()`, which will discard the current transaction's statistics snapshot (if any). The next use of statistical information will cause a new snapshot to be fetched.
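To make the snapshot behavior concrete, here is a minimal sketch; `mytab` is a placeholder table name, and the counts returned depend entirely on actual activity:

```
BEGIN;
-- The first access fetches and freezes the statistics snapshot
SELECT n_tup_ins FROM pg_stat_user_tables WHERE relname = 'mytab';
-- Repeating the query in the same transaction returns the same number
SELECT n_tup_ins FROM pg_stat_user_tables WHERE relname = 'mytab';
-- Discard the snapshot; the next access fetches fresh data
SELECT pg_stat_clear_snapshot();
SELECT n_tup_ins FROM pg_stat_user_tables WHERE relname = 'mytab';
COMMIT;
```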
A transaction can also see its own statistics (as yet untransmitted to the collector) in the views `pg_stat_xact_all_tables`, `pg_stat_xact_sys_tables`, `pg_stat_xact_user_tables`, and `pg_stat_xact_user_functions`. These numbers do not act as stated above; instead they update continuously throughout the transaction.

Some of the information in the dynamic statistics views shown in [Table 28.1](monitoring-stats.html#MONITORING-STATS-DYNAMIC-VIEWS-TABLE) is security restricted. Ordinary users can only see all the information about their own sessions (sessions belonging to a role that they are a member of). In rows about other sessions, many columns will be null. Note, however, that the existence of a session and its general properties such as its session user and database are visible to all users. Superusers and members of the built-in role `pg_read_all_stats` (see also [Section 22.5](predefined-roles.html)) can see all the information about all sessions.

**Table 28.1. Dynamic Statistics Views**

| View Name | Description |
| --------- | ----------- |
| `pg_stat_activity` [](<>) | One row per server process, showing information related to the current activity of that process, such as state and current query. See [`pg_stat_activity`](monitoring-stats.html#MONITORING-PG-STAT-ACTIVITY-VIEW) for details. |
| `pg_stat_replication` [](<>) | One row per WAL sender process, showing statistics about replication to that sender's connected standby server. See [`pg_stat_replication`](monitoring-stats.html#MONITORING-PG-STAT-REPLICATION-VIEW) for details. |
| `pg_stat_wal_receiver` [](<>) | Only one row, showing statistics about the WAL receiver from that receiver's connected server. See [`pg_stat_wal_receiver`](monitoring-stats.html#MONITORING-PG-STAT-WAL-RECEIVER-VIEW) for details. |
| `pg_stat_subscription` [](<>) | At least one row per subscription, showing information about the subscription workers. See [`pg_stat_subscription`](monitoring-stats.html#MONITORING-PG-STAT-SUBSCRIPTION) for details. |
| `pg_stat_ssl` [](<>) | One row per connection (regular and replication), showing information about SSL used on this connection. See [`pg_stat_ssl`](monitoring-stats.html#MONITORING-PG-STAT-SSL-VIEW) for details. |
| `pg_stat_gssapi` [](<>) | One row per connection (regular and replication), showing information about GSSAPI authentication and encryption used on this connection. See [`pg_stat_gssapi`](monitoring-stats.html#MONITORING-PG-STAT-GSSAPI-VIEW) for details. |
| `pg_stat_progress_analyze` [](<>) | One row for each backend (including autovacuum worker processes) running `ANALYZE`, showing current progress. See [Section 28.4.1](progress-reporting.html#ANALYZE-PROGRESS-REPORTING). |
| `pg_stat_progress_create_index` [](<>) | One row for each backend running `CREATE INDEX` or `REINDEX`, showing current progress. See [Section 28.4.2](progress-reporting.html#CREATE-INDEX-PROGRESS-REPORTING). |
| `pg_stat_progress_vacuum` [](<>) | One row for each backend (including autovacuum worker processes) running `VACUUM`, showing current progress. See [Section 28.4.3](progress-reporting.html#VACUUM-PROGRESS-REPORTING). |
| `pg_stat_progress_cluster` [](<>) | One row for each backend running `CLUSTER` or `VACUUM FULL`, showing current progress. See [Section 28.4.4](progress-reporting.html#CLUSTER-PROGRESS-REPORTING). |
| `pg_stat_progress_basebackup` [](<>) | One row for each WAL sender process streaming a base backup, showing current progress. See [Section 28.4.5](progress-reporting.html#BASEBACKUP-PROGRESS-REPORTING). |
| `pg_stat_progress_copy` [](<>) | One row for each backend running `COPY`, showing current progress. See [Section 28.4.6](progress-reporting.html#COPY-PROGRESS-REPORTING). |

**Table 28.2. Collected Statistics Views**

| View Name | Description |
| --------- | ----------- |
| `pg_stat_archiver` [](<>) | One row only, showing statistics about the WAL archiver process's activity. See [`pg_stat_archiver`](monitoring-stats.html#MONITORING-PG-STAT-ARCHIVER-VIEW) for details. |
Some of the information in the dynamic statistics views shown in [Table 28.1](monitoring-stats.html#MONITORING-STATS-DYNAMIC-VIEWS-TABLE) is security restricted. Ordinary users can only see all the information about their own sessions (sessions belonging to a role that they are a member of). In rows about other sessions, many columns will be null. Note, however, that the existence of a session and its general properties such as its session user and database are visible to all users. Superusers and members of the built-in role `pg_read_all_stats` (see also [Section 22.5](predefined-roles.html)) can see all the information about all sessions.

**Table 28.1. Dynamic Statistics Views**

| View Name | Description |
| --------- | ----------- |
| `pg_stat_activity` [](<>) | One row per server process, showing information related to the current activity of that process, such as state and current query. See [`pg_stat_activity`](monitoring-stats.html#MONITORING-PG-STAT-ACTIVITY-VIEW) for details. |
| `pg_stat_replication` [](<>) | One row per WAL sender process, showing statistics about replication to that sender's connected standby server. See [`pg_stat_replication`](monitoring-stats.html#MONITORING-PG-STAT-REPLICATION-VIEW) for details. |
| `pg_stat_wal_receiver` [](<>) | Only one row, showing statistics about the WAL receiver from that receiver's connected server. See [`pg_stat_wal_receiver`](monitoring-stats.html#MONITORING-PG-STAT-WAL-RECEIVER-VIEW) for details. |
| `pg_stat_subscription` [](<>) | At least one row per subscription, showing information about the subscription workers. See [`pg_stat_subscription`](monitoring-stats.html#MONITORING-PG-STAT-SUBSCRIPTION) for details. |
| `pg_stat_ssl` [](<>) | One row per connection (regular and replication), showing information about SSL used on this connection. See [`pg_stat_ssl`](monitoring-stats.html#MONITORING-PG-STAT-SSL-VIEW) for details. |
| `pg_stat_gssapi` [](<>) | One row per connection (regular and replication), showing information about GSSAPI authentication and encryption used on this connection. See [`pg_stat_gssapi`](monitoring-stats.html#MONITORING-PG-STAT-GSSAPI-VIEW) for details. |
| `pg_stat_progress_analyze` [](<>) | One row for each backend (including autovacuum worker processes) running `ANALYZE`, showing current progress. See [Section 28.4.1](progress-reporting.html#ANALYZE-PROGRESS-REPORTING). |
| `pg_stat_progress_create_index` [](<>) | One row for each backend running `CREATE INDEX` or `REINDEX`, showing current progress. See [Section 28.4.2](progress-reporting.html#CREATE-INDEX-PROGRESS-REPORTING). |
| `pg_stat_progress_vacuum` [](<>) | One row for each backend (including autovacuum worker processes) running `VACUUM`, showing current progress. See [Section 28.4.3](progress-reporting.html#VACUUM-PROGRESS-REPORTING). |
| `pg_stat_progress_cluster` [](<>) | One row for each backend running `CLUSTER` or `VACUUM FULL`, showing current progress. See [Section 28.4.4](progress-reporting.html#CLUSTER-PROGRESS-REPORTING). |
| `pg_stat_progress_basebackup` [](<>) | One row for each WAL sender process streaming a base backup, showing current progress. See [Section 28.4.5](progress-reporting.html#BASEBACKUP-PROGRESS-REPORTING). |
| `pg_stat_progress_copy` [](<>) | One row for each backend running `COPY`, showing current progress. See [Section 28.4.6](progress-reporting.html#COPY-PROGRESS-REPORTING). |

**Table 28.2. Collected Statistics Views**

| View Name | Description |
| --------- | ----------- |
| `pg_stat_archiver` [](<>) | One row only, showing statistics about the WAL archiver process's activity. See [`pg_stat_archiver`](monitoring-stats.html#MONITORING-PG-STAT-ARCHIVER-VIEW) for details. |
| `pg_stat_bgwriter` [](<>) | One row only, showing statistics about the background writer process's activity. See [`pg_stat_bgwriter`](monitoring-stats.html#MONITORING-PG-STAT-BGWRITER-VIEW) for details. |
| `pg_stat_wal` [](<>) | One row only, showing statistics about WAL activity. See [`pg_stat_wal`](monitoring-stats.html#MONITORING-PG-STAT-WAL-VIEW) for details. |
| `pg_stat_database` [](<>) | One row per database, showing database-wide statistics. See [`pg_stat_database`](monitoring-stats.html#MONITORING-PG-STAT-DATABASE-VIEW) for details. |
| `pg_stat_database_conflicts` [](<>) | One row per database, showing database-wide statistics about query cancels due to conflicts with recovery on standby servers. See [`pg_stat_database_conflicts`](monitoring-stats.html#MONITORING-PG-STAT-DATABASE-CONFLICTS-VIEW) for details. |
| `pg_stat_all_tables` [](<>) | One row for each table in the current database, showing statistics about accesses to that specific table. See [`pg_stat_all_tables`](monitoring-stats.html#MONITORING-PG-STAT-ALL-TABLES-VIEW) for details. |
| `pg_stat_sys_tables` [](<>) | Same as `pg_stat_all_tables`, except that only system tables are shown. |
| `pg_stat_user_tables` [](<>) | Same as `pg_stat_all_tables`, except that only user tables are shown. |
| `pg_stat_xact_all_tables` [](<>) | Similar to `pg_stat_all_tables`, but counts actions taken so far within the current transaction (which are *not* yet included in `pg_stat_all_tables` and related views). The columns for numbers of live and dead rows and vacuum and analyze actions are not present in this view. |
| `pg_stat_xact_sys_tables` [](<>) | Same as `pg_stat_xact_all_tables`, except that only system tables are shown. |
| `pg_stat_xact_user_tables` [](<>) | Same as `pg_stat_xact_all_tables`, except that only user tables are shown. |
| `pg_stat_all_indexes` [](<>) | One row for each index in the current database, showing statistics about accesses to that specific index. See [`pg_stat_all_indexes`](monitoring-stats.html#MONITORING-PG-STAT-ALL-INDEXES-VIEW) for details. |
| `pg_stat_sys_indexes` [](<>) | Same as `pg_stat_all_indexes`, except that only indexes on system tables are shown. |
| `pg_stat_user_indexes` [](<>) | Same as `pg_stat_all_indexes`, except that only indexes on user tables are shown. |
| `pg_statio_all_tables` [](<>) | One row for each table in the current database, showing statistics about I/O on that specific table. See [`pg_statio_all_tables`](monitoring-stats.html#MONITORING-PG-STATIO-ALL-TABLES-VIEW) for details. |
| `pg_statio_sys_tables` [](<>) | Same as `pg_statio_all_tables`, except that only system tables are shown. |
| `pg_statio_user_tables` [](<>) | Same as `pg_statio_all_tables`, except that only user tables are shown. |
| `pg_statio_all_indexes` [](<>) | One row for each index in the current database, showing statistics about I/O on that specific index. See [`pg_statio_all_indexes`](monitoring-stats.html#MONITORING-PG-STATIO-ALL-INDEXES-VIEW) for details. |
| `pg_statio_sys_indexes` [](<>) | Same as `pg_statio_all_indexes`, except that only indexes on system tables are shown. |
| `pg_statio_user_indexes` [](<>) | Same as `pg_statio_all_indexes`, except that only indexes on user tables are shown. |
| `pg_statio_all_sequences` [](<>) | One row for each sequence in the current database, showing statistics about I/O on that specific sequence. See [`pg_statio_all_sequences`](monitoring-stats.html#MONITORING-PG-STATIO-ALL-SEQUENCES-VIEW) for details. |
| `pg_statio_sys_sequences` [](<>) | Same as `pg_statio_all_sequences`, except that only system sequences are shown. (Presently, no system sequences are defined, so this view is always empty.) |
| `pg_statio_user_sequences` [](<>) | Same as `pg_statio_all_sequences`, except that only user sequences are shown. |
| `pg_stat_user_functions` [](<>) | One row for each tracked function, showing statistics about executions of that function. See [`pg_stat_user_functions`](monitoring-stats.html#MONITORING-PG-STAT-USER-FUNCTIONS-VIEW) for details. |
| `pg_stat_xact_user_functions` [](<>) | Similar to `pg_stat_user_functions`, but counts only calls during the current transaction (which are *not* yet included in `pg_stat_user_functions`). |
| `pg_stat_slru` [](<>) | One row per SLRU, showing statistics of operations. See [`pg_stat_slru`](monitoring-stats.html#MONITORING-PG-STAT-SLRU-VIEW) for details. |
| `pg_stat_replication_slots` [](<>) | One row per replication slot, showing statistics about the replication slot's usage. See [`pg_stat_replication_slots`](monitoring-stats.html#MONITORING-PG-STAT-REPLICATION-SLOTS-VIEW) for details. |

The per-index statistics are particularly useful to determine which indexes are being used and how effective they are.
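For example, a query along these lines (a minimal sketch; whether an index with `idx_scan = 0` is truly unused also depends on how long statistics have been accumulating) lists user indexes that have never been scanned:

```
SELECT schemaname, relname, indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY schemaname, relname;
```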
The `pg_statio_` views are primarily useful to determine the effectiveness of the buffer cache. When the number of actual disk reads is much smaller than the number of buffer hits, then the cache is satisfying most read requests without invoking a kernel call. However, these statistics do not give the entire story: due to the way in which PostgreSQL handles disk I/O, data that is not in the PostgreSQL buffer cache might still reside in the kernel's I/O cache, and might therefore still be fetched without requiring a physical read. Users interested in obtaining more detailed information on PostgreSQL I/O behavior are advised to use the PostgreSQL statistics collector in combination with operating system utilities that allow insight into the kernel's handling of I/O.

### 28.2.3. `pg_stat_activity`

[]()

The `pg_stat_activity` view will have one row per server process, showing information related to the current activity of that process.

**Table 28.3. `pg_stat_activity` View**

| Column Type
 
 Description |
| --------------- |
| `datid` `oid`
 
 OID of the database this backend is connected to |
| `datname` `name`
 
 Name of the database this backend is connected to |
| `pid` `integer`
 
 Process ID of this backend |
| `leader_pid` `integer`
 
 Process ID of the parallel group leader, if this process is a parallel query worker. `NULL` if this process is a parallel group leader or does not participate in parallel query. |
| `usesysid` `oid`
 
 OID of the user logged into this backend |
| `usename` `name`
 
 Name of the user logged into this backend |
| `application_name` `text`
 
 Name of the application that is connected to this backend |
| `client_addr` `inet`
 
 IP address of the client connected to this backend. If this field is null, it indicates either that the client is connected via a Unix socket on the server machine or that this is an internal process such as autovacuum. |
| `client_hostname` `text`
 
 Host name of the connected client, as reported by a reverse DNS lookup of `client_addr`. This field will only be non-null for IP connections, and only when [log\_hostname](runtime-config-logging.html#GUC-LOG-HOSTNAME) is enabled. |
| `client_port` `integer`
 
 TCP port number that the client is using for communication with this backend, or `-1` if a Unix socket is used. If this field is null, it indicates that this is an internal server process. |
| `backend_start` `timestamp with time zone`
 
 Time when this process was started. For client backends, this is the time the client connected to the server. |
| `xact_start` `timestamp with time zone`
 
 Time when this process' current transaction was started, or null if no transaction is active. If the current query is the first of its transaction, this column is equal to the `query_start` column. |
| `query_start` `timestamp with time zone`
 
 Time when the currently active query was started, or if `state` is not `active`, when the last query was started |
| `state_change` `timestamp with time zone`
 
 Time when the `state` was last changed |
| `wait_event_type` `text`
 
 The type of event for which the backend is waiting, if any; otherwise NULL. See [Table 28.4](monitoring-stats.html#WAIT-EVENT-TABLE). |
| `wait_event` `text`
 
 Wait event name if backend is currently waiting, otherwise NULL. See [Table 28.5](monitoring-stats.html#WAIT-EVENT-ACTIVITY-TABLE) through [Table 28.13](monitoring-stats.html#WAIT-EVENT-TIMEOUT-TABLE). |
| `state` `text`
 
 Current overall state of this backend. Possible values are:
 
 * `active`: The backend is executing a query.
 
 * `idle`: The backend is waiting for a new client command.
 
 * `idle in transaction`: The backend is in a transaction, but is not currently executing a query.
 
 * `idle in transaction (aborted)`: This state is similar to `idle in transaction`, except one of the statements in the transaction caused an error.
 
 * `fastpath function call`: The backend is executing a fast-path function.
 
 * `disabled`: This state is reported if [track\_activities](runtime-config-statistics.html#GUC-TRACK-ACTIVITIES) is disabled in this backend. |
| `backend_xid` `xid`
 
 Top-level transaction identifier of this backend, if any. |
| `backend_xmin` `xid`
 
 The current backend's `xmin` horizon. |
| `query_id` `bigint`
 
 Identifier of this backend's most recent query. If `state` is `active` this field shows the identifier of the currently executing query. In all other states, it shows the identifier of the last query that was executed. Query identifiers are not computed by default, so this field will be null unless the [compute\_query\_id](runtime-config-statistics.html#GUC-COMPUTE-QUERY-ID) parameter is enabled or a third-party module that computes query identifiers is configured. |
| `query` `text`
 
 Text of this backend's most recent query. If `state` is `active` this field shows the currently executing query. In all other states, it shows the last query that was executed. By default the query text is truncated at 1024 bytes; this value can be changed via the parameter [track\_activity\_query\_size](runtime-config-statistics.html#GUC-TRACK-ACTIVITY-QUERY-SIZE). |
| `backend_type` `text`
 
 Type of current backend. Possible types are `autovacuum launcher`, `autovacuum worker`, `logical replication launcher`, `logical replication worker`, `parallel worker`, `background writer`, `client backend`, `checkpointer`, `archiver`, `startup`, `walreceiver`, `walsender` and `walwriter`. In addition, background workers registered by extensions may have additional types. |

### Note

The `wait_event` and `state` columns are independent. If a backend is in the `active` state, it may or may not be `waiting` on some event. If the state is `active` and `wait_event` is non-null, it means that a query is being executed, but is being blocked somewhere in the system.
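As an illustration, a query of this shape (a sketch; the five-minute threshold is arbitrary) lists sessions whose current statement has been running for a while, together with any wait event blocking them:

```
SELECT pid, usename, state, wait_event_type, wait_event,
       now() - query_start AS runtime, query
FROM pg_stat_activity
WHERE state = 'active'
  AND now() - query_start > interval '5 minutes'
ORDER BY runtime DESC;
```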
**Table 28.4. Wait Event Types**

| Wait Event Type | Description |
| --------------- | ----------- |
| `Activity` | The server process is idle. This event type indicates a process waiting for activity in its main processing loop. `wait_event` will identify the specific wait point; see [Table 28.5](monitoring-stats.html#WAIT-EVENT-ACTIVITY-TABLE). |
| `BufferPin` | The server process is waiting for exclusive access to a data buffer. Buffer pin waits can be protracted if another process holds an open cursor that last read data from the buffer in question. See [Table 28.6](monitoring-stats.html#WAIT-EVENT-BUFFERPIN-TABLE). |
| `Client` | The server process is waiting for activity on a socket connected to a user application. Thus, the server expects something to happen that is independent of its internal processes. `wait_event` will identify the specific wait point; see [Table 28.7](monitoring-stats.html#WAIT-EVENT-CLIENT-TABLE). |
| `Extension` | The server process is waiting for some condition defined by an extension module. See [Table 28.8](monitoring-stats.html#WAIT-EVENT-EXTENSION-TABLE). |
| `IO` | The server process is waiting for an I/O operation to complete. `wait_event` will identify the specific wait point; see [Table 28.9](monitoring-stats.html#WAIT-EVENT-IO-TABLE). |
| `IPC` | The server process is waiting for some interaction with another server process. `wait_event` will identify the specific wait point; see [Table 28.10](monitoring-stats.html#WAIT-EVENT-IPC-TABLE). |
| `Lock` | The server process is waiting for a heavyweight lock. Heavyweight locks, also known as lock manager locks or simply locks, primarily protect SQL-visible objects such as tables. However, they are also used to ensure mutual exclusion for certain internal operations such as relation extension. `wait_event` will identify the type of lock awaited; see [Table 28.11](monitoring-stats.html#WAIT-EVENT-LOCK-TABLE). |
| `LWLock` | The server process is waiting for a lightweight lock. Most such locks protect a particular data structure in shared memory. `wait_event` will contain a name identifying the purpose of the lightweight lock. (Some locks have specific names; others are part of a group of locks each with a similar purpose.) See [Table 28.12](monitoring-stats.html#WAIT-EVENT-LWLOCK-TABLE). |
| `Timeout` | The server process is waiting for a timeout to expire. `wait_event` will identify the specific wait point; see [Table 28.13](monitoring-stats.html#WAIT-EVENT-TIMEOUT-TABLE). |

**Table 28.5. Wait Events of Type `Activity`**

| `Activity` Wait Event | Description |
| --------------------- | ----------- |
| `ArchiverMain` | Waiting in main loop of archiver process. |
| `AutoVacuumMain` | Waiting in main loop of autovacuum launcher process. |
| `BgWriterHibernate` | Waiting in background writer process, hibernating. |
| `BgWriterMain` | Waiting in main loop of background writer process. |
| `CheckpointerMain` | Waiting in main loop of checkpointer process. |
| `LogicalApplyMain` | Waiting in main loop of logical replication apply process. |
| `LogicalLauncherMain` | Waiting in main loop of logical replication launcher process. |
| `PgStatMain` | Waiting in main loop of statistics collector process. |
| `RecoveryWalStream` | Waiting in main loop of startup process for WAL to arrive, during streaming recovery. |
| `SysLoggerMain` | Waiting in main loop of syslogger process. |
| `WalReceiverMain` | Waiting in main loop of WAL receiver process. |
| `WalSenderMain` | Waiting in main loop of WAL sender process. |
| `WalWriterMain` | Waiting in main loop of WAL writer process. |

**Table 28.6. Wait Events of Type `BufferPin`**

| `BufferPin` Wait Event | Description |
| ---------------------- | ----------- |
| `BufferPin` | Waiting to acquire an exclusive pin on a buffer. |

**Table 28.7. Wait Events of Type `Client`**

| `Client` Wait Event | Description |
| ------------------- | ----------- |
| `ClientRead` | Waiting to read data from the client. |
| `ClientWrite` | Waiting to write data to the client. |
| `GSSOpenServer` | Waiting to read data from the client while establishing a GSSAPI session. |
| `LibPQWalReceiverConnect` | Waiting in WAL receiver to establish connection to remote server. |
| `LibPQWalReceiverReceive` | Waiting in WAL receiver to receive data from remote server. |
| `SSLOpenServer` | Waiting for SSL while attempting connection. |
| `WalSenderWaitForWAL` | Waiting for WAL to be flushed in WAL sender process. |
| `WalSenderWriteData` | Waiting for any activity when processing replies from WAL receiver in WAL sender process. |

**Table 28.8. Wait Events of Type `Extension`**

| `Extension` Wait Event | Description |
| ---------------------- | ----------- |
| `Extension` | Waiting in an extension. |

**Table 28.9. Wait Events of Type `IO`**

| `IO` Wait Event | Description |
| --------------- | ----------- |
| `BaseBackupRead` | Waiting for base backup to read from a file. |
| `BufFileRead` | Waiting for a read from a buffered file. |
| `BufFileWrite` | Waiting for a write to a buffered file. |
| `BufFileTruncate` | Waiting for a buffered file to be truncated. |
| `ControlFileRead` | Waiting for a read from the `pg_control` file. |
| `ControlFileSync` | Waiting for the `pg_control` file to reach durable storage. |
| `ControlFileSyncUpdate` | Waiting for an update to the `pg_control` file to reach durable storage. |
| `ControlFileWrite` | Waiting for a write to the `pg_control` file. |
| `ControlFileWriteUpdate` | Waiting for a write to update the `pg_control` file. |
| `CopyFileRead` | Waiting for a read during a file copy operation. |
| `CopyFileWrite` | Waiting for a write during a file copy operation. |
| `DSMFillZeroWrite` | Waiting to fill a dynamic shared memory backing file with zeroes. |
| `DataFileExtend` | Waiting for a relation data file to be extended. |
| `DataFileFlush` | Waiting for a relation data file to reach durable storage. |
| `DataFileImmediateSync` | Waiting for an immediate synchronization of a relation data file to durable storage. |
| `DataFilePrefetch` | Waiting for an asynchronous prefetch from a relation data file. |
| `DataFileRead` | Waiting for a read from a relation data file. |
| `DataFileSync` | Waiting for changes to a relation data file to reach durable storage. |
| `DataFileTruncate` | Waiting for a relation data file to be truncated. |
| `DataFileWrite` | Waiting for a write to a relation data file. |
| `LockFileAddToDataDirRead` | Waiting for a read while adding a line to the data directory lock file. |
| `LockFileAddToDataDirSync` | Waiting for data to reach durable storage while adding a line to the data directory lock file. |
| `LockFileAddToDataDirWrite` | Waiting for a write while adding a line to the data directory lock file. |
| `LockFileCreateRead` | Waiting for a read while creating the data directory lock file. |
| `LockFileCreateSync` | Waiting for data to reach durable storage while creating the data directory lock file. |
| `LockFileCreateWrite` | Waiting for a write while creating the data directory lock file. |
| `LockFileReCheckDataDirRead` | Waiting for a read during recheck of the data directory lock file. |
| `LogicalRewriteCheckpointSync` | Waiting for logical rewrite mappings to reach durable storage during a checkpoint. |
| `LogicalRewriteMappingSync` | Waiting for mapping data to reach durable storage during a logical rewrite. |
| `LogicalRewriteMappingWrite` | Waiting for a write of mapping data during a logical rewrite. |
| `LogicalRewriteSync` | Waiting for logical rewrite mappings to reach durable storage. |
| `LogicalRewriteTruncate` | Waiting for truncate of mapping data during a logical rewrite. |
| `LogicalRewriteWrite` | Waiting for a write of logical rewrite mappings. |
| `RelationMapRead` | Waiting for a read of the relation map file. |
| `RelationMapSync` | Waiting for the relation map file to reach durable storage. |
| `RelationMapWrite` | Waiting for a write to the relation map file. |
| `ReorderBufferRead` | Waiting for a read during reorder buffer management. |
| `ReorderBufferWrite` | Waiting for a write during reorder buffer management. |
| `ReorderLogicalMappingRead` | Waiting for a read of a logical mapping during reorder buffer management. |
| `ReplicationSlotRead` | Waiting for a read from a replication slot control file. |
| `ReplicationSlotRestoreSync` | Waiting for a replication slot control file to reach durable storage while restoring it to memory. |
| `ReplicationSlotSync` | Waiting for a replication slot control file to reach durable storage. |
| `ReplicationSlotWrite` | Waiting for a write to a replication slot control file. |
| `SLRUFlushSync` | Waiting for SLRU data to reach durable storage during a checkpoint or database shutdown. |
| `SLRURead` | Waiting for a read of an SLRU page. |
| `SLRUSync` | Waiting for SLRU data to reach durable storage following a page write. |
| `SLRUWrite` | Waiting for a write of an SLRU page. |
| `SnapbuildRead` | Waiting for a read of a serialized historical catalog snapshot. |
| `SnapbuildSync` | Waiting for a serialized historical catalog snapshot to reach durable storage. |
| `SnapbuildWrite` | Waiting for a write of a serialized historical catalog snapshot. |
| `TimelineHistoryFileSync` | Waiting for a timeline history file received via streaming replication to reach durable storage. |
| `TimelineHistoryFileWrite` | Waiting for a write of a timeline history file received via streaming replication. |
| `TimelineHistoryRead` | Waiting for a read of a timeline history file. |
| `TimelineHistorySync` | Waiting for a newly created timeline history file to reach durable storage. |
| `TimelineHistoryWrite` | Waiting for a write of a newly created timeline history file. |
| `TwophaseFileRead` | Waiting for a read of a two phase state file. |
| `TwophaseFileSync` | Waiting for a two phase state file to reach durable storage. |
| `TwophaseFileWrite` | Waiting for a write of a two phase state file. |
| `WALBootstrapSync` | Waiting for WAL to reach durable storage during bootstrapping. |
| `WALBootstrapWrite` | Waiting for a write of a WAL page during bootstrapping. |
| `WALCopyRead` | Waiting for a read when creating a new WAL segment by copying an existing one. |
| `WALCopySync` | Waiting for a new WAL segment created by copying an existing one to reach durable storage. |
| `WALCopyWrite` | Waiting for a write when creating a new WAL segment by copying an existing one. |
| `WALInitSync` | Waiting for a newly initialized WAL file to reach durable storage. |
| `WALInitWrite` | Waiting for a write while initializing a new WAL file. |
| `WALRead` | Waiting for a read from a WAL file. |
| `WALSenderTimelineHistoryRead` | Waiting for a read from a timeline history file during a walsender timeline command. |
| `WALSync` | Waiting for a WAL file to reach durable storage. |
| `WALSyncMethodAssign` | Waiting for data to reach durable storage while assigning a new WAL sync method. |
| `WALWrite` | Waiting for a write to a WAL file. |
| `LogicalChangesRead` | Waiting for a read from a logical changes file. |
| `LogicalChangesWrite` | Waiting for a write to a logical changes file. |
| `LogicalSubxactRead` | Waiting for a read from a logical subxact file. |
| `LogicalSubxactWrite` | Waiting for a write to a logical subxact file. |
**Table 28.10. Wait Events of Type `IPC`**

| `IPC` Wait Event | Description |
| ---------------- | ----------- |
| `AppendReady` | Waiting for subplan nodes of an `Append` plan node to be ready. |
| `BackendTermination` | Waiting for the termination of another backend. |
| `BackupWaitWalArchive` | Waiting for WAL files required for a backup to be successfully archived. |
| `BgWorkerShutdown` | Waiting for background worker to shut down. |
| `BgWorkerStartup` | Waiting for background worker to start up. |
| `BtreePage` | Waiting for the page number needed to continue a parallel B-tree scan to become available. |
| `BufferIO` | Waiting for buffer I/O to complete. |
| `CheckpointDone` | Waiting for a checkpoint to complete. |
| `CheckpointStart` | Waiting for a checkpoint to start. |
| `ExecuteGather` | Waiting for activity from a child process while executing a `Gather` plan node. |
| `HashBatchAllocate` | Waiting for an elected Parallel Hash participant to allocate a hash table. |
| `HashBatchElect` | Waiting to elect a Parallel Hash participant to allocate a hash table. |
| `HashBatchLoad` | Waiting for other Parallel Hash participants to finish loading a hash table. |
| `HashBuildAllocate` | Waiting for an elected Parallel Hash participant to allocate the initial hash table. |
| `HashBuildElect` | Waiting to elect a Parallel Hash participant to allocate the initial hash table. |
| `HashBuildHashInner` | Waiting for other Parallel Hash participants to finish hashing the inner relation. |
| `HashBuildHashOuter` | Waiting for other Parallel Hash participants to finish partitioning the outer relation. |
| `HashGrowBatchesAllocate` | Waiting for an elected Parallel Hash participant to allocate more batches. |
| `HashGrowBatchesDecide` | Waiting to elect a Parallel Hash participant to decide on future batch growth. |
| `HashGrowBatchesElect` | Waiting to elect a Parallel Hash participant to allocate more batches. |
| `HashGrowBatchesFinish` | Waiting for an elected Parallel Hash participant to decide on future batch growth. |
| `HashGrowBatchesRepartition` | Waiting for other Parallel Hash participants to finish repartitioning. |
| `HashGrowBucketsAllocate` | Waiting for an elected Parallel Hash participant to finish allocating more buckets. |
| `HashGrowBucketsElect` | Waiting to elect a Parallel Hash participant to allocate more buckets. |
| `HashGrowBucketsReinsert` | Waiting for other Parallel Hash participants to finish inserting tuples into new buckets. |
| `LogicalSyncData` | Waiting for a logical replication remote server to send data for initial table synchronization. |
| `LogicalSyncStateChange` | Waiting for a logical replication remote server to change state. |
| `MessageQueueInternal` | Waiting for another process to be attached to a shared message queue. |
| `MessageQueuePutMessage` | Waiting to write a protocol message to a shared message queue. |
| `MessageQueueReceive` | Waiting to receive bytes from a shared message queue. |
| `MessageQueueSend` | Waiting to send bytes to a shared message queue. |
| `ParallelBitmapScan` | Waiting for parallel bitmap scan to become initialized. |
| `ParallelCreateIndexScan` | Waiting for parallel `CREATE INDEX` workers to finish heap scan. |
| `ParallelFinish` | Waiting for parallel workers to finish computing. |
| `ProcArrayGroupUpdate` | Waiting for the group leader to clear the transaction ID at end of a parallel operation. |
| `ProcSignalBarrier` | Waiting for a barrier event to be processed by all backends. |
| `Promote` | Waiting for standby promotion. |
| `RecoveryConflictSnapshot` | Waiting for recovery conflict resolution for a vacuum cleanup. |
| `RecoveryConflictTablespace` | Waiting for recovery conflict resolution for dropping a tablespace. |
| `RecoveryPause` | Waiting for recovery to be resumed. |
| `ReplicationOriginDrop` | Waiting for a replication origin to become inactive so it can be dropped. |
| `ReplicationSlotDrop` | Waiting for a replication slot to become inactive so it can be dropped. |
| `SafeSnapshot` | Waiting to obtain a valid snapshot for a `READ ONLY DEFERRABLE` transaction. |
| `SyncRep` | Waiting for confirmation from a remote server during synchronous replication. |
| `WalReceiverExit` | Waiting for the WAL receiver to exit. |
| `WalReceiverWaitStart` | Waiting for startup process to send initial data for streaming replication. |
| `XactGroupUpdate` | Waiting for the group leader to update transaction status at end of a parallel operation. |

**Table 28.11. Wait Events of Type `Lock`**

| `Lock` Wait Event | Description |
| ----------------- | ----------- |
| `advisory` | Waiting to acquire an advisory user lock. |
| `extend` | Waiting to extend a relation. |
| `frozenid` | Waiting to update `pg_database`.`datfrozenxid` and `pg_database`.`datminmxid`. |
| `object` | Waiting to acquire a lock on a non-relation database object. |
| `page` | Waiting to acquire a lock on a page of a relation. |
| `relation` | Waiting to acquire a lock on a relation. |
| `spectoken` | Waiting to acquire a speculative insertion lock. |
| `transactionid` | Waiting for a transaction to finish. |
| `tuple` | Waiting to acquire a lock on a tuple. |
| `userlock` | Waiting to acquire a user lock. |
| `virtualxid` | Waiting to acquire a virtual transaction ID lock. |
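When sessions show up with wait event type `Lock`, the standard `pg_blocking_pids()` function can be combined with `pg_stat_activity` to see which sessions hold the conflicting locks; a minimal sketch:

```
SELECT pid, pg_blocking_pids(pid) AS blocked_by, wait_event, query
FROM pg_stat_activity
WHERE wait_event_type = 'Lock';
```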
**Table 28.12. Wait Events of Type `LWLock`**

| `LWLock` Wait Event | Description |
| ------------------- | ----------- |
| `AddinShmemInit` | Waiting to manage an extension's space allocation in shared memory. |
| `AutoFile` | Waiting to update the `postgresql.auto.conf` file. |
| `Autovacuum` | Waiting to read or update the current state of autovacuum workers. |
| `AutovacuumSchedule` | Waiting to ensure that a table selected for autovacuum still needs vacuuming. |
| `BackgroundWorker` | Waiting to read or update background worker state. |
| `BtreeVacuum` | Waiting to read or update vacuum-related information for a B-tree index. |
| `BufferContent` | Waiting to access a data page in memory. |
| `BufferMapping` | Waiting to associate a data block with a buffer in the buffer pool. |
| `CheckpointerComm` | Waiting to manage fsync requests. |
| `CommitTs` | Waiting to read or update the last value set for a transaction commit timestamp. |
| `CommitTsBuffer` | Waiting for I/O on a commit timestamp SLRU buffer. |
| `CommitTsSLRU` | Waiting to access the commit timestamp SLRU cache. |
| `ControlFile` | Waiting to read or update the `pg_control` file or create a new WAL file. |
| `DynamicSharedMemoryControl` | Waiting to read or update dynamic shared memory allocation information. |
| `LockFastPath` | Waiting to read or update a process's fast-path lock information. |
| `LockManager` | Waiting to read or update information about "heavyweight" locks. |
| `LogicalRepWorker` | Waiting to read or update the state of logical replication workers. |
| `MultiXactGen` | Waiting to read or update shared multixact state. |
| `MultiXactMemberBuffer` | Waiting for I/O on a multixact member SLRU buffer. |
| `MultiXactMemberSLRU` | Waiting to access the multixact member SLRU cache. |
| `MultiXactOffsetBuffer` | Waiting for I/O on a multixact offset SLRU buffer. |
| `MultiXactOffsetSLRU` | Waiting to access the multixact offset SLRU cache. |
| `MultiXactTruncation` | Waiting to read or truncate multixact information. |
| `NotifyBuffer` | Waiting for I/O on a `NOTIFY` message SLRU buffer. |
| `NotifyQueue` | Waiting to read or update `NOTIFY` messages. |
| `NotifyQueueTail` | Waiting to update limit on `NOTIFY` message storage. |
| `NotifySLRU` | Waiting to access the `NOTIFY` message SLRU cache. |
| `OidGen` | Waiting to allocate a new OID. |
| `OldSnapshotTimeMap` | Waiting to read or update old snapshot control information. |
| `ParallelAppend` | Waiting to choose the next subplan during Parallel Append plan execution. |
| `ParallelHashJoin` | Waiting to synchronize workers during Parallel Hash Join plan execution. |
| `ParallelQueryDSA` | Waiting for parallel query dynamic shared memory allocation. |
| `PerSessionDSA` | Waiting for parallel query dynamic shared memory allocation. |
| `PerSessionRecordType` | Waiting to access a parallel query's information about composite types. |
| `PerSessionRecordTypmod` | Waiting to access a parallel query's information about type modifiers that identify anonymous record types. |
| `PerXactPredicateList` | Waiting to access the list of predicate locks held by the current serializable transaction during a parallel query. |
| `PredicateLockManager` | Waiting to access predicate lock information used by serializable transactions. |
| `ProcArray` | Waiting to access the shared per-process data structures (typically, to get a snapshot or report a session's transaction ID). |
| `RelationMapping` | Waiting to read or update a `pg_filenode.map` file (used to track the filenode assignments of certain system catalogs). |
| `RelCacheInit` | Waiting to read or update a `pg_internal.init` relation cache initialization file. |
| `ReplicationOrigin` | Waiting to create, drop or use a replication origin. |
| `ReplicationOriginState` | Waiting to read or update the progress of one replication origin. |
| `ReplicationSlotAllocation` | Waiting to allocate or free a replication slot. |
| `ReplicationSlotControl` | Waiting to read or update replication slot state. |
| `ReplicationSlotIO` | Waiting for I/O on a replication slot. |
| `SerialBuffer` | Waiting for I/O on a serializable transaction conflict SLRU buffer. |
| `SerializableFinishedList` | Waiting to access the list of finished serializable transactions. |
| `SerializablePredicateList` | Waiting to access the list of predicate locks held by serializable transactions. |
| `SerializableXactHash` | Waiting to read or update information about serializable transactions. |
| `SerialSLRU` | Waiting to access the serializable transaction conflict SLRU cache. |
| `SharedTidBitmap` | Waiting to access a shared TID bitmap during a parallel bitmap index scan. |
| `SharedTupleStore` | Waiting to access a shared tuple store during parallel query. |
| `ShmemIndex` | Waiting to find or allocate space in shared memory. |
| `SInvalRead` | Waiting to retrieve messages from the shared catalog invalidation queue. |
| `SInvalWrite` | Waiting to add a message to the shared catalog invalidation queue. |
| `SubtransBuffer` | Waiting for I/O on a sub-transaction SLRU buffer. |
| `SubtransSLRU` | Waiting to access the sub-transaction SLRU cache. |
| `SyncRep` | Waiting to read or update information about the state of synchronous replication. |
| `SyncScan` | Waiting to select the starting location of a synchronized table scan. |
| `TablespaceCreate` | Waiting to create or drop a tablespace. |
| `TwoPhaseState` | Waiting to read or update the state of prepared transactions. |
| `WALBufMapping` | Waiting to replace a page in WAL buffers. |
| `WALInsert` | Waiting to insert WAL data into a memory buffer. |
| `WALWrite` | Waiting for WAL buffers to be written to disk. |
| `WrapLimitsVacuum` | Waiting to update limits on transaction id and multixact consumption. |
| `XactBuffer` | Waiting for I/O on a transaction status SLRU buffer. |
| `XactSLRU` | Waiting to access the transaction status SLRU cache. |
| `XactTruncation` | Waiting to execute `pg_xact_status` or update the oldest transaction ID available to it. |
| `XidGen` | Waiting to allocate a new transaction ID. |

### Note

Extensions can add `LWLock` types to the list shown in [Table 28.12](monitoring-stats.html#WAIT-EVENT-LWLOCK-TABLE). In some cases, the name assigned by an extension will not be available in all server processes; so an `LWLock` wait event might be reported as just "`extension`" rather than the extension-assigned name.
**Table 28.13. Wait Events of Type `Timeout`**

| `Timeout` Wait Event | Description |
| -------------------- | ----------- |
| `BaseBackupThrottle` | Waiting during base backup when throttling activity. |
| `PgSleep` | Waiting due to a call to `pg_sleep` or a sibling function. |
| `RecoveryApplyDelay` | Waiting to apply WAL during recovery because of a delay setting. |
| `RecoveryRetrieveRetryInterval` | Waiting during recovery when WAL data is not available from any source (`pg_wal`, archive or stream). |
| `VacuumDelay` | Waiting in a cost-based vacuum delay point. |

Here is an example of how wait events can be viewed:

```
SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event is NOT NULL;
 pid  | wait_event_type | wait_event
------+-----------------+------------
 2540 | Lock            | relation
 6644 | LWLock          | ProcArray
(2 rows)
```

### 28.2.4. `pg_stat_replication`

[]()

The `pg_stat_replication` view will contain one row per WAL sender process, showing statistics about replication to that sender's connected standby server. Only directly connected standbys are listed; no information is available about downstream standby servers.

**Table 28.14. `pg_stat_replication` View**

| Column Type

Description | +|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `pid` `integer`

Process ID of a WAL sender process | +| `usesysid` `oid`

OID of the user logged into this WAL sender process | +| `usename` `name`

Name of the user logged into this WAL sender process | +| `application_name` `text`

Name of the application that is connected to this WAL sender | +| `client_addr` `inet`

IP address of the client connected to this WAL sender. If this field is null, it indicates that the client is connected via a Unix socket on the server machine. | +| `client_hostname` `text`

Host name of the connected client, as reported by a reverse DNS lookup of `client_addr`. This field will only be non-null for IP connections, and only when [log\_hostname](runtime-config-logging.html#GUC-LOG-HOSTNAME) is enabled. | +| `client_port` `integer`

TCP port number that the client is using for communication with this WAL sender, or `-1` if a Unix socket is used | +| `backend_start` `timestamp with time zone`

Time when this process was started, i.e., when the client connected to this WAL sender | +| `backend_xmin` `xid`

This standby's `xmin` horizon reported by [hot\_standby\_feedback](runtime-config-replication.html#GUC-HOT-STANDBY-FEEDBACK). | +|`state` `text`

Current WAL sender state. Possible values are:

* `startup`: This WAL sender is starting up.

* `catchup`: This WAL sender's connected standby is catching up with the primary.

* `streaming`: This WAL sender is streaming changes after its connected standby server has caught up with the primary.

* `backup`: This WAL sender is sending a backup.

* `stopping`: This WAL sender is stopping.| +| `sent_lsn` `pg_lsn`

Last write-ahead log location sent on this connection | +| `write_lsn` `pg_lsn`

Last write-ahead log location written to disk by this standby server | +| `flush_lsn` `pg_lsn`

Last write-ahead log location flushed to disk by this standby server | +| `replay_lsn` `pg_lsn`

Last write-ahead log location replayed into the database on this standby server | +| `write_lag` `interval`

Time elapsed between flushing recent WAL locally and receiving notification that this standby server has written it (but not yet flushed it or applied it). This can be used to gauge the delay that `synchronous_commit` level `remote_write` incurred while committing if this server was configured as a synchronous standby. | +| `flush_lag` `interval`

Time elapsed between flushing recent WAL locally and receiving notification that this standby server has written and flushed it (but not yet applied it). This can be used to gauge the delay that `synchronous_commit` level `on` incurred while committing if this server was configured as a synchronous standby. | +| `replay_lag` `interval`

Time elapsed between flushing recent WAL locally and receiving notification that this standby server has written, flushed and applied it. This can be used to gauge the delay that `synchronous_commit` level `remote_apply` incurred while committing if this server was configured as a synchronous standby. | +| `sync_priority` `integer`

Priority of this standby server for being chosen as the synchronous standby in a priority-based synchronous replication. This has no effect in a quorum-based synchronous replication. | +| `sync_state` `text`

Synchronous state of this standby server. Possible values are:

* `async`: This standby server is asynchronous.

* `potential`: This standby server is now asynchronous, but can potentially become synchronous if one of current synchronous ones fails.

* `sync`: This standby server is synchronous.

* `quorum`: This standby server is considered as a candidate for quorum standbys. | +| `reply_time` `timestamp with time zone`

Send time of last reply message received from standby server | + + The lag times reported in the `pg_stat_replication` view are measurements of the time taken for recent WAL to be written, flushed and replayed and for the sender to know about it. These times represent the commit delay that was (or would have been) introduced by each synchronous commit level, if the remote server was configured as a synchronous standby. For an asynchronous standby, the `replay_lag` column approximates the delay before recent transactions became visible to queries. If the standby server has entirely caught up with the sending server and there is no more WAL activity, the most recently measured lag times will continue to be displayed for a short time and then show NULL. + + Lag times work automatically for physical replication. Logical decoding plugins may optionally emit tracking messages; if they do not, the tracking mechanism will simply display NULL lag. + +### Note + + The reported lag times are not predictions of how long it will take for the standby to catch up with the sending server assuming the current rate of replay. Such a system would show similar times while new WAL is being generated, but would differ when the sender becomes idle. In particular, when the standby has caught up completely, `pg_stat_replication` shows the time taken to write, flush and replay the most recent reported WAL location rather than zero as some users might expect. This is consistent with the goal of measuring synchronous commit and transaction visibility delays for recent write transactions. To reduce confusion for users expecting a different model of lag, the lag columns revert to NULL after a short time on a fully replayed idle system. Monitoring systems should choose whether to represent this as missing data, zero or continue to display the last known value. + +### 28.2.5. `pg_stat_replication_slots` + +[]() + + The `pg_stat_replication_slots` view will contain one row per logical replication slot, showing statistics about its usage. + +**Table 28.15. `pg_stat_replication_slots` View** + +| Column Type

Description | +|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `slot_name` `text`

A unique, cluster-wide identifier for the replication slot | +| `spill_txns` `bigint`

Number of transactions spilled to disk once the memory used by logical decoding to decode changes from WAL has exceeded `logical_decoding_work_mem`. The counter gets incremented for both top-level transactions and subtransactions. | +| `spill_count` `bigint`

Number of times transactions were spilled to disk while decoding changes from WAL for this slot. This counter is incremented each time a transaction is spilled, and the same transaction may be spilled multiple times. | +| `spill_bytes` `bigint`

Amount of decoded transaction data spilled to disk while performing decoding of changes from WAL for this slot. This and other spill counters can be used to gauge the I/O which occurred during logical decoding and allow tuning `logical_decoding_work_mem`. | +|`stream_txns` `bigint`

Number of in-progress transactions streamed to the decoding output plugin after the memory used by logical decoding to decode changes from WAL for this slot has exceeded `logical_decoding_work_mem`. Streaming only works with top-level transactions (subtransactions can't be streamed independently), so the counter is not incremented for subtransactions.| +| `stream_count``bigint`

Number of times in-progress transactions were streamed to the decoding output plugin while decoding changes from WAL for this slot. This counter is incremented each time a transaction is streamed, and the same transaction may be streamed multiple times. | +| `stream_bytes``bigint`

Amount of transaction data decoded for streaming in-progress transactions to the decoding output plugin while decoding changes from WAL for this slot. This and other streaming counters for this slot can be used to tune `logical_decoding_work_mem`. | +| `total_txns` `bigint`

Number of decoded transactions sent to the decoding output plugin for this slot. This counts top-level transactions only, and is not incremented for subtransactions. Note that this includes the transactions that are streamed and/or spilled. | +| `total_bytes``bigint`

Amount of transaction data decoded for sending transactions to the decoding output plugin while decoding changes from WAL for this slot. Note that this includes data that is streamed and/or spilled. | +| `stats_reset` `timestamp with time zone`

Time at which these statistics were last reset | + +### 28.2.6. `pg_stat_wal_receiver` + +[]() + + The `pg_stat_wal_receiver` view will contain only one row, showing statistics about the WAL receiver from that receiver's connected server. + +**Table 28.16. `pg_stat_wal_receiver` View** + +| Column Type

Description | +|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `pid` `integer`

Process ID of the WAL receiver process | +| `status` `text`

Activity status of the WAL receiver process | +| `receive_start_lsn` `pg_lsn`

First write-ahead log location used when WAL receiver is started | +| `receive_start_tli` `integer`

First timeline number used when WAL receiver is started | +| `written_lsn` `pg_lsn`

Last write-ahead log location already received and written to disk, but not flushed. This should not be used for data integrity checks. | +| `flushed_lsn` `pg_lsn`

Last write-ahead log location already received and flushed to disk, the initial value of this field being the first log location used when WAL receiver is started | +| `received_tli` `integer`

Timeline number of last write-ahead log location received and flushed to disk, the initial value of this field being the timeline number of the first log location used when WAL receiver is started | +| `last_msg_send_time` `timestamp with time zone`

Send time of last message received from origin WAL sender | +| `last_msg_receipt_time` `timestamp with time zone`

Receipt time of last message received from origin WAL sender | +| `latest_end_lsn` `pg_lsn`

Last write-ahead log location reported to origin WAL sender | +| `latest_end_time` `timestamp with time zone`

Time of last write-ahead log location reported to origin WAL sender | +| `slot_name` `text`

Replication slot name used by this WAL receiver | +|`sender_host` `text`

Host of the PostgreSQL instance this WAL receiver is connected to. This can be a host name, an IP address, or a directory path if the connection is via Unix socket. (The path case can be distinguished because it will always be an absolute path, beginning with `/`.)| +| `sender_port` `integer`

Port number of the PostgreSQL instance this WAL receiver is connected to. | +| `conninfo` `text`

Connection string used by this WAL receiver, with security-sensitive fields obfuscated. | + +### 28.2.7. `pg_stat_subscription` + +[]() + + The `pg_stat_subscription` view will contain one row per subscription for main worker (with null PID if the worker is not running), and additional rows for workers handling the initial data copy of the subscribed tables. + +**Table 28.17. `pg_stat_subscription` View** + +| Column Type

Description | +|--------------------------------------------------------------------------------------------------------------------------| +| `subid` `oid`

OID of the subscription | +| `subname` `name`

Name of the subscription | +| `pid` `integer`

Process ID of the subscription worker process | +| `relid` `oid`

OID of the relation that the worker is synchronizing; null for the main apply worker | +| `received_lsn` `pg_lsn`

Last write-ahead log location received, the initial value of this field being 0 | +| `last_msg_send_time` `timestamp with time zone`

Send time of last message received from origin WAL sender | +|`last_msg_receipt_time` `timestamp with time zone`

Receipt time of last message received from origin WAL sender | +| `latest_end_lsn` `pg_lsn`

Last write-ahead log location reported to origin WAL sender | +|`latest_end_time` `timestamp with time zone`

Time of last write-ahead log location reported to origin WAL sender| + +### 28.2.8. `pg_stat_ssl` + +[]() + + The `pg_stat_ssl` view will contain one row per backend or WAL sender process, showing statistics about SSL usage on this connection. It can be joined to `pg_stat_activity` or `pg_stat_replication` on the `pid` column to get more details about the connection. + +**Table 28.18. `pg_stat_ssl` View** + +| Column Type

Description | +|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `pid` `integer`

Process ID of a backend or WAL sender process | +| `ssl` `boolean`

True if SSL is used on this connection | +| `version` `text`

Version of SSL in use, or NULL if SSL is not in use on this connection | +| `cipher` `text`

Name of SSL cipher in use, or NULL if SSL is not in use on this connection | +| `bits` `integer`

Number of bits in the encryption algorithm used, or NULL if SSL is not used on this connection | +| `client_dn` `text`

Distinguished Name (DN) field from the client certificate used, or NULL if no client certificate was supplied or if SSL is not in use on this connection. This field is truncated if the DN field is longer than `NAMEDATALEN` (64 characters in a standard build). | +|`client_serial` `numeric`

Serial number of the client certificate, or NULL if no client certificate was supplied or if SSL is not in use on this connection. The combination of certificate serial number and certificate issuer uniquely identifies a certificate (unless the issuer erroneously reuses serial numbers).| +| `issuer_dn` `text`

DN of the issuer of the client certificate, or NULL if no client certificate was supplied or if SSL is not in use on this connection. This field is truncated like `client_dn`. | + +### 28.2.9. `pg_stat_gssapi` + +[]() + + The `pg_stat_gssapi` view will contain one row per backend, showing information about GSSAPI usage on this connection. It can be joined to `pg_stat_activity` or `pg_stat_replication` on the `pid` column to get more details about the connection. + +**Table 28.19. `pg_stat_gssapi` View** + +| Column Type

Description | +|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `pid` `integer`

Process ID of a backend | +| `gss_authenticated` `boolean`

True if GSSAPI authentication was used for this connection | +|`principal` `text`

Principal used to authenticate this connection, or NULL if GSSAPI was not used to authenticate this connection. This field is truncated if the principal is longer than `NAMEDATALEN` (64 characters in a standard build).| +| `encrypted` `boolean`

True if GSSAPI encryption is in use on this connection | + +### 28.2.10. `pg_stat_archiver` + +[]() + + The `pg_stat_archiver` view will always have a single row, containing data about the archiver process of the cluster. + +**Table 28.20. `pg_stat_archiver` View** + +| Column Type

Description | +|-------------------------------------------------------------------------------------------------------| +| `archived_count` `bigint`

Number of WAL files that have been successfully archived | +| `last_archived_wal` `text`

Name of the last WAL file successfully archived | +|`last_archived_time` `timestamp with time zone`

Time of the last successful archive operation| +| `failed_count` `bigint`

Number of failed attempts for archiving WAL files | +| `last_failed_wal` `text`

Name of the WAL file of the last failed archival operation | +| `last_failed_time` `timestamp with time zone`

Time of the last failed archival operation | +| `stats_reset` `timestamp with time zone`

Time at which these statistics were last reset | + +### 28.2.11. `pg_stat_bgwriter` + +[]() + + The `pg_stat_bgwriter` view will always have a single row, containing global data for the cluster. + +**Table 28.21. `pg_stat_bgwriter` View** + +| Column Type

Description | +|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `checkpoints_timed` `bigint`

Number of scheduled checkpoints that have been performed | +| `checkpoints_req` `bigint`

Number of requested checkpoints that have been performed | +| `checkpoint_write_time` `double precision`

Total amount of time that has been spent in the portion of checkpoint processing where files are written to disk, in milliseconds | +| `checkpoint_sync_time` `double precision`

Total amount of time that has been spent in the portion of checkpoint processing where files are synchronized to disk, in milliseconds | +| `buffers_checkpoint` `bigint`

Number of buffers written during checkpoints | +| `buffers_clean` `bigint`

Number of buffers written by the background writer | +| `maxwritten_clean` `bigint`

Number of times the background writer stopped a cleaning scan because it had written too many buffers | +| `buffers_backend` `bigint`

Number of buffers written directly by a backend | +|`buffers_backend_fsync` `bigint`

Number of times a backend had to execute its own `fsync` call (normally the background writer handles those even when the backend does its own write)| +| `buffers_alloc` `bigint`

Number of buffers allocated | +| `stats_reset` `timestamp with time zone`

Time at which these statistics were last reset | + +### 28.2.12. `pg_stat_wal` + +[]() + + The `pg_stat_wal` view will always have a single row, containing data about WAL activity of the cluster. + +**Table 28.22. `pg_stat_wal` View** + +| Column Type

Description | +|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `wal_records` `bigint`

Total number of WAL records generated | +| `wal_fpi` `bigint`

Total number of WAL full page images generated | +| `wal_bytes` `numeric`

Total amount of WAL generated in bytes | +| `wal_buffers_full` `bigint`

Number of times WAL data was written to disk because WAL buffers became full | +| `wal_write` `bigint`

Number of times WAL buffers were written out to disk via `XLogWrite` request. See [Section 30.5](wal-configuration.html) for more information about the internal WAL function `XLogWrite`. | +|`wal_sync` `bigint`

Number of times WAL files were synced to disk via `issue_xlog_fsync` request (if [fsync](runtime-config-wal.html#GUC-FSYNC) is `on` and [wal\_sync\_method](runtime-config-wal.html#GUC-WAL-SYNC-METHOD) is either `fdatasync`, `fsync` or `fsync_writethrough`, otherwise zero). See [Section 30.5](wal-configuration.html) for more information about the internal WAL function `issue_xlog_fsync`.| +| `wal_write_time` `double precision`

Total amount of time spent writing WAL buffers to disk via `XLogWrite` request, in milliseconds (if [track\_wal\_io\_timing](runtime-config-statistics.html#GUC-TRACK-WAL-IO-TIMING) is enabled, otherwise zero). This includes the sync time when `wal_sync_method` is either `open_datasync` or `open_sync`. | +| `wal_sync_time` `double precision`

Total amount of time spent syncing WAL files to disk via `issue_xlog_fsync` request, in milliseconds (if `track_wal_io_timing` is enabled, `fsync` is `on`, and `wal_sync_method` is either `fdatasync`, `fsync` or `fsync_writethrough`, otherwise zero). | +| `stats_reset` `timestamp with time zone`

Time at which these statistics were last reset | + +### 28.2.13. `pg_stat_database` + +[]() + + The `pg_stat_database` view will contain one row for each database in the cluster, plus one for shared objects, showing database-wide statistics. + +**Table 28.23. `pg_stat_database` View** + +| Column Type

Description | +|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `datid` `oid`

OID of this database, or 0 for objects belonging to a shared relation | +| `datname` `name`

Name of this database, or `NULL` for shared objects. | +| `numbackends` `integer`

Number of backends currently connected to this database, or `NULL` for shared objects. This is the only column in this view that returns a value reflecting current state; all other columns return the accumulated values since the last reset. | +| `xact_commit` `bigint`

Number of transactions in this database that have been committed | +| `xact_rollback` `bigint`

Number of transactions in this database that have been rolled back | +| `blks_read` `bigint`

Number of disk blocks read in this database | +| `blks_hit` `bigint`

Number of times disk blocks were found already in the buffer cache, so that a read was not necessary (this only includes hits in the PostgreSQL buffer cache, not the operating system's file system cache) | +| `tup_returned` `bigint`

Number of rows returned by queries in this database | +| `tup_fetched` `bigint`

Number of rows fetched by queries in this database | +| `tup_inserted` `bigint`

Number of rows inserted by queries in this database | +| `tup_updated` `bigint`

Number of rows updated by queries in this database | +| `tup_deleted` `bigint`

Number of rows deleted by queries in this database | +| `conflicts` `bigint`

Number of queries canceled due to conflicts with recovery in this database. (Conflicts occur only on standby servers; see [`pg_stat_database_conflicts`](monitoring-stats.html#MONITORING-PG-STAT-DATABASE-CONFLICTS-VIEW) for details.) | +|`temp_files` `bigint`

Number of temporary files created by queries in this database. All temporary files are counted, regardless of why the temporary file was created (e.g., sorting or hashing), and regardless of the [log\_temp\_files](runtime-config-logging.html#GUC-LOG-TEMP-FILES) setting.| +| `temp_bytes` `bigint`

Total amount of data written to temporary files by queries in this database. All temporary files are counted, regardless of why the temporary file was created, and regardless of the [log\_temp\_files](runtime-config-logging.html#GUC-LOG-TEMP-FILES) setting. | +| `deadlocks` `bigint`

Number of deadlocks detected in this database | +| `checksum_failures` `bigint`

Number of data page checksum failures detected in this database (or on a shared object), or NULL if data checksums are not enabled. | +| `checksum_last_failure` `timestamp with time zone`

Time at which the last data page checksum failure was detected in this database (or on a shared object), or NULL if data checksums are not enabled. | +| `blk_read_time` `double precision`

Time spent reading data file blocks by backends in this database, in milliseconds (if [track\_io\_timing](runtime-config-statistics.html#GUC-TRACK-IO-TIMING) is enabled, otherwise zero) | +| `blk_write_time` `double precision`

Time spent writing data file blocks by backends in this database, in milliseconds (if [track\_io\_timing](runtime-config-statistics.html#GUC-TRACK-IO-TIMING) is enabled, otherwise zero) | +| `session_time` `double precision`

Time spent by database sessions in this database, in milliseconds (note that statistics are only updated when the state of a session changes, so if sessions have been idle for a long time, this idle time won't be included) | +| `active_time` `double precision`

Time spent executing SQL statements in this database, in milliseconds (this corresponds to the states `active` and `fastpath function call` in [`pg_stat_activity`](monitoring-stats.html#MONITORING-PG-STAT-ACTIVITY-VIEW)) | +|`idle_in_transaction_time` `double precision`

Time spent idling while in a transaction in this database, in milliseconds (this corresponds to the states `idle in transaction` and `idle in transaction (aborted)` in [`pg_stat_activity`](monitoring-stats.html#MONITORING-PG-STAT-ACTIVITY-VIEW)) | +| `sessions` `bigint`

Total number of sessions established to this database | +| `sessions_abandoned` `bigint`

Number of database sessions to this database that were terminated because connection to the client was lost | +| `sessions_fatal` `bigint`

Number of database sessions to this database that were terminated by fatal errors | +| `sessions_killed` `bigint`

Number of database sessions to this database that were terminated by operator intervention | +| `stats_reset` `timestamp with time zone`

Time at which these statistics were last reset | + +### 28.2.14. `pg_stat_database_conflicts` + +[]() + + The `pg_stat_database_conflicts` view will contain one row per database, showing database-wide statistics about query cancels occurring due to conflicts with recovery on standby servers. This view will only contain information on standby servers, since conflicts do not occur on primary servers. + +**Table 28.24. `pg_stat_database_conflicts` View** + +| Column Type

Description | +|---------------------------------------------------------------------------------------------------------------------------| +| `datid` `oid`

OID of a database | +| `datname` `name`

Name of this database | +|`confl_tablespace` `bigint`

Number of queries in this database that have been canceled due to dropped tablespaces| +| `confl_lock` `bigint`

Number of queries in this database that have been canceled due to lock timeouts | +| `confl_snapshot` `bigint`

Number of queries in this database that have been canceled due to old snapshots | +| `confl_bufferpin` `bigint`

Number of queries in this database that have been canceled due to pinned buffers | +| `confl_deadlock` `bigint`

Number of queries in this database that have been canceled due to deadlocks | + +### 28.2.15. `pg_stat_all_tables` + +[]() + + The `pg_stat_all_tables` view will contain one row for each table in the current database (including TOAST tables), showing statistics about accesses to that specific table. The `pg_stat_user_tables` and `pg_stat_sys_tables` views contain the same information, but filtered to only show user and system tables respectively. + +**Table 28.25. `pg_stat_all_tables` View** + +| Column Type

Description | +|-----------------------------------------------------------------------------------------------------------------------------------| +| `relid` `oid`

OID of a table | +| `schemaname` `name`

Name of the schema that this table is in | +| `relname` `name`

Name of this table | +| `seq_scan` `bigint`

Number of sequential scans initiated on this table | +| `seq_tup_read` `bigint`

Number of live rows fetched by sequential scans | +| `idx_scan` `bigint`

Number of index scans initiated on this table | +| `idx_tup_fetch` `bigint`

Number of live rows fetched by index scans | +| `n_tup_ins` `bigint`

Number of rows inserted | +| `n_tup_upd` `bigint`

Number of rows updated (includes HOT updated rows) | +| `n_tup_del` `bigint`

Number of rows deleted | +| `n_tup_hot_upd` `bigint`

Number of rows HOT updated (i.e., with no separate index update required) | +| `n_live_tup` `bigint`

Estimated number of live rows | +| `n_dead_tup` `bigint`

Estimated number of dead rows | +| `n_mod_since_analyze` `bigint`

Estimated number of rows modified since this table was last analyzed | +| `n_ins_since_vacuum` `bigint`

Estimated number of rows inserted since this table was last vacuumed | +|`last_vacuum` `timestamp with time zone`

Last time at which this table was manually vacuumed (not counting `VACUUM FULL`)| +| `last_autovacuum` `timestamp with time zone`

Last time at which this table was vacuumed by the autovacuum daemon | +| `last_analyze` `timestamp with time zone`

Last time at which this table was manually analyzed | +| `last_autoanalyze` `timestamp with time zone`

Last time at which this table was analyzed by the autovacuum daemon | +| `vacuum_count` `bigint`

Number of times this table has been manually vacuumed (not counting `VACUUM FULL`) | +| `autovacuum_count` `bigint`

Number of times this table has been vacuumed by the autovacuum daemon | +| `analyze_count` `bigint`

Number of times this table has been manually analyzed | +| `autoanalyze_count` `bigint`

Number of times this table has been analyzed by the autovacuum daemon | + +### 28.2.16. `pg_stat_all_indexes` + +[]() + + The `pg_stat_all_indexes` view will contain one row for each index in the current database, showing statistics about accesses to that specific index. The `pg_stat_user_indexes` and `pg_stat_sys_indexes` views contain the same information, but filtered to only show user and system indexes respectively. + +**Table 28.26. `pg_stat_all_indexes` View** + +| Column Type

Description | +|-----------------------------------------------------------------------------------------------------------| +| `relid` `oid`

OID of the table for this index | +| `indexrelid` `oid`

OID of this index | +| `schemaname` `name`

Name of the schema this index is in | +| `relname` `name`

Name of the table for this index | +| `indexrelname` `name`

Name of this index | +| `idx_scan` `bigint`

Number of index scans initiated on this index | +| `idx_tup_read` `bigint`

Number of index entries returned by scans on this index | +|`idx_tup_fetch` `bigint`

Number of live table rows fetched by simple index scans using this index| + + Indexes can be used by simple index scans, “bitmap” index scans, and the optimizer. In a bitmap scan the output of several indexes can be combined via AND or OR rules, so it is difficult to associate individual heap row fetches with specific indexes when a bitmap scan is used. Therefore, a bitmap scan increments the `pg_stat_all_indexes`.`idx_tup_read` count(s) for the index(es) it uses, and it increments the `pg_stat_all_tables`.`idx_tup_fetch` count for the table, but it does not affect `pg_stat_all_indexes`.`idx_tup_fetch`. The optimizer also accesses indexes to check for supplied constants whose values are outside the recorded range of the optimizer statistics because the optimizer statistics might be stale. + +### Note + + The `idx_tup_read` and `idx_tup_fetch` counts can be different even without any use of bitmap scans, because `idx_tup_read` counts index entries retrieved from the index while `idx_tup_fetch` counts live rows fetched from the table. The latter will be less if any dead or not-yet-committed rows are fetched using the index, or if any heap fetches are avoided by means of an index-only scan. + +### 28.2.17. `pg_statio_all_tables` + +[]() + + The `pg_statio_all_tables` view will contain one row for each table in the current database (including TOAST tables), showing statistics about I/O on that specific table. The `pg_statio_user_tables` and `pg_statio_sys_tables` views contain the same information, but filtered to only show user and system tables respectively. + +**Table 28.27. `pg_statio_all_tables` View** + +| Column Type

Description | +|-------------------------------------------------------------------------------------------------------------| +| `relid` `oid`

OID of a table | +| `schemaname` `name`

Name of the schema that this table is in | +| `relname` `name`

Name of this table | +| `heap_blks_read` `bigint`

Number of disk blocks read from this table | +| `heap_blks_hit` `bigint`

Number of buffer hits in this table | +| `idx_blks_read` `bigint`

Number of disk blocks read from all indexes on this table | +| `idx_blks_hit` `bigint`

Number of buffer hits in all indexes on this table | +| `toast_blks_read` `bigint`

Number of disk blocks read from this table's TOAST table (if any) | +| `toast_blks_hit` `bigint`

Number of buffer hits in this table's TOAST table (if any) | +|`tidx_blks_read` `bigint`

Number of disk blocks read from this table's TOAST table indexes (if any)| +| `tidx_blks_hit` `bigint`

Number of buffer hits in this table's TOAST table indexes (if any) | + +### 28.2.18. `pg_statio_all_indexes` + +[]() + + The `pg_statio_all_indexes` view will contain one row for each index in the current database, showing statistics about I/O on that specific index. The `pg_statio_user_indexes` and `pg_statio_sys_indexes` views contain the same information, but filtered to only show user and system indexes respectively. + +**Table 28.28. `pg_statio_all_indexes` View** + +| Column Type

Description | +|-----------------------------------------------------------------------------| +| `relid` `oid`

OID of the table for this index | +| `indexrelid` `oid`

OID of this index | +| `schemaname` `name`

Name of the schema this index is in | +| `relname` `name`

Name of the table for this index | +| `indexrelname` `name`

Name of this index | +|`idx_blks_read` `bigint`

Number of disk blocks read from this index| +| `idx_blks_hit` `bigint`

Number of buffer hits in this index | + +### 28.2.19. `pg_statio_all_sequences` + +[]() + + The `pg_statio_all_sequences` view will contain one row for each sequence in the current database, showing statistics about I/O on that specific sequence. + +**Table 28.29. `pg_statio_all_sequences` View** + +| Column Type

Description | +|----------------------------------------------------------------------------| +| `relid` `oid`

OID of a sequence | +| `schemaname` `name`

Name of the schema this sequence is in | +| `relname` `name`

Name of this sequence | +|`blks_read` `bigint`

Number of disk blocks read from this sequence| +| `blks_hit` `bigint`

Number of buffer hits in this sequence | + +### 28.2.20. `pg_stat_user_functions` + +[]() + + The `pg_stat_user_functions` view will contain one row for each tracked function, showing statistics about executions of that function. The [track\_functions](runtime-config-statistics.html#GUC-TRACK-FUNCTIONS) parameter controls exactly which functions are tracked. + +**Table 28.30. `pg_stat_user_functions` View** + +| Column Type

Description | +|----------------------------------------------------------------------------------------------------------------------------------------------| +| `funcid` `oid`

OID of a function | +| `schemaname` `name`

Name of the schema this function is in | +| `funcname` `name`

Name of this function | +| `calls` `bigint`

Number of times this function has been called | +| `total_time` `double precision`

Total time spent in this function and all other functions called by it, in milliseconds | +|`self_time` `double precision`

Total time spent in this function itself, not including other functions called by it, in milliseconds| + +### 28.2.21. `pg_stat_slru` + +[]()[]() + +PostgreSQL accesses certain on-disk information via *SLRU* (simple least-recently-used) caches. The `pg_stat_slru` view will contain one row for each tracked SLRU cache, showing statistics about access to cached pages. + +**Table 28.31. `pg_stat_slru` View** + +| Column Type
### 28.2.21. `pg_stat_slru`

[]()[]()

PostgreSQL accesses certain on-disk information via *SLRU* (simple least-recently-used) caches. The `pg_stat_slru` view will contain one row for each tracked SLRU cache, showing statistics about access to cached pages.

**Table 28.31. `pg_stat_slru` View**

| Column Type | Description |
|-------------|-------------|
| `name` `text` | Name of the SLRU |
| `blks_zeroed` `bigint` | Number of blocks zeroed during initializations |
| `blks_hit` `bigint` | Number of times disk blocks were found already in the SLRU, so that a read was not necessary (this only includes hits in the SLRU, not the operating system's file system cache) |
| `blks_read` `bigint` | Number of disk blocks read for this SLRU |
| `blks_written` `bigint` | Number of disk blocks written for this SLRU |
| `blks_exists` `bigint` | Number of blocks checked for existence for this SLRU |
| `flushes` `bigint` | Number of flushes of dirty data for this SLRU |
| `truncates` `bigint` | Number of truncates for this SLRU |
| `stats_reset` `timestamp with time zone` | Time at which these statistics were last reset |
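Analogous to the block-I/O views above, the SLRU counters can be turned into a per-cache hit ratio. A minimal sketch (keeping in mind the caveat from Table 28.31 that hits here do not include the operating system's file system cache):

```
SELECT name,
       blks_hit,
       blks_read,
       round(blks_hit::numeric
             / NULLIF(blks_hit + blks_read, 0), 4) AS hit_ratio
FROM pg_stat_slru
ORDER BY name;
```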
### 28.2.22. Statistics Functions

Other ways of looking at the statistics can be set up by writing queries that use the same underlying statistics access functions used by the standard views shown above. For details such as the functions' names, consult the definitions of the standard views. (For example, in psql you could issue `\d+ pg_stat_activity`.) The access functions for per-database statistics take a database OID as an argument to identify which database to report on. The per-table and per-index functions take a table or index OID. The functions for per-function statistics take a function OID. Note that only tables, indexes, and functions in the current database can be seen with these functions.

Additional functions related to statistics collection are listed in [Table 28.32](monitoring-stats.html#MONITORING-STATS-FUNCS-TABLE).

**Table 28.32. Additional Statistics Functions**

| Function | Description |
|----------|-------------|
| `pg_backend_pid` () → `integer` | Returns the process ID of the server process attached to the current session. |
| []() `pg_stat_get_activity` ( `integer` ) → `setof record` | Returns a record of information about the backend with the specified process ID, or one record for each active backend in the system if `NULL` is specified. The fields returned are a subset of those in the `pg_stat_activity` view. |
| []() `pg_stat_get_snapshot_timestamp` () → `timestamp with time zone` | Returns the timestamp of the current statistics snapshot. |
| []() `pg_stat_clear_snapshot` () → `void` | Discards the current statistics snapshot. |
| []() `pg_stat_reset` () → `void` | Resets all statistics counters for the current database to zero. This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function. |
| []() `pg_stat_reset_shared` ( `text` ) → `void` | Resets some cluster-wide statistics counters to zero, depending on the argument. The argument can be `bgwriter` to reset all the counters shown in the `pg_stat_bgwriter` view, `archiver` to reset all the counters shown in the `pg_stat_archiver` view or `wal` to reset all the counters shown in the `pg_stat_wal` view. This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function. |
| []() `pg_stat_reset_single_table_counters` ( `oid` ) → `void` | Resets statistics for a single table or index in the current database to zero. This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function. |
| []() `pg_stat_reset_single_function_counters` ( `oid` ) → `void` | Resets statistics for a single function in the current database to zero. This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function. |
| []() `pg_stat_reset_slru` ( `text` ) → `void` | Resets statistics to zero for a single SLRU cache, or for all SLRUs in the cluster. If the argument is NULL, all counters shown in the `pg_stat_slru` view for all SLRU caches are reset. The argument can be one of `CommitTs`, `MultiXactMember`, `MultiXactOffset`, `Notify`, `Serial`, `Subtrans`, or `Xact` to reset the counters for only that entry. If the argument is `other` (or indeed, any unrecognized name), then the counters for all other SLRU caches, such as extension-defined caches, are reset. This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function. |
| []() `pg_stat_reset_replication_slot` ( `text` ) → `void` | Resets statistics of the replication slot defined by the argument. If the argument is `NULL`, resets statistics for all the replication slots. This function is restricted to superusers by default, but other users can be granted EXECUTE to run the function. |

`pg_stat_get_activity`, the underlying function of the `pg_stat_activity` view, returns a set of records containing all the available information about each backend process. Sometimes it may be more convenient to obtain just a subset of this information. In such cases, an older set of per-backend statistics access functions can be used; these are shown in [Table 28.33](monitoring-stats.html#MONITORING-STATS-BACKEND-FUNCS-TABLE). These access functions use a backend ID number, which ranges from one to the number of currently active backends. The function `pg_stat_get_backend_idset` provides a convenient way to generate one row for each active backend for invoking these functions. For example, to show the PIDs and current queries of all backends:

```
SELECT pg_stat_get_backend_pid(s.backendid) AS pid,
       pg_stat_get_backend_activity(s.backendid) AS query
FROM (SELECT pg_stat_get_backend_idset() AS backendid) AS s;
```
**Table 28.33. Per-Backend Statistics Functions**

| Function | Description |
|----------|-------------|
| []() `pg_stat_get_backend_idset` () → `setof integer` | Returns the set of currently active backend ID numbers (from 1 to the number of active backends). |
| []() `pg_stat_get_backend_activity` ( `integer` ) → `text` | Returns the text of this backend's most recent query. |
| []() `pg_stat_get_backend_activity_start` ( `integer` ) → `timestamp with time zone` | Returns the time when the backend's most recent query was started. |
| []() `pg_stat_get_backend_client_addr` ( `integer` ) → `inet` | Returns the IP address of the client connected to this backend. |
| []() `pg_stat_get_backend_client_port` ( `integer` ) → `integer` | Returns the TCP port number that the client is using for communication. |
| []() `pg_stat_get_backend_dbid` ( `integer` ) → `oid` | Returns the OID of the database this backend is connected to. |
| []() `pg_stat_get_backend_pid` ( `integer` ) → `integer` | Returns the process ID of this backend. |
| []() `pg_stat_get_backend_start` ( `integer` ) → `timestamp with time zone` | Returns the time when this process was started. |
| []() `pg_stat_get_backend_userid` ( `integer` ) → `oid` | Returns the OID of the user logged into this backend. |
| []() `pg_stat_get_backend_wait_event_type` ( `integer` ) → `text` | Returns the wait event type name if this backend is currently waiting, otherwise NULL. See [Table 28.4](monitoring-stats.html#WAIT-EVENT-TABLE) for details. |
| []() `pg_stat_get_backend_wait_event` ( `integer` ) → `text` | Returns the wait event name if this backend is currently waiting, otherwise NULL. See [Table 28.5](monitoring-stats.html#WAIT-EVENT-ACTIVITY-TABLE) through [Table 28.13](monitoring-stats.html#WAIT-EVENT-TIMEOUT-TABLE). |
| []() `pg_stat_get_backend_xact_start` ( `integer` ) → `timestamp with time zone` | Returns the time when the backend's current transaction was started. |

diff --git a/docs/X/mvcc-intro.md b/docs/en/mvcc-intro.md similarity index 100% rename from docs/X/mvcc-intro.md rename to docs/en/mvcc-intro.md diff --git a/docs/en/mvcc-intro.zh.md b/docs/en/mvcc-intro.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..59d6ea70b237df3e9cabe9ddbac75fe12b90faf7 --- /dev/null +++ b/docs/en/mvcc-intro.zh.md @@ -0,0 +1,9 @@

## 13.1. Introduction

[](<>)[](<>)[](<>)[](<>)

PostgreSQL provides a rich set of tools for developers to manage concurrent access to data. Internally, data consistency is maintained by using a multiversion model (Multiversion Concurrency Control, MVCC). This means that each SQL statement sees a snapshot of data (a *database version*) as it was some time ago, regardless of the current state of the underlying data. This prevents statements from viewing inconsistent data produced by concurrent transactions performing updates on the same data rows, providing *transaction isolation* for each database session. MVCC, by eschewing the locking methodologies of traditional database systems, minimizes lock contention in order to allow for reasonable performance in multiuser environments.

The main advantage of using the MVCC model of concurrency control rather than locking is that in MVCC locks acquired for querying (reading) data do not conflict with locks acquired for writing data, and so reading never blocks writing and writing never blocks reading. PostgreSQL maintains this guarantee even when providing the strictest level of transaction isolation through the use of an innovative *Serializable Snapshot Isolation* (SSI) level.

Table- and row-level locking facilities are also available in PostgreSQL for applications which don't generally need full transaction isolation and prefer to explicitly manage particular points of conflict. However, proper use of MVCC will generally provide better performance than locks. In addition, application-defined advisory locks provide a mechanism for acquiring locks that are not tied to a single transaction.

diff --git a/docs/X/non-durability.md b/docs/en/non-durability.md similarity index 100% rename from docs/X/non-durability.md rename to docs/en/non-durability.md diff --git a/docs/en/non-durability.zh.md b/docs/en/non-durability.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..105a4f9e9b0887724f4af04d07f61a598de066a4 --- /dev/null +++ b/docs/en/non-durability.zh.md @@ -0,0 +1,17 @@

## 14.5. Non-Durable Settings

[](<>)

Durability is a database feature that guarantees the recording of committed transactions even if the server crashes or loses power. However, durability adds significant database overhead, so if your site does not require such a guarantee, PostgreSQL can be configured to run much faster. The following are configuration changes you can make in such cases to improve performance; a sample configuration collecting the server settings appears after this list. Except as noted below, durability is still guaranteed in case of a crash of the database software; only an abrupt operating system crash creates a risk of data loss or corruption when these settings are used.

- Place the database cluster's data directory in a memory-backed file system (i.e., RAM disk). This eliminates all database disk I/O, but limits data storage to the amount of available memory (and perhaps swap).

- Turn off [fsync](runtime-config-wal.html#GUC-FSYNC); there is no need to flush data to disk.

- Turn off [synchronous_commit](runtime-config-wal.html#GUC-SYNCHRONOUS-COMMIT); there might be no need to force WAL writes to disk on every commit. This setting does risk transaction loss (though not data corruption) in case of a crash of the *database*.

- Turn off [full_page_writes](runtime-config-wal.html#GUC-FULL-PAGE-WRITES); there is no need to guard against partial page writes.

- Increase [max_wal_size](runtime-config-wal.html#GUC-MAX-WAL-SIZE) and [checkpoint_timeout](runtime-config-wal.html#GUC-CHECKPOINT-TIMEOUT); this reduces the frequency of checkpoints, but increases the storage requirements of `/pg_wal`.

- Create [unlogged tables](sql-createtable.html#SQL-CREATETABLE-UNLOGGED) to avoid WAL writes, though it makes the tables non-crash-safe.
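The server settings above can be applied without editing postgresql.conf by using `ALTER SYSTEM`, as in this sketch (the values are illustrative only; every line trades durability for speed, as described above):

```
-- WARNING: with these settings an OS crash can lose or corrupt data.
ALTER SYSTEM SET fsync = off;
ALTER SYSTEM SET synchronous_commit = off;
ALTER SYSTEM SET full_page_writes = off;
ALTER SYSTEM SET max_wal_size = '4GB';       -- fewer checkpoints ...
ALTER SYSTEM SET checkpoint_timeout = '30min'; -- ... but more space in pg_wal
SELECT pg_reload_conf();  -- these settings take effect on a configuration reload
```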
diff --git a/docs/X/pgcrypto.md b/docs/en/pgcrypto.md similarity index 100% rename from docs/X/pgcrypto.md rename to docs/en/pgcrypto.md diff --git a/docs/en/pgcrypto.zh.md b/docs/en/pgcrypto.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..1d2e9ebe8e8c53c9c5f0a8029871941bd7042eb6 --- /dev/null +++ b/docs/en/pgcrypto.zh.md @@ -0,0 +1,616 @@

## F.26. pgcrypto

[F.26.1. General Hashing Functions](pgcrypto.html#id-1.11.7.35.6)[F.26.2. Password Hashing Functions](pgcrypto.html#id-1.11.7.35.7)[F.26.3. PGP Encryption Functions](pgcrypto.html#id-1.11.7.35.8)[F.26.4. Raw Encryption Functions](pgcrypto.html#id-1.11.7.35.9)[F.26.5. Random-Data Functions](pgcrypto.html#id-1.11.7.35.10)[F.26.6. Notes](pgcrypto.html#id-1.11.7.35.11)[F.26.7. Author](pgcrypto.html#id-1.11.7.35.12)

[](<>)[](<>)

The `pgcrypto` module provides cryptographic functions for PostgreSQL.

This module is considered “trusted”, that is, it can be installed by non-superusers who have `CREATE` privilege on the current database.

### F.26.1. General Hashing Functions

#### F.26.1.1. `digest()`

[](<>)

```
digest(data text, type text) returns bytea
digest(data bytea, type text) returns bytea
```

Computes a binary hash of the given *`data`*. *`type`* is the algorithm to use. Standard algorithms are `md5`, `sha1`, `sha224`, `sha256`, `sha384` and `sha512`. If `pgcrypto` was built with OpenSSL, more algorithms are available, as detailed in [Table F.19](pgcrypto.html#PGCRYPTO-WITH-WITHOUT-OPENSSL).

If you want the digest as a hexadecimal string, use `encode()` on the result. For example:

```
CREATE OR REPLACE FUNCTION sha1(bytea) returns text AS $$
    SELECT encode(digest($1, 'sha1'), 'hex')
$$ LANGUAGE SQL STRICT IMMUTABLE;
```

#### F.26.1.2. `hmac()`

[](<>)

```
hmac(data text, key text, type text) returns bytea
hmac(data bytea, key bytea, type text) returns bytea
```

Calculates hashed MAC for *`data`* with key *`key`*. *`type`* is the same as in `digest()`.

This is similar to `digest()` but the hash can only be recalculated knowing the key. This prevents the scenario of someone altering data and also changing the hash to match.

If the key is larger than the hash block size it will first be hashed and the result will be used as key.

### F.26.2. Password Hashing Functions

The functions `crypt()` and `gen_salt()` are specifically designed for hashing passwords. `crypt()` does the hashing and `gen_salt()` prepares algorithm parameters for it.

The algorithms in `crypt()` differ from the usual MD5 or SHA1 hashing algorithms in the following respects:

1. They are slow. As the amount of data is so small, this is the only way to make brute-forcing passwords hard.

2. They use a random value, called the *salt*, so that users having the same password will have different encrypted passwords. This is also an additional defense against reversing the algorithm.

3. They include the algorithm type in the result, so passwords hashed with different algorithms can co-exist.

4. Some of them are adaptive — that means when computers get faster, you can tune the algorithm to be slower, without introducing incompatibility with existing passwords.

[Table F.16](pgcrypto.html#PGCRYPTO-CRYPT-ALGORITHMS) lists the algorithms supported by the `crypt()` function.

**Table F.16. Supported Algorithms for `crypt()`**
| Algorithm | Max Password Length | Adaptive? | Salt Bits | Output Length | Description |
| --------- | ------------------- | --------- | --------- | ------------- | ----------- |
| `bf` | 72 | yes | 128 | 60 | Blowfish-based, variant 2a |
| `md5` | unlimited | no | 48 | 34 | MD5-based crypt |
| `xdes` | 8 | yes | 24 | 20 | Extended DES |
| `des` | 8 | no | 12 | 13 | Original UNIX crypt |

#### F.26.2.1. `crypt()`

[](<>)

```
crypt(password text, salt text) returns text
```

Calculates a crypt(3)-style hash of *`password`*. When storing a new password, you need to use `gen_salt()` to generate a new *`salt`* value. To check a password, pass the stored hash value as *`salt`*, and test whether the result matches the stored value.

Example of setting a new password:

```
UPDATE ... SET pswhash = crypt('new password', gen_salt('md5'));
```

Example of authentication:

```
SELECT (pswhash = crypt('entered password', pswhash)) AS pswmatch FROM ... ;
```

This returns `true` if the entered password is correct.

#### F.26.2.2. `gen_salt()`

[](<>)

```
gen_salt(type text [, iter_count integer ]) returns text
```

Generates a new random salt string for use in `crypt()`. The salt string also tells `crypt()` which algorithm to use.

The *`type`* parameter specifies the hashing algorithm. The accepted types are: `des`, `xdes`, `md5` and `bf`.

The *`iter_count`* parameter lets the user specify the iteration count, for algorithms that have one. The higher the count, the more time it takes to hash the password and therefore the more time to break it. Although with too high a count the time to calculate a hash may be several years — which is somewhat impractical. If the *`iter_count`* parameter is omitted, the default iteration count is used. Allowed values for *`iter_count`* depend on the algorithm and are shown in [Table F.17](pgcrypto.html#PGCRYPTO-ICFC-TABLE).

**Table F.17. Iteration Counts for `crypt()`**

| Algorithm | Default | Min | Max |
| --------- | ------- | --- | --- |
| `xdes` | 725 | 1 | 16777215 |
| `bf` | 6 | 4 | 31 |

For `xdes` there is an additional limitation that the iteration count must be an odd number.

To pick an appropriate iteration count, consider that the original DES crypt was designed to have the speed of 4 hashes per second on the hardware of that time. Slower than 4 hashes per second would probably dampen usability. Faster than 100 hashes per second is probably too fast.

[Table F.18](pgcrypto.html#PGCRYPTO-HASH-SPEED-TABLE) gives an overview of the relative slowness of different hashing algorithms. The table shows how much time it would take to try all combinations of characters in an 8-character password, assuming that the password contains either only lower case letters, or upper- and lower-case letters and numbers. In the `crypt-bf` entries, the number after a slash is the *`iter_count`* parameter of `gen_salt`.

**Table F.18. Hash Algorithm Speeds**

| Algorithm | Hashes/sec | For `[a-z]` | For `[A-Za-z0-9]` | Duration relative to `md5 hash` |
| --------- | ---------- | ----------- | ----------------- | ------------------------------- |
| `crypt-bf/8` | 1792 | 4 years | 3927 years | 100k |
| `crypt-bf/7` | 3648 | 2 years | 1929 years | 50k |
| `crypt-bf/6` | 7168 | 1 year | 982 years | 25k |
| `crypt-bf/5` | 13504 | 188 days | 521 years | 12.5k |
| `crypt-md5` | 171584 | 15 days | 41 years | 1k |
| `crypt-des` | 23221568 | 157.5 minutes | 108 days | 7 |
| `sha1` | 37774272 | 90 minutes | 68 days | 4 |
| `md5` (hash) | 150085504 | 22.5 minutes | 17 days | 1 |

Notes:

- The machine used is an Intel Mobile Core i3.

- `crypt-des` and `crypt-md5` algorithm numbers are taken from John the Ripper v1.6.38 `-test` output.

- `md5 hash` numbers are from mdcrack 1.2.

- `sha1` numbers are from lcrack-20031130-beta.

- `crypt-bf` numbers are taken using a simple program that loops over 1000 8-character passwords. That way I can show the speed with different numbers of iterations. For reference: `john -test` shows 13506 loops/sec for `crypt-bf/5`. (The very small difference in results is in accordance with the fact that the `crypt-bf` implementation in `pgcrypto` is the same one used in John the Ripper.)
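Given the timings above, raising the `bf` iteration count is the usual way to keep brute-forcing slow on newer hardware. A small sketch combining `gen_salt()` and `crypt()` (the `accounts` table and its columns are placeholders; an iteration count of 8 is illustrative):

```
-- Store a password hashed with Blowfish crypt at iteration count 8.
UPDATE accounts
SET pswhash = crypt('new password', gen_salt('bf', 8));

-- Verify by re-hashing the candidate with the stored hash as the salt.
SELECT (pswhash = crypt('entered password', pswhash)) AS pswmatch
FROM accounts
WHERE login = 'alice';
```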
Note that “try all combinations” is not a realistic exercise. Usually password cracking is done with the help of dictionaries, which contain both regular words and various mutations of them. So, even somewhat word-like passwords could be cracked much faster than the above numbers suggest, while a 6-character non-word-like password may escape cracking. Or not.

### F.26.3. PGP Encryption Functions

The functions here implement the encryption part of the OpenPGP ([RFC 4880](https://tools.ietf.org/html/rfc4880)) standard. Supported are both symmetric-key and public-key encryption.

An encrypted PGP message consists of 2 parts, or *packets*:

- Packet containing a session key — either symmetric-key or public-key encrypted.

- Packet containing data encrypted with the session key.

When encrypting with a symmetric key (i.e., a password):

1. The given password is hashed using a String2Key (S2K) algorithm. This is rather similar to `crypt()` algorithms — purposefully slow and with random salt — but it produces a full-length binary key.

2. If a separate session key is requested, a new random key will be generated. Otherwise the S2K key will be used directly as the session key.

3. If the S2K key is to be used directly, then only S2K settings will be put into the session key packet. Otherwise the session key will be encrypted with the S2K key and put into the session key packet.

When encrypting with a public key:

1. A new random session key is generated.

2. It is encrypted using the public key and put into the session key packet.

In either case the data to be encrypted is processed as follows:

1. Optional data-manipulation: compression, conversion to UTF-8, and/or conversion of line-endings.

2. The data is prefixed with a block of random bytes. This is equivalent to using a random IV.

3. A SHA1 hash of the random prefix and data is appended.

4. All this is encrypted with the session key and placed in the data packet.

#### F.26.3.1. `pgp_sym_encrypt()`

[](<>)[](<>)

```
pgp_sym_encrypt(data text, psw text [, options text ]) returns bytea
pgp_sym_encrypt_bytea(data bytea, psw text [, options text ]) returns bytea
```

Encrypt *`data`* with a symmetric PGP key *`psw`*. The *`options`* parameter can contain option settings, as described below.

#### F.26.3.2. `pgp_sym_decrypt()`

[](<>)[](<>)

```
pgp_sym_decrypt(msg bytea, psw text [, options text ]) returns text
pgp_sym_decrypt_bytea(msg bytea, psw text [, options text ]) returns bytea
```

Decrypt a symmetric-key-encrypted PGP message.

Decrypting `bytea` data with `pgp_sym_decrypt` is disallowed. This is to avoid outputting invalid character data. Decrypting originally textual data with `pgp_sym_decrypt_bytea` is fine.

The *`options`* parameter can contain option settings, as described below.

#### F.26.3.3. `pgp_pub_encrypt()`

[](<>)[](<>)

```
pgp_pub_encrypt(data text, key bytea [, options text ]) returns bytea
pgp_pub_encrypt_bytea(data bytea, key bytea [, options text ]) returns bytea
```

Encrypt *`data`* with a public PGP key *`key`*. Giving this function a secret key will produce an error.

The *`options`* parameter can contain option settings, as described below.

#### F.26.3.4. `pgp_pub_decrypt()`

[](<>)[](<>)

```
pgp_pub_decrypt(msg bytea, key bytea [, psw text [, options text ]]) returns text
pgp_pub_decrypt_bytea(msg bytea, key bytea [, psw text [, options text ]]) returns bytea
```

Decrypt a public-key-encrypted message. *`key`* must be the secret key corresponding to the public key that was used to encrypt. If the secret key is password-protected, you must give the password in *`psw`*. If there is no password, but you want to specify options, you need to give an empty password.

Decrypting `bytea` data with `pgp_pub_decrypt` is disallowed. This is to avoid outputting invalid character data. Decrypting originally textual data with `pgp_pub_decrypt_bytea` is fine.

The *`options`* parameter can contain option settings, as described below.

#### F.26.3.5. `pgp_key_id()`

[](<>)

```
pgp_key_id(bytea) returns text
```

`pgp_key_id` extracts the key ID of a PGP public or secret key. Or it gives the key ID that was used for encrypting the data, if given an encrypted message.

It can return 2 special key IDs:

- `SYMKEY`

  The message is encrypted with a symmetric key.

- `ANYKEY`

  The message is public-key encrypted, but the key ID has been removed. That means you will need to try all your secret keys on it to see which one decrypts it. `pgcrypto` itself does not produce such messages.

Note that different keys may have the same ID. This is rare but a normal event. The client application should then try to decrypt with each one, to see which fits — like handling `ANYKEY`.

#### F.26.3.6. `armor()`, `dearmor()`

[](<>)[](<>)

```
armor(data bytea [ , keys text[], values text[] ]) returns text
dearmor(data text) returns bytea
```

These functions wrap/unwrap binary data into PGP ASCII-armor format, which is basically Base64 with CRC and additional formatting.

If the *`keys`* and *`values`* arrays are specified, an *armor header* is added to the armored format for each key/value pair. Both arrays must be single-dimensional, and they must be of the same length. The keys and values cannot contain any non-ASCII characters.
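To see the symmetric-key functions above in action, here is a small round-trip sketch. The passphrase and option string are illustrative; note that the decrypt call needs no algorithm options, since those parameters are read from the PGP data itself:

```
SELECT pgp_sym_decrypt(
         pgp_sym_encrypt('hello pgcrypto', 'my passphrase',
                         'cipher-algo=aes256'),
         'my passphrase');
```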
+ +#### F.26.3.7.`pgp_armor_headers` + +[](<>) + +``` +pgp_armor_headers(data text, key out text, value out text) returns setof record +``` + +`pgp_armor_headers()`extracts the armor headers from*`data`*. The return value is a set of rows with two columns, key and value. If the keys or values contain any non-ASCII characters, they are treated as UTF-8. + +#### F.26.3.8. Options for PGP Functions + +Options are named to be similar to GnuPG. An option's value should be given after an equal sign; separate options from each other with commas. For example: + +``` +pgp_sym_encrypt(data, psw, 'compress-algo=1, cipher-algo=aes256') +``` + +All of the options except`convert-crlf`apply only to encrypt functions. Decrypt functions get the parameters from the PGP data. + +The most interesting options are probably`compress-algo`and`unicode-mode`. The rest should have reasonable defaults. + +##### F.26.3.8.1. cipher-algo + +Which cipher algorithm to use. + +Values: bf, aes128, aes192, aes256 (OpenSSL-only:`3des`,`cast5`)\ +Default: aes128\ +Applies to: pgp_sym_encrypt, pgp_pub_encrypt + +##### F.26.3.8.2. compress-algo + +Which compression algorithm to use. Only available if PostgreSQL was built with zlib. + +Values:\ +0 - no compression\ +1 - ZIP compression\ +2 - ZLIB compression (= ZIP plus meta-data and block CRCs)\ +Default: 0\ +Applies to: pgp_sym_encrypt, pgp_pub_encrypt + +##### F.26.3.8.3. compress-level + +How much to compress. Higher levels compress smaller but are slower. 0 disables compression. + +Values: 0, 1-9\ +Default: 6\ +Applies to: pgp_sym_encrypt, pgp_pub_encrypt + +##### F.26.3.8.4. convert-crlf + +Whether to convert`\n`into`\r\n`when encrypting and`\r\n`to`\n`when decrypting. RFC 4880 specifies that text data should be stored using`\r\n`line-feeds. Use this to get fully RFC-compliant behavior. + +Values: 0, 1\ +Default: 0\ +Applies to: pgp_sym_encrypt, pgp_pub_encrypt, pgp_sym_decrypt, pgp_pub_decrypt + +##### F.26.3.8.5. disable-mdc + +Do not protect data with SHA-1. The only good reason to use this option is to achieve compatibility with ancient PGP products, predating the addition of SHA-1 protected packets to RFC 4880. Recent gnupg.org and pgp.com software supports it fine. + +Values: 0, 1\ +Default: 0\ +Applies to: pgp_sym_encrypt, pgp_pub_encrypt + +##### F.26.3.8.6. sess-key + +Use separate session key. Public-key encryption always uses a separate session key; this option is for symmetric-key encryption, which by default uses the S2K key directly. + +Values: 0, 1\ +Default: 0\ +Applies to: pgp_sym_encrypt + +##### F.26.3.8.7. s2k-mode + +Which S2K algorithm to use. + +Values:\ +0 - Without salt. Dangerous!\ +1 - With salt but with fixed iteration count.\ +3 - Variable iteration count.\ +Default: 3\ +Applies to: pgp_sym_encrypt + +##### F.26.3.8.8. s2k-count + +The number of iterations of the S2K algorithm to use. It must be a value between 1024 and 65011712, inclusive. + +Default: A random value between 65536 and 253952\ +Applies to: pgp_sym_encrypt, only with s2k-mode=3 + +##### F.26.3.8.9. s2k-digest-algo + +Which digest algorithm to use in S2K calculation. + +Values: md5, sha1\ +Default: sha1\ +Applies to: pgp_sym_encrypt + +##### F.26.3.8.10. s2k-cipher-algo + +Which cipher to use for encrypting separate session key. + +Values: bf, aes, aes128, aes192, aes256\ +Default: use cipher-algo\ +Applies to: pgp_sym_encrypt + +##### F.26.3.8.11. unicode-mode + +Whether to convert textual data from database internal encoding to UTF-8 and back. 
If your database already is UTF-8, no conversion will be done, but the message will be tagged as UTF-8. Without this option it will not be.

Values: 0, 1\
Default: 0\
Applies to: pgp_sym_encrypt, pgp_pub_encrypt

#### F.26.3.9. Generating PGP Keys with GnuPG

To generate a new key:

```
gpg --gen-key
```

The preferred key type is “DSA and Elgamal”.

For RSA encryption you must create either DSA or RSA sign-only key as master and then add an RSA encryption subkey with `gpg --edit-key`.

To list keys:

```
gpg --list-secret-keys
```

To export a public key in ASCII-armor format:

```
gpg -a --export KEYID > public.key
```

To export a secret key in ASCII-armor format:

```
gpg -a --export-secret-keys KEYID > secret.key
```

You need to use `dearmor()` on these keys before giving them to the PGP functions. Or if you can handle binary data, you can drop `-a` from the command.

For more details see `man gpg`, [The GNU Privacy Handbook](https://www.gnupg.org/gph/en/manual.html) and other documentation on <https://www.gnupg.org>.

#### F.26.3.10. Limitations of PGP Code

- No support for signing. That also means that it is not checked whether the encryption subkey belongs to the master key.

- No support for encryption key as master key. As such practice is generally discouraged, this should not be a problem.

- No support for several subkeys. This may seem like a problem, as this is common practice. On the other hand, you should not use your regular GPG/PGP keys with `pgcrypto`, but create new ones, as the usage scenario is rather different.

### F.26.4. Raw Encryption Functions

These functions only run a cipher over data; they don't have any advanced features of PGP encryption. Therefore they have some major problems:

1. They use user key directly as cipher key.

2. They don't provide any integrity checking, to see if the encrypted data was modified.

3. They expect that users manage all encryption parameters themselves, even IV.

4. They don't handle text.

So, with the introduction of PGP encryption, usage of raw encryption functions is discouraged.

[](<>)[](<>)[](<>)[](<>)

```
encrypt(data bytea, key bytea, type text) returns bytea
decrypt(data bytea, key bytea, type text) returns bytea

encrypt_iv(data bytea, key bytea, iv bytea, type text) returns bytea
decrypt_iv(data bytea, key bytea, iv bytea, type text) returns bytea
```

Encrypt/decrypt data using the cipher method specified by *`type`*. The syntax of the *`type`* string is:

```
algorithm [ - mode ] [ /pad: padding ]
```

where *`algorithm`* is one of:

- `bf` — Blowfish

- `aes` — AES (Rijndael-128, -192 or -256)

and *`mode`* is one of:

- `cbc` — next block depends on previous (default)

- `ecb` — each block is encrypted separately (for testing only)

and *`padding`* is one of:

- `pkcs` — data may be of any length (default)

- `none` — data must be multiple of cipher block size

So, for example, these are equivalent:

```
encrypt(data, 'fooz', 'bf')
encrypt(data, 'fooz', 'bf-cbc/pad:pkcs')
```

In `encrypt_iv` and `decrypt_iv`, the *`iv`* parameter is the initial value for the CBC mode; it is ignored for ECB. It is clipped or padded with zeroes if not exactly block size. It defaults to all zeroes in the functions without this parameter.

### F.26.5. Random-Data Functions

[](<>)

```
gen_random_bytes(count integer) returns bytea
```

Returns *`count`* cryptographically strong random bytes. At most 1024 bytes can be extracted at a time. This is to avoid draining the randomness generator pool.

[](<>)

```
gen_random_uuid() returns uuid
```

Returns a version 4 (random) UUID. (Obsolete, this function is now also included in core PostgreSQL.)
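As an illustration of the raw functions together with `gen_random_bytes()`, here is a round-trip sketch with an explicit random AES-128 key and IV (all the caveats above about raw encryption still apply, and in practice the key and IV must be stored somewhere to ever decrypt the result):

```
SELECT decrypt_iv(
         encrypt_iv('secret data'::bytea, k.key, k.iv, 'aes-cbc/pad:pkcs'),
         k.key, k.iv, 'aes-cbc/pad:pkcs') AS roundtrip
FROM (SELECT gen_random_bytes(16) AS key,
             gen_random_bytes(16) AS iv) AS k;
```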
### F.26.6. Notes

#### F.26.6.1. Configuration

`pgcrypto` configures itself according to the findings of the main PostgreSQL `configure` script. The options that affect it are `--with-zlib` and `--with-ssl=openssl`.

When compiled with zlib, PGP encryption functions are able to compress data before encrypting.

When compiled with OpenSSL, there will be more algorithms available. Also public-key encryption functions will be faster as OpenSSL has more optimized BIGNUM functions.

**Table F.19. Summary of Functionality with and without OpenSSL**

| Functionality | Built-in | With OpenSSL |
| ------------- | -------- | ------------ |
| MD5 | yes | yes |
| SHA1 | yes | yes |
| SHA224/256/384/512 | yes | yes |
| Other digest algorithms | no | yes (Note 1) |
| Blowfish | yes | yes |
| AES | yes | yes |
| DES/3DES/CAST5 | no | yes |
| Raw encryption | yes | yes |
| PGP Symmetric encryption | yes | yes |
| PGP Public-Key encryption | yes | yes |

When compiled against OpenSSL 3.0.0 and later versions, the legacy provider must be activated in the `openssl.cnf` configuration file in order to use older ciphers like DES or Blowfish.

Notes:

1. Any digest algorithm OpenSSL supports is automatically picked up. This is not possible with ciphers, which need to be supported explicitly.

#### F.26.6.2. NULL Handling

As is standard in SQL, all the functions return NULL, if any of the arguments are NULL. This may create security risks on careless usage.

#### F.26.6.3. Security Limitations

All `pgcrypto` functions run inside the database server. That means that all the data and passwords move between `pgcrypto` and client applications in clear text. Thus you must:

1. Connect locally or use SSL connections.

2. Trust both system and database administrators.

If you cannot, then better do crypto inside client application.

The implementation does not resist [side-channel attacks](https://en.wikipedia.org/wiki/Side-channel_attack). For example, the time required for a `pgcrypto` decryption function to complete varies among ciphertexts of a given size.

#### F.26.6.4. Useful Reading

- The GNU Privacy Handbook.

- Describes the crypt-blowfish algorithm.

- How to choose a good password.

- [http://world.std.com/~reinhold/diceware.html](http://world.std.com/~reinhold/diceware.html)

  Interesting idea for picking passwords.

- Describes good and bad cryptography.

#### F.26.6.5. Technical References

- OpenPGP message format.

- The MD5 Message-Digest Algorithm.

- HMAC: Keyed-Hashing for Message Authentication.

- Comparison of crypt-des, crypt-md5 and bcrypt algorithms.

- Description of Fortuna CSPRNG.

- Jean-Luc Cooke Fortuna-based `/dev/random` driver for Linux.

### F.26.7. Author

Marko Kreen `<[markokr@gmail.com](mailto:markokr@gmail.com)>`

`pgcrypto` uses code from the following sources:

| Algorithm | Author | Source origin |
| --------- | ------ | ------------- |
| DES crypt | David Burren et al. | FreeBSD libcrypt |
| MD5 crypt | Poul-Henning Kamp | FreeBSD libcrypt |
| Blowfish crypt | Solar Designer | www.openwall.com |
| Blowfish cipher | Simon Tatham | PuTTY |
| Rijndael cipher | Brian Gladman | OpenBSD sys/crypto |
| MD5 hash and SHA1 | WIDE Project | KAME kame/sys/crypto |
| SHA256/384/512 | Aaron D. Gifford | OpenBSD sys/crypto |
| BIGNUM math | Michael J. Fromberger | dartmouth.edu/~sting/sw/imath |

diff --git a/docs/X/plpgsql-porting.md b/docs/en/plpgsql-porting.md similarity index 100% rename from docs/X/plpgsql-porting.md rename to docs/en/plpgsql-porting.md diff --git a/docs/en/plpgsql-porting.zh.md b/docs/en/plpgsql-porting.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..294558a17eafe7090323440b6e42a3adcfb259d4 --- /dev/null +++ b/docs/en/plpgsql-porting.zh.md @@ -0,0 +1,473 @@

## 43.13. Porting from Oracle PL/SQL

[43.13.1. Porting Examples](plpgsql-porting.html#id-1.8.8.15.6)

[43.13.2. Other Things to Watch For](plpgsql-porting.html#PLPGSQL-PORTING-OTHER)

[43.13.3. Appendix](plpgsql-porting.html#PLPGSQL-PORTING-APPENDIX)

[](<>)[](<>)

This section explains differences between PostgreSQL's PL/pgSQL language and Oracle's PL/SQL language, to help developers who port applications from Oracle® to PostgreSQL.

PL/pgSQL is similar to PL/SQL in many aspects. It is a block-structured, imperative language, and all variables have to be declared.
Assignments, loops, and conditionals are similar. The main differences you should keep in mind when porting from PL/SQL to PL/pgSQL are: + +- If a name used in an SQL command could be either a column name of a table used in the command or a reference to a variable of the function, PL/SQL treats it as a column name. By default, PL/pgSQL will throw an error complaining that the name is ambiguous. You can specify`plpgsql.variable_conflict`=`use_column`to change this behavior to match PL/SQL, as explained in[Section 43.11.1](plpgsql-implementation.html#PLPGSQL-VAR-SUBST). It's often best to avoid such ambiguities in the first place, but if you have to port a large amount of code that depends on this behavior, setting`variable_conflict`may be the best solution. + +- In PostgreSQL the function body must be written as a string literal. Therefore you need to use dollar quoting or escape single quotes in the function body. (See[Section 43.12.1](plpgsql-development-tips.html#PLPGSQL-QUOTE-TIPS).) + +- Data type names often need translation. For example, in Oracle string values are commonly declared as being of type`varchar2`, which is a non-SQL-standard type. In PostgreSQL, use type`varchar`or`text`instead. Similarly, replace type`number`with`numeric`, or use some other numeric data type if there's a more appropriate one. + +- Instead of packages, use schemas to organize your functions into groups. + +- Since there are no packages, there are no package-level variables either. This is somewhat annoying. You can keep per-session state in temporary tables instead. + +- Integer`FOR`loops with`REVERSE`work differently: PL/SQL counts down from the second number to the first, while PL/pgSQL counts down from the first number to the second, requiring the loop bounds to be swapped when porting. This incompatibility is unfortunate but is unlikely to be changed. (See[Section 43.6.5.5](plpgsql-control-structures.html#PLPGSQL-INTEGER-FOR).) + +- `FOR`loops over queries (other than cursors) also work differently: the target variable(s) must have been declared, whereas PL/SQL always declares them implicitly. An advantage of this is that the variable values are still accessible after the loop exits. + +- There are various notational differences for the use of cursor variables. + +### 43.13.1. Porting Examples + +[Example 43.9](plpgsql-porting.html#PGSQL-PORTING-EX1)shows how to port a simple function from PL/SQL to PL/pgSQL. + +**Example 43.9. Porting a Simple Function from PL/SQL to PL/pgSQL** + +Here is an Oracle PL/SQL function: + +``` +CREATE OR REPLACE FUNCTION cs_fmt_browser_version(v_name varchar2, + v_version varchar2) +RETURN varchar2 IS +BEGIN + IF v_version IS NULL THEN + RETURN v_name; + END IF; + RETURN v_name || '/' || v_version; +END; +/ +show errors; +``` + +Let's go through this function and see the differences compared to PL/pgSQL: + +- The type name`varchar2`has to be changed to`varchar`or`text`. In the examples in this section, we'll use`varchar`, but`text`is often a better choice if you do not need specific string length limits. + +- The`RETURN`key word in the function prototype (not the function body) becomes`RETURNS`in PostgreSQL. Also,`IS`becomes`AS`, and you need to add a`LANGUAGE`clause because PL/pgSQL is not the only possible function language. + +- In PostgreSQL, the function body is considered to be a string literal, so you need to use quote marks or dollar quotes around it. This substitutes for the terminating`/`in the Oracle approach. 
- The `show errors` command does not exist in PostgreSQL, and is not needed since errors are reported automatically.

  This is how this function would look when ported to PostgreSQL:

```
CREATE OR REPLACE FUNCTION cs_fmt_browser_version(v_name varchar,
                                                  v_version varchar)
RETURNS varchar AS $$
BEGIN
    IF v_version IS NULL THEN
        RETURN v_name;
    END IF;
    RETURN v_name || '/' || v_version;
END;
$$ LANGUAGE plpgsql;
```

[Example 43.10](plpgsql-porting.html#PLPGSQL-PORTING-EX2) shows how to port a function that creates another function and how to handle the ensuing quoting problems.

**Example 43.10. Porting a Function that Creates Another Function from PL/SQL to PL/pgSQL**

The following procedure grabs rows from a `SELECT` statement and builds a large function with the results in `IF` statements, for the sake of efficiency.

This is the Oracle version:

```
CREATE OR REPLACE PROCEDURE cs_update_referrer_type_proc IS
    CURSOR referrer_keys IS
        SELECT * FROM cs_referrer_keys
        ORDER BY try_order;
    func_cmd VARCHAR(4000);
BEGIN
    func_cmd := 'CREATE OR REPLACE FUNCTION cs_find_referrer_type(v_host IN VARCHAR2,
                 v_domain IN VARCHAR2, v_url IN VARCHAR2) RETURN VARCHAR2 IS BEGIN';

    FOR referrer_key IN referrer_keys LOOP
        func_cmd := func_cmd ||
          ' IF v_' || referrer_key.kind
          || ' LIKE ''' || referrer_key.key_string
          || ''' THEN RETURN ''' || referrer_key.referrer_type
          || '''; END IF;';
    END LOOP;

    func_cmd := func_cmd || ' RETURN NULL; END;';

    EXECUTE IMMEDIATE func_cmd;
END;
/
show errors;
```

Here is how this function would end up in PostgreSQL:

```
CREATE OR REPLACE PROCEDURE cs_update_referrer_type_proc() AS $func$
DECLARE
    referrer_keys CURSOR IS
        SELECT * FROM cs_referrer_keys
        ORDER BY try_order;
    func_body text;
    func_cmd text;
BEGIN
    func_body := 'BEGIN';

    FOR referrer_key IN referrer_keys LOOP
        func_body := func_body ||
          ' IF v_' || referrer_key.kind
          || ' LIKE ' || quote_literal(referrer_key.key_string)
          || ' THEN RETURN ' || quote_literal(referrer_key.referrer_type)
          || '; END IF;' ;
    END LOOP;

    func_body := func_body || ' RETURN NULL; END;';

    func_cmd :=
      'CREATE OR REPLACE FUNCTION cs_find_referrer_type(v_host varchar,
                                                        v_domain varchar,
                                                        v_url varchar)
        RETURNS varchar AS '
      || quote_literal(func_body)
      || ' LANGUAGE plpgsql;' ;

    EXECUTE func_cmd;
END;
$func$ LANGUAGE plpgsql;
```

Notice how the body of the function is built separately and passed through `quote_literal` to double any quote marks in it. This technique is needed because we cannot safely use dollar quoting for defining the new function: we do not know for sure what strings will be interpolated from the `referrer_key.key_string` field. (We are assuming here that `referrer_key.kind` can be trusted to always be `host`, `domain`, or `url`, but `referrer_key.key_string` might be anything, in particular it might contain dollar signs.) This function is actually an improvement on the Oracle original, because it will not generate broken code when `referrer_key.key_string` or `referrer_key.referrer_type` contain quote marks.

[Example 43.11](plpgsql-porting.html#PLPGSQL-PORTING-EX3) shows how to port a function with `OUT` parameters and string manipulation. PostgreSQL does not have a built-in `instr` function, but you can create one using a combination of other functions. In [Section 43.13.3](plpgsql-porting.html#PLPGSQL-PORTING-APPENDIX) there is a PL/pgSQL implementation of `instr` that you can use to make your porting easier.

**Example 43.11. Porting a Procedure With String Manipulation and `OUT` Parameters from PL/SQL to PL/pgSQL**

The following Oracle PL/SQL procedure is used to parse a URL and return several elements (host, path, and query).
+ +This is the Oracle version: + +``` +CREATE OR REPLACE PROCEDURE cs_parse_url( + v_url IN VARCHAR2, + v_host OUT VARCHAR2, -- This will be passed back + v_path OUT VARCHAR2, -- This one too + v_query OUT VARCHAR2) -- And this one +IS + a_pos1 INTEGER; + a_pos2 INTEGER; +BEGIN + v_host := NULL; + v_path := NULL; + v_query := NULL; + a_pos1 := instr(v_url, '//'); + + IF a_pos1 = 0 THEN + RETURN; + END IF; + a_pos2 := instr(v_url, '/', a_pos1 + 2); + IF a_pos2 = 0 THEN + v_host := substr(v_url, a_pos1 + 2); + v_path := '/'; + RETURN; + END IF; + + v_host := substr(v_url, a_pos1 + 2, a_pos2 - a_pos1 - 2); + a_pos1 := instr(v_url, '?', a_pos2 + 1); + + IF a_pos1 = 0 THEN + v_path := substr(v_url, a_pos2); + RETURN; + END IF; + + v_path := substr(v_url, a_pos2, a_pos1 - a_pos2); + v_query := substr(v_url, a_pos1 + 1); +END; +/ +show errors; +``` + +Here is a possible translation into PL/pgSQL: + +``` +CREATE OR REPLACE FUNCTION cs_parse_url( + v_url IN VARCHAR, + v_host OUT VARCHAR, -- This will be passed back + v_path OUT VARCHAR, -- This one too + v_query OUT VARCHAR) -- And this one +AS $$ +DECLARE + a_pos1 INTEGER; + a_pos2 INTEGER; +BEGIN + v_host := NULL; + v_path := NULL; + v_query := NULL; + a_pos1 := instr(v_url, '//'); + + IF a_pos1 = 0 THEN + RETURN; + END IF; + a_pos2 := instr(v_url, '/', a_pos1 + 2); + IF a_pos2 = 0 THEN + v_host := substr(v_url, a_pos1 + 2); + v_path := '/'; + RETURN; + END IF; + + v_host := substr(v_url, a_pos1 + 2, a_pos2 - a_pos1 - 2); + a_pos1 := instr(v_url, '?', a_pos2 + 1); + + IF a_pos1 = 0 THEN + v_path := substr(v_url, a_pos2); + RETURN; + END IF; + + v_path := substr(v_url, a_pos2, a_pos1 - a_pos2); + v_query := substr(v_url, a_pos1 + 1); +END; +$$ LANGUAGE plpgsql; +``` + +This function could be used like this: + +``` +SELECT * FROM cs_parse_url('http://foobar.com/query.cgi?baz'); +``` + +[Example 43.12](plpgsql-porting.html#PLPGSQL-PORTING-EX4)shows how to port a procedure that uses numerous features that are specific to Oracle. + +**Example 43.12. 
Porting a Procedure from PL/SQL to PL/pgSQL** + +The Oracle version: + +``` +CREATE OR REPLACE PROCEDURE cs_create_job(v_job_id IN INTEGER) IS + a_running_job_count INTEGER; +BEGIN + LOCK TABLE cs_jobs IN EXCLUSIVE MODE; + + SELECT count(*) INTO a_running_job_count FROM cs_jobs WHERE end_stamp IS NULL; + + IF a_running_job_count > 0 THEN + COMMIT; -- free lock + raise_application_error(-20000, + 'Unable to create a new job: a job is currently running.'); + END IF; + + DELETE FROM cs_active_job; + INSERT INTO cs_active_job(job_id) VALUES (v_job_id); + + BEGIN + INSERT INTO cs_jobs (job_id, start_stamp) VALUES (v_job_id, now()); + EXCEPTION + WHEN dup_val_on_index THEN NULL; -- don't worry if it already exists + END; + COMMIT; +END; +/ +show errors +``` + +This is how we could port this procedure to PL/pgSQL: + +``` +CREATE OR REPLACE PROCEDURE cs_create_job(v_job_id integer) AS $$ +DECLARE + a_running_job_count integer; +BEGIN + LOCK TABLE cs_jobs IN EXCLUSIVE MODE; + + SELECT count(*) INTO a_running_job_count FROM cs_jobs WHERE end_stamp IS NULL; + + IF a_running_job_count > 0 THEN + COMMIT; -- free lock + RAISE EXCEPTION 'Unable to create a new job: a job is currently running'; -- (1) + END IF; + + DELETE FROM cs_active_job; + INSERT INTO cs_active_job(job_id) VALUES (v_job_id); + + BEGIN + INSERT INTO cs_jobs (job_id, start_stamp) VALUES (v_job_id, now()); + EXCEPTION + WHEN unique_violation THEN -- (2) + -- don't worry if it already exists + END; + COMMIT; +END; +$$ LANGUAGE plpgsql; +``` + +| [(1)](#co.plpgsql-porting-raise) | The syntax of`RAISE`is considerably different from Oracle's statement, although the basic case`RAISE` *`exception_name`*works similarly. | +| :------------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------- | +| [(2)](#co.plpgsql-porting-exception) | The exception names supported by PL/pgSQL are different from Oracle's. The set of built-in exception names is much larger (see[Appendix A](errcodes-appendix.html)). There is not currently a way to declare user-defined exception names, although you can throw user-chosen SQLSTATE values instead. | + +### 43.13.2. Other Things to Watch For + +This section explains a few other things to watch for when porting Oracle PL/SQL functions to PostgreSQL. + +#### 43.13.2.1. Implicit Rollback after Exceptions + +In PL/pgSQL, when an exception is caught by an`EXCEPTION`clause, all database changes since the block's`BEGIN`are automatically rolled back. That is, the behavior is equivalent to what you'd get in Oracle with: + +``` +BEGIN + SAVEPOINT s1; + ... code here ... +EXCEPTION + WHEN ... THEN + ROLLBACK TO s1; + ... code here ... + WHEN ... THEN + ROLLBACK TO s1; + ... code here ... +END; +``` + +If you are translating an Oracle procedure that uses`SAVEPOINT`and`ROLLBACK TO`in this style, your task is easy: just omit the`SAVEPOINT`and`ROLLBACK TO`. If you have a procedure that uses`SAVEPOINT`and`ROLLBACK TO`in a different way then some actual thought will be required. + +#### 43.13.2.2.`EXECUTE` + +The PL/pgSQL version of`EXECUTE`works similarly to the PL/SQL version, but you have to remember to use`quote_literal`and`quote_ident`as described in[Section 43.5.4](plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN). Constructs of the type`EXECUTE 'SELECT * FROM $1';`will not work reliably unless you use these functions. + +#### 43.13.2.3. 
Optimizing PL/pgSQL Functions + +PostgreSQL gives you two function creation modifiers to optimize execution: “volatility” (whether the function always returns the same result when given the same arguments) and “strictness” (whether the function returns null if any argument is null). Consult the[CREATE FUNCTION](sql-createfunction.html)reference page for details. + +When making use of these optimization attributes, your`CREATE FUNCTION`statement might look something like this: + +``` +CREATE FUNCTION foo(...) RETURNS integer AS $$ +... +$$ LANGUAGE plpgsql STRICT IMMUTABLE; +``` + +### 43.13.3. Appendix + +This section contains the code for a set of Oracle-compatible`instr`functions that you can use to simplify your porting efforts. + +[](<>) + +``` +-- +-- instr functions that mimic Oracle's counterpart +-- Syntax: instr(string1, string2 [, n [, m]]) +-- where [] denotes optional parameters. +-- +-- Search string1, beginning at the nth character, for the mth occurrence +-- of string2. If n is negative, search backwards, starting at the abs(n)'th +-- character from the end of string1. +-- If n is not passed, assume 1 (search starts at first character). +-- If m is not passed, assume 1 (find first occurrence). +-- Returns starting index of string2 in string1, or 0 if string2 is not found. +-- + +CREATE FUNCTION instr(varchar, varchar) RETURNS integer AS $$ +BEGIN + RETURN instr($1, $2, 1); +END; +$$ LANGUAGE plpgsql STRICT IMMUTABLE; + +CREATE FUNCTION instr(string varchar, string_to_search_for varchar, + beg_index integer) +RETURNS integer AS $$ +DECLARE + pos integer NOT NULL DEFAULT 0; + temp_str varchar; + beg integer; + length integer; + ss_length integer; +BEGIN + IF beg_index > 0 THEN + temp_str := substring(string FROM beg_index); + pos := position(string_to_search_for IN temp_str); + + IF pos = 0 THEN + RETURN 0; + ELSE + RETURN pos + beg_index - 1; + END IF; + ELSIF beg_index < 0 THEN + ss_length := char_length(string_to_search_for); + length := char_length(string); + beg := length + 1 + beg_index; + + WHILE beg > 0 LOOP + temp_str := substring(string FROM beg FOR ss_length); + IF string_to_search_for = temp_str THEN + RETURN beg; + END IF; + + beg := beg - 1; + END LOOP; + + RETURN 0; + ELSE + RETURN 0; + END IF; +END; +$$ LANGUAGE plpgsql STRICT IMMUTABLE; + +CREATE FUNCTION instr(string varchar, string_to_search_for varchar, + beg_index integer, occur_index integer) +RETURNS integer AS $$ +DECLARE + pos integer NOT NULL DEFAULT 0; + occur_number integer NOT NULL DEFAULT 0; + temp_str varchar; + beg integer; + i integer; + length integer; + ss_length integer; +BEGIN + IF occur_index <= 0 THEN + RAISE 'argument ''%'' is out of range', occur_index + USING ERRCODE = '22003'; + END IF; + + IF beg_index > 0 THEN + beg := beg_index - 1; + FOR i IN 1..occur_index LOOP + temp_str := substring(string FROM beg + 1); + pos := position(string_to_search_for IN temp_str); + IF pos = 0 THEN + RETURN 0; + END IF; + beg := beg + pos; + END LOOP; + + RETURN beg; + ELSIF beg_index < 0 THEN + ss_length := char_length(string_to_search_for); + length := char_length(string); + beg := length + 1 + beg_index; + + WHILE beg > 0 LOOP + temp_str := substring(string FROM beg FOR ss_length); + IF string_to_search_for = temp_str THEN + occur_number := occur_number + 1; + IF occur_number = occur_index THEN + RETURN beg; + END IF; + END IF; + + beg := beg - 1; + END LOOP; + + RETURN 0; + ELSE + RETURN 0; + END IF; +END; +$$ LANGUAGE plpgsql STRICT IMMUTABLE; +``` diff --git a/docs/X/postgres-fdw.md 
b/docs/en/postgres-fdw.md similarity index 100% rename from docs/X/postgres-fdw.md rename to docs/en/postgres-fdw.md diff --git a/docs/en/postgres-fdw.zh.md b/docs/en/postgres-fdw.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..596646578fccc027ce16153b3ab2b57404d27d4e --- /dev/null +++ b/docs/en/postgres-fdw.zh.md @@ -0,0 +1,269 @@

## F.35. postgres_fdw

[F.35.1. FDW Options of postgres_fdw](postgres-fdw.html#id-1.11.7.44.11)[F.35.2. Functions](postgres-fdw.html#id-1.11.7.44.12)[F.35.3. Connection Management](postgres-fdw.html#id-1.11.7.44.13)[F.35.4. Transaction Management](postgres-fdw.html#id-1.11.7.44.14)[F.35.5. Remote Query Optimization](postgres-fdw.html#id-1.11.7.44.15)[F.35.6. Remote Query Execution Environment](postgres-fdw.html#id-1.11.7.44.16)[F.35.7. Cross-Version Compatibility](postgres-fdw.html#id-1.11.7.44.17)[F.35.8. Examples](postgres-fdw.html#id-1.11.7.44.18)[F.35.9. Author](postgres-fdw.html#id-1.11.7.44.19)

[](<>)

The `postgres_fdw` module provides the foreign-data wrapper `postgres_fdw`, which can be used to access data stored in external PostgreSQL servers.

The functionality provided by this module overlaps substantially with the functionality of the older [dblink](dblink.html) module. But `postgres_fdw` provides more transparent and standards-compliant syntax for accessing remote tables, and can give better performance in many cases.

To prepare for remote access using `postgres_fdw`:

1. Install the `postgres_fdw` extension using [CREATE EXTENSION](sql-createextension.html).

2. Create a foreign server object, using [CREATE SERVER](sql-createserver.html), to represent each remote database you want to connect to. Specify connection information, except `user` and `password`, as options of the server object.

3. Create a user mapping, using [CREATE USER MAPPING](sql-createusermapping.html), for each database user you want to allow to access each foreign server. Specify the remote user name and password to use as `user` and `password` options of the user mapping.

4. Create a foreign table, using [CREATE FOREIGN TABLE](sql-createforeigntable.html) or [IMPORT FOREIGN SCHEMA](sql-importforeignschema.html), for each remote table you want to access. The columns of the foreign table must match the referenced remote table. You can, however, use table and/or column names different from the remote table's, if you specify the correct remote names as options of the foreign table object.

Now you need only `SELECT` from a foreign table to access the data stored in its underlying remote table. You can also modify the remote table using `INSERT`, `UPDATE`, `DELETE`, or `TRUNCATE`. (Of course, the remote user you have specified in your user mapping must have privileges to do these things.)

Note that the `ONLY` option specified in `SELECT`, `UPDATE`, `DELETE` or `TRUNCATE` has no effect when accessing or modifying the remote table.

Note that `postgres_fdw` currently lacks support for `INSERT` statements with an `ON CONFLICT DO UPDATE` clause. However, the `ON CONFLICT DO NOTHING` clause is supported, provided a unique index inference specification is omitted. Note also that `postgres_fdw` supports row movement invoked by `UPDATE` statements executed on partitioned tables, but it currently does not handle the case where a remote partition chosen to insert a moved row into is also an `UPDATE` target partition that will be updated elsewhere in the same command.

It is generally recommended that the columns of a foreign table be declared with exactly the same data types, and collations if applicable, as the referenced columns of the remote table. Although `postgres_fdw` is currently rather forgiving about performing data type conversions at need, surprising semantic anomalies may arise when types or collations do not match, due to the remote server interpreting query conditions differently from the local server.

Note that a foreign table can be declared with fewer columns, or with a different column order, than its underlying remote table has. Matching of columns to the remote table is by name, not position.

### F.35.1. FDW Options of postgres_fdw

#### F.35.1.1. Connection Options

A foreign server using the `postgres_fdw` foreign data wrapper can have the same options that libpq accepts in connection strings, as described in [Section 34.1.2](libpq-connect.html#LIBPQ-PARAMKEYWORDS), except that these options are not allowed or have special handling:

- `user`, `password` and `sslpassword` (specify these in a user mapping, instead, or use a service file)

- `client_encoding` (this is automatically set from the local server encoding)

- `fallback_application_name` (always set to `postgres_fdw`)

- `sslkey` and `sslcert`: these may appear in *either or both* a connection and a user mapping. If both are present, the user mapping setting overrides the connection setting.

Only superusers may create or modify user mappings with the `sslcert` or `sslkey` settings.

Only superusers may connect to foreign servers without password authentication, so always specify the `password` option for user mappings belonging to non-superusers.

A superuser may override this check on a per-user-mapping basis by setting the user mapping option `password_required 'false'`, e.g.,

```
ALTER USER MAPPING FOR some_non_superuser SERVER loopback_nopw
OPTIONS (ADD password_required 'false');
```

To prevent unprivileged users from exploiting the authentication rights of the unix user the postgres server is running as to escalate to superuser rights, only the superuser may set this option on a user mapping.

Care is required to ensure that this does not allow the mapped user the ability to connect as superuser to the mapped database per CVE-2007-3278 and CVE-2007-6601. Don't set `password_required=false` on the `public` role. Keep in mind that the mapped user can potentially use any client certificates, `.pgpass`, `.pg_service.conf` etc. in the unix home directory of the system user the postgres server runs as. They can also use any trust relationship granted by authentication modes like `peer` or `ident` authentication.
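Putting the connection pieces together, a minimal end-to-end setup following the numbered steps above might look like this (the server name, connection details, and `accounts` table are all illustrative; `schema_name` and `table_name` are the object-name options described next):

```
CREATE EXTENSION postgres_fdw;

CREATE SERVER remote_srv
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host '10.0.0.5', port '5432', dbname 'remote_db');

CREATE USER MAPPING FOR local_user
    SERVER remote_srv
    OPTIONS (user 'remote_user', password 'secret');

CREATE FOREIGN TABLE remote_accounts (
    id   integer,
    name text
)
    SERVER remote_srv
    OPTIONS (schema_name 'public', table_name 'accounts');

SELECT * FROM remote_accounts WHERE id = 42;
```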
#### F.35.1.2. Object Name Options

These options can be used to control the names used in SQL statements sent to the remote PostgreSQL server. These options are needed when a foreign table is created with names different from the underlying remote table's names.

`schema_name`

This option, which can be specified for a foreign table, gives the schema name to use for the foreign table on the remote server. If this option is omitted, the name of the foreign table's schema is used.

`table_name`

This option, which can be specified for a foreign table, gives the table name to use for the foreign table on the remote server. If this option is omitted, the foreign table's name is used.

`column_name`

This option, which can be specified for a column of a foreign table, gives the column name to use for the column on the remote server. If this option is omitted, the column's name is used.

#### F.35.1.3. Cost Estimation Options

`postgres_fdw` retrieves remote data by executing queries against remote servers, so ideally the estimated cost of scanning a foreign table should be whatever it costs to be done on the remote server, plus some overhead for communication. The most reliable way to get such an estimate is to ask the remote server and then add something for overhead — but for simple queries, it may not be worth the cost of an additional remote query to get a cost estimate. So `postgres_fdw` provides the following options to control how cost estimation is done:

`use_remote_estimate`

This option, which can be specified for a foreign table or a foreign server, controls whether `postgres_fdw` issues remote `EXPLAIN` commands to obtain cost estimates. A setting for a foreign table overrides any setting for its server, but only for that table. The default is `false`.

`fdw_startup_cost`

This option, which can be specified for a foreign server, is a floating point value that is added to the estimated startup cost of any foreign-table scan on that server. This represents the additional overhead of establishing a connection, parsing and planning the query on the remote side, etc. The default value is `100`.

`fdw_tuple_cost`

This option, which can be specified for a foreign server, is a floating point value that is used as extra cost per-tuple for foreign-table scans on that server. This represents the additional overhead of data transfer between servers. You might increase or decrease this number to reflect higher or lower network delay to the remote server. The default value is `0.01`.

When `use_remote_estimate` is true, `postgres_fdw` obtains row count and cost estimates from the remote server and then adds `fdw_startup_cost` and `fdw_tuple_cost` to the cost estimates. When `use_remote_estimate` is false, `postgres_fdw` performs local row count and cost estimation and then adds `fdw_startup_cost` and `fdw_tuple_cost` to the cost estimates. This local estimation is unlikely to be very accurate unless local copies of the remote table's statistics are available. Running [ANALYZE](sql-analyze.html) on the foreign table is the way to update the local statistics; this will perform a scan of the remote table and then calculate and store statistics just as though the table were local. Keeping local statistics can be a useful way to reduce per-query planning overhead for a remote table — but if the remote table is frequently updated, the local statistics will soon be obsolete.

#### F.35.1.4. Remote Execution Options

By default, only `WHERE` clauses using built-in operators and functions will be considered for execution on the remote server. Clauses involving non-built-in functions are checked locally after rows are fetched.
If such functions are available on the remote server and can be relied on to produce the same results as they do locally, performance can be improved by sending such `WHERE` clauses for remote execution. This behavior can be controlled using the following option:

`extensions`

This option is a comma-separated list of names of PostgreSQL extensions that are installed, in compatible versions, on both the local and remote servers. Functions and operators that are immutable and belong to a listed extension will be considered shippable to the remote server. This option can only be specified for foreign servers, not per-table.

When using the `extensions` option, *it is the user's responsibility* that the listed extensions exist and behave identically on both the local and remote servers. Otherwise, remote queries may fail or behave unexpectedly.

`fetch_size`

This option specifies the number of rows `postgres_fdw` should get in each fetch operation. It can be specified for a foreign table or a foreign server. The option specified on a table overrides an option specified for the server. The default is `100`.

`batch_size`

This option specifies the number of rows `postgres_fdw` should insert in each insert operation. It can be specified for a foreign table or a foreign server. The option specified on a table overrides an option specified for the server. The default is `1`.

Note the actual number of rows `postgres_fdw` inserts at once depends on the number of columns and the provided `batch_size` value. The batch is executed as a single query, and the libpq protocol (which `postgres_fdw` uses to connect to a remote server) limits the number of parameters in a single query to 65535. When the number of columns \* `batch_size` exceeds the limit, the `batch_size` will be adjusted to avoid an error.

#### F.35.1.5. Asynchronous Execution Options

`postgres_fdw` supports asynchronous execution, which runs multiple parts of an `Append` node concurrently rather than serially to improve performance. This execution can be controlled using the following option:

`async_capable`

This option controls whether `postgres_fdw` allows foreign tables to be scanned concurrently for asynchronous execution. It can be specified for a foreign table or a foreign server. A table-level option overrides a server-level option. The default is `false`.

In order to ensure that the data being returned from a foreign server is consistent, `postgres_fdw` will only open one connection for a given foreign server and will run all queries against that server sequentially even if there are multiple foreign tables involved, unless those tables are subject to different user mappings. In such a case, it may be more performant to disable this option to eliminate the overhead associated with running queries asynchronously.

Asynchronous execution is applied even when an `Append` node contains subplan(s) executed synchronously as well as subplan(s) executed asynchronously. In such a case, if the asynchronous subplans are ones processed using `postgres_fdw`, tuples from the asynchronous subplans are not returned until after at least one synchronous subplan returns all tuples, as that subplan is executed while the asynchronous subplans are waiting for the results of asynchronous queries sent to foreign servers. This behavior might change in a future release.

#### F.35.1.6. Updatability Options

By default all foreign tables using `postgres_fdw` are assumed to be updatable. This may be overridden using the following option:

`updatable`

This option controls whether `postgres_fdw` allows foreign tables to be modified using `INSERT`, `UPDATE` and `DELETE` commands. It can be specified for a foreign table or a foreign server. A table-level option overrides a server-level option. The default is `true`.

Of course, if the remote table is not in fact updatable, an error would occur anyway. Use of this option primarily allows the error to be thrown locally without querying the remote server. Note however that the `information_schema` views will report a `postgres_fdw` foreign table to be updatable (or not) according to the setting of this option, without any check of the remote server.

#### F.35.1.7. Truncatability Options

By default all foreign tables using `postgres_fdw` are assumed to be truncatable. This may be overridden using the following option:

`truncatable`

This option controls whether `postgres_fdw` allows foreign tables to be truncated using the `TRUNCATE` command. It can be specified for a foreign table or a foreign server.
#### F.35.1.7. Truncatability Options

By default all foreign tables using `postgres_fdw` are assumed to be truncatable. This may be overridden using the following option:

`truncatable`

This option controls whether `postgres_fdw` allows foreign tables to be truncated using the `TRUNCATE` command. It can be specified for a foreign table or a foreign server. A table-level option overrides a server-level option. The default is `true`.

Of course, if the remote table is not in fact truncatable, an error would occur anyway. Use of this option primarily allows the error to be thrown locally without querying the remote server.

#### F.35.1.8. Importing Options

`postgres_fdw` is able to import foreign table definitions using [IMPORT FOREIGN SCHEMA](sql-importforeignschema.html). This command creates foreign table definitions on the local server that match tables or views present on the remote server. If the remote tables to be imported have columns of user-defined data types, the local server must have compatible types of the same names.

Importing behavior can be customized with the following options (given in the `IMPORT FOREIGN SCHEMA` command):

`import_collate`

This option controls whether column `COLLATE` options are included in the definitions of foreign tables imported from a foreign server. The default is `true`. You might need to turn this off if the remote server has a different set of collation names than the local server does, which is likely to be the case if it's running on a different operating system. If you do so, however, there is a very severe risk that the imported table columns' collations will not match the underlying data, resulting in anomalous query behavior.

Even when this parameter is set to `true`, importing columns whose collation is the remote server's default can be risky. They will be imported with `COLLATE "default"`, which will select the local server's default collation, which could be different.

`import_default`

This option controls whether column `DEFAULT` expressions are included in the definitions of foreign tables imported from a foreign server. The default is `false`. If you enable this option, be wary of defaults that might get computed differently on the local server than they would be on the remote server; `nextval()` is a common source of problems. The `IMPORT` will fail altogether if an imported default expression uses a function or operator that does not exist locally.

`import_generated`

This option controls whether column `GENERATED` expressions are included in the definitions of foreign tables imported from a foreign server. The default is `true`. The `IMPORT` will fail altogether if an imported generated expression uses a function or operator that does not exist locally.

`import_not_null`

This option controls whether column `NOT NULL` constraints are included in the definitions of foreign tables imported from a foreign server. The default is `true`.

Note that constraints other than `NOT NULL` will never be imported from the remote tables. Although PostgreSQL does support check constraints on foreign tables, there is no provision for importing them automatically, because of the risk that a constraint expression could evaluate differently on the local and remote servers. Any such inconsistency in the behavior of a check constraint could lead to hard-to-detect errors in query optimization. So if you wish to import check constraints, you must do so manually, and you should verify the semantics of each one carefully. For more detail about the treatment of check constraints on foreign tables, see [CREATE FOREIGN TABLE](sql-createforeigntable.html).

Tables or foreign tables which are partitions of some other table are imported only when they are explicitly specified in a `LIMIT TO` clause. Otherwise they are automatically excluded from [IMPORT FOREIGN SCHEMA](sql-importforeignschema.html). Since all data can be accessed through the partitioned table which is the root of the partitioning hierarchy, importing only partitioned tables should allow access to all the data without creating extra objects.
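For instance, a sketch of an import restricted to two tables, with default expressions imported as well (the schema and table names are hypothetical):

```
IMPORT FOREIGN SCHEMA remote_schema
    LIMIT TO (films, film_categories)
    FROM SERVER film_server INTO local_schema
    OPTIONS (import_default 'true');
```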
#### F.35.1.9. Connection Management Options

By default, all connections that `postgres_fdw` establishes to foreign servers are kept open in the local session for re-use.

`keep_connections`

This option controls whether `postgres_fdw` keeps the connections to the foreign server open so that subsequent queries can re-use them. It can only be specified for a foreign server. The default is `on`. If set to `off`, all connections to this foreign server will be discarded at the end of each transaction.

### F.35.2. Functions

`postgres_fdw_get_connections(OUT server_name text, OUT valid boolean) returns setof record`

This function returns the foreign server names of all the open connections that `postgres_fdw` established from the local session to the foreign servers. It also returns whether each connection is valid or not. `false` is returned if the foreign server connection is used in the current local transaction but its foreign server or user mapping is changed or dropped (note that the server name of an invalid connection will be `NULL` if the server is dropped), and then such an invalid connection will be closed at the end of that transaction. `true` is returned otherwise. If there are no open connections, no record is returned. Example usage of the function:

```
postgres=# SELECT * FROM postgres_fdw_get_connections() ORDER BY 1;
 server_name | valid
-------------+-------
 loopback1   | t
 loopback2   | f
```

`postgres_fdw_disconnect(server_name text) returns boolean`

This function discards the open connections that `postgres_fdw` established from the local session to the foreign server with the given name. Connections that are in use in the current local transaction are not disconnected. It returns `true` if at least one connection was disconnected, otherwise `false`.

`postgres_fdw_disconnect_all() returns boolean`

This function works like `postgres_fdw_disconnect`, but it discards the open connections to all foreign servers.

### F.35.3. Connection Management

`postgres_fdw` establishes a connection to a foreign server during the first query that uses a foreign table associated with the foreign server. By default this connection is kept and re-used for subsequent queries in the same session. This behavior can be controlled using the `keep_connections` option for a foreign server. If multiple user identities (user mappings) are used to access the foreign server, a connection is established for each user mapping.

When changing the definition of or removing a foreign server or a user mapping, the associated connections are closed. But note that if any connections are in use in the current local transaction, they are kept until the end of the transaction. Closed connections will be re-established when they are necessary by future queries using a foreign table.

Once a connection to a foreign server has been established, it's by default kept until the local or corresponding remote session exits. To disconnect a connection explicitly, the `keep_connections` option for a foreign server may be disabled, or the `postgres_fdw_disconnect` and `postgres_fdw_disconnect_all` functions may be used. For example, these are useful to close connections that are no longer necessary, thereby releasing connections on the foreign server.

### F.35.4. Transaction Management

During a query that references any remote tables on a foreign server, `postgres_fdw` opens a transaction on the remote server if one is not already open corresponding to the current local transaction. The remote transaction is committed or aborted when the local transaction commits or aborts. Savepoints are similarly managed by creating corresponding remote savepoints.

The remote transaction uses `SERIALIZABLE` isolation level when the local transaction has `SERIALIZABLE` isolation level; otherwise it uses `REPEATABLE READ` isolation level.
This choice ensures that if a query performs multiple table scans on the remote server, it will get snapshot-consistent results for all the scans. A consequence is that successive queries within a single transaction will see the same data from the remote server, even if concurrent updates are occurring on the remote server due to other activities. That behavior would be expected anyway if the local transaction uses `SERIALIZABLE` or `REPEATABLE READ` isolation level, but it might be surprising for a `READ COMMITTED` local transaction. A future PostgreSQL release might modify these rules. + + Note that it is currently not supported by `postgres_fdw` to prepare the remote transaction for two-phase commit. + +### F.35.5. Remote Query Optimization + +`postgres_fdw` attempts to optimize remote queries to reduce the amount of data transferred from foreign servers. This is done by sending query `WHERE` clauses to the remote server for execution, and by not retrieving table columns that are not needed for the current query. To reduce the risk of misexecution of queries, `WHERE` clauses are not sent to the remote server unless they use only data types, operators, and functions that are built-in or belong to an extension that's listed in the foreign server's `extensions` option. Operators and functions in such clauses must be `IMMUTABLE` as well. For an `UPDATE` or `DELETE` query, `postgres_fdw` attempts to optimize the query execution by sending the whole query to the remote server if there are no query `WHERE` clauses that cannot be sent to the remote server, no local joins for the query, no row-level local `BEFORE` or `AFTER` triggers or stored generated columns on the target table, and no `CHECK OPTION` constraints from parent views. In `UPDATE`, expressions to assign to target columns must use only built-in data types, `IMMUTABLE` operators, or `IMMUTABLE` functions, to reduce the risk of misexecution of the query. + + When `postgres_fdw` encounters a join between foreign tables on the same foreign server, it sends the entire join to the foreign server, unless for some reason it believes that it will be more efficient to fetch rows from each table individually, or unless the table references involved are subject to different user mappings. While sending the `JOIN` clauses, it takes the same precautions as mentioned above for the `WHERE` clauses. + + The query that is actually sent to the remote server for execution can be examined using `EXPLAIN VERBOSE`. + +### F.35.6. Remote Query Execution Environment + + In the remote sessions opened by `postgres_fdw`, the [search\_path](runtime-config-client.html#GUC-SEARCH-PATH) parameter is set to just `pg_catalog`, so that only built-in objects are visible without schema qualification. This is not an issue for queries generated by `postgres_fdw` itself, because it always supplies such qualification. However, this can pose a hazard for functions that are executed on the remote server via triggers or rules on remote tables. For example, if a remote table is actually a view, any functions used in that view will be executed with the restricted search path. It is recommended to schema-qualify all names in such functions, or else attach `SET search_path` options (see [CREATE FUNCTION](sql-createfunction.html)) to such functions to establish their expected search path environment. 
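For example, a function that might run on the remote side via a trigger or view could be pinned to a known search path like this (a sketch; the function name is illustrative):

```
-- ensure unqualified names in the function body resolve predictably
ALTER FUNCTION my_schema.log_changes() SET search_path = my_schema, pg_temp;
```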
`postgres_fdw` likewise establishes remote session settings for various parameters:

* [TimeZone](runtime-config-client.html#GUC-TIMEZONE) is set to `UTC`

* [DateStyle](runtime-config-client.html#GUC-DATESTYLE) is set to `ISO`

* [IntervalStyle](runtime-config-client.html#GUC-INTERVALSTYLE) is set to `postgres`

* [extra\_float\_digits](runtime-config-client.html#GUC-EXTRA-FLOAT-DIGITS) is set to `3` for remote servers 9.0 and newer and is set to `2` for older versions

These are less likely to be problematic than `search_path`, but can be handled with function `SET` options if the need arises.

It is *not* recommended that you override this behavior by changing the session-level settings of these parameters; that is likely to cause `postgres_fdw` to malfunction.

### F.35.7. Cross-Version Compatibility

`postgres_fdw` can be used with remote servers dating back to PostgreSQL 8.3. Read-only capability is available back to 8.1. A limitation however is that `postgres_fdw` generally assumes that immutable built-in functions and operators are safe to send to the remote server for execution, if they appear in a `WHERE` clause for a foreign table. Thus, a built-in function that was added since the remote server's release might be sent to it for execution, resulting in “function does not exist” or a similar error. This type of failure can be worked around by rewriting the query, for example by embedding the foreign table reference in a sub-`SELECT` with `OFFSET 0` as an optimization fence, and placing the problematic function or operator outside the sub-`SELECT`.

### F.35.8. Examples

Here is an example of creating a foreign table with `postgres_fdw`. First install the extension:

```
CREATE EXTENSION postgres_fdw;
```

Then create a foreign server using [CREATE SERVER](sql-createserver.html). In this example we wish to connect to a PostgreSQL server on host `192.83.123.89` listening on port `5432`. The database to which the connection is made is named `foreign_db` on the remote server:

```
CREATE SERVER foreign_server
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host '192.83.123.89', port '5432', dbname 'foreign_db');
```

A user mapping, defined with [CREATE USER MAPPING](sql-createusermapping.html), is needed as well to identify the role that will be used on the remote server:

```
CREATE USER MAPPING FOR local_user
    SERVER foreign_server
    OPTIONS (user 'foreign_user', password 'password');
```

Now it is possible to create a foreign table with [CREATE FOREIGN TABLE](sql-createforeigntable.html). In this example we wish to access the table named `some_schema.some_table` on the remote server. The local name for it will be `foreign_table`:

```
CREATE FOREIGN TABLE foreign_table (
    id integer NOT NULL,
    data text
)
    SERVER foreign_server
    OPTIONS (schema_name 'some_schema', table_name 'some_table');
```

It's essential that the data types and other properties of the columns declared in `CREATE FOREIGN TABLE` match the actual remote table. Column names must match as well, unless you attach `column_name` options to the individual columns to show how they are named in the remote table. In many cases, use of [`IMPORT FOREIGN SCHEMA`](sql-importforeignschema.html) is preferable to constructing foreign table definitions manually.
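Once created, the foreign table can be queried like an ordinary table, and `EXPLAIN VERBOSE` shows the query actually shipped to the remote server (see Section F.35.5). A brief sketch using the table defined above:

```
SELECT data FROM foreign_table WHERE id = 42;

-- inspect the remote SQL generated for this query
EXPLAIN VERBOSE SELECT data FROM foreign_table WHERE id = 42;
```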
### F.35.9. Author

Shigeru Hanada `<[shigeru.hanada@gmail.com](mailto:shigeru.hanada@gmail.com)>`

diff --git a/docs/X/queries-table-expressions.md b/docs/en/queries-table-expressions.md similarity index 100% rename from docs/X/queries-table-expressions.md rename to docs/en/queries-table-expressions.md diff --git a/docs/en/queries-table-expressions.zh.md b/docs/en/queries-table-expressions.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..77829cf1f809453acccf021cf071258baea4325e --- /dev/null +++ b/docs/en/queries-table-expressions.zh.md @@ -0,0 +1,440 @@

## 7.2. Table Expressions

[7.2.1. The `FROM` Clause](queries-table-expressions.html#QUERIES-FROM)

[7.2.2. The `WHERE` Clause](queries-table-expressions.html#QUERIES-WHERE)

[7.2.3. The `GROUP BY` and `HAVING` Clauses](queries-table-expressions.html#QUERIES-GROUP)

[7.2.4. `GROUPING SETS`, `CUBE`, and `ROLLUP`](queries-table-expressions.html#QUERIES-GROUPING-SETS)

[7.2.5. Window Function Processing](queries-table-expressions.html#QUERIES-WINDOW)

A *table expression* computes a table. The table expression contains a `FROM` clause that is optionally followed by `WHERE`, `GROUP BY`, and `HAVING` clauses. Trivial table expressions simply refer to a table on disk, a so-called base table, but more complex expressions can be used to modify or combine base tables in various ways.

The optional `WHERE`, `GROUP BY`, and `HAVING` clauses in the table expression specify a pipeline of successive transformations performed on the table derived in the `FROM` clause. All these transformations produce a virtual table that provides the rows that are passed to the select list to compute the output rows of the query.

### 7.2.1. The `FROM` Clause

The [`FROM`](sql-select.html#SQL-FROM) clause derives a table from one or more other tables given in a comma-separated table reference list.

```
FROM table_reference [, table_reference [, ...]]
```

A table reference can be a table name (possibly schema-qualified), or a derived table such as a subquery, a `JOIN` construct, or complex combinations of these. If more than one table reference is listed in the `FROM` clause, the tables are cross-joined (that is, the Cartesian product of their rows is formed; see below). The result of the `FROM` list is an intermediate virtual table that can then be subject to transformations by the `WHERE`, `GROUP BY`, and `HAVING` clauses and is finally the result of the overall table expression.

When a table reference names a table that is the parent of a table inheritance hierarchy, the table reference produces rows of not only that table but all of its descendant tables, unless the key word `ONLY` precedes the table name. However, the reference produces only the columns that appear in the named table — any columns added in subtables are ignored.

Instead of writing `ONLY` before the table name, you can write `*` after the table name to explicitly specify that descendant tables are included. There is no real reason to use this syntax any more, because searching descendant tables is now always the default behavior. However, it is supported for compatibility with older releases.

#### 7.2.1.1. Joined Tables

A joined table is a table derived from two other (real or derived) tables according to the rules of the particular join type. Inner, outer, and cross-joins are available. The general syntax of a joined table is

```
T1 join_type T2 [ join_condition ]
```

Joins of all types can be chained together, or nested: either or both *`T1`* and *`T2`* can be joined tables. Parentheses can be used around `JOIN` clauses to control the join order. In the absence of parentheses, `JOIN` clauses nest left-to-right.

**Join Types**

Cross join

```
T1 CROSS JOIN T2
```

For every possible combination of rows from *`T1`* and *`T2`* (i.e., a Cartesian product), the joined table will contain a row consisting of all columns in *`T1`* followed by all columns in *`T2`*. If the tables have N and M rows respectively, the joined table will have N * M rows.

`FROM T1 CROSS JOIN T2` is equivalent to `FROM T1 INNER JOIN T2 ON TRUE` (see below). It is also equivalent to `FROM T1, T2`.

### Note

This latter equivalence does not hold exactly when more than two tables appear, because `JOIN` binds more tightly than comma. For example `FROM T1 CROSS JOIN T2 INNER JOIN T3 ON condition` is not the same as `FROM T1, T2 INNER JOIN T3 ON condition` because the *`condition`* can reference *`T1`* in the first case but not the second.
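A sketch of the difference (assuming three tables that each have a column `x`; the names are illustrative):

```
-- t1 is part of the JOIN's left-hand input, so the ON condition may reference it
SELECT * FROM t1 CROSS JOIN t2 INNER JOIN t3 ON t3.x = t1.x;

-- here the JOIN covers only t2 and t3, so referencing t1 in ON raises an error
SELECT * FROM t1, t2 INNER JOIN t3 ON t3.x = t1.x;
```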
Qualified joins

```
T1 { [INNER] | { LEFT | RIGHT | FULL } [OUTER] } JOIN T2 ON boolean_expression
T1 { [INNER] | { LEFT | RIGHT | FULL } [OUTER] } JOIN T2 USING ( join column list )
T1 NATURAL { [INNER] | { LEFT | RIGHT | FULL } [OUTER] } JOIN T2
```

The words `INNER` and `OUTER` are optional in all forms. `INNER` is the default; `LEFT`, `RIGHT`, and `FULL` imply an outer join.

The *join condition* is specified in the `ON` or `USING` clause, or implicitly by the word `NATURAL`. The join condition determines which rows from the two source tables are considered to “match”, as explained in detail below.

The possible types of qualified join are:

`INNER JOIN`

For each row R1 of T1, the joined table has a row for each row in T2 that satisfies the join condition with R1.

`LEFT OUTER JOIN`

First, an inner join is performed. Then, for each row in T1 that does not satisfy the join condition with any row in T2, a joined row is added with null values in columns of T2. Thus, the joined table always has at least one row for each row in T1.

`RIGHT OUTER JOIN`

First, an inner join is performed. Then, for each row in T2 that does not satisfy the join condition with any row in T1, a joined row is added with null values in columns of T1. This is the converse of a left join: the result table will always have a row for each row in T2.

`FULL OUTER JOIN`

First, an inner join is performed. Then, for each row in T1 that does not satisfy the join condition with any row in T2, a joined row is added with null values in columns of T2. Also, for each row of T2 that does not satisfy the join condition with any row in T1, a joined row with null values in the columns of T1 is added.

The `ON` clause is the most general kind of join condition: it takes a Boolean value expression of the same kind as is used in a `WHERE` clause. A pair of rows from *`T1`* and *`T2`* match if the `ON` expression evaluates to true.

The `USING` clause is a shorthand that allows you to take advantage of the specific situation where both sides of the join use the same name for the joining column(s). It takes a comma-separated list of the shared column names and forms a join condition that includes an equality comparison for each one. For example, joining *`T1`* and *`T2`* with `USING (a, b)` produces the join condition `ON T1.a = T2.a AND T1.b = T2.b`.

Furthermore, the output of `JOIN USING` suppresses redundant columns: there is no need to print both of the matched columns, since they must have equal values. While `JOIN ON` produces all columns from *`T1`* followed by all columns from *`T2`*, `JOIN USING` produces one output column for each of the listed column pairs (in the listed order), followed by any remaining columns from *`T1`*, followed by any remaining columns from *`T2`*.

Finally, `NATURAL` is a shorthand form of `USING`: it forms a `USING` list consisting of all column names that appear in both input tables. As with `USING`, these columns appear only once in the output table. If there are no common column names, `NATURAL JOIN` behaves like `JOIN ... ON TRUE`, producing a cross-product join.

### Note

`USING` is reasonably safe from column changes in the joined relations since only the listed columns are combined. `NATURAL` is considerably more risky since any schema changes to either relation that cause a new matching column name to be present will cause the join to combine that new column as well.

To put this together, assume we have tables `t1`:

```
 num | name
-----+------
   1 | a
   2 | b
   3 | c
```

and `t2`:

```
 num | value
-----+-------
   1 | xxx
   3 | yyy
   5 | zzz
```

Then we get, for example, the following results:

```
=> SELECT * FROM t1 INNER JOIN t2 ON t1.num = t2.num;
 num | name | num | value
-----+------+-----+-------
   1 | a    |   1 | xxx
   3 | c    |   3 | yyy
(2 rows)

=> SELECT * FROM t1 LEFT JOIN t2 USING (num);
 num | name | value
-----+------+-------
   1 | a    | xxx
   2 | b    |
   3 | c    | yyy
(3 rows)
```

#### 7.2.1.2. Table and Column Aliases

A temporary name can be given to tables and complex table references to be used for references to the derived table in the rest of the query. This is called a *table alias*.

To create a table alias, write

```
FROM table_reference AS alias
```

or

```
FROM table_reference alias
```

The `AS` key word is optional noise. *`alias`* can be any identifier.

A typical application of table aliases is to assign short identifiers to long table names to keep the join clauses readable. For example:

```
SELECT * FROM some_very_long_table_name s JOIN another_fairly_long_name a ON s.id = a.num;
```

The alias becomes the new name of the table reference so far as the current query is concerned — it is not allowed to refer to the table by the original name elsewhere in the query. Thus, this is not valid:

```
SELECT * FROM my_table AS m WHERE my_table.a > 5;    -- wrong
```

Table aliases are mainly for notational convenience, but it is necessary to use them when joining a table to itself, e.g.:

```
SELECT * FROM people AS mother JOIN people AS child ON mother.id = child.mother_id;
```

Additionally, an alias is required if the table reference is a subquery (see [Section 7.2.1.3](queries-table-expressions.html#QUERIES-SUBQUERIES)).

Parentheses are used to resolve ambiguities. In the following example, the first statement assigns the alias `b` to the second instance of `my_table`, but the second statement assigns the alias to the result of the join:

```
SELECT * FROM my_table AS a CROSS JOIN my_table AS b ...
SELECT * FROM (my_table AS a CROSS JOIN my_table) AS b ...
```
Another form of table aliasing gives temporary names to the columns of the table, as well as the table itself:

```
FROM table_reference [AS] alias ( column1 [, column2 [, ...]] )
```

If fewer column aliases are specified than the actual table has columns, the remaining columns are not renamed. This syntax is especially useful for self-joins or subqueries.

When an alias is applied to the output of a `JOIN` clause, the alias hides the original name(s) within the `JOIN`. For example:

```
SELECT a.* FROM my_table AS a JOIN your_table AS b ON ...
```

is valid SQL, but:

```
SELECT a.* FROM (my_table AS a JOIN your_table AS b ON ...) AS c
```

is not valid; the table alias `a` is not visible outside the alias `c`.

#### 7.2.1.3. Subqueries

Subqueries specifying a derived table must be enclosed in parentheses and *must* be assigned a table alias name (as in [Section 7.2.1.2](queries-table-expressions.html#QUERIES-TABLE-ALIASES)). For example:

```
FROM (SELECT * FROM table1) AS alias_name
```

This example is equivalent to `FROM table1 AS alias_name`. More interesting cases, which cannot be reduced to a plain join, arise when the subquery involves grouping or aggregation.

A subquery can also be a `VALUES` list:

```
FROM (VALUES ('anne', 'smith'), ('bob', 'jones'), ('joe', 'blow'))
     AS names(first, last)
```

Again, a table alias is required. Assigning alias names to the columns of the `VALUES` list is optional, but is good practice. For more information see [Section 7.7](queries-values.html).

#### 7.2.1.4. Table Functions

Table functions are functions that produce a set of rows, made up of either base data types (scalar types) or composite data types (table rows). They are used like a table, view, or subquery in the `FROM` clause of a query. Columns returned by table functions can be included in `SELECT`, `JOIN`, or `WHERE` clauses in the same manner as columns of a table, view, or subquery.

Table functions may also be combined using the `ROWS FROM` syntax, with the results returned in parallel columns; the number of result rows in this case is that of the largest function result, with smaller results padded with null values to match.

```
function_call [WITH ORDINALITY] [[AS] table_alias [(column_alias [, ...])]]
ROWS FROM( function_call [, ...] ) [WITH ORDINALITY] [[AS] table_alias [(column_alias [, ...])]]
```

If the `WITH ORDINALITY` clause is specified, an additional column of type `bigint` will be added to the function result columns. This column numbers the rows of the function result set, starting from 1. (This is a generalization of the SQL-standard syntax for `UNNEST ... WITH ORDINALITY`.) By default, the ordinal column is called `ordinality`, but a different column name can be assigned to it using an `AS` clause.

The special table function `UNNEST` may be called with any number of array parameters, and it returns a corresponding number of columns, as if `UNNEST` ([Section 9.19](functions-array.html)) had been called on each parameter separately and combined using the `ROWS FROM` construct.

```
UNNEST( array_expression [, ...] ) [WITH ORDINALITY] [[AS] table_alias [(column_alias [, ...])]]
```

If no *`table_alias`* is specified, the function name is used as the table name; in the case of a `ROWS FROM()` construct, the first function's name is used.

If column aliases are not supplied, then for a function returning a base data type, the column name is also the same as the function name. For a function returning a composite type, the result columns get the names of the individual attributes of the type.
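For instance, a minimal sketch of `UNNEST ... WITH ORDINALITY` with renamed output columns:

```
SELECT * FROM unnest(ARRAY['a','b','c']) WITH ORDINALITY AS t(elem, n);

 elem | n
------+---
 a    | 1
 b    | 2
 c    | 3
```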
Some examples:

```
CREATE TABLE foo (fooid int, foosubid int, fooname text);

CREATE FUNCTION getfoo(int) RETURNS SETOF foo AS $$
    SELECT * FROM foo WHERE fooid = $1;
$$ LANGUAGE SQL;

SELECT * FROM getfoo(1) AS t1;

SELECT * FROM foo
    WHERE foosubid IN (
        SELECT foosubid
        FROM getfoo(foo.fooid) z
        WHERE z.fooid = foo.fooid
    );

CREATE VIEW vw_getfoo AS SELECT * FROM getfoo(1);

SELECT * FROM vw_getfoo;
```

In some cases it is useful to define table functions that can return different column sets depending on how they are invoked. To support this, the table function can be declared as returning the pseudo-type `record` with no `OUT` parameters. When such a function is used in a query, the expected row structure must be specified in the query itself, so that the system can know how to parse and plan the query. This syntax looks like:

```
function_call [AS] alias (column_definition [, ...])
function_call AS [alias] (column_definition [, ...])
ROWS FROM( ... function_call AS (column_definition [, ...]) [, ...] )
```

When not using the `ROWS FROM()` syntax, the *`column_definition`* list replaces the column alias list that could otherwise be attached to the `FROM` item; the names in the column definitions serve as column aliases. When using the `ROWS FROM()` syntax, a *`column_definition`* list can be attached to each member function separately; or if there is only one member function and no `WITH ORDINALITY` clause, a *`column_definition`* list can be written in place of a column alias list following `ROWS FROM()`.

Consider this example:

```
SELECT *
    FROM dblink('dbname=mydb', 'SELECT proname, prosrc FROM pg_proc')
      AS t1(proname name, prosrc text)
    WHERE proname LIKE 'bytea%';
```

The [dblink](contrib-dblink-function.html) function (part of the [dblink](dblink.html) module) executes a remote query. It is declared to return `record` since it might be used for any kind of query. The actual column set must be specified in the calling query so that the parser knows, for example, what `*` should expand to.

This example uses `ROWS FROM`:

```
SELECT *
FROM ROWS FROM
    (
        json_to_recordset('[{"a":40,"b":"foo"},{"a":"100","b":"bar"}]')
            AS (a INTEGER, b TEXT),
        generate_series(1, 3)
    ) AS x (p, q, s)
ORDER BY p;

  p  |  q  | s
-----+-----+---
  40 | foo | 1
 100 | bar | 2
     |     | 3
```

#### 7.2.1.5. `LATERAL` Subqueries

Subqueries appearing in `FROM` can be preceded by the key word `LATERAL`. This allows them to reference columns provided by preceding `FROM` items. (Without `LATERAL`, each subquery is evaluated independently and so cannot cross-reference any other `FROM` item.)

Table functions appearing in `FROM` can also be preceded by the key word `LATERAL`, but for functions the key word is optional; the function's arguments can contain references to columns provided by preceding `FROM` items in any case.
A `LATERAL` item can appear at top level in the `FROM` list, or within a `JOIN` tree. In the latter case it can also refer to any items that are on the left-hand side of a `JOIN` that it is on the right-hand side of.

When a `FROM` item contains `LATERAL` cross-references, evaluation proceeds as follows: for each row of the `FROM` item providing the cross-referenced column(s), or set of rows of multiple `FROM` items providing the columns, the `LATERAL` item is evaluated using that row or row set's values of the columns. The resulting row(s) are joined as usual with the rows they were computed from. This is repeated for each row or set of rows from the column source table(s).

A trivial example of `LATERAL` is

```
SELECT * FROM foo, LATERAL (SELECT * FROM bar WHERE bar.id = foo.bar_id) ss;
```

This is not especially useful since it has exactly the same result as the more conventional

```
SELECT * FROM foo, bar WHERE bar.id = foo.bar_id;
```

`LATERAL` is primarily useful when the cross-referenced column is necessary for computing the row(s) to be joined. A common application is providing an argument value for a set-returning function. For example, supposing that `vertices(polygon)` returns the set of vertices of a polygon, we could identify close-together vertices of polygons stored in a table with:

```
SELECT p1.id, p2.id, v1, v2
FROM polygons p1, polygons p2,
     LATERAL vertices(p1.poly) v1,
     LATERAL vertices(p2.poly) v2
WHERE (v1 <-> v2) < 10 AND p1.id != p2.id;
```

This query could also be written

```
SELECT p1.id, p2.id, v1, v2
FROM polygons p1 CROSS JOIN LATERAL vertices(p1.poly) v1,
     polygons p2 CROSS JOIN LATERAL vertices(p2.poly) v2
WHERE (v1 <-> v2) < 10 AND p1.id != p2.id;
```

or in several other equivalent formulations. (As already mentioned, the `LATERAL` key word is unnecessary in this example, but we use it for clarity.)

It is often particularly handy to `LEFT JOIN` to a `LATERAL` subquery, so that source rows will appear in the result even if the `LATERAL` subquery produces no rows for them. For example, if `get_product_names()` returns the names of products made by a manufacturer, but some manufacturers in our table currently produce no products, we could find out which ones those are like this:

```
SELECT m.name
FROM manufacturers m LEFT JOIN LATERAL get_product_names(m.id) pname ON true
WHERE pname IS NULL;
```

### 7.2.2. The `WHERE` Clause

The syntax of the [`WHERE`](sql-select.html#SQL-WHERE) clause is

```
WHERE search_condition
```

where *`search_condition`* is any value expression (see [Section 4.2](sql-expressions.html)) that returns a value of type `boolean`.

After the processing of the `FROM` clause is done, each row of the derived virtual table is checked against the search condition. If the result of the condition is true, the row is kept in the output table, otherwise (i.e., if the result is false or null) it is discarded. The search condition typically references at least one column of the table generated in the `FROM` clause; this is not required, but otherwise the `WHERE` clause will be fairly useless.

### Note

The join condition of an inner join can be written either in the `WHERE` clause or in the `JOIN` clause. For example, these table expressions are equivalent:

```
FROM a, b WHERE a.id = b.id AND b.val > 5
```

and:

```
FROM a INNER JOIN b ON (a.id = b.id) WHERE b.val > 5
```

or perhaps even:

```
FROM a NATURAL JOIN b WHERE b.val > 5
```

Which one of these you use is mainly a matter of style. The `JOIN` syntax in the `FROM` clause is probably not as portable to other SQL database management systems, even though it is in the SQL standard. For outer joins there is no choice: they must be done in the `FROM` clause. The `ON` or `USING` clause of an outer join is *not* equivalent to a `WHERE` condition, because it results in the addition of rows (for unmatched input rows) as well as the removal of rows in the final result.

Here are some examples of `WHERE` clauses:

```
SELECT ... FROM fdt WHERE c1 > 5

SELECT ... FROM fdt WHERE c1 IN (1, 2, 3)

SELECT ... FROM fdt WHERE c1 IN (SELECT c1 FROM t2)

SELECT ... FROM fdt WHERE c1 IN (SELECT c3 FROM t2 WHERE c2 = fdt.c1 + 10)

SELECT ... FROM fdt WHERE c1 BETWEEN (SELECT c3 FROM t2 WHERE c2 = fdt.c1 + 10) AND 100

SELECT ... FROM fdt WHERE EXISTS (SELECT c1 FROM t2 WHERE c2 > fdt.c1)
```

`fdt` is the table derived in the `FROM` clause. Rows that do not meet the search condition of the `WHERE` clause are eliminated from `fdt`. Notice the use of scalar subqueries as value expressions. Just like any other query, the subqueries can employ complex table expressions. Notice also how `fdt` is referenced in the subqueries. Qualifying `c1` as `fdt.c1` is only necessary if `c1` is also the name of a column in the derived input table of the subquery. But qualifying the column name adds clarity even when it is not needed. This example shows how the column naming scope of an outer query extends into its inner queries.

### 7.2.3. The `GROUP BY` and `HAVING` Clauses

After passing the `WHERE` filter, the derived input table might be subject to grouping, using the `GROUP BY` clause, and elimination of group rows using the `HAVING` clause.

```
SELECT select_list
    FROM ...
    [WHERE ...]
    GROUP BY grouping_column_reference [, grouping_column_reference]...
```
The [`GROUP BY`](sql-select.html#SQL-GROUPBY) clause is used to group together those rows in a table that have the same values in all the columns listed. The order in which the columns are listed does not matter. The effect is to combine each set of rows having common values into one group row that represents all rows in the group. This is done to eliminate redundancy in the output and/or compute aggregates that apply to these groups. For instance:

```
=> SELECT * FROM test1;
 x | y
---+---
 a | 3
 c | 2
 b | 5
 a | 1
(4 rows)

=> SELECT x FROM test1 GROUP BY x;
 x
---
 a
 b
 c
(3 rows)

=> SELECT x, sum(y) FROM test1 GROUP BY x;
 x | sum
---+-----
 a |   4
 b |   5
 c |   2
(3 rows)
```

Here `sum` is an aggregate function that computes a single value over the entire group.

### Tip

Grouping without aggregate expressions effectively calculates the set of distinct values in a column. This can also be achieved using the `DISTINCT` clause (see [Section 7.3.3](queries-select-lists.html#QUERIES-DISTINCT)).

Here is another example: it calculates the total sales for each product (rather than the total sales of all products):

```
SELECT product_id, p.name, (sum(s.units) * p.price) AS sales
    FROM products p LEFT JOIN sales s USING (product_id)
    GROUP BY product_id, p.name, p.price;
```

In this example, the columns `product_id`, `p.name`, and `p.price` must be in the `GROUP BY` clause since they are referenced in the query select list (but see below). The column `s.units` does not have to be in the `GROUP BY` list since it is only used in an aggregate expression (`sum(...)`), which represents the sales of a product. For each product, the query returns a summary row about all sales of the product.

If the products table is set up so that, say, `product_id` is the primary key, then it would be enough to group by `product_id` in the above example, since name and price would be *functionally dependent* on the product ID, and so there would be no ambiguity about which name and price value to return for each product ID group.

In strict SQL, `GROUP BY` can only group by columns of the source table but PostgreSQL extends this to also allow `GROUP BY` to group by columns in the select list. Grouping by value expressions instead of simple column names is also allowed.

If a table has been grouped using `GROUP BY`, but only certain groups are of interest, the `HAVING` clause can be used, much like a `WHERE` clause, to eliminate groups from the result. The syntax is:

```
SELECT select_list FROM ... [WHERE ...] GROUP BY ... HAVING boolean_expression
```

Expressions in the `HAVING` clause can refer both to grouped expressions and to ungrouped expressions (which necessarily involve an aggregate function).

Example:

```
=> SELECT x, sum(y) FROM test1 GROUP BY x HAVING sum(y) > 3;
 x | sum
---+-----
 a |   4
 b |   5
(2 rows)
```

### 7.2.4. `GROUPING SETS`, `CUBE`, and `ROLLUP`

More complex grouping operations than those described above are possible using the concept of *grouping sets*. The data selected by the `FROM` and `WHERE` clauses is grouped separately by each specified grouping set, aggregates are computed for each group just as for simple `GROUP BY` clauses, and then the results are returned. For example:

```
=> SELECT * FROM items_sold;
 brand | size | sales
-------+------+-------
 Foo   | L    |    10
 Foo   | M    |    20
 Bar   | M    |    15
 Bar   | L    |     5
(4 rows)

=> SELECT brand, size, sum(sales) FROM items_sold GROUP BY GROUPING SETS ((brand), (size), ());
 brand | size | sum
-------+------+-----
 Foo   |      |  30
 Bar   |      |  20
       | L    |  15
       | M    |  35
       |      |  50
(5 rows)
```

### Note

The construct `(a, b)` is normally recognized in expressions as a [row constructor](sql-expressions.html#SQL-SYNTAX-ROW-CONSTRUCTORS). Within the `GROUP BY` clause, this does not apply at the top levels of expressions, and `(a, b)` is parsed as a list of expressions as described above. If for some reason you *need* a row constructor in a grouping expression, use `ROW(a, b)`.
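The related shorthands `ROLLUP` and `CUBE` expand to common grouping-set patterns; a brief sketch using the `items_sold` table above:

```
-- GROUP BY ROLLUP (brand, size) is equivalent to
-- GROUP BY GROUPING SETS ((brand, size), (brand), ())
SELECT brand, size, sum(sales) FROM items_sold GROUP BY ROLLUP (brand, size);

-- GROUP BY CUBE (brand, size) is equivalent to
-- GROUP BY GROUPING SETS ((brand, size), (brand), (size), ())
SELECT brand, size, sum(sales) FROM items_sold GROUP BY CUBE (brand, size);
```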
### 7.2.5. Window Function Processing

If the query contains any window functions (see [Section 3.5](tutorial-window.html), [Section 9.22](functions-window.html) and [Section 4.2.8](sql-expressions.html#SYNTAX-WINDOW-FUNCTIONS)), these functions are evaluated after any grouping, aggregation, and `HAVING` filtering is performed. That is, if the query uses any aggregates, `GROUP BY`, or `HAVING`, then the rows seen by the window functions are the group rows instead of the original table rows from `FROM`/`WHERE`.

When multiple window functions are used, all the window functions having syntactically equivalent `PARTITION BY` and `ORDER BY` clauses in their window definitions are guaranteed to be evaluated in a single pass over the data. Therefore they will see the same sort ordering, even if the `ORDER BY` does not uniquely determine an ordering. However, no guarantees are made about the evaluation of functions having different `PARTITION BY` or `ORDER BY` specifications. (In such cases a sort step is typically required between the passes of window function evaluations, and the sort is not guaranteed to preserve ordering of rows that its `ORDER BY` sees as equivalent.)

Currently, window functions always require presorted data, and so the query output will be ordered according to one or another of the window functions' `PARTITION BY`/`ORDER BY` clauses. It is not recommended to rely on this, however. Use an explicit top-level `ORDER BY` clause if you want to be sure the results are sorted in a particular way.

diff --git a/docs/X/queries-with.md b/docs/en/queries-with.md similarity index 100% rename from docs/X/queries-with.md rename to docs/en/queries-with.md diff --git a/docs/en/queries-with.zh.md b/docs/en/queries-with.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..ce77d3eb4c81930074b1f4a41bda06f72e7f3908 --- /dev/null +++ b/docs/en/queries-with.zh.md @@ -0,0 +1,402 @@

## 7.8. `WITH` Queries (Common Table Expressions)

[7.8.1. `SELECT` in `WITH`](queries-with.html#QUERIES-WITH-SELECT)

[7.8.2. Recursive Queries](queries-with.html#QUERIES-WITH-RECURSIVE)

[7.8.3. Common Table Expression Materialization](queries-with.html#id-1.5.6.12.7)

[7.8.4. Data-Modifying Statements in `WITH`](queries-with.html#QUERIES-WITH-MODIFYING)

`WITH` provides a way to write auxiliary statements for use in a larger query. These statements, which are often referred to as Common Table Expressions or CTEs, can be thought of as defining temporary tables that exist just for one query. Each auxiliary statement in a `WITH` clause can be a `SELECT`, `INSERT`, `UPDATE`, or `DELETE`; and the `WITH` clause itself is attached to a primary statement that can also be a `SELECT`, `INSERT`, `UPDATE`, or `DELETE`.

### 7.8.1. `SELECT` in `WITH`

The basic value of `SELECT` in `WITH` is to break down complicated queries into simpler parts. An example is:

```
WITH regional_sales AS (
    SELECT region, SUM(amount) AS total_sales
    FROM orders
    GROUP BY region
), top_regions AS (
    SELECT region
    FROM regional_sales
    WHERE total_sales > (SELECT SUM(total_sales)/10 FROM regional_sales)
)
SELECT region,
       product,
       SUM(quantity) AS product_units,
       SUM(amount) AS product_sales
FROM orders
WHERE region IN (SELECT region FROM top_regions)
GROUP BY region, product;
```

which displays per-product sales totals in only the top sales regions. The `WITH` clause defines two auxiliary statements named `regional_sales` and `top_regions`, where the output of `regional_sales` is used in `top_regions` and the output of `top_regions` is used in the primary `SELECT` query. This example could have been written without `WITH`, but we'd have needed two levels of nested sub-`SELECT`s. It's a bit easier to follow this way.

### 7.8.2. Recursive Queries

The optional `RECURSIVE` modifier changes `WITH` from a mere syntactic convenience into a feature that accomplishes things not otherwise possible in standard SQL. Using `RECURSIVE`, a `WITH` query can refer to its own output. A very simple example is this query to sum the integers from 1 through 100:

```
WITH RECURSIVE t(n) AS (
    VALUES (1)
  UNION ALL
    SELECT n+1 FROM t WHERE n < 100
)
SELECT sum(n) FROM t;
```

The general form of a recursive `WITH` query is always a *non-recursive term*, then `UNION` (or `UNION ALL`), then a *recursive term*, where only the recursive term can contain a reference to the query's own output. Such a query is executed as follows:

**Recursive Query Evaluation**

1. Evaluate the non-recursive term. For `UNION` (but not `UNION ALL`), discard duplicate rows. Include all remaining rows in the result of the recursive query, and also place them in a temporary *working table*.

2. So long as the working table is not empty, repeat these steps:

   1. Evaluate the recursive term, substituting the current contents of the working table for the recursive self-reference.
For`UNION`(but not`UNION ALL`), discard duplicate rows and rows that duplicate any previous result row. Include all remaining rows in the result of the recursive query, and also place them in a temporary*intermediate table*. + + 2. Replace the contents of the working table with the contents of the intermediate table, then empty the intermediate table. + +### Note + +Strictly speaking, this process is iteration not recursion, but`RECURSIVE`is the terminology chosen by the SQL standards committee. + +In the example above, the working table has just a single row in each step, and it takes on the values from 1 through 100 in successive steps. In the 100th step, there is no output because of the`WHERE`clause, and so the query terminates. + +Recursive queries are typically used to deal with hierarchical or tree-structured data. A useful example is this query to find all the direct and indirect sub-parts of a product, given only a table that shows immediate inclusions: + +``` +WITH RECURSIVE included_parts(sub_part, part, quantity) AS ( + SELECT sub_part, part, quantity FROM parts WHERE part = 'our_product' + UNION ALL + SELECT p.sub_part, p.part, p.quantity + FROM included_parts pr, parts p + WHERE p.part = pr.sub_part +) +SELECT sub_part, SUM(quantity) as total_quantity +FROM included_parts +GROUP BY sub_part +``` + +#### 7.8.2.1. Search Order + +When computing a tree traversal using a recursive query, you might want to order the results in either depth-first or breadth-first order. This can be done by computing an ordering column alongside the other data columns and using that to sort the results at the end. Note that this does not actually control in which order the query evaluation visits the rows; that is as always in SQL implementation-dependent. This approach merely provides a convenient way to order the results afterwards. + +To create a depth-first order, we compute for each result row an array of rows that we have visited so far. For example, consider the following query that searches a table`tree`using a`link`field: + +``` +WITH RECURSIVE search_tree(id, link, data) AS ( + SELECT t.id, t.link, t.data + FROM tree t + UNION ALL + SELECT t.id, t.link, t.data + FROM tree t, search_tree st + WHERE t.id = st.link +) +SELECT * FROM search_tree; +``` + +To add depth-first ordering information, you can write this: + +``` +WITH RECURSIVE search_tree(id, link, data, path) AS ( + SELECT t.id, t.link, t.data, ARRAY[t.id] + FROM tree t + UNION ALL + SELECT t.id, t.link, t.data, path || t.id + FROM tree t, search_tree st + WHERE t.id = st.link +) +SELECT * FROM search_tree ORDER BY path; +``` + +In the general case where more than one field needs to be used to identify a row, use an array of rows. For example, if we needed to track fields`f1`and`f2`: + +``` +WITH RECURSIVE search_tree(id, link, data, path) AS ( + SELECT t.id, t.link, t.data, ARRAY[ROW(t.f1, t.f2)] + FROM tree t + UNION ALL + SELECT t.id, t.link, t.data, path || ROW(t.f1, t.f2) + FROM tree t, search_tree st + WHERE t.id = st.link +) +SELECT * FROM search_tree ORDER BY path; +``` + +### Tip + +Omit the`ROW()`syntax in the common case where only one field needs to be tracked. This allows a simple array rather than a composite-type array to be used, gaining efficiency. 
+ +To create a breadth-first order, you can add a column that tracks the depth of the search, for example: + +``` +WITH RECURSIVE search_tree(id, link, data, depth) AS ( + SELECT t.id, t.link, t.data, 0 + FROM tree t + UNION ALL + SELECT t.id, t.link, t.data, depth + 1 + FROM tree t, search_tree st + WHERE t.id = st.link +) +SELECT * FROM search_tree ORDER BY depth; +``` + +To get a stable sort, add data columns as secondary sorting columns. + +### Tip + +The recursive query evaluation algorithm produces its output in breadth-first search order. However, this is an implementation detail and it is perhaps unsound to rely on it. The order of the rows within each level is certainly undefined, so some explicit ordering might be desired in any case. + +There is built-in syntax to compute a depth- or breadth-first sort column. For example: + +``` +WITH RECURSIVE search_tree(id, link, data) AS ( + SELECT t.id, t.link, t.data + FROM tree t + UNION ALL + SELECT t.id, t.link, t.data + FROM tree t, search_tree st + WHERE t.id = st.link +) SEARCH DEPTH FIRST BY id SET ordercol +SELECT * FROM search_tree ORDER BY ordercol; + +WITH RECURSIVE search_tree(id, link, data) AS ( + SELECT t.id, t.link, t.data + FROM tree t + UNION ALL + SELECT t.id, t.link, t.data + FROM tree t, search_tree st + WHERE t.id = st.link +) SEARCH BREADTH FIRST BY id SET ordercol +SELECT * FROM search_tree ORDER BY ordercol; +``` + +This syntax is internally expanded to something similar to the above hand-written forms. The`SEARCH`clause specifies whether depth- or breadth first search is wanted, the list of columns to track for sorting, and a column name that will contain the result data that can be used for sorting. That column will implicitly be added to the output rows of the CTE. + +#### 7.8.2.2. Cycle Detection + +When working with recursive queries it is important to be sure that the recursive part of the query will eventually return no tuples, or else the query will loop indefinitely. Sometimes, using`UNION`instead of`UNION ALL`can accomplish this by discarding rows that duplicate previous output rows. However, often a cycle does not involve output rows that are completely duplicate: it may be necessary to check just one or a few fields to see if the same point has been reached before. The standard method for handling such situations is to compute an array of the already-visited values. For example, consider again the following query that searches a table`graph`using a`link`field: + +``` +WITH RECURSIVE search_graph(id, link, data, depth) AS ( + SELECT g.id, g.link, g.data, 0 + FROM graph g + UNION ALL + SELECT g.id, g.link, g.data, sg.depth + 1 + FROM graph g, search_graph sg + WHERE g.id = sg.link +) +SELECT * FROM search_graph; +``` + +This query will loop if the`link`relationships contain cycles. 
Because we require a “depth” output, just changing `UNION ALL` to `UNION` would not eliminate the looping. Instead we need to recognize whether we have reached the same row again while following a particular path of links. We add two columns `is_cycle` and `path` to the loop-prone query:

```
WITH RECURSIVE search_graph(id, link, data, depth, is_cycle, path) AS (
    SELECT g.id, g.link, g.data, 0,
      false,
      ARRAY[g.id]
    FROM graph g
  UNION ALL
    SELECT g.id, g.link, g.data, sg.depth + 1,
      g.id = ANY(path),
      path || g.id
    FROM graph g, search_graph sg
    WHERE g.id = sg.link AND NOT is_cycle
)
SELECT * FROM search_graph;
```

Aside from preventing cycles, the array value is often useful in its own right as representing the “path” taken to reach any particular row.

In the general case where more than one field needs to be checked to recognize a cycle, use an array of rows. For example, if we needed to compare fields `f1` and `f2`:

```
WITH RECURSIVE search_graph(id, link, data, depth, is_cycle, path) AS (
    SELECT g.id, g.link, g.data, 0,
      false,
      ARRAY[ROW(g.f1, g.f2)]
    FROM graph g
  UNION ALL
    SELECT g.id, g.link, g.data, sg.depth + 1,
      ROW(g.f1, g.f2) = ANY(path),
      path || ROW(g.f1, g.f2)
    FROM graph g, search_graph sg
    WHERE g.id = sg.link AND NOT is_cycle
)
SELECT * FROM search_graph;
```

### Tip

Omit the `ROW()` syntax in the common case where only one field needs to be checked to recognize a cycle. This allows a simple array rather than a composite-type array to be used, gaining efficiency.

There is built-in syntax to simplify cycle detection. The above query can also be written like this:

```
WITH RECURSIVE search_graph(id, link, data, depth) AS (
    SELECT g.id, g.link, g.data, 1
    FROM graph g
  UNION ALL
    SELECT g.id, g.link, g.data, sg.depth + 1
    FROM graph g, search_graph sg
    WHERE g.id = sg.link
) CYCLE id SET is_cycle USING path
SELECT * FROM search_graph;
```

and it will be internally rewritten to the above form. The `CYCLE` clause specifies first the list of columns to track for cycle detection, then a column name that will show whether a cycle has been detected, and finally the name of another column that will track the path. The cycle and path columns will implicitly be added to the output rows of the CTE.

### Tip

The cycle path column is computed in the same way as the depth-first ordering column shown in the previous section. A query can have both a `SEARCH` and a `CYCLE` clause, but a depth-first search specification and a cycle detection specification would create redundant computations, so it's more efficient to just use the `CYCLE` clause and order by the path column. If breadth-first ordering is wanted, then specifying both `SEARCH` and `CYCLE` can be useful.

A helpful trick for testing queries when you are not certain if they might loop is to place a `LIMIT` in the parent query. For example, this query would loop forever without the `LIMIT`:

```
WITH RECURSIVE t(n) AS (
    SELECT 1
  UNION ALL
    SELECT n+1 FROM t
)
SELECT n FROM t LIMIT 100;
```

This works because PostgreSQL's implementation evaluates only as many rows of a `WITH` query as are actually fetched by the parent query. Using this trick in production is not recommended, because other systems might work differently. Also, it usually won't work if you make the outer query sort the recursive query's results or join them to some other table, because in such cases the outer query will usually try to fetch all of the `WITH` query's output anyway.

### 7.8.3. Common Table Expression Materialization

A useful property of `WITH` queries is that they are normally evaluated only once per execution of the parent query, even if they are referred to more than once by the parent query or sibling `WITH` queries. Thus, expensive calculations that are needed in multiple places can be placed within a `WITH` query to avoid redundant work. Another possible application is to prevent unwanted multiple evaluations of functions with side-effects. However, the other side of this coin is that the optimizer is not able to push restrictions from the parent query down into a multiply-referenced `WITH` query, since that might affect all uses of the `WITH` query's output when it should affect only one. The multiply-referenced `WITH` query will be evaluated as written, without suppression of rows that the parent query might discard afterwards. (But, as mentioned above, evaluation might stop early if the reference(s) to the query demand only a limited number of rows.)
However, if a `WITH` query is non-recursive and side-effect-free (that is, it is a `SELECT` containing no volatile functions) then it can be folded into the parent query, allowing joint optimization of the two query levels. By default, this happens if the parent query references the `WITH` query just once, but not if it references the `WITH` query more than once. You can override that decision by specifying `MATERIALIZED` to force separate calculation of the `WITH` query, or by specifying `NOT MATERIALIZED` to force it to be merged into the parent query. The latter choice risks duplicate computation of the `WITH` query, but it can still give a net savings if each usage of the `WITH` query needs only a small part of the `WITH` query's full output.

A simple example of these rules is

```
WITH w AS (
    SELECT * FROM big_table
)
SELECT * FROM w WHERE key = 123;
```

This `WITH` query will be folded, producing the same execution plan as

```
SELECT * FROM big_table WHERE key = 123;
```

In particular, if there's an index on `key`, it will probably be used to fetch just the rows having `key = 123`. On the other hand, in

```
WITH w AS (
    SELECT * FROM big_table
)
SELECT * FROM w AS w1 JOIN w AS w2 ON w1.key = w2.ref
WHERE w2.key = 123;
```

the `WITH` query will be materialized, producing a temporary copy of `big_table` that is then joined with itself — without benefit of any index. This query will be executed much more efficiently if written as

```
WITH w AS NOT MATERIALIZED (
    SELECT * FROM big_table
)
SELECT * FROM w AS w1 JOIN w AS w2 ON w1.key = w2.ref
WHERE w2.key = 123;
```

so that the parent query's restrictions can be applied directly to scans of `big_table`.

An example where `NOT MATERIALIZED` could be undesirable is

```
WITH w AS (
    SELECT key, very_expensive_function(val) as f FROM some_table
)
SELECT * FROM w AS w1 JOIN w AS w2 ON w1.f = w2.f;
```

Here, materialization of the `WITH` query ensures that `very_expensive_function` is evaluated only once per table row, not twice.

The examples above only show `WITH` being used with `SELECT`, but it can be attached in the same way to `INSERT`, `UPDATE`, or `DELETE`. In each case it effectively provides temporary table(s) that can be referred to in the main command.

### 7.8.4. Data-Modifying Statements in `WITH`

You can use data-modifying statements (`INSERT`, `UPDATE`, or `DELETE`) in `WITH`. This allows you to perform several different operations in the same query. An example is:

```
WITH moved_rows AS (
    DELETE FROM products
    WHERE
        "date" >= '2010-10-01' AND
        "date" < '2010-11-01'
    RETURNING *
)
INSERT INTO products_log
SELECT * FROM moved_rows;
```

This query effectively moves rows from `products` to `products_log`. The `DELETE` in `WITH` deletes the specified rows from `products`, returning their contents by means of its `RETURNING` clause; and then the primary query reads that output and inserts it into `products_log`.

A fine point of the above example is that the `WITH` clause is attached to the `INSERT`, not the sub-`SELECT` within the `INSERT`. This is necessary because data-modifying statements are only allowed in `WITH` clauses that are attached to the top-level statement. However, normal `WITH` visibility rules apply, so it is possible to refer to the `WITH` statement's output from the sub-`SELECT`.

Data-modifying statements in `WITH` usually have `RETURNING` clauses (see [Section 6.4](dml-returning.html)), as shown in the example above. It is the output of the `RETURNING` clause, *not* the target table of the data-modifying statement, that forms the temporary table that can be referred to by the rest of the query. If a data-modifying statement in `WITH` lacks a `RETURNING` clause, then it forms no temporary table and cannot be referred to in the rest of the query. Such a statement will be executed nonetheless. A not-particularly-useful example is:

```
WITH t AS (
    DELETE FROM foo
)
DELETE FROM bar;
```

This example would remove all rows from tables `foo` and `bar`. The number of affected rows reported to the client would only include rows removed from `bar`.
Recursive self-references in data-modifying statements are not allowed. In some cases it is possible to work around this limitation by referring to the output of a recursive `WITH`, for example:

```
WITH RECURSIVE included_parts(sub_part, part) AS (
    SELECT sub_part, part FROM parts WHERE part = 'our_product'
  UNION ALL
    SELECT p.sub_part, p.part
    FROM included_parts pr, parts p
    WHERE p.part = pr.sub_part
)
DELETE FROM parts
  WHERE part IN (SELECT part FROM included_parts);
```

This query would remove all direct and indirect subparts of a product.

Data-modifying statements in `WITH` are executed exactly once, and always to completion, independently of whether the primary query reads all (or indeed any) of their output. Notice that this is different from the rule for `SELECT` in `WITH`: as stated in the previous section, execution of a `SELECT` is carried only as far as the primary query demands its output.

The sub-statements in `WITH` are executed concurrently with each other and with the main query. Therefore, when using data-modifying statements in `WITH`, the order in which the specified updates actually happen is unpredictable. All the statements are executed with the same *snapshot* (see [Chapter 13](mvcc.html)), so they cannot “see” one another's effects on the target tables. This alleviates the effects of the unpredictability of the actual order of row updates, and means that `RETURNING` data is the only way to communicate changes between different `WITH` sub-statements and the main query. An example of this is that in

```
WITH t AS (
    UPDATE products SET price = price * 1.05
    RETURNING *
)
SELECT * FROM products;
```

the outer `SELECT` would return the original prices before the action of the `UPDATE`, while in

```
WITH t AS (
    UPDATE products SET price = price * 1.05
    RETURNING *
)
SELECT * FROM t;
```

the outer `SELECT` would return the updated data.

Trying to update the same row twice in a single statement is not supported. Only one of the modifications takes place, but it is not easy (and sometimes not possible) to reliably predict which one. This also applies to deleting a row that was already updated in the same statement: only the update is performed. Therefore you should generally avoid trying to modify a single row twice in a single statement. In particular avoid writing `WITH` sub-statements that could affect the same rows changed by the main statement or a sibling sub-statement. The effects of such a statement will not be predictable.

At present, any table used as the target of a data-modifying statement in `WITH` must not have a conditional rule, nor an `ALSO` rule, nor an `INSTEAD` rule that expands to multiple statements.

diff --git a/docs/X/query-path.md b/docs/en/query-path.md similarity index 100% rename from docs/X/query-path.md rename to docs/en/query-path.md diff --git a/docs/en/query-path.zh.md b/docs/en/query-path.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..c089e8e13e7b38e1885b75591d57a18c8cd619d8 --- /dev/null +++ b/docs/en/query-path.zh.md @@ -0,0 +1,19 @@

## 51.1. The Path of a Query

Here we give a short overview of the stages a query has to pass to obtain a result.

1. A connection from an application program to the PostgreSQL server has to be established. The application program transmits a query to the server and waits to receive the results sent back by the server.

2. The *parser stage* checks the query transmitted by the application program for correct syntax and creates a *query tree*.

3. The *rewrite system* takes the query tree created by the parser stage and looks for any *rules* (stored in the *system catalogs*) to apply to the query tree. It performs the transformations given in the *rule bodies*.

   One application of the rewrite system is in the realization of *views*. Whenever a query against a view (i.e., a *virtual table*) is made, the rewrite system rewrites the user's query to a query that accesses the *base tables* given in the *view definition* instead.

4. The *planner/optimizer* takes the (rewritten) query tree and creates a *query plan* that will be the input to the *executor*.

   It does so by first creating all possible *paths* leading to the same result. For example, if there is an index on a relation to be scanned, there are two paths for the scan. One possibility is a simple sequential scan and the other possibility is to use the index. Next the cost for the execution of each path is estimated, and the cheapest path is chosen. The cheapest path is expanded into a complete plan that the executor can use (a small `EXPLAIN` sketch follows this list).

5. The executor recursively steps through the *plan tree* and retrieves rows in the way represented by the plan. The executor makes use of the *storage system* while scanning relations, performs *sorts* and *joins*, evaluates *qualifications*, and finally hands back the rows derived.

In the following sections we will cover each of the items listed above in more detail to give a better understanding of PostgreSQL's internal control and data structures.
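For example, the plan that the planner finally settles on can be inspected with `EXPLAIN` (a sketch; the table name is hypothetical):

```
EXPLAIN SELECT * FROM foo WHERE id = 42;          -- show the chosen plan
EXPLAIN ANALYZE SELECT * FROM foo WHERE id = 42;  -- also execute it, showing actual row counts and timings
```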
diff --git a/docs/X/querytree.md b/docs/en/querytree.md similarity index 100% rename from docs/X/querytree.md rename to docs/en/querytree.md diff --git a/docs/en/querytree.zh.md b/docs/en/querytree.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..c8f990c445ec7b99d682682a12baeb658e54fa0f --- /dev/null +++ b/docs/en/querytree.zh.md @@ -0,0 +1,55 @@

## 41.1. The Query Tree

To understand how the rule system works it is necessary to know when it is invoked and what its input and results are.

The rule system is located between the parser and the planner. It takes the output of the parser, one query tree, and the user-defined rewrite rules, which are also query trees with some extra information, and creates zero or more query trees as result. So its input and output are always things the parser itself could have produced and thus, anything it sees is basically representable as an SQL statement.

Now what is a query tree? It is an internal representation of an SQL statement where the single parts that it is built from are stored separately. These query trees can be shown in the server log if you set the configuration parameters `debug_print_parse`, `debug_print_rewritten`, or `debug_print_plan`. The rule actions are also stored as query trees, in the system catalog `pg_rewrite`. They are not formatted like the log output, but they contain exactly the same information.

Reading a raw query tree requires some experience. But since SQL representations of query trees are sufficient to understand the rule system, this chapter will not teach how to read them.

When reading the SQL representations of the query trees in this chapter it is necessary to be able to identify the parts the statement is broken into when it is in the query tree structure. The parts of a query tree are

the command type

This is a simple value telling which command (`SELECT`, `INSERT`, `UPDATE`, `DELETE`) produced the query tree.

the range table

The range table is a list of relations that are used in the query. In a `SELECT` statement these are the relations given after the `FROM` key word.

Every range table entry identifies a table or view and tells by which name it is called in the other parts of the query. In the query tree, the range table entries are referenced by number rather than by name, so here it doesn't matter if there are duplicate names as it would in an SQL statement. This can happen after the range tables of rules have been merged in. The examples in this chapter will not have this situation.

the result relation

This is an index into the range table that identifies the relation where the results of the query go.

`SELECT` queries don't have a result relation. (The special case of `SELECT INTO` is mostly identical to `CREATE TABLE` followed by `INSERT ... SELECT`, and is not discussed separately here.)

For `INSERT`, `UPDATE`, and `DELETE` commands, the result relation is the table (or view!) where the changes are to take effect.

the target list

The target list is a list of expressions that define the result of the query.

In the case of a `SELECT`, these expressions are the ones that build the final output of the query. They correspond to the expressions between the key words `SELECT` and `FROM`. (`*` is just an abbreviation for all the column names of a relation. It is expanded by the parser into the individual columns, so the rule system never sees it.)

`DELETE` commands don't need a normal target list because they don't produce any result. Instead, the planner adds a special CTID entry to the empty target list, to allow the executor to find the row to be deleted. (CTID is added when the result relation is an ordinary table. If it is a view, a whole-row variable is added instead, by the rule system, as described in [Section 41.2.4](rules-views.html#RULES-VIEWS-UPDATE).)

For `INSERT` commands, the target list describes the new rows that should go into the result relation. It consists of the expressions in the `VALUES` clause or the ones from the `SELECT` clause in `INSERT ... SELECT`. The first step of the rewrite process adds target list entries for any columns that were not assigned to by the original command but have defaults. Any remaining columns (with neither a given value nor a default) will be filled in by the planner with a constant null expression.

For `UPDATE` commands, the target list describes the new rows that should replace the old ones. In the rule system, it contains just the expressions from the `SET column = expression` part of the command. The planner will handle missing columns by inserting expressions that copy the values from the old row into the new one. Just as for `DELETE`, a CTID or whole-row variable is added so that the executor can identify the old row to be updated.

Every entry in the target list contains an expression that can be a constant value, a variable pointing to a column of one of the relations in the range table, a parameter, or an expression tree made of function calls, constants, variables, operators, etc.

the qualification

The query's qualification is an expression much like one of those contained in the target list entries. The result value of this expression is a Boolean that tells whether the operation (`INSERT`, `UPDATE`, `DELETE`, or `SELECT`) for the final result row should be executed or not. It corresponds to the `WHERE` clause of an SQL statement.

the join tree

The query's join tree shows the structure of the `FROM` clause. For a simple query like `SELECT ... FROM a, b, c`, the join tree is just a list of the `FROM` items, because we are allowed to join them in any order.
But when `JOIN` expressions, in particular outer joins, are used, we have to join in the order shown by the joins. In that case, the join tree shows the structure of the `JOIN` expressions. The restrictions associated with particular `JOIN` clauses (from `ON` or `USING` expressions) are stored as qualification expressions attached to those join-tree nodes. It turns out to be convenient to store the top-level `WHERE` expression as a qualification attached to the top-level join-tree item, too. So really the join tree represents both the `FROM` and `WHERE` clauses of a `SELECT`.

the others

The other parts of the query tree like the `ORDER BY` clause aren't of interest here. The rule system substitutes some entries there while applying rules, but that doesn't have much to do with the fundamentals of the rule system.

diff --git a/docs/X/rangetypes.md b/docs/en/rangetypes.md similarity index 100% rename from docs/X/rangetypes.md rename to docs/en/rangetypes.md diff --git a/docs/en/rangetypes.zh.md b/docs/en/rangetypes.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..ede082c89691e4f937a62d9c0d17c43d8a985856 --- /dev/null +++ b/docs/en/rangetypes.zh.md @@ -0,0 +1,281 @@

## 8.17. Range Types

[8.17.1. Built-in Range and Multirange Types](rangetypes.html#RANGETYPES-BUILTIN)

[8.17.2. Examples](rangetypes.html#RANGETYPES-EXAMPLES)

[8.17.3. Inclusive and Exclusive Bounds](rangetypes.html#RANGETYPES-INCLUSIVITY)

[8.17.4. Infinite (Unbounded) Ranges](rangetypes.html#RANGETYPES-INFINITE)

[8.17.5. Range Input/Output](rangetypes.html#RANGETYPES-IO)

[8.17.6. Constructing Ranges and Multiranges](rangetypes.html#RANGETYPES-CONSTRUCT)

[8.17.7. Discrete Range Types](rangetypes.html#RANGETYPES-DISCRETE)

[8.17.8. Defining New Range Types](rangetypes.html#RANGETYPES-DEFINING)

[8.17.9. Indexing](rangetypes.html#RANGETYPES-INDEXING)

[8.17.10. Constraints on Ranges](rangetypes.html#RANGETYPES-CONSTRAINT)

Range types are data types representing a range of values of some element type (called the range's *subtype*). For instance, ranges of `timestamp` might be used to represent the ranges of time that a meeting room is reserved. In this case the data type is `tsrange` (short for “timestamp range”), and `timestamp` is the subtype. The subtype must have a total order so that it is well-defined whether element values are within, before, or after a range of values.

Range types are useful because they represent many element values in a single range value, and because concepts such as overlapping ranges can be expressed clearly. The use of time and date ranges for scheduling purposes is the clearest example; but price ranges, measurement ranges from an instrument, and so forth can also be useful.

Every range type has a corresponding multirange type. A multirange is an ordered list of non-contiguous, non-empty, non-null ranges. Most range operators also work on multiranges, and they have a few functions of their own.

### 8.17.1. Built-in Range and Multirange Types

PostgreSQL comes with the following built-in range types:

- `int4range` — Range of `integer`, `int4multirange` — corresponding Multirange

- `int8range` — Range of `bigint`, `int8multirange` — corresponding Multirange

- `numrange` — Range of `numeric`, `nummultirange` — corresponding Multirange

- `tsrange` — Range of `timestamp without time zone`, `tsmultirange` — corresponding Multirange

- `tstzrange` — Range of `timestamp with time zone`, `tstzmultirange` — corresponding Multirange

- `daterange` — Range of `date`, `datemultirange` — corresponding Multirange

In addition, you can define your own range types; see [CREATE TYPE](sql-createtype.html) for more information.

### 8.17.2. Examples

```
CREATE TABLE reservation (room int, during tsrange);
INSERT INTO reservation VALUES
    (1108, '[2010-01-01 14:30, 2010-01-01 15:30)');

-- Containment
SELECT int4range(10, 20) @> 3;

-- Overlaps
SELECT numrange(11.1, 22.2) && numrange(20.0, 30.0);

-- Extract the upper bound
SELECT upper(int8range(15, 25));

-- Compute the intersection
SELECT int4range(10, 20) * int4range(15, 25);

-- Is the range empty?
SELECT isempty(numrange(1, 5));
```

See [Table 9.53](functions-range.html#RANGE-OPERATORS-TABLE) and [Table 9.55](functions-range.html#RANGE-FUNCTIONS-TABLE) for complete lists of operators and functions on range types.
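Multiranges support most of the same operators; a brief sketch (these calls assume PostgreSQL 14 or later, where multirange types were introduced):

```
-- containment and overlap also work on multiranges
SELECT '{[3,7), [8,9)}'::int4multirange @> 8;
SELECT nummultirange(numrange(1, 3), numrange(5, 8)) && numrange(2, 4);
```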
+SELECT isempty(numrange(1, 5)); +``` + +See[Table 9.53](functions-range.html#RANGE-OPERATORS-TABLE)and[Table 9.55](functions-range.html#RANGE-FUNCTIONS-TABLE)for the complete lists of operators and functions on range types. + +### 8.17.3. Inclusive and Exclusive Bounds + +Every non-empty range has two bounds, the lower bound and the upper bound. All points between these values are included in the range. An inclusive bound means that the boundary point itself is included in the range as well, while an exclusive bound means that the boundary point is not included in the range. + +In the text form of a range, an inclusive lower bound is represented by “`[`” while an exclusive lower bound is represented by “`(`”. Likewise, an inclusive upper bound is represented by “`]`”, while an exclusive upper bound is represented by “`)`”. (See[Section 8.17.5](rangetypes.html#RANGETYPES-IO)for more details.) + +The functions`lower_inc`and`upper_inc`test the inclusivity of the lower and upper bounds of a range value, respectively. + +### 8.17.4. Infinite (Unbounded) Ranges + +The lower bound of a range can be omitted, meaning that all values less than the upper bound are included in the range, e.g.,`(,3]`. Likewise, if the upper bound of the range is omitted, then all values greater than the lower bound are included in the range. If both lower and upper bounds are omitted, all values of the element type are considered to be in the range. Specifying a missing bound as inclusive is automatically converted to exclusive, e.g.,`[,]`is converted to`(,)`. You can think of these missing values as +/-infinity, but they are special range type values and are considered to be beyond any range element type's +/-infinity values. + +Element types that have the notion of “infinity” can use them as explicit bound values. For example, with timestamp ranges,`[today,infinity)`excludes the special`timestamp`value`infinity`, while`[today,infinity]`includes it, as does`[today,)`and`[today,]`. + +The functions`lower_inf`and`upper_inf`test for infinite lower and upper bounds of a range, respectively. + +### 8.17.5. Range Input/Output + +The input for a range value must follow one of the following patterns: + +``` +(lower-bound,upper-bound) +(lower-bound,upper-bound] +[lower-bound,upper-bound) +[lower-bound,upper-bound] +empty +``` + +The parentheses or brackets indicate whether the lower and upper bounds are exclusive or inclusive, as described previously. Notice that the final pattern is`empty`, which represents an empty range (a range that contains no points). + +The*`lower-bound`*may be either a string that is valid input for the subtype, or empty to indicate no lower bound. Likewise,*`upper-bound`*may be either a string that is valid input for the subtype, or empty to indicate no upper bound. + +Each bound value can be quoted using`"`(double quote) characters. This is necessary if the bound value contains parentheses, brackets, commas, double quotes, or backslashes, since these characters would otherwise be taken as part of the range syntax. To put a double quote or backslash in a quoted bound value, precede it with a backslash. (Also, a pair of double quotes within a double-quoted bound value is taken to represent a double quote character, analogously to the rules for single quotes in SQL literal strings.) Alternatively, you can avoid quoting and use backslash-escaping to protect all data characters that would otherwise be taken as range syntax. Also, to write a bound value that is an empty string, write`""`, since writing nothing means an infinite bound. + +Whitespace is allowed before and after the range value, but any whitespace between the parentheses or brackets is taken as part of the lower or upper bound value. (Depending on the element type, it might or might not be significant.) + +### Note + +These rules are very similar to those for writing field values in composite-type literals. See[Section 8.16.6](rowtypes.html#ROWTYPES-IO-SYNTAX)for additional commentary.
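+ +For instance, a bound value containing a comma must be double-quoted, and an empty-string bound must be written as`""`. Here is a minimal sketch; it assumes a user-defined range type over`text`named`textrange`(no such type is built in, but one can be created as described in[Section 8.17.8](rangetypes.html#RANGETYPES-DEFINING)): + +``` +-- assumed for illustration only; not a built-in type +CREATE TYPE textrange AS RANGE (subtype = text); + +-- the comma inside the lower bound must be protected by double quotes +SELECT '["hello, world",zzz)'::textrange; + +-- an empty-string bound is written as "", since writing nothing means an infinite bound +SELECT '["",zzz)'::textrange; +```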
+ +Examples: + +``` +-- includes 3, does not include 7, and does include all points in between +SELECT '[3,7)'::int4range; + +-- does not include either 3 or 7, but includes all points in between +SELECT '(3,7)'::int4range; + +-- includes only the single point 4 +SELECT '[4,4]'::int4range; + +-- includes no points (and will be normalized to 'empty') +SELECT '[4,4)'::int4range; +``` + +The input for a multirange is curly brackets (`{`and`}`) containing zero or more valid ranges, separated by commas. Whitespace is permitted around the brackets and commas. This is intended to be reminiscent of array syntax, although multiranges are much simpler: they have just one dimension and there is no need to quote their contents. (The bounds of their ranges may be quoted as above however.) + +Examples: + +``` +SELECT '{}'::int4multirange; +SELECT '{[3,7)}'::int4multirange; +SELECT '{[3,7), [8,9)}'::int4multirange; +``` + +### 8.17.6. Constructing Ranges and Multiranges + +Each range type has a constructor function with the same name as the range type. Using the constructor function is frequently more convenient than writing a range literal constant, since it avoids the need for extra quoting of the bound values. The constructor function accepts two or three arguments. The two-argument form constructs a range in standard form (lower bound inclusive, upper bound exclusive), while the three-argument form constructs a range with bounds of the form specified by the third argument. The third argument must be one of the strings “`()`”, “`(]`”, “`[)`”, or “`[]`”. For example: + +``` +-- The full form is: lower bound, upper bound, and text argument indicating +-- inclusivity/exclusivity of bounds. +SELECT numrange(1.0, 14.0, '(]'); + +-- If the third argument is omitted, '[)' is assumed. +SELECT numrange(1.0, 14.0); + +-- Although '(]' is specified here, on display the value will be converted to +-- canonical form, since int8range is a discrete range type (see below). +SELECT int8range(1, 14, '(]'); + +-- Using NULL for either bound causes the range to be unbounded on that side. +SELECT numrange(NULL, 2.2); +``` + +Each range type also has a multirange constructor with the same name as the multirange type. The constructor function takes zero or more arguments which are all ranges of the appropriate type. For example: + +``` +SELECT nummultirange(); +SELECT nummultirange(numrange(1.0, 14.0)); +SELECT nummultirange(numrange(1.0, 14.0), numrange(20.0, 25.0)); +``` + +### 8.17.7. Discrete Range Types + +A discrete range is one whose element type has a well-defined “step”, such as`integer`or`date`. In these types two elements can be said to be adjacent, when there are no valid values between them. This contrasts with continuous ranges, where it's always (or almost always) possible to identify other element values between two given values. For example, a range over the`numeric`type is continuous, as is a range over`timestamp`. (Even though`timestamp`has limited precision, and so could theoretically be treated as discrete, it's better to consider it continuous since the step size is normally not of interest.) + +Another way to think about a discrete range type is that there is a clear idea of a “next” or “previous” value for each element value. Knowing that, it is possible to convert between inclusive and exclusive representations of a range's bounds, by choosing the next or previous element value instead of the one originally given. 
For example, in an integer range type`[4,8]`and`(3,9)`denote the same set of values; but this would not be so for a range over numeric. + +A discrete range type should have a*canonicalization*function that is aware of the desired step size for the element type. The canonicalization function is charged with converting equivalent values of the range type to have identical representations, in particular consistently inclusive or exclusive bounds. If a canonicalization function is not specified, then ranges with different formatting will always be treated as unequal, even though they might represent the same set of values in reality. + +The built-in range types`int4range`,`int8range`, and`daterange`all use a canonical form that includes the lower bound and excludes the upper bound; that is,`[)`. User-defined range types can use other conventions, however. + +### 8.17.8. Defining New Range Types + +Users can define their own range types. The most common reason to do this is to use ranges over subtypes not provided among the built-in range types. For example, to define a new range type of subtype`float8`: + +``` +CREATE TYPE floatrange AS RANGE ( + subtype = float8, + subtype_diff = float8mi +); + +SELECT '[1.234, 5.678]'::floatrange; +``` + +Because`float8`has no meaningful “step”, we do not define a canonicalization function in this example. + +When you define your own range you automatically get a corresponding multirange type. + +Defining your own range type also allows you to specify a different subtype B-tree operator class or collation to use, so as to change the sort ordering that determines which values fall into a given range. + +If the subtype is considered to have discrete rather than continuous values, the`CREATE TYPE`command should specify a`canonical`function. The canonicalization function takes an input range value, and must return an equivalent range value that may have different bounds and formatting. The canonical output for two ranges that represent the same set of values, for example the integer ranges`[1, 7]`and`[1, 8)`, must be identical. It doesn't matter which representation you choose to be the canonical one, so long as two equivalent values with different formattings are always mapped to the same value with the same formatting. In addition to adjusting the inclusive/exclusive bounds format, a canonicalization function might round off boundary values, in case the desired step size is larger than what the subtype is capable of storing. For instance, a range type over`timestamp`could be defined to have a step size of an hour, in which case the canonicalization function would need to round off bounds that weren't a multiple of an hour, or perhaps throw an error instead. + +In addition, any range type that is meant to be used with GiST or SP-GiST indexes should define a subtype difference, or`subtype_diff`, function. (The index will still work without`subtype_diff`, but it is likely to be considerably less efficient than if a difference function is provided.) The subtype difference function takes two input values of the subtype, and returns their difference (i.e.,*`X`*minus*`Y`*) represented as a`float8`value. In our example above, the function`float8mi`that underlies the regular`float8`minus operator can be used; but for any other subtype, some type conversion would be necessary. Some creative thought about how to represent differences as numbers might be needed, too. 
To the greatest extent possible, the`subtype_diff`function should agree with the sort ordering implied by the selected operator class and collation; that is, its result should be positive whenever its first argument is greater than its second according to the sort ordering. + +A less-oversimplified example of a`subtype_diff`function is: + +``` +CREATE FUNCTION time_subtype_diff(x time, y time) RETURNS float8 AS +'SELECT EXTRACT(EPOCH FROM (x - y))' LANGUAGE sql STRICT IMMUTABLE; + +CREATE TYPE timerange AS RANGE ( + subtype = time, + subtype_diff = time_subtype_diff +); + +SELECT '[11:10, 23:00]'::timerange; +``` + +See[CREATE TYPE](sql-createtype.html)for more information about creating range types. + +### 8.17.9. Indexing + +[](<>) + +GiST and SP-GiST indexes can be created for table columns of range types. GiST indexes can also be created for table columns of multirange types. For example, to create a GiST index: + +``` +CREATE INDEX reservation_idx ON reservation USING GIST (during); +``` + +A GiST or SP-GiST index on ranges can accelerate queries involving these range operators:`=`,`&&`,`<@`,`@>`,`<<`,`>>`,`-|-`,`&<`, and`&>`. A GiST index on multiranges can accelerate queries involving the same set of multirange operators. GiST indexes on ranges and GiST indexes on multiranges can also accelerate queries involving these cross-type range-to-multirange and multirange-to-range operators, correspondingly:`&&`,`<@`,`@>`,`<<`,`>>`,`-|-`,`&<`, and`&>`. See[Table 9.53](functions-range.html#RANGE-OPERATORS-TABLE)for more information. + +In addition, B-tree and hash indexes can be created for table columns of range types. For these index types, basically the only useful range operation is equality. There is a B-tree sort ordering defined for range values, with corresponding`<`and`>`operators, but the ordering is rather arbitrary and not usually useful in the real world. Range types' B-tree and hash support is primarily meant to allow sorting and hashing internally in queries, rather than creation of actual indexes. + +### 8.17.10. Constraints on Ranges + +[](<>) + +While`UNIQUE`is a natural constraint for scalar values, it is usually unsuitable for range types. Instead, an exclusion constraint is often more appropriate (see[CREATE TABLE ... CONSTRAINT ... EXCLUDE](sql-createtable.html#SQL-CREATETABLE-EXCLUDE)). Exclusion constraints allow the specification of constraints such as “non-overlapping” on a range type. For example: + +``` +CREATE TABLE reservation ( + during tsrange, + EXCLUDE USING GIST (during WITH &&) +); +``` + +That constraint will prevent any overlapping values from existing in the table at the same time: + +``` +INSERT INTO reservation VALUES + ('[2010-01-01 11:30, 2010-01-01 15:00)'); +INSERT 0 1 + +INSERT INTO reservation VALUES + ('[2010-01-01 14:45, 2010-01-01 15:45)'); +ERROR: conflicting key value violates exclusion constraint "reservation_during_excl" +DETAIL: Key (during)=(["2010-01-01 14:45:00","2010-01-01 15:45:00")) conflicts +with existing key (during)=(["2010-01-01 11:30:00","2010-01-01 15:00:00")). +``` + +You can use the[`btree_gist`](btree-gist.html)extension to define exclusion constraints on plain scalar data types, which can then be combined with range exclusions for maximum flexibility.
For example, after`btree_gist`is installed, the following constraint will reject overlapping ranges only if the meeting room numbers are equal: + +``` +CREATE EXTENSION btree_gist; +CREATE TABLE room_reservation ( + room text, + during tsrange, + EXCLUDE USING GIST (room WITH =, during WITH &&) +); + +INSERT INTO room_reservation VALUES + ('123A', '[2010-01-01 14:00, 2010-01-01 15:00)'); +INSERT 0 1 + +INSERT INTO room_reservation VALUES + ('123A', '[2010-01-01 14:30, 2010-01-01 15:30)'); +ERROR: conflicting key value violates exclusion constraint "room_reservation_room_during_excl" +DETAIL: Key (room, during)=(123A, ["2010-01-01 14:30:00","2010-01-01 15:30:00")) conflicts +with existing key (room, during)=(123A, ["2010-01-01 14:00:00","2010-01-01 15:00:00")). + +INSERT INTO room_reservation VALUES + ('123B', '[2010-01-01 14:30, 2010-01-01 15:30)'); +INSERT 0 1 +``` diff --git a/docs/X/regress-evaluation.md b/docs/en/regress-evaluation.md similarity index 100% rename from docs/X/regress-evaluation.md rename to docs/en/regress-evaluation.md diff --git a/docs/en/regress-evaluation.zh.md b/docs/en/regress-evaluation.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..a89abbb839674d101a8bab2ce3d58a3bd1c8ddc7 --- /dev/null +++ b/docs/en/regress-evaluation.zh.md @@ -0,0 +1,89 @@ +## 33.2. Test Evaluation + +[33.2.1. Error Message Differences](regress-evaluation.html#id-1.6.20.6.6) + +[33.2.2. Locale Differences](regress-evaluation.html#id-1.6.20.6.7) + +[33.2.3. Date and Time Differences](regress-evaluation.html#id-1.6.20.6.8) + +[33.2.4. Floating-Point Differences](regress-evaluation.html#id-1.6.20.6.9) + +[33.2.5. Row Ordering Differences](regress-evaluation.html#id-1.6.20.6.10) + +[33.2.6. Insufficient Stack Depth](regress-evaluation.html#id-1.6.20.6.11) + +[33.2.7. The “random” Test](regress-evaluation.html#id-1.6.20.6.12) + +[33.2.8. Configuration Parameters](regress-evaluation.html#id-1.6.20.6.13) + +Some properly installed and fully functional PostgreSQL installations can “fail” some of these regression tests due to platform-specific artifacts such as varying floating-point representation and message wording. The tests are currently evaluated using a simple`diff`comparison against the outputs generated on a reference system, so the results are sensitive to small system differences. When a test is reported as “failed”, always examine the differences between expected and actual results; you might find that the differences are not significant. Nonetheless, we still strive to maintain accurate reference files across all supported platforms, so it can be expected that all tests pass. + +The actual outputs of the regression tests are in files in the`src/test/regress/results`directory. The test script uses`diff`to compare each output file against the reference outputs stored in the`src/test/regress/expected`directory. Any differences are saved for your inspection in`src/test/regress/regression.diffs`. (When running a test suite other than the core tests, these files of course appear in the relevant subdirectory, not`src/test/regress`.) + +If you don't like the`diff`options that are used by default, set the environment variable`PG_REGRESS_DIFF_OPTS`, for instance`PG_REGRESS_DIFF_OPTS='-c'`. (Or you can run`diff`yourself, if you prefer.) + +If for some reason a particular platform generates a “failure” for a given test, but inspection of the output convinces you that the result is valid, you can add a new comparison file to silence the failure report in future test runs. 
See[Section 33.3](regress-variant.html)for details. + +### 33.2.1. Error Message Differences + +Some of the regression tests involve intentional invalid input values. Error messages can come from either the PostgreSQL code or from the host platform system routines. In the latter case, the messages can vary between platforms, but should reflect similar information. These differences in messages will result in a “failed” regression test that can be validated by inspection. + +### 33.2.2. Locale Differences + +If you run the tests against a server that was initialized with a collation-order locale other than C, then there might be differences due to sort order and subsequent failures. The regression test suite is set up to handle this problem by providing alternate result files that together are known to handle a large number of locales. + +To run the tests in a different locale when using the temporary-installation method, pass the appropriate locale-related environment variables on the`make`command line, for example: + +``` +make check LANG=de_DE.utf8 +``` + +(The regression test driver unsets`LC_ALL`, so it does not work to choose the locale using that variable.) To use no locale, either unset all locale-related environment variables (or set them to`C`) or use the following special invocation: + +``` +make check NO_LOCALE=1 +``` + +When running the tests against an existing installation, the locale setup is determined by the existing installation. To change it, initialize the database cluster with a different locale by passing the appropriate options to`initdb`. + +In general, it is advisable to try to run the regression tests in the locale setup that is wanted for production use, as this will exercise the locale- and encoding-related code portions that will actually be used in production. Depending on the operating system environment, you might get failures, but then you will at least know what locale-specific behaviors to expect when running real applications. + +### 33.2.3. Date and Time Differences + +Most of the date and time results are dependent on the time zone environment. The reference files are generated for time zone`PST8PDT`(Berkeley, California), and there will be apparent failures if the tests are not run with that time zone setting. The regression test driver sets environment variable`PGTZ`to`PST8PDT`, which normally ensures proper results. + +### 33.2.4. Floating-Point Differences + +Some of the tests involve computing 64-bit floating-point numbers (`double precision`) from table columns. Differences in results involving mathematical functions of`double precision`columns have been observed. The`float8`and`geometry`tests are particularly prone to small differences among platforms, or even with different compiler optimization settings. Human eyeball comparison is needed to determine the real significance of these differences, which are usually 10 places to the right of the decimal point. + +Some systems display minus zero as`-0`, while others just show`0`. + +Some systems signal errors from`pow()`and`exp()`differently from the mechanism expected by the current PostgreSQL code. + +### 33.2.5. Row Ordering Differences + +You might see differences in which the same rows are output in a different order than what appears in the expected file. In most cases this is not, strictly speaking, a bug. Most of the regression test scripts are not so pedantic as to use an`ORDER BY`for every single`SELECT`, and so their result row orderings are not well-defined according to the SQL specification. In practice, since we are looking at the same queries being executed on the same data by the same software, we usually get the same result ordering on all platforms, so the lack of`ORDER BY`is not a problem. Some queries do exhibit cross-platform ordering differences, however. When testing against an already-installed server, ordering differences can also be caused by non-C locale settings or non-default parameter settings, such as custom values of`work_mem`or the planner cost parameters. + +Therefore, if you see an ordering difference, it's not something to worry about, unless the query does have an`ORDER BY`that your result is violating.
However, please report it anyway, so that we can add an`ORDER BY`to that particular query to eliminate the bogus “failure” in future releases. + +You might wonder why we don't order all the regression test queries explicitly to get rid of this issue once and for all. The reason is that that would make the regression tests less useful, not more, since they'd tend to exercise query plan types that produce ordered results to the exclusion of those that don't. + +### 33.2.6. Insufficient Stack Depth + +If the`errors`test results in a server crash at the`select infinite_recurse()`command, it means that the platform's limit on process stack size is smaller than the[max_stack_depth](runtime-config-resource.html#GUC-MAX-STACK-DEPTH)parameter indicates. This can be fixed by running the server under a higher stack size limit (4MB is recommended with the default value of`max_stack_depth`). If you are unable to do that, an alternative is to reduce the value of`max_stack_depth`. + +On platforms supporting`getrlimit()`, the server should automatically choose a safe value of`max_stack_depth`; so unless you've manually overridden this setting, a failure of this kind is a reportable bug. + +### 33.2.7. The “random” Test + +The`random`test script is intended to produce random results. In very rare cases, this causes that regression test to fail. Typing: + +``` +diff results/random.out expected/random.out +``` + +should produce only one or a few lines of differences. You need not worry unless the random test fails repeatedly. + +### 33.2.8. Configuration Parameters + +When running the tests against an existing installation, some non-default parameter settings could cause the tests to fail. For example, changing parameters such as`enable_seqscan`or`enable_indexscan`could cause plan changes that would affect the results of tests that use`EXPLAIN`. diff --git a/docs/X/regress-run.md b/docs/en/regress-run.md similarity index 100% rename from docs/X/regress-run.md rename to docs/en/regress-run.md diff --git a/docs/en/regress-run.zh.md b/docs/en/regress-run.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..35905287c51fd7ae62f5a93e271c30fd0b7b70ee --- /dev/null +++ b/docs/en/regress-run.zh.md @@ -0,0 +1,211 @@ +## 33.1. Running the Tests + +[33.1.1. Running the Tests Against a Temporary Installation](regress-run.html#id-1.6.20.5.3) + +[33.1.2. Running the Tests Against an Existing Installation](regress-run.html#id-1.6.20.5.4) + +[33.1.3. Additional Test Suites](regress-run.html#id-1.6.20.5.5) + +[33.1.4. Locale and Encoding](regress-run.html#id-1.6.20.5.6) + +[33.1.5. Custom Server Settings](regress-run.html#id-1.6.20.5.7) + +[33.1.6. Extra Tests](regress-run.html#id-1.6.20.5.8) + +[33.1.7. Testing Hot Standby](regress-run.html#id-1.6.20.5.9) + +The regression tests can be run against an already installed and running server, or using a temporary installation within the build tree. Furthermore, there is a “parallel” and a “sequential” mode for running the tests. The sequential method runs each test script alone, while the parallel method starts up multiple server processes to run groups of tests in parallel. Parallel testing adds confidence that interprocess communication and locking are working correctly. + +### 33.1.1. Running the Tests Against a Temporary Installation + +To run the parallel regression tests after building but before installation, type: + +``` +make check +``` + +in the top-level directory. (Or you can change to`src/test/regress`and run the command there.)
At the end you should see something like: + +``` +======================= + All 193 tests passed. +======================= +``` + +or otherwise a note about which tests failed. See[Section 33.2](regress-evaluation.html)below before assuming that a “failure” represents a serious problem. + +Because this test method runs a temporary server, it will not work if you did the build as the root user, since the server will not start as root. Recommended procedure is not to do the build as root, or else to perform testing after completing the installation. + +If you have configured PostgreSQL to install into a location where an older PostgreSQL installation already exists, and you perform`make check`before installing the new version, you might find that the tests fail because the new programs try to use the already-installed shared libraries. (Typical symptoms are complaints about undefined symbols.) If you wish to run the tests before overwriting the old installation, you'll need to build with`configure --disable-rpath`. It is not recommended that you use this option for the final installation, however. + +The parallel regression test starts quite a few processes under your user ID. Presently, the maximum concurrency is twenty parallel test scripts, which means forty processes: there's a server process and a psql process for each test script. So if your system enforces a per-user limit on the number of processes, make sure this limit is at least fifty or so, else you might get random-seeming failures in the parallel test. If you are not in a position to raise the limit, you can cut down the degree of parallelism by setting the`MAX_CONNECTIONS`parameter. For example: + +``` +make MAX_CONNECTIONS=10 check +``` + +runs no more than ten tests concurrently. + +### 33.1.2. Running the Tests Against an Existing Installation + +To run the tests after installation (see[Chapter 17](installation.html)), initialize a data directory and start the server as explained in[Chapter 19](runtime.html), then type: + +``` +make installcheck +``` + +or for a parallel test: + +``` +make installcheck-parallel +``` + +The tests will expect to contact the server at the local host and the default port number, unless directed otherwise by`PGHOST`and`PGPORT`environment variables. The tests will be run in a database named`regression`; any existing database by this name will be dropped. + +The tests will also transiently create some cluster-wide objects, such as roles, tablespaces, and subscriptions. These objects will have names beginning with`regress_`. Beware of using`installcheck`mode with an installation that has any actual global objects named that way. + +### 33.1.3. Additional Test Suites + +The`make check`and`make installcheck`commands run only the “core” regression tests, which test built-in functionality of the PostgreSQL server. The source distribution contains many additional test suites, most of them having to do with add-on functionality such as optional procedural languages. + +To run all test suites applicable to the modules that have been selected to be built, including the core tests, type one of these commands at the top of the build tree: + +``` +make check-world +make installcheck-world +``` + +These commands run the tests using temporary servers or an already-installed server, respectively, just as previously explained for`make check`and`make installcheck`. Other considerations are the same as previously explained for each method. Note that`make check-world`builds a separate instance (temporary data directory) for each tested module, so it requires a lot more time and disk space than`make installcheck-world`.
+ +On a modern machine with multiple CPU cores and no tight operating-system limits, you can make things go substantially faster with parallelism. The recipe that most PostgreSQL developers actually use for running all tests is something like + +``` +make check-world -j8 >/dev/null +``` + +with a`-j`limit near to, or a bit more than, the number of available cores. Discarding stdout eliminates chatter that's not interesting when you just want to verify success. (In case of failure, the stderr messages are usually adequate to determine where to look.) + +Alternatively, you can run individual test suites by typing`make check`or`make installcheck`in the appropriate subdirectory of the build tree. Keep in mind that`make installcheck`assumes you've installed the relevant module(s), not only the core server. + +The additional tests that can be invoked this way include: + +- Regression tests for optional procedural languages. These are located under`src/pl`. + +- Regression tests for`contrib`modules, located under`contrib`. Not all`contrib`modules have tests. + +- Regression tests for the ECPG interface library, located in`src/interfaces/ecpg/test`. + +- Tests for core-supported authentication methods, located in`src/test/authentication`. (See below for additional authentication-related tests.) + +- Tests stressing behavior of concurrent sessions, located in`src/test/isolation`. + +- Tests of crash recovery and physical replication, located in`src/test/recovery`. + +- Tests of logical replication, located in`src/test/subscription`. + +- Tests of client programs, located under`src/bin`. + + When using`installcheck`mode, these tests will create and destroy test databases whose names include`regression`, for example`pl_regression`or`contrib_regression`. Beware of using`installcheck`mode with an installation that has any non-test databases named that way. + + Some of these auxiliary test suites use the TAP infrastructure explained in[Section 33.4](regress-tap.html). The TAP-based tests are run only when PostgreSQL was configured with the option`--enable-tap-tests`. This is recommended for development, but it can be omitted if there is no suitable Perl installation. + + Some test suites are not run by default, either because they are not secure to run on a multiuser system or because they require special software. You can decide which test suites to run additionally by setting the`make`or environment variable`PG_TEST_EXTRA`to a whitespace-separated list, for example: + +``` +make check-world PG_TEST_EXTRA='kerberos ldap ssl' +``` + +The following values are currently supported: + +`kerberos` + +Runs the test suite under`src/test/kerberos`. This requires an MIT Kerberos installation and opens TCP/IP listen sockets. + +`ldap` + +Runs the test suite under`src/test/ldap`. This requires an OpenLDAP installation and opens TCP/IP listen sockets. + +`ssl` + +Runs the test suite under`src/test/ssl`. This opens TCP/IP listen sockets. + +Tests for features that are not supported by the current build configuration are not run, even if they are mentioned in`PG_TEST_EXTRA`. + +In addition, there are tests in`src/test/modules`which will be run by`make check-world`but not by`make installcheck-world`. This is because they install non-production extensions or have other side-effects that are considered undesirable for a production installation. You can use`make install`and`make installcheck`in one of those subdirectories if you wish, but it's not recommended to do so with a non-test server. + +### 33.1.4. Locale and Encoding + +By default, tests using a temporary installation use the locale defined in the current environment and the corresponding database encoding as determined by`initdb`. It can be useful to test different locales by setting the appropriate environment variables, for example: + +``` +make check LANG=C +make check LC_COLLATE=en_US.utf8 LC_CTYPE=fr_CA.utf8 +``` + +For implementation reasons, setting`LC_ALL`does not work for this purpose; all the other locale-related environment variables do work. + +When testing against an existing installation, the locale is determined by the existing database cluster and cannot be set separately for the test run. + +You can also choose the database encoding explicitly by setting the variable`ENCODING`, for example: + +``` +make check LANG=C ENCODING=EUC_JP +``` + +Setting the database encoding this way typically only makes sense if the locale is C; otherwise the encoding is chosen automatically from the locale, and specifying an encoding that does not match the locale will result in an error. + +The database encoding can be set for tests against either a temporary or an existing installation, though in the latter case it must be compatible with the installation's locale. + +### 33.1.5. Custom Server Settings
+ +Custom server settings to use when running a regression test suite can be set in the`PGOPTIONS`environment variable (for settings that allow this): + +``` +make check PGOPTIONS="-c force_parallel_mode=regress -c work_mem=50MB" +``` + +When running against a temporary installation, custom settings can also be set by supplying a pre-written`postgresql.conf`: + +``` +echo 'log_checkpoints = on' > test_postgresql.conf +echo 'work_mem = 50MB' >> test_postgresql.conf +make check EXTRA_REGRESS_OPTS="--temp-config=test_postgresql.conf" +``` + +This can be useful to enable additional logging, adjust resource limits, or enable extra run-time checks such as[debug_discard_caches](runtime-config-developer.html#GUC-DEBUG-DISCARD-CACHES). + +### 33.1.6. Extra Tests + +The core regression test suite contains a few test files that are not run by default, because they might be platform-dependent or take a very long time to run. You can run these or other extra test files by setting the variable`EXTRA_TESTS`. For example, to run the`numeric_big`test: + +``` +make check EXTRA_TESTS=numeric_big +``` + +### 33.1.7. Testing Hot Standby + +The source distribution also contains regression tests for the static behavior of Hot Standby. These tests require a running primary server and a running standby server that is accepting new WAL changes from the primary (using either file-based log shipping or streaming replication). Those servers are not automatically created for you, nor is replication setup documented here. Please check the various sections of the documentation devoted to the required commands and related issues. + +To run the Hot Standby tests, first create a database called`regression`on the primary: + +``` +psql -h primary -c "CREATE DATABASE regression" +``` + +Next, run the preparatory script`src/test/regress/sql/hs_primary_setup.sql`on the primary in the regression database, for example: + +``` +psql -h primary -f src/test/regress/sql/hs_primary_setup.sql regression +``` + +Allow these changes to propagate to the standby. + +Now arrange for the default database connection to be to the standby server under test (for example, by setting the`PGHOST`and`PGPORT`environment variables). Finally, run`make standbycheck`in the regression directory: + +``` +cd src/test/regress +make standbycheck +``` + +Some extreme behaviors can also be generated on the primary using the script`src/test/regress/sql/hs_primary_extremes.sql`to allow the behavior of the standby to be tested. diff --git a/docs/X/regress-tap.md b/docs/en/regress-tap.md similarity index 100% rename from docs/X/regress-tap.md rename to docs/en/regress-tap.md diff --git a/docs/en/regress-tap.zh.md b/docs/en/regress-tap.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..2fdcaca03a371267bf9360acec6606e272b3ee51 --- /dev/null +++ b/docs/en/regress-tap.zh.md @@ -0,0 +1,21 @@ +## 33.4. TAP Tests + +The various tests, in particular the client program tests under`src/bin`, use the Perl TAP tools and are run using the Perl testing program`prove`. You can pass command-line options to`prove`by setting the`make`variable`PROVE_FLAGS`, for example: + +``` +make -C src/bin check PROVE_FLAGS='--timer' +``` + +See the manual page of`prove`for more information. + +The`make`variable`PROVE_TESTS`can be used to define a whitespace-separated list of paths, relative to the`Makefile`invoking`prove`, to run the specified subset of tests instead of the default`t/*.pl`. For example: + +``` +make check PROVE_TESTS='t/001_test1.pl t/003_test3.pl' +``` + +The TAP tests require the Perl module`IPC::Run`. This module is available from CPAN or an operating system package. They also require PostgreSQL to be configured with the option`--enable-tap-tests`.
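+ +A quick way to confirm that the required module is present is a Perl one-liner that simply loads it and exits (a sketch; the`cpan`client is one common way to install the module if the check fails): + +``` +perl -MIPC::Run -e 1 && echo "IPC::Run is available" +```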
+ +Generically speaking, the TAP tests will test the executables in a previously-installed installation tree if you say`make installcheck`, or will build a new local installation tree from current sources if you say`make check`. In either case they will initialize a local instance (data directory) and transiently run a server in it. Some of these tests run more than one server. Thus, these tests can be fairly resource-intensive. + +It's important to realize that the TAP tests will start test server(s) even when you say`make installcheck`; this is unlike the traditional non-TAP testing infrastructure, which expects to use an already-running test server in that case. Some PostgreSQL subdirectories contain both traditional-style and TAP-style tests, which means that`make installcheck`will produce a mix of results from temporary servers and the already-running test server. diff --git a/docs/X/release-14-1.md b/docs/en/release-14-1.md similarity index 100% rename from docs/X/release-14-1.md rename to docs/en/release-14-1.md diff --git a/docs/en/release-14-1.zh.md b/docs/en/release-14-1.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..4c69dba7b7f673af8f691841e4eb57804f017385 --- /dev/null +++ b/docs/en/release-14-1.zh.md @@ -0,0 +1,185 @@ +## E.2. Release 14.1 + +[E.2.1. Migration to Version 14.1](release-14-1.html#id-1.11.6.6.4)[E.2.2. Changes](release-14-1.html#id-1.11.6.6.5) + +**Release date:**2021-11-11 + +This release contains a variety of fixes from 14.0. For information about new features in major release 14, see[Section E.3](release-14.html). + +### E.2.1. Migration to Version 14.1 + +A dump/restore is not required for those running 14.X. + +However, note that installations using physical replication should update standby servers before the primary server, as explained in the third changelog entry below. + +Also, several bugs have been found that may have resulted in corrupted indexes, as explained in the next several changelog entries. If any of those cases apply to you, it's recommended to reindex possibly-affected indexes after updating. + +### E.2.2. Changes + +- Make the server reject extraneous data after an SSL or GSS encryption handshake (Tom Lane) + + A man-in-the-middle with the ability to inject data into the TCP connection could stuff some cleartext data into the start of a supposedly encryption-protected database session. This could be abused to send faked SQL commands to the server, although that would only work if the server did not demand any authentication data. (However, a server relying on SSL certificate authentication might well not do so.) + + The PostgreSQL Project thanks Jacob Champion for reporting this problem. (CVE-2021-23214) + +- Make libpq reject extraneous data after an SSL or GSS encryption handshake (Tom Lane) + + A man-in-the-middle with the ability to inject data into the TCP connection could stuff some cleartext data into the start of a supposedly encryption-protected database session. This could probably be abused to inject faked responses to the client's first few queries, although other details of libpq's behavior make that harder than it sounds. A different line of attack is to exfiltrate the client's password, or other sensitive data that might be sent early in the session. That has been shown to be possible with a server vulnerable to CVE-2021-23214. + + The PostgreSQL Project thanks Jacob Champion for reporting this problem. (CVE-2021-23222) + +- Fix physical replication for cases where the primary crashes after shipping a WAL segment that ends with a partial WAL record (Álvaro Herrera) + + If the primary did not survive long enough to finish writing the rest of the incomplete WAL record, then the previous crash-recovery logic had it back up and overwrite WAL starting from the beginning of the incomplete WAL record. This is problematic since standby servers may already have copies of that WAL segment. They will then see an inconsistent next segment, and will not be able to recover without manual intervention.
To fix, do not back up over a WAL segment boundary when restarting after a crash. Instead write a new type of WAL record at the start of the next WAL segment, informing readers that the incomplete WAL record will never be finished and must be disregarded. + + When applying this update, it's best to update standby servers before the primary, so that they will be ready to handle this new WAL record type if the primary happens to crash. + +- Ensure that parallel`VACUUM`doesn't miss any indexes (Peter Geoghegan, Masahiko Sawada) + + A parallel`VACUUM`would fail to process indexes that are below the`min_parallel_index_scan_size`cutoff, if the table also has at least two indexes that are larger than that size. This could result in those indexes becoming corrupt, since they would still contain references to heap entries removed by the`VACUUM`; subsequent queries using such indexes could return rows that should not be returned. This problem does not affect autovacuum, since it does not use parallel vacuuming. However, it's recommended to reindex any manually-vacuumed tables having the right combination of index sizes. + +- Fix`CREATE INDEX CONCURRENTLY`to wait for the latest prepared transactions (Andrey Borodin) + + Rows inserted by just-prepared transactions might be omitted from the new index, causing queries relying on the index to miss such rows. The previous fix for such problems did not account for`PREPARE TRANSACTION`commands that were still being executed while`CREATE INDEX CONCURRENTLY`checked for them. As before, in installations that have enabled prepared transactions (`max_prepared_transactions` > 0), it's recommended to reindex any concurrently-built indexes in case this problem occurred when they were built. + +- Avoid race condition that could allow backends to fail to add entries for new rows to an index being built concurrently (Noah Misch, Andrey Borodin) + + While this is apparently quite rare in the field, the race condition could affect any index built or reindexed with the`CONCURRENTLY`option. It's recommended to reindex any such indexes to make sure they are correct. + +- Fix`REINDEX CONCURRENTLY`to preserve operator class parameters attached to the target index (Michael Paquier) + +- Fix incorrect creation of shared dependencies when cloning a database that contains non-built-in objects (Aleksander Alekseev) + + The impact of this error is probably limited in practice. In principle, it could allow a role to be dropped while it still owns objects; but most installations would never want to drop a role that has been used to create objects added to`template1`. + +- Ensure that the relation cache is invalidated for a table being attached to or detached from a partitioned table (Amit Langote, Álvaro Herrera) + + This oversight could allow misbehavior of subsequent inserts/updates addressed directly to the partition, but only in sessions that already existed. + +- Fix corruption of parse trees while creating a range type (Alex Kozhemyakin, Sergey Shinderuk) + + `CREATE TYPE`incorrectly freed an element of the parse tree, which could cause problems for a later event trigger, or if the`CREATE TYPE`command was stored in the plan cache and used again later. + +- Fix updates of element fields in arrays of composite domains (Tom Lane) + + A command such as`UPDATE tab SET fld[1].subfld = val`failed if the array's elements were domains rather than plain composites. + +- Disallow the combination of`FETCH FIRST WITH TIES`and`FOR UPDATE SKIP LOCKED`(David Christensen) + + `FETCH FIRST WITH TIES`necessarily fetches one more row than requested, since it cannot stop until it finds a row that is not a tie. In our current implementation, if`FOR UPDATE`is used then that row will also get locked, even though it is not returned. That results in undesirable behavior if the`SKIP LOCKED`option is specified. It's difficult to change this without introducing a different set of undesirable behaviors, so for now, forbid the combination. + +- Disallow`ALTER INDEX index ALTER COLUMN col SET (options)`(Nathan Bossart, Michael Paquier) + + While the parser accepted this, it's undocumented and does not actually work. + +- Fix corner-case loss of precision in numeric`power()`(Dean Rasheed) + + The result could be inaccurate when the first argument is very close to 1. + +- Avoid choosing the wrong hash equality operator for Memoize plans (David Rowley) + + This error could result in crashes or incorrect query results. + +- Fix planner error with pulling up subquery expressions into function rangetable entries (Tom Lane) + + If a function in`FROM`laterally references the output of some sub-`SELECT`earlier in the`FROM`clause, and we are able to flatten that sub-`SELECT`into the outer query, the expressions copied into the function expression were not fully processed. This could lead to crashes at execution. + +- Avoid using MCV-only statistics to estimate the range of a column (Tom Lane) + + In some corner cases`ANALYZE`will build an MCV (most common values) list but not a histogram, even though the MCV list does not account for all the observed values. In such cases, keep the planner from using the MCV list alone to estimate the range of column values. + +- Fix restoration of a Portal's snapshot inside a subtransaction (Bertrand Drouvot) + + If a procedure commits or rolls back a transaction, and then its next significant action is inside a new subtransaction, snapshot management went wrong, leading to a dangling pointer and possible crashes. A typical example in PL/pgSQL is a`COMMIT`immediately followed by a`BEGIN ... EXCEPTION`block that performs a query. + +- Clean up correctly if a transaction fails after exporting its snapshot (Dilip Kumar) + + This oversight would only cause a problem if the same session attempted to export a snapshot again. The most likely scenario for that is creation of a replication slot (followed by rollback) and then creation of another replication slot. + +- Prevent wraparound of overflowed-subtransaction tracking on standby servers (Kyotaro Horiguchi, Alexander Korotkov) + + This oversight could cause significant performance degradation (manifesting as excessive SubtransSLRU traffic) on standby servers. + +- Ensure that prepared transactions are properly accounted for during promotion of a standby server (Michael Paquier, Andres Freund) + + There was a narrow window where a prepared transaction could be omitted from a snapshot taken by a concurrently-running session. If that session then used the snapshot to perform data updates, erroneous results or data corruption could occur. + +- Fix “could not find RecursiveUnion” error when`EXPLAIN`tries to print a filter condition attached to a WorkTableScan node (Tom Lane) + +- Ensure that the correct lock level is used when renaming a table (Nathan Bossart, Álvaro Herrera) + + For historical reasons,`ALTER INDEX ... RENAME`can be applied to any sort of relation. The lock level required to rename an index is lower than that required to rename a table or other kind of relation, but the code got this wrong and would use the weaker lock level whenever the command is spelled`ALTER INDEX`.
+ +- Avoid null-pointer-dereference crash when dropping a role that owns objects being dropped concurrently (Álvaro Herrera) + +- Prevent “snapshot reference leak” warning when`lo_export()`or a related function fails (Heikki Linnakangas) + +- Fix inefficient code generation for CoerceToDomain expression nodes (Ranier Vilela) + +- Avoid O(N^2) behavior in some list-manipulation operations (Nathan Bossart, Tom Lane) + + These changes fix slow processing in several scenarios, including: when many files are unlinked after a checkpoint; when hash aggregation involves many batches; and when`pg_trgm`extracts indexable conditions from a complex regular expression. Only the first of these cases has actually been reported from the field, but they all seem like plausible consequences of inefficient list deletion. + +- Add more defensive checks around B-tree posting list splits (Peter Geoghegan) + + This change should help detect index corruption involving duplicate table TIDs. + +- Avoid assertion failure when inserting a NaN into a BRIN float8 or float4 minmax_multi_ops index (Tomas Vondra) + + In production builds, such cases would result in an inefficient, but not actually incorrect, index. + +- Allow the autovacuum launcher process to respond to`pg_log_backend_memory_contexts()`requests more quickly (Koyu Tanigawa) + +- Fix memory leak in HMAC hash calculations (Sergey Shinderuk) + +- Disallow setting`huge_pages`to`on`when`shared_memory_type`is`sysv`(Thomas Munro) + + Previously, this setting was accepted, but it did nothing given the lack of any implementation. + +- Fix checking of query type in PL/pgSQL's`RETURN QUERY`statement (Tom Lane) + + `RETURN QUERY`should accept any query that can return tuples, such as`UPDATE RETURNING`. v14 accidentally disallowed anything but`SELECT`; moreover, the`RETURN QUERY EXECUTE`variant failed to apply any query-type check at all. + +- Fix pg_dump to dump non-global default privileges correctly (Neil Chen, Masahiko Sawada) + + If a global (unrestricted)`ALTER DEFAULT PRIVILEGES`command revoked some present-by-default privilege, for example`EXECUTE`for functions, and then a restricted`ALTER DEFAULT PRIVILEGES`command granted that privilege again for a selected role or schema, pg_dump failed to dump the restricted privilege grant correctly. + +- Make pg_dump acquire shared lock on partitioned tables that are to be dumped (Tom Lane) + + This oversight was usually pretty harmless, since once pg_dump has locked any of the leaf partitions, that would suffice to prevent significant DDL on the partitioned table itself. However problems could ensue when dumping a childless partitioned table, since no relevant lock would be held. + +- Fix crash in pg_dump when attempting to dump trigger definitions from a pre-8.3 server (Tom Lane) + +- Fix incorrect filename in pg_restore's error message about an invalid large object TOC file (Daniel Gustafsson) + +- Ensure that pgbench exits with non-zero status after a socket-level failure (Yugo Nagata, Fabien Coelho) + + The desired behavior is to finish out the run but then exit with status 2. Also, fix the reporting of such errors. + +- Prevent pg_amcheck from checking temporary relations, as well as indexes that are invalid or not ready (Mark Dilger) + + This avoids unhelpful checks of relations that will almost certainly appear inconsistent. + +- Make`contrib/amcheck`skip unlogged tables when running on a standby server (Mark Dilger) + + It's appropriate to do this since such tables will be empty, and unlogged indexes were already handled similarly. + +- Change`contrib/pg_stat_statements`to read its “query texts” file in units of at most 1GB (Tom Lane) + + Such large query text files are very unusual, but if they do occur, the previous coding would fail on Windows 64 (which rejects individual read requests of more than 2GB). + +- Fix null-pointer crash when`contrib/postgres_fdw`tries to report a data conversion error (Tom Lane) + +- Ensure that`GetSharedSecurityLabel()`can be used in a newly-started session that has not yet built its critical relation cache entries (Jeff Davis) + +- When running a TAP test, include the module's own directory in`PATH`(Andrew Dunstan) + + This allows tests to find built programs that are not installed, such as custom test drivers. + +- Use the CLDR project's data to map Windows time zone names to IANA time zones (Tom Lane) + + When running on Windows, initdb attempts to set the new cluster's`timezone`parameter to the IANA time zone matching the system's prevailing time zone.
We were using a mapping table that we'd generated years ago and updated only fitfully; unsurprisingly, it contained a number of errors as well as omissions of recently-added zones. It turns out that CLDR has been tracking the most appropriate mappings, so start using their data. This change will not affect any existing installation, only newly-initialized clusters. + +- Update time zone data files to tzdata release 2021e for DST law changes in Fiji, Jordan, Palestine, and Samoa, plus historical corrections for Barbados, Cook Islands, Guyana, Niue, Portugal, and Tonga. + + Also, the Pacific/Enderbury zone has been renamed to Pacific/Kanton. Also, the following zones have been merged into nearby, more-populous zones whose clocks have agreed with them since 1970: Africa/Accra, America/Atikokan, America/Blanc-Sablon, America/Creston, America/Curacao, America/Nassau, America/Port_of_Spain, Antarctica/DumontDUrville, and Antarctica/Syowa. In all these cases, the previous zone name remains as an alias. diff --git a/docs/X/release-14-2.md b/docs/en/release-14-2.md similarity index 100% rename from docs/X/release-14-2.md rename to docs/en/release-14-2.md diff --git a/docs/en/release-14-2.zh.md b/docs/en/release-14-2.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..65a799f5355ef91d16c555e460d22526fe41c979 --- /dev/null +++ b/docs/en/release-14-2.zh.md @@ -0,0 +1,211 @@ +## E.1. Release 14.2 + +[E.1.1. Migration to Version 14.2](release-14-2.html#id-1.11.6.5.4)[E.1.2. Changes](release-14-2.html#id-1.11.6.5.5) + +**Release date:**2022-02-10 + +This release contains a variety of fixes from 14.1. For information about new features in major release 14, see[Section E.3](release-14.html). + +### E.1.1. Migration to Version 14.2 + +A dump/restore is not required for those running 14.X. + +However, some bugs have been found that may have resulted in corrupted indexes, as explained in the first two changelog entries. If any of those cases apply to you, it's recommended to reindex possibly-affected indexes after updating. + +Also, if you are upgrading from a version earlier than 14.1, see[Section E.2](release-14-1.html). + +### E.1.2. Changes + +- Enforce standard locking protocol for TOAST table updates, to prevent problems with`REINDEX CONCURRENTLY`(Michael Paquier) + + If applied to a TOAST table or TOAST table's index,`REINDEX CONCURRENTLY`tended to produce a corrupted index. This happened because sessions updating TOAST entries released their`ROW EXCLUSIVE`locks immediately, rather than holding them until transaction commit as all other updates do. The fix is to make TOAST updates hold the table lock according to the normal rule. Any existing corrupted indexes can be repaired by reindexing again. + +- Fix corruption of HOT chains when a RECENTLY_DEAD tuple changes state to fully DEAD during page pruning (Andres Freund) + + It was possible for`VACUUM`to remove a recently-dead tuple while leaving behind a redirect item that pointed to it. When the tuple's item slot is later re-used by some new tuple, that tuple would be seen as part of the pre-existing HOT chain, creating a form of index corruption. If this has happened, reindexing the table should repair the damage. However, this is an extremely low-probability scenario, so we do not recommend reindexing just on the chance that it might have happened. 
+ +- Fix crash in EvalPlanQual rechecks for tables with a mix of local and foreign partitions (Etsuro Fujita) + +- Fix dangling pointer in`COPY TO`(Bharath Rupireddy) + + This oversight could cause an incorrect error message or a crash after an error in`COPY`. + +- Avoid null-pointer crash in`ALTER STATISTICS`when the statistics object is dropped concurrently (Tomas Vondra) + +- Correctly handle alignment padding when extracting a range from a multirange (Alexander Korotkov) + + This error could cause crashes when handling multiranges over variable-length data types. + +- Fix over-optimistic use of hashing for anonymous`RECORD`data types (Tom Lane) + + This prevents some cases of “could not identify a hash function for type record” errors. + +- Fix incorrect plan creation for parallel single-child Append nodes (David Rowley) + + In some cases the Append would be simplified away when it should not be, leading to wrong query results (duplicated rows). + +- Fix index-only scan plans for cases where not all index columns can be returned (Tom Lane) + + If an index has both returnable and non-returnable columns, and one of the non-returnable columns is an expression using a table column that appears in a returnable index column, then a query using that expression could result in an index-only scan plan that attempts to read the non-returnable column, instead of recomputing the expression from the returnable column as intended. The non-returnable column would read as NULL, resulting in wrong query results. + +- Fix Memoize plan nodes to handle subplans that use parameters coming from above the Memoize (David Rowley) + +- Fix Memoize plan nodes to work correctly with non-hashable join operators (David Rowley) + +- Ensure that casting to an unspecified typmod generates a RelabelType node rather than a length-coercion function call (Tom Lane) + + While the coercion function should do the right thing (nothing), this translation is undesirably inefficient. + +- Fix checking of`anycompatible`-family data type matches (Tom Lane) + + In some cases the parser would think that a function or operator with`anycompatible`-family polymorphic parameters matches a set of arguments that it really shouldn't match. In reported cases, that led to matching more than one operator to a call, leading to ambiguous-operator errors; but a failure later on is also possible. + +- Fix WAL replay failure when database consistency is reached exactly at a WAL page boundary (Álvaro Herrera) + +- Fix startup of a physical replica to tolerate transaction ID wraparound (Abhijit Menon-Sen, Tomas Vondra) + + If a replica server is started while the set of active transactions on the primary crosses a wraparound boundary (so that there are some newer transactions with smaller XIDs than older ones), the replica would fail with “out-of-order XID insertion in KnownAssignedXids”. The replica would retry, but could never get past that error. + +- In logical replication, avoid double transmission of a child table's data (Hou Zhijie) + + If a publication includes both child and parent tables, and has the`publish_via_partition_root`option set, subscribers uselessly initiated synchronization on both child and parent tables. Ensure that only the parent table is synchronized in such cases. 
+ +- Remove lexical limitations for SQL commands issued on a logical replication connection (Tom Lane) + + The walsender process would fail for a SQL command containing an unquoted semicolon, or with dollar-quoted literals containing odd numbers of single or double quote marks, or when the SQL command starts with a comment. Moreover, faulty error recovery could lead to unexpected errors in later commands too. + +- Ensure that replication origin timestamp is set while replicating a`ROLLBACK PREPARED`operation (Masahiko Sawada) + +- Fix possible loss of the commit timestamp for the last subtransaction of a transaction (Alex Kingsborough, Kyotaro Horiguchi) + +- Be sure to`fsync`the`pg_logical/mappings`subdirectory during checkpoints (Nathan Bossart) + + On some filesystems this oversight could lead to losing logical rewrite status files after a system crash. + +- Build extended statistics for partitioned tables (Justin Pryzby) + + A previous bug fix disabled building of extended statistics for old-style inheritance trees, but it also prevented building them for partitioned tables, which was an unnecessary restriction. This change allows`ANALYZE`to compute values for statistics objects for partitioned tables. (But note that autovacuum does not process partitioned tables as such, so you must periodically issue manual`ANALYZE`on the partitioned table if you want to maintain such statistics.) + +- Ignore extended statistics for inheritance trees (Justin Pryzby) + + Currently, extended statistics values are only computed locally for each table, not for an entire inheritance tree. However, the values were mistakenly consulted when planning queries across inheritance trees, possibly resulting in worse-than-default estimates. + +- Disallow altering the data type of a partitioned table's columns when the partitioned table's row type is used as a composite type elsewhere (Tom Lane) + + This restriction has long existed for regular tables, but through an oversight it was not checked for partitioned tables. + +- Disallow`ALTER TABLE ... DROP NOT NULL`for a column that is part of a replica identity index (Haiying Tang, Hou Zhijie) + + The same prohibition already existed for primary key indexes. + +- Correctly update cached table state during`ALTER TABLE ADD PRIMARY KEY USING INDEX`(Hou Zhijie) + + Concurrent sessions failed to update their opinion of whether the table has a primary key, possibly causing incorrect logical replication behavior. + +- Correctly update cached table state when switching the`REPLICA IDENTITY`index (Haiying Tang, Hou Zhijie) + + Concurrent sessions failed to update their opinion of which index is the replica identity one, possibly causing incorrect logical replication behavior. + +- Fix failure of SP-GiST indexes when the indexed column's data type is binary-compatible with the operator class's declared input type (Tom Lane) + + Such cases should work, but failed with “compress method must be defined when leaf type is different from input type”. + +- Allow parallel vacuuming and concurrent index building to be ignored while computing oldest xmin (Masahiko Sawada) + + Non-parallel instances of these operations were already ignored, but the logic did not work for parallel cases. Holding back the xmin horizon has undesirable effects such as delaying vacuum cleanup. + +- Fix memory leak when updating expression indexes (Peter Geoghegan) + + An`UPDATE`affecting many rows could consume a significant amount of memory. + +- Avoid memory leak during`REASSIGN OWNED`operations that reassign ownership of many objects (Justin Pryzby) + +- Improve the performance of walsenders sending logical changes by avoiding unnecessary cache accesses (Hou Zhijie) + +- Fix display of the`cert`authentication method's options in the`pg_hba_file_rules`view (Magnus Hagander) + + The`cert`authentication method implies`clientcert=verify-full`, but`pg_hba_file_rules`incorrectly reported`clientcert=verify-ca`.
+ +- Ensure that a session targeted by`pg_log_backend_memory_contexts()`sends its results only to the server's log (Fujii Masao) + + Previously, a sufficiently high setting of`client_min_messages`could result in the log message also being sent to the connected client. Since that client didn't request it, that would be surprising (and a violation of the wire protocol). + +- Fix display of whole-row variables appearing in`INSERT ... VALUES`rules (Tom Lane) + + A whole-row variable would be printed as “var.\*”, but that allows it to be expanded to individual columns when the rule is reloaded, giving different semantics. Attach an explicit cast to prevent that, as we do elsewhere. + +- When reverse-listing a SQL-standard function body, display function parameters appropriately in`INSERT ... SELECT`(Tom Lane) + + Previously, they would be printed as`$`*`n`*even when the parameter has a name. + +- Fix one-byte buffer overrun when applying Unicode string normalization to an empty string (Michael Paquier) + + The practical impact of this was limited due to alignment considerations; but in debug builds a warning was raised. + +- Fix or remove some incorrect assertions (Simon Riggs, Michael Paquier, Alexander Lakhin) + + These errors should affect only debug builds, not production. + +- Fix race condition that could lead to failure to localize error messages that are reported early in multi-threaded use of libpq or ecpglib (Tom Lane) + +- Avoid calling`strerror`from libpq's`PQcancel`function (Tom Lane) + + `PQcancel`is supposed to be safe to call from a signal handler, but`strerror`is not safe. The faulty usage only occurred in the unlikely event of failure to send the cancel message to the server, which perhaps explains the lack of reports. + +- Make psql's`\password`command default to setting the password for`CURRENT_USER`, not the connection's original user name (Tom Lane) + + This agrees with the documented behavior, and avoids possible permissions failures if`SET ROLE`or`SET SESSION AUTHORIZATION`has been done since the session began. To prevent confusion, the role name to be acted on is now included in the password prompt. + +- Fix psql's`\d`command's query for identifying parent triggers (Justin Pryzby) + + The previous coding failed with “more than one row returned by a subquery used as an expression” if a partition had triggers and there were unrelated statement-level triggers of the same name on some parent partitioned table. + +- Make psql's`\d`command sort a table's extended-statistics objects by name rather than OID (Justin Pryzby) + +- Fix psql's tab completion of enum type label values (Tom Lane) + +- Fix failures on Windows when using a terminal as data source or destination (Dmitry Koval, Juan José Santamaría Flecha, Michael Paquier) + + This affected psql's`\copy`command, as well as pg_recvlogical with`-f -`. + +- In psql and some other client programs, avoid attempting to invoke`gettext()`from a control-C signal handler (Tom Lane) + + While no reported failures have been traced to this mistake, it seems highly unlikely to be a safe thing to do. + +- Allow canceling the initial password prompt in pg_receivewal and pg_recvlogical (Tom Lane, Nathan Bossart) + + Previously it was impossible to terminate these programs via control-C while they were prompting for a password. + +- Fix pg_dump's dump ordering for user-defined casts (Tom Lane) + + In rare cases, the output script could refer to a user-defined cast before it had been created. + +- Fix pg_dump's`--inserts`and`--column-inserts`modes to handle tables containing both generated and dropped columns (Tom Lane) + +- Fix possible mis-reporting of errors in pg_dump and pg_basebackup (Tom Lane) + + The previous code failed to check for errors from some kernel calls, and could report the wrong errno values in other cases. + +- Fix results of index-only scans on`contrib/btree_gist`indexes on`char(`*`n`*`)`columns (Tom Lane) + + Index-only scans returned column values with trailing spaces removed, which is not the expected behavior. That happened because that's how the data was stored in the index. This fix changes the code to store`char(`*`n`*`)`values with the expected amount of space padding. The behavior of such an index will not change immediately unless you`REINDEX`it; otherwise, space-stripped values will be gradually replaced as the index is updated over time. Queries that do not use index-only scan plans will be unaffected in any case. + +- Fix edge cases in`postgres_fdw`'s handling of asynchronous queries (Etsuro Fujita) + + These errors could result in crashes or incorrect results when attempting to run parallel scans of foreign tables. + +- Change configure to use Python's sysconfig module, rather than the deprecated distutils module, to determine how to build PL/Python (Peter Eisentraut, Tom Lane, Andres Freund) + + With Python 3.10, this avoids configure-time warnings about distutils being deprecated and scheduled for removal in Python 3.12. Presumably, once 3.12 is out,`configure --with-python`would fail altogether. This future-proofing does come at a cost: sysconfig did not exist before Python 2.7, nor before 3.2 in the Python 3 branch, so it is no longer possible to build PL/Python against long-dead Python versions. + +- Re-allow cross-compilation without OpenSSL (Tom Lane) + + configure should have assumed that`/dev/urandom`will be available on the target system, but it failed. + +- Fix PL/Perl compile failure on Windows with Perl 5.28 and later (Victor Wagner) + +- Fix PL/Python compile failure with Python 3.11 and later (Peter Eisentraut) + +- Add support for building with Visual Studio 2022 (Hans Buschmann) + +- Allow the`.bat`wrapper scripts in our MSVC build system to be invoked without first changing into their directory (Anton Voloshin, Andrew Dunstan) diff --git a/docs/X/release-14.md b/docs/en/release-14.md similarity index 100% rename from docs/X/release-14.md rename to docs/en/release-14.md diff --git a/docs/en/release-14.zh.md b/docs/en/release-14.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..9712a46860e6bd7c9a9edaba1059572f7e992a83 --- /dev/null +++ b/docs/en/release-14.zh.md @@ -0,0 +1,1228 @@ +## E.3. Release 14 + +[E.3.1. Overview](release-14.html#id-1.11.6.7.3)[E.3.2. Migration to Version 14](release-14.html#id-1.11.6.7.4)[E.3.3. Changes](release-14.html#id-1.11.6.7.5)[E.3.4. Acknowledgments](release-14.html#RELEASE-14-ACKNOWLEDGEMENTS) + +**Release date:**2021-09-30 + +### E.3.1. Overview + +PostgreSQL 14 contains many new features and enhancements, including: + +- Stored procedures can now return data via`OUT`parameters. + +- The SQL-standard`SEARCH`and`CYCLE`options for common table expressions have been implemented.
+ +- Subscripting can now be applied to any data type for which it is a useful notation, not only arrays. In this release, the`jsonb`and`hstore`types have gained subscripting operators. + +- Range types have been extended by adding multiranges, allowing representation of noncontiguous data ranges. + +- Numerous performance improvements have been made for parallel queries, heavily-concurrent workloads, partitioned tables, logical replication, and vacuuming. + +- B-tree index updates are managed more efficiently, reducing index bloat. + +- `VACUUM`automatically becomes more aggressive, and skips inessential cleanup, if the database starts to approach a transaction ID wraparound condition. + +- Extended statistics can now be collected on expressions, allowing better planning results for complex queries. + +- libpq now has the ability to pipeline multiple queries, which can boost throughput over high-latency connections. + + The above items and other new features of PostgreSQL 14 are explained in more detail in the sections below. + +### E.3.2. Migration to Version 14 + +A dump/restore using[pg_dumpall](app-pg-dumpall.html)or use of[pg_upgrade](pgupgrade.html)or logical replication is required for those wishing to migrate data from any previous release. See[Section 19.6](upgrading.html)for general information on migrating to new major releases. + +Version 14 contains a number of changes that may affect compatibility with previous releases. Observe the following incompatibilities: + +- User-defined objects that reference certain built-in array functions along with their argument types must be recreated (Tom Lane) + + Specifically,[`array_append()`](functions-array.html),`array_prepend()`,`array_cat()`,`array_position()`,`array_positions()`,`array_remove()`,`array_replace()`, and[`width_bucket()`](functions-math.html)used to take`anyarray`arguments but now take`anycompatiblearray`. Consequently, user-defined objects such as aggregates and operators that reference those array function signatures must be dropped before upgrading, and recreated once the upgrade completes. + +- Remove deprecated containment operators`@`and`~`for the built-in[geometric data types](functions-geometry.html)and the contrib modules[cube](cube.html),[hstore](hstore.html),[intarray](intarray.html), and[seg](seg.html)(Justin Pryzby) + + The more consistently named`<@`and`@>`have been recommended for many years. + +- Fix[`to_tsquery()`](functions-textsearch.html)and`websearch_to_tsquery()`to properly parse query text containing discarded tokens (Alexander Korotkov) + + Certain discarded tokens, like underscore, caused the output of these functions to produce incorrect tsquery output, e.g., both`websearch_to_tsquery('"pg_class pg"')`and`to_tsquery('pg_class <-> pg')`used to output`('pg' & 'class') <-> 'pg'`, but now both output`'pg' <-> 'class' <-> 'pg'`. + +- Fix[`websearch_to_tsquery()`](functions-textsearch.html)to properly parse multiple adjacent discarded tokens in quotes (Alexander Korotkov) + + Previously, quoted text that contained multiple adjacent discarded tokens was treated as multiple tokens, causing incorrect tsquery output, e.g.,`websearch_to_tsquery('"aaa: bbb"')`used to output`'aaa' <2> 'bbb'`, but now outputs`'aaa' <-> 'bbb'`. + +- Change[`EXTRACT()`](functions-datetime.html)to return type`numeric`instead of`float8`(Peter Eisentraut) + + This avoids precision-loss issues in some usages. The old behavior can still be obtained by using the old underlying function`date_part()`.
+ + 还,`摘录(日期)`现在对不属于`日期`数据类型。 + +- 改变[`var_samp()`](functions-aggregate.html)和`stddev_samp()`当输入为单个 NaN 值时,使用数字参数返回 NULL (Tom Lane) + + 之前`钠`被退回。 + +- 返回 false 为[`has_column_privilege()`](functions-info.html)使用属性号时检查不存在或删除的列 (Joe Conway) + + 以前,此类属性编号会返回无效列错误。 + +- 修复无限处理[窗口函数](sql-expressions.html#SYNTAX-WINDOW-FUNCTIONS)范围(汤姆·莱恩) + + 以前的窗框子句如`'inf' 前面和 'inf' 下面`返回不正确的结果。 + +- 删除阶乘运算符`!`和`!!`, 以及函数`numeric_fac()`(马克·迪尔格) + + 这[`阶乘()`](functions-math.html)功能仍受支持。 + +- 不允许`阶乘()`负数 (Peter Eisentraut) + + 以前此类案例返回 1。 + +- 删除对[后缀](sql-createoperator.html)(右一元)运算符(Mark Dilger) + + 皮克\_转储和 pg\_如果 postfix 运算符被转储,升级将发出警告。 + +- 允许`\D`和`\W`匹配换行符的简写[正则表达式](functions-matching.html#FUNCTIONS-POSIX-REGEXP)换行敏感模式 (Tom Lane) + + 以前它们在此模式下不匹配换行符,但这与其他常见正则表达式引擎的行为不一致。`[^[:数字:]]`要么`[^[:word:]]`可用于获取旧行为。 + +- 匹配正则表达式时忽略约束[反向引用](functions-matching.html#POSIX-ESCAPE-SEQUENCES)(汤姆·莱恩) + + 例如,在`(^\d+).*\1`, 这`^`约束应该在字符串的开头应用,但不是在匹配时`\1`. + +- 不允许`\w`作为正则表达式字符类中的范围开始或结束 (Tom Lane) + + 这以前是允许的,但产生了意想不到的结果。 + +- 要求[自定义服务器参数](runtime-config-custom.html)名称仅使用在不带引号的 SQL 标识符中有效的字符 (Tom Lane) + +- 更改默认值[密码\_加密](runtime-config-connection.html#GUC-PASSWORD-ENCRYPTION)服务器参数为`scram-sha-256`(彼得·艾森特劳特) + + 以前是`md5`.除非更改此服务器设置或以 MD5 格式指定密码,否则所有新密码都将存储为 SHA256。此外,遗留的(和未记录的)类似布尔值的值是以前的同义词`md5`不再被接受。 + +- 删除服务器参数`vacuum_cleanup_index_scale_factor`(彼得·吉根) + + 从 PostgreSQL 版本 13.3 开始,此设置被忽略。 + +- 删除服务器参数`operator_precedence_warning`(汤姆·莱恩) + + 此设置用于警告应用程序有关 PostgreSQL 9.5 更改。 + +- 大修规范`客户证书`在[`pg_hba.conf`](auth-pg-hba-conf.html)(堀口京太郎) + + 价值观`1`/`0`/`不验证`不再支持;只有字符串`验证-ca`和`验证完整`可以使用。还有,不允许`验证-ca`如果启用了证书身份验证,因为证书需要`验证完整`检查。 + +- 删除对[SSL](runtime-config-connection.html#RUNTIME-CONFIG-CONNECTION-SSL)压缩(丹尼尔·古斯塔夫森、迈克尔·帕奎尔) + + 这在以前的 PostgreSQL 版本中已经默认禁用,大多数现代 OpenSSL 和 TLS 版本不再支持它。 + +- 删除服务器和[库](libpq.html)支持版本 2[有线协议](protocol.html)(海基·林纳坎加斯) + + 这是最后一次在 PostgreSQL 7.3(2002 年发布)中用作默认值。 + +- 不允许在[`创建/删除语言`](sql-createlanguage.html)命令(彼得·艾森特劳特) + +- 除掉[复合类型](xfunc-sql.html#XFUNC-SQL-COMPOSITE-FUNCTIONS)以前为序列和吐司表创建的 (Tom Lane) + +- 处理双引号[心电图](ecpg.html)SQL 命令字符串正确 (Tom Lane) + + 之前`'abc''定义'`被传递到服务器作为`'abc'定义'`, 和`“ABC”“定义”`被通过了`“ABC”定义`,导致语法错误。 + +- 防止收容操作员(`<@`和`@>`) 为了[数组内](intarray.html)从使用 GiST 索引 (Tom Lane) + + 以前需要完整的 GiST 索引扫描,所以只需避免这种情况并扫描堆,这样会更快。应删除为此目的创建的索引。 + +- 删除贡献程序 pg_standby (Justin Pryzby) + +- Prevent[tablefunc](tablefunc.html)'s function`normal_rand()`from accepting negative values (Ashutosh Bapat) + + Negative values produced undesirable results. + +### E.3.3. Changes + +Below you will find a detailed account of the changes between PostgreSQL 14 and the previous major release. + +#### E.3.3.1. Server + +- Add predefined roles[`pg_read_all_data`](predefined-roles.html)and`pg_write_all_data`(Stephen Frost) + + These non-login roles can be used to give read or write permission to all tables, views, and sequences. + +- Add predefined role[`pg_database_owner`](predefined-roles.html)that contains only the current database's owner (Noah Misch) + + This is especially useful in template databases. + +- Remove temporary files after backend crashes (Euler Taveira) + + Previously, such files were retained for debugging purposes. If necessary, deletion can be disabled with the new server parameter[remove_temp_files_after\_碰撞](runtime-config-developer.html#GUC-REMOVE-TEMP-FILES-AFTER-CRASH). 
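  To restore the old retain-for-debugging behavior, the parameter can be switched back off; an illustrative sketch (assumes superuser privileges; the setting is reloadable):

  ```
  ALTER SYSTEM SET remove_temp_files_after_crash = off;
  SELECT pg_reload_conf();
  ```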
+ +- 如果客户端断开连接,则允许取消长时间运行的查询 (Sergey Cherkashin, Thomas Munro) + + 服务器参数[客户\_联系\_查看\_间隔](runtime-config-connection.html#GUC-CLIENT-CONNECTION-CHECK-INTERVAL)允许控制是否检查内部查询的连接丢失。(这在 Linux 和一些其他操作系统上受支持。) + +- 添加一个可选的超时参数到[`pg_terminate_backend()`](functions-admin.html#FUNCTIONS-ADMIN-SIGNAL)(马格努斯·哈根德) + +- 允许始终将宽元组添加到几乎为空的堆页面(John Naylor,Floris van Nee) + + 以前插入的元组会超过页面的[填充因子](sql-createtable.html)而是添加到新页面。 + +- 在 SSL 连接数据包中添加服务器名称指示 (SNI) (Peter Eisentraut) + + 这可以通过关闭客户端连接选项来禁用[`sslsni`](libpq-connect.html#LIBPQ-PARAMKEYWORDS). + +##### E.3.3.1.1。[吸尘](routine-vacuuming.html) + +- 当可移动索引条目的数量微不足道时,允许清理以跳过索引清理(Masahiko Sawada,Peter Geoghegan) + + 真空参数[`INDEX_CLEANUP`](sql-vacuum.html)有一个新的默认值`汽车`启用此优化。 + +- 允许真空更急切地将已删除的 btree 页面添加到可用空间映射 (Peter Geoghegan) + + 以前,vacuum 只能将页面添加到被之前的vacuum 标记为已删除的可用空间映射中。 + +- 允许真空回收未使用的尾随堆行指针使用的空间 (Matthias van de Meent, Peter Geoghegan) + +- 允许真空在最小锁定索引操作期间更积极地删除死行 (Álvaro Herrera) + + 具体来说,`并发创建索引`和`同时重新索引`不再限制其他关系的死排移除。 + +- 加快清理具有许多关系的数据库(Tatsuhito Kasahara) + +- 减少默认值[真空\_成本\_页\_错过](runtime-config-resource.html#GUC-VACUUM-COST-PAGE-MISS)更好地反映当前的硬件能力 (Peter Geoghegan) + +- 添加跳过 TOAST 表清理的功能 (Nathan Bossart) + + [`真空`](sql-vacuum.html)现在有一个`PROCESS_TOAST`可以设置为 false 以禁用 TOAST 处理的选项,以及[真空数据库](app-vacuumdb.html)有一个`--no-process-toast`选项。 + +- 有[`复制冻结`](sql-copy.html)适当更新页面可见性位(Anastasia Lubennikova、Pavan Deolasee、Jeff Janes) + +- 如果表靠近 xid 或 multixact 环绕,则使真空操作更加激进(Masahiko Sawada,Peter Geoghegan) + + 这是由[真空\_故障保护\_年龄](runtime-config-client.html#GUC-VACUUM-FAILSAFE-AGE)和[真空\_多方位\_故障保护\_年龄](runtime-config-client.html#GUC-VACUUM-MULTIXACT-FAILSAFE-AGE). + +- 在事务 id 和多事务环绕之前增加警告时间和硬限制 (Noah Misch) + + 这应该会减少在未发出有关环绕的警告的情况下发生故障的可能性。 + +- 将每个索引信息添加到[自动真空记录输出](runtime-config-logging.html#GUC-LOG-AUTOVACUUM-MIN-DURATION)(泽田正彦) + +##### E.3.3.1.2。[分区](ddl-partitioning.html) + +- 提高具有许多分区的分区表的更新和删除性能(Amit Langote,Tom Lane) + + 这种变化极大地减少了规划器在这种情况下的开销,并且还允许对分区表的更新/删除使用执行时分区修剪。 + +- 允许分区[分离的](sql-altertable.html)in a non-blocking manner (Álvaro Herrera) + + The syntax is`ALTER TABLE ... DETACH PARTITION ... CONCURRENTLY`, and`FINALIZE`. + +- Ignore`COLLATE`clauses in partition boundary values (Tom Lane) + + Previously any such clause had to match the collation of the partition key; but it's more consistent to consider that it's automatically coerced to the collation of the partition key. + +##### E.3.3.1.3. Indexes + +- Allow btree index additions to[remove expired index entries](btree-implementation.html#BTREE-DELETION)to prevent page splits (Peter Geoghegan) + + This is particularly helpful for reducing index bloat on tables whose indexed columns are frequently updated. + +- Allow[BRIN](brin.html)indexes to record multiple min/max values per range (Tomas Vondra) + + This is useful if there are groups of values in each page range. + +- Allow BRIN indexes to use bloom filters (Tomas Vondra) + + This allows BRIN indexes to be used effectively with data that is not well-localized in the heap. + +- Allow some[GiST](gist.html)indexes to be built by presorting the data (Andrey Borodin) + + Presorting happens automatically and allows for faster index creation and smaller indexes. + +- Allow[SP-GiST](spgist.html)indexes to contain`INCLUDE`'d columns (Pavel Borisov) + +##### E.3.3.1.4. Optimizer + +- Allow hash lookup for`IN`clauses with many constants (James Coleman, David Rowley) + + Previously the code always sequentially scanned the list of values. 
+ +- Increase the number of places[extended statistics](planner-stats.html#PLANNER-STATS-EXTENDED)can be used for`OR`clause estimation (Tomas Vondra, Dean Rasheed) + +- Allow extended statistics on expressions (Tomas Vondra) + + This allows statistics on a group of expressions and columns, rather than only columns like previously. System view[`pg_stats_ext_exprs`](view-pg-stats-ext-exprs.html)reports such statistics. + +- Allow efficient heap scanning of a range of[`TIDs`](datatype-oid.html#DATATYPE-OID-TABLE)(Edmund Horner, David Rowley) + + Previously a sequential scan was required for non-equality`TID`specifications. + +- Fix[`EXPLAIN CREATE TABLE AS`](sql-explain.html)and`EXPLAIN CREATE MATERIALIZED VIEW`to honor`IF NOT EXISTS`(Bharath Rupireddy) + + Previously, if the object already existed,`EXPLAIN`would fail. + +##### E.3.3.1.5. General Performance + +- Improve the speed of computing MVCC[visibility snapshots](mvcc.html)on systems with many CPUs and high session counts (Andres Freund) + + This also improves performance when there are many idle sessions. + +- Add executor method to memoize results from the inner side of a nested-loop join (David Rowley) + + This is useful if only a small percentage of rows is checked on the inner side. It can be disabled via server parameter[enable_memoize](runtime-config-query.html#GUC-ENABLE-MEMOIZE). + +- Allow[window functions](functions-window.html)to perform incremental sorts (David Rowley) + +- Improve the I/O performance of parallel sequential scans (Thomas Munro, David Rowley) + + This was done by allocating blocks in groups to[parallel workers](runtime-config-resource.html#GUC-MAX-PARALLEL-WORKERS). + +- Allow a query referencing multiple[foreign tables](sql-createforeigntable.html)to perform foreign table scans in parallel (Robert Haas, Kyotaro Horiguchi, Thomas Munro, Etsuro Fujita) + + [postgres_fdw](postgres-fdw.html)supports this type of scan if`async_capable`is set. + +- Allow[analyze](routine-vacuuming.html#VACUUM-FOR-STATISTICS)to do page prefetching (Stephen Frost) + + This is controlled by[maintenance_io_concurrency](runtime-config-resource.html#GUC-MAINTENANCE-IO-CONCURRENCY). + +- Improve performance of[regular expression](functions-matching.html#FUNCTIONS-POSIX-REGEXP)searches (Tom Lane) + +- Dramatically improve Unicode normalization performance (John Naylor) + + This speeds[`normalize()`](functions-string.html)and`IS NORMALIZED`. + +- Add ability to use[LZ4 compression](sql-createtable.html)on TOAST data (Dilip Kumar) + + This can be set at the column level, or set as a default via server parameter[default_toast_compression](runtime-config-client.html#GUC-DEFAULT-TOAST-COMPRESSION). The server must be compiled with[`--with-lz4`](install-procedure.html#CONFIGURE-OPTIONS-FEATURES)to support this feature. The default setting is still pglz. + +##### E.3.3.1.6. Monitoring + +- If server parameter[计算\_询问\_id](runtime-config-statistics.html#GUC-COMPUTE-QUERY-ID)启用,显示查询ID[`pg_stat_activity`](monitoring-stats.html#MONITORING-PG-STAT-ACTIVITY-VIEW),[`详细解释`](sql-explain.html),[csv日志](runtime-config-logging.html),并且可选地在[日志\_线\_字首](runtime-config-logging.html#GUC-LOG-LINE-PREFIX)(朱利安·鲁豪) + + 还将显示由扩展程序计算的查询 ID。 + +- 改进日志记录[自动真空](routine-vacuuming.html#AUTOVACUUM)和自动分析(Stephen Frost,Jakub Wartak) + + 这会报告自动清空和自动分析的 I/O 时序,如果[追踪\_io\_定时](runtime-config-statistics.html#GUC-TRACK-IO-TIMING)已启用。此外,报告缓冲区读取率和脏率以进行自动分析。 + +- 将有关客户端提供的原始用户名的信息添加到[log_connections](runtime-config-logging.html#GUC-LOG-CONNECTIONS)(Jacob Champion) + +##### E.3.3.1.7. 
System Views + +- Add system view[`pg_stat_progress_copy`](progress-reporting.html#COPY-PROGRESS-REPORTING)to report`COPY`progress (Josef Šimánek, Matthias van de Meent) + +- Add system view[`pg_stat_wal`](monitoring-stats.html#MONITORING-PG-STAT-WAL-VIEW)to report WAL activity (Masahiro Ikeda) + +- Add system view[`pg_stat_replication_slots`](monitoring-stats.html#MONITORING-PG-STAT-REPLICATION-SLOTS-VIEW)to report replication slot activity (Masahiko Sawada, Amit Kapila, Vignesh C) + + The function[`pg_stat_reset_replication_slot()`](monitoring-stats.html#MONITORING-STATS-FUNCTIONS)resets slot statistics. + +- Add system view[`pg_backend_memory_contexts`](view-pg-backend-memory-contexts.html)to report session memory usage (Atsushi Torikoshi, Fujii Masao) + +- Add function[`pg_log_backend_memory_contexts()`](functions-admin.html#FUNCTIONS-ADMIN-SIGNAL)to output the memory contexts of arbitrary backends (Atsushi Torikoshi) + +- Add session statistics to the[`pg_stat_database`](monitoring-stats.html#MONITORING-PG-STAT-DATABASE-VIEW)system view (Laurenz Albe) + +- Add columns to[`pg_prepared_statements`](view-pg-prepared-statements.html)to report generic and custom plan counts (Atsushi Torikoshi, Kyotaro Horiguchi) + +- Add lock wait start time to[`pg_locks`](view-pg-locks.html)(Atsushi Torikoshi) + +- Make the archiver process visible in`pg_stat_activity`(Kyotaro Horiguchi) + +- Add wait event[`WalReceiverExit`](monitoring-stats.html#MONITORING-PG-STAT-ACTIVITY-VIEW)to report WAL receiver exit wait time (Fujii Masao) + +- Implement information schema view[`routine_column_usage`](infoschema-routine-column-usage.html)to track columns referenced by function and procedure default expressions (Peter Eisentraut) + +##### E.3.3.1.8. Authentication + +- Allow an SSL certificate's distinguished name (DN) to be matched for client certificate authentication (Andrew Dunstan) + + The new[`pg_hba.conf`](auth-pg-hba-conf.html)option`clientname=DN`allows comparison with certificate attributes beyond the`CN`and can be combined with ident maps. + +- Allow`pg_hba.conf`and[`pg_ident.conf`](auth-username-maps.html)records to span multiple lines (Fabien Coelho) + + A backslash at the end of a line allows record contents to be continued on the next line. + +- Allow the specification of a certificate revocation list (CRL) directory (Kyotaro Horiguchi) + + This is controlled by server parameter[ssl_crl_dir](runtime-config-connection.html#GUC-SSL-CRL-DIR)and libpq connection option[sslcrldir](libpq-connect.html#LIBPQ-CONNECT-SSLCRLDIR). Previously only single CRL files could be specified. + +- Allow passwords of an arbitrary length (Tom Lane, Nathan Bossart) + +##### E.3.3.1.9. Server Configuration + +- Add server parameter[idle_session_timeout](runtime-config-client.html#GUC-IDLE-SESSION-TIMEOUT)to close idle sessions (Li Japin) + + This is similar to[idle_in_transaction_session_timeout](runtime-config-client.html#GUC-IDLE-IN-TRANSACTION-SESSION-TIMEOUT). 
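  A hypothetical way to enable it cluster-wide (the ten-minute value is illustrative):

  ```
  ALTER SYSTEM SET idle_session_timeout = '10min';
  SELECT pg_reload_conf();
  ```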
+ +- Change[checkpoint\_完成\_目标](runtime-config-wal.html#GUC-CHECKPOINT-COMPLETION-TARGET)默认为 0.9(斯蒂芬弗罗斯特) + + 之前的默认值为 0.5。 + +- 允许`%P`在[日志\_线\_字首](runtime-config-logging.html#GUC-LOG-LINE-PREFIX)报告并行工作人员的并行组长的 PID (Justin Pryzby) + +- 允许[unix\_插座\_目录](runtime-config-connection.html#GUC-UNIX-SOCKET-DIRECTORIES)将路径指定为单独的、逗号分隔的引号字符串 (Ian Lawrence Barwick) + + 以前,所有路径都必须在单引号字符串中。 + +- 允许动态共享内存的启动分配 (Thomas Munro) + + 这是由[分钟\_动态的\_共享\_记忆力](runtime-config-resource.html#GUC-MIN-DYNAMIC-SHARED-MEMORY).这样可以更多地使用大型页面。 + +- 添加服务器参数[巨大的\_页\_大小](runtime-config-resource.html#GUC-HUGE-PAGE-SIZE)控制Linux上使用的巨大页面的大小(Odin Ugedal) + +#### E.3.3.2。流式复制和恢复 + +- 允许备用服务器通过[pg\_重绕](app-pgrewind.html)(Heikki Linnakangas) + +- 允许[恢复\_命令](runtime-config-wal.html#GUC-RESTORE-COMMAND)服务器重新加载期间要更改的设置(Sergei Kornilov) + + 你也可以设置`恢复命令`到空字符串并重新加载,以强制恢复仅从[`普格沃尔`](storage-file-layout.html)目录 + +- 添加服务器参数[日志\_恢复\_冲突\_等待](runtime-config-logging.html#GUC-LOG-RECOVERY-CONFLICT-WAITS)报告恢复冲突等待时间长(Bertrand Drouvot,Masahiko Sawada) + +- 如果主服务器更改其参数以防止在备用服务器上重播,则暂停热备用服务器上的恢复 (Peter Eisentraut) + + 以前备用服务器会立即关闭。 + +- 添加功能[`pg_get_wal_replay_pause_state()`](functions-admin.html#FUNCTIONS-RECOVERY-CONTROL)报告恢复状态 (Dilip Kumar) + + 它提供了比[`pg_is_wal_replay_paused()`](functions-admin.html#FUNCTIONS-RECOVERY-CONTROL),它仍然存在。 + +- 添加新的只读服务器参数[在\_热的\_支持](runtime-config-preset.html#GUC-IN-HOT-STANDBY)(哈里巴布·科米、格雷格·南卡罗、汤姆·莱恩) + + 这允许客户端轻松检测它们是否连接到热备用服务器。 + +- 在具有大量共享缓冲区的集群上恢复期间快速截断小表 (Kirk Jamison) + +- 在 Linux 上的崩溃恢复开始时允许文件系统同步 (Thomas Munro) + + 默认情况下,PostgreSQL 在崩溃恢复开始时打开并 fsync 数据库集群中的每个数据文件。新设定,[恢复\_在里面\_同步\_方法](runtime-config-error-handling.html#GUC-RECOVERY-INIT-SYNC-METHOD)`=同步文件`,而是同步集群使用的每个文件系统。这允许在具有许多数据库文件的系统上更快地恢复。 + +- 添加功能[`pg_xact_commit_timestamp_origin()`](functions-info.html)返回指定事务的提交时间戳和复制源(Movead Li) + +- 将复制源添加到返回的记录中[`pg_last_committed_xact()`](functions-info.html)(移动李) + +- 允许复制[原点函数](functions-admin.html#FUNCTIONS-REPLICATION)使用标准功能权限控件进行控制 (Martín Marqués) + + 以前这些功能只能由超级用户执行,这仍然是默认设置。 + +##### E.3.3.2.1。[逻辑复制](logical-replication.html) + +- 允许逻辑复制向订阅者传输长时间的正在进行的事务(Dilip Kumar、Amit Kapila、Ajin Cherian、Tomas Vondra、Nikhil Sontakke、Stas Kelvich) + + 以前的交易超过[合乎逻辑的\_解码\_工作\_内存](runtime-config-resource.html#GUC-LOGICAL-DECODING-WORK-MEM)被写入磁盘直到事务完成。 + +- 增强逻辑复制 API 以允许流式处理大型正在进行的事务(Tomas Vondra、Dilip Kumar、Amit Kapila) + + 输出函数以[`溪流`](logicaldecoding-output-plugin.html#LOGICALDECODING-OUTPUT-PLUGIN-STREAM-START).测试\_解码也支持这些。 + +- 在逻辑复制中的表同步期间允许多个事务 (Peter Smith, Amit Kapila, Takamichi Osumi) + +- 立即 WAL-log 子事务和顶级`XID`协会(Tomas Vondra、Dilip Kumar、Amit Kapila) + + 这对于逻辑解码很有用。 + +- 增强逻辑解码 API 以处理两阶段提交(Ajin Cherian、Amit Kapila、Nikhil Sontakke、Stas Kelvich) + + 这是通过控制[`pg_create_logical_replication_slot()`](functions-admin.html#FUNCTIONS-REPLICATION). + +- 使用逻辑复制时,在命令完成期间将缓存失效消息添加到 WAL (Dilip Kumar, Tomas Vondra, Amit Kapila) + + 这允许进行中事务的逻辑流式传输。禁用逻辑复制时,仅在事务完成时生成失效消息。 + +- 允许逻辑解码以更有效地处理缓存失效消息 (Dilip Kumar) + + 这允许[逻辑解码](logicaldecoding.html)在存在大量 DDL 的情况下高效工作。 + +- 允许控制是否将逻辑解码消息发送到复制流 (David Pirotte, Euler Taveira) + +- 允许逻辑复制订阅使用二进制传输模式 (Dave Cramer) + + 这比文本模式更快,但稍微不那么健壮。 + +- 允许通过 xid 过滤逻辑解码 (Markus Wanner) + +#### E.3.3.3。[`选择`](sql-select.html),[`插入`](sql-insert.html) + +- 减少不能用作列标签的关键字数量`作为`(马克·迪尔格) + + 现在受限关键字减少了 90%。 + +- 允许指定别名`加入`的`使用`子句(彼得·艾森特劳特) + + 别名是通过编写创建的`作为`之后`使用`条款。它可以用作合并的表限定`使用`列。 + +- 允许`清楚的`要添加到`通过...分组`删除重复的`分组集`组合(维克恐惧) + + 例如,`按多维数据集 (a,b)、多维数据集 (b,c) 分组`将生成重复的分组组合`清楚的`. 
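  A sketch of the new syntax (table`t`and columns`a`,`b`,`c`are illustrative):

  ```
  SELECT a, b, c, count(*)
  FROM t
  GROUP BY DISTINCT CUBE (a, b), CUBE (b, c);
  ```

  Without`DISTINCT`, the duplicate grouping-set combinations generated by the overlapping`CUBE`clauses would each produce separate result rows.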
+ +- 妥善处理`默认`多行条目`价值观`列出在`插入`(院长拉希德) + + 这种情况通常会引发错误。 + +- 添加SQL标准`搜索`和`周期`条款[通用表表达式](queries-with.html)(彼得·艾森特) + + 同样的结果也可以使用现有的语法来实现,但要方便得多。 + +- 允许在`哪里`条款`关于冲突`符合资格(汤姆·莱恩) + + 但是,只能引用目标表。 + +#### E.3.3.4。实用程序命令 + +- 允许[`刷新物化视图`](sql-refreshmaterializedview.html)使用平行度(Bharath Rupireddy) + +- 允许[`重新索引`](sql-reindex.html)更改新索引的表空间(Alexey Kondratov、Michael Paquier、Justin Pryzby) + + 这是通过指定`表空间`条款A.`--表空间`选项也被添加到[reindexdb](app-reindexdb.html)来控制这一切。 + +- 允许`重新索引`处理分区关系的所有子表或索引 (Justin Pryzby, Michael Paquier) + +- 允许使用索引命令`同时`避免使用等待其他操作完成`同时`(阿尔瓦罗·埃雷拉) + +- 提高性能[`复制自`](sql-copy.html)二进制模式 (Bharath Rupireddy, Amit Langote) + +- 为 SQL 定义的函数保留 SQL 标准语法[查看定义](sql-createview.html)(汤姆·莱恩) + + 以前,调用 SQL 标准函数,例如[`提炼()`](functions-datetime.html#FUNCTIONS-DATETIME-EXTRACT)以简单的函数调用语法显示。现在在显示视图或规则时保留原始语法。 + +- 添加 SQL 标准子句`授予者`到[`授予`](sql-grant.html)和[`撤销`](sql-revoke.html)(彼得·艾森特劳特) + +- 添加`或更换`选项[`创建触发器`](sql-createtrigger.html)(大隅高道) + + 这允许有条件地替换预先存在的触发器。 + +- 允许[`截短`](sql-truncate.html)在外国桌子上操作(Kazutaka Onishi,Kohei KaiGai) + + 这[postgres_fdw](postgres-fdw.html)模块现在也支持这个。 + +- 允许更轻松地将出版物添加到订阅中和从订阅中删除 (Japin Li) + + 新语法是[`更改订阅...添加/删除发布`](sql-altersubscription.html).这避免了必须指定所有发布来添加/删除条目。 + +- 将主键、唯一约束和外键添加到[系统目录](catalogs.html)(彼得·艾森特劳特) + + 这些更改有助于 GUI 工具分析系统目录。现有的目录唯一索引现在已关联`独特`要么`首要的关键`约束。外键关系实际上并没有作为约束存储或实现,而是可以从函数中获取用于显示[皮克\_得到\_目录\_外国的\_键()](functions-info.html#FUNCTIONS-INFO-CATALOG-TABLE). + +- 允许[`目前角色`](functions-info.html)every place`CURRENT_USER`is accepted (Peter Eisentraut) + +#### E.3.3.5. Data Types + +- Allow extensions and built-in data types to implement[subscripting](sql-altertype.html)(Dmitry Dolgov) + + Previously subscript handling was hard-coded into the server, so that subscripting could only be applied to array types. This change allows subscript notation to be used to extract or assign portions of a value of any type for which the concept makes sense. + +- Allow subscripting of[`JSONB`](datatype-json.html)(Dmitry Dolgov) + + `JSONB`subscripting can be used to extract and assign to portions of`JSONB`documents. + +- Add support for[multirange data types](rangetypes.html)(Paul Jungwirth, Alexander Korotkov) + + These are like range data types, but they allow the specification of multiple, ordered, non-overlapping ranges. An associated multirange type is automatically created for every range type. + +- Add support for the[stemming](textsearch-dictionaries.html#TEXTSEARCH-SNOWBALL-DICTIONARY)of languages Armenian, Basque, Catalan, Hindi, Serbian, and Yiddish (Peter Eisentraut) + +- Allow[tsearch data files](textsearch-intro.html#TEXTSEARCH-INTRO-CONFIGURATIONS)to have unlimited line lengths (Tom Lane) + + The previous limit was 4K bytes. Also remove function`t_readline()`. + +- Add support for`Infinity`and`-Infinity`values in the[numeric data type](datatype-numeric.html)(Tom Lane) + + Floating-point data types already supported these. + +- Add[point operators](functions-geometry.html) `<<|`and`|>>`representing strictly above/below tests (Emre Hasegeli) + + Previously these were called`>^`and`<^`, but that naming is inconsistent with other geometric data types. The old names remain available, but may someday be removed. 
+ +- Add operators to add and subtract[`LSN`](datatype-pg-lsn.html)and numeric (byte) values (Fujii Masao) + +- Allow[binary data transfer](protocol-overview.html#PROTOCOL-FORMAT-CODES)to be more forgiving of array and record`OID`mismatches (Tom Lane) + +- Create composite array types for system catalogs (Wenjing Zeng) + + User-defined relations have long had composite types associated with them, and also array types over those composite types. System catalogs now do as well. This change also fixes an inconsistency that creating a user-defined table in single-user mode would fail to create a composite array type. + +#### E.3.3.6. Functions + +- Allow SQL-language[functions](sql-createfunction.html)和[程序](sql-createprocedure.html)使用 SQL 标准函数体 (Peter Eisentraut) + + 以前只支持字符串文字函数体。使用 SQL 标准语法编写函数或过程时,会立即解析主体并存储为解析树。这允许更好地跟踪函数依赖关系,并且可以具有安全优势。 + +- 允许[程序](sql-createprocedure.html)拥有`出去`参数(彼得·艾森特劳特) + +- 允许一些数组函数对混合的兼容数据类型进行操作 (Tom Lane) + + 功能[`array_append()`](functions-array.html),`array_prepend()`,`数组猫()`,`数组位置()`,`数组位置()`,`array_remove()`,`数组替换()`, 和[`宽度_桶()`](functions-math.html)现在采取`任何兼容数组`代替`任意数组`论据。这使他们对参数类型的精确匹配不那么挑剔。 + +- 添加 SQL 标准[`修剪数组()`](functions-array.html)功能(Vik Fearing) + + 这可以通过数组切片完成,但不太容易。 + +- 添加`拜茶`等价物[`ltrim()`](functions-binarystring.html)和`rtrim()`(乔尔·雅各布森) + +- 支持负索引[`拆分部分()`](functions-string.html)(尼基尔·贝内施) + + 负值从最后一个字段开始并向后计数。 + +- 添加[`string_to_table()`](functions-string.html)在分隔符上拆分字符串的函数 (Pavel Stehule) + + 这类似于[`regexp_split_to_table()`](functions-string.html)功能。 + +- 添加[`unistr()`](functions-string.html)允许将 Unicode 字符指定为字符串中的反斜杠十六进制转义的函数 (Pavel Stehule) + + 这类似于如何在文字字符串中指定 Unicode。 + +- 添加[`位异或()`](functions-aggregate.html)XOR 聚合函数 (Alexey Bashtanov) + +- 添加功能[`位计数()`](functions-binarystring.html)返回在位或字节串中设置的位数 (David Fetter) + +- 添加[`date_bin()`](functions-datetime.html#FUNCTIONS-DATETIME-BIN)函数(约翰·奈勒) + + 此函数“分箱”输入时间戳,将它们分组为与指定原点对齐的统一长度的间隔。 + +- 允许[`make_timestamp()`](functions-datetime.html)/`make_timestamptz()`接受负数年(彼得·艾森特劳特) + + 负值被解释为`公元前`年。 + +- 添加更新的正则表达式[`子串()`](functions-string.html)语法(彼得·艾森特劳特) + + 新的 SQL 标准语法是`SUBSTRING(文本类似模式 ESCAPE 转义字符)`.以前的标准语法是`SUBSTRING(转义字符的文本来自模式)`,它仍然被 PostgreSQL 接受。 + +- 允许补充字符类转义[\\D](functions-matching.html#POSIX-ESCAPE-SEQUENCES),`\S`, 和`\W`在正则表达式括号内 (Tom Lane) + +- 添加[`[[:单词:]]`](functions-matching.html#POSIX-BRACKET-EXPRESSIONS)作为正则表达式字符类,相当于`\w`(汤姆·莱恩) + +- 允许为默认值提供更灵活的数据类型[`带领()`](functions-window.html)和`落后()`窗函数(Vik Fearing) + +- 使非零[浮点值](datatype-numeric.html#DATATYPE-FLOAT)除以无穷大归零(堀口京太郎) + + 以前此类操作会产生下溢错误。 + +- 用零返回 NaN 对 NaN 进行浮点除法 (Tom Lane) + + 以前这会返回错误。 + +- 原因[`exp()`](functions-math.html)和`力量()`负无穷指数返回零 (Tom Lane) + + 以前它们经常返回下溢错误。 + +- 提高涉及无穷大的几何计算的准确性 (Tom Lane) + +- 尽可能将内置类型强制功能标记为防漏 (Tom Lane) + + 这允许在安全敏感的情况下更多地使用需要类型转换的函数。 + +- 改变[`pg_describe_object()`](functions-info.html), `pg_identify_object()`, 和`pg_identify_object_as_address()`to always report helpful error messages for non-existent objects (Michael Paquier) + +#### E.3.3.7.[PL/pgSQL](plpgsql.html) + +- Improve PL/pgSQL's[expression](plpgsql-expressions.html)and[assignment](plpgsql-statements.html#PLPGSQL-STATEMENTS-ASSIGNMENT)parsing (Tom Lane) + + This change allows assignment to array slices and nested record fields. + +- Allow plpgsql's[`RETURN QUERY`](plpgsql-control-structures.html)to execute its query using parallelism (Tom Lane) + +- Improve performance of repeated[CALL](plpgsql-transactions.html)s within plpgsql procedures (Pavel Stehule, Tom Lane) + +#### E.3.3.8. 
Client Interfaces + +- Add[pipeline](libpq-pipeline-mode.html#LIBPQ-PIPELINE-SENDING)mode to libpq (Craig Ringer, Matthieu Garrigues, Álvaro Herrera) + + This allows multiple queries to be sent, only waiting for completion when a specific synchronization message is sent. + +- Enhance libpq's[`target_session_attrs`](libpq-connect.html#LIBPQ-PARAMKEYWORDS)parameter options (Haribabu Kommi, Greg Nancarrow, Vignesh C, Tom Lane) + + The new options are`read-only`,`primary`,`standby`, and`prefer-standby`. + +- Improve the output format of libpq's[`PQtrace()`](libpq-control.html)(Aya Iwata, Álvaro Herrera) + +- Allow an ECPG SQL identifier to be linked to a specific connection (Hayato Kuroda) + + This is done via[`DECLARE ... STATEMENT`](ecpg-sql-declare-statement.html). + +#### E.3.3.9. Client Applications + +- Allow[vacuumdb](app-vacuumdb.html)to skip index cleanup and truncation (Nathan Bossart) + + The options are`--no-index-cleanup`and`--no-truncate`. + +- Allow[pg_dump](app-pgdump.html)to dump only certain extensions (Guillaume Lelarge) + + This is controlled by option`--extension`. + +- Add[pgbench](pgbench.html) `permute()`随机打乱值的函数(Fabien Coelho、Hironobu Suzuki、Dean Rasheed) + +- 在 pgbench 测量的重新连接开销中包括断开时间`-C`(永田勇吾) + +- 允许多个详细选项规范(`-v`) 以增加日志记录的详细程度 (Tom Lane) + + 此行为由[皮克\_倾倒](app-pgdump.html), [皮克\_饺子](app-pg-dumpall.html), 和[皮克\_恢复](app-pgrestore.html). + +##### E.3.3.9.1。[psql](app-psql.html) + +- 允许 psql 的`\df`和`\做`指定函数和运算符参数类型的命令(Greg Sabino Mullane,Tom Lane) + + 这有助于减少为重载名称打印的匹配数。 + +- 将访问方法列添加到 psql 的`\d[i|m|t]+`output (Georgios Kokolatos) + +- Allow psql's`\dt`and`\di`to show TOAST tables and their indexes (Justin Pryzby) + +- Add psql command`\dX`to list extended statistics objects (Tatsuro Yamada) + +- Fix psql's`\dT`to understand array syntax and backend grammar aliases, like`int`for`integer`(Greg Sabino Mullane, Tom Lane) + +- When editing the previous query or a file with psql's`\e`, or using`\ef`and`\ev`, ignore the results if the editor exits without saving (Laurenz Albe) + + Previously, such edits would load the previous query into the query buffer, and typically execute it immediately. This was deemed to be probably not what the user wants. + +- Improve tab completion (Vignesh C, Michael Paquier, Justin Pryzby, Georgios Kokolatos, Julien Rouhaud) + +#### E.3.3.10. Server Applications + +- Add command-line utility[pg_amcheck](app-pgamcheck.html)to simplify running`contrib/amcheck`tests on many relations (Mark Dilger) + +- Add`--no-instructions`option to[initdb](app-initdb.html)(Magnus Hagander) + + This suppresses the server startup instructions that are normally printed. + +- Stop[pg_upgrade](pgupgrade.html)from creating`analyze_new_cluster`script (Magnus Hagander) + + Instead, give comparable[vacuumdb](app-vacuumdb.html)instructions. + +- Remove support for the[postmaster](app-postgres.html) `-o`option (Magnus Hagander) + + This option was unnecessary since all passed options could already be specified directly. + +#### E.3.3.11. 
Documentation + +- Rename "Default Roles" to["Predefined Roles"](predefined-roles.html)(Bruce Momjian, Stephen Frost) + +- Add documentation for the[`factorial()`](functions-math.html#FUNCTION-FACTORIAL)功能(彼得·艾森特) + + 随着这座城市的拆除!在这个版本中,`阶乘()`是计算阶乘的唯一内置方法。 + +#### E.3.3.12。源代码 + +- 添加配置选项[`--使用ssl={openssl}`](install-procedure.html#CONFIGURE-OPTIONS-FEATURES)允许将来选择使用SSL库(Daniel Gustafsson,Michael Paquier) + + 拼写`--使用openssl`为了兼容性而保留。 + +- 添加对[抽象Unix域套接字](runtime-config-connection.html#GUC-UNIX-SOCKET-DIRECTORIES)(彼得·艾森特) + + 这在Linux和Windows上目前是受支持的。 + +- 允许Windows正确处理超过4G字节的文件(胡安·何塞·桑塔马里亚·弗莱查) + + 例如,这允许[`复制`](sql-copy.html) [沃尔](install-procedure.html#CONFIGURE-OPTIONS-MISC)文件,以及大于4G字节的关系段文件。 + +- 添加服务器参数[调试\_丢弃\_储藏室](runtime-config-developer.html#GUC-DEBUG-DISCARD-CACHES)为测试目的控制缓存刷新(克雷格·林格) + + 以前,这种行为只能在编译时设置。要在initdb期间调用它,请使用新选项`--丢弃缓存`. + +- Various improvements in valgrind error detection ability (Álvaro Herrera, Peter Geoghegan) + +- Add a test module for the regular expression package (Tom Lane) + +- Add support for LLVM version 12 (Andres Freund) + +- Change SHA1, SHA2, and MD5 hash computations to use the OpenSSL EVP API (Michael Paquier) + + This is more modern and supports FIPS mode. + +- Remove separate build-time control over the choice of random number generator (Daniel Gustafsson) + + This is now always determined by the choice of SSL library. + +- Add direct conversion routines between EUC_TW and Big5 encodings (Heikki Linnakangas) + +- Add collation version support for FreeBSD (Thomas Munro) + +- Add[`amadjustmembers`](index-api.html)to the index access method API (Tom Lane) + + This allows an index access method to provide validity checking during creation of a new operator class or family. + +- Provide feature-test macros in`libpq-fe.h`for recently-added libpq features (Tom Lane, Álvaro Herrera) + + Historically, applications have usually used compile-time checks of`PG_VERSION_NUM`to test whether a feature is available. But that's normally the server version, which might not be a good guide to libpq's version.`libpq-fe.h`now offers`#define`symbols denoting application-visible features added in v14; the intent is to keep adding symbols for such features in future versions. + +#### E.3.3.13. 
Additional Modules + +- Allow subscripting of[hstore](hstore.html)values (Tom Lane, Dmitry Dolgov) + +- Allow GiST/GIN[皮克\_trgm](pgtrgm.html)进行等式查找的索引 (Julien Rouhaud) + + 这类似于`喜欢`除了不支持通配符。 + +- 允许[立方体](cube.html)以二进制模式传输的数据类型 (KaiGai Kohei) + +- 允许[`pgstattuple_approx()`](pgstattuple.html)报告 TOAST 表 (Peter Eisentraut) + +- 添加贡献模块[皮克\_手术](pgsurgery.html)允许更改行可见性​​ (Ashutosh Sharma) + + 这对于纠正数据库损坏很有用。 + +- 添加贡献模块[老的\_快照](oldsnapshot.html)报告`XID`/time 被激活者使用的映射[老的\_快照\_临界点](runtime-config-resource.html#GUC-OLD-SNAPSHOT-THRESHOLD)(罗伯特·哈斯) + +- 允许[安检](amcheck.html)还要检查堆页 (Mark Dilger) + + 以前它只检查 B-Tree 索引页。 + +- 允许[页面检查](pageinspect.html)检查 GiST 索引 (Andrey Borodin, Heikki Linnakangas) + +- 将 pageinspect 块号更改为[`大整数`](datatype-numeric.html#DATATYPE-INT)(彼得·艾森特劳特) + +- 标记[btree\_要旨](btree-gist.html)用作并行安全 (Steven Winfield) + +##### E.3.3.13.1。[皮克\_统计\_陈述](pgstatstatements.html) + +- 从 pg 移动查询哈希计算\_统计\_对核心服务器的声明 (Julien Rouhaud) + + 新的服务器参数[计算\_询问\_id](runtime-config-statistics.html#GUC-COMPUTE-QUERY-ID)的默认值`汽车`加载此扩展时将自动启用查询 ID 计算。 + +- 原因 pg\_统计\_分别跟踪顶部和嵌套语句的语句 (Julien Rohaud) + + 以前,在跟踪所有语句时,将相同的顶部和嵌套语句作为单个条目进行跟踪;但将这些用法分开似乎更有用。 + +- 将实用程序命令的行数添加到 pg\_统计\_声明(藤井正男、葛城雄太、西野由纪) + +- 添加`pg_stat_statements_info`系统视图显示 pg\_统计\_声明活动(葛城裕太、西野由纪、中道直树) + +##### E.3.3.13.2。[postgres_fdw](postgres-fdw.html) + +- 允许 postgres_fdw to`INSERT`rows in bulk (Takayuki Tsunakawa, Tomas Vondra, Amit Langote) + +- Allow postgres_fdw to import table partitions if specified by[`IMPORT FOREIGN SCHEMA ... LIMIT TO`](sql-importforeignschema.html)(Matthias van de Meent) + + By default, only the root of a partitioned table is imported. + +- Add postgres_fdw function`postgres_fdw_get_connections()`to report open foreign server connections (Bharath Rupireddy) + +- Allow control over whether foreign servers keep connections open after transaction completion (Bharath Rupireddy) + + This is controlled by`keep_connections`and defaults to on. + +- Allow postgres_fdw to reestablish foreign server connections if necessary (Bharath Rupireddy) + + Previously foreign server restarts could cause foreign table access errors. + +- Add postgres_fdw functions to discard cached connections (Bharath Rupireddy) + +### E.3.4. Acknowledgments + +The following individuals (in alphabetical order) have contributed to this release as patch authors, committers, reviewers, testers, or reporters of issues. 
+ +| Abhijit Menon-Sen | +| ----------------- | +| Ádám Balogh | +| 阿德里安·何 | +| 阿山哈迪 | +| 阿金切里安 | +| 亚历山大·阿列克谢耶夫 | +| 亚历山德罗·盖拉尔迪 | +| 亚历克斯·科热米亚金 | +| 亚历山大·科罗特科夫 | +| 亚历山大·拉欣 | +| 亚历山大·纳夫拉蒂尔 | +| 亚历山大·皮哈洛夫 | +| 亚历山德拉王 | +| 阿列克谢·巴什坦诺夫 | +| 阿列克谢·布尔加科夫 | +| 阿列克谢·康德拉托夫 | +| 阿尔瓦罗·埃雷拉 | +| 阿米特·卡皮拉 | +| 阿米特·坎德卡 | +| 阿米特·朗格特 | +| 南阿穆尔 | +| 阿纳斯塔西娅·卢本尼科娃 | +| 安德鲁·格罗布 | +| 安德鲁·克雷施默 | +| 安德鲁·瑞尔帝国 | +| 安德烈亚斯·维希特 | +| 安德烈的朋友 | +| 安德鲁·比尔 | +| 安德鲁·邓斯坦 | +| 安德鲁·吉尔思 | +| 安德烈·鲍罗丁 | +| 安德烈·列皮霍夫 | +| 安迪范 | +| 安东·沃罗申 | +| 安东尼·豪斯卡 | +| 阿恩·罗兰 | +| 阿尔塞尼·谢尔 | +| 亚瑟·纳西门托 | +| 亚瑟·扎基洛夫 | +| 阿舒托什·巴帕特 | +| 阿舒托什·夏尔马 | +| 阿什温·阿格拉瓦尔 | +| 阿西夫·雷曼 | +| 阿西姆·普拉文 | +| 鸟越淳 | +| 岩田绫 | +| 巴里·佩德森 | +| 巴斯普 | +| 鲍伊尔詹·萨哈里耶夫 | +| 比娜·艾默生 | +| 贝努瓦·洛布雷奥 | +| 伯恩德·赫尔姆 | +| 伯恩哈德·M·维德曼 | +| 伯特兰·德鲁沃 | +| 巴拉特·鲁皮雷迪 | +| 鲍里斯·科尔帕科夫 | +| 布拉尔接头 | +| 布莱恩·叶 | +| 布鲁斯·莫吉安 | +| 布林·卢埃林 | +| 卡梅伦丹尼尔 | +| 查普曼弗莱克 | +| 查尔斯·桑博尔斯基 | +| 查理·霍恩斯比 | +| 陈娇倩 | +| 克里斯·威尔逊 | +| 基督教探索 | +| 克里斯托弗·伯格 | +| 克里斯托夫·库尔图瓦 | +| 科里·亨克 | +| 克雷格·林格 | +| Dagfinn Ilmari Mannsåker | +| 达纳伯德 | +| 丹尼尔·切尔尼 | +| 丹尼尔·古斯塔夫森 | +| 丹尼尔·维瑞特 | +| 丹尼尔韦斯特曼 | +| 丹尼尔·瓦拉佐 | +| Dar Alathar-也门 | +| 达拉菲·普拉利亚斯科斯基 | +| 戴夫克莱默 | +| 大卫克里斯滕森 | +| 大卫费特 | +| 大卫·G·约翰斯顿 | +| 大卫·盖尔 | +| 大卫吉尔曼 | +| 大卫·皮罗特 | +| 大卫·罗利 | +| 大卫斯蒂尔 | +| 大卫·特隆 | +| 张大卫 | +| 院长拉希德 | +| 丹尼斯赞助人 | +| 点飞 | +| 迪利普库马尔 | +| 迪米特里·努舍勒 | +| 德米特里·库兹明 | +| 德米特里·多尔戈夫 | +| 德米特里·马拉卡索夫 | +| 多马戈伊·斯莫利亚诺维奇 | +| 东旭 | +| 道格拉斯·杜尔 | +| 邓肯金沙 | +| 埃德蒙霍纳 | +| 埃德森·里希特 | +| 叶戈尔·罗戈夫 | +| 叶卡捷琳娜·基里亚诺娃 | +| 埃琳娜·英卓普斯卡娅 | +| 埃米尔·伊格兰 | +| 埃姆雷·哈塞格利 | +| 埃里克·蒂恩斯 | +| 埃里克·赖克斯 | +| 欧文·布兰德施泰特 | +| 艾蒂安·斯塔曼斯 | +| 藤田悦郎 | +| 欧根康科夫 | +| 欧拉·塔维拉 | +| 法比安·科埃略 | +| 法布里齐奥·德·罗耶斯·梅洛 | +| 费德里科·卡塞利 | +| 费利克斯·莱赫纳 | +| 菲利普·戈斯波迪诺夫 | +| 弗洛里斯·范尼 | +| 弗兰克·加涅潘 | +| 弗里斯·贾文 | +| 乔治斯·科科拉托斯 | +| 格雷格·南卡罗 | +| 格雷格·雷赫莱夫斯基 | +| 格雷格·萨比诺·穆兰 | +| 格雷戈里·史密斯 | +| 格里高利·斯莫尔金 | +| 纪尧姆·勒拉格 | +| 盖伊·伯吉斯 | +| 盖仁豪 | +| 海鹰堂 | +| 哈米德·阿赫塔尔 | +| 汉斯·布施曼 | +| 吴昊 | +| 哈里巴布·科米 | +| 哈里赛·哈里 | +| 黑田隼人 | +| 希斯勋爵 | +| 海基·林纳坎加斯 | +| 亨利·欣策 | +| 赫维格·戈曼斯 | +| Himanshu Upadhyaya | +| 铃木博信 | +| 井上浩 | +| 小林久典 | +| 洪扎霍拉克 | +| 侯志杰 | +| 休伯特·卢巴切夫斯基 | +| 休伯特张 | +| 伊恩·巴维克 | +| 伊布拉·艾哈迈德 | +| 伊尔杜斯·库尔班加耶夫 | +| 艾萨克·莫兰 | +| 以色列巴特 | +| 伊塔马尔·加夫尼克 | +| 雅各布冠军 | +| 海梅·卡萨诺瓦 | +| 海梅·索勒 | +| 雅库布·瓦塔克 | +| 詹姆斯·科尔曼 | +| 詹姆斯·希利亚德 | +| 詹姆斯·亨特 | +| 詹姆斯通知 | +| 扬·穆斯勒 | +| 李嘉平 | +| 杰森·贝茨 | +| 杰森哈维 | +| 杰森金 | +| 吉万·拉德 | +| 杰夫戴维斯 | +| 杰夫·简斯 | +| 耶尔特·芬尼玛 | +| 杰里米·埃文斯 | +| 杰里米·芬泽尔 | +| 杰里米·史密斯 | +| 杰西·金基德 | +| 张杰西 | +| 张杰 | +| 吉姆·多蒂 | +| 吉姆·纳斯比 | +| 吉米安吉拉科斯 | +| 吉米·伊 | +| 吉里·费法尔 | +| 乔·康威 | +| 乔尔·雅各布森 | +| 约翰·奈勒 | +| 约翰汤普森 | +| 乔纳森·卡茨 | +| 约瑟夫·希曼内克 | +| 约瑟夫·纳米亚斯 | +| 乔什·伯库斯 | +| 胡安·何塞·圣玛丽亚·弗莱查 | +| 朱利安·鲁豪 | +| 杨俊峰 | +| 于尔根·普茨 | +| 贾斯汀·普利兹比 | +| 大西和孝 | +| 黑田圭佑 | +| 凯利敏 | +| 冈村健介 | +| 凯文·斯威特 | +| 叶凯文 | +| 柯克·贾米森 | +| 光平开盖 | +| 康斯坦丁·克尼日尼克 | +| 三宅市 | +| 克日什托夫·格拉德克 | +| 昆塔尔戈什 | +| 凯尔金斯伯里 | +| 堀口京太郎 | +| 劳伦特·哈森 | +| 劳伦兹阿尔贝 | +| 李东旭 | +| 李嘉平 | +| 刘怀玲 | +| 卢克弗莱明 | +| 卢多维奇库蒂 | +| 路易斯·罗伯托 | +| 卢克·埃德 | +| 马良柱 | +| 马切克·萨克瑞达 | +| 马丹库马拉 | +| 马格努斯·哈根德 | +| 马亨德拉·辛格·塔罗 | +| 马克西姆米柳京 | +| 马克·博伦 | +| 马辛·克鲁波维茨 | +| 马可·阿策里克 | +| 马雷克·苏巴 | +| 玛丽娜·波利亚科娃 | +| 马里奥·埃门劳尔 | +| 马克·迪尔格 | +| 马克·黄 | +| 赵马克 | +| 马库斯·万纳 | +| 马丁·马克斯 | +| 马丁·维瑟 | +| 泽田正彦 | +| 池田正宏 | +| 藤井正夫 | +| 马蒂斯·鲁道夫 | +| 马蒂亚斯·范德米特 | +| 马修·加里格斯 | +| 马蒂斯·范德弗勒滕 | +| 马克西姆·奥尔洛夫 | +| 梅兰妮普拉格曼 | +| 梅林·蒙库尔 | +| 迈克尔·班克 | +| 迈克尔·布朗 | +| 迈克尔·梅克斯 | +| 迈克尔·帕奎尔 | +| 迈克尔·保罗·基利安 | +| 迈克尔·鲍尔斯 | +| 迈克尔·瓦斯托拉 | +| 迈克尔·尼古拉耶夫 | +| 迈克尔·阿尔布莱希特 | +| 米凯尔·古斯塔夫森 | +| 移动李 | +| 穆罕默德·乌萨马 | +| 纳加拉吉拉吉 | +| 中道直树 | +| 内森·博萨特 | +| 弥敦道隆 | +| Nazli Ugur Koyluoglu | +| 内哈·夏尔马 | +| 尼尔陈 | +| 尼克克莱顿 | +| 尼科·威廉姆斯 | +| 尼基尔·贝内施 | +| 尼基尔·桑塔克 | +| 尼基塔·格鲁霍夫 | +| 尼基塔·科涅夫 | +| 尼古拉斯·伯科夫 | +| 
尼古拉·萨莫赫瓦洛夫 | +| 尼古拉·沙普洛夫 | +| 尼廷·贾达夫 | +| 诺亚米施 | +| 筱田纪吉 | +| 奥丁乌格达尔 | +| 奥列格·巴尔图诺夫 | +| 奥列格·萨莫洛夫 | +| 恩德卡拉奇 | +| 帕斯卡勒格朗 | +| 保罗·福斯特 | +| 郭保罗 | +| 保罗·容沃思 | +| 保罗马丁内斯 | +| 保罗·西瓦什 | +| 帕万·德奥拉西 | +| 帕维尔·博耶夫 | +| 帕维尔鲍里索夫 | +| 帕维尔·卢扎诺夫 | +| 帕维尔·斯特胡勒 | +| 刘鹏程 | +| 彼得·艾森特劳特 | +| 彼得·吉根 | +| 彼得·史密斯 | +| 彼得·范迪维尔 | +| 彼得·费多罗夫 | +| 彼得·耶利内克 | +| 菲尔·克雷洛夫 | +| 菲利普·格拉姆佐 | +| 菲利普·博多安 | +| 菲利普·门克 | +| 皮埃尔·吉罗 | +| 普拉巴特萨胡 | +| 权宗良 | +| 拉菲·沙米姆 | +| 拉希拉赛义德 | +| 拉杰库马尔·拉古万希 | +| 拉尼尔·维莱拉 | +| 里贾纳奥贝 | +| 雷米·拉佩尔 | +| 罗伯特·福贾 | +| 罗伯特·格兰奇 | +| 罗伯特·哈斯 | +| 罗伯特·卡勒特 | +| 罗伯特·索辛斯基 | +| 罗伯特·特里特 | +| 罗宾·阿比 | +| 罗宾斯·塔拉坎 | +| 罗杰·梅森 | +| 罗希特·博盖特 | +| 罗曼·扎尔科夫 | +| 罗恩·约翰逊 | +| 罗南·邓克劳 | +| 瑞恩兰伯特 | +| 松村亮 | +| 赛义德虎白山 | +| 赛特·塔尔哈·尼桑奇 | +| 桑德罗·玛尼 | +| 桑托什·乌杜皮 | +| 斯科特·里伯 | +| 瑟罗普·萨库尼 | +| 谢尔盖·科尔尼洛夫 | +| 谢尔盖·贝尔尼科夫 | +| 谢尔盖·切尔卡申 | +| 谢尔盖·科波索夫 | +| 谢尔盖·辛德鲁克 | +| 谢尔盖·祖布科夫斯基 | +| 王肖恩 | +| 谢伊·罗扬斯基 | +| 石宇 | +| 加藤真也 | +| 冈野真也 | +| 西格丽德·埃伦赖希 | +| 西蒙·诺里斯 | +| 西蒙·里格斯 | +| 索福克利斯帕帕索福克利 | +| Soumyadeep Chakraborty | +| 斯塔斯·凯尔维奇 | +| 斯蒂芬·斯普林尔 | +| 斯蒂芬·洛雷克 | +| 斯蒂芬弗罗斯特 | +| 史蒂文温菲尔德 | +| 苏拉菲尔·特梅斯根 | +| 苏拉吉·卡拉格 | +| 斯文克莱姆 | +| 大隅高道 | +| 面城隆 | +| 纲川孝之 | +| 唐海英 | +| 笠原达人 | +| 石井达男 | +| 山田达郎 | +| 西奥多·阿尔塞尼·拉里奥诺夫-特里奇金 | +| 托马斯·凯勒 | +| 托马斯·门罗 | +| 托马斯·特伦茨 | +| 蒂斯范达姆 | +| 汤姆·埃利斯 | +| 汤姆·戈特弗里德 | +| 汤姆·莱恩 | +| 汤姆·维吉尔布里夫 | +| 托马斯·巴顿 | +| 托马斯·冯德拉 | +| 平光智宏 | +| 托尼·雷克斯 | +| 毗湿奴普拉巴卡兰 | +| 瓦伦丁·加蒂安-男爵 | +| 维克多·瓦格纳 | +| 维克多·叶戈罗夫 | +| 维涅什 C | +| 维克恐惧 | +| 维塔利·乌斯蒂诺夫 | +| 弗拉基米尔·希特尼科夫 | +| 维亚切斯拉夫·沙布利斯蒂 | +| 王申豪 | +| Wei Wang | +| Wells Oliver | +| Wenjing Zeng | +| Wolfgang Walther | +| Yang Lin | +| Yanliang Lei | +| Yaoguang Chen | +| Yaroslav Pashinsky | +| Yaroslav Schekin | +| Yasushi Yamashita | +| Yoran Heling | +| YoungHwan Joo | +| Yugo Nagata | +| Yuki Seino | +| Yukun Wang | +| Yulin Pei | +| Yura Sokolov | +| Yuta Katsuragi | +| Yuta Kondo | +| Yuzuko Hosoya | +| Zhihong Yu | +| Zhiyong Wu | +| Zsolt Ero | diff --git a/docs/X/release-prior.md b/docs/en/release-prior.md similarity index 100% rename from docs/X/release-prior.md rename to docs/en/release-prior.md diff --git a/docs/en/release-prior.zh.md b/docs/en/release-prior.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..72c89fcb5e7fda4bbe34051ab4cd0279379ac045 --- /dev/null +++ b/docs/en/release-prior.zh.md @@ -0,0 +1,3 @@ +## E.4. Prior Releases + +Release notes for prior release branches can be found at[`https://www.postgresql.org/docs/release/`](https://www.postgresql.org/docs/release/) diff --git a/docs/X/role-membership.md b/docs/en/role-membership.md similarity index 100% rename from docs/X/role-membership.md rename to docs/en/role-membership.md diff --git a/docs/en/role-membership.zh.md b/docs/en/role-membership.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..9b74c758d34a98fb7044a7d265d77f9c1289459e --- /dev/null +++ b/docs/en/role-membership.zh.md @@ -0,0 +1,70 @@ +## 22.3. Role Membership + +[](<>) + +It is frequently convenient to group users together to ease management of privileges: that way, privileges can be granted to, or revoked from, a group as a whole. In PostgreSQL this is done by creating a role that represents the group, and then granting*membership*in the group role to individual user roles. + +To set up a group role, first create the role: + +``` +CREATE ROLE name; +``` + +Typically a role being used as a group would not have the`LOGIN`attribute, though you can set it if you wish. 
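For example, a minimal sketch (the role name`staff`is illustrative):

```
CREATE ROLE staff NOLOGIN;
```

Writing`NOLOGIN`explicitly just documents that the group role is not meant to be logged into directly.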
Once the group role exists, you can add and remove members using the[`GRANT`](sql-grant.html)and[`REVOKE`](sql-revoke.html)commands:

```
GRANT group_role TO role1, ... ;
REVOKE group_role FROM role1, ... ;
```

You can grant membership to other group roles, too (since there isn't really any distinction between group roles and non-group roles). The database will not let you set up circular membership loops. Also, it is not permitted to grant membership in a role to`PUBLIC`.

The members of a group role can use the privileges of the role in two ways. First, every member of a group can explicitly do[`SET ROLE`](sql-set-role.html)to temporarily “become” the group role. In this state, the database session has access to the privileges of the group role rather than the original login role, and any database objects created are considered owned by the group role not the login role. Second, member roles that have the`INHERIT`attribute automatically have use of the privileges of roles of which they are members, including any privileges inherited by those roles. As an example, suppose we have done:

```
CREATE ROLE joe LOGIN INHERIT;
CREATE ROLE admin NOINHERIT;
CREATE ROLE wheel NOINHERIT;
GRANT admin TO joe;
GRANT wheel TO admin;
```

Immediately after connecting as role`joe`, a database session will have use of privileges granted directly to`joe`plus any privileges granted to`admin`, because`joe`“inherits”`admin`'s privileges. However, privileges granted to`wheel`are not available, because even though`joe`is indirectly a member of`wheel`, the membership is via`admin`which has the`NOINHERIT`attribute. After:

```
SET ROLE admin;
```

the session would have use of only those privileges granted to`admin`, and not those granted to`joe`. After:

```
SET ROLE wheel;
```

the session would have use of only those privileges granted to`wheel`, and not those granted to either`joe`or`admin`. The original privilege state can be restored with any of:

```
SET ROLE joe;
SET ROLE NONE;
RESET ROLE;
```

### Note

The`SET ROLE`command always allows selecting any role that the original login role is directly or indirectly a member of. Thus, in the above example, it is not necessary to become`admin`before becoming`wheel`.

### Note

In the SQL standard, there is a clear distinction between users and roles, and users do not automatically inherit privileges while roles do. This behavior can be obtained in PostgreSQL by giving roles being used as SQL roles the`INHERIT`attribute, while giving roles being used as SQL users the`NOINHERIT`attribute. However, PostgreSQL defaults to giving all roles the`INHERIT`attribute, for backward compatibility with pre-8.1 releases in which users always had use of permissions granted to groups they were members of.

The role attributes`LOGIN`,`SUPERUSER`,`CREATEDB`, and`CREATEROLE`can be thought of as special privileges, but they are never inherited as ordinary privileges on database objects are. You must actually`SET ROLE`to a specific role having one of these attributes in order to make use of the attribute. Continuing the above example, we might choose to grant`CREATEDB`and`CREATEROLE`to the`admin`role. Then a session connecting as role`joe`would not have these privileges immediately, only after doing`SET ROLE admin`.

To destroy a group role, use[`DROP ROLE`](sql-droprole.html):

```
DROP ROLE name;
```

Any memberships in the group role are automatically revoked (but the member roles are not otherwise affected). diff --git a/docs/X/routine-vacuuming.md b/docs/en/routine-vacuuming.md similarity index 100% rename from docs/X/routine-vacuuming.md rename to docs/en/routine-vacuuming.md diff --git a/docs/en/routine-vacuuming.zh.md b/docs/en/routine-vacuuming.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..51b40e7bd538b2010cb9ae06f6ee660710e9fe67 --- /dev/null +++ b/docs/en/routine-vacuuming.zh.md @@ -0,0 +1,200 @@ +## 25.1. Routine Vacuuming

[25.1.1. Vacuuming Basics](routine-vacuuming.html#VACUUM-BASICS)

[25.1.2. Recovering Disk Space](routine-vacuuming.html#VACUUM-FOR-SPACE-RECOVERY)

[25.1.3. Updating Planner Statistics](routine-vacuuming.html#VACUUM-FOR-STATISTICS)

[25.1.4. Updating the Visibility Map](routine-vacuuming.html#VACUUM-FOR-VISIBILITY-MAP)

[25.1.5. Preventing Transaction ID Wraparound Failures](routine-vacuuming.html#VACUUM-FOR-WRAPAROUND)

[25.1.6. The Autovacuum Daemon](routine-vacuuming.html#AUTOVACUUM)

[](<>)

PostgreSQL databases require periodic maintenance known as*vacuuming*. For many installations, it is sufficient to let vacuuming be performed by the*autovacuum daemon*, which is described in[Section 25.1.6](routine-vacuuming.html#AUTOVACUUM).
You might need to adjust the autovacuuming parameters described there to obtain best results for your situation. Some database administrators will want to supplement or replace the daemon's activities with manually-managed`VACUUM`commands, which typically are executed according to a schedule by cron or Task Scheduler scripts. To set up manually-managed vacuuming properly, it is essential to understand the issues discussed in the next few subsections. Administrators who rely on autovacuuming may still wish to skim this material to help them understand and adjust autovacuuming.

### 25.1.1. Vacuuming Basics

PostgreSQL's[`VACUUM`](sql-vacuum.html)command has to process each table on a regular basis for several reasons:

1. To recover or reuse disk space occupied by updated or deleted rows.
2. To update data statistics used by the PostgreSQL query planner.
3. To update the visibility map, which speeds up[index-only scans](indexes-index-only-scans.html).
4. To protect against loss of very old data due to*transaction ID wraparound*or*multixact ID wraparound*.

Each of these reasons dictates performing`VACUUM`operations of varying frequency and scope, as explained in the following subsections.

There are two variants of`VACUUM`: standard`VACUUM`and`VACUUM FULL`.`VACUUM FULL`can reclaim more disk space but runs much more slowly. Also, the standard form of`VACUUM`can run in parallel with production database operations. (Commands such as`SELECT`,`INSERT`,`UPDATE`, and`DELETE`will continue to function normally, though you will not be able to modify the definition of a table with commands such as`ALTER TABLE`while it is being vacuumed.)`VACUUM FULL`requires an`ACCESS EXCLUSIVE`lock on the table it is working on, and therefore cannot be done in parallel with other use of the table. Generally, therefore, administrators should strive to use standard`VACUUM`and avoid`VACUUM FULL`.

`VACUUM`creates a substantial amount of I/O traffic, which can cause poor performance for other active sessions. There are configuration parameters that can be adjusted to reduce the performance impact of background vacuuming — see[Section 20.4.4](runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-VACUUM-COST).

### 25.1.2. Recovering Disk Space

[](<>)

In PostgreSQL, an`UPDATE`or`DELETE`of a row does not immediately remove the old version of the row. This approach is necessary to gain the benefits of multiversion concurrency control (MVCC, see[Chapter 13](mvcc.html)): the row version must not be deleted while it is still potentially visible to other transactions. But eventually, an outdated or deleted row version is no longer of interest to any transaction. The space it occupies must then be reclaimed for reuse by new rows, to avoid unbounded growth of disk space requirements. This is done by running`VACUUM`.

The standard form of`VACUUM`removes dead row versions in tables and indexes and marks the space available for future reuse. However, it will not return the space to the operating system, except in the special case where one or more pages at the end of a table become entirely free and an exclusive table lock can be easily obtained. In contrast,`VACUUM FULL`actively compacts tables by writing a complete new version of the table file with no dead space. This minimizes the size of the table, but can take a long time. It also requires extra disk space for the new copy of the table, until the operation completes.

The usual goal of routine vacuuming is to do standard`VACUUM`s often enough to avoid needing`VACUUM FULL`. The autovacuum daemon attempts to work this way, and in fact will never issue`VACUUM FULL`. In this approach, the idea is not to keep tables at their minimum size, but to maintain steady-state usage of disk space: each table occupies space equivalent to its minimum size plus however much space gets used up between vacuum runs. Although`VACUUM FULL`can be used to shrink a table back to its minimum size and return the disk space to the operating system, there is not much point in this if the table will just grow again in the future.
Thus, moderately-frequent standard`VACUUM`runs are a better approach than infrequent`VACUUM FULL`runs for maintaining heavily-updated tables.

Some administrators prefer to schedule vacuuming themselves, for example doing all the work at night when load is low. The difficulty with doing vacuuming according to a fixed schedule is that if a table has an unexpected spike in update activity, it may get bloated to the point that`VACUUM FULL`is really necessary to reclaim space. Using the autovacuum daemon alleviates this problem, since the daemon schedules vacuuming dynamically in response to update activity. It is unwise to disable the daemon completely unless you have an extremely predictable workload. One possible compromise is to set the daemon's parameters so that it will only react to unusually heavy update activity, thus keeping things from getting out of hand, while scheduled`VACUUM`s are expected to do the bulk of the work when the load is typical.

For those not using autovacuum, a typical approach is to schedule a database-wide`VACUUM`once a day during a low-usage period, supplemented by more frequent vacuuming of heavily-updated tables as necessary. (Some installations with extremely high update rates vacuum their busiest tables as often as once every few minutes.) If you have multiple databases in a cluster, don't forget to`VACUUM`each one; the program[vacuumdb](app-vacuumdb.html)might be helpful.

### Tip

Plain`VACUUM`may not be satisfactory when a table contains large numbers of dead row versions as a result of massive update or delete activity. If you have such a table and you need to reclaim the excess disk space it occupies, you will need to use`VACUUM FULL`, or alternatively[`CLUSTER`](sql-cluster.html)or one of the table-rewriting variants of[`ALTER TABLE`](sql-altertable.html). These commands rewrite an entire new copy of the table and build new indexes for it. All these options require an`ACCESS EXCLUSIVE`lock. Note that they also temporarily use extra disk space approximately equal to the size of the table, since the old copies of the table and indexes can't be released until the new ones are complete.

### Tip

If you have a table whose entire contents are deleted on a periodic basis, consider doing it with[`TRUNCATE`](sql-truncate.html)rather than using`DELETE`followed by`VACUUM`.`TRUNCATE`removes the entire content of the table immediately, without requiring a subsequent`VACUUM`or`VACUUM FULL`to reclaim the now-unused disk space. The disadvantage is that strict MVCC semantics are violated.

### 25.1.3. Updating Planner Statistics

[](<>)[](<>)

The PostgreSQL query planner relies on statistical information about the contents of tables in order to generate good plans for queries. These statistics are gathered by the[`ANALYZE`](sql-analyze.html)command, which can be invoked by itself or as an optional step in`VACUUM`. It is important to have reasonably accurate statistics, otherwise poor choices of plans might degrade database performance.

The autovacuum daemon, if enabled, will automatically issue`ANALYZE`commands whenever the content of a table has changed sufficiently. However, administrators might prefer to rely on manually-scheduled`ANALYZE`operations, particularly if it is known that update activity on a table will not affect the statistics of “interesting” columns. The daemon schedules`ANALYZE`strictly as a function of the number of rows inserted or updated; it has no knowledge of whether that will lead to meaningful statistical changes.

As with vacuuming for space recovery, frequent updates of statistics are more useful for heavily-updated tables than for seldom-updated ones. But even for a heavily-updated table, there might be no need for statistics updates if the statistical distribution of the data is not changing much. A simple rule of thumb is to think about how much the minimum and maximum values of the columns in the table change. For example, a`timestamp`column that contains the time of row update will have a constantly-increasing maximum value as rows are added and updated; such a column will probably need more frequent statistics updates than, say, a column containing URLs for pages accessed on a website. The URL column might receive changes just as often, but the statistical distribution of its values probably changes relatively slowly.

It is possible to run`ANALYZE`on specific tables and even just specific columns of a table, so the flexibility exists to update some statistics more frequently than others if your application requires it. In practice, however, it is usually best to just analyze the entire database, because it is a fast operation.`ANALYZE`uses a statistically random sampling of the rows of a table rather than reading every single row.

### Tip

Although per-column tweaking of`ANALYZE`frequency might not be very productive, you might find it worthwhile to do per-column adjustment of the level of detail of the statistics collected by`ANALYZE`.
Columns that are heavily used in`WHERE`clauses and have highly irregular data distributions might require a finer-grain data histogram than other columns. See`ALTER TABLE SET STATISTICS`, or change the database-wide default using the[default_statistics_target](runtime-config-query.html#GUC-DEFAULT-STATISTICS-TARGET)configuration parameter.

Also, by default there is limited information available about the selectivity of functions. However, if you create a statistics object or an expression index that uses a function call, useful statistics will be gathered about the function, which can greatly improve query plans that use the expression index.

### Tip

The autovacuum daemon does not issue`ANALYZE`commands for foreign tables, since it has no means of determining how often that might be useful. If your queries require statistics on foreign tables for proper planning, it's a good idea to run manually-managed`ANALYZE`commands on those tables on a suitable schedule.

### 25.1.4. Updating the Visibility Map

Vacuum maintains a[visibility map](storage-vm.html)for each table to keep track of which pages contain only tuples that are known to be visible to all active transactions (and all future transactions, until the page is again modified). This has two purposes. First, vacuum itself can skip such pages on the next run, since there is nothing to clean up.

Second, it allows PostgreSQL to answer some queries using only the index, without reference to the underlying table. Since PostgreSQL indexes don't contain tuple visibility information, a normal index scan fetches the heap tuple for each matching index entry, to check whether it should be seen by the current transaction. An[*index-only scan*](indexes-index-only-scans.html), on the other hand, checks the visibility map first. If it's known that all tuples on the page are visible, the heap fetch can be skipped. This is most useful on large data sets where the visibility map can prevent disk accesses. The visibility map is vastly smaller than the heap, so it can easily be cached even when the heap is very large.

### 25.1.5. Preventing Transaction ID Wraparound Failures

[](<>)[](<>)

PostgreSQL's[MVCC](mvcc-intro.html)transaction semantics depend on being able to compare transaction ID (XID) numbers: a row version with an insertion XID greater than the current transaction's XID is “in the future” and should not be visible to the current transaction. But since transaction IDs have limited size (32 bits) a cluster that runs for a long time (more than 4 billion transactions) would suffer*transaction ID wraparound*: the XID counter wraps around to zero, and all of a sudden transactions that were in the past appear to be in the future — which means their output becomes invisible. In short, catastrophic data loss. (Actually the data is still there, but that's cold comfort if you cannot get at it.) To avoid this, it is necessary to vacuum every table in every database at least once every two billion transactions.

The reason that periodic vacuuming solves the problem is that`VACUUM`will mark rows as*frozen*, indicating that they were inserted by a transaction that committed sufficiently far in the past that the effects of the inserting transaction are certain to be visible to all current and future transactions. Normal XIDs are compared using modulo-2^32 arithmetic.
This means that for every normal XID, there are two billion XIDs that are “older” and two billion that are “newer”; another way to say it is that the normal XID space is circular with no endpoint. Therefore, once a row version has been created with a particular normal XID, the row version will appear to be “in the past” for the next two billion transactions, no matter which normal XID we are talking about. If the row version still exists after more than two billion transactions, it will suddenly appear to be in the future. To prevent this, PostgreSQL reserves a special XID,`FrozenTransactionId`, which does not follow the normal XID comparison rules and is always considered older than every normal XID. Frozen row versions are treated as if the inserting XID were`FrozenTransactionId`, so that they will appear to be “in the past” to all normal transactions regardless of wraparound issues, and so such row versions will be valid until deleted, no matter how long that is.

### Note

In PostgreSQL versions before 9.4, freezing was implemented by actually replacing a row's insertion XID with`FrozenTransactionId`, which was visible in the row's`xmin`system column. Newer versions just set a flag bit, preserving the row's original`xmin`for possible forensic use. However, rows with`xmin`equal to`FrozenTransactionId`(2) may still be found in databases pg_upgraded from versions before 9.4.

Also, system catalogs may contain rows with`xmin`equal to`BootstrapTransactionId`(1), indicating that they were inserted during the first phase of initdb. Like`FrozenTransactionId`, this special XID is treated as older than every normal XID.

[vacuum_freeze_min_age](runtime-config-client.html#GUC-VACUUM-FREEZE-MIN-AGE)controls how old an XID value has to be before rows bearing that XID will be frozen. Increasing this setting may avoid unnecessary work if the rows that would otherwise be frozen will soon be modified again, but decreasing this setting increases the number of transactions that can elapse before the table must be vacuumed again.

`VACUUM`uses the[visibility map](storage-vm.html)to determine which pages of a table must be scanned. Normally, it will skip pages that don't have any dead row versions even if those pages might still have row versions with old XID values. Therefore, normal`VACUUM`s won't always freeze every old row version in the table. Periodically,`VACUUM`will perform an*aggressive vacuum*, skipping only those pages which contain neither dead rows nor any unfrozen XID or MXID values.[vacuum_freeze_table_age](runtime-config-client.html#GUC-VACUUM-FREEZE-TABLE-AGE)controls when`VACUUM`does that: such a scan is performed if the number of transactions that have passed since the last such scan is greater than`vacuum_freeze_table_age`minus`vacuum_freeze_min_age`. Setting`vacuum_freeze_table_age`to 0 forces`VACUUM`to use this more aggressive strategy for all scans.

The maximum time that a table can go unvacuumed is two billion transactions minus the`vacuum_freeze_min_age`value at the time of the last aggressive vacuum. If it were to go unvacuumed for longer than that, data loss could result. To ensure that this does not happen, autovacuum is invoked on any table that might contain unfrozen rows with XIDs older than the age specified by the configuration parameter[autovacuum_freeze_max_age](runtime-config-autovacuum.html#GUC-AUTOVACUUM-FREEZE-MAX-AGE). (This will happen even if autovacuum is disabled.)

This implies that if a table is not otherwise vacuumed, autovacuum will be invoked on it approximately once every`autovacuum_freeze_max_age`minus`vacuum_freeze_min_age`transactions. For tables that are regularly vacuumed for space reclamation purposes, this is of little importance. However, for static tables (including tables that receive inserts, but no updates or deletes), there is no need to vacuum for space reclamation, so it can be useful to try to maximize the interval between forced autovacuums on very large static tables. Obviously one can do this either by increasing`autovacuum_freeze_max_age`or decreasing`vacuum_freeze_min_age`.
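A hypothetical sketch of the second approach, applied cluster-wide (the value is illustrative; assumes superuser privileges):

```
ALTER SYSTEM SET vacuum_freeze_min_age = 1000000;
SELECT pg_reload_conf();
```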
+
+The effective maximum for`vacuum_freeze_table_age`is 0.95 *`autovacuum_freeze_max_age`; a setting higher than that will be capped to the maximum. A value higher than`autovacuum_freeze_max_age`wouldn't make sense because an anti-wraparound autovacuum would be triggered at that point anyway, and the 0.95 multiplier leaves some breathing room to run a manual`VACUUM`before that happens. As a rule of thumb,`vacuum_freeze_table_age`should be set to a value somewhat below`autovacuum_freeze_max_age`, leaving enough gap so that a regularly scheduled`VACUUM`or an autovacuum triggered by normal delete and update activity is run in that window. Setting it too close could lead to anti-wraparound autovacuums, even though the table was recently vacuumed to reclaim space, whereas lower values lead to more frequent aggressive vacuuming.

The sole disadvantage of increasing`autovacuum_freeze_max_age`(and`vacuum_freeze_table_age`along with it) is that the`pg_xact`and`pg_commit_ts`subdirectories of the database cluster will take more space, because they must store the commit status and (if`track_commit_timestamp`is enabled) timestamp of all transactions back to the`autovacuum_freeze_max_age`horizon. The commit status uses two bits per transaction, so if`autovacuum_freeze_max_age`is set to its maximum allowed value of two billion,`pg_xact`can be expected to grow to about half a gigabyte and`pg_commit_ts`to about 20GB. If this is trivial compared to your total database size, setting`autovacuum_freeze_max_age`to its maximum allowed value is recommended. Otherwise, set it depending on what you are willing to allow for`pg_xact`and`pg_commit_ts`storage. (The default, 200 million transactions, translates to about 50MB of`pg_xact`storage and about 2GB of`pg_commit_ts`storage.)

One disadvantage of decreasing`vacuum_freeze_min_age`is that it might cause`VACUUM`to do useless work: freezing a row version is a waste of time if the row is modified soon thereafter (causing it to acquire a new XID). So the setting should be large enough that rows are not frozen until they are unlikely to change any more.

To track the age of the oldest unfrozen XIDs in a database,`VACUUM`stores XID statistics in the system tables`pg_class`and`pg_database`. In particular, the`relfrozenxid`column of a table's`pg_class`row contains the freeze cutoff XID that was used by the last aggressive`VACUUM`for that table. All rows inserted by transactions with XIDs older than this cutoff XID are guaranteed to have been frozen. Similarly, the`datfrozenxid`column of a database's`pg_database`row is a lower bound on the unfrozen XIDs appearing in that database — it is just the minimum of the per-table`relfrozenxid`values within the database. A convenient way to examine this information is to execute queries such as:

```
SELECT c.oid::regclass as table_name,
       greatest(age(c.relfrozenxid),age(t.relfrozenxid)) as age
FROM pg_class c
LEFT JOIN pg_class t ON c.reltoastrelid = t.oid
WHERE c.relkind IN ('r', 'm');

SELECT datname, age(datfrozenxid) FROM pg_database;
```

The`age`column measures the number of transactions from the cutoff XID to the current transaction's XID.

`VACUUM`normally only scans pages that have been modified since the last vacuum, but`relfrozenxid`can only be advanced when every page of the table that might contain unfrozen XIDs is scanned. This happens when`relfrozenxid`is more than`vacuum_freeze_table_age`transactions old, when`VACUUM`'s`FREEZE`option is used, or when all pages that are not already all-frozen happen to require vacuuming to remove dead row versions. When`VACUUM`scans every page in the table that is not already all-frozen, it should set`age(relfrozenxid)`to a value just a little more than the`vacuum_freeze_min_age`setting that was used (more by the number of transactions started since the`VACUUM`started). If no`relfrozenxid`-advancing`VACUUM`is issued on the table until`autovacuum_freeze_max_age`is reached, an autovacuum will soon be forced for the table.

If for some reason autovacuum fails to clear old XIDs from a table, the system will begin to emit warning messages like this when the database's oldest XIDs reach forty million transactions from the wraparound point:

```
WARNING: database "mydb" must be vacuumed within 39985967 transactions
HINT: To avoid a database shutdown, execute a database-wide VACUUM in that database.
```

(A manual`VACUUM`should fix the problem, as suggested by the hint; but note that the`VACUUM`must be performed by a superuser, else it will fail to process system catalogs and thus not be able to advance the database's`datfrozenxid`.) If these warnings are ignored, the system will shut down and refuse to start any new transactions once there are fewer than three million transactions left until wraparound:

```
ERROR: database is not accepting commands to avoid wraparound data loss in database "mydb"
HINT: Stop the postmaster and vacuum that database in single-user mode.
```

The three-million-transaction safety margin exists to let the administrator recover without data loss, by manually executing the required`VACUUM`commands. However, since the system will not execute commands once it has gone into the safety shutdown mode, the only way to do this is to stop the server and start the server in single-user mode to execute`VACUUM`. The shutdown mode is not enforced in single-user mode. See the[postgres](app-postgres.html)reference page for details about using single-user mode.

#### 25.1.5.1. Multixacts and Wraparound

[](<>)[](<>)

*Multixact IDs*are used to support row locking by multiple transactions. Since there is only limited space in a tuple header to store lock information, that information is encoded as a “multiple transaction ID”, or multixact ID for short, whenever there is more than one transaction concurrently locking a row. Information about which transaction IDs are included in any particular multixact ID is stored separately in the`pg_multixact`subdirectory, and only the multixact ID appears in the`xmax`field in the tuple header. Like transaction IDs, multixact IDs are implemented as a 32-bit counter and corresponding storage, all of which requires careful aging management, storage cleanup, and wraparound handling. There is a separate storage area which holds the list of members in each multixact, which also uses a 32-bit counter and which must also be managed.
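Multixact age can be monitored from the catalogs much like XID age above; a minimal sketch:

```
SELECT datname, mxid_age(datminmxid) AS multixact_age
FROM pg_database
ORDER BY multixact_age DESC;
```

Here`datminmxid`is the database-wide multixact analog of`datfrozenxid`.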
+ +Whenever`VACUUM`scans any part of a table, it will replace any multixact ID it encounters which is older than[vacuum_multixact_freeze_min_age](runtime-config-client.html#GUC-VACUUM-MULTIXACT-FREEZE-MIN-AGE)by a different value, which can be the zero value, a single transaction ID, or a newer multixact ID. For each table,`pg_class`.`relminmxid`stores the oldest possible multixact ID still appearing in any tuple of that table. If this value is older than[vacuum_multixact_freeze_table_age](runtime-config-client.html#GUC-VACUUM-MULTIXACT-FREEZE-TABLE-AGE), an aggressive vacuum is forced. As discussed in the previous section, an aggressive vacuum means that only those pages which are known to be all-frozen will be skipped.`mxid_age()`can be used on`pg_class`.`relminmxid`to find its age. + +Aggressive`VACUUM`scans, regardless of what causes them, enable advancing the value for that table. Eventually, as all tables in all databases are scanned and their oldest multixact values are advanced, on-disk storage for older multixacts can be removed. + +As a safety device, an aggressive vacuum scan will occur for any table whose multixact-age (see[Section 25.1.5.1](routine-vacuuming.html#VACUUM-FOR-MULTIXACT-WRAPAROUND)) is greater than[autovacuum_multixact_freeze_max_age](runtime-config-autovacuum.html#GUC-AUTOVACUUM-MULTIXACT-FREEZE-MAX-AGE). Also, if the storage occupied by multixact members exceeds 2GB, aggressive vacuum scans will occur more often for all tables, starting with those that have the oldest multixact-age. Both of these kinds of aggressive scans will occur even if autovacuum is nominally disabled. + +### 25.1.6. The Autovacuum Daemon + +[](<>) + +PostgreSQL has an optional but highly recommended feature called *autovacuum*, whose purpose is to automate the execution of`VACUUM`and`ANALYZE`commands. When enabled, autovacuum checks for tables that have had a large number of inserted, updated or deleted tuples. These checks use the statistics collection facility; therefore, autovacuum cannot be used unless[track_counts](runtime-config-statistics.html#GUC-TRACK-COUNTS)is set to`true`. In the default configuration, autovacuuming is enabled and the related configuration parameters are appropriately set. + +The “autovacuum daemon” actually consists of multiple processes. There is a persistent daemon process, called the *autovacuum launcher*, which is in charge of starting *autovacuum worker* processes for all databases. The launcher will distribute the work across time, attempting to start one worker within each database every[autovacuum_naptime](runtime-config-autovacuum.html#GUC-AUTOVACUUM-NAPTIME)seconds. (Therefore, if the installation has *`N`* databases, a new worker will be launched every`autovacuum_naptime`/*`N`* seconds.) A maximum of[autovacuum_max_workers](runtime-config-autovacuum.html#GUC-AUTOVACUUM-MAX-WORKERS)worker processes are allowed to run at the same time. If there are more than`autovacuum_max_workers`databases to be processed, the next database will be processed as soon as the first worker finishes. Each worker process will check each table within its database and execute`VACUUM`and/or`ANALYZE`as needed.[log_autovacuum_min_duration](runtime-config-logging.html#GUC-LOG-AUTOVACUUM-MIN-DURATION)can be set to monitor autovacuum workers' activity. + +If several large tables all become eligible for vacuuming in a short amount of time, all autovacuum workers might become occupied with vacuuming those tables for a long period. This would result in other tables and databases not being vacuumed until a worker becomes available. There is no limit on how many workers might be in a single database, but workers do try to avoid repeating work that has already been done by other workers. Note that the number of running workers does not count towards the[max_connections](runtime-config-connection.html#GUC-MAX-CONNECTIONS)or[superuser_reserved_connections](runtime-config-connection.html#GUC-SUPERUSER-RESERVED-CONNECTIONS)limits. + +Tables whose`relfrozenxid`value is more than[autovacuum_freeze_max_age](runtime-config-autovacuum.html#GUC-AUTOVACUUM-FREEZE-MAX-AGE)transactions old are always vacuumed (this also applies to those tables whose freeze max age has been modified via storage parameters; see below). Otherwise, if the number of tuples obsoleted since the last`VACUUM`exceeds the “vacuum threshold”, the table is vacuumed. The vacuum threshold is defined as: + +``` +vacuum threshold = vacuum base threshold + vacuum scale factor * number of tuples +``` + +where the vacuum base threshold is[autovacuum_vacuum_threshold](runtime-config-autovacuum.html#GUC-AUTOVACUUM-VACUUM-THRESHOLD), the vacuum scale factor is[autovacuum_vacuum_scale_factor](runtime-config-autovacuum.html#GUC-AUTOVACUUM-VACUUM-SCALE-FACTOR), and the number of tuples is`pg_class`.`reltuples`.
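+ +For example, with the default base threshold of 50 and the default scale factor of 0.2, a table whose`reltuples`is 100,000 becomes due for vacuuming once roughly this many tuples have been updated or deleted: + +``` +vacuum threshold = 50 + 0.2 * 100000 = 20050 +```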
+ +The table is also vacuumed if the number of tuples inserted since the last vacuum has exceeded the defined insert threshold, which is defined as: + +``` +vacuum insert threshold = vacuum base insert threshold + vacuum insert scale factor * number of tuples +``` + +where the vacuum insert base threshold is[autovacuum_vacuum_insert_threshold](runtime-config-autovacuum.html#GUC-AUTOVACUUM-VACUUM-INSERT-THRESHOLD), and the vacuum insert scale factor is[autovacuum_vacuum_insert_scale_factor](runtime-config-autovacuum.html#GUC-AUTOVACUUM-VACUUM-INSERT-SCALE-FACTOR). Such vacuums may allow portions of the table to be marked as *all visible* and also allow tuples to be frozen, which can reduce the work required in subsequent vacuums. For tables which receive`INSERT`operations but no or almost no`UPDATE`/`DELETE`operations, it may be beneficial to lower the table's[autovacuum_freeze_min_age](sql-createtable.html#RELOPTION-AUTOVACUUM-FREEZE-MIN-AGE), as this may allow tuples to be frozen by earlier vacuums. The number of obsolete tuples and the number of inserted tuples are obtained from the statistics collector; it is a semi-accurate count updated by each`UPDATE`,`DELETE`and`INSERT`operation. (It is only semi-accurate because some information might be lost under heavy load.) If the`relfrozenxid`value of the table is more than`vacuum_freeze_table_age`transactions old, an aggressive vacuum is performed to freeze old tuples and advance`relfrozenxid`; otherwise, only pages that have been modified since the last vacuum are scanned. + +For analyze, a similar condition is used: the threshold, defined as: + +``` +analyze threshold = analyze base threshold + analyze scale factor * number of tuples +``` + +is compared to the total number of tuples inserted, updated, or deleted since the last`ANALYZE`. + +Temporary tables cannot be accessed by autovacuum. Therefore, appropriate vacuum and analyze operations should be performed via session SQL commands. + +The default thresholds and scale factors are taken from`postgresql.conf`, but it is possible to override them (and many other autovacuum control parameters) on a per-table basis; see[Storage Parameters](sql-createtable.html#SQL-CREATETABLE-STORAGE-PARAMETERS)for more information. If a setting has been changed via a table's storage parameters, that value is used when processing that table; otherwise the global settings are used. See[Section 20.10](runtime-config-autovacuum.html)for more details on the global settings. + +When multiple workers are running, the autovacuum cost delay parameters (see[Section 20.4.4](runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-VACUUM-COST)) are “balanced” among all the running workers, so that the total I/O impact on the system is the same regardless of the number of workers actually running. However, any workers processing tables whose per-table`autovacuum_vacuum_cost_delay`or`autovacuum_vacuum_cost_limit`storage parameters have been set are not considered in the balancing algorithm. + +Autovacuum workers generally don't block other commands. If a process attempts to acquire a lock that conflicts with the`SHARE UPDATE EXCLUSIVE`lock held by autovacuum, lock acquisition will interrupt the autovacuum. For conflicting lock modes, see[Table 13.2](explicit-locking.html#TABLE-LOCK-COMPATIBILITY). However, if the autovacuum is running to prevent transaction ID wraparound (i.e., the autovacuum query name in the`pg_stat_activity`view ends with`(to prevent wraparound)`), the autovacuum is not automatically interrupted. + +### Warning + +Regularly running commands that acquire locks conflicting with a`SHARE UPDATE EXCLUSIVE`lock (e.g., ANALYZE) can effectively prevent autovacuums from ever completing. diff --git a/docs/X/row-estimation-examples.md b/docs/en/row-estimation-examples.md similarity index 100% rename from docs/X/row-estimation-examples.md rename to docs/en/row-estimation-examples.md diff --git a/docs/en/row-estimation-examples.zh.md b/docs/en/row-estimation-examples.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..5f4036509bffbc4fe9089dc55ab97e650d51f90d --- /dev/null +++ b/docs/en/row-estimation-examples.zh.md @@ -0,0 +1,13 @@ +## 72.1. Row Estimation Examples + +[](<>) + +The examples shown below use tables in the PostgreSQL regression test database. The outputs shown are taken from version 8.3. The behavior of earlier (or later) versions might vary. Note also that since`ANALYZE`uses random sampling while producing statistics, the results will change slightly after any new`ANALYZE`. + +Let's start with a very simple query: + +``` +EXPLAIN SELECT * FROM tenk1; + + QUERY PLAN +``` diff --git a/docs/X/rowtypes.md b/docs/en/rowtypes.md similarity index 100% rename from docs/X/rowtypes.md rename to docs/en/rowtypes.md diff --git a/docs/en/rowtypes.zh.md b/docs/en/rowtypes.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..327bc9b5023ebbf6a80a7608a48bf08368a4ca1a --- /dev/null +++ b/docs/en/rowtypes.zh.md @@ -0,0 +1,271 @@ +## 8.16. Composite Types + +[8.16.1. Declaration of Composite Types](rowtypes.html#ROWTYPES-DECLARING) + +[8.16.2. Constructing Composite Values](rowtypes.html#id-1.5.7.24.6) + +[8.16.3. Accessing Composite Types](rowtypes.html#ROWTYPES-ACCESSING) + +[8.16.4. Modifying Composite Types](rowtypes.html#id-1.5.7.24.8) + +[8.16.5. 
Using Composite Types in Queries](rowtypes.html#ROWTYPES-USAGE) + +[8.16.6. Composite Type Input and Output Syntax](rowtypes.html#ROWTYPES-IO-SYNTAX) + +[](<>)[](<>) + +A*composite type*represents the structure of a row or record; it is essentially just a list of field names and their data types. PostgreSQL allows composite types to be used in many of the same ways that simple types can be used. For example, a column of a table can be declared to be of a composite type. + +### 8.16.1. Declaration of Composite Types + +Here are two simple examples of defining composite types: + +``` +CREATE TYPE complex AS ( + r double precision, + i double precision +); + +CREATE TYPE inventory_item AS ( + name text, + supplier_id integer, + price numeric +); +``` + +The syntax is comparable to`CREATE TABLE`, except that only field names and types can be specified; no constraints (such as`NOT NULL`) can presently be included. Note that the`AS`keyword is essential; without it, the system will think a different kind of`CREATE TYPE`command is meant, and you will get odd syntax errors. + +Having defined the types, we can use them to create tables: + +``` +CREATE TABLE on_hand ( + item inventory_item, + count integer +); + +INSERT INTO on_hand VALUES (ROW('fuzzy dice', 42, 1.99), 1000); +``` + +or functions: + +``` +CREATE FUNCTION price_extension(inventory_item, integer) RETURNS numeric +AS 'SELECT $1.price * $2' LANGUAGE SQL; + +SELECT price_extension(item, 10) FROM on_hand; +``` + +Whenever you create a table, a composite type is also automatically created, with the same name as the table, to represent the table's row type. For example, had we said: + +``` +CREATE TABLE inventory_item ( + name text, + supplier_id integer REFERENCES suppliers, + price numeric CHECK (price > 0) +); +``` + +then the same`inventory_item`composite type shown above would come into being as a byproduct, and could be used just as above. Note however an important restriction of the current implementation: since no constraints are associated with a composite type, the constraints shown in the table definition*do not apply*to values of the composite type outside the table. (To work around this, create a domain over the composite type, and apply the desired constraints as`CHECK`constraints of the domain.) + +### 8.16.2. Constructing Composite Values + +[](<>) + +To write a composite value as a literal constant, enclose the field values within parentheses and separate them by commas. You can put double quotes around any field value, and must do so if it contains commas or parentheses. (More details appear[below](rowtypes.html#ROWTYPES-IO-SYNTAX).) Thus, the general format of a composite constant is the following: + +``` +'( val1 , val2 , ... )' +``` + +An example is: + +``` +'("fuzzy dice",42,1.99)' +``` + +which would be a valid value of the`inventory_item`type defined above. To make a field be NULL, write no characters at all in its position in the list. For example, this constant specifies a NULL third field: + +``` +'("fuzzy dice",42,)' +``` + +If you want an empty string rather than NULL, write double quotes: + +``` +'("",42,)' +``` + +Here the first field is a non-NULL empty string, the third is NULL. + +(These constants are actually only a special case of the generic type constants discussed in[Section 4.1.2.7](sql-syntax-lexical.html#SQL-SYNTAX-CONSTANTS-GENERIC). The constant is initially treated as a string and passed to the composite-type input conversion routine. 
An explicit type specification might be necessary to tell which type to convert the constant to.) + +The`ROW`expression syntax can also be used to construct composite values. In most cases this is considerably simpler to use than the string-literal syntax since you don't have to worry about multiple layers of quoting. We already used this method above: + +``` +ROW('fuzzy dice', 42, 1.99) +ROW('', 42, NULL) +``` + +The ROW keyword is actually optional as long as you have more than one field in the expression, so these can be simplified to: + +``` +('fuzzy dice', 42, 1.99) +('', 42, NULL) +``` + +The`ROW`expression syntax is discussed in more detail in[Section 4.2.13](sql-expressions.html#SQL-SYNTAX-ROW-CONSTRUCTORS). + +### 8.16.3. Accessing Composite Types + +To access a field of a composite column, one writes a dot and the field name, much like selecting a field from a table name. In fact, it's so much like selecting from a table name that you often have to use parentheses to keep from confusing the parser. For example, you might try to select some subfields from our`on_hand`example table with something like: + +``` +SELECT item.name FROM on_hand WHERE item.price > 9.99; +``` + +This will not work since the name`item`is taken to be a table name, not a column name of`on_hand`, per SQL syntax rules. You must write it like this: + +``` +SELECT (item).name FROM on_hand WHERE (item).price > 9.99; +``` + +or if you need to use the table name as well (for instance in a multitable query), like this: + +``` +SELECT (on_hand.item).name FROM on_hand WHERE (on_hand.item).price > 9.99; +``` + +Now the parenthesized object is correctly interpreted as a reference to the`item`column, and then the subfield can be selected from it. + +Similar syntactic issues apply whenever you select a field from a composite value. For instance, to select just one field from the result of a function that returns a composite value, you'd need to write something like: + +``` +SELECT (my_func(...)).field FROM ... +``` + +Without the extra parentheses, this will generate a syntax error. + +The special field name`*`means “all fields”, as further explained in[Section 8.16.5](rowtypes.html#ROWTYPES-USAGE). + +### 8.16.4. Modifying Composite Types + +Here are some examples of the proper syntax for inserting and updating composite columns. First, inserting or updating a whole column: + +``` +INSERT INTO mytab (complex_col) VALUES((1.1,2.2)); + +UPDATE mytab SET complex_col = ROW(1.1,2.2) WHERE ...; +``` + +The first example omits`ROW`, the second uses it; we could have done it either way. + +We can update an individual subfield of a composite column: + +``` +UPDATE mytab SET complex_col.r = (complex_col).r + 1 WHERE ...; +``` + +Notice here that we don't need to (and indeed cannot) put parentheses around the column name appearing just after`SET`, but we do need parentheses when referencing the same column in the expression to the right of the equal sign. + +And we can specify subfields as targets for`INSERT`, too: + +``` +INSERT INTO mytab (complex_col.r, complex_col.i) VALUES(1.1, 2.2); +``` + +Had we not supplied values for all the subfields of the column, the remaining subfields would have been filled with null values. + +### 8.16.5. Using Composite Types in Queries + +There are various special syntax rules and behaviors associated with composite types in queries. These rules provide useful shortcuts, but can be confusing if you don't know the logic behind them. 
+ +In PostgreSQL, a reference to a table name (or alias) in a query is effectively a reference to the composite value of the table's current row. For example, if we had a table`inventory_item`as shown[above](rowtypes.html#ROWTYPES-DECLARING), we could write: + +``` +SELECT c FROM inventory_item c; +``` + +This query produces a single composite-valued column, so we might get output like: + +``` +           c +------------------------ + ("fuzzy dice",42,1.99) +(1 row) +``` + +PostgreSQL will apply this column-expansion behavior to any composite-valued expression: for example, if`myfunc()`is a function returning a composite type with columns`a`,`b`, and`c`, then`SELECT (myfunc(x)).* FROM some_table`(the first form) gives the same result as writing each column out explicitly,`SELECT (myfunc(x)).a, (myfunc(x)).b, (myfunc(x)).c FROM some_table`(the second form). + +### Tip + +PostgreSQL handles column expansion by actually transforming the first form into the second. So, in this example, `myfunc()` would get invoked three times per row with either syntax. If it's an expensive function you may wish to avoid that, which you can do with a query like: + +``` +SELECT m.* FROM some_table, LATERAL myfunc(x) AS m; +``` + +Placing the function in a `LATERAL` `FROM` item keeps it from being invoked more than once per row. `m.*` is still expanded into `m.a, m.b, m.c`, but now those variables are just references to the output of the `FROM` item. (The `LATERAL` keyword is optional here, but we show it to clarify that the function is getting `x` from `some_table`.) + +The *`composite_value`*`.*` syntax results in column expansion of this kind when it appears at the top level of a [`SELECT` output list](queries-select-lists.html), a [`RETURNING` list](dml-returning.html) in `INSERT`/`UPDATE`/`DELETE`, a [`VALUES` clause](queries-values.html), or a [row constructor](sql-expressions.html#SQL-SYNTAX-ROW-CONSTRUCTORS). In all other contexts (including when nested inside one of those constructs), attaching `.*` to a composite value does not change the value, since it means “all columns” and so the same composite value is produced again. For example, if `somefunc()` accepts a composite-valued argument, these queries are the same: + +``` +SELECT somefunc(c.*) FROM inventory_item c; +SELECT somefunc(c) FROM inventory_item c; +``` + +In both cases, the current row of `inventory_item` is passed to the function as a single composite-valued argument. Even though `.*` does nothing in such cases, using it is good style, since it makes clear that a composite value is intended. In particular, the parser will consider `c` in `c.*` to refer to a table name or alias, not to a column name, so that there is no ambiguity; whereas without `.*`, it is not clear whether `c` means a table name or a column name, and in fact the column-name interpretation will be preferred if there is a column named `c`. + +Another example demonstrating these concepts is that all these queries mean the same thing: + +``` +SELECT * FROM inventory_item c ORDER BY c; +SELECT * FROM inventory_item c ORDER BY c.*; +SELECT * FROM inventory_item c ORDER BY ROW(c.*); +``` + +All of these `ORDER BY` clauses specify the row's composite value, resulting in sorting the rows according to the rules described in [Section 9.24.6](functions-comparisons.html#COMPOSITE-TYPE-COMPARISON). However, if `inventory_item` contained a column named `c`, the first case would be different from the others, as it would mean to sort by that column only. Given the column names previously shown, these queries are also equivalent to those above: + +``` +SELECT * FROM inventory_item c ORDER BY ROW(c.name, c.supplier_id, c.price); +SELECT * FROM inventory_item c ORDER BY (c.name, c.supplier_id, c.price); +``` + +(The last case uses a row constructor with the key word `ROW` omitted.) + +Another special syntactical behavior associated with composite values is that we can use *functional notation* for extracting a field of a composite value.
The simple way to explain this is that the notations`field(table)`and`table.field`are interchangeable. For example, these queries are equivalent: + +``` +SELECT c.name FROM inventory_item c WHERE c.price > 1000; +SELECT name(c) FROM inventory_item c WHERE price(c) > 1000; +``` + +Moreover, if we have a function that accepts a single argument of a composite type, we can call it with either notation. These queries are all equivalent: + +``` +SELECT somefunc(c) FROM inventory_item c; +SELECT somefunc(c.*) FROM inventory_item c; +SELECT c.somefunc FROM inventory_item c; +``` + +This equivalence between functional notation and field notation makes it possible to use functions on composite types to implement “computed fields”.[](<>)[](<>)An application using the last query above wouldn't need to be directly aware that`somefunc`isn't a real column of the table. + +### Tip + +Because of this behavior, it's unwise to give a function that takes a single composite-type argument the same name as any of the fields of that composite type. If there is ambiguity, the field-name interpretation will be chosen if field-name syntax is used, while the function will be chosen if function-call syntax is used. However, PostgreSQL versions before 11 always chose the field-name interpretation, unless the syntax of the call required it to be a function call. One way to force the function interpretation in older versions is to schema-qualify the function name, that is, write`schema.func(compositevalue)`. + +### 8.16.6. Composite Type Input and Output Syntax + +The external text representation of a composite value consists of items that are interpreted according to the I/O conversion rules for the individual field types, plus decoration that indicates the composite structure. The decoration consists of parentheses (`(` and `)`) around the whole value, plus commas (`,`) between adjacent items. Whitespace outside the parentheses is ignored, but within the parentheses it is considered part of the field value, and might or might not be significant depending on the input conversion rules for the field data type. For example, in: + +``` +'( 42)' +``` + +the whitespace will be ignored if the field type is integer, but not if it is text. + +As shown previously, when writing a composite value you can write double quotes around any individual field value. You *must* do so if the field value would otherwise confuse the composite-value parser. In particular, fields containing parentheses, commas, double quotes, or backslashes must be double-quoted. To put a double quote or backslash in a quoted composite field value, precede it with a backslash. (Also, a pair of double quotes within a double-quoted field value is taken to represent a double quote character, analogously to the rules for single quotes in SQL literal strings.) Alternatively, you can avoid quoting and use backslash-escaping to protect all data characters that would otherwise be taken as composite syntax. + +A completely empty field value (no characters at all between the commas or parentheses) represents a NULL. To write a value that is an empty string rather than NULL, write `""`. + +The composite output routine will put double quotes around field values if they are empty strings or contain parentheses, commas, double quotes, backslashes, or white space. (Doing so for white space is not essential, but aids legibility.) Double quotes and backslashes embedded in field values will be doubled.
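+ +As a quick illustration (using the`inventory_item`type defined above; the exact column header and spacing may vary), a field containing a space is quoted on output while the plain fields are not: + +``` +SELECT ROW('fuzzy dice', 42, 1.99)::inventory_item; + +          row +------------------------ + ("fuzzy dice",42,1.99) +```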
+ +### Note + +Remember that what you write in an SQL command will first be interpreted as a string literal, and then as a composite. This doubles the number of backslashes you need (assuming escape string syntax is used). For example, to insert a `text` field containing a double quote and a backslash in a composite value, you'd need to write: + +``` +INSERT ... VALUES ('("\\"\\\\")'); +``` + +The string-literal processor removes one level of backslashes, so that what arrives at the composite-value parser looks like `("\"\\")`. In turn, the string fed to the `text` data type's input routine becomes `"\`. (If we were working with a data type whose input routine also treated backslashes specially, `bytea` for example, we might need as many as eight backslashes in the command to get one backslash into the stored composite field.) Dollar quoting (see [Section 4.1.2.4](sql-syntax-lexical.html#SQL-SYNTAX-DOLLAR-QUOTING)) can be used to avoid the need to double backslashes. + +### Tip + +The `ROW` constructor syntax is usually easier to work with than the composite-literal syntax when writing composite values in SQL commands. In `ROW`, individual field values are written the same way they would be written when not members of a composite. + +diff --git a/docs/X/rule-system.md b/docs/en/rule-system.md similarity index 100% rename from docs/X/rule-system.md rename to docs/en/rule-system.md diff --git a/docs/en/rule-system.zh.md b/docs/en/rule-system.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..c1e6f90b78c8df0e7fe4419210369ee0cd7d694a --- /dev/null +++ b/docs/en/rule-system.zh.md @@ -0,0 +1,9 @@ +## 51.4. The PostgreSQL Rule System + +PostgreSQL supports a powerful*rule system*for the specification of*views*and ambiguous*view updates*. Originally the PostgreSQL rule system consisted of two implementations: + +- The first one worked using*row level*processing and was implemented deep in the*executor*. The rule system was called whenever an individual row had been accessed. This implementation was removed in 1995 when the last official release of the Berkeley Postgres project was transformed into Postgres95. + +- The second implementation of the rule system is a technique called*query rewriting*. The*rewrite system*is a module that exists between the*parser stage*and the*planner/optimizer*. This technique is still implemented. + + The query rewriter is discussed in some detail in[Chapter 41](rules.html), so there is no need to cover it here. We will only point out that both the input and the output of the rewriter are query trees, that is, there is no change in the representation or level of semantic detail in the trees. Rewriting can be thought of as a form of macro expansion. diff --git a/docs/X/rules-materializedviews.md b/docs/en/rules-materializedviews.md similarity index 100% rename from docs/X/rules-materializedviews.md rename to docs/en/rules-materializedviews.md diff --git a/docs/en/rules-materializedviews.zh.md b/docs/en/rules-materializedviews.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..cc82f09c806950a9f5b1883669cd6bd10b6fceb7 --- /dev/null +++ b/docs/en/rules-materializedviews.zh.md @@ -0,0 +1,83 @@ +## 41.3. Materialized Views + +[](<>)[](<>)[](<>) + +Materialized views in PostgreSQL use the rule system like views do, but persist the results in a table-like form. 
The main differences between: + +``` +CREATE MATERIALIZED VIEW mymatview AS SELECT * FROM mytab; +``` + +and: + +``` +CREATE TABLE mymatview AS SELECT * FROM mytab; +``` + +are that the materialized view cannot subsequently be directly updated and that the query used to create the materialized view is stored in exactly the same way that a view's query is stored, so that fresh data can be generated for the materialized view with: + +``` +REFRESH MATERIALIZED VIEW mymatview; +``` + +The information about a materialized view in the PostgreSQL system catalogs is exactly the same as it is for a table or view. So for the parser, a materialized view is a relation, just like a table or a view. When a materialized view is referenced in a query, the data is returned directly from the materialized view, like from a table; the rule is only used for populating the materialized view. + +While access to the data stored in a materialized view is often much faster than accessing the underlying tables directly or through a view, the data is not always current; yet sometimes current data is not needed. Consider a table which records sales: + +``` +CREATE TABLE invoice ( + invoice_no integer PRIMARY KEY, + seller_no integer, -- ID of salesperson + invoice_date date, -- date of sale + invoice_amt numeric(13,2) -- amount of sale +); +``` + +If people want to be able to quickly graph historical sales data, they might want to summarize, and they may not care about the incomplete data for the current date: + +``` +CREATE MATERIALIZED VIEW sales_summary AS + SELECT + seller_no, + invoice_date, + sum(invoice_amt)::numeric(13,2) as sales_amt + FROM invoice + WHERE invoice_date < CURRENT_DATE + GROUP BY + seller_no, + invoice_date; + +CREATE UNIQUE INDEX sales_summary_seller + ON sales_summary (seller_no, invoice_date); +``` + +This materialized view might be useful for displaying a graph in the dashboard created for salespeople. A job could be scheduled to update the statistics each night using this SQL statement: + +``` +REFRESH MATERIALIZED VIEW sales_summary; +``` + +Another use for a materialized view is to allow faster access to data brought across from a remote system through a foreign data wrapper. A simple example using`file_fdw`is below, with timings, but since this is using cache on the local system the performance difference compared to access to a remote system would usually be greater than shown here. Notice we are also exploiting the ability to put an index on the materialized view, whereas`file_fdw`does not support indexes; this advantage might not apply for other sorts of foreign data access. + +Setup: + +``` +CREATE EXTENSION file_fdw; +CREATE SERVER local_file FOREIGN DATA WRAPPER file_fdw; +CREATE FOREIGN TABLE words (word text NOT NULL) + SERVER local_file + OPTIONS (filename '/usr/share/dict/words'); +CREATE MATERIALIZED VIEW wrd AS SELECT * FROM words; +CREATE UNIQUE INDEX wrd_word ON wrd (word); +CREATE EXTENSION pg_trgm; +CREATE INDEX wrd_trgm ON wrd USING gist (word gist_trgm_ops); +VACUUM ANALYZE wrd; +``` + +Now let's spell-check a word. 
Using`file_fdw`directly: + +``` +SELECT count(*) FROM words WHERE word = 'caterpiler'; + + count +``` diff --git a/docs/X/rules-privileges.md b/docs/en/rules-privileges.md similarity index 100% rename from docs/X/rules-privileges.md rename to docs/en/rules-privileges.md diff --git a/docs/en/rules-privileges.zh.md b/docs/en/rules-privileges.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..e8c0f1261d4f28fff96e3be1e7bdfd3e4ace7382 --- /dev/null +++ b/docs/en/rules-privileges.zh.md @@ -0,0 +1,60 @@ +## 41.5. Rules and Privileges + +[](<>)[](<>) + +Due to rewriting of queries by the PostgreSQL rule system, other tables/views than those used in the original query get accessed. When update rules are used, this can include write access to tables. + +Rewrite rules don't have a separate owner. The owner of a relation (table or view) is automatically the owner of the rewrite rules that are defined for it. The PostgreSQL rule system changes the behavior of the default access control system. Relations that are used due to rules get checked against the privileges of the rule owner, not the user invoking the rule. This means that users only need the required privileges for the tables/views that are explicitly named in their queries. + +For example: A user has a list of phone numbers where some of them are private, the others are of interest for the assistant of the office. The user can construct the following: + +``` +CREATE TABLE phone_data (person text, phone text, private boolean); +CREATE VIEW phone_number AS + SELECT person, CASE WHEN NOT private THEN phone END AS phone + FROM phone_data; +GRANT SELECT ON phone_number TO assistant; +``` + +Nobody except that user (and the database superusers) can access the`phone_data`table. But because of the`GRANT`, the assistant can run a`SELECT`on the`phone_number`view. The rule system will rewrite the`SELECT`from`phone_number`into a`SELECT`from`phone_data`. Since the user is the owner of`phone_number`and therefore the owner of the rule, the read access to`phone_data`is now checked against the user's privileges and the query is permitted. The check for accessing`phone_number`is also performed, but this is done against the invoking user, so nobody but the user and the assistant can use it. + +The privileges are checked rule by rule. So the assistant is for now the only one who can see the public phone numbers. But the assistant can set up another view and grant access to that to the public. Then, anybody can see the`phone_number`data through the assistant's view. What the assistant cannot do is create a view that directly accesses`phone_data`. (Actually the assistant can, but it will not work since every access will be denied during the permission checks.) And as soon as the user notices that the assistant opened their`phone_number`view, the user can revoke the assistant's access. Immediately, any access to the assistant's view would fail. + +One might think that this rule-by-rule checking is a security hole, but in fact it isn't. But if it did not work this way, the assistant could set up a table with the same columns as`phone_number`and copy the data to there once per day. Then it's the assistant's own data and the assistant can grant access to everyone they want. A`GRANT`command means, “I trust you”. If someone you trust does the thing above, it's time to think it over and then use`REVOKE`. + +Note that while views can be used to hide the contents of certain columns using the technique shown above, they cannot be used to reliably conceal the data in unseen rows unless the`security_barrier`flag has been set. For example, the following view is insecure: + +``` +CREATE VIEW phone_number AS + SELECT person, phone FROM phone_data WHERE phone NOT LIKE '412%'; +``` + +This view might seem secure, since the rule system will rewrite any`SELECT`from`phone_number`into a`SELECT`from`phone_data`and add the qualification that only entries where`phone`does not begin with 412 are wanted. But if the user can create their own functions, it is not difficult to convince the planner to execute the user-defined function prior to the`NOT LIKE`expression. For example: + +``` +CREATE FUNCTION tricky(text, text) RETURNS bool AS $$ +BEGIN + RAISE NOTICE '% => %', $1, $2; + RETURN true; +END; +$$ LANGUAGE plpgsql COST 0.0000000000000000000001; + +SELECT * FROM phone_number WHERE tricky(person, phone); +``` + +Every person and phone number in the`phone_data`table will be printed as a`NOTICE`, because the planner will choose to execute the inexpensive`tricky`function before the more expensive`NOT LIKE`. Even if the user is prevented from defining new functions, built-in functions can be used in similar attacks. (For example, most casting functions include their input values in the error messages they produce.) + +Similar considerations apply to update rules. In the examples of the previous section, the owner of the tables in the example database could grant the privileges`SELECT`,`INSERT`,`UPDATE`, and`DELETE`on the`shoelace`view to someone else, but only`SELECT`on`shoelace_log`. The rule action to write log entries will still be executed successfully, and that other user could see the log entries. 
But they could not create fake entries, nor could they manipulate or remove existing ones. In this case, there is no possibility of subverting the rules by convincing the planner to alter the order of operations, because the only rule which references`shoelace_log`is an unqualified`INSERT`. This might not be true in more complex scenarios. + +When it is necessary for a view to provide row-level security, the`security_barrier`attribute should be applied to the view. This prevents maliciously-chosen functions and operators from being passed values from rows until after the view has done its work. For example, if the view shown above had been created like this, it would be secure: + +``` +CREATE VIEW phone_number WITH (security_barrier) AS + SELECT person, phone FROM phone_data WHERE phone NOT LIKE '412%'; +``` + +Views created with the`security_barrier`option may perform far worse than views created without this option. In general, there is no way to avoid this: the fastest possible plan must be rejected if it may compromise security. For this reason, this option is not enabled by default. + +The query planner has more flexibility when dealing with functions that have no side effects. Such functions are referred to as`LEAKPROOF`, and include many simple, commonly used operators, such as many equality operators. The query planner can safely allow such functions to be evaluated at any point in the query execution process, since invoking them on rows invisible to the user will not leak any information about the unseen rows. Further, functions which do not take arguments or which are not passed any arguments from the security barrier view do not have to be marked as`LEAKPROOF`to be pushed down, as they never receive data from the view. In contrast, a function that might throw an error depending on the values received as arguments (such as one that throws an error in the event of overflow or division by zero) is not leak-proof, and could provide significant information about the unseen rows if applied before the security view's row filters. + +It is important to understand that even a view created with the`security_barrier`option is intended to be secure only in the limited sense that the contents of the invisible tuples will not be passed to possibly-insecure functions. The user may well have other means of making inferences about the unseen data; for example, they can see the query plan using`EXPLAIN`, or measure the run time of queries against the view. A malicious attacker might be able to infer something about the amount of unseen data, or even gain some information about the data distribution or most common values (since these things may affect the run time of the plan; or even, since they are also reflected in the optimizer statistics, the choice of plan). If these types of “covert channel” attacks are of concern, it is probably unwise to grant any access to the data at all. diff --git a/docs/X/rules-triggers.md b/docs/en/rules-triggers.md similarity index 100% rename from docs/X/rules-triggers.md rename to docs/en/rules-triggers.md diff --git a/docs/en/rules-triggers.zh.md b/docs/en/rules-triggers.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..022c025d321894be7d840ae84e7c0c082fb88e23 --- /dev/null +++ b/docs/en/rules-triggers.zh.md @@ -0,0 +1,123 @@ +## 41.7. Rules Versus Triggers + +[](<>)[](<>) + +Many things that can be done using triggers can also be implemented using the PostgreSQL rule system. 
Among the things that cannot be implemented by rules are some kinds of constraints, especially foreign keys. It is possible to place a qualified rule that rewrites a command to`NOTHING`if the value of a column does not appear in another table. But then the data is silently thrown away and that's not a good idea. If checks for valid values are required, and in the case of an invalid value an error message should be generated, it must be done by a trigger. + +In this chapter, we focused on using rules to update views. All of the update rule examples in this chapter can also be implemented using`INSTEAD OF`triggers on the views. Writing such triggers is often easier than writing rules, particularly if complex logic is required to perform the update. + +For the things that can be implemented by both, which is best depends on the usage of the database. A trigger is fired once for each affected row. A rule modifies the query or generates an additional query. So if many rows are affected in one statement, a rule issuing one extra command is likely to be faster than a trigger that is called for every single row and must re-determine what to do many times. However, the trigger approach is conceptually far simpler than the rule approach, and is easier for novices to get right. + +Here we show an example of how the choice of rules versus triggers plays out in one situation. There are two tables: + +``` +CREATE TABLE computer ( + hostname text, -- indexed + manufacturer text -- indexed +); + +CREATE TABLE software ( + software text, -- indexed + hostname text -- indexed +); +``` + +Both tables have many thousands of rows and the indexes on`hostname`are unique. The rule or trigger should implement a constraint that deletes rows from`software`that reference a deleted computer. The trigger would use this command: + +``` +DELETE FROM software WHERE hostname = $1; +``` + +Since the trigger is called for each individual row deleted from`computer`, it can prepare and save the plan for this command and pass the`hostname`value in the parameter. The rule would be written as: + +``` +CREATE RULE computer_del AS ON DELETE TO computer + DO DELETE FROM software WHERE hostname = OLD.hostname; +``` + +Now we look at different types of deletes. In the case of a: + +``` +DELETE FROM computer WHERE hostname = 'mypc.local.net'; +``` + +the table`computer`is scanned by index (fast), and the command issued by the trigger would also use an index scan (also fast). The extra command from the rule would be: + +``` +DELETE FROM software WHERE computer.hostname = 'mypc.local.net' + AND software.hostname = computer.hostname; +``` + +Since there are appropriate indexes set up, the planner will create a plan of + +``` +Nestloop + -> Index Scan using comp_hostidx on computer + -> Index Scan using soft_hostidx on software +``` + +So there would be not that much difference in speed between the trigger and the rule implementation. 
+ +With the next delete we want to get rid of all the 2000 computers where the`hostname`starts with`old`. There are two possible commands to do that. One is: + +``` +DELETE FROM computer WHERE hostname >= 'old' + AND hostname < 'ole' +``` + +The command added by the rule will be: + +``` +DELETE FROM software WHERE computer.hostname >= 'old' AND computer.hostname < 'ole' + AND software.hostname = computer.hostname; +``` + +with the plan + +``` +Hash Join + -> Seq Scan on software + -> Hash + -> Index Scan using comp_hostidx on computer +``` + +The other possible command is: + +``` +DELETE FROM computer WHERE hostname ~ '^old'; +``` + +which results in the following executing plan for the command added by the rule: + +``` +Nestloop + -> Index Scan using comp_hostidx on computer + -> Index Scan using soft_hostidx on software +``` + +This shows that the planner does not realize that the qualification for`hostname`in`computer`could also be used for an index scan on`software`when there are multiple qualification expressions combined with`AND`, which is what it does in the regular-expression version of the command. The trigger will get invoked once for each of the 2000 old computers that have to be deleted, and that will result in one index scan over`computer`and 2000 index scans over`software`. The rule implementation will do it with two commands that use indexes. And it depends on the overall size of the table`software`whether the rule will still be faster in the sequential scan situation. 2000 command executions from the trigger over the SPI manager take some time, even if all the index blocks will soon be in the cache. + +The last command we look at is: + +``` +DELETE FROM computer WHERE manufacturer = 'bim'; +``` + +Again this could result in many rows to be deleted from`computer`. So the trigger will again run many commands through the executor. The command generated by the rule will be: + +``` +DELETE FROM software WHERE computer.manufacturer = 'bim' + AND software.hostname = computer.hostname; +``` + +The plan for that command will again be the nested loop over two index scans, only using a different index on`computer`: + +``` +Nestloop + -> Index Scan using comp_manufidx on computer + -> Index Scan using soft_hostidx on software +``` + +In any of these cases, the extra commands from the rule system will be more or less independent of the number of affected rows in a command. + +The summary is, rules will only be significantly slower than triggers if their actions result in large and badly qualified joins, a situation where the planner fails. diff --git a/docs/X/rules-update.md b/docs/en/rules-update.md similarity index 100% rename from docs/X/rules-update.md rename to docs/en/rules-update.md diff --git a/docs/en/rules-update.zh.md b/docs/en/rules-update.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..d2b34a03ba32a99b9645b7efa639479263cb67f0 --- /dev/null +++ b/docs/en/rules-update.zh.md @@ -0,0 +1,148 @@ +## 41.4. Rules on`INSERT`,`UPDATE`, and`DELETE` + +[41.4.1. How Update Rules Work](rules-update.html#id-1.8.6.9.7) + +[41.4.2. Cooperation with Views](rules-update.html#RULES-UPDATE-VIEWS) + +[](<>)[](<>)[](<>) + +Rules that are defined on`INSERT`,`UPDATE`, and`DELETE`are significantly different from the view rules described in the previous section. First, their`CREATE RULE`command allows more: + +- They are allowed to have no action. + +- They can have multiple actions. + +- They can be`INSTEAD`or`ALSO`(the default). + +- The pseudorelations`NEW`and`OLD`become useful. + +- They can have rule qualifications. + + Second, they don't modify the query tree in place. Instead they create zero or more new query trees and can throw away the original one. + +### Caution + +In many cases, tasks that could be performed by rules on`INSERT`/`UPDATE`/`DELETE`are better done with triggers. Triggers are notationally a bit more complicated, but their semantics are much simpler to understand. Rules tend to have surprising results when the original query contains volatile functions: volatile functions may get executed more times than expected in the process of carrying out the rules. + +Also, there are some cases that are not supported by these types of rules at all, notably including`WITH`clauses in the original query and multiple-assignment sub-`SELECT`s in the`SET`list of`UPDATE`queries. This is because copying these constructs into a rule query would result in multiple evaluations of the sub-query, contrary to the express intent of the query's author. + +### 41.4.1. 
How Update Rules Work + +Keep the syntax: + +``` +CREATE [ OR REPLACE ] RULE name AS ON event + TO table [ WHERE condition ] + DO [ ALSO | INSTEAD ] { NOTHING | command | ( command ; command ... ) } +``` + +in mind. In the following,*update rules*means rules that are defined on`INSERT`,`UPDATE`, or`DELETE`. + +Update rules get applied by the rule system when the result relation and the command type of a query tree are equal to the object and event given in the`CREATE RULE`command. For update rules, the rule system creates a list of query trees. Initially the query-tree list is empty. There can be zero (`NOTHING`key word), one, or multiple actions. To simplify, we will look at a rule with one action. This rule can have a qualification or not and it can be`INSTEAD`or`ALSO`(the default). + +What is a rule qualification? It is a restriction that tells when the actions of the rule should be done and when not. This qualification can only reference the pseudorelations`NEW`and/or`OLD`, which basically represent the relation that was given as object (but with a special meaning). + +So we have three cases that produce the following query trees for a one-action rule. + +No qualification, with either`ALSO`or`INSTEAD` + +the query tree from the rule action with the original query tree's qualification added + +Qualification given and`ALSO` + +the query tree from the rule action with the rule qualification and the original query tree's qualification added + +Qualification given and`INSTEAD` + +the query tree from the rule action with the rule qualification and the original query tree's qualification; and the original query tree with the negated rule qualification added + +Finally, if the rule is`ALSO`, the unchanged original query tree is added to the list. Since only qualified`INSTEAD`rules already add the original query tree, we end up with either one or two output query trees for a rule with one action. + +For`ON INSERT`rules, the original query (if not suppressed by`INSTEAD`) is done before any actions added by rules. This allows the actions to see the inserted row(s). But for`ON UPDATE`and`ON DELETE`rules, the original query is done after the actions added by rules. This ensures that the actions can see the to-be-updated or to-be-deleted rows; otherwise, the actions might do nothing because they find no rows matching their qualifications. + +The query trees generated from rule actions are thrown into the rewrite system again, and maybe more rules get applied resulting in additional or fewer query trees. So a rule's actions must have either a different command type or a different result relation than the rule itself is on, otherwise this recursive process will end up in an infinite loop. (Recursive expansion of a rule will be detected and reported as an error.) + +The query trees found in the actions of the`pg_rewrite`system catalog are only templates. Since they can reference the range-table entries for`NEW`and`OLD`, some substitutions have to be made before they can be used. For any reference to`NEW`, the target list of the original query is searched for a corresponding entry. If found, that entry's expression replaces the reference. Otherwise,`NEW`means the same as`OLD`(for an`UPDATE`) or is replaced by a null value (for an`INSERT`). Any reference to`OLD`is replaced by a reference to the range-table entry that is the result relation. + +After the system is done applying update rules, it applies view rules to the produced query tree(s). Views cannot insert new update actions so there is no need to apply update rules to the output of view rewriting. + +#### 41.4.1.1. A First Rule Step by Step + +Say we want to trace changes to the`sl_avail`column in the`shoelace_data`relation. So we set up a log table and a rule that conditionally writes a log entry when an`UPDATE`is performed on`shoelace_data`. + +``` +CREATE TABLE shoelace_log ( + sl_name text, -- shoelace changed + sl_avail integer, -- new available value + log_who text, -- who did it + log_when timestamp -- when +); + +CREATE RULE log_shoelace AS ON UPDATE TO shoelace_data + WHERE NEW.sl_avail <> OLD.sl_avail + DO INSERT INTO shoelace_log VALUES ( + NEW.sl_name, + NEW.sl_avail, + current_user, + current_timestamp + ); +``` + +Now someone does: + +``` +UPDATE shoelace_data SET sl_avail = 6 WHERE sl_name = 'sl7'; +``` + +and we look at the log table: + +``` +SELECT * FROM shoelace_log; + + sl_name | sl_avail | log_who | log_when +``` + +### 41.4.2. Cooperation with Views + +[]() + +A simple way to protect view relations from the mentioned possibility that someone can try to run`INSERT`,`UPDATE`, or`DELETE`on them is to let those query trees get thrown away. So we could create the rules: + +``` +CREATE RULE shoe_ins_protect AS ON INSERT TO shoe + DO INSTEAD NOTHING; +CREATE RULE shoe_upd_protect AS ON UPDATE TO shoe + DO INSTEAD NOTHING; +CREATE RULE shoe_del_protect AS ON DELETE TO shoe + DO INSTEAD NOTHING; +``` + +If someone now tries to do any of these operations on the view relation`shoe`, the rule system will apply these rules. Since the rules have no actions and are`INSTEAD`, the resulting list of query trees will be empty and the whole query will become nothing because there is nothing left to be optimized or executed after the rule system is done with it. + +A more sophisticated way to use the rule system is to create rules that rewrite the query tree into one that does the right operation on the real tables. 
To do that on the `shoelace` view, we create the following rules: + +``` +CREATE RULE shoelace_ins AS ON INSERT TO shoelace + DO INSTEAD + INSERT INTO shoelace_data VALUES ( + NEW.sl_name, + NEW.sl_avail, + NEW.sl_color, + NEW.sl_len, + NEW.sl_unit + ); + +CREATE RULE shoelace_upd AS ON UPDATE TO shoelace + DO INSTEAD + UPDATE shoelace_data + SET sl_name = NEW.sl_name, + sl_avail = NEW.sl_avail, + sl_color = NEW.sl_color, + sl_len = NEW.sl_len, + sl_unit = NEW.sl_unit + WHERE sl_name = OLD.sl_name; + +CREATE RULE shoelace_del AS ON DELETE TO shoelace + DO INSTEAD + DELETE FROM shoelace_data + WHERE sl_name = OLD.sl_name; +``` + +If you want to support `RETURNING` queries on the view, you need to make the rules include `RETURNING` clauses that compute the view rows. This is usually pretty trivial for views on a single table, but it's a bit tedious for join views such as `shoelace`. An example for the insert case is: + +``` +CREATE RULE shoelace_ins AS ON INSERT TO shoelace + DO INSTEAD + INSERT INTO shoelace_data VALUES ( + NEW.sl_name, + NEW.sl_avail, + NEW.sl_color, + NEW.sl_len, + NEW.sl_unit + ) + RETURNING + shoelace_data.*, + (SELECT shoelace_data.sl_len * u.un_fact + FROM unit u WHERE shoelace_data.sl_unit = u.un_name); +``` + +Note that this one rule supports both `INSERT` and `INSERT RETURNING` queries on the view — the `RETURNING` clause is simply ignored for `INSERT`. + +Now assume that once in a while, a pack of shoelaces arrives at the shop and a big parts list along with it. But you don't want to manually update the `shoelace` view every time. Instead we set up two little tables: one where you can insert the items from the part list, and one with a special trick. The creation commands for these are: + +``` +CREATE TABLE shoelace_arrive ( + arr_name text, + arr_quant integer +); + +CREATE TABLE shoelace_ok ( + ok_name text, + ok_quant integer +); + +CREATE RULE shoelace_ok_ins AS ON INSERT TO shoelace_ok + DO INSTEAD + UPDATE shoelace + SET sl_avail = sl_avail + NEW.ok_quant + WHERE sl_name = NEW.ok_name; +``` + +Now you can fill the table `shoelace_arrive` with the data from the parts list: + +``` +SELECT * FROM shoelace_arrive; + + arr_name | arr_quant +``` + + diff --git a/docs/X/runtime-config-autovacuum.md b/docs/en/runtime-config-autovacuum.md similarity index 100% rename from docs/X/runtime-config-autovacuum.md rename to docs/en/runtime-config-autovacuum.md diff --git a/docs/en/runtime-config-autovacuum.zh.md b/docs/en/runtime-config-autovacuum.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..f10d551c1a9459fc1d2dc7e1dd6198d8ceec3cbb --- /dev/null +++ b/docs/en/runtime-config-autovacuum.zh.md @@ -0,0 +1,63 @@ +## 20.10. Automatic Vacuuming + +[](<>) + +These settings control the behavior of the*autovacuum*feature. Refer to[Section 25.1.6](routine-vacuuming.html#AUTOVACUUM)for more information. Note that many of these settings can be overridden on a per-table basis; see[Storage Parameters](sql-createtable.html#SQL-CREATETABLE-STORAGE-PARAMETERS). + +`autovacuum`(`boolean`)[](<>) + +Controls whether the server should run the autovacuum launcher daemon. This is on by default; however,[track_counts](runtime-config-statistics.html#GUC-TRACK-COUNTS)must also be enabled for autovacuum to work. This parameter can only be set in the`postgresql.conf`file or on the server command line; however, autovacuuming can be disabled for individual tables by changing table storage parameters. + +Note that even when this parameter is disabled, the system will launch autovacuum processes if necessary to prevent transaction ID wraparound. 
See[Section 25.1.5](routine-vacuuming.html#VACUUM-FOR-WRAPAROUND)for more information. + +`autovacuum_max_workers`(`integer`)[](<>) + +Specifies the maximum number of autovacuum processes (other than the autovacuum launcher) that may be running at any one time. The default is three. This parameter can only be set at server start. + +`autovacuum_naptime`(`integer`)[](<>) + +Specifies the minimum delay between autovacuum runs on any given database. In each round the daemon examines the database and issues`VACUUM`and`ANALYZE`commands as needed for tables in that database. If this value is specified without units, it is taken as seconds. The default is one minute (`1min`). This parameter can only be set in the`postgresql.conf`file or on the server command line. + +`autovacuum_vacuum_threshold`(`integer`)[](<>) + +Specifies the minimum number of updated or deleted tuples needed to trigger a`VACUUM`in any one table. The default is 50 tuples. This parameter can only be set in the`postgresql.conf`file or on the server command line; but the setting can be overridden for individual tables by changing table storage parameters. + +`autovacuum_vacuum_insert_threshold`(`integer`)[](<>) + +Specifies the number of inserted tuples needed to trigger a`VACUUM`in any one table. The default is 1000 tuples. If -1 is specified, autovacuum will not trigger a`VACUUM`operation on any tables based on the number of inserts. This parameter can only be set in the`postgresql.conf`file or on the server command line; but the setting can be overridden for individual tables by changing table storage parameters. + +`autovacuum_analyze_threshold`(`integer`)[](<>) + +Specifies the minimum number of inserted, updated or deleted tuples needed to trigger an`ANALYZE`in any one table. The default is 50 tuples. This parameter can only be set in the`postgresql.conf`file or on the server command line; but the setting can be overridden for individual tables by changing table storage parameters. + +`autovacuum_vacuum_scale_factor`(`floating point`)[](<>) + +Specifies a fraction of the table size to add to`autovacuum_vacuum_threshold`when deciding whether to trigger a`VACUUM`. The default is 0.2 (20% of table size). This parameter can only be set in the`postgresql.conf`file or on the server command line; but the setting can be overridden for individual tables by changing table storage parameters. + +`autovacuum_vacuum_insert_scale_factor`(`floating point`)[](<>) + +Specifies a fraction of the table size to add to`autovacuum_vacuum_insert_threshold`when deciding whether to trigger a`VACUUM`. The default is 0.2 (20% of table size). This parameter can only be set in the`postgresql.conf`file or on the server command line; but the setting can be overridden for individual tables by changing table storage parameters. + +`autovacuum_analyze_scale_factor`(`floating point`)[](<>) + +Specifies a fraction of the table size to add to`autovacuum_analyze_threshold`when deciding whether to trigger an`ANALYZE`. The default is 0.1 (10% of table size). This parameter can only be set in the`postgresql.conf`file or on the server command line; but the setting can be overridden for individual tables by changing table storage parameters. + +`autovacuum_freeze_max_age`(`integer`)[](<>) + +Specifies the maximum age (in transactions) that a table's`pg_class`.`relfrozenxid`field can attain before a`VACUUM`operation is forced to prevent transaction ID wraparound within the table. Note that the system will launch autovacuum processes to prevent wraparound even when autovacuum is otherwise disabled. + +Vacuum also allows removal of old files from the`pg_xact`subdirectory, which is why the default is a relatively low 200 million transactions. This parameter can only be set at server start, but the setting can be reduced for individual tables by changing table storage parameters. For more information see[Section 25.1.5](routine-vacuuming.html#VACUUM-FOR-WRAPAROUND). + +`autovacuum_multixact_freeze_max_age`(`integer`)[](<>) + +Specifies the maximum age (in multixacts) that a table's`pg_class`.`relminmxid`field can attain before a`VACUUM`operation is forced to prevent multixact ID wraparound within the table. Note that the system will launch autovacuum processes to prevent wraparound even when autovacuum is otherwise disabled. + +Vacuuming multixacts also allows removal of old files from the`pg_multixact/members`and`pg_multixact/offsets`subdirectories, which is why the default is a relatively low 400 million multixacts. This parameter can only be set at server start, but the setting can be reduced for individual tables by changing table storage parameters. For more information see[Section 25.1.5.1](routine-vacuuming.html#VACUUM-FOR-MULTIXACT-WRAPAROUND). + +`autovacuum_vacuum_cost_delay`(`floating point`)[](<>) + +Specifies the cost delay value that will be used in automatic`VACUUM`operations. If -1 is specified, the regular[vacuum_cost_delay](runtime-config-resource.html#GUC-VACUUM-COST-DELAY)value will be used. If this value is specified without units, it is taken as milliseconds. The default is 2 milliseconds. This parameter can only be set in the`postgresql.conf`file or on the server command line; but the setting can be overridden for individual tables by changing table storage parameters. + +`autovacuum_vacuum_cost_limit`(`integer`)[](<>) + +Specifies the cost limit value that will be used in automatic`VACUUM`operations. If -1 is specified (which is the default), the regular[vacuum_cost_limit](runtime-config-resource.html#GUC-VACUUM-COST-LIMIT)value will be used. Note that the value is distributed proportionally among the running autovacuum workers, if there is more than one, so that the sum of the limits for each worker does not exceed the value of this variable. This parameter can only be set in the`postgresql.conf`file or on the server command line; but the setting can be overridden for individual tables by changing table storage parameters. diff --git a/docs/X/runtime-config-client.md b/docs/en/runtime-config-client.md similarity index 100% rename from docs/X/runtime-config-client.md rename to docs/en/runtime-config-client.md diff --git a/docs/en/runtime-config-client.zh.md b/docs/en/runtime-config-client.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..24e7a52e7c044ebcdb2ee483689a75ba506bb188 --- /dev/null +++ b/docs/en/runtime-config-client.zh.md @@ -0,0 +1,343 @@ +## 20.11. Client Connection Defaults + +[20.11.1. Statement Behavior](runtime-config-client.html#RUNTIME-CONFIG-CLIENT-STATEMENT) + +[20.11.2. Locale and Formatting](runtime-config-client.html#RUNTIME-CONFIG-CLIENT-FORMAT) + +[20.11.3. 
Shared Library Preloading](runtime-config-client.html#RUNTIME-CONFIG-CLIENT-PRELOAD) + +[20.11.4. Other Defaults](runtime-config-client.html#RUNTIME-CONFIG-CLIENT-OTHER) + +### 20.11.1. Statement Behavior + +`client_min_messages`(`enum`)[](<>) + +Controls which[message levels](runtime-config-logging.html#RUNTIME-CONFIG-SEVERITY-LEVELS)are sent to the client. Valid values are`DEBUG5`,`DEBUG4`,`DEBUG3`,`DEBUG2`,`DEBUG1`,`LOG`,`NOTICE`,`WARNING`, and`ERROR`. Each level includes all the levels that follow it. The later the level, the fewer messages are sent. The default is`NOTICE`. Note that`LOG`has a different rank here than in[log_min_messages](runtime-config-logging.html#GUC-LOG-MIN-MESSAGES). + +`INFO`level messages are always sent to the client. + +`search_path`(`string`)[](<>)[](<>) + +This variable specifies the order in which schemas are searched when an object (table, data type, function, etc.) is referenced by a simple name with no schema specified. When there are objects of identical names in different schemas, the one found first in the search path is used. An object that is not in any of the schemas in the search path can only be referenced by specifying its containing schema with a qualified (dotted) name. + +The value for`search_path`must be a comma-separated list of schema names. Any name that is not an existing schema, or is a schema for which the user does not have`USAGE`permission, is silently ignored. + +If one of the list items is the special name`$user`, then the schema having the name returned by`CURRENT_USER`is substituted, if there is such a schema and the user has`USAGE`permission for it. (If not,`$user`is ignored.) + +The system catalog schema,`pg_catalog`, is always searched, whether it is mentioned in the path or not. If it is mentioned in the path then it will be searched in the specified order. If`pg_catalog`is not in the path then it will be searched *before* searching any of the path items. + +Likewise, the current session's temporary-table schema,`pg_temp_`*`nnn`*, is always searched if it exists. It can be explicitly listed in the path by using the alias`pg_temp`[](<>). If it is not listed in the path then it is searched first (even before`pg_catalog`). However, the temporary schema is only searched for relation (table, view, sequence, etc.) and data type names. It is never searched for function or operator names. + +When objects are created without specifying a particular target schema, they will be placed in the first valid schema named in`search_path`. An error is reported if the search path is empty. + +The default value for this parameter is`"$user", public`. This setting supports shared use of a database (where no users have private schemas, and all share use of`public`), private per-user schemas, and combinations of these. Other effects can be obtained by altering the default search path setting, either globally or per-user. + +For more information on schema handling, see[Section 5.9](ddl-schemas.html). In particular, the default configuration is only suitable when the database has a single user or a few mutually-trusting users. + +The current effective value of the search path can be examined via the SQL function`current_schemas`(see[Section 9.26](functions-info.html)). This is not quite the same as examining the value of`search_path`, since`current_schemas`shows how the items appearing in`search_path`were resolved. + +`row_security`(`boolean`)[](<>) + +This variable controls whether to raise an error in lieu of applying a row security policy. When set to`on`, policies apply normally. When set to`off`, queries fail which would otherwise apply at least one policy. The default is`on`. Change to`off`where limited row visibility could cause incorrect results; for example, pg_dump makes that change by default. This variable has no effect on roles which bypass every row security policy, to wit, superusers and roles with the`BYPASSRLS`attribute. + +For more information on row security policies, see[CREATE POLICY](sql-createpolicy.html). + +`default_table_access_method`(`string`)[](<>) + +This parameter specifies the default table access method to use when creating tables or materialized views if the`CREATE`command does not explicitly specify an access method, or when`SELECT ... INTO`is used, which does not allow specifying a table access method. The default is`heap`. + +`default_tablespace`(`string`)[](<>)[](<>) + +This variable specifies the default tablespace in which to create objects (tables and indexes) when a`CREATE`command does not explicitly specify a tablespace. + +The value is either the name of a tablespace, or an empty string to specify using the default tablespace of the current database. If the value does not match the name of any existing tablespace, PostgreSQL will automatically use the default tablespace of the current database. If a nondefault tablespace is specified, the user must have`CREATE`privilege for it, or creation attempts will fail. + +This variable is not used for temporary tables; for them,[temp_tablespaces](runtime-config-client.html#GUC-TEMP-TABLESPACES)is consulted instead. + +This variable is also not used when creating databases. By default, a new database inherits its tablespace setting from the template database it is copied from. + +If this parameter is set to a value other than the empty string when a partitioned table is created, the partitioned table's tablespace will be set to that value, which will be used as the default tablespace for partitions created in the future, even if`default_tablespace`has changed since then. + +For more information on tablespaces, see[Section 23.6](manage-ag-tablespaces.html). + +`default_toast_compression`(`enum`)[](<>) + +This variable sets the default[TOAST](storage-toast.html)compression method for values of compressible columns. (This can be overridden for individual columns by setting the`COMPRESSION`column option in`CREATE TABLE`or`ALTER TABLE`.) The supported compression methods are`pglz`and (if PostgreSQL was compiled with`--with-lz4`)`lz4`. The default is`pglz`. + +`temp_tablespaces`(`string`)[](<>)[](<>) + +This variable specifies tablespaces in which to create temporary objects (temp tables and indexes on temp tables) when a`CREATE`command does not explicitly specify a tablespace. Temporary files for purposes such as sorting large data sets are also created in these tablespaces. + +The value is a list of names of tablespaces. When there is more than one name in the list, PostgreSQL chooses a random member of the list each time a temporary object is to be created; except that within a transaction, successively created temporary objects are placed in successive tablespaces from the list. If the selected element of the list is an empty string, PostgreSQL will automatically use the default tablespace of the current database instead. + +When`temp_tablespaces`is set interactively, specifying a nonexistent tablespace is an error, as is specifying a tablespace for which the user lacks`CREATE`privilege. However, when using a previously set value, nonexistent tablespaces are ignored, as are tablespaces for which the user lacks`CREATE`privilege. In particular, this rule applies when using a value set in`postgresql.conf`. + +The default value is an empty string, which results in all temporary objects being created in the default tablespace of the current database. + +See also[default_tablespace](runtime-config-client.html#GUC-DEFAULT-TABLESPACE). 
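+ +As a brief session-level sketch (the tablespace names are hypothetical; they must already exist and the user needs`CREATE`privilege on them): + +``` +-- Hypothetical tablespaces created earlier with CREATE TABLESPACE. +SET default_tablespace = 'fastspace';         -- new tables and indexes go here +SET temp_tablespaces = 'scratch1, scratch2';  -- temporary objects are spread over these +CREATE TABLE t_example (id int);              -- created in fastspace +```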
`check_function_body`(`boolean`)[](<>) + +This parameter is normally on. When set to`off`, it disables validation of the routine body string during[CREATE FUNCTION](sql-createfunction.html)and[CREATE PROCEDURE](sql-createprocedure.html). Disabling validation avoids side effects of the validation process, in particular preventing false positives due to problems such as forward references. Set this parameter to`off`before loading functions on behalf of other users; pg_dump does so automatically. + +`default_transaction_isolation`(`enum`)[](<>)[](<>) + +Each SQL transaction has an isolation level, which can be either “read uncommitted”, “read committed”, “repeatable read”, or “serializable”. This parameter controls the default isolation level of each new transaction. The default is “read committed”. + +Consult[Chapter 13](mvcc.html)and[SET TRANSACTION](sql-set-transaction.html)for more information. + +`default_transaction_read_only`(`boolean`)[](<>)[](<>) + +A read-only SQL transaction cannot alter non-temporary tables. This parameter controls the default read-only status of each new transaction. The default is`off`(read/write). + +Consult[SET TRANSACTION](sql-set-transaction.html)for more information. + +`default_transaction_deferrable`(`boolean`)[](<>)[](<>) + +When running at the`serializable`isolation level, a deferrable read-only SQL transaction may be delayed before it is allowed to proceed. However, once it begins executing it does not incur any of the overhead required to ensure serializability; so serialization code will have no reason to force it to abort because of concurrent updates, making this option suitable for long-running read-only transactions. + +This parameter controls the default deferrable status of each new transaction. It currently has no effect on read-write transactions or those operating at isolation levels lower than`serializable`. The default is`off`. + +Consult[SET TRANSACTION](sql-set-transaction.html)for more information. + +`transaction_isolation`(`enum`)[](<>)[](<>) + +This parameter reflects the current transaction's isolation level. At the beginning of each transaction, it is set to the current value of[default_transaction_isolation](runtime-config-client.html#GUC-DEFAULT-TRANSACTION-ISOLATION). Any subsequent attempt to change it is equivalent to a[SET TRANSACTION](sql-set-transaction.html)command. + +`transaction_read_only`(`boolean`)[](<>)[](<>) + +This parameter reflects the current transaction's read-only status. At the beginning of each transaction, it is set to the current value of[default_transaction_read_only](runtime-config-client.html#GUC-DEFAULT-TRANSACTION-READ-ONLY). Any subsequent attempt to change it is equivalent to a[SET TRANSACTION](sql-set-transaction.html)command. + +`transaction_deferrable`(`boolean`)[](<>)[](<>) + +This parameter reflects the current transaction's deferrability status. At the beginning of each transaction, it is set to the current value of[default_transaction_deferrable](runtime-config-client.html#GUC-DEFAULT-TRANSACTION-DEFERRABLE). Any subsequent attempt to change it is equivalent to a[SET TRANSACTION](sql-set-transaction.html)command. + +`session_replication_role`(`enum`)[](<>) + +Controls firing of replication-related triggers and rules for the current session. Setting this variable requires superuser privilege and results in discarding any previously cached query plans. Possible values are`origin`(the default),`replica`and`local`. + +The intended use of this setting is that logical replication systems set it to`replica`when they are applying replicated changes. The effect of that will be that triggers and rules (that have not been altered from their default configuration) will not fire on the replica. See the[`ALTER TABLE`](sql-altertable.html)clauses`ENABLE TRIGGER`and`ENABLE RULE`for more information. + +PostgreSQL treats the settings`origin`and`local`the same internally. Third-party replication systems may use these two values for their internal purposes, for example using`local`to designate a session whose changes should not be replicated. + +Since foreign keys are implemented as triggers, setting this parameter to`replica`also disables all foreign key checks, which can leave data in an inconsistent state if improperly used. + +`statement_timeout`(`integer`)[](<>) + +Abort any statement that takes more than the specified amount of time. If`log_min_error_statement`is set to`ERROR`or lower, the statement that timed out will also be logged. If this value is specified without units, it is taken as milliseconds. A value of zero (the default) disables the timeout. + +The timeout is measured from the time a command arrives at the server until it is completed by the server. If multiple SQL statements appear in a single simple-Query message, the timeout is applied to each statement separately. (PostgreSQL versions before 13 usually treated the timeout as applying to the whole query string.) In extended query protocol, the timeout starts running when any query-related message (Parse, Bind, Execute, Describe) arrives, and it is canceled by completion of an Execute or Sync message. 
+
+Setting `statement_timeout` in `postgresql.conf` is not recommended because it would affect all sessions.
+
+`lock_timeout` (`integer`) [](<>)
+
+Abort any statement that waits longer than the specified amount of time while attempting to acquire a lock on a table, index, row, or other database object. The time limit applies separately to each lock acquisition attempt. The limit applies both to explicit locking requests (such as `LOCK TABLE`, or `SELECT FOR UPDATE` without `NOWAIT`) and to implicitly-acquired locks. If this value is specified without units, it is taken as milliseconds. A value of zero (the default) disables the timeout.
+
+Unlike `statement_timeout`, this timeout can only occur while waiting for locks. Note that if `statement_timeout` is nonzero, it is rather pointless to set `lock_timeout` to the same or a larger value, since the statement timeout would always trigger first. If `log_min_error_statement` is set to `ERROR` or lower, the statement that timed out will be logged.
+
+Setting `lock_timeout` in `postgresql.conf` is not recommended because it would affect all sessions.
+
+`idle_in_transaction_session_timeout` (`integer`) [](<>)
+
+Terminate any session that has been idle (that is, waiting for a client query) within an open transaction for longer than the specified amount of time. If this value is specified without units, it is taken as milliseconds. A value of zero (the default) disables the timeout.
+
+This option can be used to ensure that idle sessions do not hold locks for an unreasonable amount of time. Even when no significant locks are held, an open transaction prevents vacuuming away recently-dead tuples that may be visible only to this transaction; so remaining idle for a long time can contribute to table bloat. See [Section 25.1](routine-vacuuming.html) for more details.
+
+`idle_session_timeout` (`integer`) [](<>)
+
+Terminate any session that has been idle (that is, waiting for a client query), but not within an open transaction, for longer than the specified amount of time. If this value is specified without units, it is taken as milliseconds. A value of zero (the default) disables the timeout.
+
+Unlike the case with an open transaction, an idle session without a transaction imposes no large costs on the server, so there is less need to enable this timeout than `idle_in_transaction_session_timeout`.
+
+Be wary of enforcing this timeout on connections made through connection-pooling software or other middleware, as such a layer may not react well to unexpected connection closure. It may be helpful to enable this timeout only for interactive sessions, perhaps by applying it only to particular users.
+
+`vacuum_freeze_table_age` (`integer`) [](<>)
+
+`VACUUM` performs an aggressive scan if the table's `pg_class`.`relfrozenxid` field has reached the age specified by this setting. An aggressive scan differs from a regular `VACUUM` in that it visits every page that might contain unfrozen XIDs or MXIDs, not just those that might contain dead tuples. The default is 150 million transactions. Although users can set this value anywhere from zero to two billion, `VACUUM` will silently limit the effective value to 95% of [autovacuum_freeze_max_age](runtime-config-autovacuum.html#GUC-AUTOVACUUM-FREEZE-MAX-AGE), so that a periodic manual `VACUUM` has a chance to run before an anti-wraparound autovacuum is launched for the table. For more information see [Section 25.1.5](routine-vacuuming.html#VACUUM-FOR-WRAPAROUND).
+
+`vacuum_freeze_min_age` (`integer`) [](<>)
+
+Specifies the cutoff age (in transactions) that `VACUUM` should use to decide whether to freeze row versions while scanning a table. The default is 50 million transactions. Although users can set this value anywhere from zero to one billion, `VACUUM` will silently limit the effective value to half the value of [autovacuum_freeze_max_age](runtime-config-autovacuum.html#GUC-AUTOVACUUM-FREEZE-MAX-AGE), so that there is not an unreasonably short time between forced autovacuums. For more information see [Section 25.1.5](routine-vacuuming.html#VACUUM-FOR-WRAPAROUND).
+
+`vacuum_failsafe_age` (`integer`) [](<>)
+
+Specifies the maximum age (in transactions) that a table's `pg_class`.`relfrozenxid` field can attain before `VACUUM` takes extraordinary measures to avoid system-wide transaction ID wraparound failure. This is `VACUUM`'s strategy of last resort. The failsafe typically triggers when an autovacuum to prevent transaction ID wraparound has already been running for some time, though it is possible for the failsafe to trigger during any `VACUUM`.
+
+When the failsafe is triggered, any cost-based delay that is in effect will no longer be applied, and further non-essential maintenance tasks (such as index vacuuming) are bypassed.
+
+The default is 1.6 billion transactions. Although users can set this value anywhere from zero to 2.1 billion, `VACUUM` will silently adjust the effective value to no less than 105% of [autovacuum_freeze_max_age](runtime-config-autovacuum.html#GUC-AUTOVACUUM-FREEZE-MAX-AGE).
+
+`vacuum_multixact_freeze_table_age` (`integer`) [](<>)
+
+`VACUUM` performs an aggressive scan if the table's `pg_class`.`relminmxid` field has reached the age specified by this setting. An aggressive scan differs from a regular `VACUUM` in that it visits every page that might contain unfrozen XIDs or MXIDs, not just those that might contain dead tuples. The default is 150 million multixacts. Although users can set this value anywhere from zero to two billion, `VACUUM` will silently limit the effective value to 95% of [autovacuum_multixact_freeze_max_age](runtime-config-autovacuum.html#GUC-AUTOVACUUM-MULTIXACT-FREEZE-MAX-AGE), so that a periodic manual `VACUUM` has a chance to run before an anti-wraparound vacuum is launched for the table. For more information see [Section 25.1.5.1](routine-vacuuming.html#VACUUM-FOR-MULTIXACT-WRAPAROUND).
+
+`vacuum_multixact_freeze_min_age` (`integer`) [](<>)
+
+Specifies the cutoff age (in multixacts) that `VACUUM` should use to decide whether to replace multixact IDs with a newer transaction ID or multixact ID while scanning a table. The default is 5 million multixacts. Although users can set this value anywhere from zero to one billion, `VACUUM` will silently limit the effective value to half the value of [autovacuum_multixact_freeze_max_age](runtime-config-autovacuum.html#GUC-AUTOVACUUM-MULTIXACT-FREEZE-MAX-AGE), so that there is not an unreasonably short time between forced autovacuums. For more information see [Section 25.1.5.1](routine-vacuuming.html#VACUUM-FOR-MULTIXACT-WRAPAROUND).
+
+`vacuum_multixact_failsafe_age` (`integer`) [](<>)
+
+Specifies the maximum age (in multixacts) that a table's `pg_class`.`relminmxid` field can attain before `VACUUM` takes extraordinary measures to avoid system-wide multixact ID wraparound failure. This is `VACUUM`'s strategy of last resort. The failsafe typically triggers when an autovacuum to prevent transaction ID wraparound has already been running for some time, though it is possible for the failsafe to trigger during any `VACUUM`.
+
+When the failsafe is triggered, any cost-based delay that is in effect will no longer be applied, and further non-essential maintenance tasks (such as index vacuuming) are bypassed.
+
+The default is 1.6 billion multixacts. Although users can set this value anywhere from zero to 2.1 billion, `VACUUM` will silently adjust the effective value to no less than 105% of [autovacuum_multixact_freeze_max_age](runtime-config-autovacuum.html#GUC-AUTOVACUUM-MULTIXACT-FREEZE-MAX-AGE).
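+
+A hedged `postgresql.conf` sketch of the freeze-related settings above (the values are illustrative, not recommendations):
+
+```
+# Trigger aggressive VACUUM scans earlier than the 150-million default.
+vacuum_freeze_table_age = 100000000
+# Freeze row versions once they are 20 million transactions old.
+vacuum_freeze_min_age = 20000000
+```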
+
+`bytea_output` (`enum`) [](<>)
+
+Sets the output format for values of type `bytea`. Valid values are `hex` (the default) and `escape` (the traditional PostgreSQL format). See [Section 8.4](datatype-binary.html) for more information. The `bytea` type always accepts both formats on input, regardless of this setting.
+
+`xmlbinary` (`enum`) [](<>)
+
+Sets how binary values are to be encoded in XML. This applies, for example, when `bytea` values are converted to XML by the functions `xmlelement` or `xmlforest`. Possible values are `base64` and `hex`, which are both defined in the XML Schema standard. The default is `base64`. For further information about XML-related functions, see [Section 9.15](functions-xml.html).
+
+The actual choice here is mostly a matter of taste, constrained only by possible restrictions in client applications. Both methods support all possible values, although the hex encoding will be somewhat larger than the base64 encoding.
+
+`xmloption` (`enum`) [](<>) [](<>) [](<>)
+
+Sets whether `DOCUMENT` or `CONTENT` is implicit when converting between XML and character string values. See [Section 8.13](datatype-xml.html) for a description of this. Valid values are `DOCUMENT` and `CONTENT`. The default is `CONTENT`.
+
+According to the SQL standard, the command to set this option is
+
+```
+SET XML OPTION { DOCUMENT | CONTENT };
+```
+
+This syntax is also available in PostgreSQL.
+
+`gin_pending_list_limit` (`integer`) [](<>)
+
+Sets the maximum size of a GIN index's pending list, which is used when `fastupdate` is enabled. If the list grows larger than this maximum size, it is cleaned up by moving the entries in it to the index's main GIN data structure in bulk. If this value is specified without units, it is taken as kilobytes. The default is four megabytes (`4MB`). This setting can be overridden for individual GIN indexes by changing index storage parameters. See [Section 67.4.1](gin-implementation.html#GIN-FAST-UPDATE) and [Section 67.5](gin-tips.html) for more information.
+
+### 20.11.2. Locale and Formatting
+
+`DateStyle` (`string`) [](<>)
+
+Sets the display format for date and time values, as well as the rules for interpreting ambiguous date input values. For historical reasons, this variable contains two independent components: the output format specification (`ISO`, `Postgres`, `SQL`, or `German`) and the input/output specification for year/month/day ordering (`DMY`, `MDY`, or `YMD`). These can be set separately or together. The keywords `Euro` and `European` are synonyms for `DMY`; the keywords `US`, `NonEuro`, and `NonEuropean` are synonyms for `MDY`. See [Section 8.5](datatype-datetime.html) for more information. The built-in default is `ISO, MDY`, but initdb will initialize the configuration file with a setting that corresponds to the behavior of the chosen `lc_time` locale.
+
+`IntervalStyle` (`enum`) [](<>)
+
+Sets the display format for interval values. The value `sql_standard` will produce output matching SQL standard interval literals. The value `postgres` (which is the default) will produce output matching PostgreSQL releases prior to 8.4 when the [DateStyle](runtime-config-client.html#GUC-DATESTYLE) parameter was set to `ISO`. The value `postgres_verbose` will produce output matching PostgreSQL releases prior to 8.4 when the `DateStyle` parameter was set to non-`ISO` output. The value `iso_8601` will produce output matching the time interval “format with designators” defined in section 4.4.3.2 of ISO 8601.
+
+The `IntervalStyle` parameter also affects the interpretation of ambiguous interval input. See [Section 8.5.4](datatype-datetime.html#DATATYPE-INTERVAL-INPUT) for more information.
+
+`TimeZone` (`string`) [](<>) [](<>)
+
+Sets the time zone for displaying and interpreting time stamps. The built-in default is `GMT`, but that is typically overridden in `postgresql.conf`; initdb will install a setting there corresponding to its system environment. See [Section 8.5.3](datatype-datetime.html#DATATYPE-TIMEZONES) for more information.
+
+`timezone_abbreviations` (`string`) [](<>) [](<>)
+
+Sets the collection of time zone abbreviations that will be accepted by the server for datetime input. The default is `'Default'`, which is a collection that works in most of the world; there are also `'Australia'` and `'India'`, and other collections can be defined for a particular installation. See [Section B.4](datetime-config-files.html) for more information.
+
+`extra_float_digits` (`integer`) [](<>) [](<>) [](<>)
+
+This parameter adjusts the number of digits used for textual output of floating-point values, including `float4`, `float8`, and geometric data types.
+
+If the value is 1 (the default) or above, float values are output in shortest-precise format; see [Section 8.1.3](datatype-numeric.html#DATATYPE-FLOAT). The actual number of digits generated depends only on the value being output, not on the value of this parameter. At most 17 digits are required for `float8` values, and 9 for `float4` values. This format is both fast and precise, preserving the original binary float value exactly when correctly read. For historical compatibility, values up to 3 are allowed.
+
+If the value is zero or negative, then the output is rounded to a given decimal precision. The precision used is the standard number of digits for the type (`FLT_DIG` or `DBL_DIG` as appropriate) reduced according to the value of this parameter. (For example, specifying -1 will cause `float4` values to be output rounded to 5 significant digits, and `float8` values rounded to 14 digits.) This format is slower and does not preserve all the bits of the binary float value, but may be more human-readable.
+
+### Note
+
+The meaning of this parameter, and its default value, changed in PostgreSQL 12; see [Section 8.1.3](datatype-numeric.html#DATATYPE-FLOAT) for further discussion.
+
+`client_encoding` (`string`) [](<>) [](<>)
+
+Sets the client-side encoding (character set). The default is to use the database encoding. The character sets supported by the PostgreSQL server are described in [Section 24.3.1](multibyte.html#MULTIBYTE-CHARSET-SUPPORTED).
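+
+As a hedged illustration of the formatting parameters in this subsection:
+
+```
+-- Output dates in ISO format; read ambiguous input as day/month/year.
+SET DateStyle = 'ISO, DMY';
+SELECT '01/02/2024'::date;        -- interpreted as 1 February 2024
+
+-- Print intervals using ISO 8601 designators.
+SET IntervalStyle = 'iso_8601';
+SELECT interval '1 day 2 hours';  -- displayed as P1DT2H
+```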
+
+`lc_messages` (`string`) [](<>)
+
+Sets the language in which messages are displayed. Acceptable values are system-dependent; see [Section 24.1](locale.html) for more information. If this variable is set to the empty string (which is the default) then the value is inherited from the execution environment of the server in a system-dependent way.
+
+On some systems, this locale category does not exist. Setting this variable will still work, but there will be no effect. Also, there is a chance that no translated messages for the desired language exist. In that case you will continue to see the English messages.
+
+Only superusers can change this setting, because it affects the messages sent to the server log as well as to the client, and an improper value might obscure the readability of the server logs.
+
+`lc_monetary` (`string`) [](<>)
+
+Sets the locale to use for formatting monetary amounts, for example with the `to_char` family of functions. Acceptable values are system-dependent; see [Section 24.1](locale.html) for more information. If this variable is set to the empty string (which is the default) then the value is inherited from the execution environment of the server in a system-dependent way.
+
+`lc_numeric` (`string`) [](<>)
+
+Sets the locale to use for formatting numbers, for example with the `to_char` family of functions. Acceptable values are system-dependent; see [Section 24.1](locale.html) for more information. If this variable is set to the empty string (which is the default) then the value is inherited from the execution environment of the server in a system-dependent way.
+
+`lc_time` (`string`) [](<>)
+
+Sets the locale to use for formatting dates and times, for example with the `to_char` family of functions. Acceptable values are system-dependent; see [Section 24.1](locale.html) for more information. If this variable is set to the empty string (which is the default) then the value is inherited from the execution environment of the server in a system-dependent way.
+
+`default_text_search_config` (`string`) [](<>)
+
+Selects the text search configuration that is used by those variants of the text search functions that do not have an explicit argument specifying the configuration. See [Chapter 12](textsearch.html) for further information. The built-in default is `pg_catalog.simple`, but initdb will initialize the configuration file with a setting that corresponds to the chosen `lc_ctype` locale, if a configuration matching that locale can be identified.
+
+### 20.11.3. Shared Library Preloading
+
+Several settings are available for preloading shared libraries into the server, in order to load additional functionality or achieve performance benefits. For example, a setting of `'$libdir/mylib'` would cause `mylib.so` (or on some platforms, `mylib.sl`) to be preloaded from the installation's standard library directory. The differences between the settings are when they take effect and what privileges are required to change them.
+
+PostgreSQL procedural language libraries can be preloaded in this way, typically by using the syntax `'$libdir/plXXX'` where `XXX` is `pgsql`, `perl`, `tcl`, or `python`.
+
+Only shared libraries specifically intended to be used with PostgreSQL can be loaded this way. Every PostgreSQL-supported library has a “magic block” that is checked to guarantee compatibility. For this reason, non-PostgreSQL libraries cannot be loaded in this way. You might be able to use operating-system facilities such as `LD_PRELOAD` for that.
+
+In general, refer to the documentation of a specific module for the recommended way to load that module.
+
+`local_preload_libraries` (`string`) [](<>) [](<>)
+
+This variable specifies one or more shared libraries that are to be preloaded at connection start. It contains a comma-separated list of library names, where each name is interpreted as for the [`LOAD`](sql-load.html) command. Whitespace between entries is ignored; surround a library name with double quotes if you need to include whitespace or commas in the name. The parameter value only takes effect at the start of the connection; subsequent changes have no effect. If a specified library is not found, the connection attempt will fail.
+
+This option can be set by any user. Because of that, the libraries that can be loaded are restricted to those appearing in the `plugins` subdirectory of the installation's standard library directory. (It is the database administrator's responsibility to ensure that only “safe” libraries are installed there.) Entries in `local_preload_libraries` can specify this directory explicitly, for example `$libdir/plugins/mylib`, or just specify the library name; `mylib` would have the same effect as `$libdir/plugins/mylib`.
+
+The intent of this feature is to allow unprivileged users to load debugging or performance-measurement libraries into specific sessions without requiring an explicit `LOAD` command. To that end, it would be typical to set this parameter using the `PGOPTIONS` environment variable on the client or by using `ALTER ROLE SET`.
+
+However, unless a module is specifically designed to be used in this way by non-superusers, this is usually not the right setting to use. Look at [session_preload_libraries](runtime-config-client.html#GUC-SESSION-PRELOAD-LIBRARIES) instead.
+
+`session_preload_libraries` (`string`) [](<>)
+
+This variable specifies one or more shared libraries that are to be preloaded at connection start. It contains a comma-separated list of library names, where each name is interpreted as for the [`LOAD`](sql-load.html) command. Whitespace between entries is ignored; surround a library name with double quotes if you need to include whitespace or commas in the name. The parameter value only takes effect at the start of the connection; subsequent changes have no effect. If a specified library is not found, the connection attempt will fail. Only superusers can change this setting.
+
+The intent of this feature is to allow debugging or performance-measurement libraries to be loaded into specific sessions without an explicit `LOAD` command being given. For example, [auto_explain](auto-explain.html) could be enabled for all sessions under a given user name by setting this parameter with `ALTER ROLE SET` (a sketch of this appears at the end of Section 20.11.4 below). Also, this parameter can be changed without restarting the server (but changes only take effect when a new session is started), so it is easier to add new modules this way, even if they should apply to all sessions.
+
+Unlike [shared_preload_libraries](runtime-config-client.html#GUC-SHARED-PRELOAD-LIBRARIES), there is no large performance advantage to loading a library at session start rather than when it is first used. There is some advantage, however, when connection pooling is used.
+
+`shared_preload_libraries` (`string`) [](<>)
+
+This variable specifies one or more shared libraries to be preloaded at server start. It contains a comma-separated list of library names, where each name is interpreted as for the [`LOAD`](sql-load.html) command. Whitespace between entries is ignored; surround a library name with double quotes if you need to include whitespace or commas in the name. This parameter can only be set at server start. If a specified library is not found, the server will fail to start.
+
+Some libraries need to perform certain operations that can only take place at postmaster start, such as allocating shared memory, reserving light-weight locks, or starting background workers. Those libraries must be loaded at server start through this parameter. See the documentation of each library for details.
+
+Other libraries can also be preloaded. By preloading a shared library, the library startup time is avoided when the library is first used. However, the time to start each new server process might increase slightly, even if that process never uses the library. So this parameter is recommended only for libraries that will be used in most sessions. Also, changing this parameter requires a server restart, so this is not the right setting to use for short-term debugging tasks, say. Use [session_preload_libraries](runtime-config-client.html#GUC-SESSION-PRELOAD-LIBRARIES) for that instead.
+
+### Note
+
+On Windows hosts, preloading a library at server start will not reduce the time required to start each new server process; each server process will re-load all preload libraries. However, `shared_preload_libraries` is still useful on Windows hosts for libraries that need to perform operations at postmaster start time.
+
+`jit_provider` (`string`) [](<>)
+
+This variable is the name of the JIT provider library to be used (see [Section 32.4.2](jit-extensibility.html#JIT-PLUGGABLE)). The default is `llvmjit`. This parameter can only be set at server start.
+
+If set to a non-existent library, JIT will not be available, but no error will be raised. This allows JIT support to be installed separately from the main PostgreSQL package.
+
+### 20.11.4. Other Defaults
+
+`dynamic_library_path` (`string`) [](<>) [](<>)
+
+If a dynamically loadable module needs to be opened and the file name specified in the `CREATE FUNCTION` or `LOAD` command does not have a directory component (i.e., the name does not contain a slash), the system will search this path for the required file.
+
+The value for `dynamic_library_path` must be a list of absolute directory paths separated by colons (or semi-colons on Windows). If a list element starts with the special string `$libdir`, the compiled-in PostgreSQL package library directory is substituted for `$libdir`; this is where the modules provided by the standard PostgreSQL distribution are installed. (Use `pg_config --pkglibdir` to find out the name of this directory.) For example:
+
+```
+dynamic_library_path = '/usr/local/lib/postgresql:/home/my_project/lib:$libdir'
+```
+
+or, in a Windows environment:
+
+```
+dynamic_library_path = 'C:\tools\postgresql;H:\my_project\lib;$libdir'
+```
+
+The default value for this parameter is `'$libdir'`. If the value is set to an empty string, the automatic path search is turned off.
+
+This parameter can be changed at run time by superusers, but a setting done that way will only persist until the end of the client connection, so this method should be reserved for development purposes. The recommended way to set this parameter is in the `postgresql.conf` configuration file.
+
+`gin_fuzzy_search_limit` (`integer`) [](<>)
+
+Soft upper limit of the size of the set returned by GIN index scans. For more information see [Section 67.5](gin-tips.html).
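+
+Returning to the preloading settings of Section 20.11.3, here is a hedged sketch of enabling the auto_explain module for one role (the role name is hypothetical):
+
+```
+-- Preload auto_explain in every new session of this role.
+ALTER ROLE report_user SET session_preload_libraries = 'auto_explain';
+-- Optionally have the module log plans of statements slower than 500 ms.
+ALTER ROLE report_user SET auto_explain.log_min_duration = '500ms';
+```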
diff --git a/docs/X/runtime-config-compatible.md b/docs/en/runtime-config-compatible.md
similarity index 100%
rename from docs/X/runtime-config-compatible.md
rename to docs/en/runtime-config-compatible.md
diff --git a/docs/en/runtime-config-compatible.zh.md b/docs/en/runtime-config-compatible.zh.md
new file mode 100644
index 0000000000000000000000000000000000000000..831cce4452073c955c7ab3681d4bf60c838e432a
--- /dev/null
+++ b/docs/en/runtime-config-compatible.zh.md
@@ -0,0 +1,55 @@
+## 20.13. Version and Platform Compatibility
+
+[20.13.1. Previous PostgreSQL Versions](runtime-config-compatible.html#RUNTIME-CONFIG-COMPATIBLE-VERSION)
+
+[20.13.2. Platform and Client Compatibility](runtime-config-compatible.html#RUNTIME-CONFIG-COMPATIBLE-CLIENTS)
+
+### 20.13.1. Previous PostgreSQL Versions
+
+`array_nulls` (`boolean`) [](<>)
+
+This controls whether the array input parser recognizes unquoted `NULL` as specifying a null array element. By default, this is `on`, allowing array values containing null values to be entered. However, PostgreSQL versions before 8.2 did not support null values in arrays, and therefore would treat `NULL` as specifying a normal array element with the string value “NULL”. For backward compatibility with applications that require the old behavior, this variable can be turned `off`.
+
+Note that it is possible to create array values containing null values even when this variable is `off`.
+
+`backslash_quote` (`enum`) [](<>) [](<>)
+
+This controls whether a quote mark can be represented by `\'` in a string literal. The preferred, SQL-standard way to represent a quote mark is by doubling it (`''`) but PostgreSQL has historically also accepted `\'`. However, use of `\'` creates security risks because in some client character set encodings, there are multibyte characters in which the last byte is numerically equivalent to ASCII `\`. If client-side code does escaping incorrectly then an SQL-injection attack is possible. This risk can be prevented by making the server reject queries in which a quote mark appears to be escaped by a backslash. The allowed values of `backslash_quote` are `on` (allow `\'` always), `off` (reject always), and `safe_encoding` (allow only if client encoding does not allow ASCII `\` within a multibyte character). `safe_encoding` is the default setting.
+
+Note that in a standard-conforming string literal, `\` just means `\` anyway. This parameter only affects the handling of non-standard-conforming literals, including escape string syntax (`E'...'`).
+
+`escape_string_warning` (`boolean`) [](<>) [](<>)
+
+When on, a warning is issued if a backslash (`\`) appears in an ordinary string literal (`'...'` syntax) and `standard_conforming_strings` is off. The default is `on`.
+
+Applications that wish to use backslash as escape should be modified to use escape string syntax (`E'...'`), because the default behavior of ordinary strings is now to treat backslash as an ordinary character, per the SQL standard. This variable can be enabled to help locate code that needs to be changed.
+
+`lo_compat_privileges` (`boolean`) [](<>)
+
+In PostgreSQL releases prior to 9.0, large objects did not have access privileges and were, therefore, always readable and writable by all users. Setting this variable to `on` disables the new privilege checks, for compatibility with prior releases. The default is `off`. Only superusers can change this setting.
+
+Setting this variable does not disable all security checks related to large objects; it disables only those for which the default behavior changed in PostgreSQL 9.0.
+
+`quote_all_identifiers` (`boolean`) [](<>)
+
+When the database generates SQL, force all identifiers to be quoted, even if they are not (currently) keywords. This will affect the output of `EXPLAIN` as well as the results of functions like `pg_get_viewdef`. See also the `--quote-all-identifiers` option of [pg_dump](app-pgdump.html) and [pg_dumpall](app-pg-dumpall.html).
+
+`standard_conforming_strings` (`boolean`) [](<>) [](<>)
+
+This controls whether ordinary string literals (`'...'`) treat backslashes literally, as specified in the SQL standard. Beginning in PostgreSQL 9.1, the default is `on` (prior releases defaulted to `off`). Applications can check this parameter to determine how string literals will be processed. The presence of this parameter can also be taken as an indication that the escape string syntax (`E'...'`) is supported. Escape string syntax ([Section 4.1.2.2](sql-syntax-lexical.html#SQL-SYNTAX-STRINGS-ESCAPE)) should be used if an application desires backslashes to be treated as escape characters.
+
+`synchronize_seqscans` (`boolean`) [](<>)
+
+This allows sequential scans of large tables to synchronize with each other, so that concurrent scans read the same block at about the same time and hence share the I/O workload. When this is enabled, a scan might start in the middle of the table and then “wrap around” the end to cover all rows, so as to synchronize with the activity of scans already in progress. This can result in unpredictable changes in the row ordering returned by queries that have no `ORDER BY` clause. Setting this parameter to `off` ensures the pre-8.3 behavior in which a sequential scan always starts from the beginning of the table. The default is `on`.
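+
+Before moving on, a hedged illustration of the two string syntaxes discussed in this section:
+
+```
+-- With standard_conforming_strings = on (the default), backslash is literal:
+SELECT 'a\nb';    -- four characters: a, backslash, n, b
+
+-- Escape string syntax interprets backslash escapes explicitly:
+SELECT E'a\nb';   -- three characters: a, newline, b
+```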
+
+### 20.13.2. Platform and Client Compatibility
+
+`transform_null_equals` (`boolean`) [](<>) [](<>)
+
+When on, expressions of the form `expr = NULL` (or `NULL = expr`) are treated as `expr IS NULL`, that is, they return true if `expr` evaluates to the null value, and false otherwise. The correct SQL-spec-compliant behavior of `expr = NULL` is to always return null (unknown). Therefore this parameter defaults to `off`.
+
+However, filtered forms in Microsoft Access generate queries that appear to use `expr = NULL` to test for null values, so if you use that interface to access the database you might want to turn this option on. Since expressions of the form `expr = NULL` always return the null value (using the SQL standard interpretation), they are not very useful and do not appear often in normal applications, so this option does little harm in practice. But new users are frequently confused about the semantics of expressions involving null values, so this option is off by default.
+
+Note that this option affects only the exact form `= NULL`, not other comparison operators or other expressions that are computationally equivalent to some expression involving the equals operator (such as `IN`). Thus, this option is not a general fix for bad programming.
+
+Refer to [Section 9.2](functions-comparison.html) for related information.
diff --git a/docs/X/runtime-config-connection.md b/docs/en/runtime-config-connection.md
similarity index 100%
rename from docs/X/runtime-config-connection.md
rename to docs/en/runtime-config-connection.md
diff --git a/docs/en/runtime-config-connection.zh.md b/docs/en/runtime-config-connection.zh.md
new file mode 100644
index 0000000000000000000000000000000000000000..20ff906e47802e7cdaeda2dc252f5b25286781ac
--- /dev/null
+++ b/docs/en/runtime-config-connection.zh.md
@@ -0,0 +1,249 @@
+## 20.3. Connections and Authentication
+
+[20.3.1. Connection Settings](runtime-config-connection.html#RUNTIME-CONFIG-CONNECTION-SETTINGS)
+
+[20.3.2. Authentication](runtime-config-connection.html#RUNTIME-CONFIG-CONNECTION-AUTHENTICATION)
+
+[20.3.3. SSL](runtime-config-connection.html#RUNTIME-CONFIG-CONNECTION-SSL)
+
+### 20.3.1. Connection Settings
+
+`listen_addresses` (`string`) [](<>)
+
+Specifies the TCP/IP address(es) on which the server is to listen for connections from client applications. The value takes the form of a comma-separated list of host names and/or numeric IP addresses. The special entry `*` corresponds to all available IP interfaces. The entry `0.0.0.0` allows listening for all IPv4 addresses and `::` allows listening for all IPv6 addresses. If the list is empty, the server does not listen on any IP interface at all, in which case only Unix-domain sockets can be used to connect to it. The default value is localhost, which allows only local TCP/IP “loopback” connections to be made. While client authentication ([Chapter 21](client-authentication.html)) allows fine-grained control over who can access the server, `listen_addresses` controls which interfaces accept connection attempts, which can help prevent repeated malicious connection requests on insecure network interfaces. This parameter can only be set at server start.
+
+`port` (`integer`) [](<>)
+
+The TCP port the server listens on; 5432 by default. Note that the same port number is used for all IP addresses the server listens on. This parameter can only be set at server start.
+
+`max_connections` (`integer`) [](<>)
+
+Determines the maximum number of concurrent connections to the database server. The default is typically 100 connections, but might be less if your kernel settings will not support it (as determined during initdb). This parameter can only be set at server start.
+
+When running a standby server, you must set this parameter to the same or higher value than on the primary server. Otherwise, queries will not be allowed in the standby server.
+
+`superuser_reserved_connections` (`integer`) [](<>)
+
+Determines the number of connection “slots” that are reserved for connections by PostgreSQL superusers. At most [max_connections](runtime-config-connection.html#GUC-MAX-CONNECTIONS) connections can ever be active simultaneously. Whenever the number of active concurrent connections is at least `max_connections` minus `superuser_reserved_connections`, new connections will be accepted only for superusers, and no new replication connections will be accepted.
+
+The default value is three connections. The value must be less than `max_connections`. This parameter can only be set at server start.
+
+`unix_socket_directories` (`string`) [](<>)
+
+Specifies the directory of the Unix-domain socket(s) on which the server is to listen for connections from client applications. Multiple sockets can be created by listing multiple directories separated by commas. Whitespace between entries is ignored; surround a directory name with double quotes if you need to include whitespace or commas in the name. An empty value specifies not listening on any Unix-domain sockets, in which case only TCP/IP sockets can be used to connect to the server.
+
+A value that starts with `@` specifies that a Unix-domain socket in the abstract namespace should be created (currently supported on Linux and Windows). In that case, this value does not specify a “directory” but a prefix from which the actual socket name is computed in the same manner as for the file-system namespace. While the abstract socket name prefix can be chosen freely, since it is not a file-system location, the convention is nonetheless to use file-system-like values such as `@/tmp`.
+
+The default value is normally `/tmp`, but that can be changed at build time. On Windows, the default is empty, which means no Unix-domain socket is created by default. This parameter can only be set at server start.
+
+In addition to the socket file itself, which is named `.s.PGSQL.nnnn` where `nnnn` is the server's port number, an ordinary file named `.s.PGSQL.nnnn.lock` will be created in each of the `unix_socket_directories` directories. Neither file should ever be removed manually. For sockets in the abstract namespace, no lock file is created.
+
+`unix_socket_group` (`string`) [](<>)
+
+Sets the owning group of the Unix-domain socket(s). (The owning user of the sockets is always the user that starts the server.) In combination with the parameter `unix_socket_permissions` this can be used as an additional access control mechanism for Unix-domain connections. By default this is the empty string, which uses the default group of the server user. This parameter can only be set at server start.
+
+This parameter is not supported on Windows; any setting will be ignored. Also, sockets in the abstract namespace have no file owner, so this setting is likewise ignored in that case.
+
+`unix_socket_permissions` (`integer`) [](<>)
+
+Sets the access permissions of the Unix-domain socket(s). Unix-domain sockets use the usual Unix file system permission set. The parameter value is expected to be a numeric mode specified in the format accepted by the `chmod` and `umask` system calls. (To use the customary octal format the number must start with a `0` (zero).)
+
+The default permissions are `0777`, meaning anyone can connect. Reasonable alternatives are `0770` (only user and group, see also `unix_socket_group`) and `0700` (only user). (Note that for a Unix-domain socket, only write permission matters, so there is no point in setting or revoking read or execute permissions.)
+
+This access control mechanism is independent of the one described in [Chapter 21](client-authentication.html).
+
+This parameter can only be set at server start.
+
+This parameter is irrelevant on systems, notably Solaris as of Solaris 10 and later, that ignore socket permissions entirely. There, one can achieve a similar effect by pointing `unix_socket_directories` to a directory having search permission limited to the desired audience.
+
+Sockets in the abstract namespace have no file permissions, so this setting is also ignored in that case.
+
+`bonjour` (`boolean`) [](<>)
+
+Enables advertising the server's existence via Bonjour. The default is off. This parameter can only be set at server start.
+
+`bonjour_name` (`string`) [](<>)
+
+Specifies the Bonjour service name. The computer name is used if this parameter is set to the empty string `''` (which is the default). This parameter is ignored if the server was not compiled with Bonjour support. This parameter can only be set at server start.
+
+`tcp_keepalives_idle` (`integer`) [](<>)
+
+Specifies the amount of time with no network activity after which the operating system should send a TCP keepalive message to the client. If this value is specified without units, it is taken as seconds. A value of 0 (the default) selects the operating system's default. This parameter is supported only on systems that support `TCP_KEEPIDLE` or an equivalent socket option, and on Windows; on other systems, it must be zero. In sessions connected via a Unix-domain socket, this parameter is ignored and always reads as zero.
+
+### Note
+
+On Windows, setting a value of 0 will set this parameter to 2 hours, since Windows does not provide a way to read the system default value.
+
+`tcp_keepalives_interval` (`integer`) [](<>)
+
+Specifies the amount of time after which a TCP keepalive message that has not been acknowledged by the client should be retransmitted. If this value is specified without units, it is taken as seconds. A value of 0 (the default) selects the operating system's default. This parameter is supported only on systems that support `TCP_KEEPINTVL` or an equivalent socket option, and on Windows; on other systems, it must be zero. In sessions connected via a Unix-domain socket, this parameter is ignored and always reads as zero.
+
+### Note
+
+On Windows, setting a value of 0 will set this parameter to 1 second, since Windows does not provide a way to read the system default value.
+
+`tcp_keepalives_count` (`integer`) [](<>)
+
+Specifies the number of TCP keepalive messages that can be lost before the server's connection to the client is considered dead. A value of 0 (the default) selects the operating system's default. This parameter is supported only on systems that support `TCP_KEEPCNT` or an equivalent socket option; on other systems, it must be zero. In sessions connected via a Unix-domain socket, this parameter is ignored and always reads as zero.
+
+### Note
+
+This parameter is not supported on Windows, and must be zero.
+
+`tcp_user_timeout` (`integer`) [](<>)
+
+Specifies the amount of time that transmitted data may remain unacknowledged before the TCP connection is forcibly closed. If this value is specified without units, it is taken as milliseconds. A value of 0 (the default) selects the operating system's default. This parameter is supported only on systems that support `TCP_USER_TIMEOUT`; on other systems, it must be zero. In sessions connected via a Unix-domain socket, this parameter is ignored and always reads as zero.
+
+### Note
+
+This parameter is not supported on Windows, and must be zero.
+
+`client_connection_check_interval` (`integer`) [](<>)
+
+Sets the time interval between optional checks that the client is still connected, while running queries. The check is performed by polling the socket, and allows long-running queries to be aborted sooner if the kernel reports that the connection is closed.
+
+This option is currently available only on systems that support the non-standard `POLLRDHUP` extension to the `poll` system call, including Linux.
+
+If the value is specified without units, it is taken as milliseconds. The default value is `0`, which disables connection checks. Without connection checks, the server will detect the loss of the connection only at the next interaction with the socket, when it waits for, receives, or sends data.
+
+For the kernel itself to detect lost TCP connections reliably in all scenarios, including network failure, it may also be necessary to adjust the TCP keepalive settings of the operating system, or the [tcp_keepalives_idle](runtime-config-connection.html#GUC-TCP-KEEPALIVES-IDLE), [tcp_keepalives_interval](runtime-config-connection.html#GUC-TCP-KEEPALIVES-INTERVAL) and [tcp_keepalives_count](runtime-config-connection.html#GUC-TCP-KEEPALIVES-COUNT) settings of PostgreSQL.
+
+### 20.3.2. Authentication
+
+`authentication_timeout` (`integer`) [](<>) [](<>) [](<>)
+
+Maximum amount of time allowed to complete client authentication. If a would-be client has not completed the authentication protocol in this much time, the server closes the connection. This prevents hung clients from occupying a connection indefinitely. If this value is specified without units, it is taken as seconds. The default is one minute (`1m`). This parameter can only be set in the `postgresql.conf` file or on the server command line.
+
+`password_encryption` (`enum`) [](<>)
+
+When a password is specified in [CREATE ROLE](sql-createrole.html) or [ALTER ROLE](sql-alterrole.html), this parameter determines the algorithm to use to encrypt the password. Possible values are `scram-sha-256`, which will encrypt the password with SCRAM-SHA-256, and `md5`, which stores the password as an MD5 hash. The default is `scram-sha-256`.
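+
+For example, a hedged sketch of re-encrypting a role's password under the current setting (the role name is hypothetical):
+
+```
+-- New password hashes will use SCRAM-SHA-256.
+SET password_encryption = 'scram-sha-256';
+ALTER ROLE app_user PASSWORD 'new-secret';  -- stored as a SCRAM verifier
+```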
+
+Note that older clients might lack support for the SCRAM authentication mechanism, and hence not work with passwords encrypted with SCRAM-SHA-256. See [Section 21.5](auth-password.html) for more details.
+
+`krb_server_keyfile` (`string`) [](<>)
+
+Sets the location of the server's Kerberos key file. The default is `FILE:/usr/local/pgsql/etc/krb5.keytab` (where the directory part is whatever was specified as `sysconfdir` at build time; use `pg_config --sysconfdir` to determine that). If this parameter is set to an empty string, it is ignored and a system-dependent default is used. This parameter can only be set in the `postgresql.conf` file or on the server command line. See [Section 21.6](gssapi-auth.html) for more information.
+
+`krb_caseins_users` (`boolean`) [](<>)
+
+Sets whether GSSAPI user names should be treated case-insensitively. The default is `off` (case sensitive). This parameter can only be set in the `postgresql.conf` file or on the server command line.
+
+`db_user_namespace` (`boolean`) [](<>)
+
+This parameter enables per-database user names. It is off by default. This parameter can only be set in the `postgresql.conf` file or on the server command line.
+
+If this is on, you should create users as `username@dbname`. When `username` is passed by a connecting client, `@` and the database name are appended to the user name and that database-specific user name is looked up by the server. Note that when you create users with names containing `@` within the SQL environment, you will need to quote the user name.
+
+With this parameter enabled, you can still create ordinary global users. Simply append `@` when specifying the user name in the client, e.g., `joe@`. The `@` will be stripped off before the user name is looked up by the server.
+
+`db_user_namespace` causes the client's and server's user name representation to differ. Authentication checks are always done with the server's user name so authentication methods must be configured for the server's user name, not the client's. Because `md5` uses the user name as salt on both the client and server, `md5` cannot be used with `db_user_namespace`.
+
+### Note
+
+This feature is intended as a temporary measure until a complete solution is found. At that time, this option will be removed.
+
+### 20.3.3. SSL
+
+See [Section 19.9](ssl-tcp.html) for more information about setting up SSL.
+
+`ssl` (`boolean`) [](<>)
+
+Enables SSL connections. This parameter can only be set in the `postgresql.conf` file or on the server command line. The default is `off`.
+
+`ssl_ca_file` (`string`) [](<>)
+
+Specifies the name of the file containing the SSL server certificate authority (CA). Relative paths are relative to the data directory. This parameter can only be set in the `postgresql.conf` file or on the server command line. The default is empty, meaning no CA file is loaded, and client certificate verification is not performed.
+
+`ssl_cert_file` (`string`) [](<>)
+
+Specifies the name of the file containing the SSL server certificate. Relative paths are relative to the data directory. This parameter can only be set in the `postgresql.conf` file or on the server command line. The default is `server.crt`.
+
+`ssl_crl_file` (`string`) [](<>)
+
+Specifies the name of the file containing the SSL client certificate revocation list (CRL). Relative paths are relative to the data directory. This parameter can only be set in the `postgresql.conf` file or on the server command line. The default is empty, meaning no CRL file is loaded (unless [ssl_crl_dir](runtime-config-connection.html#GUC-SSL-CRL-DIR) is set).
+
+`ssl_crl_dir` (`string`) [](<>)
+
+Specifies the name of the directory containing the SSL client certificate revocation list (CRL). Relative paths are relative to the data directory. This parameter can only be set in the `postgresql.conf` file or on the server command line. The default is empty, meaning no CRLs are used (unless [ssl_crl_file](runtime-config-connection.html#GUC-SSL-CRL-FILE) is set).
+
+The directory needs to be prepared with the OpenSSL command `openssl rehash` or `c_rehash`. See its documentation for details.
+
+When using this setting, CRLs in the specified directory are loaded on-demand at connection time. New CRLs can be added to the directory and will be used immediately. This is unlike [ssl_crl_file](runtime-config-connection.html#GUC-SSL-CRL-FILE), which causes the CRL in the file to be loaded at server start time or when the configuration is reloaded. Both settings can be used together.
+
+`ssl_key_file` (`string`) [](<>)
+
+Specifies the name of the file containing the SSL server private key. Relative paths are relative to the data directory. This parameter can only be set in the `postgresql.conf` file or on the server command line. The default is `server.key`.
+
+`ssl_ciphers` (`string`) [](<>)
+
+Specifies a list of SSL cipher suites that are allowed to be used by SSL connections. See the ciphers manual page in the OpenSSL package for the syntax of this setting and a list of supported values. Only connections using TLS version 1.2 and lower are affected. There is currently no setting that controls the cipher choices used by TLS version 1.3 connections. The default value is `HIGH:MEDIUM:+3DES:!aNULL`. The default is usually a reasonable choice unless you have specific security requirements.
+
+This parameter can only be set in the `postgresql.conf` file or on the server command line.
+
+Explanation of the default value:
+
+`HIGH`
+
+Cipher suites that use ciphers from the `HIGH` group (e.g., AES, Camellia, 3DES)
+
+`MEDIUM`
+
+Cipher suites that use ciphers from the `MEDIUM` group (e.g., RC4, SEED)
+
+`+3DES`
+
+The OpenSSL default order for `HIGH` is problematic because it orders 3DES higher than AES128. This is wrong because 3DES offers less security than AES128, and it is also much slower. `+3DES` reorders it after all other `HIGH` and `MEDIUM` ciphers.
+
+`!aNULL`
+
+Disables anonymous cipher suites that do no authentication. Such cipher suites are vulnerable to MITM attacks and therefore should not be used.
+
+Available cipher suite details will vary across OpenSSL versions. Use the command `openssl ciphers -v 'HIGH:MEDIUM:+3DES:!aNULL'` to see actual details for the currently installed OpenSSL version. Note that this list is filtered at run time based on the server key type.
+
+`ssl_prefer_server_ciphers` (`boolean`) [](<>)
+
+Specifies whether to use the server's SSL cipher preferences, rather than the client's. This parameter can only be set in the `postgresql.conf` file or on the server command line. The default is `on`.
+
+Older PostgreSQL versions do not have this setting and always use the client's preferences. This setting is mainly for backward compatibility with those versions. Using the server's preferences is usually better because it is more likely that the server is appropriately configured.
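+
+A hedged postgresql.conf sketch of a basic SSL setup using the parameters above (the CA file name is hypothetical; the others are the defaults):
+
+```
+ssl = on
+ssl_cert_file = 'server.crt'   # relative to the data directory
+ssl_key_file = 'server.key'
+ssl_ca_file = 'root.crt'       # enables client certificate verification
+```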
+
+`ssl_ecdh_curve` (`string`) [](<>)
+
+Specifies the name of the curve to use in ECDH key exchange. It needs to be supported by all clients that connect. It does not need to be the same curve used by the server's Elliptic Curve key. This parameter can only be set in the `postgresql.conf` file or on the server command line. The default is `prime256v1`.
+
+OpenSSL names for the most common curves are: `prime256v1` (NIST P-256), `secp384r1` (NIST P-384), `secp521r1` (NIST P-521). The full list of available curves can be shown with the command `openssl ecparam -list_curves`. Not all of them are usable in TLS though.
+
+`ssl_min_protocol_version` (`enum`) [](<>)
+
+Sets the minimum SSL/TLS protocol version to use. Valid values are currently: `TLSv1`, `TLSv1.1`, `TLSv1.2`, `TLSv1.3`. Older versions of the OpenSSL library do not support all values; an error will be raised if an unsupported setting is chosen. Protocol versions before TLS 1.0, namely SSL version 2 and 3, are always disabled.
+
+The default is `TLSv1.2`, which satisfies industry best practices as of this writing.
+
+This parameter can only be set in the `postgresql.conf` file or on the server command line.
+
+`ssl_max_protocol_version` (`enum`) [](<>)
+
+Sets the maximum SSL/TLS protocol version to use. Valid values are as for [ssl_min_protocol_version](runtime-config-connection.html#GUC-SSL-MIN-PROTOCOL-VERSION), with addition of an empty string, which allows any protocol version. The default is to allow any version. Setting the maximum protocol version is mainly useful for testing or if some component has issues working with a newer protocol.
+
+This parameter can only be set in the `postgresql.conf` file or on the server command line.
+
+`ssl_dh_params_file` (`string`) [](<>)
+
+Specifies the name of the file containing Diffie-Hellman parameters used for the so-called ephemeral DH family of SSL ciphers. The default is empty, in which case compiled-in default DH parameters are used. Using custom DH parameters reduces the exposure if an attacker manages to crack the well-known compiled-in DH parameters. You can create your own DH parameters file with the command `openssl dhparam -out dhparams.pem 2048`.
+
+This parameter can only be set in the `postgresql.conf` file or on the server command line.
+
+`ssl_passphrase_command` (`string`) [](<>)
+
+Sets an external command to be invoked when a passphrase for decrypting an SSL file such as a private key needs to be obtained. By default, this parameter is empty, which means the built-in prompting mechanism is used.
+
+The command must print the passphrase to the standard output and exit with code 0. In the parameter value, `%p` is replaced by a prompt string. (Write `%%` for a literal `%`.) Note that the prompt string will probably contain whitespace, so be sure to quote adequately. A single newline is stripped from the end of the output if present.
+
+The command does not actually have to prompt the user for a passphrase. It can read it from a file, obtain it from a keychain facility, or similar. It is up to the user to make sure the chosen mechanism is adequately secure.
+
+This parameter can only be set in the `postgresql.conf` file or on the server command line.
+
+`ssl_passphrase_command_supports_reload` (`boolean`) [](<>)
+
+This parameter determines whether the passphrase command set by `ssl_passphrase_command` will also be called during a configuration reload if a key file needs a passphrase. If this parameter is off (the default), then `ssl_passphrase_command` will be ignored during a reload and the SSL configuration will not be reloaded if a passphrase is needed. That setting is appropriate for a command that requires a TTY for prompting, which might not be available when the server is running. Setting this parameter to on might be appropriate if the passphrase is obtained from a file, for example.
+
+This parameter can only be set in the `postgresql.conf` file or on the server command line.
diff --git a/docs/X/runtime-config-developer.md b/docs/en/runtime-config-developer.md
similarity index 100%
rename from docs/X/runtime-config-developer.md
rename to docs/en/runtime-config-developer.md
diff --git a/docs/en/runtime-config-developer.zh.md b/docs/en/runtime-config-developer.zh.md
new file mode 100644
index 0000000000000000000000000000000000000000..244879d178e5737a563bc1f8fc8426e935af98ad
--- /dev/null
+++ b/docs/en/runtime-config-developer.zh.md
@@ -0,0 +1,162 @@
+## 20.17. Developer Options
+
+The following parameters are intended for developer testing, and should never be used on a production database. However, some of them can be used to assist with the recovery of severely damaged databases. As such, they have been excluded from the sample `postgresql.conf` file. Note that many of these parameters require special source compilation flags to work at all.
+
+`allow_system_table_mods` (`boolean`) [](<>)
+
+Allows modification of the structure of system tables as well as certain other risky actions on system tables. This is otherwise not allowed even for superusers. Ill-advised use of this setting can cause irretrievable data loss or seriously corrupt the database system. Only superusers can change this setting.
+
+`backtrace_functions` (`string`) [](<>)
+
+This parameter contains a comma-separated list of C function names. If an error is raised and the name of the internal C function where the error happens matches a value in the list, then a backtrace is written to the server log together with the error message. This can be used to debug specific areas of the source code.
+
+Backtrace support is not available on all platforms, and the quality of the backtraces depends on compilation options.
+
+This parameter can only be set by superusers.
+
+`debug_discard_caches` (`integer`) [](<>)
+
+When set to `1`, each system catalog cache entry is invalidated at the first possible opportunity, whether or not anything that would render it invalid really occurred. Caching of system catalogs is effectively disabled as a result, so the server will run extremely slowly. Higher values run the cache invalidation recursively, which is even slower and only useful for testing the caching logic itself. The default value of `0` selects normal catalog caching behavior.
+
+This parameter can be very helpful when trying to trigger hard-to-reproduce bugs involving concurrent catalog changes, but it is otherwise rarely needed. See the source code files `inval.c` and `pg_config_manual.h` for details.
+
+This parameter is supported when `DISCARD_CACHES_ENABLED` was defined at compile time (which happens automatically when using the configure option `--enable-cassert`). In production builds, its value will always be `0` and attempts to set it to another value will raise an error.
+
+`force_parallel_mode` (`enum`) [](<>)
+
+Allows the use of parallel queries for testing purposes even in cases where no performance benefit is expected. The allowed values of `force_parallel_mode` are `off` (use parallel mode only when it is expected to improve performance), `on` (force parallel query for all queries for which it is thought to be safe), and `regress` (like `on`, but with additional behavior changes as explained below).
+
+More specifically, setting this value to `on` will add a `Gather` node to the top of any query plan for which this appears to be safe, so that the query runs inside of a parallel worker. Even when a parallel worker is not available or cannot be used, operations such as starting a subtransaction that would be prohibited in a parallel query context will be prohibited unless the planner believes that this will cause the query to fail. If failures or unexpected results occur when this option is set, some functions used by the query may need to be marked `PARALLEL UNSAFE` (or, possibly, `PARALLEL RESTRICTED`).
+
+Setting this value to `regress` has all of the same effects as setting it to `on` plus some additional effects that are intended to facilitate automated regression testing. Normally, messages from a parallel worker include a context line indicating that, but a setting of `regress` suppresses this line so that the output is the same as in non-parallel execution. Also, the `Gather` nodes added to plans by this setting are hidden in `EXPLAIN` output so that the output matches what would be obtained if this setting were turned `off`.
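+
+As a hedged sketch of using this for testing (any parallel-safe query would do):
+
+```
+-- Force a Gather node onto plans that look parallel-safe, for testing only.
+SET force_parallel_mode = on;
+EXPLAIN (COSTS OFF) SELECT count(*) FROM pg_class;
+```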
+
+`ignore_system_indexes` (`boolean`) [](<>)
+
+Ignore system indexes when reading system tables (but still update the indexes when modifying the tables). This is useful when recovering from damaged system indexes. This parameter cannot be changed after session start.
+
+`post_auth_delay` (`integer`) [](<>)
+
+The amount of time to delay when a new server process is started, after it conducts the authentication procedure. This is intended to give developers an opportunity to attach to the server process with a debugger. If this value is specified without units, it is taken as seconds. A value of zero (the default) disables the delay. This parameter cannot be changed after session start.
+
+`pre_auth_delay` (`integer`) [](<>)
+
+The amount of time to delay just after a new server process is forked, before it conducts the authentication procedure. This is intended to give developers an opportunity to attach to the server process with a debugger to trace down misbehavior in authentication. If this value is specified without units, it is taken as seconds. A value of zero (the default) disables the delay. This parameter can only be set in the `postgresql.conf` file or on the server command line.
+
+`trace_notify` (`boolean`) [](<>)
+
+Generates a great amount of debugging output for the `LISTEN` and `NOTIFY` commands. [client_min_messages](runtime-config-client.html#GUC-CLIENT-MIN-MESSAGES) or [log_min_messages](runtime-config-logging.html#GUC-LOG-MIN-MESSAGES) must be `DEBUG1` or lower to send this output to the client or server logs, respectively.
+
+`trace_recovery_messages` (`enum`) [](<>)
+
+Enables logging of recovery-related debugging output that otherwise would not be logged. This parameter allows the user to override the normal setting of [log_min_messages](runtime-config-logging.html#GUC-LOG-MIN-MESSAGES), but only for specific messages. This is intended for use in debugging hot standby. Valid values are `DEBUG5`, `DEBUG4`, `DEBUG3`, `DEBUG2`, `DEBUG1`, and `LOG`. The default, `LOG`, does not affect logging decisions at all. The other values cause recovery-related debug messages of that priority or higher to be logged as though they had `LOG` priority; for common settings of `log_min_messages` this results in unconditionally sending them to the server log. This parameter can only be set in the `postgresql.conf` file or on the server command line.
+
+`trace_sort` (`boolean`) [](<>)
+
+If on, emit information about resource usage during sort operations. This parameter is only available if the `TRACE_SORT` macro was defined when PostgreSQL was compiled. (However, `TRACE_SORT` is currently defined by default.)
+
+`trace_locks` (`boolean`) [](<>)
+
+If on, emit information about lock usage. Information dumped includes the type of lock operation, the type of lock and the unique identifier of the object being locked or unlocked. Also included are bit masks for the lock types already granted on this object as well as for the lock types awaited on this object. For each lock type a count of the number of granted locks and waiting locks is also dumped as well as the totals. An example of the log file output is shown here:
+
+```
+LOG: LockAcquire: new: lock(0xb7acd844) id(24688,24696,0,0,0,1)
+     grantMask(0) req(0,0,0,0,0,0,0)=0 grant(0,0,0,0,0,0,0)=0
+     wait(0) type(AccessShareLock)
+LOG: GrantLock: lock(0xb7acd844) id(24688,24696,0,0,0,1)
+     grantMask(2) req(1,0,0,0,0,0,0)=1 grant(1,0,0,0,0,0,0)=1
+     wait(0) type(AccessShareLock)
+LOG: UnGrantLock: updated: lock(0xb7acd844) id(24688,24696,0,0,0,1)
+     grantMask(0) req(0,0,0,0,0,0,0)=0 grant(0,0,0,0,0,0,0)=0
+     wait(0) type(AccessShareLock)
+LOG: CleanUpLock: deleting: lock(0xb7acd844) id(24688,24696,0,0,0,1)
+     grantMask(0) req(0,0,0,0,0,0,0)=0 grant(0,0,0,0,0,0,0)=0
+     wait(0) type(INVALID)
+```
+
+Details of the structure being dumped may be found in `src/include/storage/lock.h`.
+
+This parameter is only available if the `LOCK_DEBUG` macro was defined when PostgreSQL was compiled.
+
+`trace_lwlocks` (`boolean`) [](<>)
+
+If on, emit information about lightweight lock usage. Lightweight locks are intended primarily to provide mutual exclusion of access to shared-memory data structures.
+
+This parameter is only available if the `LOCK_DEBUG` macro was defined when PostgreSQL was compiled.
+
+`trace_userlocks` (`boolean`) [](<>)
+
+If on, emit information about user lock usage. Output is the same as for `trace_locks`, only for advisory locks.
+
+This parameter is only available if the `LOCK_DEBUG` macro was defined when PostgreSQL was compiled.
+
+`trace_lock_oidmin` (`integer`) [](<>)
+
+If set, do not trace locks for tables below this OID (used to avoid output on system tables).
+
+This parameter is only available if the `LOCK_DEBUG` macro was defined when PostgreSQL was compiled.
+
+`trace_lock_table` (`integer`) [](<>)
+
+Unconditionally trace locks on this table (OID).
+
+This parameter is only available if the `LOCK_DEBUG` macro was defined when PostgreSQL was compiled.
+
+`debug_deadlocks` (`boolean`) [](<>)
+
+If set, dumps information about all current locks when a deadlock timeout occurs.
+
+This parameter is only available if the `LOCK_DEBUG` macro was defined when PostgreSQL was compiled.
+
+`log_btree_build_stats` (`boolean`) [](<>)
+
+If set, logs system resource usage statistics (memory and CPU) on various B-tree operations.
+
+This parameter is only available if the `BTREE_BUILD_STATS` macro was defined when PostgreSQL was compiled.
+
+`wal_consistency_checking` (`string`) [](<>)
+
+This parameter is intended to be used to check for bugs in the WAL redo routines. When enabled, full-page images of any buffers modified in conjunction with the WAL record are added to the record. If the record is subsequently replayed, the system will first apply each record and then test whether the buffers modified by the record match the stored images. In certain cases (such as hint bits), minor variations are acceptable, and will be ignored. Any unexpected differences will result in a fatal error, terminating recovery.
+
+The default value of this setting is the empty string, which disables the feature. It can be set to `all` to check all records, or to a comma-separated list of resource managers to check only records originating from those resource managers. Currently, the supported resource managers are `heap`, `heap2`, `btree`, `hash`, `gin`, `gist`, `sequence`, `spgist`, `brin`, and `generic`. Only superusers can change this setting.
+
+`wal_debug` (`boolean`) [](<>)
+
+If on, emit WAL-related debugging output. This parameter is only available if the `WAL_DEBUG` macro was defined when PostgreSQL was compiled.
+
+`ignore_checksum_failure` (`boolean`) [](<>)
+
+Only has effect if [data checksums](app-initdb.html#APP-INITDB-DATA-CHECKSUMS) are enabled.
+
+Detection of a checksum failure during a read normally causes PostgreSQL to report an error, aborting the current transaction. Setting `ignore_checksum_failure` to on causes the system to ignore the failure (but still report a warning), and continue processing. This behavior *may cause crashes, propagate or hide corruption, or lead to other serious problems*. However, it may allow you to get past the error and retrieve undamaged tuples that might still be present in the table if the block header is still sane. If the header is corrupt an error will be reported even if this option is enabled. The default setting is `off`, and it can only be changed by a superuser.
+
+`zero_damaged_pages` (`boolean`) [](<>)
+
+Detection of a damaged page header normally causes PostgreSQL to report an error, aborting the current transaction. Setting `zero_damaged_pages` to on causes the system to instead report a warning, zero out the damaged page in memory, and continue processing. This behavior *will destroy data*, namely all the rows on the damaged page. However, it does allow you to get past the error and retrieve rows from any undamaged pages that might be present in the table. It is useful for recovering data if corruption has occurred due to a hardware or software error. You should generally not set this on until you have given up hope of recovering data from the damaged pages of a table. Zeroed-out pages are not forced to disk, so it is recommended to recreate the table or the index before turning this parameter off again. The default setting is `off`, and it can only be changed by a superuser.
+
+`ignore_invalid_pages` (`boolean`) [](<>)
+
+If set to `off` (the default), detection of WAL records having references to invalid pages during recovery causes PostgreSQL to raise a PANIC-level error, aborting the recovery. Setting `ignore_invalid_pages` to `on` causes the system to ignore invalid page references in WAL records (but still report a warning), and continue the recovery. This behavior *may cause crashes, data loss, propagate or hide corruption, or lead to other serious problems*. However, it may allow you to get past the PANIC-level error, to finish the recovery, and to cause the server to start up. The parameter can only be set at server start. It only has effect during recovery or in standby mode.
+
+`jit_debugging_support` (`boolean`) [](<>)
+
+If LLVM has the required functionality, register generated functions with GDB. This makes debugging easier. The default setting is `off`. This parameter can only be set at server start.
+
+`jit_dump_bitcode` (`boolean`) [](<>)
+
+Writes the generated LLVM IR out to the file system, inside [data_directory](runtime-config-file-locations.html#GUC-DATA-DIRECTORY). This is only useful for working on the internals of the JIT implementation. The default setting is `off`. This parameter can only be changed by a superuser.
+
+`jit_expressions` (`boolean`) [](<>)
+
+Determines whether expressions are JIT compiled, when JIT compilation is activated (see [Section 32.2](jit-decision.html)). The default is `on`.
+
+`jit_profiling_support` (`boolean`) [](<>)
+
+If LLVM has the required functionality, emit the data needed to allow perf to profile functions generated by JIT. This writes out files to `~/.debug/jit/`; the user is responsible for performing cleanup when desired. The default setting is `off`. This parameter can only be set at server start.
+
+`jit_tuple_deforming` (`boolean`) [](<>)
+
+Determines whether tuple deforming is JIT compiled, when JIT compilation is activated (see [Section 32.2](jit-decision.html)). The default is `on`.
+
+`remove_temp_files_after_crash` (`boolean`) [](<>)
+
+When set to `on`, which is the default, PostgreSQL will automatically remove temporary files after a backend crash. If disabled, the files will be retained and may be used for debugging, for example. Repeated crashes may however result in accumulation of useless files. This parameter can only be set in the `postgresql.conf` file or on the server command line.
diff --git a/docs/X/runtime-config-error-handling.md b/docs/en/runtime-config-error-handling.md
similarity index 100%
rename from docs/X/runtime-config-error-handling.md
rename to docs/en/runtime-config-error-handling.md
diff --git a/docs/en/runtime-config-error-handling.zh.md b/docs/en/runtime-config-error-handling.zh.md
new file mode 100644
index 0000000000000000000000000000000000000000..e89dd8925355ba0c4b2ed2ed76fabfa47a0cfaec
--- /dev/null
+++ b/docs/en/runtime-config-error-handling.zh.md
@@ -0,0 +1,27 @@
+## 20.14. Error Handling
+
+`exit_on_error` (`boolean`) [](<>)
+
+If on, any error will terminate the current session. By default, this is set to off, so that only FATAL errors will terminate the session.
+
+`restart_after_crash` (`boolean`) [](<>)
+
+When set to on, which is the default, PostgreSQL will automatically reinitialize after a backend crash. Leaving this value set to on is normally the best way to maximize the availability of the database. However, in some circumstances, such as when PostgreSQL is being invoked by clusterware, it may be useful to disable the restart so that the clusterware can gain control and take any actions it deems appropriate.
+
+This parameter can only be set in the `postgresql.conf` file or on the server command line.
+
+`data_sync_retry` (`boolean`) [](<>)
+
+When set to off, which is the default, PostgreSQL will raise a PANIC-level error on failure to flush modified data files to the file system. This causes the database server to crash. This parameter can only be set at server start.
+
+On some operating systems, the status of data in the kernel's page cache is unknown after a write-back failure. In some cases it might have been entirely forgotten, making it unsafe to retry; the second attempt may be reported as successful, when in fact the data has been lost. In these circumstances, the only way to avoid data loss is to recover from the WAL after any failure is reported, preferably after investigating the root cause of the failure and replacing any faulty hardware.
+
+If set to on, PostgreSQL will instead report an error but continue to run so that the data flushing operation can be retried in a later checkpoint. Only set it to on after investigating the operating system's treatment of buffered data in case of write-back failure.
+
+`recovery_init_sync_method` (`enum`) [](<>)
+
+When set to `fsync`, which is the default, PostgreSQL will recursively open and synchronize all files in the data directory before crash recovery begins. The search for files will follow symbolic links for the WAL directory and each configured tablespace (but not any other symbolic links). This is intended to make sure that all WAL and data files are durably stored on disk before replaying changes. This applies whenever starting a database cluster that did not shut down cleanly, including copies created with pg_basebackup.
+
+On Linux, `syncfs` may be used instead, to ask the operating system to synchronize the whole file systems that contain the data directory, the WAL files and each tablespace (but not any other file systems that may be reachable through symbolic links). This may be a lot faster than the `fsync` setting, because it doesn't need to open each file one by one. On the other hand, it may be slower if a file system is shared by other applications that modify a lot of files, since those files will also be written to disk. Furthermore, on versions of Linux before 5.8, I/O errors encountered while writing data to disk may not be reported to PostgreSQL, and relevant error messages may appear only in kernel logs.
+
+This parameter can only be set in the `postgresql.conf` file or on the server command line.
diff --git a/docs/X/runtime-config-replication.md b/docs/en/runtime-config-replication.md
similarity index 100%
rename from docs/X/runtime-config-replication.md
rename to docs/en/runtime-config-replication.md
diff --git a/docs/en/runtime-config-replication.zh.md b/docs/en/runtime-config-replication.zh.md
new file mode 100644
index 0000000000000000000000000000000000000000..f60721db5bd11d639f47475d90226e50dbe27bbe
--- /dev/null
+++ b/docs/en/runtime-config-replication.zh.md
@@ -0,0 +1,199 @@
+## 20.6. Replication
+
+[20.6.1. Sending Servers](runtime-config-replication.html#RUNTIME-CONFIG-REPLICATION-SENDER)
+
+[20.6.2. Primary Server](runtime-config-replication.html#RUNTIME-CONFIG-REPLICATION-PRIMARY)
+
+[20.6.3. Standby Servers](runtime-config-replication.html#RUNTIME-CONFIG-REPLICATION-STANDBY)
+
+[20.6.4. Subscribers](runtime-config-replication.html#RUNTIME-CONFIG-REPLICATION-SUBSCRIBER)
+
+These settings control the behavior of the built-in *streaming replication* feature (see [Section 27.2.5](warm-standby.html#STREAMING-REPLICATION)). Servers will be either a primary or a standby server. Primaries can send data, while standbys are always receivers of replicated data. When cascading replication (see [Section 27.2.7](warm-standby.html#CASCADING-REPLICATION)) is used, standby servers can also be senders, as well as receivers. Parameters are mainly for sending and standby servers, though some parameters have meaning only on the primary server. Settings may vary across the cluster without problems if that is required.
+
+### 20.6.1. Sending Servers
+
+These parameters can be set on any server that is to send replication data to one or more standby servers. The primary is always a sending server, so these parameters must always be set on the primary. The role and meaning of these parameters does not change after a standby becomes the primary.
+
+`max_wal_senders` (`integer`) [](<>)
+
+Specifies the maximum number of concurrent connections from standby servers or streaming base backup clients (i.e., the maximum number of simultaneously running WAL sender processes). The default is `10`. The value `0` means replication is disabled. Abrupt disconnection of a streaming client might leave an orphaned connection slot behind until a timeout is reached, so this parameter should be set slightly higher than the maximum number of expected clients so disconnected clients can immediately reconnect. This parameter can only be set at server start. Also, `wal_level` must be set to `replica` or higher to allow connections from standby servers.
+
+When running a standby server, you must set this parameter to the same or higher value than on the primary server. Otherwise, queries will not be allowed in the standby server.
+
+`max_replication_slots` (`integer`) [](<>)
+
+Specifies the maximum number of replication slots (see [Section 27.2.6](warm-standby.html#STREAMING-REPLICATION-SLOTS)) that the server can support. The default is 10. This parameter can only be set at server start. Setting it to a lower value than the number of currently existing replication slots will prevent the server from starting. Also, `wal_level` must be set to `replica` or higher to allow replication slots to be used.
+
+On the subscriber side, specifies how many replication origins (see [Chapter 50](replication-origins.html)) can be tracked simultaneously, effectively limiting how many logical replication subscriptions can be created on the server. Setting it to a lower value than the current number of tracked replication origins (reflected in [pg_replication_origin_status](view-pg-replication-origin-status.html), not [pg_replication_origin](catalog-pg-replication-origin.html)) will prevent the server from starting.
+
+`wal_keep_size` (`integer`) [](<>)
+
+Specifies the minimum size of past log file segments kept in the `pg_wal` directory, in case a standby server needs to fetch them for streaming replication. If a standby server connected to the sending server falls behind by more than `wal_keep_size` megabytes, the sending server might remove WAL segments still needed by the standby, in which case the replication connection will be terminated. Downstream connections will also eventually fail as a result. (However, the standby server can recover by fetching the segments from the archive, if WAL archiving is in use.)
+
+This sets only the minimum size of segments retained in `pg_wal`; the system might need to retain more segments for WAL archival or to recover from a checkpoint. If `wal_keep_size` is zero (the default), the system doesn't keep any extra segments for standby purposes, so the number of old WAL segments available to standby servers is a function of the location of the previous checkpoint and the status of WAL archiving. If this value is specified without units, it is taken as megabytes. This parameter can only be set in the `postgresql.conf` file or on the server command line.
+
+`max_slot_wal_keep_size` (`integer`) [](<>)
+
+Specifies the maximum size of WAL files that [replication slots](warm-standby.html#STREAMING-REPLICATION-SLOTS) are allowed to retain in the `pg_wal` directory at checkpoint time. If `max_slot_wal_keep_size` is -1 (the default), replication slots may retain an unlimited amount of WAL files. Otherwise, if the `restart_lsn` of a replication slot falls behind the current LSN by more than the given size, the standby using the slot may no longer be able to continue replication due to removal of required WAL files. You can see the WAL availability of replication slots in [pg_replication_slots](view-pg-replication-slots.html). If this value is specified without units, it is taken as megabytes. This parameter can only be set in the `postgresql.conf` file or on the server command line.
+
+`wal_sender_timeout` (`integer`) [](<>)
+
+Terminate replication connections that are inactive for longer than this amount of time. This is useful for the sending server to detect a standby crash or network outage. If this value is specified without units, it is taken as milliseconds. The default value is 60 seconds. A value of zero disables the timeout mechanism.
+
+With a cluster distributed across multiple geographic locations, using different values per location brings more flexibility in the cluster management. A smaller value is useful for faster failure detection with a standby having a low-latency network connection, and a larger value helps in judging better the health of a standby if located on a remote location, with a high-latency network connection.
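+
+A hedged postgresql.conf sketch combining the sending-server settings above (the values are illustrative, not recommendations):
+
+```
+max_wal_senders = 10            # up to 10 concurrent WAL sender processes
+wal_keep_size = 1GB             # retain at least 1GB of WAL for standbys
+max_slot_wal_keep_size = 10GB   # cap WAL retained by replication slots
+wal_sender_timeout = 60s        # drop replication connections that go silent
+```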
+
+`track_commit_timestamp` (`boolean`) [](<>)
+
+Record commit time of transactions. This parameter can only be set in the `postgresql.conf` file or on the server command line. The default value is `off`.
+
+### 20.6.2. Primary Server
+
+These parameters can be set on the primary server that is to send replication data to one or more standby servers. Note that in addition to these parameters, [wal_level](runtime-config-wal.html#GUC-WAL-LEVEL) must be set appropriately on the primary server, and optionally WAL archiving can be enabled as well (see [Section 20.5.3](runtime-config-wal.html#RUNTIME-CONFIG-WAL-ARCHIVING)). The values of these parameters on standby servers are irrelevant, although you may wish to set them there in preparation for the possibility of a standby becoming the primary.
+
+`synchronous_standby_names` (`string`) [](<>)
+
+Specifies a list of standby servers that can support *synchronous replication*, as described in [Section 27.2.8](warm-standby.html#SYNCHRONOUS-REPLICATION). There will be one or more active synchronous standbys; transactions waiting for commit will be allowed to proceed after these standby servers confirm receipt of their data. The synchronous standbys will be those whose names appear in this list, and that are both currently connected and streaming data in real-time (as shown by a state of `streaming` in the [`pg_stat_replication`](monitoring-stats.html#MONITORING-PG-STAT-REPLICATION-VIEW) view). Specifying more than one synchronous standby can allow for very high availability and protection against data loss.
+
+The name of a standby server for this purpose is the `application_name` setting of the standby, as set in the standby's connection information. In case of a physical replication standby, this should be set in the `primary_conninfo` setting; the default is the setting of [cluster_name](runtime-config-logging.html#GUC-CLUSTER-NAME) if set, else `walreceiver`. For logical replication, this can be set in the connection information of the subscription, and it defaults to the subscription name. For other replication stream consumers, consult their documentation.
+
+This parameter specifies a list of standby servers using either of the following syntaxes:
+
+```
+[FIRST] num_sync ( standby_name [, ...] )
+ANY num_sync ( standby_name [, ...] )
+standby_name [, ...]
+```
+
+where `num_sync` is the number of synchronous standbys that transactions need to wait for replies from, and `standby_name` is the name of a standby server. `FIRST` and `ANY` specify the method to choose synchronous standbys from the listed servers.
+
+The keyword `FIRST`, coupled with `num_sync`, specifies a priority-based synchronous replication and makes transaction commits wait until their WAL records are replicated to `num_sync` synchronous standbys chosen based on their priorities. For example, a setting of `FIRST 3 (s1, s2, s3, s4)` will cause each commit to wait for replies from three higher-priority standbys chosen from standby servers `s1`, `s2`, `s3` and `s4`. The standbys whose names appear earlier in the list are given higher priority and will be considered as synchronous. Other standby servers appearing later in this list represent potential synchronous standbys. If any of the current synchronous standbys disconnects for whatever reason, it will be replaced immediately with the next-highest-priority standby. The keyword `FIRST` is optional.
+
+The keyword `ANY`, coupled with `num_sync`, specifies a quorum-based synchronous replication and makes transaction commits wait until their WAL records are replicated to *at least* `num_sync` listed standbys. For example, a setting of `ANY 3 (s1, s2, s3, s4)` will cause each commit to proceed as soon as at least any three standbys of `s1`, `s2`, `s3` and `s4` reply.
+
+`FIRST` and `ANY` are case-insensitive. If these keywords are used as the name of a standby server, its `standby_name` must be double-quoted.
+
+The third syntax was used before PostgreSQL version 9.6 and is still supported. It is the same as the first syntax with `FIRST` and `num_sync` equal to 1. For example, `FIRST 1 (s1, s2)` and `s1, s2` have the same meaning: either `s1` or `s2` is chosen as a synchronous standby.
+
+The special entry `*` matches any standby name.
+
+There is no mechanism to enforce uniqueness of standby names. In case of duplicates, one of the matching standbys will be considered as higher priority, though exactly which one is indeterminate.
+
+### Note
+
+Each `standby_name` should have the form of a valid SQL identifier, unless it is `*`. You can use double-quoting if necessary. But note that `standby_name`s are compared to standby application names case-insensitively, whether double-quoted or not.
+
+If no synchronous standby names are specified here, then synchronous replication is not enabled and transaction commits will not wait for replication. This is the default configuration. Even when synchronous replication is enabled, individual transactions can be configured not to wait for replication by setting the [synchronous_commit](runtime-config-wal.html#GUC-SYNCHRONOUS-COMMIT) parameter to `local` or `off`.
+
+This parameter can only be set in the `postgresql.conf` file or on the server command line.
+
+`vacuum_defer_cleanup_age` (`integer`) [](<>)
+
+Specifies the number of transactions by which `VACUUM` and HOT updates will defer cleanup of dead row versions. The default is zero transactions, meaning that dead row versions can be removed as soon as possible, that is, as soon as they are no longer visible to any open transaction. You may wish to set this to a nonzero value on a primary server that is supporting hot standby servers, as described in [Section 27.4](hot-standby.html). This allows more time for queries on the standby to complete without incurring conflicts due to early cleanup of rows. However, since the value is measured in terms of number of write transactions occurring on the primary server, it is difficult to predict just how much additional grace time will be made available to standby queries. This parameter can only be set in the `postgresql.conf` file or on the server command line.
+
+You should also consider setting `hot_standby_feedback` on standby server(s) as an alternative to using this parameter.
+
+This does not prevent cleanup of dead rows which have reached the age specified by `old_snapshot_threshold`.
+
+### 20.6.3. Standby Servers
+
+These settings control the behavior of a [standby server](warm-standby.html#STANDBY-SERVER-OPERATION) that is to receive replication data. Their values on the primary server are irrelevant.
+
+`primary_conninfo` (`string`) [](<>)
+
+Specifies a connection string to be used for the standby server to connect with a sending server. This string is in the format described in [Section 34.1.1](libpq-connect.html#LIBPQ-CONNSTRING). If any option is unspecified in this string, then the corresponding environment variable (see [Section 34.15](libpq-envars.html)) is checked. If the environment variable is not set either, then defaults are used.
+
+The connection string should specify the host name (or address) of the sending server, as well as the port number if it is not the same as the standby server's default. Also specify a user name corresponding to a suitably-privileged role on the sending server (see [Section 27.2.5.1](warm-standby.html#STREAMING-REPLICATION-AUTHENTICATION)). A password needs to be provided too, if the sender demands password authentication. It can be provided in the `primary_conninfo` string, or in a separate `~/.pgpass` file on the standby server (use `replication` as the database name). Do not specify a database name in the `primary_conninfo` string.
+
+This parameter can only be set in the `postgresql.conf` file or on the server command line. If this parameter is changed while the WAL receiver process is running, that process is signaled to shut down and expected to restart with the new setting (except if `primary_conninfo` is an empty string). This setting has no effect if the server is not in standby mode.
+
+`primary_slot_name` (`string`) [](<>)
+
+Optionally specifies an existing replication slot to be used when connecting to the sending server via streaming replication to control resource removal on the upstream node (see [Section 27.2.6](warm-standby.html#STREAMING-REPLICATION-SLOTS)). This parameter can only be set in the `postgresql.conf` file or on the server command line. If this parameter is changed while the WAL receiver process is running, that process is signaled to shut down and expected to restart with the new setting. This setting has no effect if `primary_conninfo` is not set or the server is not in standby mode.
+
+`promote_trigger_file` (`string`) [](<>)
+
+Specifies a trigger file whose presence ends recovery in the standby. Even if this value is not set, you can still promote the standby using `pg_ctl promote` or by calling `pg_promote()`. This parameter can only be set in the `postgresql.conf` file or on the server command line.
+
+`hot_standby` (`boolean`) [](<>)
+
+Specifies whether or not you can connect and run queries during recovery, as described in [Section 27.4](hot-standby.html). The default value is `on`. This parameter can only be set at server start. It only has effect during archive recovery or in standby mode.
+
+`max_standby_archive_delay` (`integer`) [](<>)
+
+When hot standby is active, this parameter determines how long the standby server should wait before canceling standby queries that conflict with about-to-be-applied WAL entries, as described in [Section 27.4.2](hot-standby.html#HOT-STANDBY-CONFLICT). `max_standby_archive_delay` applies when WAL data is being read from WAL archive (and is therefore not current). If this value is specified without units, it is taken as milliseconds. The default is 30 seconds. A value of -1 allows the standby to wait forever for conflicting queries to complete. This parameter can only be set in the `postgresql.conf` file or on the server command line.
+
+Note that `max_standby_archive_delay` is not the same as the maximum length of time a query can run before cancellation; rather it is the maximum total time allowed to apply any one WAL segment's data. Thus, if one query has resulted in significant delay earlier in the WAL segment, subsequent conflicting queries will have much less grace time.
+
+`max_standby_streaming_delay` (`integer`) [](<>)
+
+When hot standby is active, this parameter determines how long the standby server should wait before canceling standby queries that conflict with about-to-be-applied WAL entries, as described in [Section 27.4.2](hot-standby.html#HOT-STANDBY-CONFLICT). `max_standby_streaming_delay` applies when WAL data is being received via streaming replication. If this value is specified without units, it is taken as milliseconds. The default is 30 seconds. A value of -1 allows the standby to wait forever for conflicting queries to complete. This parameter can only be set in the `postgresql.conf` file or on the server command line.
+
+Note that `max_standby_streaming_delay` is not the same as the maximum length of time a query can run before cancellation; rather it is the maximum total time allowed to apply WAL data once it has been received from the primary server. Thus, if one query has resulted in significant delay, subsequent conflicting queries will have much less grace time until the standby server has caught up again.
+
+`wal_receiver_create_temp_slot` (`boolean`) [](<>)
+
+Specifies whether the WAL receiver process should create a temporary replication slot on the remote instance when no permanent replication slot to use has been configured (using [primary_slot_name](runtime-config-replication.html#GUC-PRIMARY-SLOT-NAME)). The default is off. This parameter can only be set in the `postgresql.conf` file or on the server command line. If this parameter is changed while the WAL receiver process is running, that process is signaled to shut down and expected to restart with the new setting.
+
+`wal_receiver_status_interval` (`integer`) [](<>)
+
+Specifies the minimum frequency for the WAL receiver process on the standby to send information about replication progress to the primary or upstream standby, where it can be seen using the [`pg_stat_replication`](monitoring-stats.html#MONITORING-PG-STAT-REPLICATION-VIEW) view. The standby will report the last write-ahead log location it has written, the last position it has flushed to disk, and the last position it has applied. This parameter's value is the maximum amount of time between reports. Updates are sent each time the write or flush positions change, or at least as often as specified by this parameter if it is set to a nonzero value. There are also other cases in which an update is sent while ignoring this parameter; for example, when processing of the existing WAL completes, or when `synchronous_commit` is set to `remote_apply`. Thus, the apply position may lag slightly behind the true position. If this value is specified without units, it is taken as seconds. The default value is 10 seconds. This parameter can only be set in the `postgresql.conf` file or on the server command line.
+
+`hot_standby_feedback` (`boolean`) [](<>)
+
+Specifies whether or not a hot standby will send feedback to the primary or upstream standby about queries currently executing on the standby. This parameter can be used to eliminate query cancels caused by cleanup records, but can cause database bloat on the primary for some workloads. Feedback messages will not be sent more frequently than once per `wal_receiver_status_interval`. The default value is `off`. This parameter can only be set in the `postgresql.conf` file or on the server command line.
+
+If cascaded replication is in use, the feedback is passed upstream until it eventually reaches the primary. Standbys make no other use of feedback they receive other than to pass it upstream.
+
+This setting does not override the behavior of `old_snapshot_threshold` on the primary; a snapshot on the standby which exceeds the primary's age threshold can become invalid, resulting in cancellation of transactions on the standby. This is because `old_snapshot_threshold` is intended to provide an absolute limit on the time which dead rows can contribute to bloat, which would otherwise be violated because of the configuration of a standby.
+
+`wal_receiver_timeout` (`integer`) [](<>)
+
+Terminate replication connections that are inactive for longer than this amount of time. This is useful for the receiving standby server to detect a primary node crash or network outage. If this value is specified without units, it is taken as milliseconds. The default value is 60 seconds. A value of zero disables the timeout mechanism. This parameter can only be set in the `postgresql.conf` file or on the server command line.
+
+`wal_retrieve_retry_interval` (`integer`) [](<>)
+
+Specifies how long the standby server should wait when WAL data is not available from any source (streaming replication, local `pg_wal` or WAL archive) before trying again to retrieve WAL data. If this value is specified without units, it is taken as milliseconds. The default value is 5 seconds. This parameter can only be set in the `postgresql.conf` file or on the server command line.
+
+This parameter is useful in configurations where a node in recovery needs to control the amount of time to wait for new WAL data to be available. For example, in archive recovery, it is possible to make the recovery more responsive in the detection of a new WAL log file by reducing the value of this parameter. On a system with low WAL activity, increasing it reduces the amount of requests necessary to access WAL archives, something useful for example in cloud environments where the amount of times an infrastructure is accessed is taken into account.
+
+`recovery_min_apply_delay` (`integer`) [](<>)
+
+By default, a standby server restores WAL records from the sending server as soon as possible. It may be useful to have a time-delayed copy of the data, offering opportunities to correct data loss errors. This parameter allows you to delay recovery by a specified amount of time. For example, if you set this parameter to `5min`, the standby will replay each transaction commit only when the system time on the standby is at least five minutes past the commit time reported by the primary. If this value is specified without units, it is taken as milliseconds. The default is zero, adding no delay.
+
+It is possible that the replication delay between servers exceeds the value of this parameter, in which case no delay is added. Note that the delay is calculated between the WAL time stamp as written on primary and the current time on the standby. Delays in transfer because of network lag or cascading replication configurations may reduce the actual wait time significantly. If the system clocks on primary and standby are not synchronized, this may lead to recovery applying records earlier than expected; but that is not a major issue because useful settings of this parameter are much larger than typical time deviations between servers.
+
+The delay occurs only on WAL records for transaction commits. Other records are replayed as quickly as possible, which is not a problem because MVCC visibility rules ensure their effects are not visible until the corresponding commit record is applied.
+
+The delay occurs once the database in recovery has reached a consistent state, until the standby is promoted or triggered. After that the standby will end recovery without further waiting.
+
+This parameter is intended for use with streaming replication deployments; however, if the parameter is specified it will be honored in all cases except crash recovery. `hot_standby_feedback` will be delayed by use of this feature which could lead to bloat on the primary; use both together with care.
+
+### Warning
+
+Synchronous replication is affected by this setting when `synchronous_commit` is set to `remote_apply`; every `COMMIT` will need to wait to be applied.
+
+This parameter can only be set in the `postgresql.conf` file or on the server command line.
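+
+To tie the standby-side settings together, a hedged postgresql.conf sketch for a time-delayed standby (the host name and slot name are hypothetical):
+
+```
+primary_conninfo = 'host=primary.example.com port=5432 user=replicator'
+primary_slot_name = 'standby1_slot'   # existing slot on the sending server
+hot_standby = on                      # allow read-only queries during recovery
+recovery_min_apply_delay = 5min       # replay commits five minutes late
+```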
+
+### 20.6.4. Subscribers
+
+These settings control the behavior of a logical replication subscriber. Their values on the publisher are irrelevant.
+
+Note that the `wal_receiver_timeout`, `wal_receiver_status_interval` and `wal_retrieve_retry_interval` configuration parameters affect the logical replication workers as well.
+
+`max_logical_replication_workers` (`int`) [](<>)
+
+Specifies the maximum number of logical replication workers. This includes both apply workers and table synchronization workers.
+
+Logical replication workers are taken from the pool defined by `max_worker_processes`.
+
+The default value is 4. This parameter can only be set at server start.
+
+`max_sync_workers_per_subscription` (`integer`) [](<>)
+
+Maximum number of synchronization workers per subscription. This parameter controls the amount of parallelism of the initial data copy during the subscription initialization or when new tables are added.
+
+Currently, there can be only one synchronization worker per table.
+
+The synchronization workers are taken from the pool defined by `max_logical_replication_workers`.
+
+The default value is 2. This parameter can only be set in the `postgresql.conf` file or on the server command line.
diff --git a/docs/X/runtime-config-wal.md b/docs/en/runtime-config-wal.md
similarity index 100%
rename from docs/X/runtime-config-wal.md
rename to docs/en/runtime-config-wal.md
diff --git a/docs/en/runtime-config-wal.zh.md b/docs/en/runtime-config-wal.zh.md
new file mode 100644
index 0000000000000000000000000000000000000000..aa99f55d1dfbdb244832e97fef177570dd701d4f
--- /dev/null
+++ b/docs/en/runtime-config-wal.zh.md
@@ -0,0 +1,296 @@
+## 20.5. Write Ahead Log
+
+[20.5.1. Settings](runtime-config-wal.html#RUNTIME-CONFIG-WAL-SETTINGS)
+
+[20.5.2. Checkpoints](runtime-config-wal.html#RUNTIME-CONFIG-WAL-CHECKPOINTS)
+
+[20.5.3. Archiving](runtime-config-wal.html#RUNTIME-CONFIG-WAL-ARCHIVING)
+
+[20.5.4. Archive Recovery](runtime-config-wal.html#RUNTIME-CONFIG-WAL-ARCHIVE-RECOVERY)
+
+[20.5.5. Recovery Target](runtime-config-wal.html#RUNTIME-CONFIG-WAL-RECOVERY-TARGET)
+
+For additional information on tuning these settings, see [Section 30.5](wal-configuration.html).
+
+### 20.5.1. Settings
+
+`wal_level` (`enum`) [](<>)
+
+`wal_level` determines how much information is written to the WAL. The default value is `replica`, which writes enough data to support WAL archiving and replication, including running read-only queries on a standby server. `minimal` removes all logging except the information required to recover from a crash or immediate shutdown. Finally, `logical` adds information necessary to support logical decoding. Each level includes the information logged at all lower levels. This parameter can only be set at server start.
+
+In `minimal` level, no information is logged for permanent relations for the remainder of a transaction that creates or rewrites them. This can make operations much faster (see [Section 14.4.7](populate.html#POPULATE-PITR)). Operations that initiate this optimization include:
+
+| `ALTER ... SET TABLESPACE` |
+| -------------------------- |
+| `CLUSTER` |
+| `CREATE TABLE` |
+| `REFRESH MATERIALIZED VIEW` (without `CONCURRENTLY`) |
+| `REINDEX` |
+| `TRUNCATE` |
+
+But minimal WAL does not contain sufficient information for point-in-time recovery, so `replica` or higher must be used to enable WAL archiving ([archive_mode](runtime-config-wal.html#GUC-ARCHIVE-MODE)) and streaming replication. Note that changing `wal_level` to `minimal` makes previously taken base backups unusable for archive recovery and standby servers, which may lead to data loss.
+
+In `logical` level, the same information is logged as with `replica`, plus information needed to allow extracting logical change sets from the WAL. Using a level of `logical` will increase the WAL volume, particularly if many tables are configured for `REPLICA IDENTITY FULL` and many `UPDATE` and `DELETE` statements are executed.
+`fsync` (`boolean`)[](<>)
+
+If this parameter is on, the PostgreSQL server will try to make sure that updates are physically written to disk, by issuing `fsync()` system calls or various equivalent methods (see [wal_sync_method](runtime-config-wal.html#GUC-WAL-SYNC-METHOD)). This ensures that the database cluster can recover to a consistent state after an operating system or hardware crash.
+
+While turning off `fsync` is often a performance benefit, this can result in unrecoverable data corruption in the event of a power failure or system crash. Thus it is only advisable to turn off `fsync` if you can easily recreate your entire database from external data.
+
+Examples of safe circumstances for turning off `fsync` include the initial loading of a new database cluster from a backup file, using a database cluster for processing a batch of data after which the database will be thrown away and recreated, or for a read-only database clone which gets recreated frequently and is not used for failover. High quality hardware alone is not a sufficient justification for turning off `fsync`.
+
+For reliable recovery when changing `fsync` off to on, it is necessary to force all modified buffers in the kernel to durable storage. This can be done while the cluster is shut down or while `fsync` is on by running `initdb --sync-only`, running `sync`, unmounting the file system, or rebooting the server.
+
+In many situations, turning off [synchronous_commit](runtime-config-wal.html#GUC-SYNCHRONOUS-COMMIT) for noncritical transactions can provide much of the potential performance benefit of turning off `fsync`, without the attendant risks of data corruption.
+
+`fsync` can only be set in the `postgresql.conf` file or on the server command line. If you turn this parameter off, also consider turning off [full_page_writes](runtime-config-wal.html#GUC-FULL-PAGE-WRITES).
+
+`synchronous_commit` (`enum`)[](<>)
+
+Specifies how much WAL processing must complete before the database server returns a "success" indication to the client. Valid values are `remote_apply`, `on` (the default), `remote_write`, `local`, and `off`.
+
+If `synchronous_standby_names` is empty, the only meaningful settings are `on` and `off`; `remote_apply`, `remote_write` and `local` all provide the same local synchronization level as `on`. The local behavior of all non-`off` modes is to wait for local flush of WAL to disk. In `off` mode, there is no waiting, so there can be a delay between when success is reported to the client and when the transaction is later guaranteed to be safe against a server crash. (The delay is at most three times [wal_writer_delay](runtime-config-wal.html#GUC-WAL-WRITER-DELAY).) Unlike [fsync](runtime-config-wal.html#GUC-FSYNC), setting this parameter to `off` does not create any risk of database inconsistency: an operating system or database crash might result in some recent allegedly-committed transactions being lost, but the database state will be just the same as if those transactions had been aborted cleanly. So, turning `synchronous_commit` off can be a useful alternative when performance is more important than exact certainty about the durability of a transaction. For more discussion see [Section 30.4](wal-async-commit.html).
+
+If [synchronous_standby_names](runtime-config-replication.html#GUC-SYNCHRONOUS-STANDBY-NAMES) is non-empty, `synchronous_commit` also controls whether transaction commits will wait for their WAL records to be processed on the standby server(s).
+
+When set to `remote_apply`, commits will wait until replies from the current synchronous standby(s) indicate they have received the commit record of the transaction and applied it, so that it has become visible to queries on the standby(s), and also written to durable storage on the standbys. This will cause much larger commit delays than previous settings since it waits for WAL replay. When set to `on`, commits wait until replies from the current synchronous standby(s) indicate they have received the commit record of the transaction and flushed it to durable storage. This ensures the transaction will not be lost unless both the primary and all synchronous standbys suffer corruption of their database storage. When set to `remote_write`, commits will wait until replies from the current synchronous standby(s) indicate they have received the commit record of the transaction and written it to their file systems.
+This setting ensures data preservation if a standby instance of PostgreSQL crashes, but not if the standby suffers an operating-system-level crash because the data has not necessarily reached durable storage on the standby. The setting `local` causes commits to wait for local flush to disk, but not for replication. This is usually not desirable when synchronous replication is in use, but is provided for completeness.
+
+This parameter can be changed at any time; the behavior for any one transaction is determined by the setting in effect when it commits. It is therefore possible, and useful, to have some transactions commit synchronously and others asynchronously. For example, to make a single multistatement transaction commit asynchronously when the default is the opposite, issue `SET LOCAL synchronous_commit TO OFF` within the transaction.
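+A minimal sketch of that pattern (the `audit_log` table is purely illustrative):
+
+```
+BEGIN;
+SET LOCAL synchronous_commit TO OFF;  -- affects only this transaction
+INSERT INTO audit_log (event) VALUES ('noncritical event');
+COMMIT;  -- returns without waiting for the WAL flush to complete
+```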
+[Table 20.1](runtime-config-wal.html#SYNCHRONOUS-COMMIT-MATRIX) summarizes the capabilities of the `synchronous_commit` settings.
+
+**Table 20.1. synchronous_commit Modes**
+
+| synchronous_commit setting | local durable commit | standby durable commit after PG crash | standby durable commit after OS crash | standby query consistency |
+| -------------------------- | -------------------- | ------------------------------------- | ------------------------------------- | ------------------------- |
+| remote_apply | • | • | • | • |
+| on | • | • | • | |
+| remote_write | • | • | | |
+| local | • | | | |
+| off | | | | |
+
+`wal_sync_method` (`enum`)[](<>)
+
+Method used for forcing WAL updates out to disk. If `fsync` is off then this setting is irrelevant, since WAL file updates will not be forced out at all. Possible values are:
+
+- `open_datasync` (write WAL files with `open()` option `O_DSYNC`)
+
+- `fdatasync` (call `fdatasync()` at each commit)
+
+- `fsync` (call `fsync()` at each commit)
+
+- `fsync_writethrough` (call `fsync()` at each commit, forcing write-through of any disk write cache)
+
+- `open_sync` (write WAL files with `open()` option `O_SYNC`)
+
+  The `open_`* options also use `O_DIRECT` if available. Not all of these choices are available on all platforms. The default is the first method in the above list that is supported by the platform, except that `fdatasync` is the default on Linux and FreeBSD. The default is not necessarily ideal; it might be necessary to change this setting or other aspects of your system configuration in order to create a crash-safe configuration or achieve optimal performance. These aspects are discussed in [Section 30.1](wal-reliability.html). This parameter can only be set in the `postgresql.conf` file or on the server command line.
+
+`full_page_writes` (`boolean`)[](<>)
+
+When this parameter is on, the PostgreSQL server writes the entire content of each disk page to WAL during the first modification of that page after a checkpoint. This is needed because a page write that is in process during an operating system crash might be only partially completed, leading to an on-disk page that contains a mix of old and new data. The row-level change data normally stored in WAL will not be enough to completely restore such a page during post-crash recovery. Storing the full page image guarantees that the page can be correctly restored, but at the price of increasing the amount of data that must be written to WAL. (Because WAL replay always starts from a checkpoint, it is sufficient to do this during the first change of each page after a checkpoint. Therefore, one way to reduce the cost of full-page writes is to increase the checkpoint interval parameters.)
+
+Turning this parameter off speeds normal operation, but might lead to either unrecoverable data corruption, or silent data corruption, after a system failure. The risks are similar to turning off `fsync`, though smaller, and it should be turned off only based on the same circumstances recommended for that parameter.
+
+Turning off this parameter does not affect use of WAL archiving for point-in-time recovery (PITR) (see [Section 26.3](continuous-archiving.html)).
+
+This parameter can only be set in the `postgresql.conf` file or on the server command line. The default is `on`.
+
+`wal_log_hints` (`boolean`)[](<>)
+
+When this parameter is `on`, the PostgreSQL server writes the entire content of each disk page to WAL during the first modification of that page after a checkpoint, even for non-critical modifications of so-called hint bits.
+
+If data checksums are enabled, hint bit updates are always WAL-logged and this setting is ignored. You can use this setting to test how much extra WAL-logging would occur if your database had data checksums enabled.
+
+This parameter can only be set at server start. The default value is `off`.
+
+`wal_compression` (`boolean`)[](<>)
+
+When this parameter is `on`, the PostgreSQL server compresses a full page image written to WAL when [full_page_writes](runtime-config-wal.html#GUC-FULL-PAGE-WRITES) is on or during a base backup. A compressed page image will be decompressed during WAL replay. The default value is `off`. Only superusers can change this setting.
+
+Turning this parameter on can reduce the WAL volume without increasing the risk of unrecoverable data corruption, but at the cost of some extra CPU spent on the compression during WAL logging and on the decompression during WAL replay.
+
+`wal_init_zero` (`boolean`)[](<>)
+
+If set to `on` (the default), this option causes new WAL files to be filled with zeroes. On some file systems, this ensures that space is allocated before we need to write WAL records. However, *copy-on-write* (COW) file systems may not benefit from this technique, so the option is given to skip the unnecessary work. If set to `off`, only the final byte is written when the file is created so that it has the expected size.
+
+`wal_recycle` (`boolean`)[](<>)
+
+If set to `on` (the default), this option causes WAL files to be recycled by renaming them, avoiding the need to create new ones. On COW file systems, it may be faster to create new ones, so the option is given to disable this behavior.
+
+`wal_buffers` (`integer`)[](<>)
+
+The amount of shared memory used for WAL data that has not yet been written to disk. The default setting of -1 selects a size equal to 1/32nd (about 3%) of [shared_buffers](runtime-config-resource.html#GUC-SHARED-BUFFERS), but not less than `64kB` nor more than the size of one WAL segment, typically `16MB`. This value can be set manually if the automatic choice is too large or too small, but any positive value less than `32kB` will be treated as `32kB`. If this value is specified without units, it is taken as WAL blocks, that is `XLOG_BLCKSZ` bytes, typically 8kB. This parameter can only be set at server start.
+
+The contents of the WAL buffers are written out to disk at every transaction commit, so extremely large values are unlikely to provide a significant benefit. However, setting this value to at least a few megabytes can improve write performance on a busy server where many clients are committing at once. The auto-tuning selected by the default setting of -1 should give reasonable results in most cases.
+
+`wal_writer_delay` (`integer`)[](<>)
+
+Specifies how often the WAL writer flushes WAL, in time terms. After flushing WAL the writer sleeps for the length of time given by `wal_writer_delay`, unless woken up sooner by an asynchronously committing transaction. If the last flush happened less than `wal_writer_delay` ago and less than `wal_writer_flush_after` worth of WAL has been produced since, then WAL is only written to the operating system, not flushed to disk. If this value is specified without units, it is taken as milliseconds. The default value is 200 milliseconds (`200ms`). Note that on many systems, the effective resolution of sleep delays is 10 milliseconds; setting `wal_writer_delay` to a value that is not a multiple of 10 might have the same results as setting it to the next higher multiple of 10. This parameter can only be set in the `postgresql.conf` file or on the server command line.
+
+`wal_writer_flush_after` (`integer`)[](<>)
+
+Specifies how often the WAL writer flushes WAL, in volume terms. If the last flush happened less than `wal_writer_delay` ago and less than `wal_writer_flush_after` worth of WAL has been produced since, then WAL is only written to the operating system, not flushed to disk. If `wal_writer_flush_after` is set to `0` then WAL data is always flushed immediately. If this value is specified without units, it is taken as WAL blocks, that is `XLOG_BLCKSZ` bytes, typically 8kB. The default is `1MB`. This parameter can only be set in the `postgresql.conf` file or on the server command line.
+
+`wal_skip_threshold` (`integer`)[](<>)
+
+When `wal_level` is `minimal` and a transaction commits after creating or rewriting a permanent relation, this setting determines how to persist the new data. If the data is smaller than this setting, write it to the WAL log; otherwise, use an fsync of the affected files. Depending on the properties of your storage, raising or lowering this value might help if such commits are slowing concurrent transactions. If this value is specified without units, it is taken as kilobytes. The default is two megabytes (`2MB`).
+
+`commit_delay` (`integer`)[](<>)
+
+Setting `commit_delay` adds a time delay before a WAL flush is initiated. This can improve group commit throughput by allowing a larger number of transactions to commit via a single WAL flush, if system load is high enough that additional transactions become ready to commit within the given interval. However, it also increases latency by up to the `commit_delay` for each WAL flush. Because the delay is just wasted if no other transactions become ready to commit, a delay is only performed if at least `commit_siblings` other transactions are active when a flush is about to be initiated. Also, no delays are performed if `fsync` is disabled. If this value is specified without units, it is taken as microseconds. The default `commit_delay` is zero (no delay). Only superusers can change this setting.
+
+In PostgreSQL releases prior to 9.3, `commit_delay` behaved differently and was much less effective: it affected only commits, rather than all WAL flushes, and waited for the entire configured delay even if the WAL flush was completed sooner. Beginning in PostgreSQL 9.3, the first process that becomes ready to flush waits for the configured interval, while subsequent processes wait only until the leader completes the flush operation.
+
+`commit_siblings` (`integer`)[](<>)
+
+Minimum number of concurrent open transactions to require before performing the `commit_delay` delay. A larger value makes it more probable that at least one other transaction will become ready to commit during the delay interval. The default is five transactions.
+
+### 20.5.2. Checkpoints
+
+`checkpoint_timeout` (`integer`)[](<>)
+
+Maximum time between automatic WAL checkpoints. If this value is specified without units, it is taken as seconds. The valid range is between 30 seconds and one day.
+The default is five minutes (`5min`). Increasing this parameter can increase the amount of time needed for crash recovery. This parameter can only be set in the `postgresql.conf` file or on the server command line.
+
+`checkpoint_completion_target` (`floating point`)[](<>)
+
+Specifies the target of checkpoint completion, as a fraction of total time between checkpoints. The default is 0.9, which spreads the checkpoint across almost all of the available interval, providing fairly consistent I/O load while also leaving some time for checkpoint completion overhead. Reducing this parameter is not recommended because it causes the checkpoint to complete faster. This results in a higher rate of I/O during the checkpoint followed by a period of less I/O between the checkpoint completion and the next scheduled checkpoint. This parameter can only be set in the `postgresql.conf` file or on the server command line.
+
+`checkpoint_flush_after` (`integer`)[](<>)
+
+Whenever more than this amount of data has been written while performing a checkpoint, attempt to force the OS to issue these writes to the underlying storage. Doing so will limit the amount of dirty data in the kernel's page cache, reducing the likelihood of stalls when an `fsync` is issued at the end of the checkpoint, or when the OS writes data back in larger batches in the background. Often that will result in greatly reduced transaction latency, but there also are some cases, especially with workloads that are bigger than [shared_buffers](runtime-config-resource.html#GUC-SHARED-BUFFERS), but smaller than the OS's page cache, where performance might degrade. This setting may have no effect on some platforms. If this value is specified without units, it is taken as blocks, that is `BLCKSZ` bytes, typically 8kB. The valid range is between `0`, which disables forced writeback, and `2MB`. The default is `256kB` on Linux, `0` elsewhere. (If `BLCKSZ` is not 8kB, the default and maximum values scale proportionally to it.) This parameter can only be set in the `postgresql.conf` file or on the server command line.
+
+`checkpoint_warning` (`integer`)[](<>)
+
+Write a message to the server log if checkpoints caused by the filling of WAL segment files happen closer together than this amount of time (which suggests that `max_wal_size` ought to be raised). If this value is specified without units, it is taken as seconds. The default is 30 seconds (`30s`). Zero disables the warning. No warnings will be generated if `checkpoint_timeout` is less than `checkpoint_warning`. This parameter can only be set in the `postgresql.conf` file or on the server command line.
+
+`max_wal_size` (`integer`)[](<>)
+
+Maximum size to let the WAL grow during automatic checkpoints. This is a soft limit; WAL size can exceed `max_wal_size` under special circumstances, such as heavy load, a failing `archive_command`, or a high `wal_keep_size` setting. If this value is specified without units, it is taken as megabytes. The default is 1 GB. Increasing this parameter can increase the amount of time needed for crash recovery. This parameter can only be set in the `postgresql.conf` file or on the server command line.
+
+`min_wal_size` (`integer`)[](<>)
+
+As long as WAL disk usage stays below this setting, old WAL files are always recycled for future use at a checkpoint, rather than removed. This can be used to ensure that enough WAL space is reserved to handle spikes in WAL usage, for example when running large batch jobs. If this value is specified without units, it is taken as megabytes. The default is 80 MB. This parameter can only be set in the `postgresql.conf` file or on the server command line.
+
+### 20.5.3. Archiving
+
+`archive_mode` (`enum`)[](<>)
+
+When `archive_mode` is enabled, completed WAL segments are sent to archive storage by setting [archive_command](runtime-config-wal.html#GUC-ARCHIVE-COMMAND). In addition to `off`, to disable, there are two modes: `on` and `always`. During normal operation, there is no difference between the two modes, but when set to `always` the WAL archiver is enabled also during archive recovery or standby mode. In `always` mode, all files restored from the archive or streamed with streaming replication will be archived (again). See [Section 27.2.9](warm-standby.html#CONTINUOUS-ARCHIVING-IN-STANDBY) for details.
+
+`archive_mode` and `archive_command` are separate variables so that `archive_command` can be changed without leaving archiving mode. This parameter can only be set at server start. `archive_mode` cannot be enabled when `wal_level` is set to `minimal`.
+
+`archive_command` (`string`)[](<>)
+
+The local shell command to execute to archive a completed WAL file segment. Any `%p` in the string is replaced by the path name of the file to archive, and any `%f` is replaced by only the file name. (The path name is relative to the working directory of the server, i.e., the cluster's data directory.) Use `%%` to embed an actual `%` character in the command. It is important for the command to return a zero exit status only if it succeeds. For more information see [Section 26.3.1](continuous-archiving.html#BACKUP-ARCHIVING-WAL).
+
+This parameter can only be set in the `postgresql.conf` file or on the server command line. It is ignored unless `archive_mode` was enabled at server start. If `archive_command` is an empty string (the default) while `archive_mode` is enabled, WAL archiving is temporarily disabled, but the server continues to accumulate WAL segment files in the expectation that a command will soon be provided. Setting `archive_command` to a command that does nothing but return true, e.g., `/bin/true` (`REM` on Windows), effectively disables archiving, but also breaks the chain of WAL files needed for archive recovery, so it should only be used in unusual circumstances.
+
+`archive_timeout` (`integer`)[](<>)
+
+The [archive_command](runtime-config-wal.html#GUC-ARCHIVE-COMMAND) is only invoked for completed WAL segments. Hence, if your server generates little WAL traffic (or has slack periods where it does so), there could be a long delay between the completion of a transaction and its safe recording in archive storage. To limit how old unarchived data can be, you can set `archive_timeout` to force the server to switch to a new WAL segment file periodically. When this parameter is greater than zero, the server will switch to a new segment file whenever this amount of time has elapsed since the last segment file switch, and there has been any database activity, including a single checkpoint (checkpoints are skipped if there is no database activity).
Note that archived files that are closed early due to a forced switch are still the same length as completely full files. Therefore, it is unwise to use a very short`archive_timeout`— it will bloat your archive storage.`archive_timeout`settings of a minute or so are usually reasonable. You should consider using streaming replication, instead of archiving, if you want data to be copied off the primary server more quickly than that. If this value is specified without units, it is taken as seconds. This parameter can only be set in the`postgresql.conf`file or on the server command line. + +### 20.5.4. Archive Recovery + +[](<>) + +This section describes the settings that apply only for the duration of the recovery. They must be reset for any subsequent recovery you wish to perform. + +“Recovery” covers using the server as a standby or for executing a targeted recovery. Typically, standby mode would be used to provide high availability and/or read scalability, whereas a targeted recovery is used to recover from data loss. + +To start the server in standby mode, create a file called`standby.signal`[](<>)in the data directory. The server will enter recovery and will not stop recovery when the end of archived WAL is reached, but will keep trying to continue recovery by connecting to the sending server as specified by the`primary_conninfo`setting and/or by fetching new WAL segments using`restore_command`. For this mode, the parameters from this section and[Section 20.6.3](runtime-config-replication.html#RUNTIME-CONFIG-REPLICATION-STANDBY)are of interest. Parameters from[Section 20.5.5](runtime-config-wal.html#RUNTIME-CONFIG-WAL-RECOVERY-TARGET)will also be applied but are typically not useful in this mode. + +To start the server in targeted recovery mode, create a file called`recovery.signal`[](<>)in the data directory. If both`standby.signal`and`recovery.signal`files are created, standby mode takes precedence. Targeted recovery mode ends when the archived WAL is fully replayed, or when`recovery_target`is reached. In this mode, the parameters from both this section and[Section 20.5.5](runtime-config-wal.html#RUNTIME-CONFIG-WAL-RECOVERY-TARGET)will be used. + +`restore_command`(`string`)[](<>) + +The local shell command to execute to retrieve an archived segment of the WAL file series. This parameter is required for archive recovery, but optional for streaming replication. Any`%f`in the string is replaced by the name of the file to retrieve from the archive, and any`%p`is replaced by the copy destination path name on the server. (The path name is relative to the current working directory, i.e., the cluster's data directory.) Any`%r`is replaced by the name of the file containing the last valid restart point. That is the earliest file that must be kept to allow a restore to be restartable, so this information can be used to truncate the archive to just the minimum required to support restarting from the current restore.`%r`is typically only used by warm-standby configurations (see[Section 27.2](warm-standby.html)). Write`%%`to embed an actual`%`character. + +It is important for the command to return a zero exit status only if it succeeds. The command*will*be asked for file names that are not present in the archive; it must return nonzero when so asked. 
+Examples:
+
+```
+restore_command = 'cp /mnt/server/archivedir/%f "%p"'
+restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"'  # Windows
+```
+
+An exception is that if the command was terminated by a signal (other than SIGTERM, which is used as part of a database server shutdown) or an error by the shell (such as command not found), then recovery will abort and the server will not start up.
+
+This parameter can only be set in the `postgresql.conf` file or on the server command line.
+
+`archive_cleanup_command` (`string`)[](<>)
+
+This optional parameter specifies a shell command that will be executed at every restartpoint. The purpose of `archive_cleanup_command` is to provide a mechanism for cleaning up old archived WAL files that are no longer needed by the standby server. Any `%r` is replaced by the name of the file containing the last valid restart point. That is the earliest file that must be *kept* to allow a restore to be restartable, and so all files earlier than `%r` may be safely removed. This information can be used to truncate the archive to just the minimum required to support restarting from the current restore. The [pg_archivecleanup](pgarchivecleanup.html) module is often used in `archive_cleanup_command` for single-standby configurations, for example:
+
+```
+archive_cleanup_command = 'pg_archivecleanup /mnt/server/archivedir %r'
+```
+
+Note however that if multiple standby servers are restoring from the same archive directory, you will need to ensure that you do not delete WAL files until they are no longer needed by any of the servers. `archive_cleanup_command` would typically be used in a warm-standby configuration (see [Section 27.2](warm-standby.html)). Write `%%` to embed an actual `%` character in the command.
+
+If the command returns a nonzero exit status then a warning log message will be written. An exception is that if the command was terminated by a signal or an error by the shell (such as command not found), a fatal error will be raised.
+
+This parameter can only be set in the `postgresql.conf` file or on the server command line.
+
+`recovery_end_command` (`string`)[](<>)
+
+This parameter specifies a shell command that will be executed once only at the end of recovery. This parameter is optional. The purpose of the `recovery_end_command` is to provide a mechanism for cleanup following replication or recovery. Any `%r` is replaced by the name of the file containing the last valid restart point, like in [archive_cleanup_command](runtime-config-wal.html#GUC-ARCHIVE-CLEANUP-COMMAND).
+
+If the command returns a nonzero exit status then a warning log message will be written and the database will proceed to start up anyway. An exception is that if the command was terminated by a signal or an error by the shell (such as command not found), the database will not proceed with startup.
+
+This parameter can only be set in the `postgresql.conf` file or on the server command line.
+
+### 20.5.5. Recovery Target
+
+By default, recovery will recover to the end of the WAL log. The following parameters can be used to specify an earlier stopping point. At most one of `recovery_target`, `recovery_target_lsn`, `recovery_target_name`, `recovery_target_time`, or `recovery_target_xid` can be used; if more than one of these is specified in the configuration file, an error will be raised. These parameters can only be set at server start.
+
+`recovery_target` `= 'immediate'`[](<>)
+
+This parameter specifies that recovery should end as soon as a consistent state is reached, i.e., as early as possible. When restoring from an online backup, this means the point where taking the backup ended.
+
+Technically, this is a string parameter, but `'immediate'` is currently the only allowed value.
+
+`recovery_target_name` (`string`)[](<>)
+
+This parameter specifies the named restore point (created with `pg_create_restore_point()`) to which recovery will proceed.
+
+`recovery_target_time` (`timestamp`)[](<>)
+
+This parameter specifies the time stamp up to which recovery will proceed. The precise stopping point is also influenced by [recovery_target_inclusive](runtime-config-wal.html#GUC-RECOVERY-TARGET-INCLUSIVE).
+
+The value of this parameter is a time stamp in the same format accepted by the `timestamp with time zone` data type, except that you cannot use a time zone abbreviation (unless the [timezone_abbreviations](runtime-config-client.html#GUC-TIMEZONE-ABBREVIATIONS) variable has been set earlier in the configuration file). Preferred style is to use a numeric offset from UTC, or you can write a full time zone name, e.g., `Europe/Helsinki` not `EEST`.
+
+`recovery_target_xid` (`string`)[](<>)
+
+This parameter specifies the transaction ID up to which recovery will proceed. Keep in mind that while transaction IDs are assigned sequentially at transaction start, transactions can complete in a different numeric order. The transactions that will be recovered are those that committed before (and optionally including) the specified one. The precise stopping point is also influenced by [recovery_target_inclusive](runtime-config-wal.html#GUC-RECOVERY-TARGET-INCLUSIVE).
+
+`recovery_target_lsn` (`pg_lsn`)[](<>)
+
+This parameter specifies the LSN of the write-ahead log location up to which recovery will proceed. The precise stopping point is also influenced by [recovery_target_inclusive](runtime-config-wal.html#GUC-RECOVERY-TARGET-INCLUSIVE). This parameter is parsed using the system data type [`pg_lsn`](datatype-pg-lsn.html).
+
+The following options further specify the recovery target, and affect what happens when the target is reached:
+
+`recovery_target_inclusive` (`boolean`)[](<>)
+
+Specifies whether to stop just after the specified recovery target (`on`), or just before the recovery target (`off`). Applies when [recovery_target_lsn](runtime-config-wal.html#GUC-RECOVERY-TARGET-LSN), [recovery_target_time](runtime-config-wal.html#GUC-RECOVERY-TARGET-TIME), or [recovery_target_xid](runtime-config-wal.html#GUC-RECOVERY-TARGET-XID) is specified. This setting controls whether transactions having exactly the target WAL location (LSN), commit time, or transaction ID, respectively, will be included in the recovery. Default is `on`.
+
+`recovery_target_timeline` (`string`)[](<>)
+
+Specifies recovering into a particular timeline. The value can be a numeric timeline ID or a special value. The value `current` recovers along the same timeline that was current when the base backup was taken. The value `latest` recovers to the latest timeline found in the archive, which is useful in a standby server. `latest` is the default.
+
+You usually only need to set this parameter in complex re-recovery situations, where you need to return to a state that itself was reached after a point-in-time recovery. See [Section 26.3.5](continuous-archiving.html#BACKUP-TIMELINES) for discussion.
+
+`recovery_target_action` (`enum`)[](<>)
+
+Specifies what action the server should take once the recovery target is reached. The default is `pause`, which means recovery will be paused. `promote` means the recovery process will finish and the server will start to accept connections. Finally `shutdown` will stop the server after reaching the recovery target.
+
+The intended use of the `pause` setting is to allow queries to be executed against the database to check if this recovery target is the most desirable point for recovery. The paused state can be resumed by using `pg_wal_replay_resume()` (see [Table 9.89](functions-admin.html#FUNCTIONS-RECOVERY-CONTROL-TABLE)), which then causes recovery to end. If this recovery target is not the desired stopping point, then shut down the server, change the recovery target settings to a later target, and restart to continue recovery.
+
+The `shutdown` setting is useful to have the instance ready at the exact replay point desired. The instance will still be able to replay more WAL records (and in fact will have to replay WAL records since the last checkpoint next time it is started).
+
+Note that because `recovery.signal` will not be removed when `recovery_target_action` is set to `shutdown`, any subsequent start will end with immediate shutdown unless the configuration is changed or the `recovery.signal` file is removed manually.
+
+This setting has no effect if no recovery target is set. If [hot_standby](runtime-config-replication.html#GUC-HOT-STANDBY) is not enabled, a setting of `pause` will act the same as `shutdown`. If the recovery target is reached while a promotion is ongoing, a setting of `pause` will act the same as `promote`.
+
+In any case, if a recovery target is configured but the archive recovery ends before the target is reached, the server will shut down with a fatal error.
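+Putting these together, a point-in-time recovery that stops just before a given moment and then pauses for inspection could be configured along these lines (a sketch; the timestamp is illustrative, and `ALTER SYSTEM` is shown only as a convenient way to write the settings before starting recovery):
+
+```
+ALTER SYSTEM SET recovery_target_time = '2021-01-01 00:00:00+00';
+ALTER SYSTEM SET recovery_target_inclusive = off;
+ALTER SYSTEM SET recovery_target_action = 'pause';
+```
+
+After creating `recovery.signal` in the data directory and starting the server, recovery replays up to the target and pauses there.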
diff --git a/docs/X/sepgsql.md b/docs/en/sepgsql.md
similarity index 100%
rename from docs/X/sepgsql.md
rename to docs/en/sepgsql.md
diff --git a/docs/en/sepgsql.zh.md b/docs/en/sepgsql.zh.md
new file mode 100644
index 0000000000000000000000000000000000000000..e1620bdfa65e7e87490e9cb609da399c55b4d0d2
--- /dev/null
+++ b/docs/en/sepgsql.zh.md
@@ -0,0 +1,295 @@
+## F.37. sepgsql
+
+[F.37.1. Overview](sepgsql.html#SEPGSQL-OVERVIEW)[F.37.2. Installation](sepgsql.html#SEPGSQL-INSTALLATION)[F.37.3. Regression Tests](sepgsql.html#SEPGSQL-REGRESSION)[F.37.4. GUC Parameters](sepgsql.html#SEPGSQL-PARAMETERS)[F.37.5. Features](sepgsql.html#SEPGSQL-FEATURES)[F.37.6. Sepgsql Functions](sepgsql.html#SEPGSQL-FUNCTIONS)[F.37.7. Limitations](sepgsql.html#SEPGSQL-LIMITATIONS)[F.37.8. External Resources](sepgsql.html#SEPGSQL-RESOURCES)[F.37.9. Author](sepgsql.html#SEPGSQL-AUTHOR)
+
+[](<>)
+
+`sepgsql` is a loadable module that supports label-based mandatory access control (MAC) based on SELinux security policy.
+
+### Warning
+
+The current implementation has significant limitations, and does not enforce mandatory access control for all actions. See [Section F.37.7](sepgsql.html#SEPGSQL-LIMITATIONS).
+
+### F.37.1. Overview
+
+This module integrates with SELinux to provide an additional layer of security checking above and beyond what is normally provided by PostgreSQL. From the perspective of SELinux, this module allows PostgreSQL to function as a user-space object manager. Each table or function access initiated by a DML query will be checked against the system security policy. This check is in addition to the usual SQL permissions checking performed by PostgreSQL.
+
+SELinux access control decisions are made using security labels, which are represented by strings such as `system_u:object_r:sepgsql_table_t:s0`. Each access control decision involves two labels: the label of the subject attempting to perform the action, and the label of the object on which the operation is to be performed. Since these labels can be applied to any sort of object, access control decisions for objects stored within the database can be (and, with this module, are) subjected to the same general criteria used for objects of any other type, such as files. This design is intended to allow a centralized security policy to protect information assets independent of the particulars of how those assets are stored.
+
+The [`SECURITY LABEL`](sql-security-label.html) statement allows assignment of a security label to a database object.
+
+### F.37.2. Installation
+
+`sepgsql` can only be used on Linux 2.6.28 or higher with SELinux enabled. It is not available on any other platform. You will also need libselinux 2.1.10 or higher and selinux-policy 3.9.13 or higher (although some distributions may backport the necessary rules into older policy versions).
+
+The `sestatus` command allows you to check the status of SELinux. A typical display is:
+
+```
+$ sestatus
+SELinux status:                 enabled
+SELinuxfs mount:                /selinux
+Current mode:                   enforcing
+Mode from config file:          enforcing
+Policy version:                 24
+Policy from config file:        targeted
+```
+
+If SELinux is disabled or not installed, you must set that product up first before installing this module.
+
+To build this module, include the option `--with-selinux` in your PostgreSQL `configure` command. Be sure that the `libselinux-devel` RPM is installed at build time.
+
+To use this module, you must include `sepgsql` in the [shared_preload_libraries](runtime-config-client.html#GUC-SHARED-PRELOAD-LIBRARIES) parameter in `postgresql.conf`. The module will not function correctly if loaded in any other manner. Once the module is loaded, you should execute `sepgsql.sql` in each database. This will install functions needed for security label management, and assign initial security labels.
+
+Here is an example showing how to initialize a fresh database cluster with `sepgsql` functions and security labels installed. Adjust the paths shown as appropriate for your installation:
+
+```
+$ export PGDATA=/path/to/data/directory
+$ initdb
+$ vi $PGDATA/postgresql.conf
+  change
+    #shared_preload_libraries = ''                # (change requires restart)
+  to
+    shared_preload_libraries = 'sepgsql'          # (change requires restart)
+$ for DBNAME in template0 template1 postgres; do
+    postgres --single -F -c exit_on_error=true $DBNAME \
+      </usr/local/pgsql/share/contrib/sepgsql.sql >/dev/null
+  done
+```
+
+Please note that you may see some or all of the following notifications depending on the particular versions you have of libselinux and selinux-policy:
+
+```
+/etc/selinux/targeted/contexts/sepgsql_contexts: line 33 has invalid object type db_blobs
+/etc/selinux/targeted/contexts/sepgsql_contexts: line 36 has invalid object type db_language
+/etc/selinux/targeted/contexts/sepgsql_contexts: line 37 has invalid object type db_language
+/etc/selinux/targeted/contexts/sepgsql_contexts: line 38 has invalid object type db_language
+/etc/selinux/targeted/contexts/sepgsql_contexts: line 39 has invalid object type db_language
+/etc/selinux/targeted/contexts/sepgsql_contexts: line 40 has invalid object type db_language
+```
+
+These messages are harmless and should be ignored.
+
+If the installation process completes without error, you can now start the server normally.
+
+### F.37.3. Regression Tests
+
+Due to the nature of SELinux, running the regression tests for `sepgsql` requires several extra configuration steps, some of which must be done as root. The regression tests will not be run by an ordinary `make check` or `make installcheck` command; you must set up the configuration and then invoke the test script by hand. The tests must be run in the `contrib/sepgsql` directory of a configured PostgreSQL build tree. Although they require a build tree, the tests are designed to be executed against an installed server, that is they are comparable to `make installcheck` rather than `make check`.
+
+First, set up `sepgsql` in a working database according to the instructions in [Section F.37.2](sepgsql.html#SEPGSQL-INSTALLATION). Note that the current operating system user must be able to connect to the database as superuser without password authentication.
+
+Second, build and install the policy package for the regression test. The `sepgsql-regtest` policy is a special purpose policy package which provides a set of rules to be allowed during the regression tests. It should be built from the policy source file `sepgsql-regtest.te`, which is done using `make` with a Makefile supplied by SELinux. You will need to locate the appropriate Makefile on your system; the path shown below is only an example. (This Makefile is usually supplied by the `selinux-policy-devel` or `selinux-policy` RPM.) Once built, install this policy package using the `semodule` command, which loads supplied policy packages into the kernel. If the package is correctly installed, `semodule -l` should list `sepgsql-regtest` as an available policy package:
+
+```
+$ cd .../contrib/sepgsql
+$ make -f /usr/share/selinux/devel/Makefile
+$ sudo semodule -u sepgsql-regtest.pp
+$ sudo semodule -l | grep sepgsql
+sepgsql-regtest 1.07
+```
+
+Third, turn on `sepgsql_regression_test_mode`. For security reasons, the rules in `sepgsql-regtest` are not enabled by default; the `sepgsql_regression_test_mode` parameter enables the rules needed to launch the regression tests. It can be turned on using the `setsebool` command:
+
+```
+$ sudo setsebool sepgsql_regression_test_mode on
+$ getsebool sepgsql_regression_test_mode
+sepgsql_regression_test_mode --> on
+```
+
+Fourth, verify your shell is operating in the `unconfined_t` domain:
+
+```
+$ id -Z
+unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
+```
+
+See [Section F.37.8](sepgsql.html#SEPGSQL-RESOURCES) for details on adjusting your working domain, if necessary.
+
+Finally, run the regression test script:
+
+```
+$ ./test_sepgsql
+```
+
+This script will attempt to verify that you have done all the configuration steps correctly, and then it will run the regression tests for the `sepgsql` module.
+
+After completing the tests, it's recommended you disable the `sepgsql_regression_test_mode` parameter:
+
+```
+$ sudo setsebool sepgsql_regression_test_mode off
+```
+
+You might prefer to remove the `sepgsql-regtest` policy entirely:
+
+```
+$ sudo semodule -r sepgsql-regtest
+```
+
+### F.37.4. GUC Parameters
+
+`sepgsql.permissive` (`boolean`)[](<>)
+
+This parameter enables `sepgsql` to function in permissive mode, regardless of the system setting. The default is off. This parameter can only be set in the `postgresql.conf` file or on the server command line.
+
+When this parameter is on, `sepgsql` functions in permissive mode, even if SELinux in general is working in enforcing mode. This parameter is primarily useful for testing purposes.
+
+`sepgsql.debug_audit` (`boolean`)[](<>)
+
+This parameter enables the printing of audit messages regardless of the system policy settings. The default is off, which means that messages will be printed according to the system settings.
+
+The security policy of SELinux also has rules to control whether or not particular accesses are logged. By default, access violations are logged, but allowed accesses are not.
+
+This parameter forces all possible logging to be turned on, regardless of the system policy.
+
+### F.37.5. Features
+
+#### F.37.5.1. Controlled Object Classes
+
+The security model of SELinux describes all the access control rules as relationships between a subject entity (typically, a client of the database) and an object entity (such as a database object), each of which is identified by a security label. If access to an unlabeled object is attempted, the object is treated as if it were assigned the label `unlabeled_t`.
+
+Currently, `sepgsql` allows security labels to be assigned to schemas, tables, columns, sequences, views, and functions. When `sepgsql` is in use, security labels are automatically assigned to supported database objects at creation time. This label is called a default security label, and is decided according to the system security policy, which takes as input the creator's label, the label assigned to the new object's parent object and optionally name of the constructed object.
+
+A new database object basically inherits the security label of the parent object, except when the security policy has special rules known as type-transition rules, in which case a different label may be applied.
+For schemas, the parent object is the current database; for tables, sequences, views, and functions, it is the containing schema; for columns, it is the containing table.
+
+#### F.37.5.2. DML Permissions
+
+For tables, `db_table:select`, `db_table:insert`, `db_table:update` or `db_table:delete` are checked for all the referenced target tables depending on the kind of statement; in addition, `db_table:select` is also checked for all the tables that contain columns referenced in the `WHERE` or `RETURNING` clause, as a data source for `UPDATE`, and so on.
+
+Column-level permissions will also be checked for each referenced column. `db_column:select` is checked on not only the columns being read using `SELECT`, but those being referenced in other DML statements; `db_column:update` or `db_column:insert` will also be checked for columns being modified by `UPDATE` or `INSERT`.
+
+For example, consider:
+
+```
+UPDATE t1 SET x = 2, y = func1(y) WHERE z = 100;
+```
+
+Here, `db_column:update` will be checked for `t1.x`, since it is being updated, `db_column:{select update}` will be checked for `t1.y`, since it is both updated and referenced, and `db_column:select` will be checked for `t1.z`, since it is only referenced. `db_table:{select update}` will also be checked at the table level.
+
+For sequences, `db_sequence:get_value` is checked when we reference a sequence object using `SELECT`; however, note that we do not currently check permissions on execution of corresponding functions such as `lastval()`.
+
+For views, `db_view:expand` will be checked, then any other required permissions will be checked on the objects being expanded from the view, individually.
+
+For functions, `db_procedure:{execute}` will be checked when user tries to execute a function as a part of query, or using fast-path invocation. If this function is a trusted procedure, it also checks `db_procedure:{entrypoint}` permission to check whether it can perform as entry point of trusted procedure.
+
+In order to access any schema object, `db_schema:search` permission is required on the containing schema. When an object is referenced without schema qualification, schemas on which this permission is not present will not be searched (just as if the user did not have `USAGE` privilege on the schema). If an explicit schema qualification is present, an error will occur if the user does not have the requisite permission on the named schema.
+
+The client must be allowed to access all referenced tables and columns, even if they originated from views which were then expanded, so that we apply consistent access control rules independent of the manner in which the table contents are referenced.
+
+The default database privilege system allows database superusers to modify system catalogs using DML commands, and reference or modify toast tables. These operations are prohibited when `sepgsql` is enabled.
+
+#### F.37.5.3. DDL Permissions
+
+SELinux defines several permissions to control common operations for each object type; such as creation, alter, drop and relabel of security label. In addition, several object types have special permissions to control their characteristic operations; such as addition or deletion of name entries within a particular schema.
+
+Creating a new database object requires `create` permission. SELinux will grant or deny this permission based on the client's security label and the proposed security label for the new object. In some cases, additional privileges are required:
+
+- [`CREATE DATABASE`](sql-createdatabase.html) additionally requires `getattr` permission for the source or template database.
+
+- Creating a schema object additionally requires `add_name` permission on the parent schema.
+
+- Creating a table additionally requires permission to create each individual table column, just as if each table column were a separate top-level object.
+
+- Creating a function marked as `LEAKPROOF` additionally requires `install` permission. (This permission is also checked when `LEAKPROOF` is set for an existing function.)
+
+When a `DROP` command is executed, `drop` will be checked on the object being removed. Permissions will be also checked for objects dropped indirectly via `CASCADE`. Deletion of objects contained within a particular schema (tables, views, sequences and procedures) additionally requires `remove_name` on the schema.
+
+When an `ALTER` command is executed, `setattr` will be checked on the object being modified for each object type, except for subsidiary objects such as the indexes or triggers of a table, where permissions are instead checked on the parent object. In some cases, additional permissions are required:
+
+- Moving an object to a new schema additionally requires `remove_name` permission on the old schema and `add_name` permission on the new one.
+
+- Setting the `LEAKPROOF` attribute on a function requires `install` permission.
+
+- Using [`SECURITY LABEL`](sql-security-label.html) on an object additionally requires `relabelfrom` permission for the object in conjunction with its old security label and `relabelto` permission for the object in conjunction with its new security label. (In cases where multiple label providers are installed and the user tries to set a security label, but it is not managed by SELinux, only `setattr` should be checked here. This is currently not done due to implementation restrictions.)
+
+#### F.37.5.4. Trusted Procedures
+
+Trusted procedures are similar to security definer functions or setuid commands. SELinux provides a feature to allow trusted code to run using a security label different from that of the client, generally for the purpose of providing highly controlled access to sensitive data (e.g., rows might be omitted, or the precision of stored values might be reduced). Whether or not a function acts as a trusted procedure is controlled by its security label and the operating system security policy. For example:
+
+```
+postgres=# CREATE TABLE customer (
+               cid int primary key,
+               cname text,
+               credit text
+           );
+CREATE TABLE
+postgres=# SECURITY LABEL ON COLUMN customer.credit
+               IS 'system_u:object_r:sepgsql_secret_table_t:s0';
+SECURITY LABEL
+postgres=# CREATE FUNCTION show_credit(int) RETURNS text
+             AS 'SELECT regexp_replace(credit, ''-[0-9]+$'', ''-xxxx'', ''g'')
+                 FROM customer WHERE cid = $1'
+           LANGUAGE sql;
+CREATE FUNCTION
+postgres=# SECURITY LABEL ON FUNCTION show_credit(int)
+               IS 'system_u:object_r:sepgsql_trusted_proc_exec_t:s0';
+SECURITY LABEL
+```
+
+The above operations should be performed by an administrative user.
+
+```
+postgres=# SELECT * FROM customer;
+ERROR:  SELinux: security policy violation
+postgres=# SELECT cid, cname, show_credit(cid) FROM customer;
+ cid | cname | show_credit
+```
+
+In this case, a regular user cannot reference `customer.credit` directly, but the trusted procedure `show_credit` allows the user to print the credit card numbers of customers with some of the digits masked out.
+
+#### F.37.5.5. Dynamic Domain Transitions
+
+It is possible to use SELinux's dynamic domain transition feature to switch the security label of the client process, the client domain, to a new context, if that is allowed by the security policy. The client domain needs the `setcurrent` permission and also `dyntransition` from the old to the new domain.
+
+Dynamic domain transitions should be considered carefully, because they allow users to switch their label, and therefore their privileges, at their option, rather than (as in the case of a trusted procedure) as mandated by the system. Thus, the `dyntransition` permission is only considered safe when used to switch to a domain with a smaller set of privileges than the original one. For example:
+
+```
+regression=# select sepgsql_getcon();
+ sepgsql_getcon
+```
+
+#### F.37.5.6. Miscellaneous
+
+We reject the [`LOAD`](sql-load.html) command across the board, because any module loaded could easily circumvent security policy enforcement.
+
+### F.37.6. Sepgsql Functions
+
+[Table F.30](sepgsql.html#SEPGSQL-FUNCTIONS-TABLE) shows the available functions.
+
+**Table F.30. Sepgsql Functions**
+
+| Function | Description |
+| -------- | ----------- |
+| `sepgsql_getcon` () → `text` | Returns the client domain, the current security label of the client. |
+| `sepgsql_setcon` (`text`) → `boolean` | Switches the client domain of the current session to the new domain, if allowed by the security policy. It also accepts `NULL` input as a request to transition to the client's original domain. |
+| `sepgsql_mcstrans_in` (`text`) → `text` | Translates the given qualified MLS/MCS range into raw format if the mcstrans daemon is running. |
+| `sepgsql_mcstrans_out` (`text`) → `text` | Translates the given raw MLS/MCS range into qualified format if the mcstrans daemon is running. |
+| `sepgsql_restorecon` (`text`) → `boolean` | Sets up initial security labels for all objects within the current database. The argument may be `NULL`, or the name of a specfile to be used as alternative of the system default. |
+
+### F.37.7. Limitations
+
+Data Definition Language (DDL) Permissions
+
+Due to implementation restrictions, some DDL operations do not check permissions.
+
+Data Control Language (DCL) Permissions
+
+Due to implementation restrictions, DCL operations do not check permissions.
+
+Row-level access control
+
+PostgreSQL supports row-level access, but `sepgsql` does not.
+
+Covert channels
+
+`sepgsql` does not try to hide the existence of a certain object, even if the user is not allowed to reference it. For example, we can infer the existence of an invisible object as a result of primary key conflicts, foreign key violations, and so on, even if we cannot obtain the contents of the object. The existence of a top secret table cannot be hidden; we only hope to conceal its contents.
+
+### F.37.8. External Resources
+
+[SE-PostgreSQL Introduction](https://wiki.postgresql.org/wiki/SEPostgreSQL)
+
+This wiki page provides a brief overview, security design, architecture, administration and upcoming features.
+
+[SELinux User's and Administrator's Guide](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/index)
+
+This document provides a wide spectrum of knowledge to administer SELinux on your systems. It focuses primarily on Red Hat operating systems, but is not limited to them.
+
+[Fedora SELinux FAQ](https://fedoraproject.org/wiki/SELinux_FAQ)
+
+This document answers frequently asked questions about SELinux. It focuses primarily on Fedora, but is not limited to Fedora.
+
+### F.37.9. Author
+
+KaiGai Kohei `<[kaigai@ak.jp.nec.com](mailto:kaigai@ak.jp.nec.com)>`
diff --git a/docs/X/spi-spi-prepare.md b/docs/en/spi-spi-prepare.md
similarity index 100%
rename from docs/X/spi-spi-prepare.md
rename to docs/en/spi-spi-prepare.md
diff --git a/docs/en/spi-spi-prepare.zh.md b/docs/en/spi-spi-prepare.zh.md
new file mode 100644
index 0000000000000000000000000000000000000000..0d7565dd51178f8e11fd68cd92bf0452901fa4a7
--- /dev/null
+++ b/docs/en/spi-spi-prepare.zh.md
@@ -0,0 +1,49 @@
+## SPI_prepare
+
+SPI_prepare — prepare a statement, without executing it yet
+
+## Synopsis
+
+```
+SPIPlanPtr SPI_prepare(const char * command, int nargs, Oid * argtypes)
+```
+
+## Description
+
+`SPI_prepare` creates and returns a prepared statement for the specified command, but doesn't execute the command. The prepared statement can later be executed repeatedly using `SPI_execute_plan`.
+
+When the same or a similar command is to be executed repeatedly, it is generally advantageous to perform parse analysis only once, and it might furthermore be advantageous to re-use an execution plan for the command. `SPI_prepare` converts a command string into a prepared statement that encapsulates the results of parse analysis. The prepared statement also provides a place for caching an execution plan if it is found that generating a custom plan for each execution is not helpful.
+
+A prepared command can be generalized by writing parameters (`$1`, `$2`, etc.) in place of what would be constants in a normal command. The actual values of the parameters are then specified when `SPI_execute_plan` is called. This allows the prepared command to be used over a wider range of situations than would be possible without parameters.
+
+The statement returned by `SPI_prepare` can be used only in the current invocation of the C function, since `SPI_finish` frees memory allocated for such a statement. But the statement can be saved for longer using the functions `SPI_keepplan` or `SPI_saveplan`.
+
+## Arguments
+
+`const char * command`
+
+command string
+
+`int nargs`
+
+number of input parameters (`$1`, `$2`, etc.)
+
+`Oid * argtypes`
+
+pointer to an array containing the OIDs of the data types of the parameters
+
+## Return Value
+
+`SPI_prepare` returns a non-null pointer to an `SPIPlan`, which is an opaque struct representing a prepared statement. On error, `NULL` will be returned, and `SPI_result` will be set to one of the same error codes used by `SPI_execute`, except that it is set to `SPI_ERROR_ARGUMENT` if `command` is `NULL`, or if `nargs` is less than 0, or if `nargs` is greater than 0 and `argtypes` is `NULL`.
+
+## Notes
+
+If no parameters are defined, a generic plan will be created at the first use of `SPI_execute_plan`, and used for all subsequent executions as well. If there are parameters, the first few uses of `SPI_execute_plan` will generate custom plans that are specific to the supplied parameter values. After enough uses of the same prepared statement, `SPI_execute_plan` will build a generic plan, and if that is not too much more expensive than the custom plans, it will start using the generic plan instead of re-planning each time. If this default behavior is unsuitable, you can alter it by passing the `CURSOR_OPT_GENERIC_PLAN` or `CURSOR_OPT_CUSTOM_PLAN` flag to `SPI_prepare_cursor`, to force use of generic or custom plans respectively.
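+The same plan-caching behavior can be observed at the SQL level with `PREPARE`, which may be a convenient way to experiment with it (a sketch; the `t` table is hypothetical):
+
+```
+PREPARE byid(int) AS SELECT * FROM t WHERE id = $1;
+EXECUTE byid(42);          -- the first few executions use custom plans
+EXPLAIN EXECUTE byid(42);  -- shows whether a custom or generic plan is in use
+```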
+Although the main point of a prepared statement is to avoid repeated parse analysis and planning of the statement, PostgreSQL will force re-analysis and re-planning of the statement before using it whenever database objects used in the statement have undergone definitional (DDL) changes since the previous use of the prepared statement. Also, if the value of [search_path](runtime-config-client.html#GUC-SEARCH-PATH) changes from one use to the next, the statement will be re-parsed using the new `search_path`. (This latter behavior is new as of PostgreSQL 9.3.) See [PREPARE](sql-prepare.html) for more information about the behavior of prepared statements.
+
+This function should only be called from a connected C function.
+
+`SPIPlanPtr` is declared as a pointer to an opaque struct type in `spi.h`. It is unwise to try to access its contents directly, as that makes your code much more likely to break in future revisions of PostgreSQL.
+
+The name `SPIPlanPtr` is somewhat historical, since the data structure no longer necessarily contains an execution plan.
diff --git a/docs/X/sql-alterforeigntable.md b/docs/en/sql-alterforeigntable.md
similarity index 100%
rename from docs/X/sql-alterforeigntable.md
rename to docs/en/sql-alterforeigntable.md
diff --git a/docs/en/sql-alterforeigntable.zh.md b/docs/en/sql-alterforeigntable.zh.md
new file mode 100644
index 0000000000000000000000000000000000000000..e77141a7b8b335dbbc84e9cc814ece6babe22b6e
--- /dev/null
+++ b/docs/en/sql-alterforeigntable.zh.md
@@ -0,0 +1,225 @@
+## ALTER FOREIGN TABLE
+
+ALTER FOREIGN TABLE — change the definition of a foreign table
+
+## Synopsis
+
+```
+ALTER FOREIGN TABLE [ IF EXISTS ] [ ONLY ] name [ * ]
+    action [, ... ]
+ALTER FOREIGN TABLE [ IF EXISTS ] [ ONLY ] name [ * ]
+    RENAME [ COLUMN ] column_name TO new_column_name
+ALTER FOREIGN TABLE [ IF EXISTS ] name
+    RENAME TO new_name
+ALTER FOREIGN TABLE [ IF EXISTS ] name
+    SET SCHEMA new_schema
+
+where action is one of:
+
+    ADD [ COLUMN ] column_name data_type [ COLLATE collation ] [ column_constraint [ ... ] ]
+    DROP [ COLUMN ] [ IF EXISTS ] column_name [ RESTRICT | CASCADE ]
+    ALTER [ COLUMN ] column_name [ SET DATA ] TYPE data_type [ COLLATE collation ]
+    ALTER [ COLUMN ] column_name SET DEFAULT expression
+    ALTER [ COLUMN ] column_name DROP DEFAULT
+    ALTER [ COLUMN ] column_name { SET | DROP } NOT NULL
+    ALTER [ COLUMN ] column_name SET STATISTICS integer
+    ALTER [ COLUMN ] column_name SET ( attribute_option = value [, ... ] )
+    ALTER [ COLUMN ] column_name RESET ( attribute_option [, ... ] )
+    ALTER [ COLUMN ] column_name SET STORAGE { PLAIN | EXTERNAL | EXTENDED | MAIN }
+    ALTER [ COLUMN ] column_name OPTIONS ( [ ADD | SET | DROP ] option ['value'] [, ... ])
+    ADD table_constraint [ NOT VALID ]
+    VALIDATE CONSTRAINT constraint_name
+    DROP CONSTRAINT [ IF EXISTS ] constraint_name [ RESTRICT | CASCADE ]
+    DISABLE TRIGGER [ trigger_name | ALL | USER ]
+    ENABLE TRIGGER [ trigger_name | ALL | USER ]
+    ENABLE REPLICA TRIGGER trigger_name
+    ENABLE ALWAYS TRIGGER trigger_name
+    SET WITHOUT OIDS
+    INHERIT parent_table
+    NO INHERIT parent_table
+    OWNER TO { new_owner | CURRENT_ROLE | CURRENT_USER | SESSION_USER }
+    OPTIONS ( [ ADD | SET | DROP ] option ['value'] [, ... ])
+```
+
+## Description
+
+`ALTER FOREIGN TABLE` changes the definition of an existing foreign table. There are several subforms:
+
+`ADD COLUMN`
+
+This form adds a new column to the foreign table, using the same syntax as [`CREATE FOREIGN TABLE`](sql-createforeigntable.html). Unlike the case when adding a column to a regular table, nothing happens to the underlying storage: this action simply declares that some new column is now accessible through the foreign table.
+
+`DROP COLUMN [ IF EXISTS ]`
+
+This form drops a column from a foreign table. You will need to say `CASCADE` if anything outside the table depends on the column; for example, views. If `IF EXISTS` is specified and the column does not exist, no error is thrown. In this case a notice is issued instead.
+
+`SET DATA TYPE`
+
+This form changes the type of a column of a foreign table. Again, this has no effect on any underlying storage: this action simply changes the type that PostgreSQL believes the column to have.
+
+`SET`/`DROP DEFAULT`
+
+These forms set or remove the default value for a column. Default values only apply in subsequent `INSERT` or `UPDATE` commands; they do not cause rows already in the table to change.
+
+`SET`/`DROP NOT NULL`
+
+Mark a column as allowing, or not allowing, null values.
+
+`SET STATISTICS`
+
+This form sets the per-column statistics-gathering target for subsequent [`ANALYZE`](sql-analyze.html) operations. See the similar form of [`ALTER TABLE`](sql-altertable.html) for more details.
+
+`SET ( *`attribute_option`* = *`value`* [, ... ] )`\
+`RESET ( *`attribute_option`* [, ... ] )`
+
+This form sets or resets per-attribute options. See the similar form of [`ALTER TABLE`](sql-altertable.html) for more details.
+
+`SET STORAGE`
+
+This form sets the storage mode for a column. See the similar form of [`ALTER TABLE`](sql-altertable.html) for more details. Note that the storage mode has no effect unless the table's foreign-data wrapper chooses to pay attention to it.
+
+`ADD *`table_constraint`* [ NOT VALID ]`
+
+This form adds a new constraint to a foreign table, using the same syntax as [`CREATE FOREIGN TABLE`](sql-createforeigntable.html). Currently only `CHECK` constraints are supported.
+
+Unlike the case when adding a constraint to a regular table, nothing is done to verify the constraint is correct; rather, this action simply declares that some new condition should be assumed to hold for all rows in the foreign table. (See the discussion in [`CREATE FOREIGN TABLE`](sql-createforeigntable.html).) If the constraint is marked `NOT VALID`, then it isn't assumed to hold, but is only recorded for possible future use.
+
+`VALIDATE CONSTRAINT`
+
+This form marks as valid a constraint that was previously marked as `NOT VALID`. No action is taken to verify the constraint, but future queries will assume that it holds.
+
+`DROP CONSTRAINT [ IF EXISTS ]`
+
+This form drops the specified constraint on a foreign table. If `IF EXISTS` is specified and the constraint does not exist, no error is thrown. In this case a notice is issued instead.
+
+`DISABLE`/`ENABLE [ REPLICA | ALWAYS ] TRIGGER`
+
+These forms configure the firing of trigger(s) belonging to the foreign table. See the similar form of [`ALTER TABLE`](sql-altertable.html) for more details.
+
+`SET WITHOUT OIDS`
+
+Backward compatibility syntax for removing the `oid` system column. As `oid` system columns cannot be added anymore, this never has an effect.
+
+`INHERIT *`parent_table`*`
+
+This form adds the target foreign table as a new child of the specified parent table. See the similar form of [`ALTER TABLE`](sql-altertable.html) for more details.
+
+`NO INHERIT *`parent_table`*`
+
+This form removes the target foreign table from the list of children of the specified parent table.
+
+`OWNER TO`
+
+This form changes the owner of the foreign table to the specified user.
+
+`OPTIONS ( [ ADD | SET | DROP ] *`option`* ['*`value`*'] [, ... ] )`
+
+Change options for the foreign table or one of its columns. `ADD`, `SET`, and `DROP` specify the action to be performed. `ADD` is assumed if no operation is explicitly specified. Duplicate option names are not allowed (although it's OK for a table option and a column option to have the same name). Option names and values are also validated using the foreign data wrapper library.
+
+`RENAME`
+
+The `RENAME` forms change the name of a foreign table or the name of an individual column in a foreign table.
+
+`SET SCHEMA`
+
+This form moves the foreign table into another schema.
+
+All the actions except `RENAME` and `SET SCHEMA` can be combined into a list of multiple alterations to apply in parallel. For example, it is possible to add several columns and/or alter the type of several columns in a single command.
+
+If the command is written as `ALTER FOREIGN TABLE IF EXISTS ...` and the foreign table does not exist, no error is thrown. A notice is issued in this case.
+
+You must own the table to use `ALTER FOREIGN TABLE`. To change the schema of a foreign table, you must also have `CREATE` privilege on the new schema. To alter the owner, you must also be a direct or indirect member of the new owning role, and that role must have `CREATE` privilege on the table's schema. (These restrictions enforce that altering the owner doesn't do anything you couldn't do by dropping and recreating the table. However, a superuser can alter ownership of any table anyway.) To add a column or alter a column type, you must also have `USAGE` privilege on the data type.
+
+## Parameters
+
+*`name`*
+
+The name (possibly schema-qualified) of an existing foreign table to alter. If `ONLY` is specified before the table name, only that table is altered. If `ONLY` is not specified, the table and all its descendant tables (if any) are altered. Optionally, `*` can be specified after the table name to explicitly indicate that descendant tables are included.
+
+*`column_name`*
+
+Name of a new or existing column.
+
+*`new_column_name`*
+
+New name for an existing column.
+
+*`new_name`*
+
+New name for the table.
+
+*`data_type`*
+
+Data type of the new column, or new data type for an existing column.
+
+*`table_constraint`*
+
+New table constraint for the foreign table.
+
+*`constraint_name`*
+
+Name of an existing constraint to drop.
+
+`CASCADE`
+
+Automatically drop objects that depend on the dropped column or constraint (for example, views referencing the column), and in turn all objects that depend on those objects (see [Section 5.14](ddl-depend.html)).
+
+`RESTRICT`
+
+Refuse to drop the column or constraint if there are any dependent objects. This is the default behavior.
+
+*`trigger_name`*
+
+Name of a single trigger to disable or enable.
+
+`ALL`
+
+Disable or enable all triggers belonging to the foreign table. (This requires superuser privilege if any of the triggers are internally generated triggers. The core system does not add such triggers to foreign tables, but add-on code could do so.)
+
+`USER`
+
+Disable or enable all triggers belonging to the foreign table except for internally generated triggers.
+
+*`parent_table`*
+
+A parent table to associate or de-associate with this foreign table.
+
+*`new_owner`*
+
+The user name of the new owner of the table.
+
+*`new_schema`*
+
+The name of the schema to which the table will be moved.
+
+## Notes
+
+The key word `COLUMN` is noise and can be omitted.
+
+Consistency with the foreign server is not checked when a column is added or removed with `ADD COLUMN` or `DROP COLUMN`, a `NOT NULL` or `CHECK` constraint is added, or a column type is changed with `SET DATA TYPE`. It is the user's responsibility to ensure that the table definition matches the remote side.
+
+Refer to [`CREATE FOREIGN TABLE`](sql-createforeigntable.html) for a further description of valid parameters.
+
+## Examples
+
+To mark a column as not-null:
+
+```
+ALTER FOREIGN TABLE distributors ALTER COLUMN street SET NOT NULL;
+```
+
+To change options of a foreign table:
+
+```
+ALTER FOREIGN TABLE myschema.distributors OPTIONS (ADD opt1 'value', SET opt2 'value2', DROP opt3 'value3');
+```
+
+## Compatibility
+
+The forms `ADD`, `DROP`, and `SET DATA TYPE` conform with the SQL standard. The other forms are PostgreSQL extensions of the SQL standard. Also, the ability to specify more than one manipulation in a single `ALTER FOREIGN TABLE` command is an extension.
+
+`ALTER FOREIGN TABLE DROP COLUMN` can be used to drop the only column of a foreign table, leaving a zero-column table. This is an extension of SQL, which disallows zero-column foreign tables.
+
+## See Also
+
+[CREATE FOREIGN TABLE](sql-createforeigntable.html), [DROP FOREIGN TABLE](sql-dropforeigntable.html)
diff --git a/docs/X/sql-alterfunction.md b/docs/en/sql-alterfunction.md
similarity index 100%
rename from docs/X/sql-alterfunction.md
rename to docs/en/sql-alterfunction.md
diff --git a/docs/en/sql-alterfunction.zh.md b/docs/en/sql-alterfunction.zh.md
new file mode 100644
index 0000000000000000000000000000000000000000..056e4f5fff68e0e03f96fe2c7fcd339722e14b01
--- /dev/null
+++ b/docs/en/sql-alterfunction.zh.md
@@ -0,0 +1,172 @@
+## ALTER FUNCTION
+
+ALTER FUNCTION — change the definition of a function
+
+## Synopsis
+
+```
+ALTER FUNCTION name [ ( [ [ argmode ] [ argname ] argtype [, ...] ] ) ]
+    action [ ... ] [ RESTRICT ]
+ALTER FUNCTION name [ ( [ [ argmode ] [ argname ] argtype [, ...] ] ) ]
+    RENAME TO new_name
+ALTER FUNCTION name [ ( [ [ argmode ] [ argname ] argtype [, ...] ] ) ]
+    OWNER TO { new_owner | CURRENT_ROLE | CURRENT_USER | SESSION_USER }
+ALTER FUNCTION name [ ( [ [ argmode ] [ argname ] argtype [, ...] ] ) ]
+    SET SCHEMA new_schema
+ALTER FUNCTION name [ ( [ [ argmode ] [ argname ] argtype [, ...] ] ) ]
+    [ NO ] DEPENDS ON EXTENSION extension_name
+
+where action is one of:
+
+    CALLED ON NULL INPUT | RETURNS NULL ON NULL INPUT | STRICT
+    IMMUTABLE | STABLE | VOLATILE
+    [ NOT ] LEAKPROOF
+    [ EXTERNAL ] SECURITY INVOKER | [ EXTERNAL ] SECURITY DEFINER
+    PARALLEL { UNSAFE | RESTRICTED | SAFE }
+    COST execution_cost
+    ROWS result_rows
+    SUPPORT support_function
+    SET configuration_parameter { TO | = } { value | DEFAULT }
+    SET configuration_parameter FROM CURRENT
+    RESET configuration_parameter
+    RESET ALL
+```
+
+## Description
+
+`ALTER FUNCTION` changes the definition of a function.
+
+You must own the function to use `ALTER FUNCTION`. To change a function's schema, you must also have `CREATE` privilege on the new schema. To alter the owner, you must also be a direct or indirect member of the new owning role, and that role must have `CREATE` privilege on the function's schema. (These restrictions enforce that altering the owner doesn't do anything you couldn't do by dropping and recreating the function. However, a superuser can alter ownership of any function anyway.)
+
+## Parameters
+
+*`name`*
+
+The name (optionally schema-qualified) of an existing function. If no argument list is specified, the name must be unique in its schema.
+
+*`argmode`*
+
+The mode of an argument: `IN`, `OUT`, `INOUT`, or `VARIADIC`. If omitted, the default is `IN`. Note that `ALTER FUNCTION` does not actually pay any attention to `OUT` arguments, since only the input arguments are needed to determine the function's identity. So it is sufficient to list the `IN`, `INOUT`, and `VARIADIC` arguments.
+
+*`argname`*
+
+The name of an argument. Note that `ALTER FUNCTION` does not actually pay any attention to argument names, since only the argument data types are needed to determine the function's identity.
+
+*`argtype`*
+
+The data type(s) of the function's arguments (optionally schema-qualified), if any.
+
+*`new_name`*
+
+The new name of the function.
+
+*`new_owner`*
+
+The new owner of the function. Note that if the function is marked `SECURITY DEFINER`, it will subsequently execute as the new owner.
+
+*`new_schema`*
+
+The new schema for the function.
+
+`DEPENDS ON EXTENSION *`extension_name`*`\
+`NO DEPENDS ON EXTENSION *`extension_name`*`
+
+This form marks the function as dependent on the extension, or no longer dependent on that extension if `NO` is specified. A function that is marked as dependent on an extension is automatically dropped when the extension is dropped.
+
+`CALLED ON NULL INPUT`\
+`RETURNS NULL ON NULL INPUT`\
+`STRICT`
+
+`CALLED ON NULL INPUT` changes the function so that it will be invoked when some or all of its arguments are null. `RETURNS NULL ON NULL INPUT` or `STRICT` changes the function so that it is not invoked if any of its arguments are null; instead, a null result is assumed automatically. See [CREATE FUNCTION](sql-createfunction.html) for more information.
+
+`IMMUTABLE`\
+`STABLE`\
+`VOLATILE`
+
+Change the volatility of the function to the specified setting. See [CREATE FUNCTION](sql-createfunction.html) for details.
+
+`[ EXTERNAL ] SECURITY INVOKER`\
+`[ EXTERNAL ] SECURITY DEFINER`
+
+Change whether the function is a security definer or not. The key word `EXTERNAL` is ignored for SQL conformance. See [CREATE FUNCTION](sql-createfunction.html) for more information about this capability.
+
+`PARALLEL`
+
+Change whether the function is deemed safe for parallelism. See [CREATE FUNCTION](sql-createfunction.html) for details.
+
+`LEAKPROOF`
+
+Change whether the function is considered leakproof or not. See [CREATE FUNCTION](sql-createfunction.html) for more information about this capability.
+
+`COST` *`execution_cost`*
+
+Change the estimated execution cost of the function. See [CREATE FUNCTION](sql-createfunction.html) for more information.
+
+`ROWS` *`result_rows`*
+
+Change the estimated number of rows returned by a set-returning function. See [CREATE FUNCTION](sql-createfunction.html) for more information.
+
+`SUPPORT` *`support_function`*
+
+Set or change the planner support function to use for this function. See [Section 38.11](xfunc-optimization.html) for details. You must be superuser to use this option.
+
+This option cannot be used to remove the support function altogether, since it must name a new support function. Use `CREATE OR REPLACE FUNCTION` if you need to do that.
+
+*`configuration_parameter`*\
+*`value`*
+
+Add or change the assignment to be made to a configuration parameter when the function is called. If *`value`* is `DEFAULT` or, equivalently, `RESET` is used, the function-local setting is removed, so that the function executes with the value present in its environment. Use `RESET ALL` to clear all function-local settings. `SET FROM CURRENT` saves the value of the parameter that is current when `ALTER FUNCTION` is executed as the value to be applied when the function is entered.
+
+See [SET](sql-set.html) and [Chapter 20](runtime-config.html) for more information about allowed parameter names and values.
+
+`RESTRICT`
+
+Ignored for conformance with the SQL standard.
+
+## Examples
+
+To rename the function `sqrt` for type `integer` to `square_root`:
+
+```
+ALTER FUNCTION sqrt(integer) RENAME TO square_root;
+```
+
+To change the owner of the function `sqrt` for type `integer` to `joe`:
+
+```
+ALTER FUNCTION sqrt(integer) OWNER TO joe;
+```
+
+To change the schema of the function `sqrt` for type `integer` to `maths`:
+
+```
+ALTER FUNCTION sqrt(integer) SET SCHEMA maths;
+```
+
+To mark the function `sqrt` for type `integer` as being dependent on the extension `mathlib`:
+
+```
+ALTER FUNCTION sqrt(integer) DEPENDS ON EXTENSION mathlib;
+```
+
+To adjust the search path that is automatically set for a function:
+
+```
+ALTER FUNCTION check_password(text) SET search_path = admin, pg_temp;
+```
+
+To disable automatic setting of `search_path` for a function:
+
+```
+ALTER FUNCTION check_password(text) RESET search_path;
+```
+
+The function will now execute with whatever search path is used by its caller.
+
+## Compatibility
+
+This statement is partially compatible with the `ALTER FUNCTION` statement in the SQL standard. The standard allows more properties of a function to be modified, but does not provide the ability to rename a function, make a function a security definer, attach configuration parameter values to a function, or change the owner, schema, or volatility of a function. The standard also requires the `RESTRICT` key word, which is optional in PostgreSQL.
+
+## See Also
+
+[CREATE FUNCTION](sql-createfunction.html), [DROP FUNCTION](sql-dropfunction.html), [ALTER PROCEDURE](sql-alterprocedure.html), [ALTER ROUTINE](sql-alterroutine.html)
diff --git a/docs/X/sql-altertable.md b/docs/en/sql-altertable.md
similarity index 100%
rename from docs/X/sql-altertable.md
rename to docs/en/sql-altertable.md
diff --git a/docs/en/sql-altertable.zh.md b/docs/en/sql-altertable.zh.md
new file mode 100644
index 0000000000000000000000000000000000000000..8f9e2ddec611c02a1033c07837f23d26992d3b94
--- /dev/null
+++ b/docs/en/sql-altertable.zh.md
@@ -0,0 +1,704 @@
+## ALTER TABLE
+
+ALTER TABLE — change the definition of a table
+
+## Synopsis
+
+```
+ALTER TABLE [ IF EXISTS ] [ ONLY ] name [ * ]
+    action [, ... ]
+ALTER TABLE [ IF EXISTS ] [ ONLY ] name [ * ]
+    RENAME [ COLUMN ] column_name TO new_column_name
+ALTER TABLE [ IF EXISTS ] [ ONLY ] name [ * ]
+    RENAME CONSTRAINT constraint_name TO new_constraint_name
+ALTER TABLE [ IF EXISTS ] name
+    RENAME TO new_name
+ALTER TABLE [ IF EXISTS ] name
+    SET SCHEMA new_schema
+ALTER TABLE ALL IN TABLESPACE name [ OWNED BY role_name [, ... ] ]
+    SET TABLESPACE new_tablespace [ NOWAIT ]
+ALTER TABLE [ IF EXISTS ] name
+    ATTACH PARTITION partition_name { FOR VALUES partition_bound_spec | DEFAULT }
+ALTER TABLE [ IF EXISTS ] name
+    DETACH PARTITION partition_name [ CONCURRENTLY | FINALIZE ]
+
+where action is one of:
+
+    ADD [ COLUMN ] [ IF NOT EXISTS ] column_name data_type [ COLLATE collation ] [ column_constraint [ ... ] ]
+    DROP [ COLUMN ] [ IF EXISTS ] column_name [ RESTRICT | CASCADE ]
+    ALTER [ COLUMN ] column_name [ SET DATA ] TYPE data_type [ COLLATE collation ] [ USING expression ]
+    ALTER [ COLUMN ] column_name SET DEFAULT expression
+    ALTER [ COLUMN ] column_name DROP DEFAULT
+    ALTER [ COLUMN ] column_name { SET | DROP } NOT NULL
+    ALTER [ COLUMN ] column_name DROP EXPRESSION [ IF EXISTS ]
+    ALTER [ COLUMN ] column_name ADD GENERATED { ALWAYS | BY DEFAULT } AS IDENTITY [ ( sequence_options ) ]
+    ALTER [ COLUMN ] column_name { SET GENERATED { ALWAYS | BY DEFAULT } | SET sequence_option | RESTART [ [ WITH ] restart ] } [...]
+    ALTER [ COLUMN ] column_name DROP IDENTITY [ IF EXISTS ]
+    ALTER [ COLUMN ] column_name SET STATISTICS integer
+    ALTER [ COLUMN ] column_name SET ( attribute_option = value [, ... ] )
+    ALTER [ COLUMN ] column_name RESET ( attribute_option [, ... ] )
+    ALTER [ COLUMN ] column_name SET STORAGE { PLAIN | EXTERNAL | EXTENDED | MAIN }
+    ALTER [ COLUMN ] column_name SET COMPRESSION compression_method
+    ADD table_constraint [ NOT VALID ]
+    ADD table_constraint_using_index
+    ALTER CONSTRAINT constraint_name [ DEFERRABLE | NOT DEFERRABLE ] [ INITIALLY DEFERRED | INITIALLY IMMEDIATE ]
+    VALIDATE CONSTRAINT constraint_name
+    DROP CONSTRAINT [ IF EXISTS ] constraint_name [ RESTRICT | CASCADE ]
+    DISABLE TRIGGER [ trigger_name | ALL | USER ]
+    ENABLE TRIGGER [ trigger_name | ALL | USER ]
+    ENABLE REPLICA TRIGGER trigger_name
+    ENABLE ALWAYS TRIGGER trigger_name
+    DISABLE RULE rewrite_rule_name
+    ENABLE RULE rewrite_rule_name
+    ENABLE REPLICA RULE rewrite_rule_name
+    ENABLE ALWAYS RULE rewrite_rule_name
+    DISABLE ROW LEVEL SECURITY
+    ENABLE ROW LEVEL SECURITY
+    FORCE ROW LEVEL SECURITY
+    NO FORCE ROW LEVEL SECURITY
+    CLUSTER ON index_name
+    SET WITHOUT CLUSTER
+    SET WITHOUT OIDS
+    SET TABLESPACE new_tablespace
+    SET { LOGGED | UNLOGGED }
+    SET ( storage_parameter [= value] [, ... ] )
+    RESET ( storage_parameter [, ... ] )
] ) + INHERIT parent_table + NO INHERIT parent_table + OF type_name + NOT OF + OWNER TO { new_owner | CURRENT_ROLE | CURRENT_USER | SESSION_USER } + REPLICA IDENTITY { DEFAULT | USING INDEX index_name | FULL | NOTHING } + +and partition_bound_spec is: + +IN ( partition_bound_expr [, ...] ) | +FROM ( { partition_bound_expr | MINVALUE | MAXVALUE } [, ...] ) + TO ( { partition_bound_expr | MINVALUE | MAXVALUE } [, ...] ) | +WITH ( MODULUS numeric_literal, REMAINDER numeric_literal ) + +and column_constraint is: + +[ CONSTRAINT constraint_name ] +{ NOT NULL | + NULL | + CHECK ( expression ) [ NO INHERIT ] | + DEFAULT default_expr | + GENERATED ALWAYS AS ( generation_expr ) STORED | + GENERATED { ALWAYS | BY DEFAULT } AS IDENTITY [ ( sequence_options ) ] | + UNIQUE index_parameters | + PRIMARY KEY index_parameters | + REFERENCES reftable [ ( refcolumn ) ] [ MATCH FULL | MATCH PARTIAL | MATCH SIMPLE ] + [ ON DELETE referential_action ] [ ON UPDATE referential_action ] } +[ DEFERRABLE | NOT DEFERRABLE ] [ INITIALLY DEFERRED | INITIALLY IMMEDIATE ] + +and table_constraint is: + +[ CONSTRAINT constraint_name ] +{ CHECK ( expression ) [ NO INHERIT ] | + UNIQUE ( column_name [, ... ] ) index_parameters | + PRIMARY KEY ( column_name [, ... ] ) index_parameters | + EXCLUDE [ USING index_method ] ( exclude_element WITH operator [, ... ] ) index_parameters [ WHERE ( predicate ) ] | + FOREIGN KEY ( column_name [, ... ] ) REFERENCES reftable [ ( refcolumn [, ... ] ) ] + [ MATCH FULL | MATCH PARTIAL | MATCH SIMPLE ] [ ON DELETE referential_action ] [ ON UPDATE referential_action ] } +[ DEFERRABLE | NOT DEFERRABLE ] [ INITIALLY DEFERRED | INITIALLY IMMEDIATE ] + +and table_constraint_using_index is: + + [ CONSTRAINT constraint_name ] + { UNIQUE | PRIMARY KEY } USING INDEX index_name + [ DEFERRABLE | NOT DEFERRABLE ] [ INITIALLY DEFERRED | INITIALLY IMMEDIATE ] + +index_parameters in UNIQUE, PRIMARY KEY, and EXCLUDE constraints are: + +[ INCLUDE ( column_name [, ... ] ) ] +[ WITH ( storage_parameter [= value] [, ... ] ) ] +[ USING INDEX TABLESPACE tablespace_name ] + +exclude_element in an EXCLUDE constraint is: + +{ column_name | ( expression ) } [ opclass ] [ ASC | DESC ] [ NULLS { FIRST | LAST } ] +``` + +## 描述 + +`更改表`更改现有表的定义。下面描述了几个子窗体。请注意,每个子表单所需的锁定级别可能不同。一个`访问独家`除非明确说明,否则会获取锁。当给出多个子命令时,获得的锁将是任何子命令所需的最严格的锁。 + +`添加列 [如果不存在]` + +此表单使用与以下相同的语法向表中添加一个新列[`创建表`](sql-createtable.html).如果`如果不存在`已指定且已存在具有此名称的列,则不会引发错误。 + +`删除列 [如果存在]` + +此表单从表中删除一列。涉及该列的索引和表约束也将被自动删除。如果删除列会导致统计信息仅包含单个列的数据,则引用已删除列的多变量统计信息也将被删除。你需要说`级联`如果表之外的任何内容取决于列,例如外键引用或视图。如果`如果存在`指定并且该列不存在,则不会引发错误。在这种情况下,将发出通知。 + +`设置数据类型` + +此表单更改表格列的类型。通过重新解析最初提供的表达式,涉及该列的索引和简单表约束将自动转换为使用新的列类型。可选的`整理`子句指定新列的排序规则;如果省略,则排序规则是新列类型的默认值。可选的`使用`子句指定如何从旧列值计算新列值;如果省略,则默认转换与从旧数据类型到新数据类型的赋值转换相同。一种`使用`如果没有从旧类型到新类型的隐式或赋值强制转换,则必须提供子句。 + +`放`/`删除默认值` + +这些表单设置或删除列的默认值(其中删除相当于将默认值设置为 NULL)。新的默认值仅适用于后续`插入`或者`更新`命令;它不会导致表中已经存在的行发生更改。 + +`放`/`删除不为空` + +这些形式改变了列被标记为允许空值还是拒绝空值。 + +`设置非空`只能应用于列,前提是表中的任何记录都不包含`空值`列的值。通常这是在检查期间`更改表`通过扫描整个表格;然而,如果一个有效的`查看`发现约束证明不`空值`可以存在,则跳过表扫描。 + +如果此表是分区,则无法执行`删除不为空`如果已标记,则在列上`非空`在父表中。放下`非空`来自所有分区的约束,执行`删除不为空`在父表上。即使没有`非空`constraint on the parent, such a constraint can still be added to individual partitions, if desired; that is, the children can disallow nulls even if the parent allows them, but not the other way around. + +`DROP EXPRESSION [ IF EXISTS ]` + +This form turns a stored generated column into a normal base column. Existing data in the columns is retained, but future changes will no longer apply the generation expression. 
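
For example, a minimal sketch (the table `orders` and its stored generated column `total` are hypothetical):

```
ALTER TABLE orders ALTER COLUMN total DROP EXPRESSION;
-- total keeps its current values but is no longer recomputed;
-- future INSERTs and UPDATEs must supply it like any ordinary column
```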
+ +If`DROP EXPRESSION IF EXISTS`is specified and the column is not a stored generated column, no error is thrown. In this case a notice is issued instead. + +`ADD GENERATED { ALWAYS | BY DEFAULT } AS IDENTITY`\ +`SET GENERATED { ALWAYS | BY DEFAULT }`\ +`DROP IDENTITY [ IF EXISTS ]` + +These forms change whether a column is an identity column or change the generation attribute of an existing identity column. See[`CREATE TABLE`](sql-createtable.html)for details. Like`SET DEFAULT`, these forms only affect the behavior of subsequent`INSERT`and`UPDATE`commands; they do not cause rows already in the table to change. + +If`DROP IDENTITY IF EXISTS`is specified and the column is not an identity column, no error is thrown. In this case a notice is issued instead. + +`SET *`sequence_option`*`\ +`RESTART` + +These forms alter the sequence that underlies an existing identity column.*`sequence_option`*is an option supported by[`ALTER SEQUENCE`](sql-altersequence.html)such as`增量`. + +`设置统计` + +这种形式为后续的每列统计收集目标设置[`分析`](sql-analyze.html)操作。目标可在 0 到 10000 范围内设置;或者,将其设置为 -1 以恢复使用系统默认统计目标 ([默认\_统计数据\_目标](runtime-config-query.html#GUC-DEFAULT-STATISTICS-TARGET))。有关 PostgreSQL 查询计划器使用统计信息的更多信息,请参阅[第 14.2 节](planner-stats.html). + +`设置统计`获得一个`共享更新独家`锁。 + +`放 ( *`属性选项`* = *`价值`* [, ... ] )`\ +`重置 ( *`属性选项`* [, ... ] )` + +此表单设置或重置每个属性选项。目前,唯一定义的每个属性选项是`n_distinct`和`n_distinct_inherited`,它覆盖了后续所做的不同值的数量估计[`分析`](sql-analyze.html)操作。`n_distinct`影响表本身的统计信息,而`n_distinct_inherited`影响为表及其继承子级收集的统计信息。当设置为正值时,`分析`将假定该列正好包含指定数量的不同非空值。当设置为负值时,必须大于或等于 -1,`分析`将假设列中不同的非空值的数量与表的大小成线性关系;确切的计数是通过将估计的表大小乘以给定数字的绝对值来计算的。例如,值 -1 意味着列中的所有值都是不同的,而值 -0.5 意味着每个值平均出现两次。当表的大小随时间变化时,这可能很有用,因为直到查询计划时间才执行乘以表中的行数。指定值 0 以恢复正常估计不同值的数量。有关 PostgreSQL 查询计划器使用统计信息的更多信息,请参阅[第 14.2 节](planner-stats.html). + +更改每个属性选项会获得`共享更新独家`锁。 + +`设置存储` [](<>) + +此表单设置列的存储模式。这控制此列是内联保存还是保存在辅助 TOAST 表中,以及是否应压缩数据。`清楚的`必须用于固定长度的值,例如`整数`并且是内联的,未压缩的。`主要的`用于内联的可压缩数据。`外部的`用于外部未压缩数据,并且`扩展`用于外部压缩数据。`扩展`是大多数支持非`清楚的`贮存。用于`外部的`将使子字符串操作非常大`文本`和`拜茶`值运行得更快,但会增加存储空间。注意`设置存储`它本身不会改变表格中的任何内容,它只是设置在未来表格更新期间要采用的策略。看[第 70.2 节](storage-toast.html)了解更多信息。 + +`设置压缩 *`压缩方法`*` + +此表单设置列的压缩方法,确定将来插入的值将如何压缩(如果存储模式完全允许压缩)。这不会导致表被重写,因此现有数据仍可能使用其他压缩方法进行压缩。如果表是用 pg 恢复的\_恢复,然后使用配置的压缩方法重写所有值。但是,当从另一个关系插入数据时(例如,通过`插入...选择`),源表中的值不一定会被解压,因此任何先前压缩的数据都可能保留其现有的压缩方法,而不是使用目标列的压缩方法重新压缩。支持的压缩方法是`pglz`和`lz4`.(`lz4`仅在以下情况下可用`--with-lz4`在构建 PostgreSQL 时使用。)此外,*`压缩方法`*可`default`, which selects the default behavior of consulting the[default_toast_compression](runtime-config-client.html#GUC-DEFAULT-TOAST-COMPRESSION)setting at the time of data insertion to determine the method to use. + +`ADD *`table_constraint`* [ NOT VALID ]` + +This form adds a new constraint to a table using the same constraint syntax as[`CREATE TABLE`](sql-createtable.html), plus the option`NOT VALID`, which is currently only allowed for foreign key and CHECK constraints. + +Normally, this form will cause a scan of the table to verify that all existing rows in the table satisfy the new constraint. But if the`NOT VALID`option is used, this potentially-lengthy scan is skipped. The constraint will still be enforced against subsequent inserts or updates (that is, they'll fail unless there is a matching row in the referenced table, in the case of foreign keys, or they'll fail unless the new row matches the specified check condition). But the database will not assume that the constraint holds for all rows in the table, until it is validated by using the`VALIDATE CONSTRAINT`option. 
See[Notes](sql-altertable.html#SQL-ALTERTABLE-NOTES)below for more information about using the`NOT VALID`option. + +Although most forms of`ADD *`table_constraint`*`require an`访问独家`锁,`添加外键`只需要一个`共享行独家`锁。注意`添加外键`还获得了一个`共享行独家`除了在声明约束的表上锁定之外,还锁定被引用的表。 + +将唯一键或主键约束添加到分区表时会应用其他限制;看[`创建表`](sql-createtable.html).此外,可能不会声明分区表上的外键约束`无效`目前。 + +`添加 *`table_constraint_using_index`*` + +这种形式增加了一个新的`首要的关键`要么`独特`基于现有唯一索引的表的约束。索引的所有列都将包含在约束中。 + +索引不能有表达式列,也不能是部分索引。此外,它必须是具有默认排序顺序的 b 树索引。这些限制确保索引等同于由常规构建的索引`添加主键`要么`添加唯一`命令。 + +如果`首要的关键`已指定,并且索引的列尚未标记`非空`, 那么这个命令会尝试做`ALTER COLUMN SET NOT NULL`针对每个这样的列。这需要全表扫描来验证列不包含空值。在所有其他情况下,这是一个快速操作。 + +如果提供了约束名称,则索引将被重命名以匹配约束名称。否则,约束将被命名为与索引相同。 + +执行此命令后,索引由约束“拥有”,就像索引已由常规构建一样`添加主键`要么`添加唯一`命令。特别是,删除约束也会使索引消失。 + +分区表目前不支持这种形式。 + +### 笔记 + +在需要添加新约束而又不长时间阻塞表更新的情况下,使用现有索引添加约束可能会很有帮助。为此,请使用创建索引`并发创建索引`,然后使用此语法将其安装为官方约束。请参见下面的示例。 + +`改变约束` + +这种形式改变了先前创建的约束的属性。目前只有外键约束可以改变。 + +`验证约束` + +此表单验证先前创建为的外键或检查约束`无效`,通过扫描表以确保没有不满足约束的行。如果约束已被标记为有效,则不会发生任何事情。(看[笔记](sql-altertable.html#SQL-ALTERTABLE-NOTES)下面解释了这个命令的用处。) + +该命令获取一个`共享更新独家`锁。 + +`删除约束 [如果存在]` + +此表单删除表上的指定约束以及约束下的任何索引。如果`如果存在`is specified and the constraint does not exist, no error is thrown. In this case a notice is issued instead. + +`DISABLE`/`ENABLE [ REPLICA | ALWAYS ] TRIGGER` + +These forms configure the firing of trigger(s) belonging to the table. A disabled trigger is still known to the system, but is not executed when its triggering event occurs. For a deferred trigger, the enable status is checked when the event occurs, not when the trigger function is actually executed. One can disable or enable a single trigger specified by name, or all triggers on the table, or only user triggers (this option excludes internally generated constraint triggers such as those that are used to implement foreign key constraints or deferrable uniqueness and exclusion constraints). Disabling or enabling internally generated constraint triggers requires superuser privileges; it should be done with caution since of course the integrity of the constraint cannot be guaranteed if the triggers are not executed. + +The trigger firing mechanism is also affected by the configuration variable[session_replication_role](runtime-config-client.html#GUC-SESSION-REPLICATION-ROLE). Simply enabled triggers (the default) will fire when the replication role is “origin” (the default) or “local”. Triggers configured as`ENABLE REPLICA`will only fire if the session is in “replica” mode, and triggers configured as`ENABLE ALWAYS`will fire regardless of the current replication role. + +The effect of this mechanism is that in the default configuration, triggers do not fire on replicas. This is useful because if a trigger is used on the origin to propagate data between tables, then the replication system will also replicate the propagated data, and the trigger should not fire a second time on the replica, because that would lead to duplication. However, if a trigger is used for another purpose such as creating external alerts, then it might be appropriate to set it to`ENABLE ALWAYS`so that it is also fired on replicas. + +This command acquires a`SHARE ROW EXCLUSIVE`lock. + +`DISABLE`/`ENABLE [ REPLICA | ALWAYS ] RULE` + +These forms configure the firing of rewrite rules belonging to the table. A disabled rule is still known to the system, but is not applied during query rewriting. The semantics are as for disabled/enabled triggers. 
This configuration is ignored for`ON SELECT`rules, which are always applied in order to keep views working even if the current session is in a non-default replication role. + +The rule firing mechanism is also affected by the configuration variable[session\_复制\_角色](runtime-config-client.html#GUC-SESSION-REPLICATION-ROLE),类似于上面描述的触发器。 + +`禁用`/`启用行级安全` + +这些表单控制属于表的行安全策略的应用。如果已启用且表不存在任何策略,则应用默认拒绝策略。请注意,即使禁用了行级安全性,表也可以存在策略。在这种情况下,政策将*不是*被应用并且策略将被忽略。也可以看看[`创建政策`](sql-createpolicy.html). + +`没有力量`/`强制行级安全` + +当用户是表所有者时,这些表单控制属于表的行安全策略的应用。如果启用,当用户是表所有者时将应用行级安全策略。如果禁用(默认),则当用户是表所有者时,不会应用行级安全性。也可以看看[`创建政策`](sql-createpolicy.html). + +`集群开启` + +此表单选择未来的默认索引[`簇`](sql-cluster.html)操作。它实际上并没有重新聚集表。 + +更改集群选项会获得`共享更新独家`锁。 + +`无簇集` + +此表单删除最近使用的[`簇`](sql-cluster.html)表中的索引规范。这会影响未指定索引的未来集群操作。 + +更改集群选项会获得`共享更新独家`锁。 + +`无 OID 设置` + +用于删除的向后兼容语法`样的`系统栏。作为`样的`无法再添加系统列,这永远不会产生影响。 + +`设置表空间` + +此表单将表的表空间更改为指定的表空间,并将与表关联的数据文件移动到新的表空间。表上的索引(如果有)不会移动;但它们可以单独移动`设置表空间`命令。当应用于分区表时,不会移动任何内容,但之后创建的任何分区`创建表分区`将使用该表空间,除非被`表空间`条款。 + +表空间中当前数据库中的所有表都可以使用`全部在表空间中`表单,它将先锁定所有要移动的表,然后再移动每个表。这种形式还支持`拥有者`,这只会移动指定角色拥有的表。如果`现在等待`如果指定了选项,则如果无法立即获取所需的所有锁,该命令将失败。请注意,此命令不会移动系统目录;采用`更改数据库`或明确的`更改表`如果需要,可以调用。这`信息模式`关系不被视为系统目录的一部分,将被移动。也可以看看[`CREATE TABLESPACE`](sql-createtablespace.html). + +`SET { LOGGED | UNLOGGED }` + +This form changes the table from unlogged to logged or vice-versa (see[`UNLOGGED`](sql-createtable.html#SQL-CREATETABLE-UNLOGGED)). It cannot be applied to a temporary table. + +`SET ( *`storage_parameter`* [= *`value`*] [, ... ] )` + +This form changes one or more storage parameters for the table. See[Storage Parameters](sql-createtable.html#SQL-CREATETABLE-STORAGE-PARAMETERS)in the[`CREATE TABLE`](sql-createtable.html)documentation for details on the available parameters. Note that the table contents will not be modified immediately by this command; depending on the parameter you might need to rewrite the table to get the desired effects. That can be done with[`VACUUM FULL`](sql-vacuum.html),[`CLUSTER`](sql-cluster.html)or one of the forms of`ALTER TABLE`that forces a table rewrite. For planner related parameters, changes will take effect from the next time the table is locked so currently executing queries will not be affected. + +`SHARE UPDATE EXCLUSIVE`lock will be taken for fillfactor, toast and autovacuum storage parameters, as well as the planner parameter`parallel_workers`. + +`RESET ( *`storage_parameter`* [, ... ] )` + +This form resets one or more storage parameters to their defaults. As with`放`,可能需要重写表才能完全更新表。 + +`继承 *`父表`*` + +此表单将目标表添加为指定父表的新子表。随后,针对父表的查询将包括目标表的记录。要作为子表添加,目标表必须已经包含与父表相同的所有列(它也可以有其他列)。列必须具有匹配的数据类型,并且如果它们具有`非空`父母中的约束,那么他们也必须有`非空`孩子身上的约束。 + +还必须有所有匹配的子表约束`查看`父级的约束,除了那些标记为不可继承的(即,用`ALTER TABLE ... 添加约束 ... 无继承`) 在父级中,它们被忽略;所有匹配的子表约束不得标记为不可继承。现在`独特`,`首要的关键`, 和`外键`不考虑约束,但将来可能会改变。 + +`没有继承 *`父表`*` + +此表单从指定父表的子表中删除目标表。针对父表的查询将不再包括从目标表中提取的记录。 + +`的 *`类型名称`*` + +此表单将表链接到复合类型,就好像`创建表`已经形成了。表的列名和类型列表必须与复合类型的列表精确匹配。该表不得从任何其他表继承。这些限制确保`创建表`将允许等效的表定义。 + +`不属于` + +此表单将类型化表与其类型分离。 + +`拥有者` + +此表单将表、序列、视图、物化视图或外部表的所有者更改为指定用户。 + +`副本身份` + +此表单更改写入预写日志的信息,以识别更新或删除的行。除非正在使用逻辑复制,否则此选项无效。在所有情况下,都不会记录旧值,除非至少要记录的列之一在行的旧版本和新版本之间有所不同。 + +`默认` + +记录主键列的旧值(如果有)。这是非系统表的默认值。 + +`使用索引 *`索引名称`*` + +记录命名索引所涵盖的列的旧值,这些值必须是唯一的、非部分的、不可延迟的,并且仅包括标记的列`非空`.如果删除此索引,则行为与`没有`. 
+ +`满的` + +记录行中所有列的旧值。 + +`没有` + +不记录有关旧行的信息。这是系统表的默认设置。 + +`改名` + +这`改名`表单更改表(或索引、序列、视图、物化视图或外部表)的名称、表中单个列的名称或表的约束名称。重命名具有基础索引的约束时,索引也会被重命名。对存储的数据没有影响。 + +`设置架构` + +这种形式将表移动到另一个模式中。表列所拥有的关联索引、约束和序列也会被移动。 + +`附加分区 *`partition_name`* { FOR VALUES *`partition_bound_spec`* | DEFAULT }` + +This form attaches an existing table (which might itself be partitioned) as a partition of the target table. The table can be attached as a partition for specific values using`FOR VALUES`or as a default partition by using`DEFAULT`. For each index in the target table, a corresponding one will be created in the attached table; or, if an equivalent index already exists, it will be attached to the target table's index, as if`ALTER INDEX ATTACH PARTITION`had been executed. Note that if the existing table is a foreign table, it is currently not allowed to attach the table as a partition of the target table if there are`UNIQUE`indexes on the target table. (See also[CREATE FOREIGN TABLE](sql-createforeigntable.html).) For each user-defined row-level trigger that exists in the target table, a corresponding one is created in the attached table. + +A partition using`FOR VALUES`uses same syntax for*`partition_bound_spec`*as[`CREATE TABLE`](sql-createtable.html). The partition bound specification must correspond to the partitioning strategy and partition key of the target table. The table to be attached must have all the same columns as the target table and no more; moreover, the column types must also match. Also, it must have all the`NOT NULL`and`CHECK`constraints of the target table. Currently`FOREIGN KEY`constraints are not considered.`UNIQUE`and`首要的关键`如果父表中的约束不存在,则将在分区中创建它们。如果任何一个`查看`被附加的表的约束被标记`没有继承`,命令将失败;必须在没有`没有继承`条款。 + +如果新分区是常规表,则执行全表扫描以检查表中的现有行是否违反分区约束。可以通过添加一个有效的`查看`对表的约束,在运行此命令之前只允许满足所需分区约束的行。这`查看`约束将用于确定不需要扫描表来验证分区约束。但是,如果任何分区键是表达式并且分区不接受,则这不起作用`空值`价值观。如果附加一个不接受的列表分区`空值`值,还添加`非空`对分区键列的约束,除非它是一个表达式。 + +如果新分区是外表,则不做任何事情来验证外表中的所有行是否符合分区约束。(见讨论[创建外表](sql-createforeigntable.html)关于外表的约束。) + +当表具有默认分区时,定义新分区会更改默认分区的分区约束。默认分区不能包含任何需要移动到新分区的行,并且将被扫描以验证是否不存在任何行。这种扫描,就像对新分区的扫描一样,如果适当的话可以避免`查看`存在约束。也和新分区的扫描一样,默认分区是外表时总是跳过。 + +附加一个分区会获得一个`共享更新独家`锁定父表,除了`访问独家`锁定正在附加的表和默认分区(如果有)。 + +Further locks must also be held on all sub-partitions if the table being attached is itself a partitioned table. Likewise if the default partition is itself a partitioned table. The locking of the sub-partitions can be avoided by adding a`CHECK`constraint as described in[Section 5.11.2.2](ddl-partitioning.html#DDL-PARTITIONING-DECLARATIVE-MAINTENANCE). + +`DETACH PARTITION *`partition_name`* [ CONCURRENTLY | FINALIZE ]` + +This form detaches the specified partition of the target table. The detached partition continues to exist as a standalone table, but no longer has any ties to the table from which it was detached. Any indexes that were attached to the target table's indexes are detached. Any triggers that were created as clones of those in the target table are removed.`SHARE`lock is obtained on any tables that reference this partitioned table in foreign key constraints. + +If`CONCURRENTLY`is specified, it runs using a reduced lock level to avoid blocking other sessions that might be accessing the partitioned table. In this mode, two transactions are used internally. During the first transaction, a`SHARE UPDATE EXCLUSIVE`lock is taken on both parent table and partition, and the partition is marked as undergoing detach; at that point, the transaction is committed and all other transactions using the partitioned table are waited for. 
Once all those transactions have completed, the second transaction acquires`SHARE UPDATE EXCLUSIVE`on the partitioned table and`ACCESS EXCLUSIVE`on the partition, and the detach process completes. A`CHECK`constraint that duplicates the partition constraint is added to the partition.`CONCURRENTLY`cannot be run in a transaction block and is not allowed if the partitioned table contains a default partition. + +If`FINALIZE`is specified, a previous`DETACH CONCURRENTLY`invocation that was canceled or interrupted is completed. At most one partition in a partitioned table can be pending detach at a time. + +All the forms of ALTER TABLE that act on a single table, except`RENAME`,`SET SCHEMA`,`ATTACH PARTITION`, and`DETACH PARTITION`can be combined into a list of multiple alterations to be applied together. For example, it is possible to add several columns and/or alter the type of several columns in a single command. This is particularly useful with large tables, since only one pass over the table need be made. + +You must own the table to use`ALTER TABLE`. To change the schema or tablespace of a table, you must also have`CREATE`privilege on the new schema or tablespace. To add the table as a new child of a parent table, you must own the parent table as well. Also, to attach a table as a new partition of the table, you must own the table being attached. To alter the owner, you must also be a direct or indirect member of the new owning role, and that role must have`CREATE`privilege on the table's schema. (These restrictions enforce that altering the owner doesn't do anything you couldn't do by dropping and recreating the table. However, a superuser can alter ownership of any table anyway.) To add a column or alter a column type or use the`OF`clause, you must also have`USAGE`privilege on the data type. + +## Parameters + +`IF EXISTS` + +Do not throw an error if the table does not exist. A notice is issued in this case. + +*`name`* + +The name (optionally schema-qualified) of an existing table to alter. If`ONLY`is specified before the table name, only that table is altered. If`ONLY`is not specified, the table and all its descendant tables (if any) are altered. Optionally,`*`can be specified after the table name to explicitly indicate that descendant tables are included. 
+ +*`列名`* + +新列或现有列的名称。 + +*`新列名`* + +现有列的新名称。 + +*`新名字`* + +表的新名称。 + +*`数据类型`* + +新列的数据类型,或现有列的新数据类型。 + +*`表约束`* + +表的新表约束。 + +*`约束名`* + +新约束或现有约束的名称。 + +`级联` + +自动删除依赖于已删除列或约束的对象(例如,引用该列的视图),然后依次删除依赖于这些对象的所有对象(请参阅[第 5.14 节](ddl-depend.html))。 + +`严格` + +如果有任何依赖对象,则拒绝删除列或约束。这是默认行为。 + +*`触发器名称`* + +要禁用或启用的单个触发器的名称。 + +`全部` + +禁用或启用属于该表的所有触发器。(如果任何触发器是内部生成的约束触发器,例如用于实现外键约束或可延迟唯一性和排除约束的触发器,则这需要超级用户权限。) + +`用户` + +禁用或启用属于表的所有触发器,内部生成的约束触发器除外,例如用于实现外键约束或可延迟唯一性和排除约束的触发器。 + +*`索引名称`* + +现有索引的名称。 + +*`存储参数`* + +表存储参数的名称。 + +*`价值`* + +表存储参数的新值。这可能是一个数字或一个单词,具体取决于参数。 + +*`父表`* + +要与此表关联或取消关联的父表。 + +*`新主人`* + +表的新所有者的用户名。 + +*`新表空间`* + +表将被移动到的表空间的名称。 + +*`新模式`* + +表将被移动到的模式的名称。 + +*`分区名`* + +要作为新分区附加或从该表分离的表的名称。 + +*`partition_bound_spec`* + +新分区的分区绑定规范。参考[创建表](sql-createtable.html)有关相同语法的更多详细信息。 + +## 笔记 + +关键词`柱子`是噪声,可以省略。 + +当添加一列时`添加列`和非易失性`默认`如果指定,则在执行语句时评估默认值,并将结果存储在表的元数据中。该值将用于所有现有行的列。如果不`默认`已指定,则使用 NULL。在这两种情况下都不需要重写表。 + +添加具有 volatile 的列`默认`或更改现有列的类型将需要重写整个表及其索引。作为一个例外,在更改现有列的类型时,如果`使用`子句不会更改列内容,并且旧类型要么是二进制可强制转换为新类型,要么是新类型上的不受约束的域,不需要表重写;但仍必须重建受影响列上的任何索引。对于大型表,表和/或索引重建可能需要大量时间;并且将暂时需要两倍的磁盘空间。 + +添加一个`查看`要么`非空`约束需要扫描表以验证现有行是否满足约束,但不需要重写表。 + +类似地,在附加新分区时,可能会对其进行扫描以验证现有行是否满足分区约束。 + +提供在单个文件中指定多个更改的选项的主要原因`更改表`是多个表扫描或重写可以组合成一个单一的遍历表。 + +扫描大表以验证新的外键或检查约束可能需要很长时间,并且对表的其他更新被锁定,直到`ALTER TABLE 添加约束`命令已提交。的主要目的`无效`约束选项是减少添加约束对并发更新的影响。和`无效`, 这`添加约束`命令不扫描表,可以立即提交。之后,一个`验证约束`可以发出命令来验证现有行是否满足约束。验证步骤不需要锁定并发更新,因为它知道其他事务将对它们插入或更新的行执行约束;只需要检查预先存在的行。因此,验证只获得一个`共享更新独家`锁定正在更改的表。(如果约束是外键,那么`行共享`约束所引用的表也需要锁。)除了提高并发性之外,使用`无效`和`验证约束`在已知表格包含预先存在的违规行为的情况下。一旦约束到位,就不能插入新的违规行为,并且可以随意纠正现有问题,直到`验证约束`终于成功了。 + +这`删除列`form 不会物理删除该列,而只是使其对 SQL 操作不可见。表中的后续插入和更新操作将存储该列的空值。因此,删除列很快,但不会立即减少表的磁盘大小,因为被删除列占用的空间不会被回收。随着现有行的更新,空间将随着时间的推移而被回收。 + +要强制立即回收被丢弃的列占用的空间,您可以执行以下形式之一`更改表`执行整个表的重写。这将导致重建每一行,并将删除的列替换为空值。 + +重写的形式`更改表`不是 MVCC 安全的。表重写后,如果并发事务使用的是在重写发生之前拍摄的快照,则表将显示为空。看[第 13.5 节](mvcc-caveats.html)更多细节。 + +这`使用`选项`设置数据类型`实际上可以指定任何涉及该行旧值的表达式;也就是说,它可以引用其他列以及正在转换的列。这允许使用`设置数据类型`句法。由于这种灵活性,`使用`表达式不适用于列的默认值(如果有);结果可能不是默认值所需的常量表达式。这意味着当没有从旧类型到新类型的隐式或赋值转换时,`设置数据类型`可能无法转换默认值,即使`使用`条款提供。在这种情况下,删除默认值`删除默认值`, 执行`改变类型`,然后使用`默认设置`添加合适的新默认值。类似的考虑适用于涉及列的索引和约束。 + +如果一个表有任何后代表,则不允许在父表中添加、重命名或更改列的类型,而不对后代进行相同的操作。这可确保后代始终具有与父级匹配的列。同样,一个`查看`约束不能在父级中重命名而不在所有后代中重命名,因此`查看`约束条件也在父代与其后代之间匹配。(但是,该限制不适用于基于索引的约束。)此外,由于从父项中选择也从其后代中选择,因此对父项的约束不能标记为有效,除非它也对这些后代标记为有效。在所有这些情况下,`仅更改表`将被拒绝。 + +递归`删除列`仅当后代不从任何其他父级继承该列并且从未对该列进行独立定义时,操作才会删除后代表的列。非递归的`删除列`(IE。,`仅更改表...删除列`) 从不删除任何后代列,而是将它们标记为独立定义而不是继承。非递归的`删除列`对于分区表,命令将失败,因为表的所有分区必须具有与分区根相同的列。 + +标识列的操作 (`添加生成`,`放`ETC。,`放弃身份`),以及动作`扳机`,`簇`,`所有者`, 和`表空间`从不递归到后代表;也就是说,他们总是表现得好像`只要`被指定。添加约束仅针对`查看`未标记的约束`没有继承`. 
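
A brief sketch of the recursion rules just described (the parent table `cities` and role `joe` are hypothetical; assume `cities` has inheritance children):

```
ALTER TABLE cities DISABLE TRIGGER USER;  -- acts as though ONLY were given; children's triggers stay enabled
ALTER TABLE cities OWNER TO joe;          -- likewise, children keep their previous owner
```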
+ +不允许更改系统目录表的任何部分。 + +参考[创建表](sql-createtable.html)有关有效参数的进一步说明。[第 5 章](ddl.html)有更多关于继承的信息。 + +## 例子 + +添加类型的列`varchar`到一张桌子: + +``` +ALTER TABLE distributors ADD COLUMN address varchar(30); +``` + +这将导致表中的所有现有行都被新列的空值填充。 + +添加具有非空默认值的列: + +``` +ALTER TABLE measurements + ADD COLUMN mtime timestamp with time zone DEFAULT now(); +``` + +现有行将填充当前时间作为新列的值,然后新行将接收其插入时间。 + +要添加一列并使用与稍后使用的默认值不同的值填充它: + +``` +ALTER TABLE transactions + ADD COLUMN status varchar(30) DEFAULT 'old', + ALTER COLUMN status SET default 'current'; +``` + +现有行将被填充`老的`,但随后命令的默认值将是`当前的`.效果与两个子命令分别发出一样`更改表`命令。 + +要从表中删除列: + +``` +ALTER TABLE distributors DROP COLUMN address RESTRICT; +``` + +要在一个操作中更改两个现有列的类型: + +``` +ALTER TABLE distributors + ALTER COLUMN address TYPE varchar(80), + ALTER COLUMN name TYPE varchar(100); +``` + +将包含 Unix 时间戳的整数列更改为`带时区的时间戳`通过一个`使用`条款: + +``` +ALTER TABLE foo + ALTER COLUMN foo_timestamp SET DATA TYPE timestamp with time zone + USING + timestamp with time zone 'epoch' + foo_timestamp * interval '1 second'; +``` + +同样,当列具有不会自动转换为新数据类型的默认表达式时: + +``` +ALTER TABLE foo + ALTER COLUMN foo_timestamp DROP DEFAULT, + ALTER COLUMN foo_timestamp TYPE timestamp with time zone + USING + timestamp with time zone 'epoch' + foo_timestamp * interval '1 second', + ALTER COLUMN foo_timestamp SET DEFAULT now(); +``` + +要重命名现有列: + +``` +ALTER TABLE distributors RENAME COLUMN address TO city; +``` + +要重命名现有表: + +``` +ALTER TABLE distributors RENAME TO suppliers; +``` + +要重命名现有约束: + +``` +ALTER TABLE distributors RENAME CONSTRAINT zipchk TO zip_check; +``` + +向列添加非空约束: + +``` +ALTER TABLE distributors ALTER COLUMN street SET NOT NULL; +``` + +要从列中删除非空约束: + +``` +ALTER TABLE distributors ALTER COLUMN street DROP NOT NULL; +``` + +向表及其所有子表添加检查约束: + +``` +ALTER TABLE distributors ADD CONSTRAINT zipchk CHECK (char_length(zipcode) = 5); +``` + +要将检查约束仅添加到表而不添加到其子表: + +``` +ALTER TABLE distributors ADD CONSTRAINT zipchk CHECK (char_length(zipcode) = 5) NO INHERIT; +``` + +(未来的孩子也不会继承检查约束。) + +要从表及其所有子表中删除检查约束: + +``` +ALTER TABLE distributors DROP CONSTRAINT zipchk; +``` + +仅从一个表中删除检查约束: + +``` +ALTER TABLE ONLY distributors DROP CONSTRAINT zipchk; +``` + +(所有子表的检查约束仍然存在。) + +向表中添加外键约束: + +``` +ALTER TABLE distributors ADD CONSTRAINT distfk FOREIGN KEY (address) REFERENCES addresses (address); +``` + +将外键约束添加到对其他工作影响最小的表中: + +``` +ALTER TABLE distributors ADD CONSTRAINT distfk FOREIGN KEY (address) REFERENCES addresses (address) NOT VALID; +ALTER TABLE distributors VALIDATE CONSTRAINT distfk; +``` + +向表中添加(多列)唯一约束: + +``` +ALTER TABLE distributors ADD CONSTRAINT dist_id_zipcode_key UNIQUE (dist_id, zipcode); +``` + +要将自动命名的主键约束添加到表中,请注意表只能有一个主键: + +``` +ALTER TABLE distributors ADD PRIMARY KEY (dist_id); +``` + +要将表移动到不同的表空间: + +``` +ALTER TABLE distributors SET TABLESPACE fasttablespace; +``` + +要将表移动到不同的架构: + +``` +ALTER TABLE myschema.distributors SET SCHEMA yourschema; +``` + +要重新创建主键约束,在重建索引时不阻塞更新: + +``` +CREATE UNIQUE INDEX CONCURRENTLY dist_id_temp_idx ON distributors (dist_id); +ALTER TABLE distributors DROP CONSTRAINT distributors_pkey, + ADD CONSTRAINT distributors_pkey PRIMARY KEY USING INDEX dist_id_temp_idx; +``` + +要将分区附加到范围分区表: + +``` +ALTER TABLE measurement + ATTACH PARTITION measurement_y2016m07 FOR VALUES FROM ('2016-07-01') TO ('2016-08-01'); +``` + +要将分区附加到列表分区表: + +``` +ALTER TABLE cities + ATTACH PARTITION cities_ab FOR VALUES IN ('a', 'b'); +``` + +要将分区附加到散列分区表: + +``` +ALTER TABLE orders + ATTACH PARTITION orders_p4 FOR VALUES WITH (MODULUS 4, REMAINDER 3); +``` + +要将默认分区附加到分区表: + +``` 
ALTER TABLE cities
    ATTACH PARTITION cities_partdef DEFAULT;
```

To detach a partition from a partitioned table:

```
ALTER TABLE measurement
    DETACH PARTITION measurement_y2015m12;
```

## Compatibility

The forms `ADD` (without `USING INDEX`), `DROP [ COLUMN ]`, `DROP IDENTITY`, `RESTART`, `SET DEFAULT`, `SET DATA TYPE` (without `USING`), `SET GENERATED`, and `SET *`sequence_option`*` conform with the SQL standard. The other forms are PostgreSQL extensions of the SQL standard. Also, the ability to specify more than one manipulation in a single `ALTER TABLE` command is an extension.

`ALTER TABLE DROP COLUMN` can be used to drop the only column of a table, leaving a zero-column table. This is an extension of SQL, which disallows zero-column tables.

## See Also

[CREATE TABLE](sql-createtable.html) diff --git a/docs/X/sql-altertype.md b/docs/en/sql-altertype.md similarity index 100% rename from docs/X/sql-altertype.md rename to docs/en/sql-altertype.md diff --git a/docs/en/sql-altertype.zh.md b/docs/en/sql-altertype.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..e6c4b948bfdef422c5dbf684c827a902e4f7db26 --- /dev/null +++ b/docs/en/sql-altertype.zh.md @@ -0,0 +1,205 @@

## ALTER TYPE

ALTER TYPE — change the definition of a type

## Synopsis

```
ALTER TYPE name OWNER TO { new_owner | CURRENT_ROLE | CURRENT_USER | SESSION_USER }
ALTER TYPE name RENAME TO new_name
ALTER TYPE name SET SCHEMA new_schema
ALTER TYPE name RENAME ATTRIBUTE attribute_name TO new_attribute_name [ CASCADE | RESTRICT ]
ALTER TYPE name action [, ... ]
ALTER TYPE name ADD VALUE [ IF NOT EXISTS ] new_enum_value [ { BEFORE | AFTER } neighbor_enum_value ]
ALTER TYPE name RENAME VALUE existing_enum_value TO new_enum_value
ALTER TYPE name SET ( property = value [, ... ] )

where action is one of:

    ADD ATTRIBUTE attribute_name data_type [ COLLATE collation ] [ CASCADE | RESTRICT ]
    DROP ATTRIBUTE [ IF EXISTS ] attribute_name [ CASCADE | RESTRICT ]
    ALTER ATTRIBUTE attribute_name [ SET DATA ] TYPE data_type [ COLLATE collation ] [ CASCADE | RESTRICT ]
```

## Description

`ALTER TYPE` changes the definition of an existing type. There are several subforms:

`OWNER`

This form changes the owner of the type.

`RENAME`

This form changes the name of the type.

`SET SCHEMA`

This form moves the type into another schema.

`RENAME ATTRIBUTE`

This form is only usable with composite types. It changes the name of an individual attribute of the type.

`ADD ATTRIBUTE`

This form adds a new attribute to a composite type, using the same syntax as [`CREATE TYPE`](sql-createtype.html).

`DROP ATTRIBUTE [ IF EXISTS ]`

This form drops an attribute from a composite type. If `IF EXISTS` is specified and the attribute does not exist, no error is thrown. In this case a notice is issued instead.

`ALTER ATTRIBUTE ... SET DATA TYPE`

This form changes the type of an attribute of a composite type.

`ADD VALUE [ IF NOT EXISTS ] [ BEFORE | AFTER ]`

This form adds a new value to an enum type. The new value's place in the enum's ordering can be specified as being `BEFORE` or `AFTER` one of the existing values. Otherwise, the new item is added at the end of the list of values.

If `IF NOT EXISTS` is specified, it is not an error if the type already contains the new value: a notice is issued but no other action is taken. Otherwise, an error occurs if the new value is already present.

`RENAME VALUE`

This form renames a value of an enum type. The value's place in the enum's ordering is not affected. An error occurs if the specified value is not present or the new name is already present.

`SET ( *`property`* = *`value`* [, ...
] )` + +此表单仅适用于基本类型。它允许调整可以设置的基本类型属性的子集`创建类型`.具体来说,可以更改这些属性: + +- `收到`可以设置为二进制输入函数的名称,或者`没有`删除类型的二进制输入函数。使用此选项需要超级用户权限。 + +- `发送`可以设置为二进制输出函数的名称,或者`没有`删除类型的二进制输出函数。使用此选项需要超级用户权限。 + +- `TYPMOD_IN`可以设置为类型修饰符输入函数的名称,或者`没有`删除类型的类型修饰符输入函数。使用此选项需要超级用户权限。 + +- `TYPMOD_OUT`可以设置为类型修饰符输出函数的名称,或者`没有任何`删除类型的类型修饰符输出函数。使用此选项需要超级用户权限。 + +- `分析`可以设置为特定类型的统计信息收集函数的名称,或者`没有任何`删除类型的统计信息收集功能。使用此选项需要超级用户权限。 + +- `订阅`可以设置为特定类型的下标处理函数的名称,或者`没有任何`删除类型的下标处理函数。使用此选项需要超级用户权限。 + +- `贮存`[](<>)可以设置为`清楚的`,`扩展的`,`外部的`, 或者`主要的`(看[第 70.2 节](storage-toast.html)有关这些含义的更多信息)。然而,从改变`清楚的`到另一个设置需要超级用户权限(因为它要求类型的 C 函数都准备好 TOAST),并更改为`清楚的`根本不允许来自另一个设置(因为该类型可能已经在数据库中存在 TOASTed 值)。请注意,更改此选项本身不会更改任何存储的数据,它只是设置默认 TOAST 策略以用于将来创建的表列。看[更改表](sql-altertable.html)更改现有表列的 TOAST 策略。 + + 看[创建类型](sql-createtype.html)有关这些类型属性的更多详细信息。请注意,在适当的情况下,基类型的这些属性的更改将自动传播到基于该类型的域。 + + 这`添加属性`,`掉落属性`, 和`改变属性`动作可以组合成多个更改的列表以并行应用。例如,可以在单个命令中添加多个属性和/或更改多个属性的类型。 + + 您必须拥有要使用的类型`改变类型`.要更改类型的架构,您还必须具有`创造`新架构的特权。要更改所有者,您还必须是新所有者角色的直接或间接成员,并且该角色必须具有`创造`对类型架构的特权。(这些限制强制改变所有者不会做任何你不能通过删除和重新创建类型来做的事情。但是,超级用户无论如何都可以更改任何类型的所有权。)要添加属性或更改属性类型,您必须也有`用法`属性数据类型的特权。 + +## 参数 + +*`姓名`* + +要更改的现有类型的名称(可能是模式限定的)。 + +*`新名字`* + +类型的新名称。 + +*`新主人`* + +类型的新所有者的用户名。 + +*`新模式`* + +该类型的新架构。 + +*`属性名`* + +要添加、更改或删除的属性的名称。 + +*`新属性名称`* + +要重命名的属性的新名称。 + +*`数据类型`* + +要添加的属性的数据类型,或要更改的属性的新类型。 + +*`新枚举值`* + +要添加到枚举类型的值列表的新值,或要赋予现有值的新名称。像所有枚举文字一样,它需要被引用。 + +*`邻居枚举值`* + +在枚举类型的排序顺序之前或之后应立即添加新值的现有枚举值。像所有枚举文字一样,它需要被引用。 + +*`现有枚举值`* + +应该重命名的现有枚举值。像所有枚举文字一样,它需要被引用。 + +*`财产`* + +要修改的基本类型属性的名称;有关可能的值,请参见上文。 + +`级联` + +自动将操作传播到正在更改的类型的类型表及其后代。 + +`严格` + +如果要更改的类型是类型化表的类型,则拒绝该操作。这是默认设置。 + +## 笔记 + +如果`改变类型...增加价值`(向枚举类型添加新值的形式)在事务块内执行,直到事务提交后才能使用新值。 + +涉及添加枚举值的比较有时会比仅涉及枚举类型的原始成员的比较慢。这通常只会发生在`前`要么`后`用于将新值的排序位置设置在列表末尾以外的某个位置。但是,有时即使在最后添加了新值也会发生这种情况(如果 OID 计数器在最初创建枚举类型后“环绕”,则会发生这种情况)。放缓通常是微不足道的。但如果重要的话,可以通过删除和重新创建枚举类型或转储和重新加载数据库来重新获得最佳性能。 + +## 例子 + +要重命名数据类型: + +``` +ALTER TYPE electronic_mail RENAME TO email; +``` + +更改类型的所有者`电子邮件`到`乔`: + +``` +ALTER TYPE email OWNER TO joe; +``` + +更改类型的架构`电子邮件`到`顾客`: + +``` +ALTER TYPE email SET SCHEMA customers; +``` + +向复合类型添加新属性: + +``` +ALTER TYPE compfoo ADD ATTRIBUTE f3 int; +``` + +要将新值添加到特定排序位置的枚举类型: + +``` +ALTER TYPE colors ADD VALUE 'orange' AFTER 'red'; +``` + +要重命名枚举值: + +``` +ALTER TYPE colors RENAME VALUE 'purple' TO 'mauve'; +``` + +为现有的基本类型创建二进制 I/O 函数: + +``` +CREATE FUNCTION mytypesend(mytype) RETURNS bytea ...; +CREATE FUNCTION mytyperecv(internal, oid, integer) RETURNS mytype ...; +ALTER TYPE mytype SET ( + SEND = mytypesend, + RECEIVE = mytyperecv +); +``` + +## 兼容性 + +添加和删​​除属性的变体是 SQL 标准的一部分;其他变体是 PostgreSQL 扩展。 + +## 也可以看看 + +[创建类型](sql-createtype.html),[掉落类型](sql-droptype.html) diff --git a/docs/X/sql-alterview.md b/docs/en/sql-alterview.md similarity index 100% rename from docs/X/sql-alterview.md rename to docs/en/sql-alterview.md diff --git a/docs/en/sql-alterview.zh.md b/docs/en/sql-alterview.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..0d3b4c97fc5040eb7c47dfe5afc19fc5e0fe77f9 --- /dev/null +++ b/docs/en/sql-alterview.zh.md @@ -0,0 +1,99 @@ +## ALTER VIEW + +ALTER VIEW — change the definition of a view + +## Synopsis + +``` +ALTER VIEW [ IF EXISTS ] name ALTER [ COLUMN ] column_name SET DEFAULT expression +ALTER VIEW [ IF EXISTS ] name ALTER [ COLUMN ] column_name DROP DEFAULT +ALTER VIEW [ IF EXISTS ] name OWNER TO { new_owner | CURRENT_ROLE | CURRENT_USER | SESSION_USER } +ALTER VIEW [ IF EXISTS ] name RENAME [ COLUMN ] column_name TO new_column_name +ALTER VIEW [ IF EXISTS ] name RENAME TO new_name 
+ALTER VIEW [ IF EXISTS ] name SET SCHEMA new_schema +ALTER VIEW [ IF EXISTS ] name SET ( view_option_name [= view_option_value] [, ... ] ) +ALTER VIEW [ IF EXISTS ] name RESET ( view_option_name [, ... ] ) +``` + +## Description + +`ALTER VIEW`changes various auxiliary properties of a view. (If you want to modify the view's defining query, use`CREATE OR REPLACE VIEW`.) + +You must own the view to use`ALTER VIEW`. To change a view's schema, you must also have`CREATE`privilege on the new schema. To alter the owner, you must also be a direct or indirect member of the new owning role, and that role must have`CREATE`privilege on the view's schema. (These restrictions enforce that altering the owner doesn't do anything you couldn't do by dropping and recreating the view. However, a superuser can alter ownership of any view anyway.) + +## Parameters + +*`name`* + +The name (optionally schema-qualified) of an existing view. + +*`column_name`* + +Name of an existing column. + +*`new_column_name`* + +New name for an existing column. + +`IF EXISTS` + +Do not throw an error if the view does not exist. A notice is issued in this case. + +`SET`/`DROP DEFAULT` + +These forms set or remove the default value for a column. A view column's default value is substituted into any`INSERT`或者`更新`在为视图应用任何规则或触发器之前,其目标是视图的命令。因此,视图的默认值将优先于基础关系中的任何默认值。 + +*`新主人`* + +视图的新所有者的用户名。 + +*`新名字`* + +视图的新名称。 + +*`新模式`* + +视图的新架构。 + +`放 ( *`view_option_name`* [= *`view_option_value`*] [, ... ])`\ +`重置 ( *`view_option_name`* [, ... ] )` + +设置或重置视图选项。目前支持的选项有: + +`check_option`(`枚举`) + +更改视图的检查选项。值必须是`当地的`或者`级联`. + +`安全屏障`(`布尔值`) + +更改视图的安全屏障属性。该值必须是布尔值,例如`真的`要么`错误的`. + +## 笔记 + +由于历史原因,`更改表`也可以与视图一起使用;但唯一的变体`更改表`视图允许的与上面显示的相同。 + +## 例子 + +重命名视图`富`到`酒吧`: + +``` +ALTER VIEW foo RENAME TO bar; +``` + +要将默认列值附加到可更新视图: + +``` +CREATE TABLE base_table (id int, ts timestamptz); +CREATE VIEW a_view AS SELECT * FROM base_table; +ALTER VIEW a_view ALTER COLUMN ts SET DEFAULT now(); +INSERT INTO base_table(id) VALUES(1); -- ts will receive a NULL +INSERT INTO a_view(id) VALUES(2); -- ts will receive the current time +``` + +## 兼容性 + +`改变视图`是 SQL 标准的 PostgreSQL 扩展。 + +## 也可以看看 + +[创建视图](sql-createview.html),[下拉视图](sql-dropview.html) diff --git a/docs/X/sql-cluster.md b/docs/en/sql-cluster.md similarity index 100% rename from docs/X/sql-cluster.md rename to docs/en/sql-cluster.md diff --git a/docs/en/sql-cluster.zh.md b/docs/en/sql-cluster.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..465fa932dfac07085bca25a29879bae68119f5dc --- /dev/null +++ b/docs/en/sql-cluster.zh.md @@ -0,0 +1,99 @@ +## CLUSTER + +CLUSTER — cluster a table according to an index + +## Synopsis + +``` +CLUSTER [VERBOSE] table_name [ USING index_name ] +CLUSTER ( option [, ...] ) table_name [ USING index_name ] +CLUSTER [VERBOSE] + +where option can be one of: + + VERBOSE [ boolean ] +``` + +## Description + +`CLUSTER`instructs PostgreSQL to cluster the table specified by*`table_name`*based on the index specified by*`index_name`*. The index must already have been defined on*`table_name`*. + +When a table is clustered, it is physically reordered based on the index information. Clustering is a one-time operation: when the table is subsequently updated, the changes are not clustered. That is, no attempt is made to store new or updated rows according to their index order. (If one wishes, one can periodically recluster by issuing the command again. 
Also, setting the table's`fillfactor`storage parameter to less than 100% can aid in preserving cluster ordering during updates, since updated rows are kept on the same page if enough space is available there.) + +When a table is clustered, PostgreSQL remembers which index it was clustered by. The form`CLUSTER *`table_name`*`reclusters the table using the same index as before. You can also use the`CLUSTER`or`SET WITHOUT CLUSTER`forms of[`ALTER TABLE`](sql-altertable.html)to set the index to be used for future cluster operations, or to clear any previous setting. + +`CLUSTER`without any parameter reclusters all the previously-clustered tables in the current database that the calling user owns, or all such tables if called by a superuser. This form of`CLUSTER`cannot be executed inside a transaction block. + +当一个表被聚类时,一个`访问独家`锁定就可以了。这可以防止任何其他数据库操作(读取和写入)对表进行操作,直到`簇`完成了。 + +## 参数 + +*`表名`* + +表的名称(可能是模式限定的)。 + +*`索引名称`* + +索引的名称。 + +`详细` + +在每个表都聚集在一起时打印进度报告。 + +*`布尔值`* + +指定是否应打开或关闭所选选项。你可以写`真的`,`在`, 或者`1`启用该选项,并且`错误的`,`离开`, 或者`0`禁用它。这*`布尔值`*value 也可以省略,在这种情况下`真的`假设。 + +## 笔记 + +如果您在表中随机访问单行,则表中数据的实际顺序并不重要。但是,如果您倾向于访问某些数据而不是其他数据,并且有一个将它们组合在一起的索引,那么您将从使用`簇`.如果您要从表中请求一系列索引值,或者请求具有多行匹配的单个索引值,`簇`将有所帮助,因为一旦索引识别出匹配的第一行的表页,所有其他匹配的行可能已经在同一个表页上,因此您可以节省磁盘访问并加快查询速度。 + +`簇`可以使用指定索引上的索引扫描或(如果索引是 b 树)顺序扫描然后排序来重新排序表。它将尝试根据计划者成本参数和可用的统计信息选择更快的方法。 + +使用索引扫描时,会创建一个表的临时副本,其中包含按索引顺序排列的表数据。还会创建表上每个索引的临时副本。因此,您需要磁盘上的可用空间至少等于表大小和索引大小的总和。 + +当使用顺序扫描和排序时,还会创建一个临时排序文件,因此峰值临时空间需求是表大小的两倍,再加上索引大小。这种方法通常比索引扫描方法快,但是如果磁盘空间要求不能忍受,可以通过临时设置禁用这个选项[使能够\_种类](runtime-config-query.html#GUC-ENABLE-SORT)到`离开`. + +建议设置[维护\_工作\_内存](runtime-config-resource.html#GUC-MAINTENANCE-WORK-MEM)到一个相当大的值(但不超过您可以专用于`簇`操作)在聚类之前。 + +因为规划器记录了关于表排序的统计信息,所以建议运行[`分析`](sql-analyze.html)在新聚集的表上。否则,计划者可能会做出糟糕的查询计划选择。 + +因为`簇`记住哪些索引是集群的,可以在第一次手动集群想要集群的表,然后设置执行的定期维护脚本`簇`没有任何参数,以便定期重新聚集所需的表。 + +每个后端运行`簇`将报告其进展情况`pg_stat_progress_cluster`看法。看[第 28.4.4 节](progress-reporting.html#CLUSTER-PROGRESS-REPORTING)详情。 + +## 例子 + +对表进行聚类`雇员`根据其指数`employees_ind`: + +``` +CLUSTER employees USING employees_ind; +``` + +聚类`雇员`使用之前使用的相同索引的表: + +``` +CLUSTER employees; +``` + +集群数据库中之前已经集群的所有表: + +``` +CLUSTER; +``` + +## 兼容性 + +没有`簇`SQL 标准中的语句。 + +语法 + +``` +CLUSTER index_name ON table_name +``` + +还支持与 8.3 之前的 PostgreSQL 版本兼容。 + +## 也可以看看 + +[集群数据库](app-clusterdb.html),[第 28.4.4 节](progress-reporting.html#CLUSTER-PROGRESS-REPORTING) diff --git a/docs/X/sql-copy.md b/docs/en/sql-copy.md similarity index 100% rename from docs/X/sql-copy.md rename to docs/en/sql-copy.md diff --git a/docs/en/sql-copy.zh.md b/docs/en/sql-copy.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..dee0055ed1914e1706be263c689cf50d77433732 --- /dev/null +++ b/docs/en/sql-copy.zh.md @@ -0,0 +1,395 @@ +## 复制 + +COPY — 在文件和表之间复制数据 + +## 概要 + +``` +COPY table_name [ ( column_name [, ...] ) ] + FROM { 'filename' | PROGRAM 'command' | STDIN } + [ [ WITH ] ( option [, ...] ) ] + [ WHERE condition ] + +COPY { table_name [ ( column_name [, ...] ) ] | ( query ) } + TO { 'filename' | PROGRAM 'command' | STDOUT } + [ [ WITH ] ( option [, ...] ) ] + +where option can be one of: + + FORMAT format_name + FREEZE [ boolean ] + DELIMITER 'delimiter_character' + NULL 'null_string' + HEADER [ boolean ] + QUOTE 'quote_character' + ESCAPE 'escape_character' + FORCE_QUOTE { ( column_name [, ...] ) | * } + FORCE_NOT_NULL ( column_name [, ...] ) + FORCE_NULL ( column_name [, ...] 
) + ENCODING 'encoding_name' +``` + +## 描述 + +`复制`在 PostgreSQL 表和标准文件系统文件之间移动数据。`复制到`复制表格的内容*到*一个文件,而`复制自`复制数据*从*将文件添加到表中(将数据附加到表中已有的内容)。`复制到`也可以复制一个结果`选择`询问。 + +如果指定了列列表,`复制到`仅将指定列中的数据复制到文件中。为了`复制自`,文件中的每个字段都按顺序插入到指定的列中。表中未指定的列`复制自`列列表将收到它们的默认值。 + +`复制`使用文件名指示 PostgreSQL 服务器直接读取或写入文件。该文件必须可由 PostgreSQL 用户(服务器运行的用户 ID)访问,并且必须从服务器的角度指定名称。什么时候`程序`指定时,服务器执行给定的命令并从程序的标准输出读取,或写入程序的标准输入。该命令必须从服务器的角度指定,并且可由 PostgreSQL 用户执行。什么时候`标准输入`或者`标准输出`指定时,数据通过客户端和服务器之间的连接传输。 + +每个后端运行`复制`将报告其进展情况`pg_stat_progress_copy`看法。看[第 28.4.6 节](progress-reporting.html#COPY-PROGRESS-REPORTING)详情。 + +## 参数 + +*`表名`* + +现有表的名称(可选模式限定)。 + +*`列名`* + +要复制的列的可选列表。如果未指定列列表,则将复制表中除生成列之外的所有列。 + +*`询问`* + +一种[`选择`](sql-select.html),[`价值观`](sql-values.html),[`插入`](sql-insert.html),[`更新`](sql-update.html), 或者[`删除`](sql-delete.html)要复制其结果的命令。请注意,查询周围需要括号。 + +为了`插入`,`更新`和`删除`查询必须提供 RETURNING 子句,并且目标关系不能有条件规则,也不能`还`规则,也不是`反而`扩展到多个语句的规则。 + +*`文件名`* + +输入或输出文件的路径名。输入文件名可以是绝对路径或相对路径,但输出文件名必须是绝对路径。Windows 用户可能需要使用`E''`字符串并将路径名中使用的任何反斜杠加倍。 + +`程序` + +要执行的命令。在`复制自`,输入是从命令的标准输出中读取的,并且在`复制到`,输出被写入命令的标准输入。 + +请注意,该命令由 shell 调用,因此如果您需要将任何来自不受信任来源的参数传递给 shell 命令,则必须小心去除或转义任何可能对 shell 具有特殊含义的特殊字符。出于安全原因,最好使用固定的命令字符串,或者至少避免在其中传递任何用户输入。 + +`标准输入` + +指定输入来自客户端应用程序。 + +`标准输出` + +指定输出到客户端应用程序。 + +*`布尔值`* + +指定是否应打开或关闭所选选项。你可以写`真的`,`在`, 或者`1`启用该选项,并且`错误的`,`离开`, 或者`0`禁用它。这*`布尔值`*value 也可以省略,在这种情况下`真的`假设。 + +`格式` + +选择要读取或写入的数据格式:`文本`,`csv`(逗号分隔值),或`二进制`.默认是`文本`. + +`冻结` + +请求复制已冻结行的数据,就像在运行`真空冷冻`命令。这旨在作为初始数据加载的性能选项。仅当正在加载的表已在当前子事务中创建或截断时,行才会被冻结,没有游标打开并且该事务没有持有旧快照。目前无法执行`复制冻结`在分区表上。 + +请注意,一旦数据成功加载,所有其他会话将立即能够看到数据。这违反了 MVCC 可见性的正常规则,用户指定应该意识到这可能导致的潜在问题。 + +`分隔符` + +指定在文件的每一行(行)中分隔列的字符。默认是文本格式的制表符,逗号`CSV`格式。这必须是一个单字节字符。使用时不允许此选项`二进制`格式。 + +`空值` + +指定表示空值的字符串。默认是`\N`(反斜杠-N)文本格式,以及一个不带引号的空字符串`CSV`格式。在您不想区分空字符串和空字符串的情况下,您可能更喜欢使用文本格式的空字符串。使用时不允许此选项`二进制`格式。 + +### 笔记 + +使用时`复制自`,任何与此字符串匹配的数据项都将存储为空值,因此您应确保使用与您使用的相同的字符串`复制到`. + +`标题` + +指定文件包含带有文件中每一列名称的标题行。输出时,第一行包含表中的列名,输入时,第一行被忽略。此选项仅在使用时允许`CSV`格式。 + +`引用` + +指定引用数据值时要使用的引用字符。默认为双引号。这必须是一个单字节字符。此选项仅在使用时允许`CSV`格式。 + +`逃脱` + +指定应该出现在匹配的数据字符之前的字符`引用`价值。默认值与`引用`值(以便引用字符在数据中出现时加倍)。这必须是一个单字节字符。此选项仅在使用时允许`CSV`格式。 + +`FORCE_QUOTE` + +强制引用用于所有非`空值`每个指定列中的值。`空值`输出永远不会被引用。如果`*`是指定的,非`空值`值将在所有列中引用。此选项仅在`复制到`,并且仅在使用时`CSV`格式。 + +`FORCE_NOT_NULL` + +不要将指定列的值与空字符串匹配。在空字符串为空的默认情况下,这意味着空值将被读取为长度为零的字符串而不是空值,即使它们没有被引用。此选项仅在`复制自`,并且仅在使用时`CSV`格式。 + +`FORCE_NULL` + +将指定列的值与空字符串匹配,即使它已被引用,如果找到匹配项,请将值设置为`空值`.在空字符串为空的默认情况下,这会将带引号的空字符串转换为 NULL。此选项仅在`复制自`,并且仅在使用时`CSV`格式。 + +`编码` + +指定文件以*`编码名称`*.如果省略此选项,则使用当前客户端编码。有关详细信息,请参阅下面的注释。 + +`在哪里` + +可选的`在哪里`子句具有一般形式 + +``` +WHERE condition +``` + +在哪里*`(健康)状况`*是任何计算结果类型的表达式`布尔值`.任何不满足此条件的行都不会插入到表中。如果在实际行值替换任何变量引用时返回 true,则该行满足条件。 + +目前,子查询是不允许的`在哪里`表达式,并且评估看不到由`复制`本身(当表达式包含对`易挥发的`职能)。 + +## 输出 + +成功完成后,一个`复制`命令返回形式的命令标签 + +``` +COPY count +``` + +这*`数数`*是复制的行数。 + +### 笔记 + +psql 将仅在命令不存在时打印此命令标记`复制...到标准输出`,或等效的 psql 元命令`\copy ... 
到标准输出`.这是为了防止将命令标签与刚刚打印的数据混淆。 + +## 笔记 + +`复制到`只能用于普通表,不能用于视图,并且不能从子表或子分区复制行。例如,`复制 *`桌子`* 到`复制相同的行`仅从 * 中选择 *`桌子`*`.语法`复制(选择 * 从 *`桌子`*) 到 ...`可用于转储继承层次结构、分区表或视图中的所有行。 + +`复制自`可以与普通表、外部表或分区表一起使用,也可以与具有`代替插入`触发器。 + +您必须对读取其值的表具有选择权限`复制到`, 以及插入值的表的插入权限`复制自`.对命令中列出的列具有列权限就足够了。 + +如果为表启用了行级安全性,则相关的`选择`政策将适用于`复制 *`桌子`* 到`陈述。目前,`复制自`具有行级安全性的表不支持。使用等价物`插入`而是声明。 + +以 a 命名的文件`复制`命令由服务器直接读取或写入,而不是由客户端应用程序读取或写入。因此,它们必须驻留在数据库服务器机器上或可供其访问,而不是客户端。它们必须可由 PostgreSQL 用户(服务器运行的用户 ID)访问和读写,而不是客户端。同样,用指定的命令`程序`由服务器直接执行,而不是由客户端应用程序执行,必须由 PostgreSQL 用户执行。`复制`仅允许数据库超级用户或被授予其中一种角色的用户命名文件或命令`pg_read_server_files`,`pg_write_server_files`, 或者`pg_execute_server_program`,因为它允许读取或写入任何文件或运行服务器有权访问的程序。 + +不要混淆`复制`使用 psql 指令`[\copy](app-psql.html#APP-PSQL-META-COMMANDS-COPY)`.`\复制`调用`从标准输入复制`或者`复制到标准输出`,然后在 psql 客户端可访问的文件中获取/存储数据。因此,文件可访问性和访问权限取决于客户端而不是服务器`\复制`用来。 + +建议使用的文件名`复制`始终指定为绝对路径。这是由服务器在以下情况下强制执行的`复制到`, 但对于`复制自`您确实可以选择从相对路径指定的文件中读取。该路径将被解释为相对于服务器进程的工作目录(通常是集群的数据目录),而不是客户端的工作目录。 + +执行命令`程序`可能会受到操作系统的访问控制机制的限制,例如 SELinux。 + +`复制自`将调用任何触发器并检查目标表上的约束。但是,它不会调用规则。 + +对于标识列,`复制自`命令将始终写入输入数据中提供的列值,例如`插入`选项`压倒一切的系统价值`. + +`复制`输入和输出受`日期样式`.确保对可能使用非默认安装的其他 PostgreSQL 安装的可移植性`日期样式`设置,`日期样式`应该设置为`国际标准化组织`使用前`复制到`.避免使用`间隔样式`设置`sql_standard`,因为负间隔值可能会被具有不同设置的服务器误解为`间隔样式`. + +输入数据根据`编码`选项或当前客户端编码,输出数据编码为`编码`或当前客户端编码,即使数据不通过客户端而是由服务器直接读取或写入文件。 + +`复制`在第一个错误时停止操作。这不应该导致在发生问题时出现问题`复制到`,但目标表已经接收到较早的行`复制自`.这些行将不可见或不可访问,但它们仍会占用磁盘空间。如果故障恰好发生在大型复制操作中,这可能相当于浪费了大量的磁盘空间。您可能希望调用`真空`来回收浪费的空间。 + +`FORCE_NULL`和`FORCE_NOT_NULL`可以在同一列上同时使用。这导致将带引号的空字符串转换为空值,将不带引号的空字符串转换为空字符串。 + +## 文件格式 + +### 文本格式 + +当。。。的时候`文本`使用格式时,读取或写入的数据是一个文本文件,每行一行。行中的列由分隔符分隔。列值本身是每个属性数据类型的输出函数生成的或输入函数可接受的字符串。指定的空字符串用于代替空列。`复制自`如果输入文件的任何行包含比预期更多或更少的列,则会引发错误。 + +数据的结尾可以由仅包含反斜杠句点的单行表示(`\。`)。从文件中读取时不需要数据结束标记,因为文件结尾非常好;只有在使用 3.0 之前的客户端协议将数据复制到客户端应用程序或从客户端应用程序复制数据时才需要它。 + +反斜杠字符 (`\`) 可用于`复制`data 引用可能被视为行或列分隔符的数据字符。特别是以下字符*必须*如果它们作为列值的一部分出现,则以反斜杠开头:反斜杠本身、换行符、回车符和当前分隔符。 + +指定的空字符串由`复制到`不添加任何反斜杠;反过来,`复制自`在删除反斜杠之前将输入与空字符串匹配。因此,一个空字符串,如`\N`不能与实际数据值混淆`\N`(这将表示为`\\N`)。 + +以下特殊反斜杠序列由`复制自`: + +| 顺序 | 代表 | +| --- | --- | +| `\b` | 退格(ASCII 8) | +| `\f` | 换页 (ASCII 12) | +| `\n` | 换行符(ASCII 10) | +| `\r` | 回车 (ASCII 13) | +| `\t` | 制表符(ASCII 9) | +| `\v` | 垂直制表符 (ASCII 11) | +| `\`*`数字`*\|后跟一到三个八进制数字的反斜杠指定具有该数字代码的字节 | | +| `\x`*`数字`* | 反斜杠`x`后跟一个或两个十六进制数字指定具有该数字代码的字节 | + +目前,`复制到`永远不会发出八进制或十六进制数字的反斜杠序列,但它确实使用上面列出的其他序列来处理这些控制字符。 + +上表中未提及的任何其他反斜杠字符将被视为代表自身。但是,请注意不要添加不必要的反斜杠,因为这可能会意外生成与数据结束标记匹配的字符串 (`\。`) 或空字符串 (`\N`默认情况下)。这些字符串将在任何其他反斜杠处理完成之前被识别。 + +强烈建议应用程序生成`复制`数据转换数据换行符和回车到`\n`和`\r`序列分别。目前可以用反斜杠和回车来表示数据回车,用反斜杠和换行来表示数据换行。但是,这些表示形式可能不会在未来的版本中被接受。他们也很容易受到腐败的影响,如果`复制`文件在不同的机器之间传输(例如,从 Unix 到 Windows,反之亦然)。 + +所有反斜杠序列都在编码转换后进行解释。用八进制和十六进制数字反斜杠序列指定的字节必须在数据库编码中形成有效字符。 + +`复制到`将以 Unix 风格的换行符终止每一行(“`\n`”)。在 Microsoft Windows 上运行的服务器改为输出回车/换行符(“`\r\n`”),但仅适用于`复制`到服务器文件;为了跨平台的一致性,`复制到标准输出`总是发送“`\n`” 无论服务器平台如何。`复制自`可以处理以换行符、回车符或回车符/换行符结尾的行。为了降低由于作为数据的未反斜杠换行符或回车而导致的错误风险,`复制自`如果输入中的行结尾不完全相同,则会抱怨。 + +### CSV 格式 + +此格式选项用于导入和导出逗号分隔值 (`CSV`) 许多其他程序(例如电子表格)使用的文件格式。代替 PostgreSQL 标准文本格式使用的转义规则,它生成并识别常见的 CSV 转义机制。 + +每条记录中的值由`分隔符`特点。如果值包含分隔符,则`引用`性格`空值`字符串、回车符或换行符,则整个值都以`引用`字符,以及任何出现在 a 的值中`引用`性格或`逃脱`字符前面是转义字符。你也可以使用`FORCE_QUOTE`在输出非时强制引用`空值`特定列中的值。 + +这`CSV`格式没有标准的方法来区分`空值`来自空字符串的值。PostgreSQL的`复制`通过引用来处理这个问题。一种`空值`输出为`空值`参数字符串并且不被引用,而一个非`空值`匹配的值`空值`参数字符串被引用。例如,在默认设置下,`空值`写为不带引号的空字符串,而空字符串数据值用双引号 (`“”`)。读取值遵循类似的规则。您可以使用`FORCE_NOT_NULL`阻止`空值`特定列的输入比较。你也可以使用`FORCE_NULL`将带引号的空字符串数据值转换为`空值`. 
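
A hedged sketch combining these options (the table `items`, its columns `note` and `label`, and the file path are hypothetical):

```
COPY items FROM '/tmp/items.csv'
    WITH (FORMAT csv, FORCE_NULL (note), FORCE_NOT_NULL (label));
```

Here a quoted empty string in `note` is read as NULL, while an empty string in `label` is read as a zero-length string rather than NULL.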
+ +因为反斜杠在`CSV`格式,`\。`,数据结束标记,也可以显示为数据值。为避免任何误解,`\。`在一行中作为单独条目出现的数据值在输出时自动引用,并且在输入时,如果引用,则不会被解释为数据结束标记。如果您正在加载由另一个应用程序创建的文件,该文件具有一个未加引号的列并且可能具有值`\。`,您可能需要在输入文件中引用该值。 + +### 笔记 + +在`CSV`格式,所有字符都有意义。由空格或任何字符以外的字符包围的引用值`分隔符`, 将包括这些字符。如果您从填充的系统导入数据,这可能会导致错误`CSV`带有空白的线条到某个固定宽度。如果出现这种情况,您可能需要预处理`CSV`文件以删除尾随空格,然后再将数据导入 PostgreSQL。 + +### 笔记 + +CSV 格式将识别和生成带有包含嵌入式回车和换行符的引用值的 CSV 文件。因此,这些文件并不像文本格式文件那样严格地每表行一行。 + +### 笔记 + +许多程序会生成奇怪且偶尔不正常的 CSV 文件,因此文件格式与其说是标准,不如说是一种约定。因此,您可能会遇到一些无法使用此机制导入的文件,并且`复制`可能会产生其他程序无法处理的文件。 + +### 二进制格式 + +这`二进制`format option causes all data to be stored/read as binary format rather than as text. It is somewhat faster than the text and`CSV`formats, but a binary-format file is less portable across machine architectures and PostgreSQL versions. Also, the binary format is very data type specific; for example it will not work to output binary data from a`smallint`column and read it into an`integer`column, even though that would work fine in text format. + +The`binary`file format consists of a file header, zero or more tuples containing the row data, and a file trailer. Headers and data are in network byte order. + +### Note + +PostgreSQL releases before 7.4 used a different binary file format. + +#### File Header + +The file header consists of 15 bytes of fixed fields, followed by a variable-length header extension area. The fixed fields are: + +Signature + +11-byte sequence`PGCOPY\n\377\r\n\0`— note that the zero byte is a required part of the signature. (The signature is designed to allow easy identification of files that have been munged by a non-8-bit-clean transfer. This signature will be changed by end-of-line-translation filters, dropped zero bytes, dropped high bits, or parity changes.) + +Flags field + +32-bit integer bit mask to denote important aspects of the file format. Bits are numbered from 0 (LSB) to 31 (MSB). Note that this field is stored in network byte order (most significant byte first), as are all the integer fields used in the file format. Bits 16–31 are reserved to denote critical file format issues; a reader should abort if it finds an unexpected bit set in this range. Bits 0–15 are reserved to signal backwards-compatible format issues; a reader should simply ignore any unexpected bits set in this range. Currently only one flag bit is defined, and the rest must be zero: + +Bit 16 + +If 1, OIDs are included in the data; if 0, not. Oid system columns are not supported in PostgreSQL anymore, but the format still contains the indicator. + +Header extension area length + +32-bit integer, length in bytes of remainder of header, not including self. Currently, this is zero, and the first tuple follows immediately. Future changes to the format might allow additional data to be present in the header. A reader should silently skip over any header extension data it does not know what to do with. + +The header extension area is envisioned to contain a sequence of self-identifying chunks. The flags field is not intended to tell readers what is in the extension area. Specific design of header extension contents is left for a later release. + +This design allows for both backwards-compatible header additions (add header extension chunks, or set low-order flag bits) and non-backwards-compatible changes (set high-order flag bits to signal such changes, and add supporting data to the extension area if needed). + +#### Tuples + +Each tuple begins with a 16-bit integer count of the number of fields in the tuple. 
(Presently, all tuples in a table will have the same count, but that might not always be true.) Then, repeated for each field in the tuple, there is a 32-bit length word followed by that many bytes of field data. (The length word does not include itself, and can be zero.) As a special case, -1 indicates a NULL field value. No value bytes follow in the NULL case. + +There is no alignment padding or any other extra data between fields. + +目前,二进制格式文件中的所有数据值都假定为二进制格式(格式代码一)。预计未来的扩展可能会添加一个允许指定每列格式代码的标题字段。 + +要确定实际元组数据的适当二进制格式,您应该查阅 PostgreSQL 源,特别是`*发送`和`*接收`每个列的数据类型的函数(通常这些函数位于`src/后端/utils/adt/`源分发目录)。 + +如果文件中包含 OID,则 OID 字段紧跟在字段计数字之后。它是一个普通字段,只是它不包含在字段计数中。请注意,当前版本的 PostgreSQL 不支持 oid 系统列。 + +#### 文件预告片 + +文件尾部由一个包含 -1 的 16 位整数字组成。这很容易与元组的字段计数词区分开来。 + +如果字段计数字既不是 -1 也不是预期的列数,阅读器应该报告错误。这提供了额外的检查,以防止以某种方式与数据不同步。 + +## 例子 + +以下示例使用竖线 (`|`) 作为字段分隔符: + +``` +COPY country TO STDOUT (DELIMITER '|'); +``` + +要将文件中的数据复制到`国家`桌子: + +``` +COPY country FROM '/usr1/proj/bray/sql/country_data'; +``` + +要将名称以“A”开头的国家/地区复制到文件中: + +``` +COPY (SELECT * FROM country WHERE country_name LIKE 'A%') TO '/usr1/proj/bray/sql/a_list_countries.copy'; +``` + +要复制到压缩文件中,您可以通过外部压缩程序管道输出: + +``` +COPY country TO PROGRAM 'gzip > /usr1/proj/bray/sql/country_data.gz'; +``` + +这是一个适合复制到表中的数据示例`标准输入`: + +``` +AF AFGHANISTAN +AL ALBANIA +DZ ALGERIA +ZM ZAMBIA +ZW ZIMBABWE +``` + +请注意,每一行的空格实际上是一个制表符。 + +以下是相同的数据,以二进制格式输出。数据通过 Unix 实用程序过滤后显示`od -c`.该表有三列;第一个有类型`字符(2)`,第二个有类型`文本`, 第三个有类型`整数`.所有行在第三列中都有一个空值。 + +``` +0000000 P G C O P Y \n 377 \r \n \0 \0 \0 \0 \0 \0 +0000020 \0 \0 \0 \0 003 \0 \0 \0 002 A F \0 \0 \0 013 A +0000040 F G H A N I S T A N 377 377 377 377 \0 003 +0000060 \0 \0 \0 002 A L \0 \0 \0 007 A L B A N I +0000100 A 377 377 377 377 \0 003 \0 \0 \0 002 D Z \0 \0 \0 +0000120 007 A L G E R I A 377 377 377 377 \0 003 \0 \0 +0000140 \0 002 Z M \0 \0 \0 006 Z A M B I A 377 377 +0000160 377 377 \0 003 \0 \0 \0 002 Z W \0 \0 \0 \b Z I +0000200 M B A B W E 377 377 377 377 377 377 +``` + +## 兼容性 + +没有`复制`SQL 标准中的语句。 + +以下语法在 PostgreSQL 版本 9.0 之前使用并且仍然受支持: + +``` +COPY table_name [ ( column_name [, ...] ) ] + FROM { 'filename' | STDIN } + [ [ WITH ] + [ BINARY ] + [ DELIMITER [ AS ] 'delimiter_character' ] + [ NULL [ AS ] 'null_string' ] + [ CSV [ HEADER ] + [ QUOTE [ AS ] 'quote_character' ] + [ ESCAPE [ AS ] 'escape_character' ] + [ FORCE NOT NULL column_name [, ...] ] ] ] + +COPY { table_name [ ( column_name [, ...] ) ] | ( query ) } + TO { 'filename' | STDOUT } + [ [ WITH ] + [ BINARY ] + [ DELIMITER [ AS ] 'delimiter_character' ] + [ NULL [ AS ] 'null_string' ] + [ CSV [ HEADER ] + [ QUOTE [ AS ] 'quote_character' ] + [ ESCAPE [ AS ] 'escape_character' ] + [ FORCE QUOTE { column_name [, ...] 
| * } ] ] ] +``` + +请注意,在此语法中,`二进制`和`CSV`被视为独立的关键字,而不是 a 的参数`格式`选项。 + +以下语法在 PostgreSQL 版本 7.3 之前使用并且仍然受支持: + +``` +COPY [ BINARY ] table_name + FROM { 'filename' | STDIN } + [ [USING] DELIMITERS 'delimiter_character' ] + [ WITH NULL AS 'null_string' ] + +COPY [ BINARY ] table_name + TO { 'filename' | STDOUT } + [ [USING] DELIMITERS 'delimiter_character' ] + [ WITH NULL AS 'null_string' ] +``` + +## 也可以看看 + +[第 28.4.6 节](progress-reporting.html#COPY-PROGRESS-REPORTING) diff --git a/docs/X/sql-createcast.md b/docs/en/sql-createcast.md similarity index 100% rename from docs/X/sql-createcast.md rename to docs/en/sql-createcast.md diff --git a/docs/en/sql-createcast.zh.md b/docs/en/sql-createcast.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..f41cbb6f2c58b62a9182941e9fb15aab8583f239 --- /dev/null +++ b/docs/en/sql-createcast.zh.md @@ -0,0 +1,143 @@ +## 创建演员表 + +CREATE CAST — 定义一个新的演员表 + +## 概要 + +``` +CREATE CAST (source_type AS target_type) + WITH FUNCTION function_name [ (argument_type [, ...]) ] + [ AS ASSIGNMENT | AS IMPLICIT ] + +CREATE CAST (source_type AS target_type) + WITHOUT FUNCTION + [ AS ASSIGNMENT | AS IMPLICIT ] + +CREATE CAST (source_type AS target_type) + WITH INOUT + [ AS ASSIGNMENT | AS IMPLICIT ] +``` + +## 描述 + +`创建演员表`定义了一个新的演员表。强制转换指定如何在两种数据类型之间执行转换。例如, + +``` +SELECT CAST(42 AS float8); +``` + +将整数常量 42 转换为类型`浮动8`通过调用先前指定的函数,在这种情况下`浮动8(int4)`.(如果没有定义合适的演员表,则转换失败。) + +可以有两种类型*二进制可强制*,这意味着可以“免费”执行转换,而无需调用任何函数。这要求对应的值使用相同的内部表示。例如,类型`文本`和`varchar`是双向可强制的。二元强制力不一定是对称关系。例如,演员阵容来自`xml`到`文本`在本实现中可以免费执行,但反向需要一个至少执行语法检查的函数。(双向可二进制强制的两种类型也称为二进制兼容。) + +您可以将强制转换定义为*I/O 转换转换*通过使用`带输入`句法。通过调用源数据类型的输出函数并将结果字符串传递给目标数据类型的输入函数来执行 I/O 转换转换。在许多常见情况下,此功能避免了编写单​​独的强制转换函数进行转换的需要。I/O 转换转换的行为与常规的基于函数的转换相同;只有实现不同。 + +默认情况下,只能通过显式转换请求调用转换,即显式转换请求`投掷(*`x`* 作为 *`类型名称`*)`或者*`x`*`::`*`类型名称`*构造。 + +如果演员表被标记`作业`然后在为目标数据类型的列赋值时可以隐式调用它。例如,假设`foo.f1`是类型的列`文本`, 然后: + +``` +INSERT INTO foo (f1) VALUES (42); +``` + +如果从类型转换将被允许`整数`输入`文本`被标记`作业`,否则不是。(我们一般用这个词*分配演员表*来描述这种演员阵容。) + +如果演员表被标记`隐含的`那么它可以在任何上下文中隐式调用,无论是赋值还是表达式内部。(我们一般用这个词*隐式转换*来描述这种类型的演员。)例如,考虑这个查询: + +``` +SELECT 2 + 4.0; +``` + +解析器最初将常量标记为类型`整数`和`数字`分别。没有`整数` `+` `数字`操作员在系统目录中,但有一个`数字` `+` `数字`操作员。因此,如果从`整数`到`数字`可用并标记`隐含的`——事实上就是这样。解析器将应用隐式转换并解析查询,就好像它已经被写入一样 + +``` +SELECT CAST ( 2 AS numeric ) + 4.0; +``` + +现在,目录还提供了来自`数字`到`整数`.如果该演员表被标记`隐含的`- 它不是 - 那么解析器将面临在上述解释和强制转换的替代方案之间进行选择`数字`恒定到`整数`并应用`整数` `+` `整数`操作员。如果不知道更喜欢哪个选择,它会放弃并声明查询不明确。两个强制转换中只有一个是隐式的这一事实是我们教解析器更喜欢混合的解析的方式`数字`-和-`整数`表达为`数字`;没有关于此的内置知识。 + +保守地将演员表标记为隐式是明智的。过多的隐式转换路径会导致 PostgreSQL 选择令人惊讶的命令解释,或者根本无法解析命令,因为有多种可能的解释。一个好的经验法则是使强制转换仅可用于同一通用类型类别中的类型之间的信息保留转换。例如,演员阵容来自`整数2`到`整数4`可以合理地隐含,但从`浮动8`到`整数4`应该可能只是分配。跨类型类别转换,例如`文本`到`整数4`, 最好只显式。 + +### 笔记 + +有时出于可用性或标准合规性的原因,有必要在一组类型之间提供多个隐式转换,从而导致如上所述无法避免的歧义。解析器具有基于回退启发式*类型类别*和*首选类型*这有助于在这种情况下提供所需的行为。看[创建类型](sql-createtype.html)了解更多信息。 + +为了能够创建强制转换,您必须拥有源或目标数据类型并拥有`用法`其他类型的特权。要创建二进制强制转换,您必须是超级用户。(这个限制是因为错误的二进制强制转换很容易使服务器崩溃。) + +## 参数 + +*`源类型`* + +转换的源数据类型的名称。 + +*`目标类型`* + +转换的目标数据类型的名称。 + +`*`函数名`*[(*`参数类型`* [, ...])]` + +用于执行强制转换的函数。函数名称可以是模式限定的。如果不是,则将在模式搜索路径中查找该函数。函数的结果数据类型必须匹配转换的目标类型。下面讨论它的论点。如果未指定参数列表,则函数名称在其模式中必须是唯一的。 + +`没有功能` + +表示源类型对目标类型是二进制强制的,因此不需要函数来执行转换。 + +`带输入` + +表示强制转换是 I/O 转换强制转换,通过调用源数据类型的输出函数并将结果字符串传递给目标数据类型的输入函数来执行。 + +`作业` + +表示可以在赋值上下文中隐式调用强制转换。 + +`AS IMPLICIT` + +Indicates that the cast can be invoked implicitly in any context. + +Cast implementation functions can have one to three arguments. The first argument type must be identical to or binary-coercible from the cast's source type. 
The second argument, if present, must be type`integer`; it receives the type modifier associated with the destination type, or`-1`if there is none. The third argument, if present, must be type`boolean`; it receives`true`if the cast is an explicit cast,`false`otherwise. (Bizarrely, the SQL standard demands different behaviors for explicit and implicit casts in some cases. This argument is supplied for functions that must implement such casts. It is not recommended that you design your own data types so that this matters.) + +The return type of a cast function must be identical to or binary-coercible to the cast's target type. + +Ordinarily a cast must have different source and target data types. However, it is allowed to declare a cast with identical source and target types if it has a cast implementation function with more than one argument. This is used to represent type-specific length coercion functions in the system catalogs. The named function is used to coerce a value of the type to the type modifier value given by its second argument. + +When a cast has different source and target types and a function that takes more than one argument, it supports converting from one type to another and applying a length coercion in a single step. When no such entry is available, coercion to a type that uses a type modifier involves two cast steps, one to convert between data types and a second to apply the modifier. + +A cast to or from a domain type currently has no effect. Casting to or from a domain uses the casts associated with its underlying type. + +## Notes + +Use[`DROP CAST`](sql-dropcast.html)to remove user-defined casts. + +Remember that if you want to be able to convert types both ways you need to declare casts both ways explicitly. + +[](<>) + +It is normally not necessary to create casts between user-defined types and the standard string types (`text`,`varchar`, and`char(*`n`*)`,以及定义为字符串类别的用户定义类型)。PostgreSQL 为此提供了自动 I/O 转换转换。对字符串类型的自动转换被视为赋值转换,而来自字符串类型的自动转换是仅显式的。您可以通过声明自己的强制转换来替换自动强制转换来覆盖此行为,但通常这样做的唯一原因是,如果您希望转换比标准的仅赋值或仅显式设置更容易调用。另一个可能的原因是您希望转换的行为与类型的 I/O 函数不同;但这足以令人惊讶,您应该三思而后行是否是个好主意。(少数内置类型确实有不同的转换行为,主要是因为 SQL 标准的要求。) + +虽然不是必需的,但建议您继续遵循在目标数据类型之后命名强制转换实现函数的旧约定。许多用户习惯于使用函数式表示法来转换数据类型,即*`类型名称`*(*`x`*)。这种表示法实际上只不过是对强制转换实现函数的调用;它没有被特殊对待。如果你的转换函数没有被命名来支持这个约定,那么你会让用户感到惊讶。由于 PostgreSQL 允许使用不同的参数类型重载相同的函数名称,所以从不同类型的多个转换函数都使用目标类型的名称没有困难。 + +### 笔记 + +实际上,前面的段落过于简单化了:在两种情况下,函数调用构造将被视为强制转换请求,而无需将其与实际函数匹配。如果一个函数调用*`姓名`*(*`x`*) 不完全匹配任何现有函数,但*`姓名`*是数据类型的名称,并且`pg_cast`从*`x`*,然后调用将被解释为二进制强制转换。这个例外是为了使二进制强制转换可以使用函数式语法调用,即使它们缺少任何函数。同样,如果没有`pg_cast`条目,但转换将是到字符串类型或从字符串类型转换,调用将被解释为 I/O 转换转换。此异常允许使用函数式语法调用 I/O 转换强制转换。 + +### 笔记 + +该异常还有一个例外:从复合类型到字符串类型的 I/O 转换转换不能使用函数式语法调用,而必须以显式转换语法编写(或者`投掷`或者`::`符号)。之所以添加此异常,是因为在引入自动提供的 I/O 转换强制转换后,当打算引用函数或列时,很容易意外调用这种强制转换。 + +## 例子 + +从类型创建赋值转换`大整数`输入`整数4`使用功能`int4(大整数)`: + +``` +CREATE CAST (bigint AS int4) WITH FUNCTION int4(bigint) AS ASSIGNMENT; +``` + +(此演员表已在系统中预定义。) + +## 兼容性 + +这`创建演员表`command 符合 SQL 标准,只是 SQL 没有为实现函数提供二进制强制类型或额外参数。`隐含的`也是一个 PostgreSQL 扩展。 + +## 也可以看看 + +[创建函数](sql-createfunction.html), [创建类型](sql-createtype.html), [空投](sql-dropcast.html) diff --git a/docs/X/sql-expressions.md b/docs/en/sql-expressions.md similarity index 100% rename from docs/X/sql-expressions.md rename to docs/en/sql-expressions.md diff --git a/docs/en/sql-expressions.zh.md b/docs/en/sql-expressions.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..317936a4f23fe21f3e5357405455ab3c035862de --- /dev/null +++ b/docs/en/sql-expressions.zh.md @@ -0,0 
+1,517 @@ +## 4.2. Value Expressions + +[4.2.1. Column References](sql-expressions.html#SQL-EXPRESSIONS-COLUMN-REFS) + +[4.2.2. Positional Parameters](sql-expressions.html#SQL-EXPRESSIONS-PARAMETERS-POSITIONAL) + +[4.2.3. Subscripts](sql-expressions.html#SQL-EXPRESSIONS-SUBSCRIPTS) + +[4.2.4. Field Selection](sql-expressions.html#FIELD-SELECTION) + +[4.2.5. Operator Invocations](sql-expressions.html#SQL-EXPRESSIONS-OPERATOR-CALLS) + +[4.2.6. Function Calls](sql-expressions.html#SQL-EXPRESSIONS-FUNCTION-CALLS) + +[4.2.7. Aggregate Expressions](sql-expressions.html#SYNTAX-AGGREGATES) + +[4.2.8. Window Function Calls](sql-expressions.html#SYNTAX-WINDOW-FUNCTIONS) + +[4.2.9. Type Casts](sql-expressions.html#SQL-SYNTAX-TYPE-CASTS) + +[4.2.10. Collation Expressions](sql-expressions.html#SQL-SYNTAX-COLLATE-EXPRS) + +[4.2.11. Scalar Subqueries](sql-expressions.html#SQL-SYNTAX-SCALAR-SUBQUERIES) + +[4.2.12. Array Constructors](sql-expressions.html#SQL-SYNTAX-ARRAY-CONSTRUCTORS) + +[4.2.13. Row Constructors](sql-expressions.html#SQL-SYNTAX-ROW-CONSTRUCTORS) + +[4.2.14. Expression Evaluation Rules](sql-expressions.html#SYNTAX-EXPRESS-EVAL) + +[](<>)[](<>)[](<>) + +Value expressions are used in a variety of contexts, such as in the target list of the`SELECT`command, as new column values in`INSERT`or`UPDATE`, or in search conditions in a number of commands. The result of a value expression is sometimes called a*scalar*, to distinguish it from the result of a table expression (which is a table). Value expressions are therefore also called*scalar expressions*(or even simply*expressions*). The expression syntax allows the calculation of values from primitive parts using arithmetic, logical, set, and other operations. + +A value expression is one of the following: + +- A constant or literal value + +- A column reference + +- A positional parameter reference, in the body of a function definition or prepared statement + +- A subscripted expression + +- A field selection expression + +- An operator invocation + +- A function call + +- An aggregate expression + +- A window function call + +- A type cast + +- A collation expression + +- A scalar subquery + +- An array constructor + +- A row constructor + +- Another value expression in parentheses (used to group subexpressions and override precedence[](<>)) + + In addition to this list, there are a number of constructs that can be classified as an expression but do not follow any general syntax rules. These generally have the semantics of a function or operator and are explained in the appropriate location in[Chapter 9](functions.html). An example is the`IS NULL`clause. + + We have already discussed constants in[Section 4.1.2](sql-syntax-lexical.html#SQL-SYNTAX-CONSTANTS). The following sections discuss the remaining options. + +### 4.2.1. Column References + +[](<>) + +A column can be referenced in the form: + +``` +correlation.columnname +``` + +*`correlation`*is the name of a table (possibly qualified with a schema name), or an alias for a table defined by means of a`FROM`clause. The correlation name and separating dot can be omitted if the column name is unique across all the tables being used in the current query. (See also[Chapter 7](queries.html).) + +### 4.2.2. Positional Parameters + +[](<>)[](<>) + +A positional parameter reference is used to indicate a value that is supplied externally to an SQL statement. Parameters are used in SQL function definitions and in prepared queries. 
Some client libraries also support specifying data values separately from the SQL command string, in which case parameters are used to refer to the out-of-line data values. The form of a parameter reference is:

```
$number
```

For example, consider the definition of a function, `dept`, as:

```
CREATE FUNCTION dept(text) RETURNS dept
    AS $$ SELECT * FROM dept WHERE name = $1 $$
    LANGUAGE SQL;
```

Here the `$1` references the value of the first function argument whenever the function is invoked.

### 4.2.3. Subscripts

[](<>)

If an expression yields a value of an array type, then a specific element of the array value can be extracted by writing

```
expression[subscript]
```

or multiple adjacent elements (an “array slice”) can be extracted by writing

```
expression[lower_subscript:upper_subscript]
```

(Here, the brackets `[ ]` are meant to appear literally.) Each *`subscript`* is itself an expression, which will be rounded to the nearest integer value.

In general the array *`expression`* must be parenthesized, but the parentheses can be omitted when the expression to be subscripted is just a column reference or positional parameter. Also, multiple subscripts can be concatenated when the original array is multidimensional. For example:

```
mytable.arraycolumn[4]
mytable.two_d_column[17][34]
$1[10:42]
(arrayfunction(a,b))[42]
```

The parentheses in the last example are required. See [Section 8.15](arrays.html) for more about arrays.

### 4.2.4. Field Selection

[](<>)

If an expression yields a value of a composite type (row type), then a specific field of the row can be extracted by writing

```
expression.fieldname
```

In general the row *`expression`* must be parenthesized, but the parentheses can be omitted when the expression to be selected from is just a table reference or positional parameter. For example:

```
mytable.mycolumn
$1.somecolumn
(rowfunction(a,b)).col3
```

(Thus, a qualified column reference is actually just a special case of the field selection syntax.) An important special case is extracting a field from a table column that is of a composite type:

```
(compositecol).somefield
(mytable.compositecol).somefield
```

The parentheses are required here to show that `compositecol` is a column name not a table name, or that `mytable` is a table name not a schema name in the second case.

You can ask for all fields of a composite value by writing `.*`:

```
(compositecol).*
```

This notation behaves differently depending on context; see [Section 8.16.5](rowtypes.html#ROWTYPES-USAGE) for details.

### 4.2.5. Operator Invocations

[](<>)

There are two possible syntaxes for an operator invocation:

| *`expression`* *`operator`* *`expression`* (binary infix operator) |
| ------------------------------------------------------------------ |
| *`operator`* *`expression`* (unary prefix operator) |

where the *`operator`* token follows the syntax rules of [Section 4.1.3](sql-syntax-lexical.html#SQL-SYNTAX-OPERATORS), or is one of the key words `AND`, `OR`, and `NOT`, or is a qualified operator name in the form:

```
OPERATOR(schema.operatorname)
```

Which particular operators exist and whether they are unary or binary depends on what operators have been defined by the system or the user. [Chapter 9](functions.html) describes the built-in operators.

### 4.2.6. Function Calls
[](<>)

The syntax for a function call is the name of a function (possibly qualified with a schema name), followed by its argument list enclosed in parentheses:

```
function_name ([expression [, expression ... ]] )
```

For example, the following computes the square root of 2:

```
sqrt(2)
```

The list of built-in functions is in [Chapter 9](functions.html). Other functions can be added by the user.

When issuing queries in a database where some users mistrust other users, observe security precautions from [Section 10.3](typeconv-func.html) when writing function calls.

The arguments can optionally have names attached. See [Section 4.3](sql-syntax-calling-funcs.html) for details.

### Note

A function that takes a single argument of composite type can optionally be called using field-selection syntax, and conversely field selection can be written in functional style. That is, the notations `col(table)` and `table.col` are interchangeable. This behavior is not SQL-standard but is provided in PostgreSQL because it allows use of functions to emulate “computed fields”. For more information see [Section 8.16.5](rowtypes.html#ROWTYPES-USAGE).

### 4.2.7. Aggregate Expressions

[](<>)[](<>)[](<>)[](<>)

An *aggregate expression* represents the application of an aggregate function across the rows selected by a query. An aggregate function reduces multiple inputs to a single output value, such as the sum or average of the inputs. The syntax of an aggregate expression is one of the following:

```
aggregate_name (expression [ , ... ] [ order_by_clause ] ) [ FILTER ( WHERE filter_clause ) ]
aggregate_name (ALL expression [ , ... ] [ order_by_clause ] ) [ FILTER ( WHERE filter_clause ) ]
aggregate_name (DISTINCT expression [ , ... ] [ order_by_clause ] ) [ FILTER ( WHERE filter_clause ) ]
aggregate_name ( * ) [ FILTER ( WHERE filter_clause ) ]
aggregate_name ( [ expression [ , ... ] ] ) WITHIN GROUP ( order_by_clause ) [ FILTER ( WHERE filter_clause ) ]
```

where *`aggregate_name`* is a previously defined aggregate (possibly qualified with a schema name) and *`expression`* is any value expression that does not itself contain an aggregate expression or a window function call. The optional *`order_by_clause`* and *`filter_clause`* are described below.

The first form of aggregate expression invokes the aggregate once for each input row. The second form is the same as the first, since `ALL` is the default. The third form invokes the aggregate once for each distinct value of the expression (or distinct set of values, for multiple expressions) found in the input rows. The fourth form invokes the aggregate once for each input row; since no particular input value is specified, it is generally only useful for the `count(*)` aggregate function. The last form is used with *ordered-set* aggregate functions, which are described below.

Most aggregate functions ignore null inputs, so that rows in which one or more of the expression(s) yield null are discarded. This can be assumed to be true, unless otherwise specified, for all built-in aggregates.

For example, `count(*)` yields the total number of input rows; `count(f1)` yields the number of input rows in which `f1` is non-null, since `count` ignores nulls; and `count(distinct f1)` yields the number of distinct non-null values of `f1`.
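As a quick illustration of those three behaviors in one query (the table `orders` and its columns here are hypothetical, not taken from the manual):

```
-- count(*): all input rows; count(amount): rows where amount is non-null;
-- count(DISTINCT customer_id): distinct non-null customer ids
SELECT count(*)                    AS total_rows,
       count(amount)               AS nonnull_amounts,
       count(DISTINCT customer_id) AS distinct_customers
FROM orders;
```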
Ordinarily, the input rows are fed to the aggregate function in an unspecified order. In many cases this does not matter; for example, `min` produces the same result no matter what order it receives the inputs in. However, some aggregate functions (such as `array_agg` and `string_agg`) produce results that depend on the ordering of the input rows. When using such an aggregate, the optional *`order_by_clause`* can be used to specify the desired ordering. The *`order_by_clause`* has the same syntax as for a query-level `ORDER BY` clause, as described in [Section 7.5](queries-order.html), except that its expressions are always just expressions and cannot be output-column names or numbers. For example:

```
SELECT array_agg(a ORDER BY b DESC) FROM table;
```

When dealing with multiple-argument aggregate functions, note that the `ORDER BY` clause goes after all the aggregate arguments. For example, write this:

```
SELECT string_agg(a, ',' ORDER BY a) FROM table;
```

not this:

```
SELECT string_agg(a ORDER BY a, ',') FROM table;  -- incorrect
```

The latter is syntactically valid, but it represents a call of a single-argument aggregate function with two `ORDER BY` keys (the second one being rather useless since it's a constant).

If `DISTINCT` is specified in addition to an *`order_by_clause`*, then all the `ORDER BY` expressions must match regular arguments of the aggregate; that is, you cannot sort on an expression that is not included in the `DISTINCT` list.

### Note

The ability to specify both `DISTINCT` and `ORDER BY` in an aggregate function is a PostgreSQL extension.

Placing `ORDER BY` within the aggregate's regular argument list, as described so far, is used when ordering the input rows for general-purpose and statistical aggregates, for which ordering is optional. There is a subclass of aggregate functions called *ordered-set aggregates* for which an *`order_by_clause`* is *required*, usually because the aggregate's computation is only sensible in terms of a specific ordering of its input rows. Typical examples of ordered-set aggregates include rank and percentile calculations. For an ordered-set aggregate, the *`order_by_clause`* is written inside `WITHIN GROUP (...)`, as shown in the final syntax alternative above. The expressions in the *`order_by_clause`* are evaluated once per input row just like regular aggregate arguments, sorted as per the *`order_by_clause`*'s requirements, and fed to the aggregate function as input arguments. (This is unlike the case for a non-`WITHIN GROUP` *`order_by_clause`*, which is not treated as argument(s) to the aggregate function.) The argument expressions preceding `WITHIN GROUP`, if any, are called *direct arguments* to distinguish them from the *aggregated arguments* listed in the *`order_by_clause`*. Unlike regular aggregate arguments, direct arguments are evaluated only once per aggregate call, not once per input row. This means that they can contain variables only if those variables are grouped by `GROUP BY`; this restriction is the same as if the direct arguments were not inside an aggregate expression at all. Direct arguments are typically used for things like percentile fractions, which only make sense as a single value per aggregation calculation. The direct argument list can be empty; in this case, write just `()` not `(*)`. (PostgreSQL will actually accept either spelling, but only the first way conforms to the SQL standard.)

[](<>) An example of an ordered-set aggregate call is:

```
SELECT percentile_cont(0.5) WITHIN GROUP (ORDER BY income) FROM households;
 percentile_cont
-----------------
           50489
```

which obtains the 50th percentile, or median, value of the `income` column from table `households`.

### 4.2.8. Window Function Calls

[]()[]()

A *window function call* represents the application of an aggregate-like function over some portion of the rows selected by a query. Unlike non-window aggregate calls, this is not tied to grouping of the selected rows into a single output row — each row remains separate in the query output. However the window function has access to all the rows that would be part of the current row's group according to the grouping specification (`PARTITION BY` list) of the window function call.
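For instance, here is a minimal sketch of a window function call, assuming a hypothetical `empsalary` table with columns `depname` and `salary`; each output row keeps its own `salary` while also carrying the average computed over that row's department group:

```
-- avg() used as a window function: one output row per input row,
-- with the per-department average attached to each
SELECT depname, salary,
       avg(salary) OVER (PARTITION BY depname) AS dept_avg
FROM empsalary;
```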
The syntax of a window function call is one of the following:

```
function_name ([expression [, expression ... ]]) [ FILTER ( WHERE filter_clause ) ] OVER window_name
function_name ([expression [, expression ... ]]) [ FILTER ( WHERE filter_clause ) ] OVER ( window_definition )
function_name ( * ) [ FILTER ( WHERE filter_clause ) ] OVER window_name
function_name ( * ) [ FILTER ( WHERE filter_clause ) ] OVER ( window_definition )
```

where *`window_definition`* has the syntax

```
[ existing_window_name ]
[ PARTITION BY expression [, ...] ]
[ ORDER BY expression [ ASC | DESC | USING operator ] [ NULLS { FIRST | LAST } ] [, ...] ]
[ frame_clause ]
```

The optional *`frame_clause`* can be one of

```
{ RANGE | ROWS | GROUPS } frame_start [ frame_exclusion ]
{ RANGE | ROWS | GROUPS } BETWEEN frame_start AND frame_end [ frame_exclusion ]
```

where *`frame_start`* and *`frame_end`* can be one of

```
UNBOUNDED PRECEDING
offset PRECEDING
CURRENT ROW
offset FOLLOWING
UNBOUNDED FOLLOWING
```

and *`frame_exclusion`* can be one of

```
EXCLUDE CURRENT ROW
EXCLUDE GROUP
EXCLUDE TIES
EXCLUDE NO OTHERS
```

Here, *`expression`* represents any value expression that does not itself contain window function calls.

*`window_name`* is a reference to a named window specification defined in the query's `WINDOW` clause. Alternatively, a full *`window_definition`* can be given within parentheses, using the same syntax as for defining a named window in the `WINDOW` clause; see the [SELECT](sql-select.html) reference page for details. It's worth pointing out that `OVER wname` is not exactly equivalent to `OVER (wname ...)`; the latter implies copying and modifying the window definition, and will be rejected if the referenced window specification includes a frame clause.

The `PARTITION BY` clause groups the rows of the query into *partitions*, which are processed separately by the window function. `PARTITION BY` works similarly to a query-level `GROUP BY` clause, except that its expressions are always just expressions and cannot be output-column names or numbers. Without `PARTITION BY`, all rows produced by the query are treated as a single partition. The `ORDER BY` clause determines the order in which the rows of a partition are processed by the window function. It works similarly to a query-level `ORDER BY` clause, but likewise cannot use output-column names or numbers. Without `ORDER BY`, rows are processed in an unspecified order.

The *`frame_clause`* specifies the set of rows constituting the *window frame*, which is a subset of the current partition, for those window functions that act on the frame instead of the whole partition. The set of rows in the frame can vary depending on which row is the current row. The frame can be specified in `RANGE`, `ROWS` or `GROUPS` mode; in each case, it runs from the *`frame_start`* to the *`frame_end`*. If *`frame_end`* is omitted, the end defaults to `CURRENT ROW`.

A *`frame_start`* of `UNBOUNDED PRECEDING` means that the frame starts with the first row of the partition, and similarly a *`frame_end`* of `UNBOUNDED FOLLOWING` means that the frame ends with the last row of the partition.

In `RANGE` or `GROUPS` mode, a *`frame_start`* of `CURRENT ROW` means the frame starts with the current row's first *peer* row (a row that the window's `ORDER BY` clause sorts as equivalent to the current row), while a *`frame_end`* of `CURRENT ROW` means the frame ends with the current row's last peer row. In `ROWS` mode, `CURRENT ROW` simply means the current row.
+ + In the *`offset`* `PRECEDING` and *`offset`* `FOLLOWING` frame options, the *`offset`* must be an expression not containing any variables, aggregate functions, or window functions. The meaning of the *`offset`* depends on the frame mode: + +* In `ROWS` mode, the *`offset`* must yield a non-null, non-negative integer, and the option means that the frame starts or ends the specified number of rows before or after the current row. + +* In `GROUPS` mode, the *`offset`* again must yield a non-null, non-negative integer, and the option means that the frame starts or ends the specified number of *peer groups* before or after the current row's peer group, where a peer group is a set of rows that are equivalent in the `ORDER BY` ordering. (There must be an `ORDER BY` clause in the window definition to use `GROUPS` mode.) + +* In `RANGE` mode, these options require that the `ORDER BY` clause specify exactly one column. The *`offset`* specifies the maximum difference between the value of that column in the current row and its value in preceding or following rows of the frame. The data type of the *`offset`* expression varies depending on the data type of the ordering column. For numeric ordering columns it is typically of the same type as the ordering column, but for datetime ordering columns it is an `interval`. For example, if the ordering column is of type `date` or `timestamp`, one could write `RANGE BETWEEN '1 day' PRECEDING AND '10 days' FOLLOWING`. The *`offset`* is still required to be non-null and non-negative, though the meaning of “non-negative” depends on its data type. + + In any case, the distance to the end of the frame is limited by the distance to the end of the partition, so that for rows near the partition ends the frame might contain fewer rows than elsewhere. + + Notice that in both `ROWS` and `GROUPS` mode, `0 PRECEDING` and `0 FOLLOWING` are equivalent to `CURRENT ROW`. This normally holds in `RANGE` mode as well, for an appropriate data-type-specific meaning of “zero”. + + The *`frame_exclusion`* option allows rows around the current row to be excluded from the frame, even if they would be included according to the frame start and frame end options. `EXCLUDE CURRENT ROW` excludes the current row from the frame. `EXCLUDE GROUP` excludes the current row and its ordering peers from the frame. `EXCLUDE TIES` excludes any peers of the current row from the frame, but not the current row itself. `EXCLUDE NO OTHERS` simply specifies explicitly the default behavior of not excluding the current row or its peers. + + The default framing option is `RANGE UNBOUNDED PRECEDING`, which is the same as `RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW`. With `ORDER BY`, this sets the frame to be all rows from the partition start up through the current row's last `ORDER BY` peer. Without `ORDER BY`, this means all rows of the partition are included in the window frame, since all rows become peers of the current row. + + Restrictions are that *`frame_start`* cannot be `UNBOUNDED FOLLOWING`, *`frame_end`* cannot be `UNBOUNDED PRECEDING`, and the *`frame_end`* choice cannot appear earlier in the above list of *`frame_start`* and *`frame_end`* options than the *`frame_start`* choice does — for example `RANGE BETWEEN CURRENT ROW AND *`offset`* PRECEDING` is not allowed. But, for example, `ROWS BETWEEN 7 PRECEDING AND 8 PRECEDING` is allowed, even though it would never select any rows. 
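As a sketch of how these frame options combine (using `generate_series` purely for illustration), the following sums the two rows preceding the current row, by first framing `ROWS BETWEEN 2 PRECEDING AND CURRENT ROW` and then excluding the current row itself:

```
-- frame: ROWS BETWEEN 2 PRECEDING AND CURRENT ROW, minus the current row
SELECT x,
       sum(x) OVER (ORDER BY x
                    ROWS BETWEEN 2 PRECEDING AND CURRENT ROW
                    EXCLUDE CURRENT ROW) AS sum_of_prev_two
FROM generate_series(1, 5) AS t(x);
```

For the first row the frame is empty after the exclusion, so `sum` returns null there.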
If `FILTER` is specified, then only the input rows for which the *`filter_clause`* evaluates to true are fed to the window function; other rows are discarded. Only window functions that are aggregates accept a `FILTER` clause.

The built-in window functions are described in [Table 9.62](functions-window.html#FUNCTIONS-WINDOW-TABLE). Other window functions can be added by the user. Also, any built-in or user-defined general-purpose or statistical aggregate can be used as a window function. (Ordered-set and hypothetical-set aggregates cannot presently be used as window functions.)

The syntaxes using `*` are used for calling parameter-less aggregate functions as window functions, for example `count(*) OVER (PARTITION BY x ORDER BY y)`. The asterisk (`*`) is customarily not used for window-specific functions. Window-specific functions do not allow `DISTINCT` or `ORDER BY` to be used within the function argument list.

Window function calls are permitted only in the `SELECT` list and the `ORDER BY` clause of the query.

More information about window functions can be found in [Section 3.5](tutorial-window.html), [Section 9.22](functions-window.html), and [Section 7.2.5](queries-table-expressions.html#QUERIES-WINDOW).

### 4.2.9. Type Casts

[]()[]()[]()

A type cast specifies a conversion from one data type to another. PostgreSQL accepts two equivalent syntaxes for type casts:

```
CAST ( expression AS type )
expression::type
```

The `CAST` syntax conforms to SQL; the syntax with `::` is historical PostgreSQL usage.

When a cast is applied to a value expression of a known type, it represents a run-time type conversion. The cast will succeed only if a suitable type conversion operation has been defined. Notice that this is subtly different from the use of casts with constants, as shown in [Section 4.1.2.7](sql-syntax-lexical.html#SQL-SYNTAX-CONSTANTS-GENERIC). A cast applied to an unadorned string literal represents the initial assignment of a type to a literal constant value, and so it will succeed for any type (if the contents of the string literal are acceptable input syntax for the data type).

An explicit type cast can usually be omitted if there is no ambiguity as to the type that a value expression must produce (for example, when it is assigned to a table column); the system will automatically apply a type cast in such cases. However, automatic casting is only done for casts that are marked “OK to apply implicitly” in the system catalogs. Other casts must be invoked with explicit casting syntax. This restriction is intended to prevent surprising conversions from being applied silently.

It is also possible to specify a type cast using a function-like syntax:

```
typename ( expression )
```

However, this only works for types whose names are also valid as function names. For example, `double precision` cannot be used this way, but the equivalent `float8` can. Also, the names `interval`, `time`, and `timestamp` can only be used in this fashion if they are double-quoted, because of syntactic conflicts. Therefore, the use of the function-like cast syntax leads to inconsistencies and should probably be avoided.

### Note

The function-like syntax is in fact just a function call. When one of the two standard cast syntaxes is used to do a run-time conversion, it will internally invoke a registered function to perform the conversion.
By convention, these conversion functions have the same name as their output type, and thus the “function-like syntax” is nothing more than a direct invocation of the underlying conversion function. Obviously, this is not something that a portable application should rely on. For further details see [CREATE CAST](sql-createcast.html).

### 4.2.10. Collation Expressions

[]()

The `COLLATE` clause overrides the collation of an expression. It is appended to the expression it applies to:

```
expr COLLATE collation
```

where *`collation`* is a possibly schema-qualified identifier. The `COLLATE` clause binds tighter than operators; parentheses can be used when necessary.

If no collation is explicitly specified, the database system either derives a collation from the columns involved in the expression, or it defaults to the default collation of the database if no column is involved in the expression.

The two common uses of the `COLLATE` clause are overriding the sort order in an `ORDER BY` clause, for example:

```
SELECT a, b, c FROM tbl WHERE ... ORDER BY a COLLATE "C";
```

and overriding the collation of a function or operator call that has locale-sensitive results, for example:

```
SELECT * FROM tbl WHERE a > 'foo' COLLATE "C";
```

Note that in the latter case the `COLLATE` clause is attached to an input argument of the operator we wish to affect. It doesn't matter which argument of the operator or function call the `COLLATE` clause is attached to, because the collation that is applied by the operator or function is derived by considering all arguments, and an explicit `COLLATE` clause will override the collations of all other arguments. (Attaching non-matching `COLLATE` clauses to more than one argument, however, is an error. For more details see [Section 24.2](collation.html).) Thus, this gives the same result as the previous example:

```
SELECT * FROM tbl WHERE a COLLATE "C" > 'foo';
```

But this is an error:

```
SELECT * FROM tbl WHERE (a > 'foo') COLLATE "C";
```

because it attempts to apply a collation to the result of the `>` operator, which is of the non-collatable data type `boolean`.

### 4.2.11. Scalar Subqueries

[]()

A scalar subquery is an ordinary `SELECT` query in parentheses that returns exactly one row with one column. (See [Chapter 7](queries.html) for information about writing queries.) The `SELECT` query is executed and the single returned value is used in the surrounding value expression. It is an error to use a query that returns more than one row or more than one column as a scalar subquery. (But if, during a particular execution, the subquery returns no rows, there is no error; the scalar result is taken to be null.) The subquery can refer to variables from the surrounding query, which will act as constants during any one evaluation of the subquery. See also [Section 9.23](functions-subquery.html) for other expressions involving subqueries.

For example, the following finds the largest city population in each state:

```
SELECT name, (SELECT max(pop) FROM cities WHERE cities.state = states.name)
    FROM states;
```

### 4.2.12. Array Constructors

[]()[]()

An array constructor is an expression that builds an array value using values for its member elements. A simple array constructor consists of the key word `ARRAY`, a left square bracket `[`, a list of expressions (separated by commas) for the array element values, and finally a right square bracket `]`.
For example:

```
SELECT ARRAY[1,2,3+4];
  array
---------
 {1,2,7}
```

### 4.2.13. Row Constructors

[](<>)[](<>)[](<>)

A row constructor is an expression that builds a row value (also called a composite value) using values for its member fields. A row constructor consists of the key word `ROW`, a left parenthesis, zero or more expressions (separated by commas) for the row field values, and finally a right parenthesis. For example:

```
SELECT ROW(1,2.5,'this is a test');
```

The key word `ROW` is optional when there is more than one expression in the list.

A row constructor can include the syntax *`rowvalue`*`.*`, which will be expanded to a list of the elements of the row value, just as occurs when the `.*` syntax is used at the top level of a `SELECT` list (see [Section 8.16.5](rowtypes.html#ROWTYPES-USAGE)). For example, if table `t` has columns `f1` and `f2`, these are the same:

```
SELECT ROW(t.*, 42) FROM t;
SELECT ROW(t.f1, t.f2, 42) FROM t;
```

### Note

Before PostgreSQL 8.2, the `.*` syntax was not expanded in row constructors, so that writing `ROW(t.*, 42)` created a two-field row whose first field was another row value. The new behavior is usually more useful. If you need the old behavior of nested row values, write the inner row value without `.*`, for instance `ROW(t, 42)`.

By default, the value created by a `ROW` expression is of an anonymous record type. If necessary, it can be cast to a named composite type: either the row type of a table, or a composite type created with `CREATE TYPE AS`. An explicit cast might be needed to avoid ambiguity. For example:

```
CREATE TABLE mytable(f1 int, f2 float, f3 text);

CREATE FUNCTION getf1(mytable) RETURNS int AS 'SELECT $1.f1' LANGUAGE SQL;

-- No cast needed since only one getf1() exists
SELECT getf1(ROW(1,2.5,'this is a test'));
 getf1
-------
     1
```

### 4.2.14. Expression Evaluation Rules

[]()

The order of evaluation of subexpressions is not defined. In particular, the inputs of an operator or function are not necessarily evaluated left-to-right or in any other fixed order.

Furthermore, if the result of an expression can be determined by evaluating only some parts of it, then other subexpressions might not be evaluated at all. For instance, if one wrote:

```
SELECT true OR somefunc();
```

then `somefunc()` would (probably) not be called at all. The same would be the case if one wrote:

```
SELECT somefunc() OR true;
```

Note that this is not the same as the left-to-right “short-circuiting” of Boolean operators that is found in some programming languages.

As a consequence, it is unwise to use functions with side effects as part of complex expressions. It is particularly dangerous to rely on side effects or evaluation order in `WHERE` and `HAVING` clauses, since those clauses are extensively reprocessed as part of developing an execution plan. Boolean expressions (`AND`/`OR`/`NOT` combinations) in those clauses can be reorganized in any manner allowed by the laws of Boolean algebra.

When it is essential to force evaluation order, a `CASE` construct (see [Section 9.18](functions-conditional.html)) can be used. For example, this is an untrustworthy way of trying to avoid division by zero in a `WHERE` clause:

```
SELECT ... WHERE x > 0 AND y/x > 1.5;
```

But this is safe:

```
SELECT ... WHERE CASE WHEN x > 0 THEN y/x > 1.5 ELSE false END;
```

A `CASE` construct used in this fashion will defeat optimization attempts, so it should only be done when necessary. (In this particular example, it would be better to sidestep the problem by writing `y > 1.5*x` instead.)

`CASE` is not a cure-all for such issues, however. One limitation of the technique illustrated above is that it does not prevent early evaluation of constant subexpressions. As described in [Section 38.7](xfunc-volatility.html), functions and operators marked `IMMUTABLE` can be evaluated when the query is planned rather than when it is executed.
Thus for example

```
SELECT CASE WHEN x > 0 THEN x ELSE 1/0 END FROM tab;
```

is likely to result in a division-by-zero failure due to the planner trying to simplify the constant subexpression, even if every row in the table has `x > 0` so that the `ELSE` arm would never be entered at run time.

While that particular example might seem silly, related cases that don't obviously involve constants can occur in queries executed within functions, since the values of function arguments and local variables can be inserted into queries as constants for planning purposes. Within PL/pgSQL functions, for example, using an `IF`-`THEN`-`ELSE` statement to protect a risky computation is much safer than just nesting it in a `CASE` expression.

Another limitation of the same kind is that a `CASE` cannot prevent evaluation of an aggregate expression contained within it, because aggregate expressions are computed before other expressions in a `SELECT` list or `HAVING` clause are considered. For example, the following query can cause a division-by-zero error despite seemingly having protected against it:

```
SELECT CASE WHEN min(employees) > 0
            THEN avg(expenses / employees)
       END FROM departments;
```

The `min()` and `avg()` aggregates are computed concurrently over all the input rows, so if any row has `employees` equal to zero, the division-by-zero error will occur before there is any opportunity to test the result of `min()`. Instead, use a `WHERE` or `FILTER` clause to prevent problematic input rows from reaching an aggregate function in the first place.

diff --git a/docs/X/sql-keywords-appendix.md b/docs/en/sql-keywords-appendix.md
similarity index 100%
rename from docs/X/sql-keywords-appendix.md
rename to docs/en/sql-keywords-appendix.md
diff --git a/docs/en/sql-keywords-appendix.zh.md b/docs/en/sql-keywords-appendix.zh.md
new file mode 100644
index 0000000000000000000000000000000000000000..156e6bad7fca25a1abc146c36fbf5e818c5914d8
--- /dev/null
+++ b/docs/en/sql-keywords-appendix.zh.md
@@ -0,0 +1,848 @@

## Appendix C. SQL Key Words

[](<>)

[Table C.1](sql-keywords-appendix.html#KEYWORDS-TABLE) lists all tokens that are key words in the SQL standard and in PostgreSQL 14.2. Background information can be found in [Section 4.1.1](sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS). (For space reasons, only the latest two versions of the SQL standard, and SQL-92 for historical comparison, are included. The differences between those and the other intermediate standard versions are small.)

SQL distinguishes between *reserved* and *non-reserved* key words. According to the standard, reserved key words are the only real key words; they are never allowed as identifiers. Non-reserved key words only have a special meaning in particular contexts and can be used as identifiers in other contexts. Most non-reserved key words are actually the names of built-in tables and functions specified by SQL. The concept of non-reserved key words essentially only exists to declare that some predefined meaning is attached to a word in some contexts.

In the PostgreSQL parser, life is a bit more complicated. There are several different classes of tokens ranging from those that can never be used as an identifier to those that have absolutely no special status in the parser, but are considered ordinary identifiers. (The latter is usually the case for functions specified by SQL.)
Even reserved key words are not completely reserved in PostgreSQL, but can be used as column labels (for example,`SELECT 55 AS CHECK`, even though`CHECK`is a reserved key word). + +In[Table C.1](sql-keywords-appendix.html#KEYWORDS-TABLE)in the column for PostgreSQL we classify as “non-reserved” those key words that are explicitly known to the parser but are allowed as column or table names. Some key words that are otherwise non-reserved cannot be used as function or data type names and are marked accordingly. (Most of these words represent built-in functions or data types with special syntax. The function or type is still available but it cannot be redefined by the user.) Labeled “reserved” are those tokens that are not allowed as column or table names. Some reserved key words are allowable as names for functions or data types; this is also shown in the table. If not so marked, a reserved key word is only allowed as a column label. A blank entry in this column means that the word is treated as an ordinary identifier by PostgreSQL. + +Furthermore, while most key words can be used as “bare” column labels without writing`AS`before them (as described in[Section 7.3.2](queries-select-lists.html#QUERIES-COLUMN-LABELS)), there are a few that require a leading`AS`to avoid ambiguity. These are marked in the table as “requires`AS`”. + +As a general rule, if you get spurious parser errors for commands that use any of the listed key words as an identifier, you should try quoting the identifier to see if the problem goes away. + +It is important to understand before studying[Table C.1](sql-keywords-appendix.html#KEYWORDS-TABLE)that the fact that a key word is not reserved in PostgreSQL does not mean that the feature related to the word is not implemented. Conversely, the presence of a key word does not indicate the existence of a feature. + +**Table C.1. 
SQL Key Words** + +| Key Word | PostgreSQL | SQL:2016 | SQL:2011 | SQL-92 | +| -------- | ---------- | -------- | -------- | ------ | +| `A` | | non-reserved | non-reserved | | +| `ABORT` | non-reserved | | | | +| `ABS` | | reserved | reserved | | +| `ABSENT` | | non-reserved | non-reserved | | +| `ABSOLUTE` | non-reserved | non-reserved | non-reserved | reserved | +| `ACCESS` | non-reserved | | | | +| `ACCORDING` | | non-reserved | non-reserved | | +| `ACOS` | | reserved | | | +| `行动` | 非保留 | 非保留 | 非保留 | 预订的 | +| `ADA` | | 非保留 | 非保留 | 非保留 | +| `添加` | 非保留 | 非保留 | 非保留 | 预订的 | +| `行政` | 非保留 | 非保留 | 非保留 | | +| `后` | 非保留 | 非保留 | 非保留 | | +| `总计的` | 非保留 | | | | +| `全部` | 预订的 | 预订的 | 预订的 | 预订的 | +| `分配` | | 预订的 | 预订的 | 预订的 | +| `还` | 非保留 | | | | +| `改变` | 非保留 | 预订的 | 预订的 | 预订的 | +| `总是` | 非保留 | 非保留 | 非保留 | | +| `分析` | 预订的 | | | | +| `分析` | 预订的 | | | | +| `和` | 预订的 | 预订的 | 预订的 | 预订的 | +| `任何` | 预订的 | 预订的 | 预订的 | 预订的 | +| `是` | | 预订的 | 预订的 | 预订的 | +| `大批` | 保留,需要`作为` | 预订的 | 预订的 | | +| `ARRAY_AGG` | | 预订的 | 预订的 | | +| `ARRAY_​MAX_​CARDINALITY` | | 预订的 | 预订的 | | +| `作为` | 保留,需要`作为` | 预订的 | 预订的 | 预订的 | +| `ASC` | 预订的 | 非保留 | 非保留 | 预订的 | +| `敏感的` | 非保留 | 预订的 | 预订的 | | +| `亚信` | | 预订的 | | | +| `断言` | 非保留 | 非保留 | 非保留 | 预订的 | +| `任务` | 非保留 | 非保留 | 非保留 | | +| `不对称` | 预订的 | 预订的 | 预订的 | | +| `在` | 非保留 | 预订的 | 预订的 | 预订的 | +| `晒黑` | | 预订的 | | | +| `原子` | 非保留 | 预订的 | 预订的 | | +| `附` | 非保留 | | | | +| `属性` | 非保留 | 非保留 | 非保留 | | +| `属性` | | 非保留 | 非保留 | | +| `授权` | 保留(可以是函数或类型) | 预订的 | 预订的 | 预订的 | +| `平均` | | 预订的 | 预订的 | 预订的 | +| `落后` | 非保留 | | | | +| `BASE64` | | 非保留 | 非保留 | | +| `前` | 非保留 | 非保留 | 非保留 | | +| `开始` | 非保留 | 预订的 | 预订的 | 预订的 | +| `BEGIN_FRAME` | | 预订的 | 预订的 | | +| `BEGIN_PARTITION` | | 预订的 | 预订的 | | +| `伯努利` | | 非保留 | 非保留 | | +| `之间` | 非保留(不能是函数或类型) | 预订的 | 预订的 | 预订的 | +| `大整数` | 非保留(不能是函数或类型) | 预订的 | 预订的 | | +| `二进制` | 保留(可以是函数或类型) | 预订的 | 预订的 | | +| `少量` | 非保留(不能是函数或类型) | | | 预订的 | +| `BIT_LENGTH` | | | | 预订的 | +| `斑点` | | 预订的 | 预订的 | | +| `封锁` | | 非保留 | 非保留 | | +| `物料清单` | | 非保留 | 非保留 | | +| `布尔值` | 非保留(不能是函数或类型) | 预订的 | 预订的 | | +| `两个都` | 预订的 | 预订的 | 预订的 | 预订的 | +| `宽度` | 非保留 | 非保留 | 非保留 | | +| `经过` | 非保留 | 预订的 | 预订的 | 预订的 | +| `C` | | 非保留 | 非保留 | 非保留 | +| `缓存` | 非保留 | | | | +| `称呼` | 非保留 | 预订的 | 预订的 | | +| `被叫` | 非保留 | 预订的 | 预订的 | | +| `基数` | | 预订的 | 预订的 | | +| `级联` | 非保留 | 非保留 | 非保留 | 预订的 | +| `级联` | 非保留 | 预订的 | 预订的 | 预订的 | +| `案子` | 预订的 | 预订的 | 预订的 | 预订的 | +| `投掷` | 预订的 | 预订的 | 预订的 | 预订的 | +| `目录` | 非保留 | 非保留 | 非保留 | 预订的 | +| `CATALOG_NAME` | | 非保留 | 非保留 | 非保留 | +| `CEIL` | | 预订的 | 预订的 | | +| `天花板` | | 预订的 | 预订的 | | +| `链` | 非保留 | 非保留 | 非保留 | | +| `连锁` | | 非保留 | | | +| `字符` | 非保留(不能是函数或类型),需要`作为` | 预订的 | 预订的 | 预订的 | +| `特点` | 非保留(不能是函数或类型),需要`作为` | 预订的 | 预订的 | 预订的 | +| `特征` | 非保留 | 非保留 | 非保留 | | +| `人物` | | 非保留 | 非保留 | | +| `CHARACTER_LENGTH` | | 预订的 | 预订的 | 预订的 | +| `CHARACTER_​SET_​CATALOG` | | 非保留 | 非保留 | 非保留 | +| `CHARACTER_SET_NAME` | | 非保留 | 非保留 | 非保留 | +| `CHARACTER_SET_SCHEMA` | | 非保留 | 非保留 | 非保留 | +| `CHAR_LENGTH` | | 预订的 | 预订的 | 预订的 | +| `查看` | 预订的 | 预订的 | 预订的 | 预订的 | +| `检查点` | 非保留 | | | | +| `班级` | 非保留 | | | | +| `分类器` | | 预订的 | | | +| `CLASS_ORIGIN` | | 非保留 | 非保留 | 非保留 | +| `CLOB` | | 预订的 | 预订的 | | +| `关闭` | 非保留 | 预订的 | 预订的 | 预订的 | +| `簇` | 非保留 | | | | +| `合并` | 非保留(不能是函数或类型) | 预订的 | 预订的 | 预订的 | +| `COBOL` | | 非保留 | 非保留 | 非保留 | +| `整理` | 预订的 | 预订的 | 预订的 | 预订的 | +| `整理` | 保留(可以是函数或类型) | 非保留 | 非保留 | 预订的 | +| `COLLATION_CATALOG` | | 非保留 | 非保留 | 非保留 | +| `COLLATION_NAME` | | 非保留 | 非保留 | 非保留 | +| `COLLATION_SCHEMA` | | 非保留 | 非保留 | 非保留 | +| `收藏` | | 预订的 | 预订的 | | +| `柱子` | 预订的 | 预订的 | 预订的 
| 预订的 | +| `列` | 非保留 | 非保留 | 非保留 | | +| `COLUMN_NAME` | | 非保留 | 非保留 | 非保留 | +| `COMMAND_FUNCTION` | | 非保留 | 非保留 | 非保留 | +| `COMMAND_​FUNCTION_​代码` | | 非保留 | 非保留 | | +| `评论` | 非保留 | | | | +| `注释` | 非保留 | | | | +| `犯罪` | 非保留 | 预订的 | 预订的 | 预订的 | +| `坚定的` | 非保留 | 非保留 | 非保留 | 非保留 | +| `压缩` | 非保留 | | | | +| `同时` | 保留(可以是函数或类型) | | | | +| `健康)状况` | | 预订的 | 预订的 | | +| `有条件的` | | 非保留 | | | +| `CONDITION_NUMBER` | | 非保留 | 非保留 | 非保留 | +| `配置` | 非保留 | | | | +| `冲突` | 非保留 | | | | +| `连接` | | 预订的 | 预订的 | 预订的 | +| `联系` | 非保留 | 非保留 | 非保留 | 预订的 | +| `CONNECTION_NAME` | | 非保留 | 非保留 | 非保留 | +| `约束` | 预订的 | 预订的 | 预订的 | 预订的 | +| `约束` | 非保留 | 非保留 | 非保留 | 预订的 | +| `约束目录` | | 非保留 | 非保留 | 非保留 | +| `CONSTRAINT_NAME` | | 非保留 | 非保留 | 非保留 | +| `CONSTRAINT_SCHEMA` | | 非保留 | 非保留 | 非保留 | +| `建设者` | | 非保留 | 非保留 | | +| `包含` | | 预订的 | 预订的 | | +| `内容` | 非保留 | 非保留 | 非保留 | | +| `继续` | 非保留 | 非保留 | 非保留 | 预订的 | +| `控制` | | 非保留 | 非保留 | | +| `转换` | 非保留 | | | | +| `转变` | | 预订的 | 预订的 | 预订的 | +| `复制` | 非保留 | 预订的 | | | +| `CORR` | | 预订的 | 预订的 | | +| `相应的` | | 预订的 | 预订的 | 预订的 | +| `COS` | | 预订的 | | | +| `职业安全与健康委员会` | | 预订的 | | | +| `成本` | 非保留 | | | | +| `数数` | | 预订的 | 预订的 | 预订的 | +| `COVAR_POP` | | 预订的 | 预订的 | | +| `COVAR_SAMP` | | 预订的 | 预订的 | | +| `创造` | 保留,需要`作为` | 预订的 | 预订的 | 预订的 | +| `叉` | 保留(可以是函数或类型) | 预订的 | 预订的 | 预订的 | +| `CSV` | 非保留 | | | | +| `立方体` | 非保留 | 预订的 | 预订的 | | +| `CUME_DIST` | | 预订的 | 预订的 | | +| `当前的` | 非保留 | 预订的 | 预订的 | 预订的 | +| `当前目录` | 预订的 | 预订的 | 预订的 | | +| `当前日期` | 预订的 | 预订的 | 预订的 | 预订的 | +| `CURRENT_​DEFAULT_​TRANSFORM_​GROUP` | | 预订的 | 预订的 | | +| `当前路径` | | 预订的 | 预订的 | | +| `当前角色` | 预订的 | 预订的 | 预订的 | | +| `CURRENT_ROW` | | 预订的 | 预订的 | | +| `CURRENT_SCHEMA` | 保留(可以是函数或类型) | 预订的 | 预订的 | | +| `当前时间` | 预订的 | 预订的 | 预订的 | 预订的 | +| `CURRENT_TIMESTAMP` | 预订的 | 预订的 | 预订的 | 预订的 | +| `CURRENT_​TRANSFORM_​GROUP_​FOR_​TYPE` | | 预订的 | 预订的 | | +| `当前用户` | 预订的 | 预订的 | 预订的 | 预订的 | +| `光标` | 非保留 | 预订的 | 预订的 | 预订的 | +| `CURSOR_NAME` | | 非保留 | 非保留 | 非保留 | +| `循环` | 非保留 | 预订的 | 预订的 | | +| `数据` | 非保留 | 非保留 | 非保留 | 非保留 | +| `数据库` | 非保留 | | | | +| `数据链接` | | 预订的 | 预订的 | | +| `日期` | | 预订的 | 预订的 | 预订的 | +| `DATETIME_​INTERVAL_​CODE` | | 非保留 | 非保留 | 非保留 | +| `DATETIME_​INTERVAL_​精度` | | 非保留 | 非保留 | 非保留 | +| `日` | 非保留,需要`作为` | 预订的 | 预订的 | 预订的 | +| `D B` | | 非保留 | 非保留 | | +| `解除分配` | 非保留 | 预订的 | 预订的 | 预订的 | +| `十二月` | 非保留(不能是函数或类型) | 预订的 | 预订的 | 预订的 | +| `DECFLOAT` | | 预订的 | | | +| `十进制` | 非保留(不能是函数或类型) | 预订的 | 预订的 | 预订的 | +| `宣布` | 非保留 | 预订的 | 预订的 | 预订的 | +| `默认` | 预订的 | 预订的 | 预订的 | 预订的 | +| `默认值` | 非保留 | 非保留 | 非保留 | | +| `可延期的` | 预订的 | 非保留 | 非保留 | 预订的 | +| `延期` | 非保留 | 非保留 | 非保留 | 预订的 | +| `定义` | | 预订的 | | | +| `定义` | | 非保留 | 非保留 | | +| `定义者` | 非保留 | 非保留 | 非保留 | | +| `程度` | | 非保留 | 非保留 | | +| `删除` | 非保留 | 预订的 | 预订的 | 预订的 | +| `分隔符` | 非保留 | | | | +| `分隔符` | 非保留 | | | | +| `DENSE_RANK` | | 预订的 | 预订的 | | +| `依靠` | 非保留 | | | | +| `深度` | 非保留 | 非保留 | 非保留 | | +| `DEREF` | | 预订的 | 预订的 | | +| `衍生的` | | 非保留 | 非保留 | | +| `DESC` | 预订的 | 非保留 | 非保留 | 预订的 | +| `描述` | | 预订的 | 预订的 | 预订的 | +| `描述符` | | 非保留 | 非保留 | 预订的 | +| `分离` | 非保留 | | | | +| `确定性` | | 预订的 | 预订的 | | +| `诊断` | | 非保留 | 非保留 | 预订的 | +| `字典` | 非保留 | | | | +| `禁用` | 非保留 | | | | +| `丢弃` | 非保留 | | | | +| `断开` | | 预订的 | 预订的 | 预订的 | +| `派遣` | | 非保留 | 非保留 | | +| `清楚的` | 预订的 | 预订的 | 预订的 | 预订的 | +| `DLNEWCOPY` | | 预订的 | 预订的 | | +| `DLPREVIOUSCOPY` | | 预订的 | 预订的 | | +| `DLURL完成` | | 预订的 | 预订的 | | +| `DLURLCOMPLETEONLY` | | 预订的 | 预订的 | | +| `DLURLCOMPLETEWRITE` | | 预订的 | 预订的 | | +| `DLURL路径` | | 预订的 | 预订的 | | +| `DLURLPATHONLY` | | 预订的 | 预订的 | | +| `DLURLPATHWRITE` | | 预订的 | 预订的 | | +| `DLURLSCHEME` 
| | 预订的 | 预订的 | | +| `DLURL服务器` | | 预订的 | 预订的 | | +| `DLVALUE` | | 预订的 | 预订的 | | +| `做` | 预订的 | | | | +| `文档` | 非保留 | 非保留 | 非保留 | | +| `领域` | 非保留 | 非保留 | 非保留 | 预订的 | +| `双倍的` | 非保留 | 预订的 | 预订的 | 预订的 | +| `降低` | 非保留 | 预订的 | 预订的 | 预订的 | +| `动态的` | | 预订的 | 预订的 | | +| `DYNAMIC_FUNCTION` | | 非保留 | 非保留 | 非保留 | +| `DYNAMIC_​FUNCTION_​代码` | | 非保留 | 非保留 | | +| `每个` | 非保留 | 预订的 | 预订的 | | +| `元素` | | 预订的 | 预订的 | | +| `别的` | 预订的 | 预订的 | 预订的 | 预订的 | +| `空的` | | 预订的 | 非保留 | | +| `使能够` | 非保留 | | | | +| `编码` | 非保留 | 非保留 | 非保留 | | +| `加密` | 非保留 | | | | +| `结尾` | 预订的 | 预订的 | 预订的 | 预订的 | +| `结束执行` | | 预订的 | 预订的 | 预订的 | +| `END_FRAME` | | 预订的 | 预订的 | | +| `END_PARTITION` | | 预订的 | 预订的 | | +| `强制执行` | | 非保留 | 非保留 | | +| `枚举` | 非保留 | | | | +| `等于` | | 预订的 | 预订的 | | +| `错误` | | 非保留 | | | +| `逃脱` | 非保留 | 预订的 | 预订的 | 预订的 | +| `事件` | 非保留 | | | | +| `每一个` | | 预订的 | 预订的 | | +| `除了` | 保留,需要`作为` | 预订的 | 预订的 | 预订的 | +| `例外` | | | | 预订的 | +| `排除` | 非保留 | 非保留 | 非保留 | | +| `排除` | 非保留 | 非保留 | 非保留 | | +| `独家的` | 非保留 | | | | +| `执行` | | 预订的 | 预订的 | 预订的 | +| `执行` | 非保留 | 预订的 | 预订的 | 预订的 | +| `存在` | 非保留(不能是函数或类型) | 预订的 | 预订的 | 预订的 | +| `经验值` | | 预订的 | 预订的 | | +| `解释` | 非保留 | | | | +| `表达` | 非保留 | 非保留 | 非保留 | | +| `延期` | 非保留 | | | | +| `外部的` | 非保留 | 预订的 | 预订的 | 预订的 | +| `提炼` | 非保留(不能是函数或类型) | 预订的 | 预订的 | 预订的 | +| `错误的` | 预订的 | 预订的 | 预订的 | 预订的 | +| `家庭` | 非保留 | | | | +| `拿来` | 保留,需要`作为` | 预订的 | 预订的 | 预订的 | +| `文件` | | 非保留 | 非保留 | | +| `筛选` | 非保留,需要`作为` | 预订的 | 预订的 | | +| `最终的` | | 非保留 | 非保留 | | +| `完成` | 非保留 | | | | +| `结束` | | 非保留 | | | +| `第一的` | 非保留 | 非保留 | 非保留 | 预订的 | +| `FIRST_VALUE` | | 预订的 | 预订的 | | +| `旗帜` | | 非保留 | 非保留 | | +| `漂浮` | 非保留(不能是函数或类型) | 预订的 | 预订的 | 预订的 | +| `地面` | | 预订的 | 预订的 | | +| `下列的` | 非保留 | 非保留 | 非保留 | | +| `为了` | 保留,需要`作为` | 预订的 | 预订的 | 预订的 | +| `力量` | 非保留 | | | | +| `外国的` | 预订的 | 预订的 | 预订的 | 预订的 | +| `格式` | | 非保留 | | | +| `FORTRAN` | | 非保留 | 非保留 | 非保留 | +| `向前` | 非保留 | | | | +| `成立` | | 非保留 | 非保留 | 预订的 | +| `FRAME_ROW` | | 预订的 | 预订的 | | +| `自由` | | 预订的 | 预订的 | | +| `冻结` | 保留(可以是函数或类型) | | | | +| `从` | 保留,需要`作为` | 预订的 | 预订的 | 预订的 | +| `FS` | | 非保留 | 非保留 | | +| `实现` | | 非保留 | | | +| `满的` | 保留(可以是函数或类型) | 预订的 | 预订的 | 预订的 | +| `功能` | 非保留 | 预订的 | 预订的 | | +| `职能` | 非保留 | | | | +| `融合` | | 预订的 | 预订的 | | +| `G` | | 非保留 | 非保留 | | +| `一般的` | | 非保留 | 非保留 | | +| `已生成` | 非保留 | 非保留 | 非保留 | | +| `得到` | | 预订的 | 预订的 | 预订的 | +| `全球的` | 非保留 | 预订的 | 预订的 | 预订的 | +| `去` | | 非保留 | 非保留 | 预订的 | +| `去` | | 非保留 | 非保留 | 预订的 | +| `授予` | 保留,需要`作为` | 预订的 | 预订的 | 预订的 | +| `的确` | 非保留 | 非保留 | 非保留 | | +| `最伟大的` | 非保留(不能是函数或类型) | | | | +| `团体` | 保留,需要`作为` | 预订的 | 预订的 | 预订的 | +| `分组` | 非保留(不能是函数或类型) | 预订的 | 预订的 | | +| `组` | 非保留 | 预订的 | 预订的 | | +| `处理程序` | 非保留 | | | | +| `拥有` | 保留,需要`作为` | 预订的 | 预订的 | 预订的 | +| `标题` | 非保留 | | | | +| `十六进制` | | 非保留 | 非保留 | | +| `等级制度` | | 非保留 | 非保留 | | +| `抓住` | 非保留 | 预订的 | 预订的 | | +| `小时` | 非保留,需要`作为` | 预订的 | 预订的 | 预订的 | +| `ID` | | 非保留 | 非保留 | | +| `身份` | 非保留 | 预订的 | 预订的 | 预订的 | +| `如果` | 非保留 | | | | +| `忽视` | | 非保留 | 非保留 | | +| `我喜欢` | 保留(可以是函数或类型) | | | | +| `即时` | 非保留 | 非保留 | 非保留 | 预订的 | +| `立即地` | | 非保留 | 非保留 | | +| `不可变的` | 非保留 | | | | +| `执行` | | 非保留 | 非保留 | | +| `隐含的` | 非保留 | | | | +| `进口` | 非保留 | 预订的 | 预订的 | | +| `在` | 预订的 | 预订的 | 预订的 | 预订的 | +| `包括` | 非保留 | | | | +| `包含` | 非保留 | 非保留 | 非保留 | | +| `增量` | 非保留 | 非保留 | 非保留 | | +| `缩进` | | 非保留 | 非保留 | | +| `指数` | 非保留 | | | | +| `索引` | 非保留 | | | | +| `指标` | | 预订的 | 预订的 | 预订的 | +| `继承` | 非保留 | | | | +| `继承` | 非保留 | | | | +| `最初的` | | 预订的 | | | +| `最初` | 预订的 | 非保留 | 非保留 | 预订的 | +| `排队` | 非保留 | | | | +| `内` | 保留(可以是函数或类型) | 预订的 | 预订的 | 预订的 | +| `进出` | 
非保留(不能是函数或类型) | 预订的 | 预订的 | | +| `输入` | 非保留 | 非保留 | 非保留 | 预订的 | +| `不敏感` | 非保留 | 预订的 | 预订的 | 预订的 | +| `插入` | 非保留 | 预订的 | 预订的 | 预订的 | +| `实例` | | 非保留 | 非保留 | | +| `可实例化` | | 非保留 | 非保留 | | +| `反而` | 非保留 | 非保留 | 非保留 | | +| `INT` | 非保留(不能是函数或类型) | 预订的 | 预订的 | 预订的 | +| `整数` | 非保留(不能是函数或类型) | 预订的 | 预订的 | 预订的 | +| `正直` | | 非保留 | 非保留 | | +| `相交` | 保留,需要`作为` | 预订的 | 预订的 | 预订的 | +| `路口` | | 预订的 | 预订的 | | +| `间隔` | 非保留(不能是函数或类型) | 预订的 | 预订的 | 预订的 | +| `进入` | 保留,需要`作为` | 预订的 | 预订的 | 预订的 | +| `调用者` | 非保留 | 非保留 | 非保留 | | +| `是` | 保留(可以是函数或类型) | 预订的 | 预订的 | 预订的 | +| `一片空白` | 保留(可以是函数或类型),需要`作为` | | | | +| `隔离` | 非保留 | 非保留 | 非保留 | 预订的 | +| `加入` | 保留(可以是函数或类型) | 预订的 | 预订的 | 预订的 | +| `JSON` | | 非保留 | | | +| `JSON_ARRAY` | | 预订的 | | | +| `JSON_ARRAYAGG` | | 预订的 | | | +| `JSON_EXISTS` | | 预订的 | | | +| `JSON_OBJECT` | | 预订的 | | | +| `JSON_OBJECTAGG` | | 预订的 | | | +| `JSON_QUERY` | | 预订的 | | | +| `JSON_TABLE` | | 预订的 | | | +| `JSON_TABLE_PRIMITIVE` | | 预订的 | | | +| `JSON_VALUE` | | 预订的 | | | +| `ķ` | | 非保留 | 非保留 | | +| `保持` | | 非保留 | | | +| `钥匙` | 非保留 | 非保留 | 非保留 | 预订的 | +| `键` | | 非保留 | | | +| `KEY_MEMBER` | | 非保留 | 非保留 | | +| `KEY_TYPE` | | 非保留 | 非保留 | | +| `标签` | 非保留 | | | | +| `落后` | | 预订的 | 预订的 | | +| `语言` | 非保留 | 预订的 | 预订的 | 预订的 | +| `大的` | 非保留 | 预订的 | 预订的 | | +| `最后的` | 非保留 | 非保留 | 非保留 | 预订的 | +| `LAST_VALUE` | | 预订的 | 预订的 | | +| `侧` | 预订的 | 预订的 | 预订的 | | +| `带领` | | 预订的 | 预订的 | | +| `领导` | 预订的 | 预订的 | 预订的 | 预订的 | +| `防漏` | 非保留 | | | | +| `至少` | 非保留(不能是函数或类型) | | | | +| `剩下` | 保留(可以是函数或类型) | 预订的 | 预订的 | 预订的 | +| `长度` | | 非保留 | 非保留 | 非保留 | +| `等级` | 非保留 | 非保留 | 非保留 | 预订的 | +| `图书馆` | | 非保留 | 非保留 | | +| `喜欢` | 保留(可以是函数或类型) | 预订的 | 预订的 | 预订的 | +| `LIKE_REGEX` | | 预订的 | 预订的 | | +| `限制` | 保留,需要`作为` | 非保留 | 非保留 | | +| `关联` | | 非保留 | 非保留 | | +| `列表` | | 预订的 | | | +| `听` | 非保留 | | | | +| `LN` | | 预订的 | 预订的 | | +| `加载` | 非保留 | | | | +| `当地的` | 非保留 | 预订的 | 预订的 | 预订的 | +| `当地时间` | 预订的 | 预订的 | 预订的 | | +| `本地时间戳` | 预订的 | 预订的 | 预订的 | | +| `地点` | 非保留 | 非保留 | 非保留 | | +| `定位器` | | 非保留 | 非保留 | | +| `锁` | 非保留 | | | | +| `锁定` | 非保留 | | | | +| `日志` | | 预订的 | | | +| `日志10` | | 预订的 | | | +| `已记录` | 非保留 | | | | +| `降低` | | 预订的 | 预订的 | 预订的 | +| `米` | | 非保留 | 非保留 | | +| `地图` | | 非保留 | 非保留 | | +| `映射` | 非保留 | 非保留 | 非保留 | | +| `匹配` | 非保留 | 预订的 | 预订的 | 预订的 | +| `匹配` | | 非保留 | 非保留 | | +| `火柴` | | 预订的 | | | +| `MATCH_NUMBER` | | 预订的 | | | +| `MATCH_RECOGNIZE` | | 预订的 | | | +| `物化` | 非保留 | | | | +| `最大限度` | | 预订的 | 预订的 | 预订的 | +| `最大值` | 非保留 | 非保留 | 非保留 | | +| `措施` | | 预订的 | | | +| `成员` | | 预订的 | 预订的 | | +| `合并` | | 预订的 | 预订的 | | +| `MESSAGE_LENGTH` | | 非保留 | 非保留 | 非保留 | +| `MESSAGE_OCTET_LENGTH` | | 非保留 | 非保留 | 非保留 | +| `MESSAGE_TEXT` | | 非保留 | 非保留 | 非保留 | +| `方法` | 非保留 | 预订的 | 预订的 | | +| `最小` | | 预订的 | 预订的 | 预订的 | +| `分钟` | 非保留,需要`作为` | 预订的 | 预订的 | 预订的 | +| `最小值` | 非保留 | 非保留 | 非保留 | | +| `模组` | | 预订的 | 预订的 | | +| `模式` | 非保留 | | | | +| `修改` | | 预订的 | 预订的 | | +| `模块` | | 预订的 | 预订的 | 预订的 | +| `月` | 非保留,需要`作为` | 预订的 | 预订的 | 预订的 | +| `更多的` | | 非保留 | 非保留 | 非保留 | +| `移动` | 非保留 | | | | +| `多组` | | 预订的 | 预订的 | | +| `腮腺炎` | | 非保留 | 非保留 | 非保留 | +| `姓名` | 非保留 | 非保留 | 非保留 | 非保留 | +| `名称` | 非保留 | 非保留 | 非保留 | 预订的 | +| `命名空间` | | 非保留 | 非保留 | | +| `国家的` | 非保留(不能是函数或类型) | 预订的 | 预订的 | 预订的 | +| `自然的` | 保留(可以是函数或类型) | 预订的 | 预订的 | 预订的 | +| `NCHAR` | 非保留(不能是函数或类型) | 预订的 | 预订的 | 预订的 | +| `NCLOB` | | 预订的 | 预订的 | | +| `嵌套` | | 非保留 | | | +| `嵌套` | | 非保留 | 非保留 | | +| `新的` | 非保留 | 预订的 | 预订的 | | +| `下一个` | 非保留 | 非保留 | 非保留 | 预订的 | +| `NFC` | 非保留 | 非保留 | 非保留 | | +| `NFD` | 非保留 | 非保留 | 非保留 | | +| `NFKC` | 非保留 | 非保留 | 非保留 | | +| `NFKD` | 非保留 | 非保留 | 非保留 | | +| `零` | 
| 非保留 | 非保留 | | +| `不` | 非保留 | 预订的 | 预订的 | 预订的 | +| `没有` | 非保留(不能是函数或类型) | 预订的 | 预订的 | | +| `标准化` | 非保留(不能是函数或类型) | 预订的 | 预订的 | | +| `归一化` | 非保留 | 非保留 | 非保留 | | +| `不是` | 预订的 | 预订的 | 预订的 | 预订的 | +| `没有` | 非保留 | | | | +| `通知` | 非保留 | | | | +| `非空` | 保留(可以是函数或类型),需要`作为` | | | | +| `现在等待` | 非保留 | | | | +| `NTH_VALUE` | | 预订的 | 预订的 | | +| `NTILE` | | 预订的 | 预订的 | | +| `空值` | 预订的 | 预订的 | 预订的 | 预订的 | +| `可空的` | | 非保留 | 非保留 | 非保留 | +| `NULLIF` | 非保留(不能是函数或类型) | 预订的 | 预订的 | 预订的 | +| `空值` | 非保留 | 非保留 | 非保留 | | +| `数字` | | 非保留 | 非保留 | 非保留 | +| `数字` | 非保留(不能是函数或类型) | 预订的 | 预订的 | 预订的 | +| `目的` | 非保留 | 非保留 | 非保留 | | +| `OCCURRENCES_REGEX` | | 预订的 | 预订的 | | +| `八位字节` | | 非保留 | 非保留 | | +| `OCTET_LENGTH` | | 预订的 | 预订的 | 预订的 | +| `的` | 非保留 | 预订的 | 预订的 | 预订的 | +| `离开` | 非保留 | 非保留 | 非保留 | | +| `抵消` | 保留,需要`作为` | 预订的 | 预订的 | | +| `OID` | 非保留 | | | | +| `老的` | 非保留 | 预订的 | 预订的 | | +| `忽略` | | 预订的 | | | +| `在` | 保留,需要`作为` | 预订的 | 预订的 | 预订的 | +| `一` | | 预订的 | | | +| `只要` | 预订的 | 预订的 | 预订的 | 预订的 | +| `打开` | | 预订的 | 预订的 | 预订的 | +| `操作员` | 非保留 | | | | +| `选项` | 非保留 | 非保留 | 非保留 | 预订的 | +| `选项` | 非保留 | 非保留 | 非保留 | | +| `要么` | 预订的 | 预订的 | 预订的 | 预订的 | +| `命令` | 保留,需要`作为` | 预订的 | 预订的 | 预订的 | +| `订购` | | 非保留 | 非保留 | | +| `序数` | 非保留 | 非保留 | 非保留 | | +| `其他` | 非保留 | 非保留 | 非保留 | | +| `出去` | 非保留(不能是函数或类型) | 预订的 | 预订的 | | +| `外` | 保留(可以是函数或类型) | 预订的 | 预订的 | 预订的 | +| `输出` | | 非保留 | 非保留 | 预订的 | +| `超过` | 非保留,需要`作为` | 预订的 | 预订的 | | +| `溢出` | | 非保留 | | | +| `重叠` | 保留(可以是函数或类型),需要`作为` | 预订的 | 预订的 | 预订的 | +| `覆盖` | 非保留(不能是函数或类型) | 预订的 | 预订的 | | +| `压倒一切` | 非保留 | 非保留 | 非保留 | | +| `拥有` | 非保留 | | | | +| `所有者` | 非保留 | | | | +| `磷` | | 非保留 | 非保留 | | +| `软垫` | | 非保留 | 非保留 | 预订的 | +| `平行线` | 非保留 | | | | +| `范围` | | 预订的 | 预订的 | | +| `PARAMETER_MODE` | | 非保留 | 非保留 | | +| `PARAMETER_NAME` | | 非保留 | 非保留 | | +| `PARAMETER_​ORDINAL_​POSITION` | | 非保留 | 非保留 | | +| `PARAMETER_​SPECIFIC_​CATALOG` | | 非保留 | 非保留 | | +| `PARAMETER_​SPECIFIC_​NAME` | | 非保留 | 非保留 | | +| `PARAMETER_​SPECIFIC_​SCHEMA` | | 非保留 | 非保留 | | +| `解析器` | 非保留 | | | | +| `部分的` | 非保留 | 非保留 | 非保留 | 预订的 | +| `划分` | 非保留 | 预订的 | 预订的 | | +| `帕斯卡` | | 非保留 | 非保留 | 非保留 | +| `经过` | | 非保留 | | | +| `通过` | 非保留 | 非保留 | 非保留 | | +| `直通` | | 非保留 | 非保留 | | +| `密码` | 非保留 | | | | +| `过去的` | | 非保留 | | | +| `小路` | | 非保留 | 非保留 | | +| `图案` | | 预订的 | | | +| `每` | | 预订的 | | | +| `百分` | | 预订的 | 预订的 | | +| `PERCENTILE_CONT` | | 预订的 | 预订的 | | +| `PERCENTILE_DISC` | | 预订的 | 预订的 | | +| `PERCENT_RANK` | | 预订的 | 预订的 | | +| `时期` | | 预订的 | 预订的 | | +| `允许` | | 非保留 | 非保留 | | +| `置换` | | 预订的 | | | +| `配售` | 预订的 | 非保留 | 非保留 | | +| `计划` | | 非保留 | | | +| `计划` | 非保留 | | | | +| `PLI` | | 非保留 | 非保留 | 非保留 | +| `政策` | 非保留 | | | | +| `部分` | | 预订的 | 预订的 | | +| `位置` | 非保留(不能是函数或类型) | 预订的 | 预订的 | 预订的 | +| `POSITION_REGEX` | | 预订的 | 预订的 | | +| `力量` | | 预订的 | 预订的 | | +| `之前` | | 预订的 | 预订的 | | +| `前` | 非保留 | 非保留 | 非保留 | | +| `精确` | 非保留(不能是函数或类型),需要`作为` | 预订的 | 预订的 | 预订的 | +| `准备` | 非保留 | 预订的 | 预订的 | 预订的 | +| `准备好的` | 非保留 | | | | +| `保存` | 非保留 | 非保留 | 非保留 | 预订的 | +| `基本的` | 预订的 | 预订的 | 预订的 | 预订的 | +| `事先的` | 非保留 | 非保留 | 非保留 | 预订的 | +| `私人的` | | 非保留 | | | +| `特权` | 非保留 | 非保留 | 非保留 | 预订的 | +| `程序` | 非保留 | | | | +| `程序` | 非保留 | 预订的 | 预订的 | 预订的 | +| `程序` | 非保留 | | | | +| `程序` | 非保留 | | | | +| `修剪` | | 非保留 | | | +| `PTF` | | 预订的 | | | +| `民众` | | 非保留 | 非保留 | 预订的 | +| `出版物` | 非保留 | | | | +| `引用` | 非保留 | | | | +| `引号` | | 非保留 | | | +| `范围` | 非保留 | 预订的 | 预订的 | | +| `秩` | | 预订的 | 预订的 | | +| `读` | 非保留 | 非保留 | 非保留 | 预订的 | +| `读取` | | 预订的 | 预订的 | | +| `真实的` | 非保留(不能是函数或类型) | 预订的 | 预订的 | 预订的 | +| `重新分配` | 非保留 | | | | +| `重新检查` | 非保留 | | | | +| `恢复` 
| | 非保留 | 非保留 | | +| `递归的` | 非保留 | 预订的 | 预订的 | | +| `参考` | 非保留 | 预订的 | 预订的 | | +| `参考` | 预订的 | 预订的 | 预订的 | 预订的 | +| `参考` | 非保留 | 预订的 | 预订的 | | +| `刷新` | 非保留 | | | | +| `REGR_AVGX` | | 预订的 | 预订的 | | +| `REGR_AVGY` | | 预订的 | 预订的 | | +| `REGR_COUNT` | | 预订的 | 预订的 | | +| `REGR_INTERCEPT` | | 预订的 | 预订的 | | +| `REGR_R2` | | 预订的 | 预订的 | | +| `REGR_SLOPE` | | 预订的 | 预订的 | | +| `REGR_SXX` | | 预订的 | 预订的 | | +| `REGR_SXY` | | 预订的 | 预订的 | | +| `REGR_SYY` | | 预订的 | 预订的 | | +| `重新索引` | 非保留 | | | | +| `相对的` | 非保留 | 非保留 | 非保留 | 预订的 | +| `释放` | 非保留 | 预订的 | 预订的 | | +| `改名` | 非保留 | | | | +| `可重复` | 非保留 | 非保留 | 非保留 | 非保留 | +| `代替` | 非保留 | | | | +| `复制品` | 非保留 | | | | +| `要求` | | 非保留 | 非保留 | | +| `重启` | 非保留 | | | | +| `尊重` | | 非保留 | 非保留 | | +| `重新开始` | 非保留 | 非保留 | 非保留 | | +| `恢复` | | 非保留 | 非保留 | | +| `严格` | 非保留 | 非保留 | 非保留 | 预订的 | +| `结果` | | 预订的 | 预订的 | | +| `返回` | 非保留 | 预订的 | 预订的 | | +| `RETURNED_CARDINALITY` | | 非保留 | 非保留 | | +| `RETURNED_LENGTH` | | 非保留 | 非保留 | 非保留 | +| `返回_​OCTET_​LENGTH` | | 非保留 | 非保留 | 非保留 | +| `RETURNED_SQLSTATE` | | 非保留 | 非保留 | 非保留 | +| `返回` | 保留,需要`作为` | 非保留 | 非保留 | | +| `回报` | 非保留 | 预订的 | 预订的 | | +| `撤销` | 非保留 | 预订的 | 预订的 | 预订的 | +| `正确的` | 保留(可以是函数或类型) | 预订的 | 预订的 | 预订的 | +| `角色` | 非保留 | 非保留 | 非保留 | | +| `回滚` | 非保留 | 预订的 | 预订的 | 预订的 | +| `卷起` | 非保留 | 预订的 | 预订的 | | +| `常规` | 非保留 | 非保留 | 非保留 | | +| `例行公事` | 非保留 | | | | +| `ROUTINE_CATALOG` | | 非保留 | 非保留 | | +| `ROUTINE_NAME` | | 非保留 | 非保留 | | +| `ROUTINE_SCHEMA` | | 非保留 | 非保留 | | +| `排` | 非保留(不能是函数或类型) | 预订的 | 预订的 | | +| `行` | 非保留 | 预订的 | 预订的 | 预订的 | +| `ROW_COUNT 行` | | 非保留 | 非保留 | 非保留 | +| `ROW_NUMBER` | | 预订的 | 预订的 | | +| `规则` | 非保留 | | | | +| `跑步` | | 预订的 | | | +| `保存点` | 非保留 | 预订的 | 预订的 | | +| `标量` | | 非保留 | | | +| `规模` | | 非保留 | 非保留 | 非保留 | +| `架构` | 非保留 | 非保留 | 非保留 | 预订的 | +| `模式` | 非保留 | | | | +| `SCHEMA_NAME` | | 非保留 | 非保留 | 非保留 | +| `范围` | | 预订的 | 预订的 | | +| `SCOPE_CATALOG` | | 非保留 | 非保留 | | +| `SCOPE_NAME` | | 非保留 | 非保留 | | +| `SCOPE_SCHEMA` | | 非保留 | 非保留 | | +| `滚动` | 非保留 | 预订的 | 预订的 | 预订的 | +| `搜索` | 非保留 | 预订的 | 预订的 | | +| `第二` | 非保留,需要`作为` | 预订的 | 预订的 | 预订的 | +| `部分` | | 非保留 | 非保留 | 预订的 | +| `安全` | 非保留 | 非保留 | 非保留 | | +| `寻找` | | 预订的 | | | +| `选择` | 预订的 | 预订的 | 预订的 | 预订的 | +| `可选择的` | | 非保留 | 非保留 | | +| `自己` | | 非保留 | 非保留 | | +| `敏感的` | | 预订的 | 预订的 | | +| `顺序` | 非保留 | 非保留 | 非保留 | | +| `序列` | 非保留 | | | | +| `可序列化` | 非保留 | 非保留 | 非保留 | 非保留 | +| `服务器` | 非保留 | 非保留 | 非保留 | | +| `服务器名称` | | 非保留 | 非保留 | 非保留 | +| `会议` | 非保留 | 非保留 | 非保留 | 预订的 | +| `SESSION_USER` | 预订的 | 预订的 | 预订的 | 预订的 | +| `放` | 非保留 | 预订的 | 预订的 | 预订的 | +| `设置` | 非保留(不能是函数或类型) | | | | +| `套` | 非保留 | 非保留 | 非保留 | | +| `分享` | 非保留 | | | | +| `显示` | 非保留 | 预订的 | | | +| `相似的` | 保留(可以是函数或类型) | 预订的 | 预订的 | | +| `简单的` | 非保留 | 非保留 | 非保留 | | +| `罪` | | 预订的 | | | +| `信纳` | | 预订的 | | | +| `尺寸` | | 非保留 | 非保留 | 预订的 | +| `跳过` | 非保留 | 预订的 | | | +| `小灵通` | 非保留(不能是函数或类型) | 预订的 | 预订的 | 预订的 | +| `快照` | 非保留 | | | | +| `一些` | 预订的 | 预订的 | 预订的 | 预订的 | +| `来源` | | 非保留 | 非保留 | | +| `空间` | | 非保留 | 非保留 | 预订的 | +| `具体的` | | 预订的 | 预订的 | | +| `特定类型` | | 预订的 | 预订的 | | +| `SPECIFIC_NAME` | | 非保留 | 非保留 | | +| `SQL` | 非保留 | 预订的 | 预订的 | 预订的 | +| `SQLCODE` | | | | 预订的 | +| `SQL错误` | | | | 预订的 | +| `SQL异常` | | 预订的 | 预订的 | | +| `SQLSTATE` | | 预订的 | 预订的 | 预订的 | +| `SQL警告` | | 预订的 | 预订的 | | +| `SQRT` | | 预订的 | 预订的 | | +| `稳定的` | 非保留 | | | | +| `独立的` | 非保留 | 非保留 | 非保留 | | +| `开始` | 非保留 | 预订的 | 预订的 | | +| `状态` | | 非保留 | 非保留 | | +| `陈述` | 非保留 | 非保留 | 非保留 | | +| `静止的` | | 预订的 | 预订的 | | +| `统计数据` | 非保留 | | | | +| `STDDEV_POP` | | 预订的 | 预订的 | | +| `STDDEV_SAMP` | | 预订的 | 预订的 | | +| `标准输入` | 非保留 | | | | +| `标准输出` 
| 非保留 | | | | +| `贮存` | 非保留 | | | | +| `已储存` | 非保留 | | | | +| `严格的` | 非保留 | | | | +| `细绳` | | 非保留 | | | +| `条` | 非保留 | 非保留 | 非保留 | | +| `结构体` | | 非保留 | 非保留 | | +| `风格` | | 非保留 | 非保留 | | +| `SUBCLASS_ORIGIN` | | 非保留 | 非保留 | 非保留 | +| `子集` | | 预订的 | 预订的 | | +| `订阅` | 非保留 | | | | +| `子集` | | 预订的 | | | +| `子串` | 非保留(不能是函数或类型) | 预订的 | 预订的 | 预订的 | +| `SUBSTRING_REGEX` | | 预订的 | 预订的 | | +| `成功` | | 预订的 | 预订的 | | +| `和` | | 预订的 | 预订的 | 预订的 | +| `支持` | 非保留 | | | | +| `对称的` | 预订的 | 预订的 | 预订的 | | +| `系统标识符` | 非保留 | | | | +| `系统` | 非保留 | 预订的 | 预订的 | | +| `系统时间` | | 预订的 | 预订的 | | +| `SYSTEM_USER` | | 预订的 | 预订的 | 预订的 | +| `吨` | | 非保留 | 非保留 | | +| `桌子` | 预订的 | 预订的 | 预订的 | 预订的 | +| `表格` | 非保留 | | | | +| `表样` | 保留(可以是函数或类型) | 预订的 | 预订的 | | +| `表空间` | 非保留 | | | | +| `TABLE_NAME` | | 非保留 | 非保留 | 非保留 | +| `谭` | | 预订的 | | | +| `谭` | | 预订的 | | | +| `温度` | 非保留 | | | | +| `模板` | 非保留 | | | | +| `暂时的` | 非保留 | 非保留 | 非保留 | 预订的 | +| `文本` | 非保留 | | | | +| `然后` | 预订的 | 预订的 | 预订的 | 预订的 | +| `通过` | | 非保留 | | | +| `领带` | 非保留 | 非保留 | 非保留 | | +| `时间` | 非保留(不能是函数或类型) | 预订的 | 预订的 | 预订的 | +| `时间戳` | 非保留(不能是函数或类型) | 预订的 | 预订的 | 预订的 | +| `TIMEZONE_HOUR` | | 预订的 | 预订的 | 预订的 | +| `TIMEZONE_MINUTE` | | 预订的 | 预订的 | 预订的 | +| `到` | 保留,需要`作为` | 预订的 | 预订的 | 预订的 | +| `代币` | | 非保留 | 非保留 | | +| `TOP_LEVEL_COUNT` | | 非保留 | 非保留 | | +| `尾随` | 预订的 | 预订的 | 预订的 | 预订的 | +| `交易` | 非保留 | 非保留 | 非保留 | 预订的 | +| `TRANSACTIONS_​已提交` | | 非保留 | 非保留 | | +| `TRANSACTIONS_​ROLLED_​BACK` | | 非保留 | 非保留 | | +| `TRANSACTION_ACTIVE` | | 非保留 | 非保留 | | +| `转换` | 非保留 | 非保留 | 非保留 | | +| `变换` | | 非保留 | 非保留 | | +| `翻译` | | 预订的 | 预订的 | 预订的 | +| `TRANSLATE_REGEX` | | 预订的 | 预订的 | | +| `翻译` | | 预订的 | 预订的 | 预订的 | +| `对待` | 非保留(不能是函数或类型) | 预订的 | 预订的 | | +| `扳机` | 非保留 | 预订的 | 预订的 | | +| `TRIGGER_CATALOG` | | 非保留 | 非保留 | | +| `TRIGGER_NAME` | | 非保留 | 非保留 | | +| `TRIGGER_SCHEMA` | | 非保留 | 非保留 | | +| `修剪` | 非保留(不能是函数或类型) | 预订的 | 预订的 | 预订的 | +| `TRIM_ARRAY` | | 预订的 | 预订的 | | +| `真的` | 预订的 | 预订的 | 预订的 | 预订的 | +| `截短` | 非保留 | 预订的 | 预订的 | | +| `值得信赖` | 非保留 | | | | +| `类型` | 非保留 | 非保留 | 非保留 | 非保留 | +| `类型` | 非保留 | | | | +| `环球影城` | 非保留 | 预订的 | 预订的 | | +| `无界` | 非保留 | 非保留 | 非保留 | | +| `未提交` | 非保留 | 非保留 | 非保留 | 非保留 | +| `无条件的` | | 非保留 | | | +| `在下面` | | 非保留 | 非保留 | | +| `未加密` | 非保留 | | | | +| `联盟` | 保留,需要`作为` | 预订的 | 预订的 | 预订的 | +| `独特` | 预订的 | 预订的 | 预订的 | 预订的 | +| `未知` | 非保留 | 预订的 | 预订的 | 预订的 | +| `取消链接` | | 非保留 | 非保留 | | +| `不听` | 非保留 | | | | +| `未记录` | 非保留 | | | | +| `无与伦比` | | 预订的 | | | +| `未命名` | | 非保留 | 非保留 | 非保留 | +| `无巢` | | 预订的 | 预订的 | | +| `直到` | 非保留 | | | | +| `未分类` | | 非保留 | 非保留 | | +| `更新` | 非保留 | 预订的 | 预订的 | 预订的 | +| `上` | | 预订的 | 预订的 | 预订的 | +| `URI` | | 非保留 | 非保留 | | +| `用法` | | 非保留 | 非保留 | 预订的 | +| `用户` | 预订的 | 预订的 | 预订的 | 预订的 | +| `USER_​DEFINED_​TYPE_​CATALOG` | | 非保留 | 非保留 | | +| `USER_​DEFINED_​TYPE_​CODE` | | 非保留 | 非保留 | | +| `USER_​DEFINED_​TYPE_​NAME` | | 非保留 | 非保留 | | +| `USER_​DEFINED_​TYPE_​SCHEMA` | | 非保留 | 非保留 | | +| `使用` | 预订的 | 预订的 | 预订的 | 预订的 | +| `UTF16` | | 非保留 | | | +| `UTF32` | | 非保留 | | | +| `UTF8` | | 非保留 | | | +| `真空` | 非保留 | | | | +| `有效的` | 非保留 | 非保留 | 非保留 | | +| `证实` | 非保留 | | | | +| `验证器` | 非保留 | | | | +| `价值` | 非保留 | 预订的 | 预订的 | 预订的 | +| `价值观` | 非保留(不能是函数或类型) | 预订的 | 预订的 | 预订的 | +| `的价值` | | 预订的 | 预订的 | | +| `变量` | | 预订的 | 预订的 | | +| `VARCHAR` | 非保留(不能是函数或类型) | 预订的 | 预订的 | 预订的 | +| `杂音` | 预订的 | | | | +| `变化` | 非保留,需要`作为` | 预订的 | 预订的 | 预订的 | +| `VAR_POP` | | 预订的 | 预订的 | | +| `VAR_SAMP` | | 预订的 | 预订的 | | +| `详细` | 保留(可以是函数或类型) | | | | +| `版本` | 非保留 | 非保留 | 非保留 | | +| `版本控制` | | 预订的 | 预订的 | | +| `看法` | 非保留 | 非保留 | 非保留 | 预订的 | +| `意见` | 非保留 | | | | 
| `VOLATILE` | non-reserved | | | |
| `WHEN` | reserved | reserved | reserved | reserved |
| `WHENEVER` | | reserved | reserved | reserved |
| `WHERE` | reserved, requires `AS` | reserved | reserved | reserved |
| `WHITESPACE` | non-reserved | non-reserved | non-reserved | |
| `WIDTH_BUCKET` | | reserved | reserved | |
| `WINDOW` | reserved, requires `AS` | reserved | reserved | |
| `WITH` | reserved, requires `AS` | reserved | reserved | reserved |
| `WITHIN` | non-reserved, requires `AS` | reserved | reserved | |
| `WITHOUT` | non-reserved, requires `AS` | reserved | reserved | |
| `WORK` | non-reserved | non-reserved | non-reserved | reserved |
| `WRAPPER` | non-reserved | non-reserved | non-reserved | |
| `WRITE` | non-reserved | non-reserved | non-reserved | reserved |
| `XML` | non-reserved | reserved | reserved | |
| `XMLAGG` | | reserved | reserved | |
| `XMLATTRIBUTES` | non-reserved (cannot be function or type) | reserved | reserved | |
| `XMLBINARY` | | reserved | reserved | |
| `XMLCAST` | | reserved | reserved | |
| `XMLCOMMENT` | | reserved | reserved | |
| `XMLCONCAT` | non-reserved (cannot be function or type) | reserved | reserved | |
| `XMLDECLARATION` | | non-reserved | non-reserved | |
| `XMLDOCUMENT` | | reserved | reserved | |
| `XMLELEMENT` | non-reserved (cannot be function or type) | reserved | reserved | |
| `XMLEXISTS` | non-reserved (cannot be function or type) | reserved | reserved | |
| `XMLFOREST` | non-reserved (cannot be function or type) | reserved | reserved | |
| `XMLITERATE` | | reserved | reserved | |
| `XMLNAMESPACES` | non-reserved (cannot be function or type) | reserved | reserved | |
| `XMLPARSE` | non-reserved (cannot be function or type) | reserved | reserved | |
| `XMLPI` | non-reserved (cannot be function or type) | reserved | reserved | |
| `XMLQUERY` | | reserved | reserved | |
| `XMLROOT` | non-reserved (cannot be function or type) | | | |
| `XMLSCHEMA` | | non-reserved | non-reserved | |
| `XMLSERIALIZE` | non-reserved (cannot be function or type) | reserved | reserved | |
| `XMLTABLE` | non-reserved (cannot be function or type) | reserved | reserved | |
| `XMLTEXT` | | reserved | reserved | |
| `XMLVALIDATE` | | reserved | reserved | |
| `YEAR` | non-reserved, requires `AS` | reserved | reserved | reserved |
| `YES` | non-reserved | non-reserved | non-reserved | |
| `ZONE` | non-reserved | non-reserved | non-reserved | reserved | diff --git a/docs/X/sql-listen.md b/docs/en/sql-listen.md similarity index 100% rename from docs/X/sql-listen.md rename to docs/en/sql-listen.md diff --git a/docs/en/sql-listen.zh.md b/docs/en/sql-listen.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..6185a1da40eed51c15c694824a971488104cf1b9 --- /dev/null +++ b/docs/en/sql-listen.zh.md @@ -0,0 +1,53 @@

## LISTEN

LISTEN — listen for a notification

## Synopsis

```
LISTEN channel
```

## Description

`LISTEN` registers the current session as a listener on the notification channel named *`channel`*. If the current session is already registered as a listener for this notification channel, nothing is done.

Whenever the command `NOTIFY *channel*` is invoked, either by this session or another one connected to the same database, all the sessions currently listening on that notification channel are notified, and each will in turn notify its connected client application.

A session can be unregistered for a given notification channel with the `UNLISTEN` command. A session's listen registrations are automatically cleared when the session ends.

The method a client application must use to detect notification events depends on which PostgreSQL application programming interface it uses. With the libpq library, the application issues `LISTEN` as an ordinary SQL command, and then must periodically call the function `PQnotifies` to find out whether any notification events have been received. Other interfaces such as libpgtcl provide higher-level methods for handling notify events; indeed, with libpgtcl the application programmer should not even issue `LISTEN` or `UNLISTEN` directly. See the documentation for the interface you are using for more details.

## Parameters

*`channel`*

Name of a notification channel (any identifier).

## Notes

`LISTEN` takes effect at transaction commit. If `LISTEN` or `UNLISTEN` is executed within a transaction that later rolls back, the set of notification channels being listened to is unchanged.

A transaction that has executed `LISTEN` cannot be prepared for two-phase commit.

There is a race condition when first setting up a listening session: if concurrently-committing transactions are sending notify events, exactly which of those will the newly listening session receive? The answer is that the session will receive all events committed after an instant during the transaction's commit step. But that is slightly later than any database state that the transaction could have observed in queries. This leads to the following rule for using `LISTEN`: first execute (and commit!) that command, then in a new transaction inspect the database state as needed by the application logic, then rely on notifications to find out about subsequent changes to the database state. The first few received notifications might refer to updates already observed in the initial database inspection, but this is usually harmless.

[NOTIFY](sql-notify.html) contains a more extensive discussion of the use of `LISTEN` and `NOTIFY`.

## Examples

Configure and execute a listen/notify sequence from psql:

```
LISTEN virtual;
NOTIFY virtual;
Asynchronous notification "virtual" received from server process with PID 8448.
```
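A short sketch of the usage rule from the Notes above, with a hypothetical channel and table name: commit the `LISTEN` first, take a baseline snapshot in a new transaction, then rely on notifications for later changes.

```
LISTEN orders_changed;          -- runs in its own (implicit) transaction; effective at commit
BEGIN;
SELECT count(*) FROM orders;    -- hypothetical table: establish a baseline snapshot
COMMIT;
-- From here on, changes are learned through notifications, e.g.:
-- Asynchronous notification "orders_changed" received from server process with PID ...
```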
## Compatibility

There is no `LISTEN` statement in the SQL standard.

## See Also

[NOTIFY](sql-notify.html), [UNLISTEN](sql-unlisten.html) diff --git a/docs/X/sql-load.md b/docs/en/sql-load.md similarity index 100% rename from docs/X/sql-load.md rename to docs/en/sql-load.md diff --git a/docs/en/sql-load.zh.md b/docs/en/sql-load.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..e86e52062cc9cabba993361140a1849fa22d27e8 --- /dev/null +++ b/docs/en/sql-load.zh.md @@ -0,0 +1,27 @@

## LOAD

LOAD — load a shared library file

## Synopsis

```
LOAD 'filename'
```

## Description

This command loads a shared library file into the PostgreSQL server's address space. If the file has been loaded already, the command does nothing. Shared library files that contain C functions are automatically loaded whenever one of their functions is called. Therefore, an explicit `LOAD` is usually only needed to load a library that modifies the server's behavior through "hooks" rather than providing a set of functions.

The library file name is typically given as just a bare file name, which is sought in the server's library search path (set by [dynamic_library_path](runtime-config-client.html#GUC-DYNAMIC-LIBRARY-PATH)). Alternatively it can be given as a full path name. In either case the platform's standard shared library file name extension may be omitted. See [Section 38.10.1](xfunc-c.html#XFUNC-C-DYNLOAD) for more information on this topic.

Non-superusers can only apply `LOAD` to library files located in `$libdir/plugins/` — the specified *`filename`* must begin with exactly that string. (It is the database administrator's responsibility to ensure that only "safe" libraries are installed there.)

## Compatibility

`LOAD` is a PostgreSQL extension.

## See Also

[CREATE FUNCTION](sql-createfunction.html) diff --git a/docs/X/sql-lock.md b/docs/en/sql-lock.md similarity index 100% rename from docs/X/sql-lock.md rename to docs/en/sql-lock.md diff --git a/docs/en/sql-lock.zh.md b/docs/en/sql-lock.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..de6ff3ae7389fe63083c31d293d24eb5ca1db2de --- /dev/null +++ b/docs/en/sql-lock.zh.md @@ -0,0 +1,88 @@

## LOCK

LOCK — lock a table

## Synopsis

```
LOCK [ TABLE ] [ ONLY ] name [ * ] [, ...] [ IN lockmode MODE ] [ NOWAIT ]

where lockmode is one of:

    ACCESS SHARE | ROW SHARE | ROW EXCLUSIVE | SHARE UPDATE EXCLUSIVE
    | SHARE | SHARE ROW EXCLUSIVE | EXCLUSIVE | ACCESS EXCLUSIVE
```

## Description

`LOCK TABLE` obtains a table-level lock, waiting if necessary for any conflicting locks to be released. If `NOWAIT` is specified, `LOCK TABLE` does not wait to acquire the desired lock: if it cannot be acquired immediately, the command is aborted and an error is emitted. Once obtained, the lock is held for the remainder of the current transaction. (There is no `UNLOCK TABLE` command; locks are always released at transaction end.)

When a view is locked, all relations appearing in the view definition query are also locked recursively with the same lock mode.

When acquiring locks automatically for commands that reference tables, PostgreSQL always uses the least restrictive lock mode possible. `LOCK TABLE` provides for cases when you might need more restrictive locking. For example, suppose an application runs a transaction at the `READ COMMITTED` isolation level and needs to ensure that data in a table remains stable for the duration of the transaction. To achieve this you could obtain `SHARE` lock mode over the table before querying. This will prevent concurrent data changes and ensure subsequent reads of the table see a stable view of committed data, because `SHARE` lock mode conflicts with the `ROW EXCLUSIVE` lock acquired by writers, and your `LOCK TABLE name IN SHARE MODE` statement will wait until any concurrent holders of `ROW EXCLUSIVE` mode locks commit or roll back. Thus, once you obtain the lock, there are no uncommitted writes outstanding; furthermore none can begin until you release the lock.

To achieve a similar effect when running a transaction at the `REPEATABLE READ` or `SERIALIZABLE` isolation level, you have to execute the `LOCK TABLE` statement before executing any `SELECT` or data modification statement. A `REPEATABLE READ` or `SERIALIZABLE` transaction's view of data will be frozen when its first `SELECT` or data modification statement begins. A `LOCK TABLE` later in the transaction will still prevent concurrent writes — but it won't ensure that what the transaction reads corresponds to the latest committed values.

If a transaction of this sort is going to change the data in the table, then it should use `SHARE ROW EXCLUSIVE` lock mode instead of `SHARE` mode. This ensures that only one transaction of this type runs at a time. Without this, a deadlock is possible: two transactions might both acquire `SHARE` mode, and then be unable to also acquire `ROW EXCLUSIVE` mode to actually perform their updates. (Note that a transaction's own locks never conflict, so a transaction can acquire `ROW EXCLUSIVE` mode when it holds `SHARE` mode — but not if anyone else holds `SHARE` mode.) To avoid deadlocks, make sure all transactions acquire locks on the same objects in the same order, and if multiple lock modes are involved for a single object, then transactions should always acquire the most restrictive mode first.

More information about the lock modes and locking strategies can be found in [Section 13.3](explicit-locking.html).

## Parameters

*`name`*

The name (optionally schema-qualified) of an existing table to lock. If `ONLY` is specified before the table name, only that table is locked. If `ONLY` is not specified, the table and all its descendant tables (if any) are locked. Optionally, `*` can be specified after the table name to explicitly indicate that descendant tables are included.

The command `LOCK TABLE a, b;` is equivalent to `LOCK TABLE a; LOCK TABLE b;`. The tables are locked one-by-one in the order specified in the `LOCK TABLE` command.

*`lockmode`*

The lock mode specifies which locks this lock conflicts with. Lock modes are described in [Section 13.3](explicit-locking.html).
If no lock mode is specified, then `ACCESS EXCLUSIVE`, the most restrictive mode, is used.

`NOWAIT`

Specifies that `LOCK TABLE` should not wait for any conflicting locks to be released: if the specified lock(s) cannot be acquired immediately without waiting, the transaction is aborted.

## Notes

`LOCK TABLE ... IN ACCESS SHARE MODE` requires `SELECT` privileges on the target table. `LOCK TABLE ... IN ROW EXCLUSIVE MODE` requires `INSERT`, `UPDATE`, `DELETE`, or `TRUNCATE` privileges on the target table. All other forms of `LOCK` require table-level `UPDATE`, `DELETE`, or `TRUNCATE` privileges.

The user performing the lock on a view must have the corresponding privilege on the view. In addition the view's owner must have the relevant privileges on the underlying base relations, but the user performing the lock does not need any permissions on the underlying base relations.

`LOCK TABLE` is useless outside a transaction block: the lock would remain held only to the completion of the statement. Therefore PostgreSQL reports an error if `LOCK` is used outside a transaction block. Use [`BEGIN`](sql-begin.html) and [`COMMIT`](sql-commit.html) (or [`ROLLBACK`](sql-rollback.html)) to define a transaction block.

`LOCK TABLE` only deals with table-level locks, and so the mode names involving `ROW` are all misnomers. These mode names should generally be read as indicating the intention of the user to acquire row-level locks within the locked table. Also, `ROW EXCLUSIVE` mode is a shareable table lock. Keep in mind that all the lock modes have identical semantics so far as `LOCK TABLE` is concerned, differing only in the rules about which modes conflict with which. For information on how to acquire an actual row-level lock, see [Section 13.3.2](explicit-locking.html#LOCKING-ROWS) and the [locking clause](sql-select.html#SQL-FOR-UPDATE-SHARE) in the [SELECT](sql-select.html) documentation.

## Examples

Obtain a `SHARE` lock on a primary key table when going to perform inserts into a foreign key table:

```
BEGIN WORK;
LOCK TABLE films IN SHARE MODE;
SELECT id FROM films
    WHERE name = 'Star Wars: Episode I - The Phantom Menace';
-- Do ROLLBACK if record was not returned
INSERT INTO films_user_comments VALUES
    (_id_, 'GREAT! I was waiting for it for so long!');
COMMIT WORK;
```

Take a `SHARE ROW EXCLUSIVE` lock on a primary key table when going to perform a delete operation:

```
BEGIN WORK;
LOCK TABLE films IN SHARE ROW EXCLUSIVE MODE;
DELETE FROM films_user_comments WHERE id IN
    (SELECT id FROM films WHERE rating < 5);
DELETE FROM films WHERE rating < 5;
COMMIT WORK;
```
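The `NOWAIT` option described under Parameters can be combined with any lock mode; a minimal sketch, reusing the `films` table from the examples above (the exact error text may vary by version):

```
BEGIN;
LOCK TABLE films IN ACCESS EXCLUSIVE MODE NOWAIT;
-- If another session holds any conflicting lock, this fails immediately with
-- ERROR:  could not obtain lock on relation "films"
-- instead of blocking; the transaction is then aborted.
ROLLBACK;
```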
## Compatibility

There is no `LOCK TABLE` in the SQL standard, which instead uses `SET TRANSACTION` to specify concurrency levels on transactions. PostgreSQL supports that too; see [SET TRANSACTION](sql-set-transaction.html) for details.

Except for `ACCESS SHARE`, `ACCESS EXCLUSIVE`, and `SHARE UPDATE EXCLUSIVE` lock modes, the PostgreSQL lock modes and the `LOCK TABLE` syntax are compatible with those present in Oracle. diff --git a/docs/X/sql-move.md b/docs/en/sql-move.md similarity index 100% rename from docs/X/sql-move.md rename to docs/en/sql-move.md diff --git a/docs/en/sql-move.zh.md b/docs/en/sql-move.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..ef1c172952ffc147d5e8a5beaa781793e63225f7 --- /dev/null +++ b/docs/en/sql-move.zh.md @@ -0,0 +1,64 @@

## MOVE

MOVE — position a cursor

## Synopsis

```
MOVE [ direction [ FROM | IN ] ] cursor_name

where direction can be empty or one of:

    NEXT
    PRIOR
    FIRST
    LAST
    ABSOLUTE count
    RELATIVE count
    count
    ALL
    FORWARD
    FORWARD count
    FORWARD ALL
    BACKWARD
    BACKWARD count
    BACKWARD ALL
```

## Description

`MOVE` repositions a cursor without retrieving any data. `MOVE` works exactly like the `FETCH` command, except it only positions the cursor and does not return rows.

The parameters for the `MOVE` command are identical to those of the `FETCH` command; refer to [FETCH](sql-fetch.html) for details on syntax and usage.

## Output

On successful completion, a `MOVE` command returns a command tag of the form

```
MOVE count
```

The *`count`* is the number of rows that a `FETCH` command with the same parameters would have returned (possibly zero).

## Examples

```
BEGIN WORK;
DECLARE liahona CURSOR FOR SELECT * FROM films;

-- Skip the first 5 rows:
MOVE FORWARD 5 IN liahona;
MOVE 5

-- Fetch the 6th row from the cursor liahona:
FETCH 1 FROM liahona;
 code | title | did | date_prod | kind | len
```

## Compatibility

There is no `MOVE` statement in the SQL standard.

## See Also

[CLOSE](sql-close.html), [DECLARE](sql-declare.html), [FETCH](sql-fetch.html) diff --git a/docs/X/sql-notify.md b/docs/en/sql-notify.md similarity index 100% rename from docs/X/sql-notify.md rename to docs/en/sql-notify.md diff --git a/docs/en/sql-notify.zh.md b/docs/en/sql-notify.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..6f690c43cbb8550841a61c82c7d1757bdf2c4c6c --- /dev/null +++ b/docs/en/sql-notify.zh.md @@ -0,0 +1,75 @@

## NOTIFY

NOTIFY — generate a notification

## Synopsis

```
NOTIFY channel [ , payload ]
```

## Description

The `NOTIFY` command sends a notification event together with an optional "payload" string to each client application that has previously executed `LISTEN *channel*` for the specified channel name in the current database. Notifications are visible to all users.

`NOTIFY` provides a simple interprocess communication mechanism for a collection of processes accessing the same PostgreSQL database. A payload string can be sent along with the notification, and higher-level mechanisms for passing structured data can be built by using tables in the database to pass additional data from notifier to listener(s).

The information passed to the client for a notification event includes the notification channel name, the notifying session's server process PID, and the payload string, which is an empty string if it has not been specified.

It is up to the database designer to define the channel names that will be used in a given database and what each one means. Commonly, the channel name is the same as the name of some table in the database, and the notify event essentially means, "I changed this table, take a look at it to see what's new". But no such association is enforced by the `NOTIFY` and `LISTEN` commands. For example, a database designer could use several different channel names to signal different sorts of changes to a single table. Alternatively, the payload string could be used to differentiate various cases.

When `NOTIFY` is used to signal the occurrence of changes to a particular table, a useful programming technique is to put the `NOTIFY` in a statement trigger that is triggered by table updates. In this way, notification happens automatically when the table is changed, and the application programmer cannot accidentally forget to do it. (A sketch of this technique appears after the Examples below.)

`NOTIFY` interacts with SQL transactions in some important ways. Firstly, if a `NOTIFY` is executed inside a transaction, the notify events are not delivered until and unless the transaction is committed. This is appropriate, since if the transaction is aborted, all the commands within it have had no effect, including `NOTIFY`. But it can be disconcerting if one is expecting the notification events to be delivered immediately. Secondly, if a listening session receives a notification signal while it is within a transaction, the notification event will not be delivered to its connected client until just after the transaction is completed (either committed or aborted). Again, the reasoning is that if a notification were delivered within a transaction that was later aborted, one would want the notification to be undone somehow — but the server cannot "take back" a notification once it has sent it to the client. So notification events are only delivered between transactions. The upshot of this is that applications using `NOTIFY` for real-time signaling should try to keep their transactions short.

If the same channel name is signaled multiple times with identical payload strings within the same transaction, only one instance of the notification event is delivered to listeners. On the other hand, notifications with distinct payload strings will always be delivered as distinct notifications. Similarly, notifications from different transactions will never get folded into one notification. Except for dropping later instances of duplicate notifications, `NOTIFY` guarantees that notifications from the same transaction get delivered in the order they were sent. It is also guaranteed that messages from different transactions are delivered in the order in which the transactions committed.

It is common for a client that executes `NOTIFY` to be listening on the same notification channel itself. In that case it will get back a notification event, just like all the other listening sessions. Depending on the application logic, this could result in useless work, for example, reading a database table to find the same updates that that session just wrote out. It is possible to avoid such extra work by noticing whether the notifying session's server process PID (supplied in the notification event message) is the same as one's own session's PID (available from libpq). When they are the same, the notification event is one's own work bouncing back, and can be ignored.

## Parameters

*`channel`*

Name of the notification channel to be signaled (any identifier).

*`payload`*

The "payload" string to be communicated along with the notification. This must be specified as a simple string literal. In the default configuration it must be shorter than 8000 bytes. (If binary data or large amounts of information need to be communicated, it's best to put it in a database table and send the key of the record.)

## Notes

There is a queue that holds notifications that have been sent but not yet processed by all listening sessions. If this queue becomes full, transactions calling `NOTIFY` will fail at commit. The queue is quite large (8GB in a standard installation) and should be sufficiently sized for almost every use case. However, no cleanup can take place if a session executes `LISTEN` and then enters a transaction for a very long time. Once the queue is half full you will see warnings in the log file pointing you to the session that is preventing cleanup. In this case you should make sure that this session ends its current transaction so that cleanup can proceed.

The function `pg_notification_queue_usage` returns the fraction of the queue that is currently occupied by pending notifications. See [Section 9.26](functions-info.html) for more information.

A transaction that has executed `NOTIFY` cannot be prepared for two-phase commit.

### pg_notify

To send a notification you can also use the function `pg_notify(text, text)`. The function takes the channel name as the first argument and the payload as the second. The function is much easier to use than the `NOTIFY` command if you need to work with non-constant channel names and payloads.

## Examples

Configure and execute a listen/notify sequence from psql:

```
LISTEN virtual;
NOTIFY virtual;
Asynchronous notification "virtual" received from server process with PID 8448.
NOTIFY virtual, 'This is the payload';
Asynchronous notification "virtual" with payload "This is the payload" received from server process with PID 8448.

LISTEN foo;
SELECT pg_notify('fo' || 'o', 'pay' || 'load');
Asynchronous notification "foo" with payload "payload" received from server process with PID 14728.
```
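A minimal sketch of the trigger technique recommended in the Description, with hypothetical table, channel, and function names:

```
CREATE OR REPLACE FUNCTION orders_notify() RETURNS trigger AS $$
BEGIN
    -- Signal listeners that the orders table changed; the payload carries the operation.
    PERFORM pg_notify('orders_changed', TG_OP);
    RETURN NULL;  -- the return value is ignored for AFTER ... FOR EACH STATEMENT triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER orders_notify_trig
    AFTER INSERT OR UPDATE OR DELETE ON orders
    FOR EACH STATEMENT EXECUTE FUNCTION orders_notify();
```

Using a statement-level trigger means one notification per modifying statement rather than per row, which keeps the notification queue small.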
## Compatibility

There is no `NOTIFY` statement in the SQL standard.

## See Also

[LISTEN](sql-listen.html), [UNLISTEN](sql-unlisten.html) diff --git a/docs/X/sql-prepare-transaction.md b/docs/en/sql-prepare-transaction.md similarity index 100% rename from docs/X/sql-prepare-transaction.md rename to docs/en/sql-prepare-transaction.md diff --git a/docs/en/sql-prepare-transaction.zh.md b/docs/en/sql-prepare-transaction.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..7c32c6f35bba9f8f714f725b76772d3fb0775c22 --- /dev/null +++ b/docs/en/sql-prepare-transaction.zh.md @@ -0,0 +1,59 @@

## PREPARE TRANSACTION

PREPARE TRANSACTION — prepare the current transaction for two-phase commit

## Synopsis

```
PREPARE TRANSACTION transaction_id
```

## Description

`PREPARE TRANSACTION` prepares the current transaction for two-phase commit. After this command, the transaction is no longer associated with the current session; instead, its state is fully stored on disk, and there is a very high probability that it can be committed successfully, even if a database crash occurs before the commit is requested.

Once prepared, a transaction can later be committed or rolled back with [`COMMIT PREPARED`](sql-commit-prepared.html) or [`ROLLBACK PREPARED`](sql-rollback-prepared.html), respectively. Those commands can be issued from any session, not only the one that executed the original transaction.

From the point of view of the issuing session, `PREPARE TRANSACTION` is not unlike a `ROLLBACK` command: after executing it, there is no active current transaction, and the effects of the prepared transaction are no longer visible. (The effects will become visible again if the transaction is committed.)

If the `PREPARE TRANSACTION` command fails for any reason, it becomes a `ROLLBACK`: the current transaction is canceled.

## Parameters

*`transaction_id`*

An arbitrary identifier that later identifies this transaction for `COMMIT PREPARED` or `ROLLBACK PREPARED`. The identifier must be written as a string literal, and must be less than 200 bytes long. It must not be the same as the identifier used for any currently prepared transaction.

## Notes

`PREPARE TRANSACTION` is not intended for use in applications or interactive sessions. Its purpose is to allow an external transaction manager to perform atomic global transactions across multiple databases or other transactional resources. Unless you're writing a transaction manager, you probably shouldn't be using `PREPARE TRANSACTION`.

This command must be used inside a transaction block. Use [`BEGIN`](sql-begin.html) to start one.

It is currently not allowed to `PREPARE` a transaction that has executed any operations involving temporary tables or the session's temporary namespace, created any cursors `WITH HOLD`, or executed `LISTEN`, `UNLISTEN`, or `NOTIFY`. Those features are too tightly tied to the current session to be useful in a transaction to be prepared.

If the transaction modified any run-time parameters with `SET` (without the `LOCAL` option), those effects persist after `PREPARE TRANSACTION`, and will not be affected by any later `COMMIT PREPARED` or `ROLLBACK PREPARED`. Thus, in this one respect `PREPARE TRANSACTION` acts more like `COMMIT` than `ROLLBACK`.

All currently available prepared transactions are listed in the [`pg_prepared_xacts`](view-pg-prepared-xacts.html) system view.

### Caution

It is unwise to leave transactions in the prepared state for a long time. This will interfere with the ability of `VACUUM` to reclaim storage, and in extreme cases could cause the database to shut down to prevent transaction ID wraparound (see [Section 25.1.5](routine-vacuuming.html#VACUUM-FOR-WRAPAROUND)). Keep in mind also that the transaction continues to hold whatever locks it held. The intended usage of the feature is that a prepared transaction will normally be committed or rolled back as soon as an external transaction manager has verified that other databases are also prepared to commit.

If you have not set up an external transaction manager to track prepared transactions and ensure they get closed out promptly, it is best to keep the prepared-transaction feature disabled by setting [max_prepared_transactions](runtime-config-resource.html#GUC-MAX-PREPARED-TRANSACTIONS) to zero. This will prevent accidental creation of prepared transactions that might then be forgotten and eventually cause problems.

## Examples

Prepare the current transaction for two-phase commit, using `foobar` as the transaction identifier:

```
PREPARE TRANSACTION 'foobar';
```
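A minimal end-to-end sketch of the two-phase flow (transaction identifier, table, and values hypothetical); the second phase can run from any session:

```
BEGIN;
INSERT INTO accounts_audit VALUES (42, now());  -- hypothetical work to be made durable
PREPARE TRANSACTION 'xfer-42';
-- This session now has no active transaction. Later, possibly elsewhere:
SELECT gid FROM pg_prepared_xacts;              -- lists 'xfer-42'
COMMIT PREPARED 'xfer-42';                      -- or ROLLBACK PREPARED 'xfer-42'
```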
## Compatibility

`PREPARE TRANSACTION` is a PostgreSQL extension. It is intended for use by external transaction management systems, some of which are covered by standards (such as X/Open XA), but the SQL side of those systems is not standardized.

## See Also

[COMMIT PREPARED](sql-commit-prepared.html), [ROLLBACK PREPARED](sql-rollback-prepared.html) diff --git a/docs/X/sql-prepare.md b/docs/en/sql-prepare.md similarity index 100% rename from docs/X/sql-prepare.md rename to docs/en/sql-prepare.md diff --git a/docs/en/sql-prepare.zh.md b/docs/en/sql-prepare.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..e651d61cb9543f31eb01c55a0e731492f854f53a --- /dev/null +++ b/docs/en/sql-prepare.zh.md @@ -0,0 +1,84 @@

## PREPARE

PREPARE — prepare a statement for execution

## Synopsis

```
PREPARE name [ ( data_type [, ...] ) ] AS statement
```

## Description

`PREPARE` creates a prepared statement. A prepared statement is a server-side object that can be used to optimize performance. When the `PREPARE` statement is executed, the specified statement is parsed, analyzed, and rewritten. When an `EXECUTE` command is subsequently issued, the prepared statement is planned and executed. This division of labor avoids repetitive parse analysis work, while allowing the execution plan to depend on the specific parameter values supplied.

Prepared statements can take parameters: values that are substituted into the statement when it is executed. When creating the prepared statement, refer to parameters by position, using `$1`, `$2`, etc. A corresponding list of parameter data types can optionally be specified. When a parameter's data type is not specified or is declared as `unknown`, the type is inferred from the context in which the parameter is first referenced (if possible). When executing the statement, specify the actual values for these parameters in the `EXECUTE` statement. Refer to [EXECUTE](sql-execute.html) for more information.

Prepared statements only last for the duration of the current database session. When the session ends, the prepared statement is forgotten, so it must be recreated before being used again. This also means that a single prepared statement cannot be used by multiple simultaneous database clients; however, each client can create its own prepared statement to use. Prepared statements can be manually cleaned up using the [`DEALLOCATE`](sql-deallocate.html) command.

Prepared statements potentially have the largest performance advantage when a single session is being used to execute a large number of similar statements. The performance difference will be particularly significant if the statements are complex to plan or rewrite, e.g., if the query involves a join of many tables or requires the application of several rules. If the statement is relatively simple to plan and rewrite but relatively expensive to execute, the performance advantage of prepared statements will be less noticeable.

## Parameters

*`name`*

An arbitrary name given to this particular prepared statement. It must be unique within a single session and is subsequently used to execute or deallocate a previously prepared statement.

*`data_type`*

The data type of a parameter to the prepared statement. If the data type of a particular parameter is unspecified or is specified as `unknown`, it will be inferred from the context in which the parameter is first referenced. To refer to the parameters in the prepared statement itself, use `$1`, `$2`, etc.

*`statement`*

Any `SELECT`, `INSERT`, `UPDATE`, `DELETE`, or `VALUES` statement.

## Notes

A prepared statement can be executed with either a *generic plan* or a *custom plan*. A generic plan is the same across all executions, while a custom plan is generated for a specific execution using the parameter values given in that call. Use of a generic plan avoids planning overhead, but in some situations a custom plan will be much more efficient to execute because the planner can make use of knowledge of the parameter values. (Of course, if the prepared statement has no parameters, then this is moot and a generic plan is always used.)

By default (that is, when [plan_cache_mode](runtime-config-query.html#GUC-PLAN-CACHE_MODE) is set to `auto`), the server will automatically choose whether to use a generic or custom plan for a prepared statement that has parameters. The current rule for this is that the first five executions are done with custom plans and the average estimated cost of those plans is calculated. Then a generic plan is created and its estimated cost is compared to the average custom-plan cost. Subsequent executions use the generic plan if its cost is not so much higher than the average custom-plan cost as to make repeated replanning seem preferable.

This heuristic can be overridden, forcing the server to use either generic or custom plans, by setting `plan_cache_mode` to `force_generic_plan` or `force_custom_plan` respectively. This setting is primarily useful if the generic plan's cost estimate is badly off for some reason, allowing it to be chosen even though its actual cost is much more than that of a custom plan.

To examine the query plan PostgreSQL is using for a prepared statement, use [`EXPLAIN`](sql-explain.html), for example

```
EXPLAIN EXECUTE name(parameter_values);
```

If a generic plan is in use, it will contain parameter symbols `$n`, while a custom plan will have the supplied parameter values substituted into it.

For more information on query planning and the statistics collected by PostgreSQL for that purpose, see the [ANALYZE](sql-analyze.html) documentation.
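A small psql sketch of the inspection technique just described (statement, table, and column names hypothetical); forcing a generic plan makes the `$1` parameter symbol visible in the plan output:

```
PREPARE by_id (int) AS SELECT * FROM users WHERE id = $1;
SET plan_cache_mode = force_generic_plan;
EXPLAIN EXECUTE by_id(1);
--  Index Scan using users_pkey on users  (cost=... rows=... width=...)
--    Index Cond: (id = $1)
RESET plan_cache_mode;
```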
Although the main point of a prepared statement is to avoid repeated parse analysis and planning of the statement, PostgreSQL will force re-analysis and re-planning of the statement before using it whenever database objects used in the statement have undergone definitional (DDL) changes or their planner statistics have been updated since the previous use of the prepared statement. Also, if the value of [search_path](runtime-config-client.html#GUC-SEARCH-PATH) changes from one use to the next, the statement will be re-parsed using the new `search_path`. (This latter behavior is new as of PostgreSQL 9.3.) These rules make use of a prepared statement semantically almost equivalent to re-submitting the same query text over and over, but with a performance benefit if no object definitions are changed, especially if the best plan remains the same across uses. An example of a case where the semantic equivalence is not perfect is that if the statement refers to a table by an unqualified name, and then a new table of the same name is created in a schema appearing earlier in the `search_path`, no automatic re-parse will occur since no object used in the statement changed. However, if some other change forces a re-parse, the new table will be referenced in subsequent uses.

You can see all prepared statements available in the session by querying the [`pg_prepared_statements`](view-pg-prepared-statements.html) system view.

## Examples

Create a prepared statement for an `INSERT` statement, and then execute it:

```
PREPARE fooplan (int, text, bool, numeric) AS
    INSERT INTO foo VALUES($1, $2, $3, $4);
EXECUTE fooplan(1, 'Hunter Valley', 't', 200.00);
```

Create a prepared statement for a `SELECT` statement, and then execute it:

```
PREPARE usrrptplan (int) AS
    SELECT * FROM users u, logs l WHERE u.usrid=$1 AND u.usrid=l.usrid
    AND l.date = $2;
EXECUTE usrrptplan(1, current_date);
```

In this example, the data type of the second parameter is not specified, so it is inferred from the context in which `$2` is used.

## Compatibility

The SQL standard includes a `PREPARE` statement, but it is only for use in embedded SQL. This version of the `PREPARE` statement also uses a somewhat different syntax.

## See Also

[DEALLOCATE](sql-deallocate.html), [EXECUTE](sql-execute.html) diff --git a/docs/X/sql-reassign-owned.md b/docs/en/sql-reassign-owned.md similarity index 100% rename from docs/X/sql-reassign-owned.md rename to docs/en/sql-reassign-owned.md diff --git a/docs/en/sql-reassign-owned.zh.md b/docs/en/sql-reassign-owned.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..17fb13078a518968de2b8edf4cdc45b75b2d01f6 --- /dev/null +++ b/docs/en/sql-reassign-owned.zh.md @@ -0,0 +1,44 @@

## REASSIGN OWNED

REASSIGN OWNED — change the ownership of database objects owned by a database role

## Synopsis

```
REASSIGN OWNED BY { old_role | CURRENT_ROLE | CURRENT_USER | SESSION_USER } [, ...]
               TO { new_role | CURRENT_ROLE | CURRENT_USER | SESSION_USER }
```

## Description

`REASSIGN OWNED` instructs the system to change the ownership of database objects owned by any of the *`old_roles`* to *`new_role`*.

## Parameters

*`old_role`*

The name of a role. The ownership of all the objects within the current database, and of all shared objects (databases, tablespaces), owned by this role will be reassigned to *`new_role`*.

*`new_role`*

The name of the role that will be made the new owner of the affected objects.

## Notes

`REASSIGN OWNED` is often used to prepare for the removal of one or more roles. Because `REASSIGN OWNED` does not affect objects within other databases, it is usually necessary to execute this command in each database that contains objects owned by a role that is to be removed.

`REASSIGN OWNED` requires membership on both the source role(s) and the target role.

The [`DROP OWNED`](sql-drop-owned.html) command is an alternative that simply drops all the database objects owned by one or more roles.

The `REASSIGN OWNED` command does not affect any privileges granted to the *`old_roles`* on objects that are not owned by them. Likewise, it does not affect default privileges created with `ALTER DEFAULT PRIVILEGES`. Use `DROP OWNED` to revoke such privileges.

See [Section 22.4](role-removal.html) for more discussion.
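A minimal sketch of the role-removal sequence the Notes describe, with hypothetical role names; the first two commands must be repeated in each database that contains affected objects:

```
REASSIGN OWNED BY doomed_role TO successor_role;
DROP OWNED BY doomed_role;   -- revokes remaining privileges and default privileges
DROP ROLE doomed_role;       -- now succeeds, since nothing depends on the role
```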
## Compatibility

The `REASSIGN OWNED` command is a PostgreSQL extension.

## See Also

[DROP OWNED](sql-drop-owned.html), [DROP ROLE](sql-droprole.html), [ALTER DATABASE](sql-alterdatabase.html) diff --git a/docs/X/sql-refreshmaterializedview.md b/docs/en/sql-refreshmaterializedview.md similarity index 100% rename from docs/X/sql-refreshmaterializedview.md rename to docs/en/sql-refreshmaterializedview.md diff --git a/docs/en/sql-refreshmaterializedview.zh.md b/docs/en/sql-refreshmaterializedview.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..92e40add40c1b573bea008d3a182bb3bcb48630c --- /dev/null +++ b/docs/en/sql-refreshmaterializedview.zh.md @@ -0,0 +1,58 @@

## REFRESH MATERIALIZED VIEW

REFRESH MATERIALIZED VIEW — replace the contents of a materialized view

## Synopsis

```
REFRESH MATERIALIZED VIEW [ CONCURRENTLY ] name
    [ WITH [ NO ] DATA ]
```

## Description

`REFRESH MATERIALIZED VIEW` completely replaces the contents of a materialized view. To execute this command you must be the owner of the materialized view. The old contents are discarded. If `WITH DATA` is specified (or defaults) the backing query is executed to provide the new data, and the materialized view is left in a scannable state. If `WITH NO DATA` is specified no new data is generated and the materialized view is left in an unscannable state.

`CONCURRENTLY` and `WITH NO DATA` may not be specified together.

## Parameters

`CONCURRENTLY`

Refresh the materialized view without locking out concurrent selects on the materialized view. Without this option a refresh which affects a lot of rows will tend to use fewer resources and complete more quickly, but could block other connections which are trying to read from the materialized view. This option may be faster in cases where a small number of rows are affected.

This option is only allowed if there is at least one `UNIQUE` index on the materialized view which uses only column names and includes all rows; that is, it must not be an expression index or include a `WHERE` clause.

This option may not be used when the materialized view is not already populated.

Even with this option only one `REFRESH` at a time may run against any one materialized view.

*`name`*

The name (optionally schema-qualified) of the materialized view to refresh.

## Notes

If there is an `ORDER BY` clause in the materialized view's defining query, the original contents of the materialized view will be ordered that way; but `REFRESH MATERIALIZED VIEW` does not guarantee to preserve that ordering.

## Examples

This command will replace the contents of the materialized view called `order_summary` using the query from the materialized view's definition, and leave it in a scannable state:

```
REFRESH MATERIALIZED VIEW order_summary;
```

This command will free storage associated with the materialized view `annual_statistics_basis` and leave it in an unscannable state:

```
REFRESH MATERIALIZED VIEW annual_statistics_basis WITH NO DATA;
```
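A minimal sketch of the `CONCURRENTLY` prerequisite described under Parameters, reusing the `order_summary` view from the examples above (the column name is hypothetical): the view needs a plain unique index covering all rows before a concurrent refresh is accepted.

```
CREATE UNIQUE INDEX order_summary_order_id_idx
    ON order_summary (order_id);          -- plain column index: no expression, no WHERE clause
REFRESH MATERIALIZED VIEW CONCURRENTLY order_summary;
```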
## Compatibility

`REFRESH MATERIALIZED VIEW` is a PostgreSQL extension.

## See Also

[CREATE MATERIALIZED VIEW](sql-creatematerializedview.html), [ALTER MATERIALIZED VIEW](sql-altermaterializedview.html), [DROP MATERIALIZED VIEW](sql-dropmaterializedview.html) diff --git a/docs/X/sql-reindex.md b/docs/en/sql-reindex.md similarity index 100% rename from docs/X/sql-reindex.md rename to docs/en/sql-reindex.md diff --git a/docs/en/sql-reindex.zh.md b/docs/en/sql-reindex.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..771787c8f06f5add1b0b0397543be701459c1d48 --- /dev/null +++ b/docs/en/sql-reindex.zh.md @@ -0,0 +1,159 @@

## REINDEX

REINDEX — rebuild indexes

## Synopsis

```
REINDEX [ ( option [, ...] ) ] { INDEX | TABLE | SCHEMA | DATABASE | SYSTEM } [ CONCURRENTLY ] name

where option can be one of:

    CONCURRENTLY [ boolean ]
    TABLESPACE new_tablespace
    VERBOSE [ boolean ]
```

## Description

`REINDEX` rebuilds an index using the data stored in the index's table, replacing the old copy of the index. There are several scenarios in which to use `REINDEX`:

- An index has become corrupted, and no longer contains valid data. Although in theory this should never happen, in practice indexes can become corrupted due to software bugs or hardware failures. `REINDEX` provides a recovery method.

- An index has become "bloated", that is it contains many empty or nearly-empty pages. This can occur with B-tree indexes in PostgreSQL under certain uncommon access patterns. `REINDEX` provides a way to reduce the space consumption of the index by writing a new version of the index without the dead pages. See [Section 25.2](routine-reindex.html) for more information.

- You have altered a storage parameter (such as fillfactor) for an index, and wish to ensure that the change has taken full effect.

- If an index build fails with the `CONCURRENTLY` option, this index is left as "invalid". Such indexes are useless but it can be convenient to use `REINDEX` to rebuild them. Note that only `REINDEX INDEX` is able to perform a concurrent build on an invalid index.

## Parameters

`INDEX`

Recreate the specified index. This form of `REINDEX` cannot be executed inside a transaction block when used with a partitioned index.

`TABLE`

Recreate all indexes of the specified table. If the table has a secondary "TOAST" table, that is reindexed as well. This form of `REINDEX` cannot be executed inside a transaction block when used with a partitioned table.

`SCHEMA`

Recreate all indexes of the specified schema. If a table of this schema has a secondary "TOAST" table, that is reindexed as well. Indexes on shared system catalogs are also processed. This form of `REINDEX` cannot be executed inside a transaction block.

`DATABASE`

Recreate all indexes within the current database. Indexes on shared system catalogs are also processed. This form of `REINDEX` cannot be executed inside a transaction block.

`SYSTEM`

Recreate all indexes on system catalogs within the current database. Indexes on shared system catalogs are included. Indexes on user tables are not processed. This form of `REINDEX` cannot be executed inside a transaction block.

*`name`*

The name of the specific index, table, or database to be reindexed. Index and table names can be schema-qualified. Presently, `REINDEX DATABASE` and `REINDEX SYSTEM` can only reindex the current database, so their parameter must match the current database's name.

`CONCURRENTLY`

When this option is used, PostgreSQL will rebuild the index without taking any locks that prevent concurrent inserts, updates, or deletes on the table; whereas a standard index rebuild locks out writes (but not reads) on the table until it is done. There are several caveats to be aware of when using this option — see [Rebuilding Indexes Concurrently](sql-reindex.html#SQL-REINDEX-CONCURRENTLY) below.

For temporary tables, `REINDEX` is always non-concurrent, as no other session can access them, and non-concurrent reindex is cheaper.

`TABLESPACE`

Specifies that indexes will be rebuilt on a new tablespace.

`VERBOSE`

Prints a progress report as each index is reindexed.

*`boolean`*

Specifies whether the selected option should be turned on or off. You can write `TRUE`, `ON`, or `1` to enable the option, and `FALSE`, `OFF`, or `0` to disable it. The *`boolean`* value can also be omitted, in which case `TRUE` is assumed.

*`new_tablespace`*

The tablespace where indexes will be rebuilt.

## Notes

If you suspect corruption of an index on a user table, you can simply rebuild that index, or all indexes on the table, using `REINDEX INDEX` or `REINDEX TABLE`.

Things are more difficult if you need to recover from corruption of an index on a system table. In this case it's important for the system to not have used any of the suspect indexes itself. (Indeed, in this sort of scenario you might find that server processes are crashing immediately at start-up, due to reliance on the corrupted indexes.) To recover safely, the server must be started with the `-P` option, which prevents it from using indexes for system catalog lookups.

One way to do this is to shut down the server and start a single-user PostgreSQL server with the `-P` option included on its command line. Then, `REINDEX DATABASE`, `REINDEX SYSTEM`, `REINDEX TABLE`, or `REINDEX INDEX` can be issued, depending on how much you want to reconstruct. If in doubt, use `REINDEX SYSTEM` to select reconstruction of all system indexes in the database. Then quit the single-user server session and restart the regular server. See the [postgres](app-postgres.html) reference page for more information about how to interact with the single-user server interface.

Alternatively, a regular server session can be started with `-P` included in its command line options. The method for doing this varies across clients, but in all libpq-based clients, it is possible to set the `PGOPTIONS` environment variable to `-P` before starting the client. Note that while this method does not require locking out other clients, it might still be wise to prevent other users from connecting to the damaged database until repairs have been completed.

`REINDEX` is similar to a drop and recreate of the index in that the index contents are rebuilt from scratch. However, the locking considerations are rather different. `REINDEX` locks out writes but not reads of the index's parent table. It also takes an `ACCESS EXCLUSIVE` lock on the specific index being processed, which will block reads that attempt to use that index. In contrast, `DROP INDEX` momentarily takes an `ACCESS EXCLUSIVE` lock on the parent table, blocking both writes and reads. The subsequent `CREATE INDEX` locks out writes but not reads; since the index is not there, no read will attempt to use it, meaning that there will be no blocking but reads might be forced into expensive sequential scans.

Reindexing a single index or table requires being the owner of that index or table. Reindexing a schema or database requires being the owner of that schema or database. Note specifically that it's thus possible for non-superusers to rebuild indexes of tables owned by other users. However, as a special exception, when `REINDEX DATABASE`, `REINDEX SCHEMA` or `REINDEX SYSTEM` is issued by a non-superuser, indexes on shared catalogs will be skipped unless the user owns the catalog (which typically won't be the case). Of course, superusers can always reindex anything.

Reindexing partitioned indexes or partitioned tables is supported with `REINDEX INDEX` or `REINDEX TABLE`, respectively. Each partition of the specified partitioned relation is reindexed in a separate transaction. Those commands cannot be used inside a transaction block when working on a partitioned table or index.

When using the `TABLESPACE` clause with `REINDEX` on a partitioned index or table, only the tablespace references of the leaf partitions are updated. As partitioned indexes are not updated, it is recommended to separately use `ALTER TABLE ONLY` on them so as any new partitions attached inherit the new tablespace. On failure, it may not have moved all the indexes to the new tablespace. Re-running the command will rebuild all the leaf partitions and move previously-unprocessed indexes to the new tablespace.

If `SCHEMA`, `DATABASE` or `SYSTEM` is used with `TABLESPACE`, system relations are skipped and a single `WARNING` will be generated. Indexes on TOAST tables are rebuilt, but not moved to the new tablespace.

### Rebuilding Indexes Concurrently

Rebuilding an index can interfere with regular operation of a database. Normally PostgreSQL locks the table whose index is rebuilt against writes and performs the entire index build with a single scan of the table. Other transactions can still read the table, but if they try to insert, update, or delete rows in the table they will block until the index rebuild is finished. This could have a severe effect if the system is a live production database. Very large tables can take many hours to be indexed, and even for smaller tables, an index rebuild can lock out writers for periods that are unacceptably long for a production system.

PostgreSQL supports rebuilding indexes with minimum locking of writes. This method is invoked by specifying the `CONCURRENTLY` option of `REINDEX`. When this option is used, PostgreSQL must perform two scans of the table for each index that needs to be rebuilt and wait for termination of all existing transactions that could potentially use the index. This method requires more total work than a standard index rebuild and takes significantly longer to complete as it needs to wait for unfinished transactions that might modify the index. However, since it allows normal operations to continue while the index is being rebuilt, this method is useful for rebuilding indexes in a production environment. Of course, the extra CPU, memory and I/O load imposed by the index rebuild may slow down other operations.

The following steps occur in a concurrent reindex. Each step is run in a separate transaction. If there are multiple indexes to be rebuilt, then each step loops through all the indexes before moving to the next step.

1. A new transient index definition is added to the catalog `pg_index`. This definition will be used to replace the old index. A `SHARE UPDATE EXCLUSIVE` lock at session level is taken on the indexes being reindexed as well as their associated tables to prevent any schema modification while processing.

2. A first pass to build the index is done for each new index. Once the index is built, its flag `pg_index.indisready` is switched to "true" to make it ready for inserts, making it visible to other sessions once the transaction that performed the build is finished. This step is done in a separate transaction for each index.
3. Then a second pass is performed to add tuples that were added while the first pass was running. This step is also done in a separate transaction for each index.

4. All the constraints that refer to the index are changed to refer to the new index definition, and the names of the indexes are changed. At this point, `pg_index.indisvalid` is switched to "true" for the new index and to "false" for the old, and a cache invalidation is done causing all sessions that referenced the old index to be invalidated.

5. The old indexes have `pg_index.indisready` switched to "false" to prevent any new tuple insertions, after waiting for running queries that might reference the old index to complete.

6. The old indexes are dropped. The `SHARE UPDATE EXCLUSIVE` session locks for the indexes and the table are released.

   If a problem arises while rebuilding the indexes, such as a uniqueness violation in a unique index, the `REINDEX` command will fail but leave behind an "invalid" new index in addition to the pre-existing one. This index will be ignored for querying purposes because it might be incomplete; however it will still consume update overhead. The psql `\d` command will report such an index as `INVALID`:

```
postgres=# \d tab
       Table "public.tab"
 Column |  Type   | Modifiers
--------+---------+-----------
 col    | integer |
Indexes:
    "idx" btree (col)
    "idx_ccnew" btree (col) INVALID
```

## Examples

Rebuild a single index:

```
REINDEX INDEX my_index;
```

Rebuild all the indexes on the table `my_table`:

```
REINDEX TABLE my_table;
```

Rebuild all indexes in a particular database, without trusting the system indexes to be valid already:

```
$ export PGOPTIONS="-P"
$ psql broken_db
...
broken_db=> REINDEX DATABASE broken_db;
broken_db=> \q
```

Rebuild indexes for a table, without blocking read and write operations on involved relations while reindexing is in progress:

```
REINDEX TABLE CONCURRENTLY my_broken_table;
```
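A small sketch combining the options described under Parameters (the tablespace name is hypothetical):

```
-- Rebuild all indexes of my_table, placing them in tablespace fast_ssd
-- and printing a progress report for each index:
REINDEX (TABLESPACE fast_ssd, VERBOSE) TABLE my_table;
```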
## Compatibility

There is no `REINDEX` command in the SQL standard.

## See Also

[CREATE INDEX](sql-createindex.html), [DROP INDEX](sql-dropindex.html), [reindexdb](app-reindexdb.html), [Section 28.4.2](progress-reporting.html#CREATE-INDEX-PROGRESS-REPORTING) diff --git a/docs/X/sql-release-savepoint.md b/docs/en/sql-release-savepoint.md similarity index 100% rename from docs/X/sql-release-savepoint.md rename to docs/en/sql-release-savepoint.md diff --git a/docs/en/sql-release-savepoint.zh.md b/docs/en/sql-release-savepoint.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..c0113d67097880560654a32b491f93e8257a1bd0 --- /dev/null +++ b/docs/en/sql-release-savepoint.zh.md @@ -0,0 +1,54 @@

## RELEASE SAVEPOINT

RELEASE SAVEPOINT — destroy a previously defined savepoint

## Synopsis

```
RELEASE [ SAVEPOINT ] savepoint_name
```

## Description

`RELEASE SAVEPOINT` destroys a savepoint previously defined in the current transaction.

Destroying a savepoint makes it unavailable as a rollback point, but it has no other user-visible behavior. It does not undo the effects of commands executed after the savepoint was established. (To do that, see [ROLLBACK TO SAVEPOINT](sql-rollback-to.html).) Destroying a savepoint when it is no longer needed allows the system to reclaim some resources earlier than transaction end.

`RELEASE SAVEPOINT` also destroys all savepoints that were established after the named savepoint was established.

## Parameters

*`savepoint_name`*

The name of the savepoint to destroy.

## Notes

Specifying a savepoint name that was not previously defined is an error.

It is not possible to release a savepoint when the transaction is in an aborted state.

If multiple savepoints have the same name, only the most recently defined one is released.

## Examples

To establish and later destroy a savepoint:

```
BEGIN;
    INSERT INTO table1 VALUES (3);
    SAVEPOINT my_savepoint;
    INSERT INTO table1 VALUES (4);
    RELEASE SAVEPOINT my_savepoint;
COMMIT;
```

The above transaction will insert both 3 and 4.

## Compatibility

This command conforms to the SQL standard. The standard specifies that the key word `SAVEPOINT` is mandatory, but PostgreSQL allows it to be omitted.

## See Also

[BEGIN](sql-begin.html), [COMMIT](sql-commit.html), [ROLLBACK](sql-rollback.html), [ROLLBACK TO SAVEPOINT](sql-rollback-to.html), [SAVEPOINT](sql-savepoint.html) diff --git a/docs/X/sql-reset.md b/docs/en/sql-reset.md similarity index 100% rename from docs/X/sql-reset.md rename to docs/en/sql-reset.md diff --git a/docs/en/sql-reset.zh.md b/docs/en/sql-reset.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..2c5492045da211ac3b4f758b5279d1b924e98a3c --- /dev/null +++ b/docs/en/sql-reset.zh.md @@ -0,0 +1,50 @@

## RESET

RESET — restore the value of a run-time parameter to the default value

## Synopsis

```
RESET configuration_parameter
RESET ALL
```

## Description

`RESET` restores run-time parameters to their default values. `RESET` is an alternative spelling for

```
SET configuration_parameter TO DEFAULT
```

Refer to [SET](sql-set.html) for details.

The default value is defined as the value that the parameter would have had, if no `SET` had ever been issued for it in the current session. The actual source of this value might be a compiled-in default, the configuration file, command-line options, or per-database or per-user default settings. This is subtly different from defining it as "the value that the parameter had at session start", because if the value came from the configuration file, it will be reset to whatever is specified by the configuration file now. See [Chapter 20](runtime-config.html) for details.

The transactional behavior of `RESET` is the same as `SET`: its effects will be undone by transaction rollback.

## Parameters

*`configuration_parameter`*

Name of a settable run-time parameter. Available parameters are documented in [Chapter 20](runtime-config.html) and on the [SET](sql-set.html) reference page.

`ALL`

Resets all settable run-time parameters to default values.

## Examples

Set the `timezone` configuration variable to its default value:

```
RESET timezone;
```

## Compatibility

`RESET` is a PostgreSQL extension.

## See Also

[SET](sql-set.html), [SHOW](sql-show.html) diff --git a/docs/X/sql-revoke.md b/docs/en/sql-revoke.md similarity index 100% rename from docs/X/sql-revoke.md rename to docs/en/sql-revoke.md diff --git a/docs/en/sql-revoke.zh.md b/docs/en/sql-revoke.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..69bb21590863e7c28abb209a5a424655e3c06595 --- /dev/null +++ b/docs/en/sql-revoke.zh.md @@ -0,0 +1,177 @@

## REVOKE

REVOKE — remove access privileges

## Synopsis

```
REVOKE [ GRANT OPTION FOR ]
    { { SELECT | INSERT | UPDATE | DELETE | TRUNCATE | REFERENCES | TRIGGER }
    [, ...] | ALL [ PRIVILEGES ] }
    ON { [ TABLE ] table_name [, ...]
         | ALL TABLES IN SCHEMA schema_name [, ...] }
    FROM role_specification [, ...]
    [ GRANTED BY role_specification ]
    [ CASCADE | RESTRICT ]

REVOKE [ GRANT OPTION FOR ]
    { { SELECT | INSERT | UPDATE | REFERENCES } ( column_name [, ...] )
    [, ...] | ALL [ PRIVILEGES ] ( column_name [, ...] ) }
    ON [ TABLE ] table_name [, ...]
    FROM role_specification [, ...]
    [ GRANTED BY role_specification ]
    [ CASCADE | RESTRICT ]

REVOKE [ GRANT OPTION FOR ]
    { { USAGE | SELECT | UPDATE }
    [, ...] | ALL [ PRIVILEGES ] }
    ON { SEQUENCE sequence_name [, ...]
         | ALL SEQUENCES IN SCHEMA schema_name [, ...] }
    FROM role_specification [, ...]
    [ GRANTED BY role_specification ]
    [ CASCADE | RESTRICT ]

REVOKE [ GRANT OPTION FOR ]
    { { CREATE | CONNECT | TEMPORARY | TEMP } [, ...] | ALL [ PRIVILEGES ] }
    ON DATABASE database_name [, ...]
    FROM role_specification [, ...]
    [ GRANTED BY role_specification ]
    [ CASCADE | RESTRICT ]

REVOKE [ GRANT OPTION FOR ]
    { USAGE | ALL [ PRIVILEGES ] }
    ON DOMAIN domain_name [, ...]
    FROM role_specification [, ...]
    [ GRANTED BY role_specification ]
    [ CASCADE | RESTRICT ]

REVOKE [ GRANT OPTION FOR ]
    { USAGE | ALL [ PRIVILEGES ] }
    ON FOREIGN DATA WRAPPER fdw_name [, ...]
    FROM role_specification [, ...]
    [ GRANTED BY role_specification ]
    [ CASCADE | RESTRICT ]

REVOKE [ GRANT OPTION FOR ]
    { USAGE | ALL [ PRIVILEGES ] }
    ON FOREIGN SERVER server_name [, ...]
    FROM role_specification [, ...]
    [ GRANTED BY role_specification ]
    [ CASCADE | RESTRICT ]

REVOKE [ GRANT OPTION FOR ]
    { EXECUTE | ALL [ PRIVILEGES ] }
    ON { { FUNCTION | PROCEDURE | ROUTINE } function_name [ ( [ [ argmode ] [ arg_name ] arg_type [, ...] ] ) ] [, ...]
         | ALL { FUNCTIONS | PROCEDURES | ROUTINES } IN SCHEMA schema_name [, ...] }
    FROM role_specification [, ...]
    [ GRANTED BY role_specification ]
    [ CASCADE | RESTRICT ]

REVOKE [ GRANT OPTION FOR ]
    { USAGE | ALL [ PRIVILEGES ] }
    ON LANGUAGE lang_name [, ...]
    FROM role_specification [, ...]
    [ GRANTED BY role_specification ]
    [ CASCADE | RESTRICT ]

REVOKE [ GRANT OPTION FOR ]
    { { SELECT | UPDATE } [, ...] | ALL [ PRIVILEGES ] }
    ON LARGE OBJECT loid [, ...]
    FROM role_specification [, ...]
    [ GRANTED BY role_specification ]
    [ CASCADE | RESTRICT ]

REVOKE [ GRANT OPTION FOR ]
    { { CREATE | USAGE } [, ...] | ALL [ PRIVILEGES ] }
    ON SCHEMA schema_name [, ...]
    FROM role_specification [, ...]
    [ GRANTED BY role_specification ]
    [ CASCADE | RESTRICT ]

REVOKE [ GRANT OPTION FOR ]
    { CREATE | ALL [ PRIVILEGES ] }
    ON TABLESPACE tablespace_name [, ...]
    FROM role_specification [, ...]
    [ GRANTED BY role_specification ]
    [ CASCADE | RESTRICT ]

REVOKE [ GRANT OPTION FOR ]
    { USAGE | ALL [ PRIVILEGES ] }
    ON TYPE type_name [, ...]
    FROM role_specification [, ...]
    [ GRANTED BY role_specification ]
    [ CASCADE | RESTRICT ]

REVOKE [ ADMIN OPTION FOR ]
    role_name [, ...] FROM role_specification [, ...]
    [ GRANTED BY role_specification ]
    [ CASCADE | RESTRICT ]

where role_specification can be:

    [ GROUP ] role_name
  | PUBLIC
  | CURRENT_ROLE
  | CURRENT_USER
  | SESSION_USER
```

## Description

The `REVOKE` command revokes previously granted privileges from one or more roles. The key word `PUBLIC` refers to the implicitly defined group of all roles.

See the description of the [`GRANT`](sql-grant.html) command for the meaning of the privilege types.

Note that any particular role will have the sum of privileges granted directly to it, privileges granted to any role it is presently a member of, and privileges granted to `PUBLIC`. Thus, for example, revoking `SELECT` privilege from `PUBLIC` does not necessarily mean that all roles have lost `SELECT` privilege on the object: those who have it granted directly or via another role will still have it. Similarly, revoking `SELECT` from a user might not prevent that user from using `SELECT` if `PUBLIC` or another membership role still has `SELECT` rights.

If `GRANT OPTION FOR` is specified, only the grant option for the privilege is revoked, not the privilege itself. Otherwise, both the privilege and the grant option are revoked.

If a user holds a privilege with grant option and has granted it to other users then the privileges held by those other users are called dependent privileges. If the privilege or the grant option held by the first user is being revoked and dependent privileges exist, those dependent privileges are also revoked if `CASCADE` is specified; if it is not, the revoke action will fail. This recursive revocation only affects privileges that were granted through a chain of users that is traceable to the user that is the subject of this `REVOKE` command. Thus, the affected users might effectively keep the privilege if it was also granted through other users.

When revoking privileges on a table, the corresponding column privileges (if any) are automatically revoked on each column of the table, as well. On the other hand, if a role has been granted privileges on a table, then revoking the same privileges from individual columns will have no effect.

When revoking membership in a role, `GRANT OPTION` is instead called `ADMIN OPTION`, but the behavior is similar. This form of the command also allows a `GRANTED BY` option, but that option is currently ignored (except for checking the existence of the named role). Note also that this form of the command does not allow the noise word `GROUP` in *`role_specification`*.

## Notes

A user can only revoke privileges that were granted directly by that user. If, for example, user A has granted a privilege with grant option to user B, and user B has in turn granted it to user C, then user A cannot revoke the privilege directly from C. Instead, user A could revoke the grant option from user B and use the `CASCADE` option so that the privilege is in turn revoked from user C. For another example, if both A and B have granted the same privilege to C, A can revoke their own grant but not B's grant, so C will still effectively have the privilege.

When a non-owner of an object attempts to `REVOKE` privileges on the object, the command will fail outright if the user has no privileges whatsoever on the object. As long as some privilege is available, the command will proceed, but it will revoke only those privileges for which the user has grant options. The `REVOKE ALL PRIVILEGES` forms will issue a warning message if no grant options are held, while the other forms will issue a warning if grant options for any of the privileges specifically named in the command are not held.
(In principle these statements apply to the object owner as well, but since the owner is always treated as holding all grant options, the cases can never occur.)

If a superuser chooses to issue a `GRANT` or `REVOKE` command, the command is performed as though it were issued by the owner of the affected object. Since all privileges ultimately come from the object owner (possibly indirectly via chains of grant options), it is possible for a superuser to revoke all privileges, but this might require use of `CASCADE` as stated above.

`REVOKE` can also be done by a role that is not the owner of the affected object, but is a member of the role that owns the object, or is a member of a role that holds privileges `WITH GRANT OPTION` on the object. In this case the command is performed as though it were issued by the containing role that actually owns the object or holds the privileges `WITH GRANT OPTION`. For example, if table `t1` is owned by role `g1`, of which role `u1` is a member, then `u1` can revoke privileges on `t1` that are recorded as granted by `g1`. This would include grants made by `u1` as well as by other members of role `g1`.

If the role executing `REVOKE` holds privileges indirectly via more than one role membership path, it is unspecified which containing role will be used to perform the command. In such cases it is best practice to use `SET ROLE` to become the specific role you want to do the `REVOKE` as. Failure to do so might lead to revoking privileges other than the ones you intended, or not revoking anything at all.

See [Section 5.7](ddl-priv.html) for more information about specific privilege types, as well as how to inspect objects' privileges.

## Examples

Revoke insert privilege for the public on table `films`:

```
REVOKE INSERT ON films FROM PUBLIC;
```

Revoke all privileges from user `manuel` on view `kinds`:

```
REVOKE ALL PRIVILEGES ON kinds FROM manuel;
```

Note that this actually means "revoke all privileges that I granted".

Revoke membership in role `admins` from user `joe`:

```
REVOKE admins FROM joe;
```
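A minimal sketch of the dependent-privilege scenario from the Notes above (role names hypothetical, reusing the `films` table): revoking A's grant option from B with `CASCADE` also removes the privilege B passed on to C.

```
GRANT SELECT ON films TO b WITH GRANT OPTION;            -- issued by role a
-- role b then runs:  GRANT SELECT ON films TO c;
REVOKE GRANT OPTION FOR SELECT ON films FROM b CASCADE;  -- issued by role a
-- b keeps SELECT but can no longer grant it; c's dependent SELECT is revoked
```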
## Compatibility

The compatibility notes of the [`GRANT`](sql-grant.html) command apply analogously to `REVOKE`. The keyword `RESTRICT` or `CASCADE` is required according to the standard, but PostgreSQL assumes `RESTRICT` by default.

## See Also

[GRANT](sql-grant.html), [ALTER DEFAULT PRIVILEGES](sql-alterdefaultprivileges.html) diff --git a/docs/X/sql-rollback-prepared.md b/docs/en/sql-rollback-prepared.md similarity index 100% rename from docs/X/sql-rollback-prepared.md rename to docs/en/sql-rollback-prepared.md diff --git a/docs/en/sql-rollback-prepared.zh.md b/docs/en/sql-rollback-prepared.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..1d801d235043a46f520ceb5adaded45657eb4384 --- /dev/null +++ b/docs/en/sql-rollback-prepared.zh.md @@ -0,0 +1,43 @@

## ROLLBACK PREPARED

ROLLBACK PREPARED — cancel a transaction that was earlier prepared for two-phase commit

## Synopsis

```
ROLLBACK PREPARED transaction_id
```

## Description

`ROLLBACK PREPARED` rolls back a transaction that is in prepared state.

## Parameters

*`transaction_id`*

The transaction identifier of the transaction that is to be rolled back.

## Notes

To roll back a prepared transaction, you must be either the same user that executed the transaction originally, or a superuser. But you do not have to be in the same session that executed the transaction.

This command cannot be executed inside a transaction block. The prepared transaction is rolled back immediately.

All currently available prepared transactions are listed in the [`pg_prepared_xacts`](view-pg-prepared-xacts.html) system view.

## Examples

Roll back the transaction identified by the transaction identifier `foobar`:

```
ROLLBACK PREPARED 'foobar';
```

## Compatibility

`ROLLBACK PREPARED` is a PostgreSQL extension. It is intended for use by external transaction management systems, some of which are covered by standards (such as X/Open XA), but the SQL side of those systems is not standardized.

## See Also

[PREPARE TRANSACTION](sql-prepare-transaction.html), [COMMIT PREPARED](sql-commit-prepared.html) diff --git a/docs/X/sql-rollback-to.md b/docs/en/sql-rollback-to.md similarity index 100% rename from docs/X/sql-rollback-to.md rename to docs/en/sql-rollback-to.md diff --git a/docs/en/sql-rollback-to.zh.md b/docs/en/sql-rollback-to.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..88610a1355ec9631fb1bc938e75c91e9da1f635a --- /dev/null +++ b/docs/en/sql-rollback-to.zh.md @@ -0,0 +1,57 @@

## ROLLBACK TO SAVEPOINT

ROLLBACK TO SAVEPOINT — roll back to a savepoint

## Synopsis

```
ROLLBACK [ WORK | TRANSACTION ] TO [ SAVEPOINT ] savepoint_name
```

## Description

Roll back all commands that were executed after the savepoint was established. The savepoint remains valid and can be rolled back to again later, if needed.

`ROLLBACK TO SAVEPOINT` implicitly destroys all savepoints that were established after the named savepoint.

## Parameters

*`savepoint_name`*

The savepoint to roll back to.

## Notes

Use [`RELEASE SAVEPOINT`](sql-release-savepoint.html) to destroy a savepoint without discarding the effects of commands executed after it was established.

Specifying a savepoint name that has not been established is an error.

Cursors have somewhat non-transactional behavior with respect to savepoints. Any cursor that is opened inside a savepoint will be closed when the savepoint is rolled back. If a previously opened cursor is affected by a `FETCH` or `MOVE` command inside a savepoint that is later rolled back, the cursor remains at the position that `FETCH` left it pointing to (that is, the cursor motion caused by `FETCH` is not rolled back). Closing a cursor is not undone by rolling back, either. However, other side-effects caused by the cursor's query (such as side-effects of volatile functions called by the query) *are* rolled back if they occur during a savepoint that is later rolled back. A cursor whose execution causes a transaction to abort is put in a cannot-execute state, so while the transaction can be restored using `ROLLBACK TO SAVEPOINT`, the cursor can no longer be used.

## Examples

To undo the effects of the commands executed after `my_savepoint` was established:

```
ROLLBACK TO SAVEPOINT my_savepoint;
```

Cursor positions are not affected by savepoint rollback:

```
BEGIN;

DECLARE foo CURSOR FOR SELECT 1 UNION SELECT 2;

SAVEPOINT foo;

FETCH 1 FROM foo;
 ?column?
----------
        1

ROLLBACK TO SAVEPOINT foo;

FETCH 1 FROM foo;
 ?column?
----------
        2

COMMIT;
```

## Compatibility

The SQL standard specifies that the key word `SAVEPOINT` is mandatory, but PostgreSQL and Oracle allow it to be omitted. SQL allows only `WORK`, not `TRANSACTION`, as a noise word after `ROLLBACK`. Also, SQL has an optional clause `AND [ NO ] CHAIN` which is not currently supported by PostgreSQL. Otherwise, this command conforms to the SQL standard.

## See Also

[BEGIN](sql-begin.html), [COMMIT](sql-commit.html), [RELEASE SAVEPOINT](sql-release-savepoint.html), [ROLLBACK](sql-rollback.html), [SAVEPOINT](sql-savepoint.html) diff --git a/docs/X/sql-rollback.md b/docs/en/sql-rollback.md similarity index 100% rename from docs/X/sql-rollback.md rename to docs/en/sql-rollback.md diff --git a/docs/en/sql-rollback.zh.md b/docs/en/sql-rollback.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..f952c0f83c76a1a9d216b84b5313f8b0e8436a6b --- /dev/null +++ b/docs/en/sql-rollback.zh.md @@ -0,0 +1,48 @@

## ROLLBACK

ROLLBACK — abort the current transaction

## Synopsis

```
ROLLBACK [ WORK | TRANSACTION ] [ AND [ NO ] CHAIN ]
```

## Description

`ROLLBACK` rolls back the current transaction and causes all the updates made by the transaction to be discarded.

## Parameters

`WORK`\
`TRANSACTION`

Optional key words. They have no effect.

`AND CHAIN`

If `AND CHAIN` is specified, a new transaction is immediately started with the same transaction characteristics (see [SET TRANSACTION](sql-set-transaction.html)) as the just finished one. Otherwise, no new transaction is started.

## Notes

Use [`COMMIT`](sql-commit.html) to successfully terminate a transaction.

Issuing `ROLLBACK` outside of a transaction block emits a warning and otherwise has no effect. `ROLLBACK AND CHAIN` outside of a transaction block is an error.

## Examples

To abort all changes:

```
ROLLBACK;
```

## Compatibility

The command `ROLLBACK` conforms to the SQL standard. The form `ROLLBACK TRANSACTION` is a PostgreSQL extension.

## See Also

[BEGIN](sql-begin.html), [COMMIT](sql-commit.html), [ROLLBACK TO SAVEPOINT](sql-rollback-to.html) diff --git a/docs/X/sql-savepoint.md b/docs/en/sql-savepoint.md similarity index 100% rename from docs/X/sql-savepoint.md rename to docs/en/sql-savepoint.md diff --git a/docs/en/sql-savepoint.zh.md b/docs/en/sql-savepoint.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..e18bee1a803a6d93c043e78c88ba0b2b6ce6b60a --- /dev/null +++ b/docs/en/sql-savepoint.zh.md @@ -0,0 +1,64 @@

## SAVEPOINT

SAVEPOINT — define a new savepoint within the current transaction

## Synopsis

```
SAVEPOINT savepoint_name
```

## Description

`SAVEPOINT` establishes a new savepoint within the current transaction.

A savepoint is a special mark inside a transaction that allows all commands that are executed after it was established to be rolled back, restoring the transaction state to what it was at the time of the savepoint.

## Parameters

*`savepoint_name`*

The name to give to the new savepoint.

## Notes

Use [`ROLLBACK TO`](sql-rollback-to.html) to roll back to a savepoint. Use [`RELEASE SAVEPOINT`](sql-release-savepoint.html) to destroy a savepoint, keeping the effects of commands executed after it was established.

Savepoints can only be established when inside a transaction block. There can be multiple savepoints defined within a transaction.

## Examples

To establish a savepoint and later undo the effects of all commands executed after it was established:

```
BEGIN;
    INSERT INTO table1 VALUES (1);
    SAVEPOINT my_savepoint;
    INSERT INTO table1 VALUES (2);
    ROLLBACK TO SAVEPOINT my_savepoint;
    INSERT INTO table1 VALUES (3);
COMMIT;
```

The above transaction will insert the values 1 and 3, but not 2.

To establish and later destroy a savepoint:

```
BEGIN;
    INSERT INTO table1 VALUES (3);
    SAVEPOINT my_savepoint;
    INSERT INTO table1 VALUES (4);
    RELEASE SAVEPOINT my_savepoint;
COMMIT;
```

The above transaction will insert both 3 and 4.
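A short sketch of the same-name behavior covered in the Compatibility note below, reusing `table1` from the examples: with two savepoints both named `sp`, rollbacks and releases always bind to the most recent one.

```
BEGIN;
    SAVEPOINT sp;
    INSERT INTO table1 VALUES (10);
    SAVEPOINT sp;                 -- a second savepoint with the same name
    INSERT INTO table1 VALUES (20);
    ROLLBACK TO SAVEPOINT sp;     -- undoes only the insert of 20
    RELEASE SAVEPOINT sp;         -- the older "sp" becomes accessible again
    ROLLBACK TO SAVEPOINT sp;     -- now undoes the insert of 10 as well
COMMIT;                           -- nothing is inserted
```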
## Compatibility

SQL requires a savepoint to be destroyed automatically when another savepoint with the same name is established. In PostgreSQL, the old savepoint is kept, though only the more recent one will be used when rolling back or releasing. (Releasing the newer savepoint with `RELEASE SAVEPOINT` will cause the older one to again become accessible to `ROLLBACK TO SAVEPOINT` and `RELEASE SAVEPOINT`.) Otherwise, `SAVEPOINT` is fully SQL conforming.

## See Also

[BEGIN](sql-begin.html), [COMMIT](sql-commit.html), [RELEASE SAVEPOINT](sql-release-savepoint.html), [ROLLBACK](sql-rollback.html), [ROLLBACK TO SAVEPOINT](sql-rollback-to.html) diff --git a/docs/X/sql-security-label.md b/docs/en/sql-security-label.md similarity index 100% rename from docs/X/sql-security-label.md rename to docs/en/sql-security-label.md diff --git a/docs/en/sql-security-label.zh.md b/docs/en/sql-security-label.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..e6603a9492f88cfec5c24fabf71e73b6eae9a3f4 --- /dev/null +++ b/docs/en/sql-security-label.zh.md @@ -0,0 +1,103 @@

## SECURITY LABEL

SECURITY LABEL — define or change a security label applied to an object

## Synopsis

```
SECURITY LABEL [ FOR provider ] ON
{
  TABLE object_name |
  COLUMN table_name.column_name |
  AGGREGATE aggregate_name ( aggregate_signature ) |
  DATABASE object_name |
  DOMAIN object_name |
  EVENT TRIGGER object_name |
  FOREIGN TABLE object_name
  FUNCTION function_name [ ( [ [ argmode ] [ argname ] argtype [, ...] ] ) ] |
  LARGE OBJECT large_object_oid |
  MATERIALIZED VIEW object_name |
  [ PROCEDURAL ] LANGUAGE object_name |
  PROCEDURE procedure_name [ ( [ [ argmode ] [ argname ] argtype [, ...] ] ) ] |
  PUBLICATION object_name |
  ROLE object_name |
  ROUTINE routine_name [ ( [ [ argmode ] [ argname ] argtype [, ...] ] ) ] |
  SCHEMA object_name |
  SEQUENCE object_name |
  SUBSCRIPTION object_name |
  TABLESPACE object_name |
  TYPE object_name |
  VIEW object_name
} IS 'label'

where aggregate_signature is:

* |
[ argmode ] [ argname ] argtype [ , ... ] |
[ [ argmode ] [ argname ] argtype [ , ... ] ] ORDER BY [ argmode ] [ argname ] argtype [ , ... ]
```

## Description

`SECURITY LABEL` applies a security label to a database object. An arbitrary number of security labels, one per label provider, can be associated with a given database object. Label providers are loadable modules which register themselves by using the function `register_label_provider`.

### Note

`register_label_provider` is not an SQL function; it can only be called from C code loaded into the backend.

The label provider determines whether a given label is valid and whether it is permissible to assign that label to a given object. The meaning of a given label is likewise at the discretion of the label provider. PostgreSQL places no restrictions on whether or how a label provider must interpret security labels; it merely provides a mechanism for storing them. In practice, this facility is intended to allow integration with label-based mandatory access control (MAC) systems such as SELinux. Such systems make all access control decisions based on object labels, rather than traditional discretionary access control (DAC) concepts such as users and groups.

## Parameters

*`object_name`*\
*`table_name.column_name`*\
*`aggregate_name`*\
*`function_name`*\
*`procedure_name`*\
*`routine_name`*

The name of the object to be labeled. Names of objects that reside in schemas (tables, functions, etc.) can be schema-qualified.

*`provider`*

The name of the provider with which this label is to be associated. The named provider must be loaded and must consent to the proposed labeling operation. If exactly one provider is loaded, the provider name may be omitted for brevity.

*`argmode`*

The mode of a function, procedure, or aggregate argument: `IN`, `OUT`, `INOUT`, or `VARIADIC`. If omitted, the default is `IN`. Note that `SECURITY LABEL` does not actually pay any attention to `OUT` arguments, since only the input arguments are needed to determine the function's identity. So it is sufficient to list the `IN`, `INOUT`, and `VARIADIC` arguments.

*`argname`*

The name of a function, procedure, or aggregate argument. Note that `SECURITY LABEL` does not actually pay any attention to argument names, since only the argument data types are needed to determine the function's identity.

*`argtype`*

The data type of a function, procedure, or aggregate argument.

*`large_object_oid`*

The OID of the large object.

`PROCEDURAL`

This is a noise word.

*`label`*

The new security label, written as a string literal; or `NULL` to drop the security label.

## Examples

The following example shows how the security label of a table could be changed.

```
SECURITY LABEL FOR selinux ON TABLE mytable IS 'system_u:object_r:sepgsql_table_t:s0';
```

## Compatibility

There is no `SECURITY LABEL` command in the SQL standard.

## See Also

[sepgsql](sepgsql.html), `src/test/modules/dummy_seclabel` diff --git a/docs/X/sql-select.md b/docs/en/sql-select.md similarity index 100% rename from docs/X/sql-select.md rename to docs/en/sql-select.md diff --git a/docs/en/sql-select.zh.md b/docs/en/sql-select.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..1003b8ea09345bf259cef50df3a19173b4d78f1f --- /dev/null +++ b/docs/en/sql-select.zh.md @@ -0,0 +1,628 @@

## SELECT

SELECT, TABLE, WITH — retrieve rows from a table or view

## Synopsis

```
[ WITH [ RECURSIVE ] with_query [, ...] ]
SELECT [ ALL | DISTINCT [ ON ( expression [, ...] ) ] ]
    [ * | expression [ [ AS ] output_name ] [, ...] ]
    [ FROM from_item [, ...] ]
    [ WHERE condition ]
    [ GROUP BY [ ALL | DISTINCT ] grouping_element [, ...] ]
    [ HAVING condition ]
    [ WINDOW window_name AS ( window_definition ) [, ...]
] + [ { UNION | INTERSECT | EXCEPT } [ ALL | DISTINCT ] select ] + [ ORDER BY expression [ ASC | DESC | USING operator ] [ NULLS { FIRST | LAST } ] [, ...] ] + [ LIMIT { count | ALL } ] + [ OFFSET start [ ROW | ROWS ] ] + [ FETCH { FIRST | NEXT } [ count ] { ROW | ROWS } { ONLY | WITH TIES } ] + [ FOR { UPDATE | NO KEY UPDATE | SHARE | KEY SHARE } [ OF table_name [, ...] ] [ NOWAIT | SKIP LOCKED ] [...] ] + +where from_item can be one of: + + [ ONLY ] table_name [ * ] [ [ AS ] alias [ ( column_alias [, ...] ) ] ] + [ TABLESAMPLE sampling_method ( argument [, ...] ) [ REPEATABLE ( seed ) ] ] + [ LATERAL ] ( select ) [ AS ] alias [ ( column_alias [, ...] ) ] + with_query_name [ [ AS ] alias [ ( column_alias [, ...] ) ] ] + [ LATERAL ] function_name ( [ argument [, ...] ] ) + [ WITH ORDINALITY ] [ [ AS ] alias [ ( column_alias [, ...] ) ] ] + [ LATERAL ] function_name ( [ argument [, ...] ] ) [ AS ] alias ( column_definition [, ...] ) + [ LATERAL ] function_name ( [ argument [, ...] ] ) AS ( column_definition [, ...] ) + [ LATERAL ] ROWS FROM( function_name ( [ argument [, ...] ] ) [ AS ( column_definition [, ...] ) ] [, ...] ) + [ WITH ORDINALITY ] [ [ AS ] alias [ ( column_alias [, ...] ) ] ] + from_item [ NATURAL ] join_type from_item [ ON join_condition | USING ( join_column [, ...] ) [ AS join_using_alias ] ] + +and grouping_element can be one of: + + ( ) + expression + ( expression [, ...] ) + ROLLUP ( { expression | ( expression [, ...] ) } [, ...] ) + CUBE ( { expression | ( expression [, ...] ) } [, ...] ) + GROUPING SETS ( grouping_element [, ...] ) + +and with_query is: + + with_query_name [ ( column_name [, ...] ) ] AS [ [ NOT ] MATERIALIZED ] ( select | values | insert | update | delete ) + [ SEARCH { BREADTH | DEPTH } FIRST BY column_name [, ...] SET search_seq_col_name ] + [ CYCLE column_name [, ...] SET cycle_mark_col_name [ TO cycle_mark_value DEFAULT cycle_mark_default ] USING cycle_path_col_name ] + +TABLE [ ONLY ] table_name [ * ] +``` + +## 描述 + +`选择`从零个或多个表中检索行。一般处理`选择`如下: + +1. 中的所有查询`和`计算列表。这些有效地充当临时表,可以在`从`列表。一种`和`中多次引用的查询`从`只计算一次,除非另有说明`未物化`.(看[WITH 子句](sql-select.html#SQL-WITH)以下。) + +2. 中的所有元素`从`计算列表。(每个元素在`从`list 是一个真实的或虚拟的表。)如果在`从`列表,它们交叉连接在一起。(看[从子句](sql-select.html#SQL-FROM)以下。) + +3. 如果`在哪里`子句时,所有不满足条件的行都会从输出中剔除。(看[WHERE 子句](sql-select.html#SQL-WHERE)以下。) + +4. 如果`通过...分组`子句被指定,或者如果有聚合函数调用,则将输出组合成与一个或多个值匹配的行组,并计算聚合函数的结果。如果`拥有`子句存在时,它会消除不满足给定条件的组。(看[GROUP BY 条款](sql-select.html#SQL-GROUPBY)和[有条款](sql-select.html#SQL-HAVING)以下。) + +5. 实际输出行是使用`选择`每个选定行或行组的输出表达式。(看[选择列表](sql-select.html#SQL-SELECT-LIST)以下。) + +6. `选择不同的`从结果中消除重复行。`选择不同`消除与所有指定表达式匹配的行。`全选`(默认)将返回所有候选行,包括重复行。(看[区别条款](sql-select.html#SQL-DISTINCT)以下。) + +7. 使用运算符`联盟`,`相交`, 和`除了`, 一个以上的输出`选择`语句可以组合形成单个结果集。这`联盟`运算符返回一个或两个结果集中的所有行。这`相交`运算符返回严格在两个结果集中的所有行。这`除了`运算符返回第一个结果集中但不在第二个结果集中的行。在所有三种情况下,重复行都被消除,除非`全部`被指定。噪音词`清楚的`可以添加以明确指定消除重复行。请注意`清楚的`是这里的默认行为,即使`全部`是默认的`选择`本身。(看[联合条款](sql-select.html#SQL-UNION),[相交条款](sql-select.html#SQL-INTERSECT), 和[除外条款](sql-select.html#SQL-EXCEPT)以下。) + +8. 如果`订购方式`子句时,返回的行按指定的顺序排序。如果`订购方式`未给出,则以系统发现最快生成的任何顺序返回行。(看[ORDER BY 条款](sql-select.html#SQL-ORDERBY)以下。) + +9. 如果`限制`(或者`先取`) 或者`抵消`条款被指定,则`选择`语句只返回结果行的子集。(看[限制条款](sql-select.html#SQL-LIMIT)以下。) + +10. 
如果`更新`,`无密钥更新`,`分享`或者`关键分享`被指定,`选择`语句锁定选定的行以防止并发更新。(看[锁定条款](sql-select.html#SQL-FOR-UPDATE-SHARE)以下。) + + 你必须有`选择`对 a 中使用的每一列的权限`选择`命令。指某东西的用途`无密钥更新`,`更新`,`分享`或者`关键分享`需要`更新`特权(对于如此选择的每个表的至少一列)。 + +## 参数 + +### `和`条款 + +这`和`子句允许您指定一个或多个子查询,这些子查询可以在主查询中按名称引用。在主查询期间,子查询有效地充当临时表或视图。每个子查询可以是`选择`,`桌子`,`价值观`,`插入`,`更新`或者`删除`陈述。编写数据修改语句时(`插入`,`更新`要么`删除`) 在`和`,通常包括一个`返回`条款。这是输出`返回`,*不是*该语句修改的基础表,它形成了由主查询读取的临时表。如果`返回`省略时,该语句仍会执行,但它不会产生任何输出,因此主查询不能将其作为表引用。 + +必须为每个指定名称(无模式限定)`和`询问。可选地,可以指定列名列表;如果省略,则从子查询中推断列名。 + +如果`递归的`被指定,它允许一个`选择`子查询以按名称引用自身。这样的子查询必须具有以下形式 + +``` +non_recursive_term UNION [ ALL | DISTINCT ] recursive_term +``` + +其中递归自引用必须出现在`联盟`.每个查询只允许一个递归自引用。不支持递归数据修改语句,但您可以使用递归的结果`选择`在数据修改语句中查询。看[第 7.8 节](queries-with.html)例如。 + +的另一个效果`递归的`就是它`和`查询不需要排序:一个查询可以引用列表中后面的另一个查询。(但是,没有实现循环引用或相互递归。)没有`递归的`,`和`查询只能引用兄弟`和`较早的查询`和`列表。 + +当有多个查询时`和`条款,`递归的`应该只写一次,紧接着`和`.它适用于`和`子句,尽管它对不使用递归或前向引用的查询没有影响。 + +可选的`搜索`子句计算一个*搜索序列列*可用于以广度优先或深度优先顺序对递归查询的结果进行排序。提供的列名列表指定用于跟踪已访问行的行键。一个名为的列*`search_seq_col_name`*将被添加到结果列列表中`和`询问。该列可以在外部查询中进行排序,以实现各自的排序。看[第 7.8.2.1 节](queries-with.html#QUERIES-WITH-SEARCH)举些例子。 + +可选的`循环`子句用于检测递归查询中的循环。提供的列名列表指定用于跟踪已访问行的行键。一个名为的列*`cycle_mark_col_name`*将被添加到结果列列表中`和`询问。此列将设置为*`周期标记值`*当检测到一个循环时,否则*`cycle_mark_default`*.此外,当检测到循环时,递归联合的处理将停止。*`周期标记值`*和*`cycle_mark_default`*必须是常量,并且它们必须可强制转换为通用数据类型,并且该数据类型必须具有不等式运算符。(SQL 标准要求它们是布尔常量或字符串,但 PostgreSQL 不要求。)默认情况下,`真的`和`错误的`(类型`布尔值`) 被使用。此外,有一列名为*`cycle_path_col_name`*将被添加到结果列列表中`和`询问。此列在内部用于跟踪访问的行。看[第 7.8.2.2 节](queries-with.html#QUERIES-WITH-CYCLE)举些例子。 + +这俩`搜索`和`循环`子句仅对递归有效`和`查询。这*`with_query`*必须是`联盟`(要么`联合所有`) 两个`选择`(或等效的)命令(无嵌套`联盟`s)。如果同时使用这两个子句,则由`搜索`子句出现在由`循环`条款。 + +主要查询和`和`查询都是(名义上)同时执行的。这意味着数据修改语句的影响`和`不能从查询的其他部分看到,除了阅读它的`返回`输出。如果两个这样的数据修改语句尝试修改同一行,则结果未指定。 + +的一个关键属性`和`查询是它们通常在每次执行主查询时只评估一次,即使主查询不止一次地引用它们。特别是,数据修改语句保证只执行一次,而不管主查询是读取它们的全部还是任何输出。 + +然而,一个`和`可以标记查询`未物化`取消此保证。在这种情况下,`和`查询可以折叠到主查询中,就好像它是一个简单的子查询一样`选择`在主查询中`从`条款。如果主查询引用那个,这会导致重复计算`和`多次查询;但如果每次这样的使用只需要几行`和`查询的总输出,`未物化`可以通过允许联合优化查询来提供净节省。`未物化`如果它附加到一个`和`递归或非无副作用的查询(即,不是普通的`选择`不包含 volatile 函数)。 + +默认情况下,无副作用`和`如果在主查询中仅使用一次,则查询将被折叠到主查询中`从`条款。这允许在语义上不可见的情况下对两个查询级别进行联合优化。然而,这种折叠可以通过标记来防止`和`查询为`物化`.这可能很有用,例如,如果`和`查询被用作优化围栏,以防止计划者选择错误的计划。v12 之前的 PostgreSQL 版本从未进行过这种折叠,因此为旧版本编写的查询可能依赖于`和`充当优化围栏。 + +看[第 7.8 节](queries-with.html)了解更多信息。 + +### `从`条款 + +这`从`子句指定一个或多个源表`选择`.如果指定了多个源,则结果是所有源的笛卡尔积(交叉连接)。但通常会添加资格条件(通过`在哪里`) 将返回的行限制为笛卡尔积的一小部分。 + +这`从`子句可以包含以下元素: + +*`表名`* + +现有表或视图的名称(可选模式限定)。如果`只要`在表名之前指定,仅扫描该表。如果`只要`如果未指定,则扫描该表及其所有后代表(如果有)。可选地,`*`可以在表名之后指定以明确指示包含后代表。 + +*`别名`* + +的替代名称`从`包含别名的项目。别名用于简洁或消除自连接的歧义(同一个表被扫描多次)。当提供别名时,它会完全隐藏表或函数的实际名称;例如给出`从 foo 作为 f`,其余的`选择`必须参考这个`从`项目为`f`不是`富`.如果编写了别名,则还可以编写列别名列表来为表的一个或多个列提供替代名称。 + +`表样 *`采样方法`* ( *`争论`* [, ...] ) [ 可重复 ( *`种子`* ) ]` + +一种`表样`a 后的子句*`表名`*表示指定的*`采样方法`*should be used to retrieve a subset of the rows in that table. This sampling precedes the application of any other filters such as`WHERE`clauses. The standard PostgreSQL distribution includes two sampling methods,`BERNOULLI`and`SYSTEM`, and other sampling methods can be installed in the database via extensions. + +The`BERNOULLI`and`SYSTEM`sampling methods each accept a single*`argument`*which is the fraction of the table to sample, expressed as a percentage between 0 and 100. This argument can be any`real`-valued expression. (Other sampling methods might accept more or different arguments.) These two methods each return a randomly-chosen sample of the table that will contain approximately the specified percentage of the table's rows. 
The`BERNOULLI`method scans the whole table and selects or ignores individual rows independently with the specified probability. The`SYSTEM`method does block-level sampling with each block having the specified chance of being selected; all rows in each selected block are returned. The`SYSTEM`method is significantly faster than the`BERNOULLI`method when small sampling percentages are specified, but it may return a less-random sample of the table as a result of clustering effects. + +The optional`REPEATABLE`clause specifies a*`seed`*number or expression to use for generating random numbers within the sampling method. The seed value can be any non-null floating-point value. Two queries that specify the same seed and*`argument`*如果同时没有更改表,则 values 将选择表的相同样本。但是不同的种子值通常会产生不同的样本。如果`可重复`没有给出,则根据系统生成的种子为每个查询选择一个新的随机样本。请注意,某些附加采样方法不接受`可重复`,并且每次使用都会产生新的样品。 + +*`选择`* + +一个子`选择`可以出现在`从`条款。这就像它的输出是在这个单一的持续时间内被创建为一个临时表一样`选择`命令。请注意,子`选择`必须用括号括起来,还有一个别名*必须*为其提供。一种[`价值观`](sql-values.html)命令也可以在这里使用。 + +*`with_query_name`* + +一种`和`查询通过写入其名称来引用,就像查询的名称是表名一样。(事实上​​,`和`query 出于主查询的目的隐藏任何同名的真实表。如有必要,您可以通过模式限定表名来引用同名的真实表。)可以以与表相同的方式提供别名。 + +*`函数名`* + +函数调用可以出现在`从`条款。(这对于返回结果集的函数特别有用,但可以使用任何函数。)这就像函数的输出在此单次执行期间被创建为临时表一样`选择`命令。如果函数的结果类型是复合的(包括一个函数有多个`出去`参数),每个属性成为隐式表中的单独列。 + +当可选`具有顺序性`子句被添加到函数调用,类型的附加列`大整数`将附加到函数的结果列。此列对函数结果集的行进行编号,从 1 开始。默认情况下,此列命名为序数`.`可以以与表相同的方式提供别名。 + +如果编写了别名,则还可以编写列别名列表以提供函数复合返回类型的一个或多个属性的替代名称,包括序数列(如果存在)。多个函数调用可以合并为一个 + +从`-条款项目通过包围他们`来自( ... ) 的行`.`这样一个项目的输出是每个函数的第一行的连接,然后是每个函数的第二行,等等。如果某些函数产生的行比其他函数少,则用空值代替缺失的数据,这样返回的总行数始终与生成最多行的函数相同。 + +如果函数已被定义为返回`记录`数据类型,然后是别名或关键字`作为`必须存在,后跟表格中的列定义列表`( *`列名`* *`数据类型`* [, ... ])`.列定义列表必须与函数返回的列的实际数量和类型相匹配。 + +使用时`来自( ... ) 的行`语法,如果其中一个函数需要列定义列表,最好将列定义列表放在函数调用之后`来自( ... ) 的行`.列定义列表可以放在`来自( ... ) 的行`仅当只有一个函数且没有`具有顺序性`条款。 + +使用`序数`连同列定义列表,您必须使用`来自( ... ) 的行`语法并将列定义列表放入其中`来自( ... ) 的行`. + +*`加入类型`* + +之一 + +- `[ 内部联接` + +- `左[外]加入` + +- `右[外]加入` + +- `全[外]加入` + +- `交叉连接` + + 为了`内`和`外`连接类型,必须指定连接条件,即恰好其中之一`自然`,`在 *`加入条件`*`, 要么`使用 (*`加入列`* [, ...])`.含义见下文。为了`交叉连接`, 这些子句都不能出现。 + + 一种`加入`子句结合了两个`从`项目,为方便起见,我们将其称为“表格”,但实际上它们可以是任何类型的`从`物品。必要时使用括号来确定嵌套顺序。在没有括号的情况下,`加入`s 从左到右嵌套。在任何情况下`加入`比逗号分隔得更紧密`从`- 列出项目。 + +`交叉连接`和`内部联接`产生一个简单的笛卡尔积,与在顶层列出两个表得到的结果相同`从`,但受连接条件(如果有)的限制。`交叉连接`相当于`内连接开启(真)`,也就是说,没有任何行被限定删除。这些连接类型只是一种符号上的便利,因为它们什么都不做,你不能用普通的`从`和`在哪里`. + +`左外连接`返回合格笛卡尔积中的所有行(即所有通过其连接条件的组合行),加上左侧表中没有通过连接条件的右侧行的每一行的副本。通过为右侧列插入空值,此左侧行扩展到连接表的整个宽度。请注意,只有`加入`在决定哪些行匹配时会考虑子句自身的条件。之后应用外部条件。 + +反过来,`右外连接`返回所有连接的行,加上每个不匹配的右侧行(左侧以空值扩展)的一行。这只是一种符号方便,因为您可以将其转换为`左外连接`通过切换左右表。 + +`全外连接`返回所有连接的行,加上每个不匹配的左侧行(右侧以空值扩展)的一行,以及每个不匹配的右侧行(左侧以空值扩展)的一行。 + +`在 *`加入条件`*` + +*`加入条件`*是导致类型值的表达式`布尔值`(类似于一个`在哪里`子句),它指定连接中的哪些行被认为是匹配的。 + +`使用 ( *`加入列`* [, ...] ) [ 作为 *`加入使用别名`* ]` + +形式子句`使用(a,b,...)`是简写`ON left_table.a = right_table.a AND left_table.b = right_table.b ...`.还,`使用`意味着每对等效列中只有一个将包含在连接输出中,而不是两者都包含。 + +如果一个*`加入使用别名`*指定名称时,它为连接列提供表别名。只有列中列出的连接列`使用`子句可通过此名称寻址。不同于一般的*`别名`*,这不会从查询的其余部分隐藏连接表的名称。也不同于常规*`别名`*,您不能编写列别名列表 - 连接列的输出名称与它们在`使用`列表。 + +`自然的` + +`自然的`是 a 的简写`使用`列出两个表中具有匹配名称的所有列。如果没有通用的列名,`自然的`相当于`正确`. + +`侧` + +这`侧`关键字可以在子之前`选择` `从`物品。这允许子`选择`引用列`从`出现在它前面的项目`从`列表。(没有`侧`, 每个子`选择`独立评估,因此不能交叉引用任何其他`从`物品。) + +`侧`也可以在函数调用之前`从`item,但在这种情况下它是一个干扰词,因为函数表达式可以参考前面`从`在任何情况下的项目。 + +一种`侧`项目可以出现在顶层`从`列表,或在一个`加入`树。在后一种情况下,它也可以指代左侧的任何项目`加入`它位于右侧。 + +当一个`从`项目包含`侧`交叉引用,评估过程如下:对于每一行`从`提供交叉引用的列或一组多行的项目`从`提供列的项目,`侧`使用该行或行集的列值评估项目。结果行像往常一样与计算它们的行连接。对列源表中的每一行或每组行重复此操作。 + +列源表必须是`内`或者`剩下`加入了`侧`项,否则将没有明确定义的行集来计算`侧`物品。因此,虽然一个结构如`*`X`* 右连接横向 *`是`*`在语法上是有效的,实际上是不允许的*`是`*参考*`X`*. 
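A minimal sketch of the `LATERAL` cross-reference pattern discussed above (table and column names hypothetical): for each manufacturer, fetch its three most recent products.

```
SELECT m.name, p.pname, p.released
FROM manufacturers m,
     LATERAL (SELECT pname, released
              FROM products
              WHERE products.mid = m.id      -- references the earlier FROM item
              ORDER BY released DESC
              LIMIT 3) AS p;
```

The subquery is re-evaluated for each row of `manufacturers`, which is exactly the per-row evaluation procedure described in the paragraphs above.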
+ +### `在哪里`条款 + +可选的`在哪里`子句具有一般形式 + +``` +WHERE condition +``` + +在哪里*`(健康)状况`*是任何计算结果类型的表达式`布尔值`.任何不满足此条件的行都将从输出中删除。如果在实际行值替换任何变量引用时返回 true,则该行满足条件。 + +### `通过...分组`条款 + +可选的`通过...分组`子句具有一般形式 + +``` +GROUP BY [ ALL | DISTINCT ] grouping_element [, ...] +``` + +`通过...分组`将所有选定的行压缩为一行,这些行对分组表达式共享相同的值。一个*`表达`*在 a 内使用*`分组元素`*可以是输入列名称,也可以是输出列的名称或序号 (`选择`列表项),或由输入列值形成的任意表达式。如有歧义,`通过...分组`name 将被解释为输入列名而不是输出列名。 + +如果有任何一个`分组集`,`卷起`或者`立方体`作为分组元素存在,则`通过...分组`子句作为一个整体定义了一些独立的*`分组集`*.这样的效果相当于构造一个`联合所有`在以单个分组集为它们的子查询之间`通过...分组`条款。可选的`清楚的`子句在处理之前删除重复集;它确实*不是*改造`联合所有`成一个`工会区别`.有关分组集处理的更多详细信息,请参见[第 7.2.4 节](queries-table-expressions.html#QUERIES-GROUPING-SETS). + +如果使用了聚合函数,则在构成每个组的所有行中计算,为每个组生成一个单独的值。(如果有聚合函数但没有`通过...分组`子句,查询被视为具有包含所有选定行的单个组。)可以通过附加一个来进一步过滤提供给每个聚合函数的行集`筛选`聚合函数调用的子句;看[第 4.2.7 节](sql-expressions.html#SYNTAX-AGGREGATES)了解更多信息。当一个`筛选`子句存在时,只有与它匹配的那些行才会包含在该聚合函数的输入中。 + +什么时候`通过...分组`存在,或存在任何聚合函数,它对`选择`列出表达式以引用未分组的列,除非在聚合函数中或当未分组的列在功能上依赖于分组的列时,因为否则将为未分组的列返回多个可能的值。如果分组列(或其子集)是包含未分组列的表的主键,则存在功能依赖性。 + +请记住,所有聚合函数都在评估任何“标量”表达式之前进行评估`拥有`条款或`选择`列表。这意味着,例如,一个`案子`表达式不能用于跳过聚合函数的评估;看[第 4.2.14 节](sql-expressions.html#SYNTAX-EXPRESS-EVAL). + +目前,`无密钥更新`,`更新`,`分享`和`关键分享`不能指定`通过...分组`. + +### `拥有`条款 + +可选的`拥有`子句具有一般形式 + +``` +HAVING condition +``` + +在哪里*`(健康)状况`*与指定的相同`在哪里`条款。 + +`拥有`消除不满足条件的组行。`拥有`不同于`在哪里`:`在哪里`在应用之前过滤单个行`通过...分组`, 尽管`拥有`过滤器组创建的行`通过...分组`.中引用的每一列*`(健康)状况`*必须明确引用分组列,除非引用出现在聚合函数中或未分组列在功能上依赖于分组列。 + +的存在`拥有`将查询变成分组查询,即使没有`通过...分组`条款。这与查询包含聚合函数但不包含聚合函数时发生的情况相同`通过...分组`条款。所有选定的行都被视为一个组,并且`选择`列出和`拥有`子句只能从聚合函数中引用表列。如果`拥有`条件为真,如果不为真则零行。 + +目前,`无密钥更新`,`更新`,`分享`和`关键分享`不能指定`拥有`. + +### `窗户`条款 + +可选的`窗户`子句具有一般形式 + +``` +WINDOW window_name AS ( window_definition ) [, ...] +``` + +在哪里*`窗口名称`*是一个可以引用的名称`超过`子句或后续窗口定义,以及*`窗口定义`*是 + +``` +[ existing_window_name ] +[ PARTITION BY expression [, ...] ] +[ ORDER BY expression [ ASC | DESC | USING operator ] [ NULLS { FIRST | LAST } ] [, ...] 
] +[ frame_clause ] +``` + +如果*`现有窗口名称`*被指定它必须引用一个较早的条目`窗户`列表;新窗口从该条目复制其分区子句,以及其排序子句(如果有)。在这种情况下,新窗口不能指定自己的`分区方式`子句,它可以指定`订购方式`仅当复制的窗口没有时。新窗口总是使用自己的框架子句;复制的窗口不得指定框架子句。 + +的元素`分区方式`list 的解释方式与 a 的元素大致相同[`通过...分组`](sql-select.html#SQL-GROUPBY)子句,除了它们总是简单的表达式,而不是输出列的名称或编号。另一个区别是这些表达式可以包含聚合函数调用,这在常规中是不允许的`通过...分组`条款。它们在这里是允许的,因为在分组和聚合之后发生窗口化。 + +同样,元素`订购方式`列表的解释方式与语句级别的元素大致相同[`订购方式`](sql-select.html#SQL-ORDERBY)子句,但表达式始终被视为简单表达式,而不是输出列的名称或编号。 + +可选的*`框架子句`*定义了*窗框*对于依赖于框架的窗口函数(并非全部)。窗口框架是查询的每一行的一组相关行(称为*当前行*)。这*`框架子句`*可以是其中之一 + +``` +{ RANGE | ROWS | GROUPS } frame_start [ frame_exclusion ] +{ RANGE | ROWS | GROUPS } BETWEEN frame_start AND frame_end [ frame_exclusion ] +``` + +在哪里*`frame_start`*和*`帧结束`*可以是其中之一 + +``` +UNBOUNDED PRECEDING +offset PRECEDING +CURRENT ROW +offset FOLLOWING +UNBOUNDED FOLLOWING +``` + +和*`框架排除`*可以是其中之一 + +``` +EXCLUDE CURRENT ROW +EXCLUDE GROUP +EXCLUDE TIES +EXCLUDE NO OTHERS +``` + +如果*`帧结束`*被省略,默认为`当前行`.限制是*`frame_start`*不可能是`无界跟随`,*`帧结束`*不可能是`前无界`, 和*`帧结束`*选择不能出现在上面的列表中*`frame_start`*和*`帧结束`*选项比*`frame_start`*选择确实——例如`当前行和 * 之间的范围`抵消`* 前文`不允许。 + +默认框架选项是`范围无界前`, 这与`无界前行和当前行之间的范围`;它将框架设置为从分区开始到当前行的最后一行的所有行*同行*(窗口的一行`订购方式`子句认为等同于当前行;如果没有,所有行都是对等的`订购方式`)。一般来说,`前无界`表示帧从分区的第一行开始,类似地`无界跟随`表示帧以分区的最后一行结束,无论`范围`,`行`或者`组`模式。在`行`模式,`当前行`表示帧以当前行开始或结束;但在`范围`或者`组`mode 表示帧以当前行的第一个或最后一个节点开始或结束`订购方式`订购。这*`抵消`* `前`和*`抵消`* `下列的`选项的含义因帧模式而异。在`行`模式*`抵消`*是一个整数,指示帧在当前行之前或之后开始或结束那么多行。在`组`模式*`抵消`*是一个整数,指示帧在当前行的对等组之前或之后开始或结束那么多对等组,其中*对等组*是一组根据窗口等效的行`订购方式`条款。在`范围`模式,使用一个*`抵消`*选项要求恰好有一个`订购方式`窗口定义中的列。然后框架包含那些排序列值不超过的行*`抵消`*小于(对于`前`) 或多于 (对于`下列的`) 当前行的排序列值。在这些情况下,数据类型*`抵消`*表达式取决于排序列的数据类型。对于数字排序列,它通常与排序列的类型相同,但对于日期时间排序列,它是`间隔`.在所有这些情况下,价值*`抵消`*必须为非空且非负数。此外,虽然*`抵消`*不必是简单的常量,也不能包含变量、聚合函数或窗口函数。 + +这*`框架排除`*选项允许从框架中排除当前行周围的行,即使根据框架开始和框架结束选项将它们包括在内。`排除当前行`从框架中排除当前行。`排除组`从框架中排除当前行及其排序对等方。`排除关系`从框架中排除当前行的任何对等方,但不排除当前行本身。`不排除其他人`简单地明确指定不排除当前行或其对等点的默认行为。 + +请注意,`行`模式可以产生不可预知的结果,如果`订购方式`ordering 不会唯一地对行进行排序。这`范围`和`组`模式旨在确保在`订购方式`排序被同等对待:给定对等组的所有行都将在框架中或从框架中排除。 + +一个目的`窗户`子句是指定的行为*窗口函数*出现在查询的[`选择`列表](sql-select.html#SQL-SELECT-LIST)或者[`订购方式`](sql-select.html#SQL-ORDERBY)条款。这些函数可以参考`窗户`子句条目中的名称`超过`条款。一种`窗户`但是,子句条目不必在任何地方引用;如果它没有在查询中使用,它会被忽略。可以不使用任何窗口函数`窗户`子句,因为窗口函数调用可以直接在它的`超过`条款。然而`窗户`当多个窗口函数需要相同的窗口定义时,子句可以节省输入。 + +目前,`无密钥更新`,`更新`,`分享`和`关键分享`不能指定`窗户`. + +窗口函数在详细描述[第 3.5 节](tutorial-window.html),[第 4.2.8 节](sql-expressions.html#SYNTAX-WINDOW-FUNCTIONS), 和[第 7.2.5 节](queries-table-expressions.html#QUERIES-WINDOW). + +### `选择`列表 + +这`选择`列表(在关键词之间`选择`和`从`) 指定形成输出行的表达式`选择`陈述。表达式可以(并且通常会)引用在`从`条款。 + +就像在表中一样,a 的每个输出列`选择`有一个名字。在一个简单的`选择`此名称仅用于标记要显示的列,但当`选择`是较大查询的子查询,该名称被较大查询视为子查询产生的虚拟表的列名。要指定用于输出列的名称,请编写`作为` *`输出名称`*在列的表达式之后。(可以省略`作为`,但前提是所需的输出名称不匹配任何 PostgreSQL 关键字(请参阅[附录 C](sql-keywords-appendix.html))。为了防止将来可能添加关键字,建议您始终编写`作为`或双引号输出名称。)如果您不指定列名,则 PostgreSQL 会自动选择一个名称。如果列的表达式是一个简单的列引用,则选择的名称与该列的名称相同。在更复杂的情况下,可以使用函数或类型名称,或者系统可能会使用生成的名称,例如`?柱子?`. 
### `SELECT` List

The `SELECT` list (between the key words `SELECT` and `FROM`) specifies expressions that form the output rows of the `SELECT` statement. The expressions can (and usually do) refer to columns computed in the `FROM` clause.

Just as in a table, every output column of a `SELECT` has a name. In a simple `SELECT` this name is just used to label the column for display, but when the `SELECT` is a sub-query of a larger query, the name is seen by the larger query as the column name of the virtual table produced by the sub-query. To specify the name to use for an output column, write `AS` *`output_name`* after the column's expression. (You can omit `AS`, but only if the desired output name does not match any PostgreSQL keyword (see [Appendix C](sql-keywords-appendix.html)). For protection against possible future keyword additions, it is recommended that you always either write `AS` or double-quote the output name.) If you do not specify a column name, a name is chosen automatically by PostgreSQL. If the column's expression is a simple column reference then the chosen name is the same as that column's name. In more complex cases a function or type name may be used, or the system may fall back on a generated name such as `?column?`.

An output column's name can be used to refer to the column's value in `ORDER BY` and `GROUP BY` clauses, but not in the `WHERE` or `HAVING` clauses; there you must write out the expression instead.

Instead of an expression, `*` can be written in the output list as a shorthand for all the columns of the selected rows. Also, you can write *`table_name`*`.*` as a shorthand for the columns coming from just that table. In these cases it is not possible to specify new names with `AS`; the output column names will be the same as the table columns' names.

According to the SQL standard, the expressions in the output list should be computed before applying `DISTINCT`, `ORDER BY`, or `LIMIT`. This is obviously necessary when using `DISTINCT`, since otherwise it's not clear what values are being made distinct. However, in many cases it is convenient if output expressions are computed after `ORDER BY` and `LIMIT`; particularly if the output list contains any volatile or expensive functions. With that behavior, the order of function evaluations is more intuitive and there will not be evaluations corresponding to rows that never appear in the output. PostgreSQL will effectively evaluate output expressions after sorting and limiting, so long as those expressions are not referenced in `DISTINCT`, `ORDER BY` or `GROUP BY`. (As a counterexample, `SELECT f(x) FROM tab ORDER BY 1` clearly must evaluate `f(x)` before sorting.) Output expressions that contain set-returning functions are effectively evaluated after sorting and before limiting, so that `LIMIT` will act to cut off the output from a set-returning function.

### Note

PostgreSQL versions before 9.6 did not provide any guarantees about the timing of evaluation of output expressions versus sorting and limiting; it depended on the form of the chosen query plan.

### `DISTINCT` Clause

If `SELECT DISTINCT` is specified, all duplicate rows are removed from the result set (one row is kept from each group of duplicates). `SELECT ALL` specifies the opposite: all rows are kept; that is the default.

`SELECT DISTINCT ON ( expression [, ...] )` keeps only the first row of each set of rows where the given expressions evaluate to equal. The `DISTINCT ON` expressions are interpreted using the same rules as for `ORDER BY` (see above). Note that the "first row" of each set is unpredictable unless `ORDER BY` is used to ensure that the desired row appears first. For example:

```
SELECT DISTINCT ON (location) location, time, report
    FROM weather_reports
    ORDER BY location, time DESC;
```

retrieves the most recent weather report for each location. But if we had not used `ORDER BY` to force descending order of time values for each location, we'd have gotten a report from an unpredictable time for each location.

The `DISTINCT ON` expression(s) must match the leftmost `ORDER BY` expression(s). The `ORDER BY` clause will normally contain additional expression(s) that determine the desired precedence of rows within each `DISTINCT ON` group.

Currently, `FOR NO KEY UPDATE`, `FOR UPDATE`, `FOR SHARE` and `FOR KEY SHARE` cannot be specified with `DISTINCT`.

### `UNION` Clause

The `UNION` clause has this general form:

```
select_statement UNION [ ALL | DISTINCT ] select_statement
```

*`select_statement`* is any `SELECT` statement without an `ORDER BY`, `LIMIT`, `FOR NO KEY UPDATE`, `FOR UPDATE`, `FOR SHARE`, or `FOR KEY SHARE` clause. (`ORDER BY` and `LIMIT` can be attached to a subexpression if it is enclosed in parentheses. Without parentheses, these clauses will be taken to apply to the result of the `UNION`, not to its right-hand input expression.)

The `UNION` operator computes the set union of the rows returned by the involved `SELECT` statements. A row is in the set union of two result sets if it appears in at least one of the result sets. The two `SELECT` statements that represent the direct operands of the `UNION` must produce the same number of columns, and corresponding columns must be of compatible data types.

The result of `UNION` does not contain any duplicate rows unless the `ALL` option is specified. `ALL` prevents elimination of duplicates. (Therefore, `UNION ALL` is usually significantly quicker than `UNION`; use `ALL` when you can.) `DISTINCT` can be written to explicitly specify the default behavior of eliminating duplicate rows.

Multiple `UNION` operators in the same `SELECT` statement are evaluated left to right, unless otherwise indicated by parentheses.

Currently, `FOR NO KEY UPDATE`, `FOR UPDATE`, `FOR SHARE` and `FOR KEY SHARE` cannot be specified either for a `UNION` result or for any input of a `UNION`.

### `INTERSECT` Clause

The `INTERSECT` clause has this general form:

```
select_statement INTERSECT [ ALL | DISTINCT ] select_statement
```

*`select_statement`* is any `SELECT` statement without an `ORDER BY`, `LIMIT`, `FOR NO KEY UPDATE`, `FOR UPDATE`, `FOR SHARE`, or `FOR KEY SHARE` clause.

The `INTERSECT` operator computes the set intersection of the rows returned by the involved `SELECT` statements. A row is in the intersection of two result sets if it appears in both result sets.

The result of `INTERSECT` does not contain any duplicate rows unless the `ALL` option is specified. With `ALL`, a row that has *`m`* duplicates in the left table and *`n`* duplicates in the right table will appear min(*`m`*, *`n`*) times in the result set. `DISTINCT` can be written to explicitly specify the default behavior of eliminating duplicate rows.

Multiple `INTERSECT` operators in the same `SELECT` statement are evaluated left to right, unless parentheses dictate otherwise. `INTERSECT` binds more tightly than `UNION`. That is, `A UNION B INTERSECT C` will be read as `A UNION (B INTERSECT C)`.

Currently, `FOR NO KEY UPDATE`, `FOR UPDATE`, `FOR SHARE` and `FOR KEY SHARE` cannot be specified either for an `INTERSECT` result or for any input of an `INTERSECT`.

### `EXCEPT` Clause

The `EXCEPT` clause has this general form:

```
select_statement EXCEPT [ ALL | DISTINCT ] select_statement
```

*`select_statement`* is any `SELECT` statement without an `ORDER BY`, `LIMIT`, `FOR NO KEY UPDATE`, `FOR UPDATE`, `FOR SHARE`, or `FOR KEY SHARE` clause.

The `EXCEPT` operator computes the set of rows that are in the result of the left `SELECT` statement but not in the result of the right one.

The result of `EXCEPT` does not contain any duplicate rows unless the `ALL` option is specified. With `ALL`, a row that has *`m`* duplicates in the left table and *`n`* duplicates in the right table will appear max(*`m`*-*`n`*, 0) times in the result set. `DISTINCT` can be written to explicitly specify the default behavior of eliminating duplicate rows.

Multiple `EXCEPT` operators in the same `SELECT` statement are evaluated left to right, unless parentheses dictate otherwise. `EXCEPT` binds at the same level as `UNION`.

Currently, `FOR NO KEY UPDATE`, `FOR UPDATE`, `FOR SHARE` and `FOR KEY SHARE` cannot be specified either for an `EXCEPT` result or for any input of an `EXCEPT`.
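To apply `ORDER BY` or `LIMIT` to an individual operand of a set operation rather than to its overall result, enclose that operand in parentheses, as described above. A sketch, with hypothetical tables `current_items` and `archived_items`:

```
(SELECT name FROM current_items ORDER BY name LIMIT 10)
UNION ALL
(SELECT name FROM archived_items ORDER BY name LIMIT 10);
```

Without the parentheses, a trailing `ORDER BY ... LIMIT` would apply to the combined result instead.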
### `ORDER BY` Clause

The optional `ORDER BY` clause has this general form:

```
ORDER BY expression [ ASC | DESC | USING operator ] [ NULLS { FIRST | LAST } ] [, ...]
```

The `ORDER BY` clause causes the result rows to be sorted according to the specified expression(s). If two rows are equal according to the leftmost expression, they are compared according to the next expression and so on. If they are equal according to all specified expressions, they are returned in an implementation-dependent order.

Each *`expression`* can be the name or ordinal number of an output column (`SELECT` list item), or it can be an arbitrary expression formed from input-column values.

The ordinal number refers to the ordinal (left-to-right) position of the output column. This feature makes it possible to define an ordering on the basis of a column that does not have a unique name. This is never absolutely necessary because it is always possible to assign a name to an output column using the `AS` clause.

It is also possible to use arbitrary expressions in the `ORDER BY` clause, including columns that do not appear in the `SELECT` output list. Thus the following statement is valid:

```
SELECT name FROM distributors ORDER BY code;
```

A limitation of this feature is that an `ORDER BY` clause applying to the result of a `UNION`, `INTERSECT`, or `EXCEPT` clause can only specify an output column name or number, not an expression.

If an `ORDER BY` expression is a simple name that matches both an output column name and an input column name, `ORDER BY` will interpret it as the output column name. This is the opposite of the choice that `GROUP BY` will make in the same situation. This inconsistency is made to be compatible with the SQL standard.

Optionally one can add the key word `ASC` (ascending) or `DESC` (descending) after any expression in the `ORDER BY` clause. If not specified, `ASC` is assumed by default. Alternatively, a specific ordering operator name can be specified in the `USING` clause. An ordering operator must be a less-than or greater-than member of some B-tree operator family. `ASC` is usually equivalent to `USING <` and `DESC` is usually equivalent to `USING >`. (But the creator of a user-defined data type can define exactly what the default sort ordering is, and it might correspond to operators with other names.)

If `NULLS LAST` is specified, null values sort after all non-null values; if `NULLS FIRST` is specified, null values sort before all non-null values. If neither is specified, the default behavior is `NULLS LAST` when `ASC` is specified or implied, and `NULLS FIRST` when `DESC` is specified (thus, the default is to act as though nulls are larger than non-nulls). When `USING` is specified, the default nulls ordering depends on whether the operator is a less-than or greater-than operator.

Note that ordering options apply only to the expression they follow; for example `ORDER BY x, y DESC` does not mean the same thing as `ORDER BY x DESC, y DESC`.

Character-string data is sorted according to the collation that applies to the column being sorted. That can be overridden at need by including a `COLLATE` clause in the *`expression`*, for example `ORDER BY mycolumn COLLATE "en_US"`. For more information see [Section 4.2.10](sql-expressions.html#SQL-SYNTAX-COLLATE-EXPRS) and [Section 24.2](collation.html).

### `LIMIT` Clause

The `LIMIT` clause consists of two independent sub-clauses:

```
LIMIT { count | ALL }
OFFSET start
```

The parameter *`count`* specifies the maximum number of rows to return, while *`start`* specifies the number of rows to skip before starting to return rows. When both are specified, *`start`* rows are skipped before starting to count the *`count`* rows to be returned.

If the *`count`* expression evaluates to NULL, it is treated as `LIMIT ALL`, i.e., no limit. If *`start`* evaluates to NULL, it is treated the same as `OFFSET 0`.

SQL:2008 introduced a different syntax to achieve the same result, which PostgreSQL also supports. It is:

```
OFFSET start { ROW | ROWS }
FETCH { FIRST | NEXT } [ count ] { ROW | ROWS } { ONLY | WITH TIES }
```

In this syntax, the *`start`* or *`count`* value is required by the standard to be a literal constant, a parameter, or a variable name; as a PostgreSQL extension, other expressions are allowed, but will generally need to be enclosed in parentheses to avoid ambiguity. If *`count`* is omitted in a `FETCH` clause, it defaults to 1. The `WITH TIES` option is used to return any additional rows that tie for the last place in the result set according to the `ORDER BY` clause; `ORDER BY` is mandatory in this case, and `SKIP LOCKED` is not allowed. `ROW` and `ROWS` as well as `FIRST` and `NEXT` are noise words that don't influence the effects of these clauses. According to the standard, the `OFFSET` clause must come before the `FETCH` clause if both are present; but PostgreSQL is laxer and allows either order.

When using `LIMIT`, it is a good idea to use an `ORDER BY` clause that constrains the result rows into a unique order. Otherwise you will get an unpredictable subset of the query's rows; you might be asking for the tenth through twentieth rows, but tenth through twentieth in what ordering? You don't know what ordering unless you specify `ORDER BY`.

The query planner takes `LIMIT` into account when generating a query plan, so you are very likely to get different plans (yielding different row orders) depending on what you use for `LIMIT` and `OFFSET`. Thus, using different `LIMIT`/`OFFSET` values to select different subsets of a query result *will give inconsistent results* unless you enforce a predictable result ordering with `ORDER BY`. This is not a bug; it is an inherent consequence of the fact that SQL does not promise to deliver the results of a query in any particular order unless `ORDER BY` is used to constrain the order.

It is even possible for repeated executions of the same `LIMIT` query to return different subsets of the rows of a table, if there is not an `ORDER BY` to enforce selection of a deterministic subset. Again, this is not a bug; determinism of the results is simply not guaranteed in such a case.
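As an illustration of `WITH TIES`, the following sketch (the `exams` table is hypothetical) returns the three best scores plus any further rows that tie with the third-place score:

```
SELECT student, score
    FROM exams
    ORDER BY score DESC
    FETCH FIRST 3 ROWS WITH TIES;
```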
### The Locking Clause

`FOR UPDATE`, `FOR NO KEY UPDATE`, `FOR SHARE` and `FOR KEY SHARE` are *locking clauses*; they affect how `SELECT` locks rows as they are obtained from the table.

The locking clause has the general form

```
FOR lock_strength [ OF table_name [, ...] ] [ NOWAIT | SKIP LOCKED ]
```

where *`lock_strength`* can be one of

```
UPDATE
NO KEY UPDATE
SHARE
KEY SHARE
```

For more information on each row-level lock mode, refer to [Section 13.3.2](explicit-locking.html#LOCKING-ROWS).

To prevent the operation from waiting for other transactions to commit, use either the `NOWAIT` or `SKIP LOCKED` option. With `NOWAIT`, the statement reports an error, rather than waiting, if a selected row cannot be locked immediately. With `SKIP LOCKED`, any selected rows that cannot be immediately locked are skipped. Skipping locked rows provides an inconsistent view of the data, so this is not suitable for general purpose work, but can be used to avoid lock contention with multiple consumers accessing a queue-like table. Note that `NOWAIT` and `SKIP LOCKED` apply only to the row-level lock(s); the required `ROW SHARE` table-level lock is still taken in the ordinary way (see [Chapter 13](mvcc.html)). You can use [`LOCK`](sql-lock.html) with the `NOWAIT` option first, if you need to acquire the table-level lock without waiting.
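For example, competing workers draining a queue-like table can each claim a pending row without blocking on rows already claimed by other workers. A sketch, assuming a hypothetical `jobs` table:

```
SELECT id, payload
    FROM jobs
    WHERE done = false
    ORDER BY id
    FOR UPDATE SKIP LOCKED
    LIMIT 1;
```

Rows locked by concurrent workers are silently skipped, so each worker receives a different job.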
If specific tables are named in a locking clause, then only rows coming from those tables are locked; any other tables used in the `SELECT` are simply read as usual. A locking clause without a table list affects all tables used in the statement. If a locking clause is applied to a view or sub-query, it affects all tables used in the view or sub-query. However, these clauses do not apply to `WITH` queries referenced by the primary query. If you want row locking to occur within a `WITH` query, specify a locking clause within the `WITH` query.

Multiple locking clauses can be written if it is necessary to specify different locking behavior for different tables. If the same table is mentioned (or implicitly affected) by more than one locking clause, then it is processed as if it was only specified by the strongest one. Similarly, a table is processed as `NOWAIT` if that is specified in any of the clauses affecting it. Otherwise, it is processed as `SKIP LOCKED` if that is specified in any of the clauses affecting it.

The locking clauses cannot be used in contexts where returned rows cannot be clearly identified with individual table rows; for example they cannot be used with aggregation.

When a locking clause appears at the top level of a `SELECT` query, the rows that are locked are exactly those that are returned by the query; in the case of a join query, the rows locked are those that contribute to returned join rows. In addition, rows that satisfied the query conditions as of the query snapshot will be locked, although they will not be returned if they were updated after the snapshot and no longer satisfy the query conditions. If a `LIMIT` is used, locking stops once enough rows have been returned to satisfy the limit (but note that rows skipped over by `OFFSET` will get locked). Similarly, if a locking clause is used in a cursor's query, only rows actually fetched or stepped past by the cursor will be locked.

When a locking clause appears in a sub-`SELECT`, the rows locked are those returned to the outer query by the sub-query. This might involve fewer rows than inspection of the sub-query alone would suggest, since conditions from the outer query might be used to optimize execution of the sub-query. For example,

```
SELECT * FROM (SELECT * FROM mytable FOR UPDATE) ss WHERE col1 = 5;
```

will lock only rows having `col1 = 5`, even though that condition is not textually within the sub-query.

Previous releases failed to preserve a lock which is upgraded by a later savepoint. For example, this code:

```
BEGIN;
SELECT * FROM mytable WHERE key = 1 FOR UPDATE;
SAVEPOINT s;
UPDATE mytable SET ... WHERE key = 1;
ROLLBACK TO s;
```

would fail to preserve the `FOR UPDATE` lock after the `ROLLBACK TO`. This has been fixed in release 9.3.

### Caution

It is possible for a `SELECT` command running at the `READ COMMITTED` transaction isolation level and using `ORDER BY` and a locking clause to return rows out of order. This is because `ORDER BY` is applied first. The command sorts the result, but might then block trying to obtain a lock on one or more of the rows. Once the `SELECT` unblocks, some of the ordering column values might have been modified, leading to those rows appearing to be out of order (though they are in order in terms of the original column values). This can be worked around at need by placing the `FOR UPDATE/SHARE` clause in a sub-query, for example

```
SELECT * FROM (SELECT * FROM mytable FOR UPDATE) ss ORDER BY column1;
```

Note that this will result in locking all rows of `mytable`, whereas `FOR UPDATE` at the top level would lock only the actually returned rows. This can make for a significant performance difference, particularly if the `ORDER BY` is combined with `LIMIT` or other restrictions. So this technique is recommended only if concurrent updates of the ordering columns are expected and a strictly sorted result is required.

At the `REPEATABLE READ` or `SERIALIZABLE` transaction isolation level this would cause a serialization failure (with an `SQLSTATE` of `'40001'`), so there is no possibility of receiving rows out of order under these isolation levels.

### `TABLE` Command

The command

```
TABLE name
```

is equivalent to

```
SELECT * FROM name
```

It can be used as a top-level command or as a space-saving syntax variant in parts of complex queries. Only the `WITH`, `UNION`, `INTERSECT`, `EXCEPT`, `ORDER BY`, `LIMIT`, `OFFSET`, `FETCH` and `FOR` locking clauses can be used with `TABLE`; the `WHERE` clause and any form of aggregation cannot be used.

## Examples

To join the table `films` with the table `distributors`:

```
SELECT f.title, f.did, d.name, f.date_prod, f.kind
    FROM distributors d, films f
    WHERE f.did = d.did;
```

## Compatibility

Of course, the `SELECT` statement is compatible with the SQL standard. But there are some extensions and some missing features.

### Omitted `FROM` Clauses

PostgreSQL allows one to omit the `FROM` clause. It has a straightforward use to compute the results of simple expressions:

```
SELECT 2+2;
```

(The single output column of such a query gets the generated name `?column?`.)

### Empty `SELECT` Lists

The list of output expressions after `SELECT` can be empty, producing a zero-column result table. This is not valid syntax according to the SQL standard. PostgreSQL allows it to be consistent with allowing zero-column tables. However, an empty list is not allowed when `DISTINCT` is used.

### Omitting the `AS` Key Word

In the SQL standard, the optional key word `AS` can be omitted before an output column name whenever the new column name is a valid column name (that is, not the same as any reserved keyword). PostgreSQL is slightly more restrictive: `AS` is required if the new column name matches any keyword at all, reserved or not. Recommended practice is to use `AS` or double-quote output column names, to prevent any possible conflict against future keyword additions.

In `FROM` items, both the standard and PostgreSQL allow `AS` to be omitted before an alias that is an unreserved keyword. But this is impractical for output column names, because of syntactic ambiguities.

### `ONLY` and Inheritance

The SQL standard requires parentheses around the table name when writing `ONLY`, for example `SELECT * FROM ONLY (tab1), ONLY (tab2) WHERE ...`. PostgreSQL considers these parentheses to be optional.

PostgreSQL allows a trailing `*` to be written to explicitly specify the non-`ONLY` behavior of including child tables. The standard does not allow this.

(These points apply equally to all SQL commands supporting the `ONLY` option.)

### `TABLESAMPLE` Clause Restrictions

The `TABLESAMPLE` clause is currently accepted only on regular tables and materialized views. According to the SQL standard it should be possible to apply it to any `FROM` item.

### Function Calls in `FROM`

PostgreSQL allows a function call to be written directly as a member of the `FROM` list. In the SQL standard it would be necessary to wrap such a function call in a sub-`SELECT`; that is, the syntax `FROM func(...) alias` is approximately equivalent to `FROM LATERAL (SELECT func(...)) alias`. Note that `LATERAL` is considered to be implicit; this is because the standard requires `LATERAL` semantics for an `UNNEST()` item in `FROM`. PostgreSQL treats `UNNEST()` the same as other set-returning functions.

### Namespace Available to `GROUP BY` and `ORDER BY`

In the SQL-92 standard, an `ORDER BY` clause can only use output column names or numbers, while a `GROUP BY` clause can only use expressions based on input column names. PostgreSQL extends each of these clauses to allow the other choice as well (but it uses the standard's interpretation if there is ambiguity). PostgreSQL also allows both clauses to specify arbitrary expressions. Note that names appearing in an expression will always be taken as input-column names, not as output-column names.

SQL:1999 and later use a slightly different definition which is not entirely upward compatible with SQL-92. In most cases, however, PostgreSQL will interpret an `ORDER BY` or `GROUP BY` expression the same way SQL:1999 does.

### Functional Dependencies

PostgreSQL recognizes functional dependency (allowing columns to be omitted from `GROUP BY`) only when a table's primary key is included in the `GROUP BY` list. The SQL standard specifies additional conditions that should be recognized.

### `LIMIT` and `OFFSET`

The clauses `LIMIT` and `OFFSET` are PostgreSQL-specific syntax, also used by MySQL. The SQL:2008 standard has introduced the clauses `OFFSET ... FETCH {FIRST|NEXT} ...` for the same functionality, as shown above in [LIMIT Clause](sql-select.html#SQL-LIMIT). This syntax is also used by IBM DB2. (Applications written for Oracle frequently use a workaround involving the automatically generated `rownum` column, which is not available in PostgreSQL, to implement the effects of these clauses.)

### `FOR NO KEY UPDATE`, `FOR UPDATE`, `FOR SHARE`, `FOR KEY SHARE`

Although `FOR UPDATE` appears in the SQL standard, the standard allows it only as an option of `DECLARE CURSOR`. PostgreSQL allows it in any `SELECT` query as well as in sub-`SELECT`s, but this is an extension. The `FOR NO KEY UPDATE`, `FOR SHARE` and `FOR KEY SHARE` variants, as well as the `NOWAIT` and `SKIP LOCKED` options, do not appear in the standard.

### Data-Modifying Statements in `WITH`

PostgreSQL allows `INSERT`, `UPDATE`, and `DELETE` to be used as `WITH` queries. This is not found in the SQL standard.

### Nonstandard Clauses

`DISTINCT ON ( ... )` is an extension of the SQL standard.

`ROWS FROM( ... )` is an extension of the SQL standard.

The `MATERIALIZED` and `NOT MATERIALIZED` options of `WITH` are extensions of the SQL standard.

diff --git a/docs/X/sql-selectinto.md b/docs/en/sql-selectinto.md similarity index 100% rename from docs/X/sql-selectinto.md rename to docs/en/sql-selectinto.md diff --git a/docs/en/sql-selectinto.zh.md b/docs/en/sql-selectinto.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..44dd12d994d1deea0a2452eb69f0151955d008d0 --- /dev/null +++ b/docs/en/sql-selectinto.zh.md @@ -0,0 +1,65 @@

## SELECT INTO

SELECT INTO — define a new table from the results of a query

## Synopsis

```
[ WITH [ RECURSIVE ] with_query [, ...] ]
SELECT [ ALL | DISTINCT [ ON ( expression [, ...] ) ] ]
    * | expression [ [ AS ] output_name ] [, ...]
    INTO [ TEMPORARY | TEMP | UNLOGGED ] [ TABLE ] new_table
    [ FROM from_item [, ...] ]
    [ WHERE condition ]
    [ GROUP BY expression [, ...] ]
    [ HAVING condition ]
    [ WINDOW window_name AS ( window_definition ) [, ...] ]
    [ { UNION | INTERSECT | EXCEPT } [ ALL | DISTINCT ] select ]
    [ ORDER BY expression [ ASC | DESC | USING operator ] [ NULLS { FIRST | LAST } ] [, ...] ]
    [ LIMIT { count | ALL } ]
    [ OFFSET start [ ROW | ROWS ] ]
    [ FETCH { FIRST | NEXT } [ count ] { ROW | ROWS } ONLY ]
    [ FOR { UPDATE | SHARE } [ OF table_name [, ...] ] [ NOWAIT ] [...] ]
```

## Description

`SELECT INTO` creates a new table and fills it with data computed by a query. The data is not returned to the client, as it is with a normal `SELECT`. The new table's columns have the names and data types associated with the output columns of the `SELECT`.

## Parameters

`TEMPORARY` or `TEMP`

If specified, the table is created as a temporary table. Refer to [CREATE TABLE](sql-createtable.html) for details.

`UNLOGGED`

If specified, the table is created as an unlogged table. Refer to [CREATE TABLE](sql-createtable.html) for details.

*`new_table`*

The name (optionally schema-qualified) of the table to be created.

All other parameters are described in detail under [SELECT](sql-select.html).

## Notes

[`CREATE TABLE AS`](sql-createtableas.html) is functionally similar to `SELECT INTO`. `CREATE TABLE AS` is the recommended syntax, since this form of `SELECT INTO` is not available in ECPG or PL/pgSQL, because they interpret the `INTO` clause differently. Furthermore, `CREATE TABLE AS` offers a superset of the functionality provided by `SELECT INTO`.
In contrast to `CREATE TABLE AS`, `SELECT INTO` does not allow specifying properties like a table's access method with [`USING method`](sql-createtable.html#SQL-CREATETABLE-METHOD) or the table's tablespace with [`TABLESPACE tablespace_name`](sql-createtable.html#SQL-CREATETABLE-TABLESPACE). Use `CREATE TABLE AS` if necessary. Therefore, the default table access method is chosen for the new table. See [default_table_access_method](runtime-config-client.html#GUC-DEFAULT-TABLE-ACCESS-METHOD) for more information.

## Examples

Create a new table `films_recent` consisting of only recent entries from the table `films`:

```
SELECT * INTO films_recent FROM films WHERE date_prod >= '2002-01-01';
```

## Compatibility

The SQL standard uses `SELECT INTO` to represent selecting values into scalar variables of a host program, rather than creating a new table. This indeed is the usage found in ECPG (see [Chapter 36](ecpg.html)) and PL/pgSQL (see [Chapter 43](plpgsql.html)). The PostgreSQL usage of `SELECT INTO` to represent table creation is historical. Some other SQL implementations also use `SELECT INTO` in this way (but most SQL implementations support `CREATE TABLE AS` instead). Apart from such compatibility considerations, it is best to use `CREATE TABLE AS` for this purpose in new code.

## See Also

[CREATE TABLE AS](sql-createtableas.html)

diff --git a/docs/X/sql-set-constraints.md b/docs/en/sql-set-constraints.md similarity index 100% rename from docs/X/sql-set-constraints.md rename to docs/en/sql-set-constraints.md diff --git a/docs/en/sql-set-constraints.zh.md b/docs/en/sql-set-constraints.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..3e0f33cdef32fbc4cd59033752320f6aacc3613d --- /dev/null +++ b/docs/en/sql-set-constraints.zh.md @@ -0,0 +1,33 @@

## SET CONSTRAINTS

SET CONSTRAINTS — set constraint check timing for the current transaction

## Synopsis

```
SET CONSTRAINTS { ALL | name [, ...] } { DEFERRED | IMMEDIATE }
```

## Description

`SET CONSTRAINTS` sets the behavior of constraint checking within the current transaction. `IMMEDIATE` constraints are checked at the end of each statement. `DEFERRED` constraints are not checked until transaction commit. Each constraint has its own `IMMEDIATE` or `DEFERRED` mode.

Upon creation, a constraint is given one of three characteristics: `DEFERRABLE INITIALLY DEFERRED`, `DEFERRABLE INITIALLY IMMEDIATE`, or `NOT DEFERRABLE`. The third class is always `IMMEDIATE` and is not affected by the `SET CONSTRAINTS` command. The first two classes start every transaction in the indicated mode, but their behavior can be changed within a transaction by `SET CONSTRAINTS`.

`SET CONSTRAINTS` with a list of constraint names changes the mode of just those constraints (which must all be deferrable). Each constraint name can be schema-qualified. The current schema search path is used to find the first matching name if no schema name is specified. `SET CONSTRAINTS ALL` changes the mode of all deferrable constraints.

When `SET CONSTRAINTS` changes the mode of a constraint from `DEFERRED` to `IMMEDIATE`, the new mode takes effect retroactively: any outstanding data modifications that would have been checked at the end of the transaction are instead checked during the execution of the `SET CONSTRAINTS` command. If any such constraint is violated, the `SET CONSTRAINTS` fails (and does not change the constraint mode). Thus, `SET CONSTRAINTS` can be used to force checking of constraints to occur at a specific point in a transaction.

Currently, only `UNIQUE`, `PRIMARY KEY`, `REFERENCES` (foreign key), and `EXCLUDE` constraints are affected by this setting. `NOT NULL` and `CHECK` constraints are always checked immediately when a row is inserted or modified (*not* at the end of the statement). Uniqueness and exclusion constraints that have not been declared `DEFERRABLE` are also checked immediately.

The firing of triggers that are declared as "constraint triggers" is also controlled by this setting: they fire at the same time that the associated constraint should be checked.
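As a sketch of this retroactive behavior, consider a deferrable foreign-key constraint (the table and constraint names below are hypothetical): child rows can be inserted before their parent, with the check forced at a chosen point rather than at commit:

```
BEGIN;
SET CONSTRAINTS orders_customer_fk DEFERRED;
INSERT INTO orders (id, customer_id) VALUES (1, 42);  -- parent row does not exist yet
INSERT INTO customers (id) VALUES (42);               -- now it does
SET CONSTRAINTS orders_customer_fk IMMEDIATE;         -- verify here instead of at COMMIT
COMMIT;
```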
## Notes

Because PostgreSQL does not require constraint names to be unique within a schema (but only per-table), it is possible that there is more than one match for a specified constraint name. In this case `SET CONSTRAINTS` acts on all matches. For a non-schema-qualified name, once a match or matches have been found in some schema in the search path, schemas appearing later in the path are not searched.

This command only alters the behavior of constraints within the current transaction. Issuing this outside of a transaction block emits a warning and otherwise has no effect.

## Compatibility

This command complies with the behavior defined in the SQL standard, except for the limitation that, in PostgreSQL, it does not apply to `NOT NULL` and `CHECK` constraints. Also, PostgreSQL checks non-deferrable uniqueness constraints immediately, not at end of statement as the standard would suggest.

diff --git a/docs/X/sql-set-role.md b/docs/en/sql-set-role.md similarity index 100% rename from docs/X/sql-set-role.md rename to docs/en/sql-set-role.md diff --git a/docs/en/sql-set-role.zh.md b/docs/en/sql-set-role.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..8ce89c09f35e317121801021a50d5f71dcc56071 --- /dev/null +++ b/docs/en/sql-set-role.zh.md @@ -0,0 +1,48 @@

## SET ROLE

SET ROLE — set the current user identifier of the current session

## Synopsis

```
SET [ SESSION | LOCAL ] ROLE role_name
SET [ SESSION | LOCAL ] ROLE NONE
RESET ROLE
```

## Description

This command sets the current user identifier of the current SQL session to be *`role_name`*. The role name can be written as either an identifier or a string literal. After `SET ROLE`, permissions checking for SQL commands is carried out as though the named role were the one that had logged in originally.

The specified *`role_name`* must be a role that the current session user is a member of. (If the session user is a superuser, any role can be selected.)

The `SESSION` and `LOCAL` modifiers act the same as for the regular [`SET`](sql-set.html) command.

`SET ROLE NONE` sets the current user identifier to the current session user identifier, as returned by `session_user`. `RESET ROLE` sets the current user identifier to the connection-time setting specified by the [command-line options](libpq-connect.html#LIBPQ-CONNECT-OPTIONS), [`ALTER ROLE`](sql-alterrole.html), or [`ALTER DATABASE`](sql-alterdatabase.html), if any such settings exist. Otherwise, `RESET ROLE` sets the current user identifier to the current session user identifier. These forms can be executed by any user.

## Notes

Using this command, it is possible to either add privileges or restrict one's privileges. If the session user role has the `INHERIT` attribute, then it automatically has all the privileges of every role that it could `SET ROLE` to; in this case `SET ROLE` effectively drops all the privileges assigned directly to the session user and to the other roles it is a member of, leaving only the privileges available to the named role. On the other hand, if the session user role has the `NOINHERIT` attribute, `SET ROLE` drops the privileges assigned directly to the session user and instead acquires the privileges available to the named role.

In particular, when a superuser chooses to `SET ROLE` to a non-superuser role, they lose their superuser privileges.

`SET ROLE` has effects comparable to [`SET SESSION AUTHORIZATION`](sql-set-session-authorization.html), but the privilege checks involved are quite different. Also, `SET SESSION AUTHORIZATION` determines which roles are allowable for later `SET ROLE` commands, whereas changing roles with `SET ROLE` does not change the set of roles allowed to a later `SET ROLE`.

`SET ROLE` does not process session variables as specified by the role's [`ALTER ROLE`](sql-alterrole.html) settings; this only happens during login.

`SET ROLE` cannot be used within a `SECURITY DEFINER` function.

## Examples

```
SELECT SESSION_USER, CURRENT_USER;

 session_user | current_user
--------------+--------------
 peter        | peter

SET ROLE 'paul';

SELECT SESSION_USER, CURRENT_USER;

 session_user | current_user
--------------+--------------
 peter        | paul
```

## Compatibility

PostgreSQL allows identifier syntax (`"rolename"`), while the SQL standard requires the role name to be written as a string literal. SQL does not allow this command during a transaction; PostgreSQL does not make this restriction because there is no reason to. The `SESSION` and `LOCAL` modifiers are a PostgreSQL extension, as is the `RESET` syntax.

## See Also

[SET SESSION AUTHORIZATION](sql-set-session-authorization.html)

diff --git a/docs/X/sql-set-session-authorization.md b/docs/en/sql-set-session-authorization.md similarity index 100% rename from docs/X/sql-set-session-authorization.md rename to docs/en/sql-set-session-authorization.md diff --git a/docs/en/sql-set-session-authorization.zh.md b/docs/en/sql-set-session-authorization.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..4adf807ba74fd474ff75b671c61afc512632a566 --- /dev/null +++ b/docs/en/sql-set-session-authorization.zh.md @@ -0,0 +1,44 @@

## SET SESSION AUTHORIZATION

SET SESSION AUTHORIZATION — set the session user identifier and the current user identifier of the current session

## Synopsis

```
SET [ SESSION | LOCAL ] SESSION AUTHORIZATION user_name
SET [ SESSION | LOCAL ] SESSION AUTHORIZATION DEFAULT
RESET SESSION AUTHORIZATION
```

## Description

This command sets the session user identifier and the current user identifier of the current SQL session to be *`user_name`*. The user name can be written as either an identifier or a string literal. Using this command, it is possible, for example, to temporarily become an unprivileged user and later switch back to being a superuser.

The session user identifier is initially set to be the (possibly authenticated) user name provided by the client. The current user identifier is normally equal to the session user identifier, but might change temporarily in the context of `SECURITY DEFINER` functions and similar mechanisms; it can also be changed by [`SET ROLE`](sql-set-role.html). The current user identifier is relevant for permission checking.

The session user identifier can be changed only if the initial session user (the *authenticated user*) had the superuser privilege. Otherwise, the command is accepted only if it specifies the authenticated user name.

The `SESSION` and `LOCAL` modifiers act the same as for the regular [`SET`](sql-set.html) command.

The `DEFAULT` and `RESET` forms reset the session and current user identifiers to be the originally authenticated user name. These forms can be executed by any user.

## Notes

`SET SESSION AUTHORIZATION` cannot be used within a `SECURITY DEFINER` function.

## Examples

```
SELECT SESSION_USER, CURRENT_USER;

 session_user | current_user
--------------+--------------
 peter        | peter

SET SESSION AUTHORIZATION 'paul';

SELECT SESSION_USER, CURRENT_USER;

 session_user | current_user
--------------+--------------
 paul         | paul
```

## Compatibility

The SQL standard allows some other expressions to appear in place of the literal *`user_name`*, but these options are not important in practice. PostgreSQL allows identifier syntax (`"username"`), which SQL does not. SQL does not allow this command during a transaction; PostgreSQL does not make this restriction because there is no reason to. The `SESSION` and `LOCAL` modifiers are a PostgreSQL extension, as is the `RESET` syntax.

The privileges necessary to execute this command are left implementation-defined by the standard.

## See Also

[SET ROLE](sql-set-role.html)

diff --git a/docs/X/sql-set-transaction.md b/docs/en/sql-set-transaction.md similarity index 100% rename from docs/X/sql-set-transaction.md rename to docs/en/sql-set-transaction.md diff --git a/docs/en/sql-set-transaction.zh.md b/docs/en/sql-set-transaction.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..d014190f888ebb65f6fa452981f758d4b1922566 --- /dev/null +++ b/docs/en/sql-set-transaction.zh.md @@ -0,0 +1,76 @@

## SET TRANSACTION

SET TRANSACTION — set the characteristics of the current transaction

## Synopsis

```
SET TRANSACTION transaction_mode [, ...]
SET TRANSACTION SNAPSHOT snapshot_id
SET SESSION CHARACTERISTICS AS TRANSACTION transaction_mode [, ...]

where transaction_mode is one of:

    ISOLATION LEVEL { SERIALIZABLE | REPEATABLE READ | READ COMMITTED | READ UNCOMMITTED }
    READ WRITE | READ ONLY
    [ NOT ] DEFERRABLE
```
## Description

The `SET TRANSACTION` command sets the characteristics of the current transaction. It has no effect on any subsequent transactions. `SET SESSION CHARACTERISTICS` sets the default transaction characteristics for subsequent transactions of a session. These defaults can be overridden by `SET TRANSACTION` for an individual transaction.

The available transaction characteristics are the transaction isolation level, the transaction access mode (read/write or read-only), and the deferrable mode. In addition, a snapshot can be selected, though only for the current transaction, not as a session default.

The isolation level of a transaction determines what data the transaction can see when other transactions are running concurrently:

`READ COMMITTED`

A statement can only see rows committed before it began. This is the default.

`REPEATABLE READ`

All statements of the current transaction can only see rows committed before the first query or data-modification statement was executed in this transaction.

`SERIALIZABLE`

All statements of the current transaction can only see rows committed before the first query or data-modification statement was executed in this transaction. If a pattern of reads and writes among concurrent serializable transactions would create a situation which could not have occurred for any serial (one-at-a-time) execution of those transactions, one of them will be rolled back with a `serialization_failure` error.

The SQL standard defines one additional level, `READ UNCOMMITTED`. In PostgreSQL `READ UNCOMMITTED` is treated as `READ COMMITTED`.

The transaction isolation level cannot be changed after the first query or data-modification statement (`SELECT`, `INSERT`, `DELETE`, `UPDATE`, `FETCH`, or `COPY`) of a transaction has been executed. See [Chapter 13](mvcc.html) for more information about transaction isolation and concurrency control.

The transaction access mode determines whether the transaction is read/write or read-only. Read/write is the default. When a transaction is read-only, the following SQL commands are disallowed: `INSERT`, `UPDATE`, `DELETE`, and `COPY FROM` if the table they would write to is not a temporary table; all `CREATE`, `ALTER`, and `DROP` commands; `COMMENT`, `GRANT`, `REVOKE`, `TRUNCATE`; and `EXPLAIN ANALYZE` and `EXECUTE` if the command they would execute is among those listed. This is a high-level notion of read-only that does not prevent all writes to disk.

The `DEFERRABLE` transaction property has no effect unless the transaction is also `SERIALIZABLE` and `READ ONLY`. When all three of these properties are selected for a transaction, the transaction may block when first acquiring its snapshot, after which it is able to run without the normal overhead of a `SERIALIZABLE` transaction and without any risk of contributing to or being canceled by a serialization failure. This mode is well suited for long-running reports or backups.

The `SET TRANSACTION SNAPSHOT` command allows a new transaction to run with the same *snapshot* as an existing transaction. The pre-existing transaction must have exported its snapshot with the `pg_export_snapshot` function (see [Section 9.27.5](functions-admin.html#FUNCTIONS-SNAPSHOT-SYNCHRONIZATION)). That function returns a snapshot identifier, which must be given to `SET TRANSACTION SNAPSHOT` to specify which snapshot is to be imported. The identifier must be written as a string literal in this command, for example `'000003A1-1'`. `SET TRANSACTION SNAPSHOT` can only be executed at the start of a transaction, before the first query or data-modification statement (`SELECT`, `INSERT`, `DELETE`, `UPDATE`, `FETCH`, or `COPY`) of the transaction. Furthermore, the transaction must already be set to `SERIALIZABLE` or `REPEATABLE READ` isolation level (otherwise, the snapshot would be discarded immediately, since `READ COMMITTED` mode takes a new snapshot for each command). If the importing transaction uses `SERIALIZABLE` isolation level, then the transaction that exported the snapshot must also use that isolation level. Also, a non-read-only serializable transaction cannot import a snapshot from a read-only transaction.

## Notes

If `SET TRANSACTION` is executed without a prior `START TRANSACTION` or `BEGIN`, it emits a warning and otherwise has no effect.

It is possible to dispense with `SET TRANSACTION` by instead specifying the desired *`transaction_modes`* in `BEGIN` or `START TRANSACTION`. But that option is not available for `SET TRANSACTION SNAPSHOT`.

The session default transaction modes can also be set or examined via the configuration parameters [default_transaction_isolation](runtime-config-client.html#GUC-DEFAULT-TRANSACTION-ISOLATION), [default_transaction_read_only](runtime-config-client.html#GUC-DEFAULT-TRANSACTION-READ-ONLY), and [default_transaction_deferrable](runtime-config-client.html#GUC-DEFAULT-TRANSACTION-DEFERRABLE). (In fact `SET SESSION CHARACTERISTICS` is just a verbose equivalent for setting these variables with `SET`.) This means the defaults can be set in the configuration file, via `ALTER DATABASE`, etc. Consult [Chapter 20](runtime-config.html) for more information.

The current transaction's modes can similarly be set or examined via the configuration parameters [transaction_isolation](runtime-config-client.html#GUC-TRANSACTION-ISOLATION), [transaction_read_only](runtime-config-client.html#GUC-TRANSACTION-READ-ONLY), and [transaction_deferrable](runtime-config-client.html#GUC-TRANSACTION-DEFERRABLE). Setting one of these parameters acts the same as the corresponding `SET TRANSACTION` option, with the same restrictions on when it can be done. However, these parameters cannot be set in the configuration file, or from any source other than live SQL.

## Examples

To begin a new transaction with the same snapshot as an already existing transaction, first export the snapshot from the existing transaction. That will return the snapshot identifier, for example:

```
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT pg_export_snapshot();
 pg_export_snapshot
---------------------
 00000003-0000001B-1
(1 row)
```
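The snapshot can then be imported in a second session; a sketch of the importing side, reusing the identifier shown above (the actual value will differ from run to run):

```
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SET TRANSACTION SNAPSHOT '00000003-0000001B-1';
```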
## Compatibility

These commands are defined in the SQL standard, except for the `DEFERRABLE` transaction mode and the `SET TRANSACTION SNAPSHOT` form, which are PostgreSQL extensions.

`SERIALIZABLE` is the default transaction isolation level in the standard. In PostgreSQL the default is ordinarily `READ COMMITTED`, but you can change it as mentioned above.

In the SQL standard, there is one other transaction characteristic that can be set with these commands: the size of the diagnostics area. This concept is specific to embedded SQL, and therefore is not implemented in the PostgreSQL server.

The SQL standard requires commas between successive *`transaction_modes`*, but for historical reasons PostgreSQL allows the commas to be omitted.

diff --git a/docs/X/sql-set.md b/docs/en/sql-set.md similarity index 100% rename from docs/X/sql-set.md rename to docs/en/sql-set.md diff --git a/docs/en/sql-set.zh.md b/docs/en/sql-set.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..fc399a19b9ff96a040a9f02fc65c8480e8dc2aca --- /dev/null +++ b/docs/en/sql-set.zh.md @@ -0,0 +1,131 @@

## SET

SET — change a run-time parameter

## Synopsis

```
SET [ SESSION | LOCAL ] configuration_parameter { TO | = } { value | 'value' | DEFAULT }
SET [ SESSION | LOCAL ] TIME ZONE { timezone | LOCAL | DEFAULT }
```

## Description

The `SET` command changes run-time configuration parameters. Many of the run-time parameters listed in [Chapter 20](runtime-config.html) can be changed on-the-fly with `SET`. (But some require superuser privileges to change, and others cannot be changed after server or session start.) `SET` only affects the value used by the current session.

If `SET` (or equivalently `SET SESSION`) is issued within a transaction that is later aborted, the effects of the `SET` command disappear when the transaction is rolled back. Once the surrounding transaction is committed, the effects will persist until the end of the session, unless overridden by another `SET`.

The effects of `SET LOCAL` last only till the end of the current transaction, whether committed or not. A special case is `SET` followed by `SET LOCAL` within a single transaction: the `SET LOCAL` value will be seen until the end of the transaction, but afterwards (if the transaction is committed) the `SET` value will take effect.
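A brief sketch of this interaction, using `search_path` purely as an example parameter:

```
BEGIN;
SET search_path TO app, public;   -- session-level; survives COMMIT
SET LOCAL search_path TO audit;   -- transaction-level; visible until COMMIT
SHOW search_path;                 -- shows: audit
COMMIT;
SHOW search_path;                 -- shows: app, public
```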
The effects of `SET` or `SET LOCAL` can also be canceled by rolling back to a savepoint that is earlier than the command.

If `SET LOCAL` is used within a function that has a `SET` option for the same variable (see [CREATE FUNCTION](sql-createfunction.html)), the effects of the `SET LOCAL` command disappear at function exit; that is, the value in effect when the function was called is restored anyway. This allows `SET LOCAL` to be used for dynamic or repeated changes of a parameter within a function, while still having the convenience of using the `SET` option to save and restore the caller's value. However, a regular `SET` command overrides any surrounding function's `SET` option; its effects will persist unless rolled back.

### Note

In PostgreSQL versions 8.0 through 8.2, the effects of a `SET LOCAL` would be canceled by releasing an earlier savepoint, or by successful exit from a PL/pgSQL exception block. This behavior has been changed because it was deemed unintuitive.

## Parameters

`SESSION`

Specifies that the command takes effect for the current session. (This is the default if neither `SESSION` nor `LOCAL` appears.)

`LOCAL`

Specifies that the command takes effect for only the current transaction. After `COMMIT` or `ROLLBACK`, the session-level setting takes effect again. Issuing this outside of a transaction block emits a warning and otherwise has no effect.

*`configuration_parameter`*

Name of a settable run-time parameter. Available parameters are documented in [Chapter 20](runtime-config.html) and below.

*`value`*

New value of parameter. Values can be specified as string constants, identifiers, numbers, or comma-separated lists of these, as appropriate for the particular parameter. `DEFAULT` can be written to specify resetting the parameter to its default value (that is, whatever value it would have had if no `SET` had been executed in the current session).

Besides the configuration parameters documented in [Chapter 20](runtime-config.html), there are a few that can only be adjusted using the `SET` command or that have a special syntax:

`SCHEMA`

`SET SCHEMA 'value'` is an alias for `SET search_path TO value`. Only one schema can be specified using this syntax.

`NAMES`

`SET NAMES value` is an alias for `SET client_encoding TO value`.

`SEED`

Sets the internal seed for the random number generator (the function `random`). Allowed values are floating-point numbers between -1 and 1, which are then multiplied by 2^31 - 1.

The seed can also be set by invoking the function `setseed`:

```
SELECT setseed(value);
```

`TIME ZONE`

`SET TIME ZONE value` is an alias for `SET timezone TO value`. The syntax `SET TIME ZONE` allows special syntax for the time zone specification. Here are examples of valid values:

`'PST8PDT'`

The time zone for Berkeley, California.

`'Europe/Rome'`

The time zone for Italy.

`-7`

The time zone 7 hours west from UTC (equivalent to PDT). Positive values are east from UTC.

`INTERVAL '-08:00' HOUR TO MINUTE`

The time zone 8 hours west from UTC (equivalent to PST).

`LOCAL`\
`DEFAULT`

Set the time zone to your local time zone (that is, the server's default value of `timezone`).

Timezone settings given as numbers or intervals are internally translated to POSIX timezone syntax. For example, after `SET TIME ZONE -7`, `SHOW TIME ZONE` would report `<-07>+07`.

See [Section 8.5.3](datatype-datetime.html#DATATYPE-TIMEZONES) for more information about time zones.

## Notes

The function `set_config` provides equivalent functionality; see [Section 9.27.1](functions-admin.html#FUNCTIONS-ADMIN-SET). Also, it is possible to UPDATE the [`pg_settings`](view-pg-settings.html) system view to perform the equivalent of `SET`.

## Examples

Set the schema search path:

```
SET search_path TO my_schema, public;
```

Set the style of date to traditional POSTGRES with "day before month" input convention:

```
SET datestyle TO postgres, dmy;
```

Set the time zone for Berkeley, California:

```
SET TIME ZONE 'PST8PDT';
```

Set the time zone for Italy:

```
SET TIME ZONE 'Europe/Rome';
```

## Compatibility

`SET TIME ZONE` extends syntax defined in the SQL standard. The standard allows only numeric time zone offsets while PostgreSQL allows more flexible time-zone specifications. All other `SET` features are PostgreSQL extensions.

## See Also

[RESET](sql-reset.html), [SHOW](sql-show.html)

diff --git a/docs/X/sql-show.md b/docs/en/sql-show.md similarity index 100% rename from docs/X/sql-show.md rename to docs/en/sql-show.md diff --git a/docs/en/sql-show.zh.md b/docs/en/sql-show.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..83c6b8a97947d59dc9006cd951b0b60098c818bd --- /dev/null +++ b/docs/en/sql-show.zh.md @@ -0,0 +1,64 @@

## SHOW

SHOW — show the value of a run-time parameter

## Synopsis

```
SHOW name
SHOW ALL
```

## Description

`SHOW` will display the current setting of run-time parameters. These variables can be set using the `SET` statement, by editing the `postgresql.conf` configuration file, through the `PGOPTIONS` environment variable (when using libpq or a libpq-based application), or through command-line flags when starting the `postgres` server. See [Chapter 20](runtime-config.html) for details.

## Parameters

*`name`*

The name of a run-time parameter. Available parameters are documented in [Chapter 20](runtime-config.html) and on the [SET](sql-set.html) reference page. In addition, there are a few parameters that can be shown but not set:

`SERVER_VERSION`

Shows the server's version number.

`SERVER_ENCODING`

Shows the server-side character set encoding. At present, this parameter can be shown but not set, because the encoding is determined at database creation time.

`LC_COLLATE`

Shows the database's locale setting for collation (text ordering). At present, this parameter can be shown but not set, because the setting is determined at database creation time.

`LC_CTYPE`

Shows the database's locale setting for character classification. At present, this parameter can be shown but not set, because the setting is determined at database creation time.

`IS_SUPERUSER`

True if the current role has superuser privileges.

`ALL`

Show the values of all configuration parameters, with descriptions.

## Notes

The function `current_setting` produces equivalent output; see [Section 9.27.1](functions-admin.html#FUNCTIONS-ADMIN-SET). Also, the [`pg_settings`](view-pg-settings.html) system view produces the same information.

## Examples

Show the current setting of the parameter `DateStyle`:

```
SHOW DateStyle;
```

## Compatibility

The `SHOW` command is a PostgreSQL extension.

## See Also

[SET](sql-set.html), [RESET](sql-reset.html)

diff --git a/docs/X/sql-start-transaction.md b/docs/en/sql-start-transaction.md similarity index 100% rename from docs/X/sql-start-transaction.md rename to docs/en/sql-start-transaction.md diff --git a/docs/en/sql-start-transaction.zh.md b/docs/en/sql-start-transaction.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..b173e4bfcf5874fa8f36caec86457881b40eec4b --- /dev/null +++ b/docs/en/sql-start-transaction.zh.md @@ -0,0 +1,37 @@

## START TRANSACTION

START TRANSACTION — start a transaction block

## Synopsis

```
START TRANSACTION [ transaction_mode [, ...] ]

where transaction_mode is one of:

    ISOLATION LEVEL { SERIALIZABLE | REPEATABLE READ | READ COMMITTED | READ UNCOMMITTED }
    READ WRITE | READ ONLY
    [ NOT ] DEFERRABLE
```

## Description

This command begins a new transaction block. If the isolation level, read/write mode, or deferrable mode is specified, the new transaction has those characteristics, as if [`SET TRANSACTION`](sql-set-transaction.html) was executed. This is the same as the [`BEGIN`](sql-begin.html) command.

## Parameters

Refer to [SET TRANSACTION](sql-set-transaction.html) for information on the meaning of the parameters to this statement.
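## Examples

For instance, a transaction block that needs a stable snapshot and must not write could be opened as follows (a sketch; any combination of the modes above can be given the same way):

```
START TRANSACTION ISOLATION LEVEL REPEATABLE READ READ ONLY;
-- ... run reporting queries against one consistent snapshot ...
COMMIT;
```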
## Compatibility

In the standard, it is not necessary to issue `START TRANSACTION` to start a transaction block: any SQL command implicitly begins a block. PostgreSQL's behavior can be seen as implicitly issuing a `COMMIT` after each command that does not follow `START TRANSACTION` (or `BEGIN`), and it is therefore often called "autocommit". Other relational database systems might offer an autocommit feature as a convenience.

The `DEFERRABLE` *`transaction_mode`* is a PostgreSQL language extension.

The SQL standard requires commas between successive *`transaction_modes`*, but for historical reasons PostgreSQL allows the commas to be omitted.

See also the compatibility section of [SET TRANSACTION](sql-set-transaction.html).

## See Also

[BEGIN](sql-begin.html), [COMMIT](sql-commit.html), [ROLLBACK](sql-rollback.html), [SAVEPOINT](sql-savepoint.html), [SET TRANSACTION](sql-set-transaction.html)

diff --git a/docs/X/sql-syntax-calling-funcs.md b/docs/en/sql-syntax-calling-funcs.md similarity index 100% rename from docs/X/sql-syntax-calling-funcs.md rename to docs/en/sql-syntax-calling-funcs.md diff --git a/docs/en/sql-syntax-calling-funcs.zh.md b/docs/en/sql-syntax-calling-funcs.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..6a9c8b4129c78f542c214e9926961ec31c96b35d --- /dev/null +++ b/docs/en/sql-syntax-calling-funcs.zh.md @@ -0,0 +1,64 @@

## 4.3. Calling Functions

[4.3.1. Using Positional Notation](sql-syntax-calling-funcs.html#SQL-SYNTAX-CALLING-FUNCS-POSITIONAL)

[4.3.2. Using Named Notation](sql-syntax-calling-funcs.html#SQL-SYNTAX-CALLING-FUNCS-NAMED)

[4.3.3. Using Mixed Notation](sql-syntax-calling-funcs.html#SQL-SYNTAX-CALLING-FUNCS-MIXED)

PostgreSQL allows functions that have named parameters to be called using either *positional* or *named* notation. Named notation is especially useful for functions that have a large number of parameters, since it makes the associations between parameters and actual arguments more explicit and reliable. In positional notation, a function call is written with its argument values in the same order as they are defined in the function declaration. In named notation, the arguments are matched to the function parameters by name and can be written in any order. For each notation, also consider the effect of function argument types, documented in [Section 10.3](typeconv-func.html).

In either notation, parameters that have default values given in the function declaration need not be written in the call at all. But this is particularly useful in named notation, since any combination of parameters can be omitted; while in positional notation parameters can only be omitted from right to left.

PostgreSQL also supports *mixed* notation, which combines positional and named notation. In this case, positional parameters are written first and named parameters appear after them.

The following examples will illustrate the usage of all three notations, using the following function definition:

```
CREATE FUNCTION concat_lower_or_upper(a text, b text, uppercase boolean DEFAULT false)
RETURNS text
AS
$$
 SELECT CASE
        WHEN $3 THEN UPPER($1 || ' ' || $2)
        ELSE LOWER($1 || ' ' || $2)
        END;
$$
LANGUAGE SQL IMMUTABLE STRICT;
```

Function `concat_lower_or_upper` has two mandatory parameters, `a` and `b`. Additionally there is one optional parameter `uppercase` which defaults to `false`. The `a` and `b` inputs will be concatenated, and forced to either upper or lower case depending on the `uppercase` parameter. The remaining details of this function definition are not important here (see [Chapter 38](extend.html) for more information).

### 4.3.1. Using Positional Notation

Positional notation is the traditional mechanism for passing arguments to functions in PostgreSQL. An example is:

```
SELECT concat_lower_or_upper('Hello', 'World', true);
 concat_lower_or_upper
-----------------------
 HELLO WORLD
(1 row)
```

### 4.3.2. Using Named Notation

In named notation, each argument's name is specified using `=>` to separate it from the argument expression. For example:

```
SELECT concat_lower_or_upper(a => 'Hello', b => 'World');
 concat_lower_or_upper
-----------------------
 hello world
(1 row)
```
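Named notation also accepts an older syntax based on `:=`, retained for backward compatibility; the following sketch is equivalent to the call above:

```
SELECT concat_lower_or_upper(a := 'Hello', b := 'World');
```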
### 4.3.3. Using Mixed Notation

The mixed notation combines positional and named notation. However, as already mentioned, named arguments cannot precede positional arguments. For example:

```
SELECT concat_lower_or_upper('Hello', 'World', uppercase => true);
 concat_lower_or_upper
-----------------------
 HELLO WORLD
(1 row)
```

### Note

Named and mixed call notations currently cannot be used when calling an aggregate function (but they do work when an aggregate function is used as a window function).

diff --git a/docs/X/sql-syntax-lexical.md b/docs/en/sql-syntax-lexical.md similarity index 100% rename from docs/X/sql-syntax-lexical.md rename to docs/en/sql-syntax-lexical.md diff --git a/docs/en/sql-syntax-lexical.zh.md b/docs/en/sql-syntax-lexical.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..804269c676691d2ef71adcf31cd25349bad59969 --- /dev/null +++ b/docs/en/sql-syntax-lexical.zh.md @@ -0,0 +1,400 @@

## 4.1. Lexical Structure

[4.1.1. Identifiers and Key Words](sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS)

[4.1.2. Constants](sql-syntax-lexical.html#SQL-SYNTAX-CONSTANTS)

[4.1.3. Operators](sql-syntax-lexical.html#SQL-SYNTAX-OPERATORS)

[4.1.4. Special Characters](sql-syntax-lexical.html#SQL-SYNTAX-SPECIAL-CHARS)

[4.1.5. Comments](sql-syntax-lexical.html#SQL-SYNTAX-COMMENTS)

[4.1.6. Operator Precedence](sql-syntax-lexical.html#SQL-PRECEDENCE)

SQL input consists of a sequence of *commands*. A command is composed of a sequence of *tokens*, terminated by a semicolon (";"). The end of the input stream also terminates a command. Which tokens are valid depends on the syntax of the particular command.

A token can be a *key word*, an *identifier*, a *quoted identifier*, a *literal* (or constant), or a special character symbol. Tokens are normally separated by whitespace (space, tab, newline), but need not be if there is no ambiguity (which is generally only the case if a special character is adjacent to some other token type).

For example, the following is (syntactically) valid SQL input:

```
SELECT * FROM MY_TABLE;
UPDATE MY_TABLE SET A = 5;
INSERT INTO MY_TABLE VALUES (3, 'hi there');
```

This is a sequence of three commands, one per line (although this is not required; more than one command can be on a line, and commands can usefully be split across lines).

Additionally, *comments* can occur in SQL input. They are not tokens, they are effectively equivalent to whitespace.

The SQL syntax is not very consistent regarding what tokens identify commands and which are operands or parameters. The first few tokens are generally the command name, so in the above example we would usually speak of a "SELECT", an "UPDATE", and an "INSERT" command. But for instance the `UPDATE` command always requires a `SET` token to appear in a certain position, and this particular variation of `INSERT` also requires a `VALUES` in order to be complete. The precise syntax rules for each command are described in [Part VI](reference.html).

### 4.1.1. Identifiers and Key Words

Tokens such as `SELECT`, `UPDATE`, or `VALUES` in the example above are examples of *key words*, that is, words that have a fixed meaning in the SQL language. The tokens `MY_TABLE` and `A` are examples of *identifiers*. They identify names of tables, columns, or other database objects, depending on the command they are used in. Therefore they are sometimes simply called "names". Key words and identifiers have the same lexical structure, meaning that one cannot know whether a token is an identifier or a key word without knowing the language. A complete list of key words can be found in [Appendix C](sql-keywords-appendix.html).

SQL identifiers and key words must begin with a letter (`a`-`z`, but also letters with diacritical marks and non-Latin letters) or an underscore (`_`). Subsequent characters in an identifier or key word can be letters, underscores, digits (`0`-`9`), or dollar signs (`$`). Note that dollar signs are not allowed in identifiers according to the letter of the SQL standard, so their use might render applications less portable. The SQL standard will not define a key word that contains digits or starts or ends with an underscore, so identifiers of this form are safe against possible conflict with future extensions of the standard.

The system uses no more than `NAMEDATALEN`-1 bytes of an identifier; longer names can be written in commands, but they will be truncated. By default, `NAMEDATALEN` is 64 so the maximum identifier length is 63 bytes. If this limit is problematic, it can be raised by changing the `NAMEDATALEN` constant in `src/include/pg_config_manual.h`.
Key words and unquoted identifiers are case insensitive. Therefore:

```
UPDATE MY_TABLE SET A = 5;
```

can equivalently be written as:

```
uPDaTE my_TabLE SeT a = 5;
```

A convention often used is to write key words in upper case and names in lower case, e.g.:

```
UPDATE my_table SET a = 5;
```

There is a second kind of identifier: the *delimited identifier* or *quoted identifier*. It is formed by enclosing an arbitrary sequence of characters in double-quotes (`"`). A delimited identifier is always an identifier, never a key word. So `"select"` could be used to refer to a column or table named "select", whereas an unquoted `select` would be taken as a key word and would therefore provoke a parse error when used where a table or column name is expected. The example can be written with quoted identifiers like this:

```
UPDATE "my_table" SET "a" = 5;
```

Quoted identifiers can contain any character, except the character with code zero. (To include a double quote, write two double quotes.) This allows constructing table or column names that would otherwise not be possible, such as ones containing spaces or ampersands. The length limitation still applies.

Quoting an identifier also makes it case-sensitive, whereas unquoted names are always folded to lower case. For example, the identifiers `FOO`, `foo`, and `"foo"` are considered the same by PostgreSQL, but `"Foo"` and `"FOO"` are different from these three and each other. (The folding of unquoted names to lower case in PostgreSQL is incompatible with the SQL standard, which says that unquoted names should be folded to upper case. Thus, `foo` should be equivalent to `"FOO"` not `"foo"` according to the standard. If you want to write portable applications you are advised to always quote a particular name or never quote it.)

A variant of quoted identifiers allows including escaped Unicode characters identified by their code points. This variant starts with `U&` (upper or lower case U followed by ampersand) immediately before the opening double quote, without any spaces in between, for example `U&"foo"`. (Note that this creates an ambiguity with the operator `&`. Use spaces around the operator to avoid this problem.) Inside the quotes, Unicode characters can be specified in escaped form by writing a backslash followed by the four-digit hexadecimal code point number or alternatively a backslash followed by a plus sign followed by a six-digit hexadecimal code point number. For example, the identifier `"data"` could be written as

```
U&"d\0061t\+000061"
```

The following less trivial example writes the Russian word "slon" (elephant) in Cyrillic letters:

```
U&"\0441\043B\043E\043D"
```

If a different escape character than backslash is desired, it can be specified using the `UESCAPE` clause after the string, for example:

```
U&"d!0061t!+000061" UESCAPE '!'
```

The escape character can be any single character other than a hexadecimal digit, the plus sign, a single quote, a double quote, or a whitespace character. Note that the escape character is written in single quotes, not double quotes, after `UESCAPE`.

To include the escape character in the identifier literally, write it twice.

Either the 4-digit or the 6-digit escape form can be used to specify UTF-16 surrogate pairs to compose characters with code points larger than U+FFFF, although the availability of the 6-digit form technically makes this unnecessary. (Surrogate pairs are not stored directly, but are combined into a single code point.)

If the server encoding is not UTF-8, the Unicode code point identified by one of these escape sequences is converted to the actual server encoding; an error is reported if that's not possible.

### 4.1.2. Constants

There are three kinds of *implicitly-typed constants* in PostgreSQL: strings, bit strings, and numbers. Constants can also be specified with explicit types, which can enable more accurate representation and more efficient handling by the system. These alternatives are discussed in the following subsections.

#### 4.1.2.1. String Constants

A string constant in SQL is an arbitrary sequence of characters bounded by single quotes (`'`), for example `'This is a string'`. To include a single-quote character within a string constant, write two adjacent single quotes, e.g., `'Dianne''s horse'`. Note that this is *not* the same as a double-quote character (`"`).

Two string constants that are only separated by whitespace *with at least one newline* are concatenated and effectively treated as if the string had been written as one constant. For example:

```
SELECT 'foo'
'bar';
```

is equivalent to:

```
SELECT 'foobar';
```

but:

```
SELECT 'foo' 'bar';
```

is not valid syntax. (This slightly bizarre behavior is specified by SQL; PostgreSQL is following the standard.)
#### 4.1.2.2. String Constants with C-Style Escapes

PostgreSQL also accepts "escape" string constants, which are an extension to the SQL standard. An escape string constant is specified by writing the letter `E` (upper or lower case) just before the opening single quote, e.g., `E'foo'`. (When continuing an escape string constant across lines, write `E` only before the first opening quote.) Within an escape string, a backslash character (`\`) begins a C-like *backslash escape* sequence, in which the combination of backslash and following character(s) represent a special byte value, as shown in [Table 4.1](sql-syntax-lexical.html#SQL-BACKSLASH-TABLE).

**Table 4.1. Backslash Escape Sequences**

| Backslash Escape Sequence | Interpretation |
| ------------------------- | -------------- |
| `\b` | backspace |
| `\f` | form feed |
| `\n` | newline |
| `\r` | carriage return |
| `\t` | tab |
| `\o`, `\oo`, `\ooo` (*`o`* = 0–7) | octal byte value |
| `\xh`, `\xhh` (*`h`* = 0–9, A–F) | hexadecimal byte value |
| `\uxxxx`, `\Uxxxxxxxx` (*`x`* = 0–9, A–F) | 16 or 32-bit hexadecimal Unicode character value |

Any other character following a backslash is taken literally. Thus, to include a backslash character, write two backslashes (`\\`). Also, a single quote can be included in an escape string by writing `\'`, in addition to the normal way of `''`.

It is your responsibility that the byte sequences you create, especially when using the octal or hexadecimal escapes, compose valid characters in the server character set encoding. A useful alternative is to use Unicode escapes or the alternative Unicode escape syntax, explained in [Section 4.1.2.3](sql-syntax-lexical.html#SQL-SYNTAX-STRINGS-UESCAPE); then the server will check that the character conversion is possible.

### Caution

If the configuration parameter [standard_conforming_strings](runtime-config-compatible.html#GUC-STANDARD-CONFORMING-STRINGS) is `off`, then PostgreSQL recognizes backslash escapes in both regular and escape string constants. However, as of PostgreSQL 9.1, the default is `on`, meaning that backslash escapes are recognized only in escape string constants. This behavior is more standards-compliant, but might break applications which rely on the historical behavior, where backslash escapes were always recognized. As a workaround, you can set this parameter to `off`, but it is better to migrate away from using backslash escapes. If you need to use a backslash escape to represent a special character, write the string constant with an `E`.

In addition to `standard_conforming_strings`, the configuration parameters [escape_string_warning](runtime-config-compatible.html#GUC-ESCAPE-STRING-WARNING) and [backslash_quote](runtime-config-compatible.html#GUC-BACKSLASH-QUOTE) govern treatment of backslashes in string constants.

The character with the code zero cannot be in a string constant.

#### 4.1.2.3. String Constants with Unicode Escapes

PostgreSQL also supports another type of escape syntax for strings that allows specifying arbitrary Unicode characters by code point. A Unicode escape string constant starts with `U&` (upper or lower case letter U followed by ampersand) immediately before the opening quote, without any spaces in between, for example `U&'foo'`. (Note that this creates an ambiguity with the operator `&`. Use spaces around the operator to avoid this problem.) Inside the quotes, Unicode characters can be specified in escaped form by writing a backslash followed by the four-digit hexadecimal code point number or alternatively a backslash followed by a plus sign followed by a six-digit hexadecimal code point number. For example, the string `'data'` could be written as

```
U&'d\0061t\+000061'
```

The following less trivial example writes the Russian word "slon" (elephant) in Cyrillic letters:

```
U&'\0441\043B\043E\043D'
```

If a different escape character than backslash is desired, it can be specified using the `UESCAPE` clause after the string, for example:

```
U&'d!0061t!+000061' UESCAPE '!'
```

The escape character can be any single character other than a hexadecimal digit, the plus sign, a single quote, a double quote, or a whitespace character.

To include the escape character in the string literally, write it twice.

Either the 4-digit or the 6-digit escape form can be used to specify UTF-16 surrogate pairs to compose characters with code points larger than U+FFFF, although the availability of the 6-digit form technically makes this unnecessary. (Surrogate pairs are not stored directly, but are combined into a single code point.)
+ +If the server encoding is not UTF-8, the Unicode code point identified by one of these escape sequences is converted to the actual server encoding; an error is reported if that's not possible. + +Also, the Unicode escape syntax for string constants only works when the configuration parameter[standard_conforming_strings](runtime-config-compatible.html#GUC-STANDARD-CONFORMING-STRINGS)is turned on. This is because otherwise this syntax could confuse clients that parse the SQL statements to the point that it could lead to SQL injections and similar security issues. If the parameter is set to off, this syntax will be rejected with an error message. + +#### 4.1.2.4. Dollar-Quoted String Constants + +[](<>) + +While the standard syntax for specifying string constants is usually convenient, it can be difficult to understand when the desired string contains many single quotes or backslashes, since each of those must be doubled. To allow more readable queries in such situations, PostgreSQL provides another way, called “dollar quoting”, to write string constants. A dollar-quoted string constant consists of a dollar sign (`$`), an optional “tag” of zero or more characters, another dollar sign, an arbitrary sequence of characters that makes up the string content, a dollar sign, the same tag that began this dollar quote, and a dollar sign. For example, here are two different ways to specify the string “Dianne's horse” using dollar quoting: + +``` +$$Dianne's horse$$ +$SomeTag$Dianne's horse$SomeTag$ +``` + +Notice that inside the dollar-quoted string, single quotes can be used without needing to be escaped. Indeed, no characters inside a dollar-quoted string are ever escaped: the string content is always written literally. Backslashes are not special, and neither are dollar signs, unless they are part of a sequence matching the opening tag. + +It is possible to nest dollar-quoted string constants by choosing different tags at each nesting level. This is most commonly used in writing function definitions. For example: + +``` +$function$ +BEGIN + RETURN ($1 ~ $q$[\t\r\n\v\\]$q$); +END; +$function$ +``` + +Here, the sequence`$q$[\t\r\n\v\\]$q$`represents a dollar-quoted literal string`[\t\r\n\v\\]`, which will be recognized when the function body is executed by PostgreSQL. But since the sequence does not match the outer dollar quoting delimiter`$function$`, it is just some more characters within the constant so far as the outer string is concerned. + +The tag, if any, of a dollar-quoted string follows the same rules as an unquoted identifier, except that it cannot contain a dollar sign. Tags are case sensitive, so`$tag$String content$tag$`is correct, but`$TAG$String content$tag$`is not. + +A dollar-quoted string that follows a keyword or identifier must be separated from it by whitespace; otherwise the dollar quoting delimiter would be taken as part of the preceding identifier. + +Dollar quoting is not part of the SQL standard, but it is often a more convenient way to write complicated string literals than the standard-compliant single quote syntax. It is particularly useful when representing string constants inside other constants, as is often needed in procedural function definitions. With single-quote syntax, each backslash in the above example would have to be written as four backslashes, which would be reduced to two backslashes in parsing the original string constant, and then to one when the inner string constant is re-parsed during function execution. + +#### 4.1.2.5. 
Bit-String Constants + +[](<>) + +Bit-string constants look like regular string constants with a`B`(upper or lower case) immediately before the opening quote (no intervening whitespace), e.g.,`B'1001'`. The only characters allowed within bit-string constants are`0`and`1`. + +Alternatively, bit-string constants can be specified in hexadecimal notation, using a leading`X`(upper or lower case), e.g.,`X'1FF'`. This notation is equivalent to a bit-string constant with four binary digits for each hexadecimal digit. + +Both forms of bit-string constant can be continued across lines in the same way as regular string constants. Dollar quoting cannot be used in a bit-string constant. + +#### 4.1.2.6. Numeric Constants + +[](<>) + +Numeric constants are accepted in these general forms: + +``` +digits +digits.[digits][e[+-]digits] +[digits].digits[e[+-]digits] +digitse[+-]digits +``` + +where*`digits`*is one or more decimal digits (0 through 9). At least one digit must be before or after the decimal point, if one is used. At least one digit must follow the exponent marker (`e`), if one is present. There cannot be any spaces or other characters embedded in the constant. Note that any leading plus or minus sign is not actually considered part of the constant; it is an operator applied to the constant. + +These are some examples of valid numeric constants: + +42\ +3.5\ +4.\ +.001\ +5e2\ +1.925e-3 + +[](<>) [](<>) [](<>)A numeric constant that contains neither a decimal point nor an exponent is initially presumed to be type`integer`if its value fits in type`integer`(32 bits); otherwise it is presumed to be type`bigint`if its value fits in type`bigint`(64 bits); otherwise it is taken to be type`numeric`. Constants that contain decimal points and/or exponents are always initially presumed to be type`numeric`. + +The initially assigned data type of a numeric constant is just a starting point for the type resolution algorithms. In most cases the constant will be automatically coerced to the most appropriate type depending on context. When necessary, you can force a numeric value to be interpreted as a specific data type by casting it.[](<>)For example, you can force a numeric value to be treated as type`real`(`float4`) by writing: + +``` +REAL '1.23' -- string style +1.23::REAL -- PostgreSQL (historical) style +``` + +These are actually just special cases of the general casting notations discussed next. + +#### 4.1.2.7. Constants of Other Types + +[](<>) + +A constant of an*arbitrary*type can be entered using any one of the following notations: + +``` +type 'string' +'string'::type +CAST ( 'string' AS type ) +``` + +The string constant's text is passed to the input conversion routine for the type called*`type`*. The result is a constant of the indicated type. The explicit type cast can be omitted if there is no ambiguity as to the type the constant must be (for example, when it is assigned directly to a table column), in which case it is automatically coerced. + +The string constant can be written using either regular SQL notation or dollar-quoting. 
It is also possible to specify a type coercion using a function-like syntax:

```
typename ( 'string' )
```

but not all type names can be used in this way; see [Section 4.2.9](sql-expressions.html#SQL-SYNTAX-TYPE-CASTS) for details.

The `::`, `CAST()`, and function-call syntaxes can also be used to specify run-time type conversions of arbitrary expressions, as discussed in [Section 4.2.9](sql-expressions.html#SQL-SYNTAX-TYPE-CASTS). To avoid syntactic ambiguity, the `type 'string'` syntax can only be used to specify the type of a simple literal constant. Another restriction on the `type 'string'` syntax is that it does not work for array types; use `::` or `CAST()` to specify the type of an array constant.

The `CAST()` syntax conforms to SQL. The `type 'string'` syntax is a generalization of the standard: SQL specifies this syntax only for a few data types, but PostgreSQL allows it for all types. The syntax with `::` is historical PostgreSQL usage, as is the function-call syntax.
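For instance, the cast notations described above can all express the same conversion (a sketch; the literal and target type are arbitrary):

```
SELECT CAST('2021-07-01' AS date);  -- SQL-standard syntax
SELECT '2021-07-01'::date;          -- historical PostgreSQL syntax
SELECT date '2021-07-01';           -- type 'string' literal syntax
```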
### 4.1.3. Operators

An operator name is a sequence of up to `NAMEDATALEN`-1 (63 by default) characters from the following list:

\+ - \* / \< > = ~ ! @ # % ^ & | \` ?

There are a few restrictions on operator names, however:

- `--` and `/*` cannot appear anywhere in an operator name, since they will be taken as the start of a comment.

- A multiple-character operator name cannot end in `+` or `-`, unless the name also contains at least one of these characters:

  ~ ! @ # % ^ & | \` ?

  For example, `@-` is an allowed operator name, but `*-` is not. This restriction allows PostgreSQL to parse SQL-compliant queries without requiring spaces between tokens.

  When working with non-SQL-standard operator names, you will usually need to separate adjacent operators with spaces to avoid ambiguity. For example, if you have defined a prefix operator named `@`, you cannot write `X*@Y`; you must write `X* @Y` to ensure that PostgreSQL reads it as two operator names not one.

### 4.1.4. Special Characters

Some characters that are not alphanumeric have a special meaning that is different from being an operator. Details on the usage can be found at the location where the respective syntax element is described. This section only exists to advise the existence and summarize the purposes of these characters.

- A dollar sign (`$`) followed by digits is used to represent a positional parameter in the body of a function definition or a prepared statement. In other contexts the dollar sign can be part of an identifier or a dollar-quoted string constant.

- Parentheses (`()`) have their usual meaning to group expressions and enforce precedence. In some cases parentheses are required as part of the fixed syntax of a particular SQL command.

- Brackets (`[]`) are used to select the elements of an array. See [Section 8.15](arrays.html) for more information on arrays.

- Commas (`,`) are used in some syntactical constructs to separate the elements of a list.

- The semicolon (`;`) terminates an SQL command. It cannot appear anywhere within a command, except within a string constant or quoted identifier.

- The colon (`:`) is used to select "slices" from arrays. (See [Section 8.15](arrays.html).) In certain SQL dialects (such as Embedded SQL), the colon is used to prefix variable names.

- The asterisk (`*`) is used in some contexts to denote all the fields of a table row or composite value. It also has a special meaning when used as the argument of an aggregate function, namely that the aggregate does not require any explicit parameter.

- The period (`.`) is used in numeric constants, and to separate schema, table, and column names.

### 4.1.5. Comments

A comment is a sequence of characters beginning with double dashes and extending to the end of the line, e.g.:

```
-- This is a standard SQL comment
```

Alternatively, C-style block comments can be used:

```
/* multiline comment
 * with nesting: /* nested block comment */
 */
```

where the comment begins with `/*` and extends to the matching occurrence of `*/`. These block comments nest, as specified in the SQL standard but unlike C, so that one can comment out larger blocks of code that might contain existing block comments.

A comment is removed from the input stream before further syntax analysis and is effectively replaced by whitespace.

### 4.1.6. Operator Precedence

[Table 4.2](sql-syntax-lexical.html#SQL-PRECEDENCE-TABLE) shows the precedence and associativity of the operators in PostgreSQL. Most operators have the same precedence and are left-associative. The precedence and associativity of the operators is hard-wired into the parser. Add parentheses if you want an expression with multiple operators to be parsed in some other way than what the precedence rules imply.

**Table 4.2. Operator Precedence (highest to lowest)**

| Operator/Element | Associativity | Description |
| ---------------- | ------------- | ----------- |
| `.` | left | table/column name separator |
| `::` | left | PostgreSQL-style typecast |
| `[` `]` | left | array element selection |
| `+` `-` | right | unary plus, unary minus |
| `^` | left | exponentiation |
| `*` `/` `%` | left | multiplication, division, modulo |
| `+` `-` | left | addition, subtraction |
| (any other operator) | left | all other native and user-defined operators |
| `BETWEEN` `IN` `LIKE` `ILIKE` `SIMILAR` | | range containment, set membership, string matching |
| `<` `>` `=` `<=` `>=` `<>` | | comparison operators |
| `IS` `ISNULL` `NOTNULL` | | `IS TRUE`, `IS FALSE`, `IS NULL`, `IS DISTINCT FROM`, etc. |
| `NOT` | right | logical negation |
| `AND` | left | logical conjunction |
| `OR` | left | logical disjunction |

Note that the operator precedence rules also apply to user-defined operators that have the same names as the built-in operators mentioned above. For example, if you define a "+" operator for some custom data type it will have the same precedence as the built-in "+" operator, no matter what yours does.

When a schema-qualified operator name is used in the `OPERATOR` syntax, as for example in:

```
SELECT 3 OPERATOR(pg_catalog.+) 4;
```

the `OPERATOR` construct is taken to have the default precedence shown in [Table 4.2](sql-syntax-lexical.html#SQL-PRECEDENCE-TABLE) for "any other operator". This is true no matter which specific operator appears inside `OPERATOR()`.
### Note

PostgreSQL versions before 9.5 used slightly different operator precedence rules. In particular, `<=`, `>=` and `<>` used to be treated as generic operators; `IS` tests used to have higher priority; and `NOT BETWEEN` and related constructs acted inconsistently, being taken in some cases as having the precedence of `NOT` rather than `BETWEEN`. These rules were changed for better compliance with the SQL standard and to reduce confusion from inconsistent treatment of logically equivalent constructs. In most cases, these changes will result in no behavioral change, or perhaps in "no such operator" failures which can be resolved by adding parentheses. However there are corner cases in which a query might change behavior without any parsing error being reported.

diff --git a/docs/X/sql-syntax.md b/docs/en/sql-syntax.md similarity index 100% rename from docs/X/sql-syntax.md rename to docs/en/sql-syntax.md diff --git a/docs/en/sql-syntax.zh.md b/docs/en/sql-syntax.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..751119a6497781554f31b9eb1f2778db6806cc8d --- /dev/null +++ b/docs/en/sql-syntax.zh.md @@ -0,0 +1,61 @@

## Chapter 4. SQL Syntax

**Table of Contents**

[4.1. Lexical Structure](sql-syntax-lexical.html)

[4.1.1. Identifiers and Key Words](sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS)

[4.1.2. Constants](sql-syntax-lexical.html#SQL-SYNTAX-CONSTANTS)

[4.1.3. Operators](sql-syntax-lexical.html#SQL-SYNTAX-OPERATORS)

[4.1.4. Special Characters](sql-syntax-lexical.html#SQL-SYNTAX-SPECIAL-CHARS)

[4.1.5. Comments](sql-syntax-lexical.html#SQL-SYNTAX-COMMENTS)

[4.1.6. Operator Precedence](sql-syntax-lexical.html#SQL-PRECEDENCE)

[4.2. Value Expressions](sql-expressions.html)

[4.2.1. Column References](sql-expressions.html#SQL-EXPRESSIONS-COLUMN-REFS)

[4.2.2. Positional Parameters](sql-expressions.html#SQL-EXPRESSIONS-PARAMETERS-POSITIONAL)

[4.2.3. Subscripts](sql-expressions.html#SQL-EXPRESSIONS-SUBSCRIPTS)

[4.2.4. Field Selection](sql-expressions.html#FIELD-SELECTION)

[4.2.5. Operator Invocations](sql-expressions.html#SQL-EXPRESSIONS-OPERATOR-CALLS)

[4.2.6. Function Calls](sql-expressions.html#SQL-EXPRESSIONS-FUNCTION-CALLS)

[4.2.7. Aggregate Expressions](sql-expressions.html#SYNTAX-AGGREGATES)

[4.2.8. Window Function Calls](sql-expressions.html#SYNTAX-WINDOW-FUNCTIONS)

[4.2.9. Type Casts](sql-expressions.html#SQL-SYNTAX-TYPE-CASTS)

[4.2.10. Collation Expressions](sql-expressions.html#SQL-SYNTAX-COLLATE-EXPRS)

[4.2.11. Scalar Subqueries](sql-expressions.html#SQL-SYNTAX-SCALAR-SUBQUERIES)

[4.2.12. Array Constructors](sql-expressions.html#SQL-SYNTAX-ARRAY-CONSTRUCTORS)

[4.2.13. Row Constructors](sql-expressions.html#SQL-SYNTAX-ROW-CONSTRUCTORS)

[4.2.14. Expression Evaluation Rules](sql-expressions.html#SYNTAX-EXPRESS-EVAL)

[4.3. Calling Functions](sql-syntax-calling-funcs.html)

[4.3.1. Using Positional Notation](sql-syntax-calling-funcs.html#SQL-SYNTAX-CALLING-FUNCS-POSITIONAL)

[4.3.2. Using Named Notation](sql-syntax-calling-funcs.html#SQL-SYNTAX-CALLING-FUNCS-NAMED)

[4.3.3. Using Mixed Notation](sql-syntax-calling-funcs.html#SQL-SYNTAX-CALLING-FUNCS-MIXED)

This chapter describes the syntax of SQL. It forms the foundation for understanding the following chapters, which will go into detail about how SQL commands are applied to define and modify data.

We also advise users who are already familiar with SQL to read this chapter carefully, because it contains several rules and concepts that are implemented inconsistently among SQL databases or that are specific to PostgreSQL.

diff --git a/docs/X/sql-truncate.md b/docs/en/sql-truncate.md similarity index 100% rename from docs/X/sql-truncate.md rename to docs/en/sql-truncate.md diff --git a/docs/en/sql-truncate.zh.md b/docs/en/sql-truncate.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..7c48f8568c25ad6a4c814a312e3b48521db8df1c --- /dev/null +++ b/docs/en/sql-truncate.zh.md @@ -0,0 +1,82 @@

## TRUNCATE

TRUNCATE — empty a table or set of tables

## Synopsis

```
TRUNCATE [ TABLE ] [ ONLY ] name [ * ] [, ... ]
    [ RESTART IDENTITY | CONTINUE IDENTITY ] [ CASCADE | RESTRICT ]
```

## Description

`TRUNCATE` quickly removes all rows from a set of tables. It has the same effect as an unqualified `DELETE` on each table, but since it does not actually scan the tables it is faster. Furthermore, it reclaims disk space immediately, rather than requiring a subsequent `VACUUM` operation. This is most useful on large tables.

## Parameters

*`name`*

The name (optionally schema-qualified) of a table to truncate. If `ONLY` is specified before the table name, only that table is truncated. If `ONLY` is not specified, the table and all its descendant tables (if any) are truncated. Optionally, `*` can be specified after the table name to explicitly indicate that descendant tables are included.

`RESTART IDENTITY`

Automatically restart sequences owned by columns of the truncated table(s).

`CONTINUE IDENTITY`

Do not change the values of sequences. This is the default.

`CASCADE`

Automatically truncate all tables that have foreign-key references to any of the named tables, or to any tables added to the group due to `CASCADE`.
`RESTRICT`

Refuse to truncate if any of the tables have foreign-key references from tables that are not listed in the command. This is the default.

## Notes

You must have the `TRUNCATE` privilege on a table to truncate it.

`TRUNCATE` acquires an `ACCESS EXCLUSIVE` lock on each table it operates on, which blocks all other concurrent operations on the table. When `RESTART IDENTITY` is specified, any sequences that are to be restarted are likewise locked exclusively. If concurrent access to a table is required, then the `DELETE` command should be used instead.

`TRUNCATE` cannot be used on a table that has foreign-key references from other tables, unless all such tables are also truncated in the same command. Checking validity in such cases would require table scans, and the whole point is not to do one. The `CASCADE` option can be used to automatically include all dependent tables, but be very careful when using this option, or else you might lose data you did not intend to! Note in particular that when the table to be truncated is a partition, siblings partitions are left untouched, but cascading occurs to all referencing tables and all their partitions with no distinction.

`TRUNCATE` will not fire any `ON DELETE` triggers that might exist for the tables. But it will fire `ON TRUNCATE` triggers. If `ON TRUNCATE` triggers are defined for any of the tables, then all `BEFORE TRUNCATE` triggers are fired before any truncation happens, and all `AFTER TRUNCATE` triggers are fired after the last truncation is performed and any sequences are reset. The triggers will fire in the order that the tables are to be processed (first those listed in the command, and then any that were added due to cascading).

`TRUNCATE` is not MVCC-safe. After truncation, the table will appear empty to concurrent transactions, if they are using a snapshot taken before the truncation occurred. See [Section 13.5](mvcc-caveats.html) for more details.

`TRUNCATE` is transaction-safe with respect to the data in the tables: the truncation will be safely rolled back if the surrounding transaction does not commit.

When `RESTART IDENTITY` is specified, the implied `ALTER SEQUENCE RESTART` operations are also done transactionally; that is, they will be rolled back if the surrounding transaction does not commit. Be aware that if any additional sequence operations are done on the restarted sequences before the transaction rolls back, the effects of those operations on the sequences will be rolled back, but not their effects on `currval()`; that is, after the transaction `currval()` will continue to reflect the last sequence value obtained inside the failed transaction, even though the sequence itself may no longer be consistent with that. This is similar to the usual behavior of `currval()` after a failed transaction.

`TRUNCATE` can be used for foreign tables if supported by the foreign data wrapper, for instance, see [postgres_fdw](postgres-fdw.html).

## Examples

Truncate the tables `bigtable` and `fattable`:

```
TRUNCATE bigtable, fattable;
```

The same, and also reset any associated sequence generators:

```
TRUNCATE bigtable, fattable RESTART IDENTITY;
```

Truncate the table `othertable`, and cascade to any tables that reference `othertable` via foreign-key constraints:

```
TRUNCATE othertable CASCADE;
```

## Compatibility

The SQL:2008 standard includes a `TRUNCATE` command with the syntax `TRUNCATE TABLE tablename`. The clauses `CONTINUE IDENTITY`/`RESTART IDENTITY` also appear in that standard, but have slightly different though related meanings. Some of the concurrency behavior of this command is left implementation-defined by the standard, so the above notes should be considered and compared with other implementations if necessary.

## See Also

[DELETE](sql-delete.html)

diff --git a/docs/X/sql-unlisten.md b/docs/en/sql-unlisten.md similarity index 100% rename from docs/X/sql-unlisten.md rename to docs/en/sql-unlisten.md diff --git a/docs/en/sql-unlisten.zh.md b/docs/en/sql-unlisten.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..6534fa0df4551d7ba65c6b3e43714dc8b032357b --- /dev/null +++ b/docs/en/sql-unlisten.zh.md @@ -0,0 +1,59 @@

## UNLISTEN

UNLISTEN — stop listening for a notification

## Synopsis

```
UNLISTEN { channel | * }
```

## Description

`UNLISTEN` is used to remove an existing registration for `NOTIFY` events. `UNLISTEN` cancels any existing registration of the current PostgreSQL session as a listener on the notification channel named *`channel`*. The special wildcard `*` cancels all listener registrations for the current session.

[NOTIFY](sql-notify.html) contains a more extensive discussion of the use of `LISTEN` and `NOTIFY`.

## Parameters

*`channel`*

Name of a notification channel (any identifier).

`*`

All current listen registrations for this session are cleared.

## Notes

You can unlisten something you were not listening for; no warning or error will appear.

At the end of each session, `UNLISTEN *` is automatically executed.

A transaction that has executed `UNLISTEN` cannot be prepared for two-phase commit.

## Examples

To make a registration:

```
LISTEN virtual;
NOTIFY virtual;
Asynchronous notification "virtual" received from server process with PID 8448.
```

Once `UNLISTEN` has been executed, further `NOTIFY` messages will be ignored:

```
UNLISTEN virtual;
NOTIFY virtual;
-- no NOTIFY event is received
```

## Compatibility

There is no `UNLISTEN` command in the SQL standard.

## See Also

[LISTEN](sql-listen.html), [NOTIFY](sql-notify.html)

diff --git a/docs/X/sql-update.md b/docs/en/sql-update.md similarity index 100% rename from docs/X/sql-update.md rename to docs/en/sql-update.md diff --git a/docs/en/sql-update.zh.md b/docs/en/sql-update.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..a6f622c0b63ce22d0599dec20c40fce2522341e54b8 --- /dev/null +++ b/docs/en/sql-update.zh.md @@ -0,0 +1,201 @@

## UPDATE

UPDATE — update rows of a table

## Synopsis

```
[ WITH [ RECURSIVE ] with_query [, ...] ]
UPDATE [ ONLY ] table_name [ * ] [ [ AS ] alias ]
    SET { column_name = { expression | DEFAULT } |
          ( column_name [, ...] ) = [ ROW ] ( { expression | DEFAULT } [, ...] ) |
          ( column_name [, ...] ) = ( sub-SELECT )
        } [, ...]
    [ FROM from_item [, ...] ]
    [ WHERE condition | WHERE CURRENT OF cursor_name ]
    [ RETURNING * | output_expression [ [ AS ] output_name ] [, ...] ]
```

## Description

`UPDATE` changes the values of the specified columns in all rows that satisfy the condition. Only the columns to be modified need be mentioned in the `SET` clause; columns not explicitly modified retain their previous values.

There are two ways to modify a table using information contained in other tables in the database: using sub-selects, or specifying additional tables in the `FROM` clause. Which technique is more appropriate depends on the specific circumstances.

The optional `RETURNING` clause causes `UPDATE` to compute and return value(s) based on each row actually updated. Any expression using the table's columns, and/or columns of other tables mentioned in `FROM`, can be computed. The new (post-update) values of the table's columns are used. The syntax of the `RETURNING` list is identical to that of the output list of `SELECT`.
+You must have the `UPDATE` privilege on the table, or at least on the column(s) that are listed to be updated. You also need the `SELECT` privilege on any column whose values are read in the *`expressions`* or the *`condition`*.
+
+## Parameters
+
+*`with_query`*
+
+The `WITH` clause allows you to specify one or more subqueries that can be referenced by name in the `UPDATE` query. See [Section 7.8](queries-with.html) and [SELECT](sql-select.html) for details.
+
+*`table_name`*
+
+The name (optionally schema-qualified) of the table to update. If `ONLY` is specified before the table name, matching rows are updated in the named table only. If `ONLY` is not specified, matching rows are also updated in any tables inheriting from the named table. Optionally, `*` can be specified after the table name to explicitly indicate that descendant tables are included.
+
+*`alias`*
+
+A substitute name for the target table. When an alias is provided, it completely hides the actual name of the table. For example, given `UPDATE foo AS f`, the remainder of the `UPDATE` statement must refer to this table as `f` not `foo`.
+
+*`column_name`*
+
+The name of a column in the table named by *`table_name`*. The column name can be qualified with a subfield name or array subscript, if needed. Do not include the table's name in the specification of a target column; for example, `UPDATE table_name SET table_name.col = 1` is invalid.
+
+*`expression`*
+
+An expression to assign to the column. The expression can use the old values of this and other columns in the table.
+
+`DEFAULT`
+
+Set the column to its default value (which will be NULL if no specific default expression has been assigned to it). An identity column will be set to a new value generated by the associated sequence. For a generated column, specifying this is permitted but merely specifies the normal behavior of computing the column from its generation expression.
+
+*`sub-SELECT`*
+
+A `SELECT` sub-query that produces as many output columns as are listed in the parenthesized column list preceding it. The sub-query must yield no more than one row when executed. If it yields one row, its column values are assigned to the target columns; if it yields no rows, NULL values are assigned to the target columns. The sub-query can refer to old values of the current row of the table being updated.
+
+*`from_item`*
+
+A table expression allowing columns from other tables to appear in the `WHERE` condition and update expressions. This uses the same syntax as the [`FROM`](sql-select.html#SQL-FROM) clause of a `SELECT` statement; for example, an alias for the table name can be specified. Do not repeat the target table as a *`from_item`* unless you intend a self-join (in which case it must appear with an alias in the *`from_item`*).
+
+*`condition`*
+
+An expression that returns a value of type `boolean`. Only rows for which this expression returns `true` will be updated.
+
+*`cursor_name`*
+
+The name of the cursor to use in a `WHERE CURRENT OF` condition. The row to be updated is the one most recently fetched from this cursor. The cursor must be a non-grouping query on the `UPDATE`'s target table. Note that `WHERE CURRENT OF` cannot be specified together with a Boolean condition. See [DECLARE](sql-declare.html) for more information about using cursors with `WHERE CURRENT OF`.
+
+*`output_expression`*
+
+An expression to be computed and returned by the `UPDATE` command after each row is updated. The expression can use any column names of the table named by *`table_name`* or table(s) listed in `FROM`. Write `*` to return all columns.
+
+*`output_name`*
+
+A name to use for a returned column.
+
+## Output
+
+On successful completion, an `UPDATE` command returns a command tag of the form
+
+```
+UPDATE count
+```
+
+The *`count`* is the number of rows updated, including matched rows whose values did not change. Note that the number may be less than the number of rows that matched the *`condition`* when updates were suppressed by a `BEFORE UPDATE` trigger. If *`count`* is 0, no rows were updated by the query (this is not considered an error).
+
+If the `UPDATE` command contains a `RETURNING` clause, the result will be similar to that of a `SELECT` statement containing the columns and values defined in the `RETURNING` list, computed over the row(s) updated by the command.
+
+## Notes
+
+When a `FROM` clause is present, what essentially happens is that the target table is joined to the tables mentioned in the *`from_item`* list, and each output row of the join represents an update operation for the target table. When using `FROM` you should ensure that the join produces at most one output row for each row to be modified. In other words, a target row shouldn't join to more than one row from the other table(s). If it does, then only one of the join rows will be used to update the target row, but which one will be used is not readily predictable.
+
+Because of this indeterminacy, referencing other tables only within sub-selects is safer, though often harder to read and slower than using a join.
+
+In the case of a partitioned table, updating a row might cause it to no longer satisfy the partition constraint of the containing partition. In that case, if there is some other partition in the partition tree for which this row satisfies its partition constraint, then the row is moved to that partition. If there is no such partition, an error will occur. Behind the scenes, the row movement is actually a `DELETE` and `INSERT` operation.
+
+There is a possibility that a concurrent `UPDATE` or `DELETE` on the row being moved will get a serialization failure error. Suppose session 1 is performing an `UPDATE` on a partition key, and meanwhile a concurrent session 2 for which this row is visible performs an `UPDATE` or `DELETE` operation on this row. In such a case, session 2's `UPDATE` or `DELETE` will detect the row movement and raise a serialization failure error (which always returns with an SQLSTATE code "40001"). Applications may wish to retry the transaction if this occurs. In the usual case where the table is not partitioned, or where there is no row movement, session 2 would have identified the newly modified row and carried out the `UPDATE`/`DELETE` on this new row version.
+
+Note that while rows can be moved from local partitions to a foreign-table partition (provided the foreign data wrapper supports tuple routing), they cannot be moved from a foreign-table partition to another partition.
+
+## Examples
+
+Change the word `Drama` to `Dramatic` in the column `kind` of the table `films`:
+
+```
+UPDATE films SET kind = 'Dramatic' WHERE kind = 'Drama';
+```
+
+Adjust temperature entries and reset precipitation to its default value in one row of the table `weather`:
+
+```
+UPDATE weather SET temp_lo = temp_lo+1, temp_hi = temp_lo+15, prcp = DEFAULT
+  WHERE city = 'San Francisco' AND date = '2003-07-03';
+```
+
+Perform the same operation and return the updated entries:
+
+```
+UPDATE weather SET temp_lo = temp_lo+1, temp_hi = temp_lo+15, prcp = DEFAULT
+  WHERE city = 'San Francisco' AND date = '2003-07-03'
+  RETURNING temp_lo, temp_hi, prcp;
+```
+
+Use the alternative column-list syntax to do the same update:
+
+```
+UPDATE weather SET (temp_lo, temp_hi, prcp) = (temp_lo+1, temp_lo+15, DEFAULT)
+  WHERE city = 'San Francisco' AND date = '2003-07-03';
+```
+
+Increment the sales count of the salesperson who manages the account for Acme Corporation, using the `FROM` clause syntax:
+
+```
+UPDATE employees SET sales_count = sales_count + 1 FROM accounts
+  WHERE accounts.name = 'Acme Corporation'
+  AND employees.id = accounts.sales_person;
+```
+
+Perform the same operation, using a sub-select in the `WHERE` clause:
+
+```
+UPDATE employees SET sales_count = sales_count + 1 WHERE id =
+  (SELECT sales_person FROM accounts WHERE name = 'Acme Corporation');
+```
+
+Update contact names in an accounts table to match the currently assigned salesmen:
+
+```
+UPDATE accounts SET (contact_first_name, contact_last_name) =
+    (SELECT first_name, last_name FROM salesmen
+     WHERE salesmen.id = accounts.sales_id);
+```
+
+A similar result could be accomplished with a join:
+
+```
+UPDATE accounts SET contact_first_name = first_name,
+                    contact_last_name = last_name
+  FROM salesmen WHERE salesmen.id = accounts.sales_id;
+```
+
+However, the second query may give unexpected results if `salesmen`.`id` is not a unique key, whereas the first query is guaranteed to raise an error if there are multiple `id` matches. Also, if there is no match for a particular `accounts`.`sales_id` entry, the first query will set the corresponding name fields to NULL, whereas the second query will not update that row at all.
+
+Update statistics in a summary table to match the current data:
+
+```
+UPDATE summary s SET (sum_x, sum_y, avg_x, avg_y) =
+    (SELECT sum(x), sum(y), avg(x), avg(y) FROM data d
+     WHERE d.group_id = s.group_id);
+```
+
+Attempt to insert a new stock item along with the quantity of stock. If the item already exists, instead update the stock count of the existing item. To do this without failing the entire transaction, use savepoints:
+
+```
+BEGIN;
+-- other operations
+SAVEPOINT sp1;
+INSERT INTO wines VALUES('Chateau Lafite 2003', '24');
+-- Assume the above fails because of a unique key violation,
+-- so now we issue these commands:
+ROLLBACK TO sp1;
+UPDATE wines SET stock = stock + 24 WHERE winename = 'Chateau Lafite 2003';
+-- continue with other operations, and eventually
+COMMIT;
+```
+
+Change the `kind` column of the table `films` in the row on which the cursor `c_films` is currently positioned:
+
+```
+UPDATE films SET kind = 'Dramatic' WHERE CURRENT OF c_films;
+```
+
+## Compatibility
+
+This command conforms to the SQL standard, except that the `FROM` and `RETURNING` clauses are PostgreSQL extensions, as is the ability to use `WITH` with `UPDATE`.
+
+Some other database systems offer a `FROM` option in which the target table is supposed to be listed again within `FROM`. That is not how PostgreSQL interprets `FROM`. Be careful when porting applications that use this extension.
+
+According to the standard, the source value for a parenthesized sub-list of target column names can be any row-valued expression yielding the correct number of columns. PostgreSQL only allows the source value to be a [row constructor](sql-expressions.html#SQL-SYNTAX-ROW-CONSTRUCTORS) or a sub-`SELECT`. An individual column's updated value can be specified as `DEFAULT` in the row-constructor case, but not inside a sub-`SELECT`.
diff --git a/docs/X/sql-vacuum.md b/docs/en/sql-vacuum.md
similarity index 100%
rename from docs/X/sql-vacuum.md
rename to docs/en/sql-vacuum.md
diff --git a/docs/en/sql-vacuum.zh.md b/docs/en/sql-vacuum.zh.md
new file mode 100644
index 0000000000000000000000000000000000000000..2c74a4587377d3e061898ae2b7db3db00aa4d3f4
--- /dev/null
+++ b/docs/en/sql-vacuum.zh.md
@@ -0,0 +1,141 @@
+## VACUUM
+
+VACUUM — garbage-collect and optionally analyze a database
+
+## Synopsis
+
+```
+VACUUM [ ( option [, ...] ) ] [ table_and_columns [, ...] ]
+VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ ANALYZE ] [ table_and_columns [, ...] ]
+
+where option can be one of:
+
+    FULL [ boolean ]
+    FREEZE [ boolean ]
+    VERBOSE [ boolean ]
+    ANALYZE [ boolean ]
+    DISABLE_PAGE_SKIPPING [ boolean ]
+    SKIP_LOCKED [ boolean ]
+    INDEX_CLEANUP { AUTO | ON | OFF }
+    PROCESS_TOAST [ boolean ]
+    TRUNCATE [ boolean ]
+    PARALLEL integer
+
+and table_and_columns is:
+
+    table_name [ ( column_name [, ...] ) ]
+```
+
+## Description
+
+`VACUUM` reclaims storage occupied by dead tuples. In normal PostgreSQL operation, tuples that are deleted or obsoleted by an update are not physically removed from their table; they remain present until a `VACUUM` is done. Therefore it's necessary to do `VACUUM` periodically, especially on frequently-updated tables.
+
+Without a *`table_and_columns`* list, `VACUUM` processes every table and materialized view in the current database that the current user has permission to vacuum. With a list, `VACUUM` processes only those table(s).
+
+`VACUUM ANALYZE` performs a `VACUUM` and then an `ANALYZE` for each selected table. This is a handy combination form for routine maintenance scripts. See [ANALYZE](sql-analyze.html) for more details about its processing.
+
+Plain `VACUUM` (without `FULL`) simply reclaims space and makes it available for re-use. This form of the command can operate in parallel with normal reading and writing of the table, as an exclusive lock is not obtained. However, extra space is not returned to the operating system (in most cases); it's just kept available for re-use within the same table. It also allows us to leverage multiple CPUs in order to process indexes. This feature is known as *parallel vacuum*. To disable this feature, one can use the `PARALLEL` option and specify parallel workers as zero. `VACUUM FULL` rewrites the entire contents of the table into a new disk file with no extra space, allowing unused space to be returned to the operating system. This form is much slower and requires an `ACCESS EXCLUSIVE` lock on each table while it is being processed.
+
+When the option list is surrounded by parentheses, the options can be written in any order. Without parentheses, options must be specified in exactly the order shown above. The parenthesized syntax was added in PostgreSQL 9.0; the unparenthesized syntax is deprecated.
+
+## Parameters
+
+`FULL`
+
+Selects "full" vacuum, which can reclaim more space, but takes much longer and exclusively locks the table. This method also requires extra disk space, since it writes a new copy of the table and doesn't release the old copy until the operation is complete. Usually this should only be used when a significant amount of space needs to be reclaimed from within the table.
+
+`FREEZE`
+
+Selects aggressive "freezing" of tuples. Specifying `FREEZE` is equivalent to performing `VACUUM` with the [vacuum_freeze_min_age](runtime-config-client.html#GUC-VACUUM-FREEZE-MIN-AGE) and [vacuum_freeze_table_age](runtime-config-client.html#GUC-VACUUM-FREEZE-TABLE-AGE) parameters set to zero. Aggressive freezing is always performed when the table is rewritten, so this option is redundant when `FULL` is specified.
+
+`VERBOSE`
+
+Prints a detailed vacuum activity report for each table.
+
+`ANALYZE`
+
+Updates statistics used by the planner to determine the most efficient way to execute a query.
+
+`DISABLE_PAGE_SKIPPING`
+
+Normally, `VACUUM` will skip pages based on the [visibility map](routine-vacuuming.html#VACUUM-FOR-VISIBILITY-MAP). Pages where all tuples are known to be frozen can always be skipped, and those where all tuples are known to be visible to all transactions may be skipped except when performing an aggressive vacuum. Furthermore, except when performing an aggressive vacuum, some pages may be skipped in order to avoid waiting for other sessions to finish using them. This option disables all page-skipping behavior, and is intended to be used only when the contents of the visibility map are suspect, which should happen only if there is a hardware or software issue causing database corruption.
+
+`SKIP_LOCKED`
+
+Specifies that `VACUUM` should not wait for any conflicting locks to be released when beginning work on a relation: if a relation cannot be locked immediately without waiting, the relation is skipped. Note that even with this option, `VACUUM` may still block when opening the relation's indexes. Additionally, `VACUUM ANALYZE` may still block when acquiring sample rows from partitions, table inheritance children, and some types of foreign tables. Also, while `VACUUM` ordinarily processes all partitions of specified partitioned tables, this option will cause `VACUUM` to skip all partitions if there is a conflicting lock on the partitioned table.
+
+`INDEX_CLEANUP`
+
+Normally, `VACUUM` will skip index vacuuming when there are very few dead tuples in the table. The cost of processing all of the table's indexes is expected to greatly exceed the benefit of removing dead index tuples when this happens. This option can be used to force `VACUUM` to process indexes when there are more than zero dead tuples. The default is `AUTO`, which allows `VACUUM` to skip index vacuuming when appropriate. If `INDEX_CLEANUP` is set to `ON`, `VACUUM` will conservatively remove all dead tuples from indexes. This may be useful for backwards compatibility with earlier releases of PostgreSQL where this was the standard behavior.
+
+`INDEX_CLEANUP` can also be set to `OFF` to force `VACUUM` to *always* skip index vacuuming, even when there are many dead tuples in the table. This may be useful when it is necessary to make `VACUUM` run as quickly as possible to avoid imminent transaction ID wraparound (see [Section 25.1.5](routine-vacuuming.html#VACUUM-FOR-WRAPAROUND)). However, the wraparound failsafe mechanism controlled by [vacuum_failsafe_age](runtime-config-client.html#GUC-VACUUM-FAILSAFE-AGE) will generally trigger automatically to avoid transaction ID wraparound failure, and should be preferred. If index cleanup is not performed regularly, performance may suffer, because as the table is modified indexes will accumulate dead tuples and the table itself will accumulate dead line pointers that cannot be removed until index cleanup is completed.
+
+This option has no effect for tables that have no index, and is ignored if the `FULL` option is used. It also has no effect on the transaction ID wraparound failsafe mechanism. When triggered, it will skip index vacuuming, even when `INDEX_CLEANUP` is set to `ON`.
+
+`PROCESS_TOAST`
+
+Specifies that `VACUUM` should attempt to process the corresponding `TOAST` table for each relation, if one exists. This is usually the desired behavior and is the default. Setting this option to false may be useful when it is only necessary to vacuum the main relation. This option is required when the `FULL` option is used.
+
+`TRUNCATE`
+
+Specifies that `VACUUM` should attempt to truncate off any empty pages at the end of the table and allow the disk space for the truncated pages to be returned to the operating system. This is normally the desired behavior and is the default unless the `vacuum_truncate` option has been set to false for the table to be vacuumed. Setting this option to false may be useful to avoid the `ACCESS EXCLUSIVE` lock on the table that the truncation requires. This option is ignored if the `FULL` option is used.
+
+`PARALLEL`
+
+Perform index vacuum and index cleanup phases of `VACUUM` in parallel using *`integer`* background workers (for the details of each vacuum phase, please refer to [Table 28.39](progress-reporting.html#VACUUM-PHASES)). The number of workers used to perform the operation is equal to the number of indexes on the relation that support parallel vacuum, which is limited by the number of workers specified with the `PARALLEL` option, if any, and is further limited by [max_parallel_maintenance_workers](runtime-config-resource.html#GUC-MAX-PARALLEL-MAINTENANCE-WORKERS). An index can participate in parallel vacuum if and only if the size of the index is more than [min_parallel_index_scan_size](runtime-config-query.html#GUC-MIN-PARALLEL-INDEX-SCAN-SIZE). Please note that it is not guaranteed that the number of parallel workers specified in *`integer`* will be used during execution. It is possible for a vacuum to run with fewer workers than specified, or even with no workers at all. Only one worker can be used per index. So parallel workers are launched only when there are at least `2` indexes in the table. Workers for vacuum are launched before the start of each phase and exit at the end of the phase. These behaviors might change in a future release. This option can't be used with the `FULL` option.
+
+*`boolean`*
+
+Specifies whether the selected option should be turned on or off. You can write `TRUE`, `ON`, or `1` to enable the option, and `FALSE`, `OFF`, or `0` to disable it. The *`boolean`* value can also be omitted, in which case `TRUE` is assumed.
+
+*`integer`*
+
+Specifies a non-negative integer value passed to the selected option.
+
+*`table_name`*
+
+The name (optionally schema-qualified) of a specific table or materialized view to vacuum. If the specified table is a partitioned table, all of its leaf partitions are vacuumed.
+
+*`column_name`*
+
+The name of a specific column to analyze. Defaults to all columns. If a column list is specified, `ANALYZE` must also be specified.
+
+## Output
+
+When `VERBOSE` is specified, `VACUUM` emits progress messages to indicate which table is currently being processed. Various statistics about the tables are printed as well.
+
+## Notes
+
+To vacuum a table, one must ordinarily be the table's owner or a superuser. However, database owners are allowed to vacuum all tables in their databases, except shared catalogs. (The restriction for shared catalogs means that a true database-wide `VACUUM` can only be performed by a superuser.) `VACUUM` will skip over any tables that the calling user does not have permission to vacuum.
+
+`VACUUM` cannot be executed inside a transaction block.
+
+For tables with GIN indexes, `VACUUM` (in any form) also completes any pending index insertions, by moving pending index entries to the appropriate places in the main GIN index structure. See [Section 67.4.1](gin-implementation.html#GIN-FAST-UPDATE) for details.
+
+We recommend that active production databases be vacuumed frequently (at least nightly), in order to remove dead rows. After adding or deleting a large number of rows, it might be a good idea to issue a `VACUUM ANALYZE` command for the affected table. This will update the system catalogs with the results of all recent changes, and allow the PostgreSQL query planner to make better choices in planning queries.
+
+The `FULL` option is not recommended for routine use, but might be useful in special cases. An example is when you have deleted or updated most of the rows in a table and would like the table to physically shrink to occupy less disk space and allow faster table scans. `VACUUM FULL` will usually shrink the table more than a plain `VACUUM` would.
+
+The `PARALLEL` option is used only for vacuum purposes. If this option is specified with the `ANALYZE` option, it does not affect `ANALYZE`.
+
+`VACUUM` causes a substantial increase in I/O traffic, which might cause poor performance for other active sessions. Therefore, it is sometimes advisable to use the cost-based vacuum delay feature. For parallel vacuum, each worker sleeps in proportion to the work done by that worker. See [Section 20.4.4](runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-VACUUM-COST) for details.
+
+PostgreSQL includes an "autovacuum" facility which can automate routine vacuum maintenance. For more information about automatic and manual vacuuming, see [Section 25.1](routine-vacuuming.html).
+
+Each backend running `VACUUM` without the `FULL` option will report its progress in the `pg_stat_progress_vacuum` view. Backends running `VACUUM FULL` will instead report their progress in the `pg_stat_progress_cluster` view. See [Section 28.4.3](progress-reporting.html#VACUUM-PROGRESS-REPORTING) and [Section 28.4.4](progress-reporting.html#CLUSTER-PROGRESS-REPORTING) for details.
+
+## Examples
+
+To clean a single table `onek`, analyze it for the optimizer and print a detailed vacuum activity report:
+
+```
+VACUUM (VERBOSE, ANALYZE) onek;
+```
+
+## Compatibility
+
+There is no `VACUUM` statement in the SQL standard.
+
+## See Also
+
+[vacuumdb](app-vacuumdb.html), [Section 20.4.4](runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-VACUUM-COST), [Section 25.1.6](routine-vacuuming.html#AUTOVACUUM), [Section 28.4.3](progress-reporting.html#VACUUM-PROGRESS-REPORTING), [Section 28.4.4](progress-reporting.html#CLUSTER-PROGRESS-REPORTING)
diff --git a/docs/X/sql-values.md b/docs/en/sql-values.md
similarity index 100%
rename from docs/X/sql-values.md
rename to docs/en/sql-values.md
diff --git a/docs/en/sql-values.zh.md b/docs/en/sql-values.zh.md
new file mode 100644
index 0000000000000000000000000000000000000000..7ad0fcc8bc67e9f6baee61a70ba66ca510953189
--- /dev/null
+++ b/docs/en/sql-values.zh.md
@@ -0,0 +1,113 @@
+## VALUES
+
+VALUES — compute a set of rows
+
+## Synopsis
+
+```
+VALUES ( expression [, ...] ) [, ...]
+    [ ORDER BY sort_expression [ ASC | DESC | USING operator ] [, ...] ]
+    [ LIMIT { count | ALL } ]
+    [ OFFSET start [ ROW | ROWS ] ]
+    [ FETCH { FIRST | NEXT } [ count ] { ROW | ROWS } ONLY ]
+```
+
+## Description
+
+`VALUES` computes a row value or set of row values specified by value expressions. It is most commonly used to generate a "constant table" within a larger command, but it can be used on its own.
+
+When more than one row is specified, all the rows must have the same number of elements. The data types of the resulting table's columns are determined by combining the explicit or inferred types of the expressions appearing in that column, using the same rules as for `UNION` (see [Section 10.5](typeconv-union-case.html)).
+
+Within larger commands, `VALUES` is syntactically allowed anywhere that `SELECT` is. Because it is treated like a `SELECT` by the grammar, it is possible to use `ORDER BY`, `LIMIT` (or equivalently `FETCH FIRST`), and `OFFSET` clauses with a `VALUES` command.
+
+## Parameters
+
+*`expression`*
+
+A constant or expression to compute and insert at the indicated place in the resulting table (set of rows). In a `VALUES` list appearing at the top level of an `INSERT`, an *`expression`* can be replaced by `DEFAULT` to indicate that the destination column's default value should be inserted. `DEFAULT` cannot be used when `VALUES` appears in other contexts.
+
+*`sort_expression`*
+
+An expression or integer constant indicating how to sort the result rows. This expression can refer to the columns of the `VALUES` result as `column1`, `column2`, etc. For more details, see the [ORDER BY Clause](sql-select.html#SQL-ORDERBY) in the [SELECT](sql-select.html) documentation.
+
+*`operator`*
+
+A sorting operator. For details, see the [ORDER BY Clause](sql-select.html#SQL-ORDERBY) in the [SELECT](sql-select.html) documentation.
+
+*`count`*
+
+The maximum number of rows to return. For details, see the [LIMIT Clause](sql-select.html#SQL-LIMIT) in the [SELECT](sql-select.html) documentation.
+
+*`start`*
+
+The number of rows to skip before starting to return rows. For details, see the [LIMIT Clause](sql-select.html#SQL-LIMIT) in the [SELECT](sql-select.html) documentation.
+
+## Notes
+
+`VALUES` lists with very large numbers of rows should be avoided, as you might encounter out-of-memory failures or poor performance. A `VALUES` list appearing within `INSERT` is a special case (because the desired column types are known from the `INSERT`'s target table, and need not be inferred by scanning the `VALUES` list), so it can handle larger lists than are practical in other contexts.
+
+## Examples
+
+A bare `VALUES` command:
+
+```
+VALUES (1, 'one'), (2, 'two'), (3, 'three');
+```
+
+This will return a table of two columns and three rows. It's effectively equivalent to:
+
+```
+SELECT 1 AS column1, 'one' AS column2
+UNION ALL
+SELECT 2, 'two'
+UNION ALL
+SELECT 3, 'three';
+```
+
+More usually, `VALUES` is used within a larger SQL command. The most common use is in `INSERT`:
+
+```
+INSERT INTO films (code, title, did, date_prod, kind)
+    VALUES ('T_601', 'Yojimbo', 106, '1961-06-16', 'Drama');
+```
+
+In the context of `INSERT`, entries of a `VALUES` list can be `DEFAULT` to indicate that the column default should be used here instead of specifying a value:
+
+```
+INSERT INTO films VALUES
+    ('UA502', 'Bananas', 105, DEFAULT, 'Comedy', '82 minutes'),
+    ('T_601', 'Yojimbo', 106, DEFAULT, 'Drama', DEFAULT);
+```
+
+`VALUES` can also be used where a sub-`SELECT` might be written, for example in a `FROM` clause:
+
+```
+SELECT f.*
+  FROM films f, (VALUES('MGM', 'Horror'), ('UA', 'Sci-Fi')) AS t (studio, kind)
+  WHERE f.studio = t.studio AND f.kind = t.kind;
+
+UPDATE employees SET salary = salary * v.increase
+  FROM (VALUES(1, 200000, 1.2), (2, 400000, 1.4)) AS v (depno, target, increase)
+  WHERE employees.depno = v.depno AND employees.sales >= v.target;
+```
+
+Note that an `AS` clause is required when `VALUES` is used in a `FROM` clause, just as is true for `SELECT`. It is not required that the `AS` clause specify names for all the columns, but it's good practice to do so. (The default column names for `VALUES` are `column1`, `column2`, etc. in PostgreSQL, but these names might be different in other database systems.)
+
+When `VALUES` is used in `INSERT`, the values are all automatically coerced to the data type of the corresponding destination column. When it's used in other contexts, it might be necessary to specify the correct data type. If the entries are all quoted literal constants, coercing the first is sufficient to determine the assumed type for all:
+
+```
+SELECT * FROM machines
+WHERE ip_address IN (VALUES('192.168.0.1'::inet), ('192.168.0.10'), ('192.168.1.43'));
+```
+
+### Tip
+
+For simple `IN` tests, it's better to rely on the [scalar-list](functions-comparisons.html#FUNCTIONS-COMPARISONS-IN-SCALAR) form of `IN` than to write a `VALUES` query as shown above. The scalar list method requires less writing and is often more efficient.
+
+## Compatibility
+
+`VALUES` conforms to the SQL standard. `LIMIT` and `OFFSET` are PostgreSQL extensions; see also under [SELECT](sql-select.html).
+
+## See Also
+
+[INSERT](sql-insert.html), [SELECT](sql-select.html)
diff --git a/docs/X/sql.md b/docs/en/sql.md
similarity index 100%
rename from docs/X/sql.md
rename to docs/en/sql.md
diff --git a/docs/en/sql.zh.md b/docs/en/sql.zh.md
new file mode 100644
index 0000000000000000000000000000000000000000..4da5c07e92bf70b289f99363f7d57c01e68b38fd
--- /dev/null
+++ b/docs/en/sql.zh.md
@@ -0,0 +1,281 @@
+# Part II. The SQL Language
+
+This part describes the use of the SQL language in PostgreSQL. We start with describing the general syntax of SQL, then explain how to create the structures to hold data, how to populate the database, and how to query it. The middle part lists the available data types and functions for use in SQL commands. The rest treats several aspects that are important for tuning a database for optimal performance.
+
+The information in this part is arranged so that a novice user can follow it start to end to gain a full understanding of the topics without having to refer forward too many times. The chapters are intended to be self-contained, so that advanced users can read the chapters individually as they choose. The information in this part is presented in a narrative fashion in topical units. Readers looking for a complete description of a particular command should see [Part VI](reference.html).
+
+Readers of this part should know how to connect to a PostgreSQL database and issue SQL commands. Readers that are unfamiliar with these issues are encouraged to read [Part I](tutorial.html) first. SQL commands are typically entered using the PostgreSQL interactive terminal psql, but other programs that have similar functionality can be used as well.
+
+**Table of Contents**
+
+[4. SQL Syntax](sql-syntax.html)
+
+[4.1. Lexical Structure](sql-syntax-lexical.html)
+
+[4.2. Value Expressions](sql-expressions.html)
+
+[4.3. Calling Functions](sql-syntax-calling-funcs.html)
+
+[5. Data Definition](ddl.html)
+
+[5.1. Table Basics](ddl-basics.html)
+
+[5.2. Default Values](ddl-default.html)
+
+[5.3. Generated Columns](ddl-generated-columns.html)
+
+[5.4. Constraints](ddl-constraints.html)
+
+[5.5. System Columns](ddl-system-columns.html)
+
+[5.6. Modifying Tables](ddl-alter.html)
+
+[5.7. Privileges](ddl-priv.html)
+
+[5.8. Row Security Policies](ddl-rowsecurity.html)
+
+[5.9. Schemas](ddl-schemas.html)
+
+[5.10. Inheritance](ddl-inherit.html)
+
+[5.11. Table Partitioning](ddl-partitioning.html)
+
+[5.12. Foreign Data](ddl-foreign-data.html)
+
+[5.13. Other Database Objects](ddl-others.html)
+
+[5.14. Dependency Tracking](ddl-depend.html)
+
+[6. Data Manipulation](dml.html)
+
+[6.1. Inserting Data](dml-insert.html)
+
+[6.2. Updating Data](dml-update.html)
+
+[6.3. Deleting Data](dml-delete.html)
+
+[6.4. Returning Data from Modified Rows](dml-returning.html)
+
+[7. Queries](queries.html)
+
+[7.1. Overview](queries-overview.html)
+
+[7.2. Table Expressions](queries-table-expressions.html)
+
+[7.3. Select Lists](queries-select-lists.html)
+
+[7.4. Combining Queries (`UNION`, `INTERSECT`, `EXCEPT`)](queries-union.html)
+
+[7.5. Sorting Rows (`ORDER BY`)](queries-order.html)
+
+[7.6. `LIMIT` and `OFFSET`](queries-limit.html)
+
+[7.7. `VALUES` Lists](queries-values.html)
+
+[7.8. `WITH` Queries (Common Table Expressions)](queries-with.html)
+
+[8. Data Types](datatype.html)
+
+[8.1. Numeric Types](datatype-numeric.html)
+
+[8.2. Monetary Types](datatype-money.html)
+
+[8.3. Character Types](datatype-character.html)
+
+[8.4. Binary Data Types](datatype-binary.html)
+
+[8.5. Date/Time Types](datatype-datetime.html)
+
+[8.6. Boolean Type](datatype-boolean.html)
+
+[8.7. Enumerated Types](datatype-enum.html)
+
+[8.8. Geometric Types](datatype-geometric.html)
+
+[8.9. Network Address Types](datatype-net-types.html)
+
+[8.10. Bit String Types](datatype-bit.html)
+
+[8.11. Text Search Types](datatype-textsearch.html)
+
+[8.12. UUID Type](datatype-uuid.html)
+
+[8.13. XML Type](datatype-xml.html)
+
+[8.14. JSON Types](datatype-json.html)
+
+[8.15. Arrays](arrays.html)
+
+[8.16. Composite Types](rowtypes.html)
+
+[8.17. Range Types](rangetypes.html)
+
+[8.18. Domain Types](domains.html)
+
+[8.19. Object Identifier Types](datatype-oid.html)
+
+[8.20. `pg_lsn` Type](datatype-pg-lsn.html)
+
+[8.21. Pseudo-Types](datatype-pseudo.html)
+
+[9. Functions and Operators](functions.html)
+
+[9.1. Logical Operators](functions-logical.html)
+
+[9.2. Comparison Functions and Operators](functions-comparison.html)
+
+[9.3. Mathematical Functions and Operators](functions-math.html)
+
+[9.4. String Functions and Operators](functions-string.html)
+
+[9.5. Binary String Functions and Operators](functions-binarystring.html)
+
+[9.6. Bit String Functions and Operators](functions-bitstring.html)
+
+[9.7. Pattern Matching](functions-matching.html)
+
+[9.8. Data Type Formatting Functions](functions-formatting.html)
+
+[9.9. Date/Time Functions and Operators](functions-datetime.html)
+
+[9.10. Enum Support Functions](functions-enum.html)
+
+[9.11. Geometric Functions and Operators](functions-geometry.html)
+
+[9.12. Network Address Functions and Operators](functions-net.html)
+
+[9.13. Text Search Functions and Operators](functions-textsearch.html)
+
+[9.14. UUID Functions](functions-uuid.html)
+
+[9.15. XML Functions](functions-xml.html)
+
+[9.16. JSON Functions and Operators](functions-json.html)
+
+[9.17. Sequence Manipulation Functions](functions-sequence.html)
+
+[9.18. Conditional Expressions](functions-conditional.html)
+
+[9.19. Array Functions and Operators](functions-array.html)
+
+[9.20. Range/Multirange Functions and Operators](functions-range.html)
+
+[9.21. Aggregate Functions](functions-aggregate.html)
+
+[9.22. Window Functions](functions-window.html)
+
+[9.23. Subquery Expressions](functions-subquery.html)
+
+[9.24. Row and Array Comparisons](functions-comparisons.html)
+
+[9.25. Set Returning Functions](functions-srf.html)
+
+[9.26. System Information Functions and Operators](functions-info.html)
+
+[9.27. System Administration Functions](functions-admin.html)
+
+[9.28. Trigger Functions](functions-trigger.html)
+
+[9.29. Event Trigger Functions](functions-event-triggers.html)
+
+[9.30. Statistics Information Functions](functions-statistics.html)
+
+[10. Type Conversion](typeconv.html)
+
+[10.1. Overview](typeconv-overview.html)
+
+[10.2. Operators](typeconv-oper.html)
+
+[10.3. Functions](typeconv-func.html)
+
+[10.4. Value Storage](typeconv-query.html)
+
+[10.5. `UNION`, `CASE`, and Related Constructs](typeconv-union-case.html)
+
+[10.6. `SELECT` Output Columns](typeconv-select.html)
+
+[11. Indexes](indexes.html)
+
+[11.1. Introduction](indexes-intro.html)
+
+[11.2. Index Types](indexes-types.html)
+
+[11.3. Multicolumn Indexes](indexes-multicolumn.html)
+
+[11.4. Indexes and `ORDER BY`](indexes-ordering.html)
+
+[11.5. Combining Multiple Indexes](indexes-bitmap-scans.html)
+
+[11.6. Unique Indexes](indexes-unique.html)
+
+[11.7. Indexes on Expressions](indexes-expressional.html)
+
+[11.8. Partial Indexes](indexes-partial.html)
+
+[11.9. Index-Only Scans and Covering Indexes](indexes-index-only-scans.html)
+
+[11.10. Operator Classes and Operator Families](indexes-opclass.html)
+
+[11.11. Indexes and Collations](indexes-collations.html)
+
+[11.12. Examining Index Usage](indexes-examine.html)
+
+[12. Full Text Search](textsearch.html)
+
+[12.1. Introduction](textsearch-intro.html)
+
+[12.2. Tables and Indexes](textsearch-tables.html)
+
+[12.3. Controlling Text Search](textsearch-controls.html)
+
+[12.4. Additional Features](textsearch-features.html)
+
+[12.5. Parsers](textsearch-parsers.html)
+
+[12.6. Dictionaries](textsearch-dictionaries.html)
+
+[12.7. Configuration Example](textsearch-configuration.html)
+
+[12.8. Testing and Debugging Text Search](textsearch-debugging.html)
+
+[12.9. GIN and GiST Index Types](textsearch-indexes.html)
+
+[12.10. psql Support](textsearch-psql.html)
+
+[12.11. Limitations](textsearch-limitations.html)
+
+[13. Concurrency Control](mvcc.html)
+
+[13.1. Introduction](mvcc-intro.html)
+
+[13.2. Transaction Isolation](transaction-iso.html)
+
+[13.3. Explicit Locking](explicit-locking.html)
+
+[13.4. Data Consistency Checks at the Application Level](applevel-consistency.html)
+
+[13.5. Caveats](mvcc-caveats.html)
+
+[13.6. Locking and Indexes](locking-indexes.html)
+
+[14. Performance Tips](performance-tips.html)
+
+[14.1. Using `EXPLAIN`](using-explain.html)
+
+[14.2. Statistics Used by the Planner](planner-stats.html)
+
+[14.3. Controlling the Planner with Explicit `JOIN` Clauses](explicit-joins.html)
+
+[14.4. Populating a Database](populate.html)
+
+[14.5. Non-Durable Settings](non-durability.html)
+
+[15. Parallel Query](parallel-query.html)
+
+[15.1. How Parallel Query Works](how-parallel-query-works.html)
+
+[15.2. When Can Parallel Query Be Used?](when-can-parallel-query-be-used.html)
+
+[15.3. Parallel Plans](parallel-plans.html)
+
+[15.4. Parallel Safety](parallel-safety.html)
diff --git a/docs/X/ssh-tunnels.md b/docs/en/ssh-tunnels.md
similarity index 100%
rename from docs/X/ssh-tunnels.md
rename to docs/en/ssh-tunnels.md
diff --git a/docs/en/ssh-tunnels.zh.md b/docs/en/ssh-tunnels.zh.md
new file mode 100644
index 0000000000000000000000000000000000000000..8c9c887a4a8df5549d6f328802a4f8824db1feed
--- /dev/null
+++ b/docs/en/ssh-tunnels.zh.md
@@ -0,0 +1,41 @@
+## 19.11. Secure TCP/IP Connections with SSH Tunnels
+
+It is possible to use SSH to encrypt the network connection between clients and a PostgreSQL server. Done properly, this provides an adequately secure network connection, even for non-SSL-capable clients.
+
+First make sure that an SSH server is running properly on the same machine as the PostgreSQL server and that you can log in using `ssh` as some user; you then can establish a secure tunnel to the remote server. A secure tunnel listens on a local port and forwards all traffic to a port on the remote machine. Traffic sent to the remote port can arrive on its `localhost` address, or a different bind address if desired; it does not appear as coming from your local machine. This command creates a secure tunnel from the client machine to the remote machine `foo.com`:
+
+```
+ssh -L 63333:localhost:5432 joe@foo.com
+```
+
+The first number in the `-L` argument, 63333, is the local port number of the tunnel; it can be any unused port. (IANA reserves ports 49152 through 65535 for private use.) The name or IP address after this is the remote bind address you are connecting to, i.e., `localhost`, which is the default. The second number, 5432, is the remote end of the tunnel, e.g., the port number your database server is using. In order to connect to the database server using this tunnel, you connect to port 63333 on the local machine:
+
+```
+psql -h localhost -p 63333 postgres
+```
+
+To the database server it will then look as though you are user `joe` on host `foo.com` connecting to the `localhost` bind address, and it will use whatever authentication procedure was configured for connections by that user to that bind address. Note that the server will not think the connection is SSL-encrypted, since in fact it is not encrypted between the SSH server and the PostgreSQL server. This should not pose any extra security risk because they are on the same machine.
+
+In order for the tunnel setup to succeed you must be allowed to connect via `ssh` as `joe@foo.com`, just as if you had attempted to use `ssh` to create a terminal session.
+
+You could also have set up port forwarding as
+
+```
+ssh -L 63333:foo.com:5432 joe@foo.com
+```
+
+but then the database server will see the connection as coming in on its `foo.com` bind address, which is not opened by the default setting `listen_addresses = 'localhost'`. This is usually not what you want.
+
+If you have to "hop" to the database server via some login host, one possible setup could look like this:
+
+```
+ssh -L 63333:db.foo.com:5432 joe@shell.foo.com
+```
+
+Note that this way the connection from `shell.foo.com` to `db.foo.com` will not be encrypted by the SSH tunnel. SSH offers quite a few configuration possibilities when the network is restricted in various ways. Please refer to the SSH documentation for details.
+
+### Tip
+
+Several other applications exist that can provide secure tunnels using a procedure similar in concept to the one just described.
diff --git a/docs/X/ssl-tcp.md b/docs/en/ssl-tcp.md
similarity index 100%
rename from docs/X/ssl-tcp.md
rename to docs/en/ssl-tcp.md
diff --git a/docs/en/ssl-tcp.zh.md b/docs/en/ssl-tcp.zh.md
new file mode 100644
index 0000000000000000000000000000000000000000..5d8512f44cff4f1080d763b8c2561c22849bfb55
--- /dev/null
+++ b/docs/en/ssl-tcp.zh.md
@@ -0,0 +1,154 @@
+## 19.9. Secure TCP/IP Connections with SSL
+
+[19.9.1. Basic Setup](ssl-tcp.html#SSL-SETUP)
+
+[19.9.2. OpenSSL Configuration](ssl-tcp.html#SSL-OPENSSL-CONFIG)
+
+[19.9.3. Using Client Certificates](ssl-tcp.html#SSL-CLIENT-CERTIFICATES)
+
+[19.9.4. SSL Server File Usage](ssl-tcp.html#SSL-SERVER-FILES)
+
+[19.9.5. Creating Certificates](ssl-tcp.html#SSL-CERTIFICATE-CREATION)
+
+PostgreSQL has native support for using SSL connections to encrypt client/server communications for increased security. This requires that OpenSSL is installed on both client and server systems and that support in PostgreSQL is enabled at build time (see [Chapter 17](installation.html)).
+
+### 19.9.1. Basic Setup
+
+With SSL support compiled in, the PostgreSQL server can be started with SSL enabled by setting the parameter [ssl](runtime-config-connection.html#GUC-SSL) to `on` in `postgresql.conf`. The server will listen for both normal and SSL connections on the same TCP port, and will negotiate with any connecting client on whether to use SSL. By default, this is at the client's option; see [Section 21.1](auth-pg-hba-conf.html) about how to set up the server to require use of SSL for some or all connections.
+
+To start in SSL mode, files containing the server certificate and private key must exist. By default, these files are expected to be named `server.crt` and `server.key`, respectively, in the server's data directory, but other names and locations can be specified using the configuration parameters [ssl_cert_file](runtime-config-connection.html#GUC-SSL-CERT-FILE) and [ssl_key_file](runtime-config-connection.html#GUC-SSL-KEY-FILE).
+
+On Unix systems, the permissions on `server.key` must disallow any access to world or group; achieve this by the command `chmod 0600 server.key`. Alternatively, the file can be owned by root and have group read access (that is, `0640` permissions). That setup is intended for installations where certificate and key files are managed by the operating system. The user under which the PostgreSQL server runs should then be made a member of the group that has access to those certificate and key files.
+
+If the data directory allows group read access, then certificate files may need to be located outside of the data directory in order to conform to the security requirements outlined above. Generally, group access is enabled to allow an unprivileged user to back up the database, and in that case the backup software will not be able to read the certificate files and will likely error.
+
+If the private key is protected with a passphrase, the server will prompt for the passphrase and will not start until it has been entered. Using a passphrase by default disables the ability to change the server's SSL configuration without a server restart, but see [ssl_passphrase_command_supports_reload](runtime-config-connection.html#GUC-SSL-PASSPHRASE-COMMAND-SUPPORTS-RELOAD). Furthermore, passphrase-protected private keys cannot be used at all on Windows.
+
+The first certificate in `server.crt` must be the server's certificate because it must match the server's private key. The certificates of "intermediate" certificate authorities can also be appended to the file. Doing this avoids the necessity of storing intermediate certificates on clients, assuming the root and intermediate certificates were created with `v3_ca` extensions. (This sets the certificate's basic constraint of `CA` to `true`.) This allows easier expiration of intermediate certificates.
+
+It is not necessary to add the root certificate to `server.crt`. Instead, clients must have the root certificate of the server's certificate chain.
+
+### 19.9.2. OpenSSL Configuration
+
+PostgreSQL reads the system-wide OpenSSL configuration file. By default, this file is named `openssl.cnf` and is located in the directory reported by `openssl version -d`. This default can be overridden by setting environment variable `OPENSSL_CONF` to the name of the desired configuration file.
+
+OpenSSL supports a wide range of ciphers and authentication algorithms, of varying strength. While a list of ciphers can be specified in the OpenSSL configuration file, you can specify ciphers specifically for use by the database server by modifying [ssl_ciphers](runtime-config-connection.html#GUC-SSL-CIPHERS) in `postgresql.conf`.
+
+### Note
+
+It is possible to have authentication without encryption overhead by using `NULL-SHA` or `NULL-MD5` ciphers. However, a man-in-the-middle could read and pass communications between client and server. Also, encryption overhead is minimal compared to the overhead of authentication. For these reasons NULL ciphers are not recommended.
+
+### 19.9.3. Using Client Certificates
+
+To require the client to supply a trusted certificate, place certificates of the root certificate authorities (CAs) you trust in a file in the data directory, set the parameter [ssl_ca_file](runtime-config-connection.html#GUC-SSL-CA-FILE) in `postgresql.conf` to the new file name, and add the authentication option `clientcert=verify-ca` or `clientcert=verify-full` to the appropriate `hostssl` line(s) in `pg_hba.conf`. A certificate will then be requested from the client during SSL connection startup. (See [Section 34.19](libpq-ssl.html) for a description of how to set up certificates on the client.)
+
+For a `hostssl` entry with `clientcert=verify-ca`, the server will verify that the client's certificate is signed by one of the trusted certificate authorities. If `clientcert=verify-full` is specified, the server will not only verify the certificate chain, but it will also check whether the username or its mapping matches the `cn` (Common Name) of the provided certificate. Note that certificate chain validation is always ensured when the `cert` authentication method is used (see [Section 21.12](auth-cert.html)).
+
+Intermediate certificates that chain up to existing root certificates can also appear in the [ssl_ca_file](runtime-config-connection.html#GUC-SSL-CA-FILE) file if you wish to avoid storing them on clients (assuming the root and intermediate certificates were created with `v3_ca` extensions). Certificate Revocation List (CRL) entries are also checked if the parameter [ssl_crl_file](runtime-config-connection.html#GUC-SSL-CRL-FILE) or [ssl_crl_dir](runtime-config-connection.html#GUC-SSL-CRL-DIR) is set.
+
+The `clientcert` authentication option is available for all authentication methods, but only in `pg_hba.conf` lines specified as `hostssl`. When `clientcert` is not specified, the server verifies the client certificate against its CA file only if a client certificate is presented and the CA is configured.
+
+There are two approaches to enforce that users provide a certificate during login.
+
+The first approach makes use of the `cert` authentication method for `hostssl` entries in `pg_hba.conf`, such that the certificate itself is used for authentication while also providing ssl connection security. See [Section 21.12](auth-cert.html) for details. (It is not necessary to specify any `clientcert` options explicitly when using the `cert` authentication method.) In this case, the `cn` (Common Name) provided in the certificate is checked against the user name or an applicable mapping.
+
+The second approach combines any authentication method for `hostssl` entries with the verification of client certificates by setting the `clientcert` authentication option to `verify-ca` or `verify-full`. The former option only enforces that the certificate is valid, while the latter also ensures that the `cn` (Common Name) in the certificate matches the user name or an applicable mapping.
+
+### 19.9.4. SSL Server File Usage
+
+[Table 19.2](ssl-tcp.html#SSL-FILE-USAGE) summarizes the files that are relevant to the SSL setup on the server. (The shown file names are default names. The locally configured names could be different.)
+
+**Table 19.2. SSL Server File Usage**
+
+| File | Contents | Effect |
+| ---- | -------- | ------ |
+| [ssl_cert_file](runtime-config-connection.html#GUC-SSL-CERT-FILE) (`$PGDATA/server.crt`) | server certificate | sent to client to indicate server's identity |
+| [ssl_key_file](runtime-config-connection.html#GUC-SSL-KEY-FILE) (`$PGDATA/server.key`) | server private key | proves server certificate was sent by the owner; does not indicate certificate owner is trustworthy |
+| [ssl_ca_file](runtime-config-connection.html#GUC-SSL-CA-FILE) | trusted certificate authorities | checks that client certificate is signed by a trusted certificate authority |
+| [ssl_crl_file](runtime-config-connection.html#GUC-SSL-CRL-FILE) | certificates revoked by certificate authorities | client certificate must not be on this list |
+
+The server reads these files at server start and whenever the server configuration is reloaded. On Windows systems, they are also re-read whenever a new backend process is spawned for a new client connection.
+
+If an error in these files is detected at server start, the server will refuse to start. But if an error is detected during a configuration reload, the files are ignored and the old SSL configuration continues to be used. On Windows systems, if an error in these files is detected at backend start, that backend will be unable to establish an SSL connection. In all these cases, the error condition is reported in the server log.
+
+### 19.9.5. Creating Certificates
+
+To create a simple self-signed certificate for the server, valid for 365 days, use the following OpenSSL command, replacing *`dbhost.yourdomain.com`* with the server's host name:
+
+```
+openssl req -new -x509 -days 365 -nodes -text -out server.crt \
+  -keyout server.key -subj "/CN=dbhost.yourdomain.com"
+```
+
+Then do:
+
+```
+chmod og-rwx server.key
+```
+
+because the server will reject the file if its permissions are more liberal than this. For more details on how to create your server private key and certificate, refer to the OpenSSL documentation.
+
+While a self-signed certificate can be used for testing, a certificate signed by a certificate authority (CA) (usually an enterprise-wide root CA) should be used in production.
+
+To create a server certificate whose identity can be validated by clients, first create a certificate signing request (CSR) and a public/private key file:
+
+```
+openssl req -new -nodes -text -out root.csr \
+  -keyout root.key -subj "/CN=root.yourdomain.com"
+chmod og-rwx root.key
+```
+
+Then, sign the request with the key to create a root certificate authority (using the default OpenSSL configuration file location on Linux):
+
+```
+openssl x509 -req -in root.csr -text -days 3650 \
+  -extfile /etc/ssl/openssl.cnf -extensions v3_ca \
+  -signkey root.key -out root.crt
+```
+
+Finally, create a server certificate signed by the new root certificate authority:
+
+```
+openssl req -new -nodes -text -out server.csr \
+  -keyout server.key -subj "/CN=dbhost.yourdomain.com"
+chmod og-rwx server.key
+
+openssl x509 -req -in server.csr -text -days 365 \
+  -CA root.crt -CAkey root.key -CAcreateserial \
+  -out server.crt
+```
+
+`server.crt` and `server.key` should be stored on the server, and `root.crt` should be stored on the client so the client can verify that the server's leaf certificate was signed by its trusted root certificate. `root.key` should be stored offline for use in creating future certificates.
+
+It is also possible to create a chain of trust that includes intermediate certificates:
+
+```
+# root
+openssl req -new -nodes -text -out root.csr \
+  -keyout root.key -subj "/CN=root.yourdomain.com"
+chmod og-rwx root.key
+openssl x509 -req -in root.csr -text -days 3650 \
+  -extfile /etc/ssl/openssl.cnf -extensions v3_ca \
+  -signkey root.key -out root.crt
+
+# intermediate
+openssl req -new -nodes -text -out intermediate.csr \
+  -keyout intermediate.key -subj "/CN=intermediate.yourdomain.com"
+chmod og-rwx intermediate.key
+openssl x509 -req -in intermediate.csr -text -days 1825 \
+  -extfile /etc/ssl/openssl.cnf -extensions v3_ca \
+  -CA root.crt -CAkey root.key -CAcreateserial \
+  -out intermediate.crt
+
+# leaf
+openssl req -new -nodes -text -out server.csr \
+  -keyout server.key -subj "/CN=dbhost.yourdomain.com"
+chmod og-rwx server.key
+openssl x509 -req -in server.csr -text -days 365 \
+  -CA intermediate.crt -CAkey intermediate.key -CAcreateserial \
+  -out server.crt
+```
+
+`server.crt` and `intermediate.crt` should be concatenated into a certificate file bundle and stored on the server. `server.key` should also be stored on the server. `root.crt` should be stored on the client so the client can verify that the server's leaf certificate was signed by a chain of certificates linked to its trusted root certificate. `root.key` and `intermediate.key` should be stored offline for use in creating future certificates.
diff --git a/docs/X/sslinfo.md b/docs/en/sslinfo.md
similarity index 100%
rename from docs/X/sslinfo.md
rename to docs/en/sslinfo.md
diff --git a/docs/en/sslinfo.zh.md b/docs/en/sslinfo.zh.md
new file mode 100644
index 0000000000000000000000000000000000000000..9d997711b376e5fbce05b58fcd9428721529d868
--- /dev/null
+++ b/docs/en/sslinfo.zh.md
@@ -0,0 +1,95 @@
+## F.39. sslinfo
+
+[F.39.1. Functions Provided](sslinfo.html#id-1.11.7.48.6)
+
+[F.39.2. Author](sslinfo.html#id-1.11.7.48.7)
+
+The `sslinfo` module provides information about the SSL certificate that the current client provided when connecting to PostgreSQL. The module is useless (most functions will return NULL) if the current connection does not use SSL.
+
+Some of the information available through this module can also be obtained using the built-in system view [`pg_stat_ssl`](monitoring-stats.html#MONITORING-PG-STAT-SSL-VIEW).
+
+This extension won't build at all unless the installation was configured with `--with-ssl=openssl`.
+
+### F.39.1. Functions Provided
+
+`ssl_is_used() returns boolean`
+
+Returns true if current connection to server uses SSL, and false otherwise.
+
+`ssl_version() returns text`
+
+Returns the name of the protocol used for the SSL connection (e.g., TLSv1.0, TLSv1.1, TLSv1.2 or TLSv1.3).
+
+`ssl_cipher() returns text`
+
+Returns the name of the cipher used for the SSL connection (e.g., DHE-RSA-AES256-SHA).
+
+`ssl_client_cert_present() returns boolean`
+
+Returns true if current client has presented a valid SSL client certificate to the server, and false otherwise. (The server might or might not be configured to require a client certificate.)
+
+`ssl_client_serial() returns numeric`
+
+Returns serial number of current client certificate. The combination of certificate serial number and certificate issuer is guaranteed to uniquely identify a certificate (but not its owner — the owner ought to regularly change their keys, and get new certificates from the issuer).
+
+So, if you run your own CA and allow only certificates from this CA to be accepted by the server, the serial number is the most reliable (albeit not very mnemonic) means to identify a user.
+
+`ssl_client_dn() returns text`
+
+Returns the full subject of the current client certificate, converting character data into the current database encoding. It is assumed that if you use non-ASCII characters in the certificate names, your database is able to represent these characters, too. If your database uses the SQL_ASCII encoding, non-ASCII characters in the name will be represented as UTF-8 sequences.
+
+The result looks like `/CN=Somebody /C=Some country/O=Some organization`.
+
+`ssl_issuer_dn() returns text`
+
+Returns the full issuer name of the current client certificate, converting character data into the current database encoding. Encoding conversions are handled the same as for `ssl_client_dn`.
+
+The combination of the return value of this function with the certificate serial number uniquely identifies the certificate.
+
+This function is really useful only if you have more than one trusted CA certificate in your server's certificate authority file, or if this CA has issued some intermediate certificate authority certificates.
+
+`ssl_client_dn_field(fieldname text) returns text`
+
+This function returns the value of the specified field in the certificate subject, or NULL if the field is not present. Field names are string constants that are converted into ASN1 object identifiers using the OpenSSL object database. The following values are acceptable:
+
+```
+commonName (alias CN)
+surname (alias SN)
+name
+givenName (alias GN)
+countryName (alias C)
+localityName (alias L)
+stateOrProvinceName (alias ST)
+organizationName (alias O)
+organizationalUnitName (alias OU)
+title
+description
+initials
+postalCode
+streetAddress
+generationQualifier
+description
+dnQualifier
+x500UniqueIdentifier
+pseudonym
+role
+emailAddress
+```
+
+All of these fields are optional, except `commonName`. It depends entirely on your CA's policy which of them would be included and which wouldn't. The meaning of these fields, however, is strictly defined by the X.500 and X.509 standards, so you cannot just assign arbitrary meaning to them.
+
+`ssl_issuer_field(fieldname text) returns text`
+
+Same as `ssl_client_dn_field`, but for the certificate issuer rather than the certificate subject.
+
+`ssl_extension_info() returns setof record`
+
+Provide information about extensions of client certificate: extension name, extension value, and if it is a critical extension.
+
+### F.39.2. Author
+
+Victor Wagner `<[vitus@cryptocom.ru](mailto:vitus@cryptocom.ru)>`, Cryptocom LTD
+
+Dmitry Voronin `<[carriingfate92@yandex.ru](mailto:carriingfate92@yandex.ru)>`
+
+E-Mail of Cryptocom OpenSSL development group: `<[openssl@cryptocom.ru](mailto:openssl@cryptocom.ru)>`
diff --git a/docs/X/sspi-auth.md b/docs/en/sspi-auth.md
similarity index 100%
rename from docs/X/sspi-auth.md
rename to docs/en/sspi-auth.md
diff --git a/docs/en/sspi-auth.zh.md b/docs/en/sspi-auth.zh.md
new file mode 100644
index 0000000000000000000000000000000000000000..71185b9a7c0951a1e22f5c1d7eaa080f9d37408e
--- /dev/null
+++ b/docs/en/sspi-auth.zh.md
@@ -0,0 +1,33 @@
+## 21.7. SSPI Authentication
+
+SSPI is a Windows technology for secure authentication with single sign-on. PostgreSQL will use SSPI in `negotiate` mode, which will use Kerberos when possible and automatically fall back to NTLM in other cases. SSPI authentication only works when both server and client are running Windows, or, on non-Windows platforms, when GSSAPI is available.
+
+When using Kerberos authentication, SSPI works the same way GSSAPI does; see [Section 21.6](gssapi-auth.html) for details.
+
+The following configuration options are supported for SSPI:
+
+`include_realm`
+
+If set to 0, the realm name from the authenticated user principal is stripped off before being passed through the user name mapping ([Section 21.2](auth-username-maps.html)). This is discouraged and is primarily available for backwards compatibility, as it is not secure in multi-realm environments unless `krb_realm` is also used. It is recommended to leave `include_realm` set to the default (1) and to provide an explicit mapping in `pg_ident.conf` to convert principal names to PostgreSQL user names.
+
+`compat_realm`
+
+If set to 1, the domain's SAM-compatible name (also known as the NetBIOS name) is used for the `include_realm` option. This is the default. If set to 0, the true realm name from the Kerberos user principal name is used.
+
+Do not disable this option unless your server runs under a domain account (this includes virtual service accounts on a domain member system) and all clients authenticating through SSPI are also using domain accounts, or authentication will fail.
+
+`upn_username`
+
+If this option is enabled along with `compat_realm`, the user name from the Kerberos UPN is used for authentication. If it is disabled (the default), the SAM-compatible user name is used. By default, these two names are identical for new user accounts.
+
+Note that libpq uses the SAM-compatible name if no explicit user name is specified. If you use libpq or a driver based on it, you should leave this option disabled or explicitly specify the user name in the connection string.
+
+`map`
+
+Allows for mapping between system and database user names. See [Section 21.2](auth-username-maps.html) for details. For an SSPI/Kerberos principal, such as `username@EXAMPLE.COM` (or, less commonly, `username/hostbased@EXAMPLE.COM`), the user name used for mapping is `username@EXAMPLE.COM` (or `username/hostbased@EXAMPLE.COM`, respectively), unless `include_realm` has been set to 0, in which case `username` (or `username/hostbased`) is what is seen as the system user name when mapping.
+
+`krb_realm`
+
+Sets the realm to match user principal names against. If this parameter is set, only users of that realm will be accepted. If it is not set, users of any realm can connect, subject to whatever user name mapping is done.
diff --git a/docs/X/storage-file-layout.md b/docs/en/storage-file-layout.md
similarity index 100%
rename from docs/X/storage-file-layout.md
rename to docs/en/storage-file-layout.md
diff --git a/docs/en/storage-file-layout.zh.md b/docs/en/storage-file-layout.zh.md
new file mode 100644
index 0000000000000000000000000000000000000000..a30af5694bcebbda719653d04a896f547c6eef3b
--- /dev/null
+++ b/docs/en/storage-file-layout.zh.md
@@ -0,0 +1,56 @@
+## 70.1. Database File Layout
+
+This section describes the storage format at the level of files and directories.
+
+Traditionally, the configuration and data files used by a database cluster are stored together within the cluster's data directory, commonly referred to as `PGDATA` (after the name of the environment variable that can be used to define it). A common location for `PGDATA` is `/var/lib/pgsql/data`. Multiple clusters, managed by different server instances, can exist on the same machine.
+
+The `PGDATA` directory contains several subdirectories and control files, as shown in [Table 70.1](storage-file-layout.html#PGDATA-CONTENTS-TABLE). In addition to these required items, the cluster configuration files `postgresql.conf`, `pg_hba.conf`, and `pg_ident.conf` are traditionally stored in `PGDATA`, although it is possible to place them elsewhere.
+
+**Table 70.1. Contents of `PGDATA`**
+
+| Item | Description |
+| ---- | ----------- |
+| `PG_VERSION` | A file containing the major version number of PostgreSQL |
+| `base` | Subdirectory containing per-database subdirectories |
+| `current_logfiles` | File recording the log file(s) currently written to by the logging collector |
+| `global` | Subdirectory containing cluster-wide tables, such as `pg_database` |
+| `pg_commit_ts` | Subdirectory containing transaction commit timestamp data |
+| `pg_dynshmem` | Subdirectory containing files used by the dynamic shared memory subsystem |
+| `pg_logical` | Subdirectory containing status data for logical decoding |
+| `pg_multixact` | Subdirectory containing multitransaction status data (used for shared row locks) |
+| `pg_notify` | Subdirectory containing LISTEN/NOTIFY status data |
+| `pg_replslot` | Subdirectory containing replication slot data |
+| `pg_serial` | Subdirectory containing information about committed serializable transactions |
+| `pg_snapshots` | Subdirectory containing exported snapshots |
+| `pg_stat` | Subdirectory containing permanent files for the statistics subsystem |
+| `pg_stat_tmp` | Subdirectory containing temporary files for the statistics subsystem |
+| `pg_subtrans` | Subdirectory containing subtransaction status data |
+| `pg_tblspc` | Subdirectory containing symbolic links to tablespaces |
+| `pg_twophase` | Subdirectory containing state files for prepared transactions |
+| `pg_wal` | Subdirectory containing WAL (Write Ahead Log) files |
+| `pg_xact` | Subdirectory containing transaction commit status data |
+| `postgresql.auto.conf` | A file used for storing configuration parameters that are set by `ALTER SYSTEM` |
+| `postmaster.opts` | A file recording the command-line options the server was last started with |
+| `postmaster.pid` | A lock file recording the current postmaster process ID (PID), cluster data directory path, postmaster start timestamp, port number, Unix-domain socket directory path (could be empty), first valid listen_address (IP address or `*`, or empty if not listening on TCP), and shared memory segment ID (this file is not present after server shutdown) |
+
+For each database in the cluster there is a subdirectory within `PGDATA/base`, named after the database's OID in `pg_database`. This subdirectory is the default location for the database's files; in particular, its system catalogs are stored there.
+
+Note that the following sections describe the behavior of the builtin `heap` [table access method](tableam.html), and the builtin [index access methods](indexam.html). Due to the extensible nature of PostgreSQL, other access methods might work differently.
+
+Each table and index is stored in a separate file. For ordinary relations, these files are named after the table or index's *filenode* number, which can be found in `pg_class`.`relfilenode`. But for temporary relations, the file name is of the form `tBBB_FFF`, where *`BBB`* is the backend ID of the backend which created the file, and *`FFF`* is the filenode number. In either case, in addition to the main file (a/k/a main fork), each table and index has a *free space map* (see [Section 70.3](storage-fsm.html)), which stores information about free space available in the relation. The free space map is stored in a file named with the filenode number plus the suffix `_fsm`. Tables also have a *visibility map*, stored in a fork with the suffix `_vm`, to track which pages are known to have no dead tuples. The visibility map is described further in [Section 70.4](storage-vm.html). Unlogged tables and indexes have a third fork, known as the initialization fork, which is stored in a fork with the suffix `_init` (see [Section 70.5](storage-init.html)).
+
+### Caution
+
+Note that while a table's filenode often matches its OID, this is *not* necessarily the case; some operations, like `TRUNCATE`, `REINDEX`, `CLUSTER` and some forms of `ALTER TABLE`, can change the filenode while preserving the OID. Avoid assuming that filenode and table OID are the same. Also, for certain system catalogs including `pg_class` itself, `pg_class`.`relfilenode` contains zero. The actual filenode number of these catalogs is stored in a lower-level data structure, and can be obtained using the `pg_relation_filenode()` function.
+
+When a table or index exceeds 1 GB, it is divided into gigabyte-sized *segments*. The first segment's file name is the same as the filenode; subsequent segments are named filenode.1, filenode.2, etc. This arrangement avoids problems on platforms that have file size limitations. (Actually, 1 GB is just the default segment size. The segment size can be adjusted using the configuration option `--with-segsize` when building PostgreSQL.) In principle, free space map and visibility map forks could require multiple segments as well, though this is unlikely to happen in practice.
+
+A table that has columns with potentially large entries will have an associated *TOAST* table, which is used for out-of-line storage of field values that are too large to keep in the table rows proper. `pg_class`.`reltoastrelid` links from a table to its TOAST table, if any. See [Section 70.2](storage-toast.html) for more information.
+
+The contents of tables and indexes are discussed further in [Section 70.6](storage-page-layout.html).
+
+Tablespaces make the scenario more complicated. Each user-defined tablespace has a symbolic link inside the `PGDATA/pg_tblspc` directory, which points to the physical tablespace directory (i.e., the location specified in the tablespace's `CREATE TABLESPACE` command). This symbolic link is named after the tablespace's OID. Inside the physical tablespace directory there is a subdirectory with a name that depends on the PostgreSQL server version, such as `PG_9.0_201008051`. (The reason for using this subdirectory is so that successive versions of the database can use the same `CREATE TABLESPACE` location value without conflicts.) Within the version-specific subdirectory, there is a subdirectory for each database that has elements in the tablespace, named after the database's OID. Tables and indexes are stored within that directory, using the filenode naming scheme. The `pg_default` tablespace is not accessed through `pg_tblspc`, but corresponds to `PGDATA/base`. Similarly, the `pg_global` tablespace is not accessed through `pg_tblspc`, but corresponds to `PGDATA/global`.
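+
+These on-disk names can be correlated with the system catalogs. As a minimal sketch (`mytable` is a hypothetical table name, and the OIDs will differ in every installation):
+
+```
+SELECT oid, datname FROM pg_database;    -- names of the base/<oid> subdirectories
+SELECT oid, spcname FROM pg_tablespace;  -- names of the pg_tblspc/<oid> symlinks
+SELECT relfilenode FROM pg_class
+  WHERE relname = 'mytable';             -- file name within the database directory
+```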
+
+The `pg_relation_filepath()` function shows the entire path (relative to `PGDATA`) of any relation. It is often useful as a substitute for remembering many of the above rules. But keep in mind that this function just gives the name of the first segment of the main fork of the relation; you may need to append a segment number and/or `_fsm`, `_vm`, or `_init` to find all the files associated with the relation.
+
+Temporary files (for operations such as sorting more data than can fit in memory) are created within `PGDATA/base/pgsql_tmp`, or within a `pgsql_tmp` subdirectory of a tablespace directory if a tablespace other than `pg_default` is specified for them. The name of a temporary file has the form `pgsql_tmpPPP.NNN`, where *`PPP`* is the PID of the owning backend and *`NNN`* distinguishes different temporary files of that backend.
diff --git a/docs/X/storage-fsm.md b/docs/en/storage-fsm.md
similarity index 100%
rename from docs/X/storage-fsm.md
rename to docs/en/storage-fsm.md
diff --git a/docs/en/storage-fsm.zh.md b/docs/en/storage-fsm.zh.md
new file mode 100644
index 0000000000000000000000000000000000000000..a64f8cb10aca74d80ca9d05b4abed12d0ca1246c
--- /dev/null
+++ b/docs/en/storage-fsm.zh.md
@@ -0,0 +1,11 @@
+## 70.3. Free Space Map
+
+Each heap and index relation, except for hash indexes, has a Free Space Map (FSM) to keep track of available space in the relation. It is stored alongside the main relation data in a separate relation fork, named after the filenode number of the relation, plus a `_fsm` suffix. For example, if the filenode of a relation is 12345, the FSM is stored in a file called `12345_fsm`, in the same directory as the main relation file.
+
+The Free Space Map is organized as a tree of FSM pages. The bottom level FSM pages store the free space available on each heap (or index) page, using one byte to represent each such page. The upper levels aggregate information from the lower levels.
+
+Within each FSM page is a binary tree, stored in an array with one byte per node. Each leaf node represents a heap page, or a lower level FSM page. In each non-leaf node, the higher of its children's values is stored. The maximum value of the leaf nodes is therefore stored at the root.
+
+See `src/backend/storage/freespace/README` for more details on how the FSM is structured, and how it's updated and searched. The [pg_freespacemap](pgfreespacemap.html) module can be used to examine the information stored in free space maps.
diff --git a/docs/X/storage-init.md b/docs/en/storage-init.md
similarity index 100%
rename from docs/X/storage-init.md
rename to docs/en/storage-init.md
diff --git a/docs/en/storage-init.zh.md b/docs/en/storage-init.zh.md
new file mode 100644
index 0000000000000000000000000000000000000000..c7ab353541379a41cdf44d511a7b160bb4771647
--- /dev/null
+++ b/docs/en/storage-init.zh.md
@@ -0,0 +1,5 @@
+## 70.5. The Initialization Fork
+
+Each unlogged table, and each index on an unlogged table, has an initialization fork. The initialization fork is an empty table or index of the appropriate type. When an unlogged table must be reset to empty due to a crash, the initialization fork is copied over the main fork, and any other forks are erased (they will be recreated automatically as needed).
diff --git a/docs/X/storage-page-layout.md b/docs/en/storage-page-layout.md
similarity index 100%
rename from docs/X/storage-page-layout.md
rename to docs/en/storage-page-layout.md
diff --git a/docs/en/storage-page-layout.zh.md b/docs/en/storage-page-layout.zh.md
new file mode 100644
index 0000000000000000000000000000000000000000..8a12200eb69084e02326529fd86f6885a74cef17
--- /dev/null
+++ b/docs/en/storage-page-layout.zh.md
@@ -0,0 +1,71 @@
+## 70.6. Database Page Layout
+
+[70.6.1. Table Row Layout](storage-page-layout.html#STORAGE-TUPLE-LAYOUT)
+
+This section provides an overview of the page format used within PostgreSQL tables and indexes.[\[17\]](#ftn.id-1.10.22.8.2.2) Sequences and TOAST tables are formatted just like a regular table.
+
+In the following explanation, a *byte* is assumed to contain 8 bits. In addition, the term *item* refers to an individual data value that is stored on a page. In a table, an item is a row; in an index, an item is an index entry.
+
+Every table and index is stored as an array of *pages* of a fixed size (usually 8 kB, although a different page size can be selected when compiling the server). In a table, all the pages are logically equivalent, so a particular item (row) can be stored in any page. In indexes, the first page is generally reserved as a *metapage* holding control information, and there can be different types of pages within the index, depending on the index access method.
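+
+The structures described below can be inspected on a live server with the pageinspect extension. A minimal sketch (requires superuser privileges; `mytable` is a hypothetical table):
+
+```
+CREATE EXTENSION pageinspect;
+-- Show the free-space pointers and size of the first page of mytable:
+SELECT lower, upper, special, pagesize
+  FROM page_header(get_raw_page('mytable', 0));
+```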
+ +[Table 70.2](storage-page-layout.html#PAGE-TABLE)shows the overall layout of a page. There are five parts to each page. + +**Table 70.2. Overall Page Layout** + +| Item | Description | +| ---- | ----------- | +| PageHeaderData | 24 bytes long. Contains general information about the page, including
free space pointers. | +| ItemIdData | Array of item identifiers pointing to the actual items. Each
entry is an (offset,length) pair. 4 bytes per item. | +| Free space | The unallocated space. New item identifiers are allocated from
the start of this area, new items from the end. | +| Items | 实际物品本身。 | +| 特殊空间 | 索引访问方法特定的数据。不同的方法存储不同的
数据。在普通表中为空。 | + +每页的前 24 个字节由页头(`页头数据`)。它的格式在[表 70.3](storage-page-layout.html#PAGEHEADERDATA-TABLE).第一个字段跟踪与此页面相关的最新 WAL 条目。第二个字段包含页面校验和,如果[数据校验和](app-initdb.html#APP-INITDB-DATA-CHECKSUMS)已启用。接下来是一个包含标志位的 2 字节字段。接下来是三个 2 字节整数字段 (`pd_lower`,`pd_upper`, 和`pd_special`)。这些包含从页面开始到未分配空间开始、到未分配空间结束和特殊空间开始的字节偏移量。页头的下 2 个字节,`pd_pagesize_version`,存储页面大小和版本指示符。从 PostgreSQL 8.3 开始,版本号为 4;PostgreSQL 8.1 和 8.2 使用版本号 3;PostgreSQL 8.0 使用版本号 2;PostgreSQL 7.3 和 7.4 使用版本号 1;以前的版本使用版本号 0。(这些版本中基本的页面布局和标题格式没有改变,但堆行标题的布局有。)页面大小基本上只是作为交叉检查出现;不支持在安装中使用多个页面大小。最后一个字段是一个提示,显示修剪页面是否可能有利可图:它跟踪页面上最旧的未修剪 XMAX。表 70.3。 + +**PageHeaderData 布局** + +| 场地 | 类型 | 长度 | 描述 | +| --- | --- | --- | --- | +| pd_lsn | PageXLogRecPtr | 8 个字节 | LSN:WAL 记录最后一个字节之后的下一个字节,用于最后一次更改此页面 | +| pd\_校验和 | uint16 | 2 个字节 | 页校验和 | +| pd\_旗帜 | uint16 | 2 个字节 | 标志位 | +| pd\_降低 | 位置索引 | 2 个字节 | 到可用空间开始的偏移量 | +| pd\_上 | 位置索引 | 2 个字节 | 到可用空间末端的偏移量 | +| pd\_特别的 | 位置索引 | 2 个字节 | 特殊空间起点的偏移量 | +| pd\_页面大小\_版本 | uint16 | 2 个字节 | 页面大小和布局版本号信息 | +| pd\_修剪\_xid | 交易 ID | 4字节 | 页面上最旧的未修剪 XMAX,如果没有则为零 | + +所有细节都可以在`src/include/storage/bufpage.h`. + +页眉之后是项目标识符(`ItemIdData`),每个需要四个字节。项目标识符包含项目开头的字节偏移量、其长度(以字节为单位)和一些影响其解释的属性位。根据需要从未分配空间的开头分配新的项目标识符。存在的项目标识符的数量可以通过查看来确定`pd_lower`,增加它以分配一个新的标识符。因为项目标识符在被释放之前永远不会移动,因此它的索引可以长期用于引用项目,即使项目本身在页面上移动以压缩可用空间也是如此。实际上,每个指向项目的指针 (`项目指针`,也称为`CTID`) 由 PostgreSQL 创建的,由页码和项目标识符的索引组成。 + +项目本身存储在未分配空间末尾向后分配的空间中。确切的结构因表要包含的内容而异。表和序列都使用名为`HeapTupleHeaderData`, described below. + +The final section is the “special section” which can contain anything the access method wishes to store. For example, b-tree indexes store links to the page's left and right siblings, as well as some other data relevant to the index structure. Ordinary tables do not use a special section at all (indicated by setting`pd_special`to equal the page size). + +[Figure 70.1](storage-page-layout.html#STORAGE-PAGE-LAYOUT-FIGURE)illustrates how these parts are laid out in a page. + +**Figure 70.1. Page Layout** + +### 70.6.1. Table Row Layout + +All table rows are structured in the same way. There is a fixed-size header (occupying 23 bytes on most machines), followed by an optional null bitmap, an optional object ID field, and the user data. The header is detailed in[Table 70.4](storage-page-layout.html#HEAPTUPLEHEADERDATA-TABLE). The actual user data (columns of the row) begins at the offset indicated by`t_hoff`, which must always be a multiple of the MAXALIGN distance for the platform. The null bitmap is only present if the*HEAP_HASNULL*bit is set in`t_infomask`. If it is present it begins just after the fixed header and occupies enough bytes to have one bit per data column (that is, the number of bits that equals the attribute count in`t_infomask2`). In this list of bits, a 1 bit indicates not-null, a 0 bit is a null. When the bitmap is not present, all columns are assumed not-null. The object ID is only present if the*HEAP_HASOID_OLD*bit is set in`t_infomask`. If present, it appears just before the`t_hoff`boundary. Any padding needed to make`t_hoff`a MAXALIGN multiple will appear between the null bitmap and the object ID. (This in turn ensures that the object ID is suitably aligned.) + +**Table 70.4. 
HeapTupleHeaderData Layout** + +| Field | Type | Length | Description | +| ----- | ---- | ------ | ----------- | +| t_xmin | TransactionId | 4 bytes | insert XID stamp | +| t_xmax | TransactionId | 4 bytes | delete XID stamp | +| t_cid | CommandId | 4 bytes | insert and/or delete CID stamp (overlays with t_xvac) | +| t\_真空吸尘器 | 交易 ID | 4字节 | 用于 VACUUM 操作移动行版本的 XID | +| 吨\_ctid | 项目指针数据 | 6字节 | 此行或更新行版本的当前 TID | +| 吨\_信息掩码2 | uint16 | 2 个字节 | 属性数量,以及各种标志位 | +| 吨\_信息掩码 | uint16 | 2 个字节 | 各种标志位 | +| 吨\_霍夫 | uint8 | 1 个字节 | 对用户数据的偏移 | + +所有细节都可以在`src/include/access/htup_details.h`. + +只能使用从其他表中获得的信息来解释实际数据,主要是`pg_attribute`.识别字段位置所需的关键值是`阿特伦`和`对齐`.没有办法直接获取特定属性,除非只有固定宽度的字段且没有空值。所有这些诡计都包含在函数中*堆\_获取属性*,*快速获取属性*和*堆\_获取系统属性*. + +要读取数据,您需要依次检查每个属性。首先根据空位图检查该字段是否为NULL。如果是,请转到下一个。然后确保你有正确的对齐方式。如果该字段是一个固定宽度的字段,那么所有的字节都被简单地放置。如果它是一个可变长度字段(attlen = -1),那么它会更复杂一些。所有可变长度数据类型共享公共头结构`结构 varlena`,其中包括存储值的总长度和一些标志位。根据标志,数据可以是内联的,也可以在 TOAST 表中;它也可能被压缩(参见[第 70.2 节](storage-toast.html))。 diff --git a/docs/X/storage-toast.md b/docs/en/storage-toast.md similarity index 100% rename from docs/X/storage-toast.md rename to docs/en/storage-toast.md diff --git a/docs/en/storage-toast.zh.md b/docs/en/storage-toast.zh.md new file mode 100644 index 0000000000000000000000000000000000000000..253a04ba7a3320b8e2312bbd48d261b20c0f4c27 --- /dev/null +++ b/docs/en/storage-toast.zh.md @@ -0,0 +1,55 @@ +## 70.2. TOAST + +[70.2.1. Out-of-Line, On-Disk TOAST Storage](storage-toast.html#STORAGE-TOAST-ONDISK) + +[70.2.2. Out-of-Line, In-Memory TOAST Storage](storage-toast.html#STORAGE-TOAST-INMEMORY) + +[](<>)[](<>) + +This section provides an overview of TOAST (The Oversized-Attribute Storage Technique). + +PostgreSQL uses a fixed page size (commonly 8 kB), and does not allow tuples to span multiple pages. Therefore, it is not possible to store very large field values directly. To overcome this limitation, large field values are compressed and/or broken up into multiple physical rows. This happens transparently to the user, with only small impact on most of the backend code. The technique is affectionately known as TOAST (or “the best thing since sliced bread”). The TOAST infrastructure is also used to improve handling of large data values in-memory. + +Only certain data types support TOAST — there is no need to impose the overhead on data types that cannot produce large field values. To support TOAST, a data type must have a variable-length (*varlena*) representation, in which, ordinarily, the first four-byte word of any stored value contains the total length of the value in bytes (including itself). TOAST does not constrain the rest of the data type's representation. The special representations collectively called*TOASTed values*work by modifying or reinterpreting this initial length word. Therefore, the C-level functions supporting a TOAST-able data type must be careful about how they handle potentially TOASTed input values: an input might not actually consist of a four-byte length word and contents until after it's been*detoasted*. (This is normally done by invoking`PG_DETOAST_DATUM`before doing anything with an input value, but in some cases more efficient approaches are possible. See[Section 38.13.1](xtypes.html#XTYPES-TOAST)for more detail.) + +TOAST usurps two bits of the varlena length word (the high-order bits on big-endian machines, the low-order bits on little-endian machines), thereby limiting the logical size of any value of a TOAST-able data type to 1 GB (230- 1 bytes). 
TOAST usurps two bits of the varlena length word (the high-order bits on big-endian machines, the low-order bits on little-endian machines), thereby limiting the logical size of any value of a TOAST-able data type to 1 GB (2^30 - 1 bytes). When both bits are zero, the value is an ordinary un-TOASTed value of the data type, and the remaining bits of the length word give the total datum size (including length word) in bytes. When the highest-order or lowest-order bit is set, the value has only a single-byte header instead of the normal four-byte header, and the remaining bits of that byte give the total datum size (including length byte) in bytes. This alternative supports space-efficient storage of values shorter than 127 bytes, while still allowing the data type to grow to 1 GB at need. Values with single-byte headers aren't aligned on any particular boundary, whereas values with four-byte headers are aligned on at least a four-byte boundary; this omission of alignment padding provides additional space savings that is significant compared to short values. As a special case, if the remaining bits of a single-byte header are all zero (which would be impossible for a self-inclusive length), the value is a pointer to out-of-line data, with several possible alternatives as described below. The type and size of such a *TOAST pointer* are determined by a code stored in the second byte of the datum. Lastly, when the highest-order or lowest-order bit is clear but the adjacent bit is set, the content of the datum has been compressed and must be decompressed before use. In this case the remaining bits of the four-byte length word give the total size of the compressed datum, not the original data. Note that compression is also possible for out-of-line data but the varlena header does not tell whether it has occurred — the content of the TOAST pointer tells that, instead.

The compression technique used for either in-line or out-of-line compressed data can be selected for each column by setting the `COMPRESSION` column option in `CREATE TABLE` or `ALTER TABLE`. The default for columns with no explicit setting is to consult the [default_toast_compression](runtime-config-client.html#GUC-DEFAULT-TOAST-COMPRESSION) parameter at the time data is inserted.

As mentioned, there are multiple types of TOAST pointer datums. The oldest and most common type is a pointer to out-of-line data stored in a *TOAST table* that is separate from, but associated with, the table containing the TOAST pointer datum itself. These *on-disk* pointer datums are created by the TOAST management code (in `access/common/toast_internals.c`) when a tuple to be stored on disk is too large to be stored as-is. Further details appear in [Section 70.2.1](storage-toast.html#STORAGE-TOAST-ONDISK). Alternatively, a TOAST pointer datum can contain a pointer to out-of-line data that appears elsewhere in memory. Such datums are necessarily short-lived, and will never appear on-disk, but they are very useful for avoiding copying and redundant processing of large data values. Further details appear in [Section 70.2.2](storage-toast.html#STORAGE-TOAST-INMEMORY).

### 70.2.1. Out-of-Line, On-Disk TOAST Storage

If any of the columns of a table are TOAST-able, the table will have an associated TOAST table, whose OID is stored in the table's `pg_class`.`reltoastrelid` entry. On-disk TOASTed values are kept in the TOAST table, as described in more detail below.
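A table's TOAST table can be located and examined from SQL. A sketch, reusing `toast_demo` from above (the TOAST table name must be substituted from the first query's output; note that a value that compresses well may stay inline, in which case the TOAST table is empty):

```
-- Find the TOAST table via pg_class.reltoastrelid:
SELECT reltoastrelid::regclass AS toast_table
FROM pg_class
WHERE oid = 'toast_demo'::regclass;

-- Inspect its chunk rows (pg_toast_16384 is a placeholder name):
SELECT chunk_id, chunk_seq, octet_length(chunk_data) AS chunk_bytes
FROM pg_toast.pg_toast_16384
LIMIT 3;
```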
Out-of-line values are divided (after compression if used) into chunks of at most `TOAST_MAX_CHUNK_SIZE` bytes (by default this value is chosen so that four chunk rows will fit on a page, making it about 2000 bytes). Each chunk is stored as a separate row in the TOAST table belonging to the owning table. Every TOAST table has the columns `chunk_id` (an OID identifying the particular TOASTed value), `chunk_seq` (a sequence number for the chunk within its value), and `chunk_data` (the actual data of the chunk). A unique index on `chunk_id` and `chunk_seq` provides fast retrieval of the values. A pointer datum representing an out-of-line on-disk TOASTed value therefore needs to store the OID of the TOAST table in which to look and the OID of the specific value (its `chunk_id`). For convenience, pointer datums also store the logical datum size (original uncompressed data length), physical stored size (different if compression was applied), and the compression method used, if any. Allowing for the varlena header bytes, the total size of an on-disk TOAST pointer datum is therefore 18 bytes regardless of the actual size of the represented value.

The TOAST management code is triggered only when a row value to be stored in a table is wider than `TOAST_TUPLE_THRESHOLD` bytes (normally 2 kB). The TOAST code will compress and/or move field values out-of-line until the row value is shorter than `TOAST_TUPLE_TARGET` bytes (also normally 2 kB, adjustable) or no more gains can be had. During an UPDATE operation, values of unchanged fields are normally preserved as-is; so an UPDATE of a row with out-of-line values incurs no TOAST costs if none of the out-of-line values change.

The TOAST management code recognizes four different strategies for storing TOAST-able columns on disk:

- `PLAIN` prevents either compression or out-of-line storage; furthermore it disables use of single-byte headers for varlena types. This is the only possible strategy for columns of non-TOAST-able data types.

- `EXTENDED` allows both compression and out-of-line storage. This is the default for most TOAST-able data types. Compression will be attempted first, then out-of-line storage if the row is still too big.

- `EXTERNAL` allows out-of-line storage but not compression. Use of `EXTERNAL` will make substring operations on wide `text` and `bytea` columns faster (at the penalty of increased storage space) because these operations are optimized to fetch only the required parts of the out-of-line value when it is not compressed.

- `MAIN` allows compression but not out-of-line storage. (Actually, out-of-line storage will still be performed for such columns, but only as a last resort when there is no other way to make the row small enough to fit on a page.)

Each TOAST-able data type specifies a default strategy for columns of that data type, but the strategy for a given table column can be altered with [`ALTER TABLE ... SET STORAGE`](sql-altertable.html). `TOAST_TUPLE_TARGET` can be adjusted for each table using [`ALTER TABLE ... SET (toast_tuple_target = N)`](sql-altertable.html), as the sketch below shows.
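A sketch of how the per-column and per-table knobs above are applied (table and column names are invented; `SET COMPRESSION lz4` works only on PostgreSQL 14 or later built with `--with-lz4`):

```
CREATE TABLE blobs (id int, doc text, raw bytea);

-- Favor fast substring access over storage size:
ALTER TABLE blobs ALTER COLUMN raw SET STORAGE EXTERNAL;

-- Choose a per-column compression method:
ALTER TABLE blobs ALTER COLUMN doc SET COMPRESSION lz4;

-- Start moving values out-of-line sooner than the 2 kB default:
ALTER TABLE blobs SET (toast_tuple_target = 256);

-- attstorage shows p/x/e/m for the PLAIN/EXTENDED/EXTERNAL/MAIN strategies:
SELECT attname, attstorage, attcompression
FROM pg_attribute
WHERE attrelid = 'blobs'::regclass AND attnum > 0;
```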
This scheme has a number of advantages compared to a more straightforward approach such as allowing row values to span pages. Assuming that queries are usually qualified by comparisons against relatively small key values, most of the work of the executor will be done using the main row entry. The big values of TOASTed attributes will only be pulled out (if selected at all) at the time the result set is sent to the client. Thus, the main table is much smaller and more of its rows fit in the shared buffer cache than would be the case without any out-of-line storage. Sort sets shrink also, and sorts will more often be done entirely in memory. A little test showed that a table containing typical HTML pages and their URLs was stored in about half of the raw data size including the TOAST table, and that the main table contained only about 10% of the entire data (the URLs and some small HTML pages). There was no run time difference compared to an un-TOASTed comparison table, in which all the HTML pages were cut down to 7 kB to fit.

### 70.2.2. Out-of-Line, In-Memory TOAST Storage

TOAST pointers can point to data that is not on disk, but is elsewhere in the memory of the current server process. Such pointers obviously cannot be long-lived, but they are nonetheless useful. There are currently two sub-cases: pointers to *indirect* data and pointers to *expanded* data.

Indirect TOAST pointers simply point at a non-indirect varlena value stored somewhere in memory. This case was originally created merely as a proof of concept, but it is currently used during logical decoding to avoid possibly having to create physical tuples exceeding 1 GB (as pulling all out-of-line field values into the tuple might do). The case is of limited use since the creator of the pointer datum is entirely responsible that the referenced data survives for as long as the pointer could exist, and there is no infrastructure to help with this.

Expanded TOAST pointers are useful for complex data types whose on-disk representation is not especially suited for computational purposes. As an example, the standard varlena representation of a PostgreSQL array includes dimensionality information, a nulls bitmap if there are any null elements, then the values of all the elements in order. When the element type itself is variable-length, the only way to find the *N*'th element is to scan through all the preceding elements. This representation is appropriate for on-disk storage because of its compactness, but for computations with the array it's much nicer to have an "expanded" or "deconstructed" representation in which all the element starting locations have been identified. The TOAST pointer mechanism supports this need by allowing a pass-by-reference Datum to point to either a standard varlena value (the on-disk representation) or a TOAST pointer that points to an expanded representation somewhere in memory. The details of this expanded representation are up to the data type, though it must have a standard header and meet the other API requirements given in `src/include/utils/expandeddatum.h`. C-level functions working with the data type can choose to handle either representation. Functions that do not know about the expanded representation, but simply apply `PG_DETOAST_DATUM` to their inputs, will automatically receive the traditional varlena representation; so support for an expanded representation can be introduced incrementally, one function at a time.

TOAST pointers to expanded values are further broken down into *read-write* and *read-only* pointers. The pointed-to representation is the same either way, but a function that receives a read-write pointer is allowed to modify the referenced value in-place, whereas one that receives a read-only pointer must not; it must first create a copy if it wants to make a modified version of the value. This distinction and some associated conventions make it possible to avoid unnecessary copying of expanded values during query execution.

For all types of in-memory TOAST pointer, the TOAST management code ensures that no such pointer datum can accidentally get stored on disk. In-memory TOAST pointers are automatically expanded to normal in-line varlena values before storage — and then possibly converted to on-disk TOAST pointers, if the containing tuple would otherwise be too big.

diff --git a/docs/X/storage-vm.md b/docs/en/storage-vm.md
similarity index 100%
rename from docs/X/storage-vm.md
rename to docs/en/storage-vm.md
diff --git a/docs/en/storage-vm.zh.md b/docs/en/storage-vm.zh.md
new file mode 100644
index 0000000000000000000000000000000000000000..4e7fac9478cdf5f520108fdc04cd0ee18235b76b
--- /dev/null
+++ b/docs/en/storage-vm.zh.md
@@ -0,0 +1,11 @@

## 70.4. Visibility Map

Each heap relation has a Visibility Map (VM) to keep track of which pages contain only tuples that are known to be visible to all active transactions; it also keeps track of which pages contain only frozen tuples. It's stored alongside the main relation data in a separate relation fork, named after the filenode number of the relation, plus a `_vm` suffix. For example, if the filenode of a relation is 12345, the VM is stored in a file called `12345_vm`, in the same directory as the main relation file. Note that indexes do not have VMs.

The visibility map stores two bits per heap page. The first bit, if set, indicates that the page is all-visible, or in other words that the page does not contain any tuples that need to be vacuumed. This information can also be used by [*index-only scans*](indexes-index-only-scans.html) to answer queries using only the index tuple. The second bit, if set, means that all tuples on the page have been frozen. That means that even an anti-wraparound vacuum need not revisit the page.
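The VM bits can be examined with the pg_visibility module mentioned below. A small sketch (`my_table` is a placeholder name):

```
CREATE EXTENSION IF NOT EXISTS pg_visibility;

-- Both VM bits for each page of the relation:
SELECT blkno, all_visible, all_frozen
FROM pg_visibility_map('my_table');

-- Aggregate counts over the whole relation:
SELECT * FROM pg_visibility_map_summary('my_table');
```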
The map is conservative in the sense that we make sure that whenever a bit is set, we know the condition is true, but if a bit is not set, it might or might not be true. Visibility map bits are only set by vacuum, but are cleared by any data-modifying operations on a page.

The [pg_visibility](pgvisibility.html) module can be used to examine the information stored in the visibility map.

diff --git a/docs/X/storage.md b/docs/en/storage.md
similarity index 100%
rename from docs/X/storage.md
rename to docs/en/storage.md
diff --git a/docs/en/storage.zh.md b/docs/en/storage.zh.md
new file mode 100644
index 0000000000000000000000000000000000000000..fae58e2f415c4ee7b573115807888c0187f022bd
--- /dev/null
+++ b/docs/en/storage.zh.md
@@ -0,0 +1,23 @@

## Chapter 70. Database Physical Storage

**Table of Contents**

[70.1. Database File Layout](storage-file-layout.html)

[70.2. TOAST](storage-toast.html)

[70.2.1. Out-of-Line, On-Disk TOAST Storage](storage-toast.html#STORAGE-TOAST-ONDISK)

[70.2.2. Out-of-Line, In-Memory TOAST Storage](storage-toast.html#STORAGE-TOAST-INMEMORY)

[70.3. Free Space Map](storage-fsm.html)

[70.4. Visibility Map](storage-vm.html)

[70.5. The Initialization Fork](storage-init.html)

[70.6. Database Page Layout](storage-page-layout.html)

[70.6.1. Table Row Layout](storage-page-layout.html#STORAGE-TUPLE-LAYOUT)

This chapter provides an overview of the physical storage format used by PostgreSQL databases.

diff --git a/docs/X/supported-platforms.md b/docs/en/supported-platforms.md
similarity index 100%
rename from docs/X/supported-platforms.md
rename to docs/en/supported-platforms.md
diff --git a/docs/en/supported-platforms.zh.md b/docs/en/supported-platforms.zh.md
new file mode 100644
index 0000000000000000000000000000000000000000..abe48b5635148900434714cd5867e929744f4706
--- /dev/null
+++ b/docs/en/supported-platforms.zh.md
@@ -0,0 +1,9 @@

## 17.6. Supported Platforms

A platform (that is, a CPU architecture and operating system combination) is considered supported by the PostgreSQL development community if the code contains provisions to work on that platform and it has recently been verified to build and pass its regression tests on that platform. Currently, most testing of platform compatibility is done automatically by test machines in the [PostgreSQL Build Farm](https://buildfarm.postgresql.org/). If you are interested in using PostgreSQL on a platform that is not represented in the build farm, but on which the code works or can be made to work, you are strongly encouraged to set up a build farm member machine so that continued compatibility can be assured.

In general, PostgreSQL can be expected to work on these CPU architectures: x86, x86_64, IA64, PowerPC, PowerPC 64, S/390, S/390x, Sparc, Sparc 64, ARM, MIPS, MIPSEL, and PA-RISC. Code support exists for M68K, M32R, and VAX, but these architectures are not known to have been tested recently. It is often possible to build on an unsupported CPU type by configuring with `--disable-spinlocks`, but performance will be poor.

PostgreSQL can be expected to work on these operating systems: Linux (all recent distributions), Windows (XP and later), FreeBSD, OpenBSD, NetBSD, macOS, AIX, HP/UX, and Solaris. Other Unix-like systems may also work but are not currently being tested. In most cases, all CPU architectures supported by a given operating system will work. Look in [Section 17.7](installation-platform-notes.html) below to see if there is information specific to your operating system, particularly if using an older system.
If you have installation problems on a platform that is known to be supported according to recent build farm results, please report it to `<pgsql-bugs@lists.postgresql.org>`. If you are interested in porting PostgreSQL to a new platform, `<pgsql-hackers@lists.postgresql.org>` is the appropriate place to discuss that.

diff --git a/docs/X/system-catalog-declarations.md b/docs/en/system-catalog-declarations.md
similarity index 100%
rename from docs/X/system-catalog-declarations.md
rename to docs/en/system-catalog-declarations.md
diff --git a/docs/en/system-catalog-declarations.zh.md b/docs/en/system-catalog-declarations.zh.md
new file mode 100644
index 0000000000000000000000000000000000000000..7fe239fcd1a47730d33150d85b9a2e86a9fbffa0
--- /dev/null
+++ b/docs/en/system-catalog-declarations.zh.md
@@ -0,0 +1,11 @@

## 71.1. System Catalog Declaration Rules

The key part of a catalog header file is a C structure definition describing the layout of each row of the catalog. This begins with a `CATALOG` macro, which so far as the C compiler is concerned is just shorthand for `typedef struct FormData_`*`catalogname`*. Each field in the struct gives rise to a catalog column. Fields can be annotated using the BKI property macros described in `genbki.h`, for example to define a default value for a field or mark it as nullable or not nullable. The `CATALOG` line can also be annotated, with some other BKI property macros described in `genbki.h`, to define other properties of the catalog as a whole, such as whether it is a shared relation.

The system catalog cache code (and most catalog-munging code in general) assumes that the fixed-length portions of all system catalog tuples are in fact present, because it maps this C struct declaration onto them. Thus, all variable-length fields and nullable fields must be placed at the end, and they cannot be accessed as struct fields. For example, if you tried to set `pg_type`.`typrelid` to be NULL, it would fail when some piece of code tried to reference `typetup->typrelid` (or worse, `typetup->typelem`, because that follows `typrelid`). This would result in random errors or even segmentation violations.

As a partial guard against this type of error, variable-length or nullable fields should not be made directly visible to the C compiler. This is accomplished by wrapping them in `#ifdef CATALOG_VARLEN` ... `#endif` (where `CATALOG_VARLEN` is a symbol that is never defined). This prevents C code from carelessly trying to access fields that might not be there or might be at some other offset. As an independent guard against creating incorrect rows, we require all columns that should be non-nullable to be marked so in `pg_attribute`. The bootstrap code will automatically mark catalog columns as `NOT NULL` if they are fixed-width and are not preceded by any nullable or variable-width column. Where this rule is inadequate, you can force correct marking by using `BKI_FORCE_NOT_NULL` and `BKI_FORCE_NULL` annotations as needed.

Frontend code should not include any `pg_xxx.h` catalog header file, as these files may contain C code that won't compile outside the backend. (Typically, that happens because these files also contain declarations for functions in `src/backend/catalog/` files.) Instead, frontend code may include the corresponding generated `pg_xxx_d.h` header, which will contain OID `#define`s and any other data that might be of use on the client side. If you want macros or other code in a catalog header to be visible to frontend code, write `#ifdef EXPOSE_TO_CLIENT_CODE` ... `#endif` around that section to instruct `genbki.pl` to copy that section to the `pg_xxx_d.h` header.

A few of the catalogs are so fundamental that they can't even be created by the BKI `create` command that's used for most catalogs, because that command needs to write information into these catalogs to describe the new catalog. These are called *bootstrap* catalogs, and defining one takes a lot of extra work: you have to manually prepare appropriate entries for them in the pre-loaded contents of `pg_class` and `pg_type`, and those entries will need to be updated for subsequent changes to the catalog's structure. (Bootstrap catalogs also need pre-loaded entries in `pg_attribute`, but fortunately `genbki.pl` handles that chore nowadays.) Avoid making new catalogs be bootstrap catalogs if at all possible.

diff --git a/docs/X/system-catalog-initial-data.md b/docs/en/system-catalog-initial-data.md
similarity index 100%
rename from docs/X/system-catalog-initial-data.md
rename to docs/en/system-catalog-initial-data.md
diff --git a/docs/en/system-catalog-initial-data.zh.md b/docs/en/system-catalog-initial-data.zh.md
new file mode 100644
index 0000000000000000000000000000000000000000..f74371294291493366a36441097b9245e88022f0
--- /dev/null
+++ b/docs/en/system-catalog-initial-data.zh.md
@@ -0,0 +1,173 @@
## 71.2. System Catalog Initial Data

[71.2.1. Data File Format](system-catalog-initial-data.html#SYSTEM-CATALOG-INITIAL-DATA-FORMAT)

[71.2.2. OID Assignment](system-catalog-initial-data.html#SYSTEM-CATALOG-OID-ASSIGNMENT)

[71.2.3. OID Reference Lookup](system-catalog-initial-data.html#SYSTEM-CATALOG-OID-REFERENCES)

[71.2.4. Automatic Creation of Array Types](system-catalog-initial-data.html#SYSTEM-CATALOG-AUTO-ARRAY-TYPES)

[71.2.5. Recipes for Editing Data Files](system-catalog-initial-data.html#SYSTEM-CATALOG-RECIPES)

Each catalog that has any manually-created initial data (some do not) has a corresponding `.dat` file that contains its initial data in an editable format.

### 71.2.1. Data File Format

Each `.dat` file contains Perl data structure literals that are simply eval'd to produce an in-memory data structure consisting of an array of hash references, one per catalog row. A slightly modified excerpt from `pg_database.dat` will demonstrate the key features:

```
[

# A comment could appear here.
{ oid => '1', oid_symbol => 'TemplateDbOid',
  descr => 'database\'s default template',
  datname => 'template1', encoding => 'ENCODING', datcollate => 'LC_COLLATE',
  datctype => 'LC_CTYPE', datistemplate => 't', datallowconn => 't',
  datconnlimit => '-1', datlastsysoid => '0', datfrozenxid => '0',
  datminmxid => '1', dattablespace => 'pg_default', datacl => '_null_' },

]
```

Points to note:

- The overall file layout is: open square bracket, one or more sets of curly braces each of which represents a catalog row, close square bracket. Write a comma after each closing curly brace.

- Within each catalog row, write comma-separated *`key`* `=>` *`value`* pairs. The allowed *`key`*s are the names of the catalog's columns, plus the metadata keys `oid`, `oid_symbol`, `array_type_oid`, and `descr`. (Usage of `oid` and `oid_symbol` is described in [Section 71.2.2](system-catalog-initial-data.html#SYSTEM-CATALOG-OID-ASSIGNMENT) below, while `array_type_oid` is described in [Section 71.2.4](system-catalog-initial-data.html#SYSTEM-CATALOG-AUTO-ARRAY-TYPES). `descr` supplies a description string for the object, which will be inserted into `pg_description` or `pg_shdescription` as appropriate.) While the metadata keys are optional, the catalog's defined columns must all be provided, except when the catalog's `.h` file specifies a default value for the column. (In the example above, the `datdba` field has been omitted because `pg_database.h` supplies a suitable default value for it.)

- All values must be single-quoted. Escape single quotes used within a value with a backslash. Backslashes meant as data can, but need not, be doubled; this follows Perl's rules for simple quoted literals. Note that backslashes appearing as data will be treated as escapes by the bootstrap scanner, according to the same rules as for escape string constants (see [Section 4.1.2.2](sql-syntax-lexical.html#SQL-SYNTAX-STRINGS-ESCAPE)); for example `\t` converts to a tab character. If you actually want a backslash in the final value, you will need to write four of them: Perl strips two, leaving `\\` for the bootstrap scanner to see.

- Null values are represented by `_null_`. (Note that there is no way to create a value that is just that string.)

- Comments are preceded by `#`, and must be on their own lines.

- Field values that are OIDs of other catalog entries should be represented by symbolic names rather than actual numeric OIDs. (In the example above, `dattablespace` contains such a reference.) This is described in [Section 71.2.3](system-catalog-initial-data.html#SYSTEM-CATALOG-OID-REFERENCES) below.

- Since hashes are unordered data structures, field order and line layout aren't semantically significant. However, to maintain a consistent appearance, we set a few rules that are applied by the formatting script `reformat_dat_file.pl`:

  - Within each pair of curly braces, the metadata fields `oid`, `oid_symbol`, `array_type_oid`, and `descr` (if present) come first, in that order, then the catalog's own fields in their defined order.

  - Newlines are inserted between fields as needed to limit line length to 80 characters, if possible. A newline is also inserted between the metadata fields and the regular fields.

  - If the catalog's `.h` file specifies a default value for a column, and a data entry has that same value, `reformat_dat_file.pl` will omit it from the data file. This keeps the data representation compact.

  - `reformat_dat_file.pl` preserves blank lines and comment lines as-is.

  It's recommended to run `reformat_dat_file.pl` before submitting catalog data patches. For convenience, you can simply change to `src/include/catalog/` and run `make reformat-dat-files`.

- If you want to add a new method of making the data representation smaller, you must implement it in `reformat_dat_file.pl` and also teach `Catalog::ParseData()` how to expand the data back into the full representation.
### 71.2.2. OID Assignment

A catalog row appearing in the initial data can be given a manually-assigned OID by writing an `oid => `*`nnnn`* metadata field. Furthermore, if an OID is assigned, a C macro for that OID can be created by writing an `oid_symbol => `*`name`* metadata field.

Pre-loaded catalog rows must have preassigned OIDs if there are OID references to them in other pre-loaded rows. A preassigned OID is also needed if the row's OID must be referenced from C code. If neither case applies, the `oid` metadata field can be omitted, in which case the bootstrap code assigns an OID automatically. In practice we usually preassign OIDs for all or none of the pre-loaded rows in a given catalog, even if only some of them are actually cross-referenced.

Writing the actual numeric value of any OID in C code is considered very bad form; always use a macro, instead. Direct references to `pg_proc` OIDs are common enough that there's a special mechanism to create the necessary macros automatically; see `src/backend/utils/Gen_fmgrtab.pl`. Similarly, though for historical reasons it is not done the same way, there's an automatic method for creating macros for `pg_type` OIDs. `oid_symbol` entries are therefore not necessary in those two catalogs. Likewise, macros for the `pg_class` OIDs of system catalogs and indexes are set up automatically. For all other system catalogs, you have to manually specify any macros you need via `oid_symbol` entries.

To find an available OID for a new pre-loaded row, run the script `src/include/catalog/unused_oids`. It prints inclusive ranges of unused OIDs (e.g., the output line `45-900` means OIDs 45 through 900 have not been allocated yet). Currently, OIDs 1-9999 are reserved for manual assignment; the `unused_oids` script simply looks through the catalog headers and `.dat` files to see which ones do not appear. You can also use the `duplicate_oids` script to check for mistakes. (`genbki.pl` will assign OIDs for any rows that didn't get one hand-assigned to them, and it will also detect duplicate OIDs at compile time.)

When choosing OIDs for a patch that is not expected to be committed immediately, best practice is to use a group of more-or-less consecutive OIDs starting at some random choice in the range 8000-9999. This minimizes the risk of OID collisions with other patches being developed concurrently. To keep the 8000-9999 range free for development purposes, after a patch has been committed to the master git repository its OIDs should be renumbered into available space below that range. Typically, this will be done near the end of each development cycle, moving all the OIDs consumed by patches committed in that cycle at the same time. The script `renumber_oids.pl` can be used for this purpose. If an uncommitted patch is found to have OID conflicts with some recently-committed patch, `renumber_oids.pl` may also be useful for recovering from that situation.

Because of this convention of possibly renumbering OIDs assigned by patches, the OIDs assigned by a patch should not be considered stable until the patch has been included in an official release. We do not change manually-assigned object OIDs once released, however, as that would create assorted compatibility problems.

If `genbki.pl` needs to assign an OID to a catalog entry that does not have a manually-assigned OID, it will use a value in the range 10000-11999. The server's OID counter is set to 12000 at the start of a bootstrap run. Thus objects created by regular SQL commands during the later phases of bootstrap, such as objects created while running the `information_schema.sql` script, receive OIDs of 12000 or above.

OIDs assigned during normal database operation are constrained to be 16384 or higher. This ensures that the range 10000-16383 is free for OIDs assigned automatically by `genbki.pl` or during bootstrap. These automatically-assigned OIDs are not considered stable, and may change from one installation to another.

### 71.2.3. OID Reference Lookup

In principle, cross-references from one initial catalog row to another could be written just by writing the preassigned OID of the referenced row in the referencing field. However, that is against project policy, because it is error-prone, hard to read, and subject to breakage if a newly-assigned OID is renumbered. Therefore `genbki.pl` provides mechanisms to write symbolic references instead. The rules are as follows:

- Use of symbolic references is enabled in a particular catalog column by attaching `BKI_LOOKUP(`*`lookuprule`*`)` to the column's definition, where *`lookuprule`* is the name of the referenced catalog, e.g., `pg_proc`. `BKI_LOOKUP` can be attached to columns of type `Oid`, `regproc`, `oidvector`, or `Oid[]`; in the latter two cases it implies performing a lookup on each element of the array.
+ +- 在某些目录列中,允许条目为零而不是有效参考。如果允许,请写`BKI_LOOKUP_OPT`代替`BKI_LOOKUP`.然后你可以写`0`一个条目。(如果该列被声明`正则程序`, 你可以选择写`-`代替`0`.) 除了这种特殊情况,a 中的所有条目`BKI_LOOKUP`列必须是符号引用。`genbki.pl`将警告无法识别的名称。 + +- 大多数种类的目录对象只是通过它们的名称来引用。请注意,类型名称必须与引用的完全匹配`pg_type`条目的`类型名`;您不能使用任何别名,例如`整数`为了`整数4`. + +- 一个函数可以用它的表示`名字`, 如果这是唯一的`pg_proc.dat`条目(这类似于 regproc 输入)。否则,写成*`proname(argtypename,argtypename,...)`*,如重新程序。参数类型名称的拼写必须与它们在`pg_proc.dat`条目的`参数类型`场地。不要插入任何空格。 + +- 运算符表示为*`oprname(lefttype,righttype)`*,完全按照它们在`pg_operator.dat`条目的`左派`和`正确的`字段。(写`0`对于一元运算符的省略操作数。) + +- opclasses 和 opfamilies 的名称仅在访问方法中是唯一的,因此它们表示为*`访问方法名称`*`/`*`对象名`*. + +- 在这些情况下,都没有任何模式限定的规定;在引导期间创建的所有对象都应该在`pg_catalog`schema. + +`genbki.pl`resolves all symbolic references while it runs, and puts simple numeric OIDs into the emitted BKI file. There is therefore no need for the bootstrap backend to deal with symbolic references. + +It's desirable to mark OID reference columns with`BKI_LOOKUP`or`BKI_LOOKUP_OPT`even if the catalog has no initial data that requires lookup. This allows`genbki.pl`to record the foreign key relationships that exist in the system catalogs. That information is used in the regression tests to check for incorrect entries. See also the macros`DECLARE_FOREIGN_KEY`,`DECLARE_FOREIGN_KEY_OPT`,`DECLARE_ARRAY_FOREIGN_KEY`, and`DECLARE_ARRAY_FOREIGN_KEY_OPT`, which are used to declare foreign key relationships that are too complex for`BKI_LOOKUP`(typically, multi-column foreign keys). + +### 71.2.4. Automatic Creation of Array Types + +Most scalar data types should have a corresponding array type (that is, a standard varlena array type whose element type is the scalar type, and which is referenced by the`typarray`field of the scalar type's`pg_type`entry).`genbki.pl`is able to generate the`pg_type`entry for the array type automatically in most cases. + +To use this facility, just write an`array_type_oid => *`nnnn`*`metadata field in the scalar type's`pg_type`entry, specifying the OID to use for the array type. You may then omit the`typarray`field, since it will be filled automatically with that OID. + +The generated array type's name is the scalar type's name with an underscore prepended. The array entry's other fields are filled from`BKI_ARRAY_DEFAULT(*`value`*)`annotations in`pg_type.h`, or if there isn't one, copied from the scalar type. (There's also a special case for`typalign`.) Then the`typelem`and`typarray`fields of the two entries are set to cross-reference each other. + +### 71.2.5. Recipes for Editing Data Files + +Here are some suggestions about the easiest ways to perform common tasks when updating catalog data files. + +**Add a new column with a default to a catalog:**Add the column to the header file with a`BKI_DEFAULT(*`value`*)`annotation. The data file need only be adjusted by adding the field in existing rows where a non-default value is needed. + +**向没有默认值的现有列添加默认值:**添加一个`BKI_DEFAULT`头文件的注释,然后运行`重新格式化 dat 文件`删除现在冗余的字段条目。 + +**删除一列,无论它是否具有默认值:**从标题中删除列,然后运行`重新格式化 dat 文件`删除现在无用的字段条目。 + +**更改或删除现有的默认值:**您不能简单地更改头文件,因为这会导致当前数据被错误地解释。首轮`制作扩展数据文件`用显式插入的所有默认值重写数据文件,然后更改或删除`BKI_DEFAULT`注释,然后运行`重新格式化 dat 文件`再次删除多余的字段。 + +**临时批量编辑:**  `重新格式化_dat_file.pl`可以适应执行多种批量更改。查找它的块注释,显示可以插入一次性代码的位置。在下面的示例中,我们将合并两个布尔字段`pg_proc`进入一个字符字段: + +1. 使用默认值将新列添加到`pg_proc.h`: + + ``` + + /* see PROKIND_ categories below */ + + char prokind BKI_DEFAULT(f); + ``` + +2. 创建一个新的脚本基于`重新格式化_dat_file.pl`即时插入适当的值: + + ``` + - # At this point we have the full row in memory as a hash + - # and can do any operations we want. 
   -   # removes default values, but this script can be adapted to
   -   # do one-off bulk-editing.
   +   # One-off change to migrate to prokind
   +   # Default has already been filled in by now, so change to other
   +   # values as appropriate
   +   if ($values{proisagg} eq 't')
   +   {
   +       $values{prokind} = 'a';
   +   }
   +   elsif ($values{proiswindow} eq 't')
   +   {
   +       $values{prokind} = 'w';
   +   }
   ```

3. Run the new script:

   ```
   $ cd src/include/catalog
   $ perl rewrite_dat_with_prokind.pl pg_proc.dat
   ```

   At this point `pg_proc.dat` has all three columns, `prokind`, `proisagg`, and `proiswindow`, though they will appear only in rows where they have non-default values.

4. Remove the old columns from `pg_proc.h`:

   ```
   -    /* is it an aggregate? */
   -    bool        proisagg BKI_DEFAULT(f);
   -
   -    /* is it a window function? */
   -    bool        proiswindow BKI_DEFAULT(f);
   ```

5. Finally, run `make reformat-dat-files` to remove the useless old entries from `pg_proc.dat`.

   For further examples of scripts used for bulk editing, see `convert_oid2name.pl` and `remove_pg_type_oid_symbols.pl` attached to this message:

diff --git a/docs/X/uuid-ossp.md b/docs/en/uuid-ossp.md
similarity index 100%
rename from docs/X/uuid-ossp.md
rename to docs/en/uuid-ossp.md
diff --git a/docs/en/uuid-ossp.zh.md b/docs/en/uuid-ossp.zh.md
new file mode 100644
index 0000000000000000000000000000000000000000..a61dfe4e9ae9fcf29e724064f8f2344782b564e9
--- /dev/null
+++ b/docs/en/uuid-ossp.zh.md
@@ -0,0 +1,41 @@

## F.46. uuid-ossp

[F.46.1. `uuid-ossp` Functions](uuid-ossp.html#id-1.11.7.55.5) [F.46.2. Building `uuid-ossp`](uuid-ossp.html#id-1.11.7.55.6) [F.46.3. Author](uuid-ossp.html#id-1.11.7.55.7)

The `uuid-ossp` module provides functions to generate universally unique identifiers (UUIDs) using one of several standard algorithms. There are also functions to produce certain special UUID constants. This module is only necessary for special requirements beyond what is available in core PostgreSQL. See [Section 9.14](functions-uuid.html) for built-in ways to generate UUIDs.

This module is considered "trusted", that is, it can be installed by non-superusers who have `CREATE` privilege on the current database.

### F.46.1. `uuid-ossp` Functions

[Table F.33](uuid-ossp.html#UUID-OSSP-FUNCTIONS) shows the functions available to generate UUIDs. The relevant standards ITU-T Rec. X.667, ISO/IEC 9834-8:2005, and [RFC 4122](https://tools.ietf.org/html/rfc4122) specify four algorithms for generating UUIDs, identified by the version numbers 1, 3, 4, and 5. (There is no version 2 algorithm.) Each of these algorithms could be suitable for a different set of applications.

**Table F.33. Functions for UUID Generation**

| Function | Description |
| --- | --- |
| `uuid_generate_v1()` → `uuid` | Generates a version 1 UUID. This involves the MAC address of the computer and a time stamp. Note that UUIDs of this kind reveal the identity of the computer that created the identifier and the time at which it did so, which might make it unsuitable for certain security-sensitive applications. |
| `uuid_generate_v1mc()` → `uuid` | Generates a version 1 UUID, but uses a random multicast MAC address instead of the real MAC address of the computer. |
| `uuid_generate_v3(`*`namespace`* `uuid`, *`name`* `text)` → `uuid` | Generates a version 3 UUID in the given namespace using the specified input name. The namespace should be one of the special constants produced by the `uuid_ns_*()` functions shown in [Table F.34](uuid-ossp.html#UUID-OSSP-CONSTANTS). (It could be any UUID in theory.) The name is an identifier in the selected namespace. For example: `SELECT uuid_generate_v3(uuid_ns_url(), 'http://www.postgresql.org');` The name parameter will be MD5-hashed, so the cleartext cannot be derived from the generated UUID. The generation of UUIDs by this method has no random or environment-dependent element and is therefore reproducible. |
| `uuid_generate_v4()` → `uuid` | Generates a version 4 UUID, which is derived entirely from random numbers. |
| `uuid_generate_v5(`*`namespace`* `uuid`, *`name`* `text)` → `uuid` | Generates a version 5 UUID, which works like a version 3 UUID except that SHA-1 is used as a hashing method. Version 5 should be preferred over version 3 because SHA-1 is thought to be more secure than MD5. |

**Table F.34. Functions Returning UUID Constants**

| Function | Description |
| --- | --- |
| `uuid_nil()` → `uuid` | Returns a "nil" UUID constant, which does not occur as a real UUID. |
| `uuid_ns_dns()` → `uuid` | Returns a constant designating the DNS namespace for UUIDs. |
| `uuid_ns_url()` → `uuid` | Returns a constant designating the URL namespace for UUIDs. |
| `uuid_ns_oid()` → `uuid` | Returns a constant designating the ISO object identifier (OID) namespace for UUIDs. (This pertains to ASN.1 OIDs, which are unrelated to the OIDs used in PostgreSQL.) |
| `uuid_ns_x500()` → `uuid` | Returns a constant designating the X.500 distinguished name (DN) namespace for UUIDs. |
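As a quick usage sketch (installing the extension needs `CREATE` privilege on the current database, as noted above):

```
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";

SELECT uuid_generate_v1mc() AS v1mc,                      -- random multicast MAC variant
       uuid_generate_v4()   AS v4,                        -- fully random
       uuid_generate_v5(uuid_ns_dns(), 'example.com') AS v5;  -- name-based, SHA-1
```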
### F.46.2. Building `uuid-ossp`

Historically, this module depended on the OSSP UUID library, which accounts for the module's name. While the OSSP UUID library can still be found, it is not well maintained, and is becoming increasingly difficult to port to newer platforms. `uuid-ossp` can now be built without the OSSP library on some platforms. On FreeBSD, NetBSD, and some other BSD-derived platforms, suitable UUID creation functions are included in the core `libc` library. On Linux, macOS, and some other platforms, suitable functions are provided in the `libuuid` library, which originally came from the `e2fsprogs` project (though on modern Linux it is considered part of `util-linux-ng`). When invoking `configure`, specify `--with-uuid=bsd` to use the BSD functions, or `--with-uuid=e2fs` to use `e2fsprogs`' `libuuid`, or `--with-uuid=ossp` to use the OSSP UUID library. More than one of these libraries might be available on a particular machine, so `configure` does not automatically choose one.

### F.46.3. Author

Peter Eisentraut `<peter_e@gmx.net>`

diff --git a/docs/X/view-pg-locks.md b/docs/en/view-pg-locks.md
similarity index 100%
rename from docs/X/view-pg-locks.md
rename to docs/en/view-pg-locks.md
diff --git a/docs/en/view-pg-locks.zh.md b/docs/en/view-pg-locks.zh.md
new file mode 100644
index 0000000000000000000000000000000000000000..42f5f215076eb8a04391a73b27c371dc4037ff67
--- /dev/null
+++ b/docs/en/view-pg-locks.zh.md
@@ -0,0 +1,60 @@

## 52.74. `pg_locks`

The view `pg_locks` provides access to information about the locks held by active processes within the database server. See [Chapter 13](mvcc.html) for more discussion of locking.

`pg_locks` contains one row per active lockable object, requested lock mode, and relevant process. Thus, the same lockable object might appear many times, if multiple processes are holding or waiting for locks on it. However, an object that currently has no locks on it will not appear at all.

There are several distinct types of lockable objects: whole relations (e.g., tables), individual pages of relations, individual tuples of relations, transaction IDs (both virtual and permanent IDs), and general database objects (identified by class OID and object OID, in the same way as in [`pg_description`](catalog-pg-description.html) or [`pg_depend`](catalog-pg-depend.html)). Also, the right to extend a relation is represented as a separate lockable object, as is the right to update `pg_database`.`datfrozenxid`. Also, "advisory" locks can be taken on numbers that have user-defined meanings.

**Table 52.75. `pg_locks` Columns**
| Column Type | Description |
| --- | --- |
| `locktype` `text` | Type of the lockable object: `relation`, `extend`, `frozenid`, `page`, `tuple`, `transactionid`, `virtualxid`, `spectoken`, `object`, `userlock`, or `advisory`. (See also [Table 28.11](monitoring-stats.html#WAIT-EVENT-LOCK-TABLE).) |
| `database` `oid` (references [`pg_database`](catalog-pg-database.html).`oid`) | OID of the database in which the lock target exists, or zero if the target is a shared object, or null if the target is a transaction ID |
| `relation` `oid` (references [`pg_class`](catalog-pg-class.html).`oid`) | OID of the relation targeted by the lock, or null if the target is not a relation or part of a relation |
| `page` `int4` | Page number targeted by the lock within the relation, or null if the target is not a relation page or tuple |
| `tuple` `int2` | Tuple number targeted by the lock within the page, or null if the target is not a tuple |
| `virtualxid` `text` | Virtual ID of the transaction targeted by the lock, or null if the target is not a virtual transaction ID |
| `transactionid` `xid` | ID of the transaction targeted by the lock, or null if the target is not a transaction ID |
| `classid` `oid` (references [`pg_class`](catalog-pg-class.html).`oid`) | OID of the system catalog containing the lock target, or null if the target is not a general database object |
| `objid` `oid` (references any OID column) | OID of the lock target within its system catalog, or null if the target is not a general database object |
| `objsubid` `int2` | Column number targeted by the lock (the `classid` and `objid` refer to the table itself), or zero if the target is some other general database object, or null if the target is not a general database object |
| `virtualtransaction` `text` | Virtual ID of the transaction that is holding or awaiting this lock |
| `pid` `int4` | Process ID of the server process holding or awaiting this lock, or null if the lock is held by a prepared transaction |
| `mode` `text` | Name of the lock mode held or desired by this process (see [Section 13.3.1](explicit-locking.html#LOCKING-TABLES) and [Section 13.2.3](transaction-iso.html#XACT-SERIALIZABLE)) |
| `granted` `bool` | True if lock is held, false if lock is awaited |
| `fastpath` `bool` | True if lock was taken via fast path, false if taken via main lock table |
| `waitstart` `timestamptz` | Time when the server process started waiting for this lock, or null if the lock is held. Note that this can be null for a very short period of time after the wait started even though `granted` is `false`. |

`granted` is true in a row representing a lock held by the indicated process. False indicates that this process is currently waiting to acquire this lock, which implies that at least one other process is holding or waiting for a conflicting lock mode on the same lockable object. The waiting process will sleep until the other lock is released (or a deadlock situation is detected). A single process can be waiting to acquire at most one lock at a time.

Throughout running a transaction, a server process holds an exclusive lock on the transaction's virtual transaction ID. If a permanent ID is assigned to the transaction (which normally happens only if the transaction changes the state of the database), it also holds an exclusive lock on the transaction's permanent transaction ID until it ends. When a process finds it necessary to wait specifically for another transaction to end, it does so by attempting to acquire share lock on the other transaction's ID (either virtual or permanent ID depending on the situation). That will succeed only when the other transaction terminates and releases its locks.

Although tuples are a lockable type of object, information about row-level locks is stored on disk, not in memory, and therefore row-level locks normally do not appear in this view. If a process is waiting for a row-level lock, it will usually appear in the view as waiting for the permanent transaction ID of the current holder of that row lock.

Advisory locks can be acquired on keys consisting of either a single `bigint` value or two integer values. A `bigint` key is displayed with its high-order half in the `classid` column, its low-order half in the `objid` column, and `objsubid` equal to 1. The original `bigint` value can be reassembled with the expression `(classid::bigint << 32) | objid::bigint`. Integer keys are displayed with the first key in the `classid` column, the second key in the `objid` column, and `objsubid` equal to 2. The actual meaning of the keys is up to the user. Advisory locks are local to each database, so the `database` column is meaningful for an advisory lock.

`pg_locks` provides a global view of all locks in the database cluster, not only those relevant to the current database. Although its `relation` column can be joined against [`pg_class`](catalog-pg-class.html).`oid` to identify locked relations, this will only work correctly for relations in the current database (those for which the `database` column is either the current database's OID or zero).

The `pid` column can be joined to the `pid` column of the [`pg_stat_activity`](monitoring-stats.html#MONITORING-PG-STAT-ACTIVITY-VIEW) view to get more information on the session holding or awaiting each lock, for example

```
SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa
    ON pl.pid = psa.pid;
```

Also, if you are using prepared transactions, the `virtualtransaction` column can be joined to the `transaction` column of the [`pg_prepared_xacts`](view-pg-prepared-xacts.html) view to get more information on prepared transactions that hold locks. (A prepared transaction can never be waiting for a lock, but it continues to hold the locks it acquired while running.) For example:

```
SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
    ON pl.virtualtransaction = '-1/' || ppx.transaction;
```

While it is possible to obtain information about which processes block which other processes by joining `pg_locks` against itself, this is very difficult to get right in detail. Such a query would have to encode knowledge about which lock modes conflict with which others. Worse, the `pg_locks` view does not expose information about which processes are ahead of which others in lock wait queues, nor information about which processes are parallel workers running on behalf of which other client sessions. It is better to use the `pg_blocking_pids()` function (see [Table 9.65](functions-info.html#FUNCTIONS-INFO-SESSION-TABLE)) to identify which process(es) a waiting process is blocked behind.
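For instance, a minimal sketch listing every waiting backend together with the processes it is blocked behind (`pg_blocking_pids()` and `pg_stat_activity` are core facilities):

```
SELECT pid, pg_blocking_pids(pid) AS blocked_by, state, query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;
```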
The `pg_locks` view displays data from both the regular lock manager and the predicate lock manager, which are separate systems; in addition, the regular lock manager subdivides its locks into regular and *fast-path* locks. This data is not guaranteed to be entirely consistent. When the view is queried, data on fast-path locks (with `fastpath` = `true`) is gathered from each backend one at a time, without freezing the state of the entire lock manager, so it is possible for locks to be taken or released while information is gathered. Note, however, that these locks are known not to conflict with any other lock currently in place. After all backends have been queried for fast-path locks, the remainder of the regular lock manager is locked as a unit, and a consistent snapshot of all remaining locks is collected as an atomic action. After unlocking the regular lock manager, the predicate lock manager is similarly locked and all predicate locks are collected as an atomic action. Thus, with the exception of fast-path locks, each lock manager will deliver a consistent set of results, but as we do not lock both lock managers simultaneously, it is possible for locks to be taken or released after we interrogate the regular lock manager and before we interrogate the predicate lock manager.

Locking the regular and/or predicate lock manager could have some impact on database performance if this view is very frequently accessed. The locks are held only for the minimum amount of time necessary to obtain data from the lock managers, but this does not completely eliminate the possibility of a performance impact.

diff --git a/docs/X/wal-configuration.md b/docs/en/wal-configuration.md
similarity index 100%
rename from docs/X/wal-configuration.md
rename to docs/en/wal-configuration.md
diff --git a/docs/en/wal-configuration.zh.md b/docs/en/wal-configuration.zh.md
new file mode 100644
index 0000000000000000000000000000000000000000..88d1f0306b1e1f45f5d39370a95ca8e11e903257
--- /dev/null
+++ b/docs/en/wal-configuration.zh.md
@@ -0,0 +1,37 @@

## 30.5. WAL Configuration

There are several WAL-related configuration parameters that affect database performance. This section explains their use. Consult [Chapter 20](runtime-config.html) for general information about setting server configuration parameters.

*Checkpoints* are points in the sequence of transactions at which it is guaranteed that the heap and index data files have been updated with all information written before that checkpoint. At checkpoint time, all dirty data pages are flushed to disk and a special checkpoint record is written to the log file. (The change records were previously flushed to the WAL files.) In the event of a crash, the crash recovery procedure looks at the latest checkpoint record to determine the point in the log (known as the redo record) from which it should start the REDO operation. Any changes made to data files before that point are guaranteed to be already on disk. Hence, after a checkpoint, log segments preceding the one containing the redo record are no longer needed and can be recycled or removed. (When WAL archiving is being done, the log segments must be archived before being recycled or removed.)

The checkpoint requirement of flushing all dirty data pages to disk can cause a significant I/O load. For this reason, checkpoint activity is throttled so that I/O begins at checkpoint start and completes before the next checkpoint is due to start; this minimizes performance degradation during checkpoints.
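A quick way to review the parameters discussed below, and to force a checkpoint by hand; a sketch (the `CHECKPOINT` command requires superuser rights):

```
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('checkpoint_timeout', 'max_wal_size',
               'checkpoint_completion_target', 'checkpoint_warning');

CHECKPOINT;  -- force an immediate checkpoint
```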
The server's checkpointer process automatically performs a checkpoint every so often. A checkpoint is begun every [checkpoint_timeout](runtime-config-wal.html#GUC-CHECKPOINT-TIMEOUT) seconds, or if [max_wal_size](runtime-config-wal.html#GUC-MAX-WAL-SIZE) is about to be exceeded, whichever comes first. The default settings are 5 minutes and 1 GB, respectively. If no WAL has been written since the previous checkpoint, new checkpoints will be skipped even if `checkpoint_timeout` has passed. (If WAL archiving is being used and you want to put a lower limit on how often files are archived in order to bound potential data loss, you should adjust the [archive_timeout](runtime-config-wal.html#GUC-ARCHIVE-TIMEOUT) parameter rather than the checkpoint parameters.) It is also possible to force a checkpoint by using the SQL command `CHECKPOINT`.

Reducing `checkpoint_timeout` and/or `max_wal_size` causes checkpoints to occur more often. This allows faster after-crash recovery, since less work will need to be redone. However, one must balance this against the increased cost of flushing dirty data pages more often. If [full_page_writes](runtime-config-wal.html#GUC-FULL-PAGE-WRITES) is set (as is the default), there is another factor to consider. To ensure data page consistency, the first modification of a data page after each checkpoint results in logging the entire page content. In that case, a smaller checkpoint interval increases the volume of output to the WAL log, partially negating the goal of using a smaller interval, and in any case causing more disk I/O.

Checkpoints are fairly expensive, first because they require writing out all currently dirty buffers, and second because they result in extra subsequent WAL traffic as discussed above. It is therefore wise to set the checkpointing parameters high enough so that checkpoints don't happen too often. As a simple sanity check on your checkpointing parameters, you can set the [checkpoint_warning](runtime-config-wal.html#GUC-CHECKPOINT-WARNING) parameter. If checkpoints happen closer together than `checkpoint_warning` seconds, a message will be output to the server log recommending increasing `max_wal_size`. Occasional appearance of such a message is not cause for alarm, but if it appears often then the checkpoint control parameters should be increased. Bulk operations such as large `COPY` transfers might cause a number of such warnings to appear if you have not set `max_wal_size` high enough.

To avoid flooding the I/O system with a burst of page writes, writing dirty buffers during a checkpoint is spread over a period of time. That period is controlled by [checkpoint_completion_target](runtime-config-wal.html#GUC-CHECKPOINT-COMPLETION-TARGET), which is given as a fraction of the checkpoint interval (configured by using `checkpoint_timeout`). The I/O rate is adjusted so that the checkpoint finishes when the given fraction of `checkpoint_timeout` seconds have elapsed, or before `max_wal_size` is exceeded, whichever is sooner. With the default value of 0.9, PostgreSQL can be expected to complete each checkpoint a bit before the next scheduled checkpoint (at around 90% of the last checkpoint's duration). This spreads out the I/O as much as possible so that the checkpoint I/O load is consistent throughout the checkpoint interval. The disadvantage of this is that prolonging checkpoints affects recovery time, because more WAL segments will need to be kept around for possible use in recovery. A user concerned about the amount of time required to recover might wish to reduce `checkpoint_timeout` so that checkpoints occur more frequently but still spread the I/O across the checkpoint interval. Alternatively, `checkpoint_completion_target` could be reduced, but this would result in times of more intense I/O (during the checkpoint) and times of less I/O (after the checkpoint completed but before the next scheduled checkpoint), and therefore is not recommended. Although `checkpoint_completion_target` could be set as high as 1.0, it is typically recommended to set it to no higher than 0.9 (the default) since checkpoints include some other activities besides writing dirty buffers. A setting of 1.0 is quite likely to result in checkpoints not being completed on time, which would result in performance loss due to unexpected variation in the number of WAL segments needed.

On Linux and POSIX platforms [checkpoint_flush_after](runtime-config-wal.html#GUC-CHECKPOINT-FLUSH-AFTER) allows you to force the OS to flush pages written by the checkpoint to disk after a configurable number of bytes. Otherwise, these pages may be kept in the OS's page cache, inducing a stall when `fsync` is issued at the end of a checkpoint. This setting will often help to reduce transaction latency, but it also can have an adverse effect on performance; particularly for workloads that are bigger than [shared_buffers](runtime-config-resource.html#GUC-SHARED-BUFFERS), but smaller than the OS's page cache.

The number of WAL segment files in the `pg_wal` directory depends on `min_wal_size`, `max_wal_size` and the amount of WAL generated in previous checkpoint cycles. When old log segment files are no longer needed, they are removed or recycled (that is, renamed to become future segments in the numbered sequence). If, due to a short-term peak of log output rate, `max_wal_size` is exceeded, the unneeded segment files will be removed until the system gets back under this limit. Below that limit, the system recycles enough WAL files to cover the estimated need until the next checkpoint, and removes the rest. The estimate is based on a moving average of the number of WAL files used in previous checkpoint cycles. The moving average is increased immediately if the actual usage exceeds the estimate, so it accommodates peak usage rather than average usage to some extent. `min_wal_size` puts a minimum on the amount of WAL files recycled for future usage; that much WAL is always recycled for future use, even if the system is idle and the WAL usage estimate suggests that little WAL is needed.

Independently of `max_wal_size`, the most recent [wal_keep_size](runtime-config-replication.html#GUC-WAL-KEEP-SIZE) megabytes of WAL files plus one additional WAL file are kept at all times. Also, if WAL archiving is used, old segments cannot be removed or recycled until they are archived. If WAL archiving cannot keep up with the pace that WAL is generated, or if `archive_command` fails repeatedly, old WAL files will accumulate in `pg_wal` until the situation is resolved. A slow or failed standby server that uses a replication slot will have the same effect (see [Section 27.2.6](warm-standby.html#STREAMING-REPLICATION-SLOTS)).

In archive recovery or standby mode, the server periodically performs *restartpoints*, which are similar to checkpoints in normal operation: the server forces all its state to disk, updates the `pg_control` file to indicate that the already-processed WAL
data need not be scanned again, and then recycles any old log segment files in the `pg_wal` directory. Restartpoints can't be performed more frequently than checkpoints on the primary because restartpoints can only be performed at checkpoint records. A restartpoint is triggered when a checkpoint record is reached if at least `checkpoint_timeout` seconds have passed since the last restartpoint, or if WAL size is about to exceed `max_wal_size`. However, because of limitations on when a restartpoint can be performed, `max_wal_size` is often exceeded during recovery, by up to one checkpoint cycle's worth of WAL. (`max_wal_size` is never a hard limit anyway, so you should always leave plenty of headroom to avoid running out of disk space.)

There are two commonly used internal WAL functions: `XLogInsertRecord` and `XLogFlush`. `XLogInsertRecord` is used to place a new record into the WAL buffers in shared memory. If there is no space for the new record, `XLogInsertRecord` will have to write (move to kernel cache) a few filled WAL buffers. This is undesirable because `XLogInsertRecord` is used on every database low-level modification (for example, row insertion) at a time when an exclusive lock is held on affected data pages, so the operation needs to be as fast as possible. What is worse, writing WAL buffers might also force the creation of a new log segment, which takes even more time. Normally, WAL buffers should be written and flushed by an `XLogFlush` request, which is made, for the most part, at transaction commit time to ensure that transaction records are flushed to permanent storage. On systems with high log output, `XLogFlush` requests might not occur often enough to prevent `XLogInsertRecord` from having to do writes. On such systems one should increase the number of WAL buffers by modifying the [wal_buffers](runtime-config-wal.html#GUC-WAL-BUFFERS) parameter. When [full_page_writes](runtime-config-wal.html#GUC-FULL-PAGE-WRITES) is set and the system is very busy, setting `wal_buffers` higher will help smooth response times during the period immediately following each checkpoint.

The [commit_delay](runtime-config-wal.html#GUC-COMMIT-DELAY) parameter defines for how many microseconds a group commit leader process will sleep after acquiring a lock within `XLogFlush`, while group commit followers queue up behind the leader. This delay allows other server processes to add their commit records to the WAL buffers so that all of them will be flushed by the leader's eventual sync operation. No sleep will occur if [fsync](runtime-config-wal.html#GUC-FSYNC) is not enabled, or if fewer than [commit_siblings](runtime-config-wal.html#GUC-COMMIT-SIBLINGS) other sessions are currently in active transactions; this avoids sleeping when it's unlikely that any other session will commit soon. Note that on some platforms, the resolution of a sleep request is ten milliseconds, so that any nonzero `commit_delay` setting between 1 and 10000 microseconds would have the same effect. Note also that on some platforms, sleep operations may take slightly longer than requested by the parameter.

Since the purpose of `commit_delay` is to allow the cost of each flush operation to be amortized across concurrently committing transactions (potentially at the expense of transaction latency), it is necessary to quantify that cost before the setting can be chosen intelligently. The higher that cost is, the more effective `commit_delay` is expected to be in increasing transaction throughput, up to a point. The [pg_test_fsync](pgtestfsync.html) program can be used to measure the average time in microseconds that a single WAL flush operation takes. A value of half of the average time the program reports it takes to flush after a single 8kB write operation is often the most effective setting for `commit_delay`, so this value is recommended as the starting point to use when optimizing for a particular workload. While tuning `commit_delay` is particularly useful when the WAL log is stored on high-latency rotating disks, benefits can be significant even on storage media with very fast sync times, such as solid-state drives or RAID arrays with a battery-backed write cache; but this should definitely be tested against a representative workload. Higher values of `commit_siblings` should be used in such cases, whereas smaller `commit_siblings` values are often helpful on higher latency media. Note that it is quite possible that a setting of `commit_delay` that is too high can increase transaction latency by so much that total transaction throughput suffers.

When `commit_delay` is set to zero (the default), it is still possible for a form of group commit to occur, but each group will consist only of sessions that reach the point where they need to flush their commit records during the window in which the previous flush operation (if any) is occurring. At higher client counts a "gangway effect" tends to occur, so that the effects of group commit become significant even when `commit_delay` is zero, and thus explicitly setting `commit_delay` tends to help less. Setting `commit_delay` can only help when (1) there are some concurrently committing transactions, and (2) throughput is limited to some degree by commit rate; but with high rotational latency this setting can be effective in increasing transaction throughput with as few as two clients (that is, a single committing client with one sibling transaction).
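A sketch of applying such a setting cluster-wide (the value 200 is purely illustrative; as described above, it should be derived from pg_test_fsync measurements, and changing `commit_delay` requires superuser rights):

```
ALTER SYSTEM SET commit_delay = 200;   -- microseconds; illustrative value only
ALTER SYSTEM SET commit_siblings = 5;  -- the default, shown for completeness
SELECT pg_reload_conf();               -- apply without a server restart
```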
The [wal_sync_method](runtime-config-wal.html#GUC-WAL-SYNC-METHOD) parameter determines how PostgreSQL will ask the kernel to force WAL updates out to disk. All the options should be the same in terms of reliability, with the exception of `fsync_writethrough`, which can sometimes force a flush of the disk cache even when other options do not do so. However, it's quite platform-specific which one will be the fastest. You can test the speeds of different options using the [pg_test_fsync](pgtestfsync.html) program. Note that this parameter is irrelevant if `fsync` has been turned off.

Enabling the [wal_debug](runtime-config-developer.html#GUC-WAL-DEBUG) configuration parameter (provided that PostgreSQL has been compiled with support for it) will result in each `XLogInsertRecord` and `XLogFlush` WAL call being logged to the server log. This option might be replaced by a more general mechanism in the future.

There are two internal functions to write WAL data to disk: `XLogWrite` and `issue_xlog_fsync`. When [track_wal_io_timing](runtime-config-statistics.html#GUC-TRACK-WAL-IO-TIMING) is enabled, the total amounts of time `XLogWrite` writes and `issue_xlog_fsync` syncs WAL data to disk are counted as `wal_write_time` and `wal_sync_time` in [pg_stat_wal](monitoring-stats.html#PG-STAT-WAL-VIEW), respectively. `XLogWrite` is normally called by `XLogInsertRecord` (when there is no space for the new record in WAL buffers), `XLogFlush` and the WAL writer, to write WAL buffers to disk and call `issue_xlog_fsync`. `issue_xlog_fsync` is normally called by `XLogWrite` to sync WAL files to disk. If `wal_sync_method` is either `open_datasync` or `open_sync`, a write operation in `XLogWrite` guarantees to sync written WAL data to disk and `issue_xlog_fsync` does nothing. If `wal_sync_method` is either `fdatasync`, `fsync`, or `fsync_writethrough`, the write operation moves WAL buffers to kernel cache and `issue_xlog_fsync` syncs them to disk. Regardless of the setting of `track_wal_io_timing`, the number of times `XLogWrite` writes and `issue_xlog_fsync` syncs WAL data to disk are also counted as `wal_write` and `wal_sync` in `pg_stat_wal`, respectively.
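These counters can be read directly from the view; a sketch (the `*_time` columns stay zero unless `track_wal_io_timing` is enabled):

```
SELECT wal_records, wal_bytes, wal_buffers_full,
       wal_write, wal_sync, wal_write_time, wal_sync_time
FROM pg_stat_wal;
```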