Commit 5e84d58e authored by Thomas G. Lockhart

Minor updates for release.

Parent abc40591
<chapter id="datatype">
<title>Data Types</title>
<title id="datatype-title">Data Types</title>
<abstract>
<para>
......
<!--
$Header: /cvsroot/pgsql/doc/src/sgml/docguide.sgml,v 1.15 1999/05/27 15:49:07 thomas Exp $
$Header: /cvsroot/pgsql/doc/src/sgml/docguide.sgml,v 1.16 1999/06/14 07:36:11 thomas Exp $
Documentation Guide
Thomas Lockhart
$Log: docguide.sgml,v $
Revision 1.16 1999/06/14 07:36:11 thomas
Minor updates for release.
Revision 1.15 1999/05/27 15:49:07 thomas
Markup fixes.
Update for v6.5 release.
......@@ -175,7 +178,7 @@ Include working list of all documentation sources, with current status
James Clark's
<ulink url="http://www.jclark.com/jade/"> <productname>jade</productname></ulink>
and Norm Walsh's
<ulink url="http://www.berkshire.net/~norm/docbook/dsssl">Modular DocBook Stylesheets</ulink>.
<ulink url="http://www.nwalsh.com/docbook/dsssl/">Modular DocBook Stylesheets</ulink>.
</para>
<para>
......
This diff is collapsed.
<Chapter Id="largeObjects">
<Title>Large Objects</Title>
<Para>
In <ProductName>Postgres</ProductName>, data values are stored in tuples and
individual tuples cannot span data pages. Since the size of
a data page is 8192 bytes, the upper limit on the size
of a data value is relatively low. To support the storage
of larger atomic values, <ProductName>Postgres</ProductName> provides a large
object interface. This interface provides file
oriented access to user data that has been declared to
be a large type.
This section describes the implementation and the
programmatic and query language interfaces to <ProductName>Postgres</ProductName>
large object data.
</Para>
<Sect1>
<Title>Historical Note</Title>
<Para>
Originally, <ProductName>Postgres 4.2</ProductName> supported three standard
<chapter id="largeObjects">
<title id="largeObjects-title">Large Objects</title>
<para>
In <productname>Postgres</productname>,
data values are stored in tuples and
individual tuples cannot span data pages. Since the size of
a data page is 8192 bytes, the upper limit on the size
of a data value is relatively low. To support the storage
of larger atomic values,
<productname>Postgres</productname> provides a large
object interface. This interface provides file
oriented access to user data that has been declared to
be a large type.
This section describes the implementation and the
programmatic and query language interfaces to
<productname>Postgres</productname>
large object data.
</para>
<sect1>
<title>Historical Note</title>
<para>
Originally, <productname>Postgres 4.2</productname> supported three standard
implementations of large objects: as files external
to <ProductName>Postgres</ProductName>, as <Acronym>UNIX</Acronym> files managed by <ProductName>Postgres</ProductName>, and as data
stored within the <ProductName>Postgres</ProductName> database. It causes
to <productname>Postgres</productname>, as
<acronym>UNIX</acronym> files managed by <productname>Postgres</productname>, and as data
stored within the <productname>Postgres</productname> database. It causes
considerable confusion among users. As a result, we only
support large objects as data stored within the <ProductName>Postgres</ProductName>
database in <ProductName>PostgreSQL</ProductName>. Even though it is slower to
support large objects as data stored within the <productname>Postgres</productname>
database in <productname>PostgreSQL</productname>. Even though it is slower to
access, it provides stricter data integrity.
For historical reasons, this storage scheme is referred to as
Inversion large objects. (We will use Inversion and large
objects interchangeably to mean the same thing in this
section.)
</Para>
</Sect1>
</para>
</sect1>
<Sect1>
<Title>Inversion Large Objects</Title>
<sect1>
<title>Inversion Large Objects</title>
<Para>
<para>
The Inversion large object implementation breaks large
objects up into "chunks" and stores the chunks in
tuples in the database. A B-tree index guarantees fast
searches for the correct chunk number when doing random
access reads and writes.
</Para>
</Sect1>
</para>
</sect1>
<Sect1>
<Title>Large Object Interfaces</Title>
<sect1>
<title>Large Object Interfaces</title>
<Para>
The facilities <ProductName>Postgres</ProductName> provides to access large
<para>
The facilities <productname>Postgres</productname> provides to access large
objects, both in the backend as part of user-defined
functions or the front end as part of an application
using the interface, are described below. (For users
familiar with <ProductName>Postgres 4.2</ProductName>, <ProductName>PostgreSQL</ProductName> has a new set of
familiar with <productname>Postgres 4.2</productname>,
<productname>PostgreSQL</productname> has a new set of
functions providing a more coherent interface. The
interface is the same for dynamically-loaded C
functions as well as for frontend applications.)
The <ProductName>Postgres</ProductName> large object interface is modeled after
the <Acronym>UNIX</Acronym> file system interface, with analogues of
<Function>open(2)</Function>, <Function>read(2)</Function>, <Function>write(2)</Function>,
<Function>lseek(2)</Function>, etc. User
The <productname>Postgres</productname> large object interface is modeled after
the <acronym>UNIX</acronym> file system interface, with analogues of
<function>open(2)</function>, <function>read(2)</function>,
<function>write(2)</function>,
<function>lseek(2)</function>, etc. User
functions call these routines to retrieve only the data of
interest from a large object. For example, if a large
object type called mugshot existed that stored
......@@ -72,81 +78,82 @@
the beard that appeared there, if any. The entire
large object value need not be buffered, or even
examined, by the beard function.
Large objects may be accessed from dynamically-loaded <Acronym>C</Acronym>
Large objects may be accessed from dynamically-loaded <acronym>C</acronym>
functions or database client programs that link the
library. <ProductName>Postgres</ProductName> provides a set of routines that
library. <productname>Postgres</productname> provides a set of routines that
support opening, reading, writing, closing, and seeking on
large objects.
</Para>
</para>
<Sect2>
<Title>Creating a Large Object</Title>
<sect2>
<title>Creating a Large Object</title>
<Para>
<para>
The routine
<ProgramListing>
<programlisting>
Oid lo_creat(PGconn *conn, int mode)
</ProgramListing>
</programlisting>
creates a new large object. The mode is a bitmask
describing several different attributes of the new
object. The symbolic constants listed here are defined
in
<FileName>
<filename>
PGROOT/src/backend/libpq/libpq-fs.h
</FileName>
</filename>.
The access type (read, write, or both) is controlled by
OR ing together the bits <Acronym>INV_READ</Acronym> and <Acronym>INV_WRITE</Acronym>. If
OR-ing together the bits <acronym>INV_READ</acronym> and
<acronym>INV_WRITE</acronym>. If
the large object should be archived -- that is, if
historical versions of it should be moved periodically to
a special archive relation -- then the <Acronym>INV_ARCHIVE</Acronym> bit
a special archive relation -- then the <acronym>INV_ARCHIVE</acronym> bit
should be set. The low-order sixteen bits of mask are
the storage manager number on which the large object
should reside. For sites other than Berkeley, these
bits should always be zero.
The commands below create an (Inversion) large object:
<ProgramListing>
<programlisting>
inv_oid = lo_creat(conn, INV_READ|INV_WRITE|INV_ARCHIVE);
</ProgramListing>
</Para>
</Sect2>
</programlisting>
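When <function>lo_creat</function> is called from a client program through
<acronym>libpq</acronym>, the connection must be supplied as the first argument.
The sketch below shows one way such a call might look; the connection string,
the transaction wrapper, and the error handling are illustrative assumptions
rather than part of the interface.
<programlisting>
/*
 * Sketch only: create a new large object from a libpq client.
 * The connection string is a placeholder; error handling is minimal.
 */
#include "libpq-fe.h"
#include "libpq/libpq-fs.h"     /* INV_READ, INV_WRITE */

int
create_example(void)
{
    PGconn *conn = PQconnectdb("dbname=test");
    Oid     lobj_oid;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        PQfinish(conn);
        return 1;
    }

    /* large object calls are usually wrapped in a transaction block */
    PQclear(PQexec(conn, "BEGIN"));

    lobj_oid = lo_creat(conn, INV_READ | INV_WRITE);

    PQclear(PQexec(conn, "END"));
    PQfinish(conn);

    return (lobj_oid != 0) ? 0 : 1;    /* 0 is treated as failure here */
}
</programlisting>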
</para>
</sect2>
<Sect2>
<Title>Importing a Large Object</Title>
<sect2>
<title>Importing a Large Object</title>
<Para>
To import a <Acronym>UNIX</Acronym> file as
<para>
To import a <acronym>UNIX</acronym> file as
a large object, call
<ProgramListing>
<programlisting>
Oid lo_import(PGconn *conn, text *filename)
</ProgramListing>
The filename argument specifies the <Acronym>UNIX</Acronym> pathname of
</programlisting>
The filename argument specifies the <acronym>UNIX</acronym> pathname of
the file to be imported as a large object.
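In the <acronym>libpq</acronym> C interface the filename is passed as an
ordinary null-terminated string. A minimal client-side sketch (the path and the
surrounding transaction are only examples) might be:
<programlisting>
/* Sketch only: import a file as a large object; the path is an example. */
#include "libpq-fe.h"

int
import_example(PGconn *conn)
{
    Oid raster_oid;

    PQclear(PQexec(conn, "BEGIN"));
    raster_oid = lo_import(conn, "/tmp/picture.raster");
    PQclear(PQexec(conn, "END"));

    return (raster_oid != 0) ? 0 : 1;  /* 0 is treated as failure here */
}
</programlisting>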
</Para>
</Sect2>
</para>
</sect2>
<Sect2>
<Title>Exporting a Large Object</Title>
<sect2>
<title>Exporting a Large Object</title>
<Para>
<para>
To export a large object
into <Acronym>UNIX</Acronym> file, call
<ProgramListing>
into a <acronym>UNIX</acronym> file, call
<programlisting>
int lo_export(PGconn *conn, Oid lobjId, text *filename)
</ProgramListing>
</programlisting>
The lobjId argument specifies the Oid of the large
object to export and the filename argument specifies
the <Acronym>UNIX</Acronym> pathname of the file.
</Para>
</Sect2>
the <acronym>UNIX</acronym> pathname of the file.
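A corresponding client-side sketch (again with an invented path, and treating a
non-positive result as failure) might be:
<programlisting>
/* Sketch only: export a stored large object to a file; the path is an example. */
#include "libpq-fe.h"

int
export_example(PGconn *conn, Oid lobj_oid)
{
    int result;

    PQclear(PQexec(conn, "BEGIN"));
    result = lo_export(conn, lobj_oid, "/tmp/picture.out");
    PQclear(PQexec(conn, "END"));

    return (result > 0) ? 0 : 1;   /* non-positive result treated as failure */
}
</programlisting>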
</para>
</sect2>
<Sect2>
<Title>Opening an Existing Large Object</Title>
<sect2>
<title>Opening an Existing Large Object</title>
<Para>
<para>
To open an existing large object, call
<ProgramListing>
<programlisting>
int lo_open(PGconn *conn, Oid lobjId, int mode, ...)
</ProgramListing>
</programlisting>
The lobjId argument specifies the Oid of the large
object to open. The mode bits control whether the
object is opened for reading (INV_READ), writing or
......@@ -154,64 +161,65 @@ int lo_open(PGconn *conn, Oid lobjId, int mode, ...)
A large object cannot be opened before it is created.
lo_open returns a large object descriptor for later use
in lo_read, lo_write, lo_lseek, lo_tell, and lo_close.
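A short sketch of opening an object and immediately closing it again (the
transaction wrapper and error handling are assumptions for illustration) might
look like:
<programlisting>
/* Sketch only: open an existing large object for reading and writing. */
#include "libpq-fe.h"
#include "libpq/libpq-fs.h"     /* INV_READ, INV_WRITE */

int
open_example(PGconn *conn, Oid lobj_oid)
{
    int fd;

    PQclear(PQexec(conn, "BEGIN"));

    fd = lo_open(conn, lobj_oid, INV_READ | INV_WRITE);
    if (fd >= 0)
    {
        /* fd can now be used with lo_read, lo_write, lo_lseek,
         * lo_tell and lo_close for the rest of this transaction */
        lo_close(conn, fd);
    }

    PQclear(PQexec(conn, "END"));

    return (fd >= 0) ? 0 : 1;      /* a negative descriptor indicates an error */
}
</programlisting>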
</Para>
</Sect2>
</para>
</sect2>
<Sect2>
<Title>Writing Data to a Large Object</Title>
<sect2>
<title>Writing Data to a Large Object</title>
<Para>
<para>
The routine
<ProgramListing>
<programlisting>
int lo_write(PGconn *conn, int fd, char *buf, int len)
</ProgramListing>
</programlisting>
writes len bytes from buf to large object fd. The fd
argument must have been returned by a previous lo_open.
The number of bytes actually written is returned. In
the event of an error, the return value is negative.
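As an illustration, a small sketch that writes a short string through a
descriptor already obtained from lo_open (the payload is a placeholder) might
be:
<programlisting>
/* Sketch only: write a short string to an already opened large object. */
#include "libpq-fe.h"

int
write_example(PGconn *conn, int fd)
{
    char data[] = "hello, large object";
    int  nbytes;

    nbytes = lo_write(conn, fd, data, sizeof(data) - 1);

    /* the number of bytes actually written is returned;
     * a negative value indicates an error */
    return (nbytes == (int)(sizeof(data) - 1)) ? 0 : 1;
}
</programlisting>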
</Para>
</Sect2>
</para>
</sect2>
<Sect2>
<Title>Seeking on a Large Object</Title>
<sect2>
<title>Seeking on a Large Object</title>
<Para>
<para>
To change the current read or write location on a large
object, call
<ProgramListing>
<programlisting>
int lo_lseek(PGconn *conn, int fd, int offset, int whence)
</ProgramListing>
</programlisting>
This routine moves the current location pointer for the
large object described by fd to the new location specified
by offset. The valid values for whence are
SEEK_SET, SEEK_CUR, and SEEK_END.
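A common idiom is to seek to the end of the object to discover its size;
<function>lo_tell</function> reports the current location. A sketch (error
handling omitted) might be:
<programlisting>
/* Sketch only: determine the size of an open large object by seeking to its end. */
#include <stdio.h>              /* SEEK_SET, SEEK_END */
#include "libpq-fe.h"

int
size_example(PGconn *conn, int fd)
{
    int size;

    size = lo_lseek(conn, fd, 0, SEEK_END);  /* returned position == object size */
    lo_lseek(conn, fd, 0, SEEK_SET);         /* rewind for subsequent reads */

    return size;                             /* negative on error */
}
</programlisting>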
</Para>
</Sect2>
</para>
</sect2>
<Sect2>
<Title>Closing a Large Object Descriptor</Title>
<sect2>
<title>Closing a Large Object Descriptor</title>
<Para>
<para>
A large object may be closed by calling
<ProgramListing>
<programlisting>
int lo_close(PGconn *conn, int fd)
</ProgramListing>
</programlisting>
where fd is a large object descriptor returned by
lo_open. On success, <Acronym>lo_close</Acronym> returns zero. On error,
lo_open. On success, <acronym>lo_close</acronym> returns zero. On error,
the return value is negative.
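Taken together, these routines follow the familiar open/write/close pattern. A
compact end-to-end sketch (the object id and payload are placeholders) might
be:
<programlisting>
/* Sketch only: the descriptor lifecycle -- open, write, close -- in one transaction. */
#include "libpq-fe.h"
#include "libpq/libpq-fs.h"

int
lifecycle_example(PGconn *conn, Oid lobj_oid)
{
    char data[] = "replacement contents";
    int  fd, nbytes, ok = 0;

    PQclear(PQexec(conn, "BEGIN"));

    fd = lo_open(conn, lobj_oid, INV_WRITE);
    if (fd >= 0)
    {
        nbytes = lo_write(conn, fd, data, sizeof(data) - 1);
        if (lo_close(conn, fd) == 0 && nbytes == (int)(sizeof(data) - 1))
            ok = 1;
    }

    PQclear(PQexec(conn, "END"));

    return ok ? 0 : 1;
}
</programlisting>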
</Para>
</para>
</sect2>
</Sect1>
</sect1>
<Sect1>
<Title>Built in registered functions</Title>
<sect1>
<title>Built in registered functions</title>
<Para>
There are two built-in registered functions, <Acronym>lo_import</Acronym>
and <Acronym>lo_export</Acronym> which are convenient for use in <Acronym>SQL</Acronym>
<para>
There are two built-in registered functions, <acronym>lo_import</acronym>
and <acronym>lo_export</acronym>, which are convenient for use
in <acronym>SQL</acronym>
queries.
Here is an example of their use
<ProgramListing>
<programlisting>
CREATE TABLE image (
name text,
raster oid
......@@ -222,33 +230,33 @@ INSERT INTO image (name, raster)
SELECT lo_export(image.raster, '/tmp/motd') from image
WHERE name = 'beautiful image';
</ProgramListing>
</Para>
</Sect1>
</programlisting>
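Because <acronym>lo_import</acronym> and <acronym>lo_export</acronym> are
ordinary registered functions, the same statements can also be issued from a
client program through <function>PQexec</function>. In the sketch below the
table name and file paths simply follow the example above and are not part of
the interface.
<programlisting>
/* Sketch only: call the registered lo_import/lo_export functions through SQL. */
#include "libpq-fe.h"

int
registered_funcs_example(PGconn *conn)
{
    PGresult *res;
    int       ok;

    res = PQexec(conn,
                 "INSERT INTO image (name, raster) "
                 "VALUES ('beautiful image', lo_import('/tmp/motd'))");
    ok = (PQresultStatus(res) == PGRES_COMMAND_OK);
    PQclear(res);

    res = PQexec(conn,
                 "SELECT lo_export(image.raster, '/tmp/motd') FROM image "
                 "WHERE name = 'beautiful image'");
    ok = ok && (PQresultStatus(res) == PGRES_TUPLES_OK);
    PQclear(res);

    return ok ? 0 : 1;
}
</programlisting>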
</para>
</sect1>
<Sect1>
<Title>Accessing Large Objects from LIBPQ</Title>
<sect1>
<title>Accessing Large Objects from LIBPQ</title>
<Para>
<para>
Below is a sample program which shows how the large object
interface
in LIBPQ can be used. Parts of the program are
commented out but are left in the source for the reader's
benefit. This program can be found in
<FileName>
<filename>
../src/test/examples
</FileName>
</filename>.
Frontend applications which use the large object interface
in LIBPQ should include the header file
libpq/libpq-fs.h and link with the libpq library.
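In outline, such a frontend needs little more than the two headers, a
connection, and a link against the libpq library (typically -lpq); a minimal
skeleton with placeholder connection parameters is sketched below.
<programlisting>
/* Sketch only: skeleton of a frontend that uses the large object interface. */
#include <stdio.h>
#include "libpq-fe.h"           /* PGconn, PQconnectdb, lo_* declarations */
#include "libpq/libpq-fs.h"     /* INV_READ, INV_WRITE mode bits */

int
main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* ... lo_creat, lo_open, lo_read, lo_write, lo_close calls go here ... */

    PQfinish(conn);
    return 0;
}
</programlisting>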
</Para>
</Sect1>
</para>
</sect1>
<Sect1>
<Title>Sample Program</Title>
<sect1>
<title>Sample Program</title>
<Para>
<ProgramListing>
<para>
<programlisting>
/*--------------------------------------------------------------
*
* testlo.c--
......@@ -479,8 +487,25 @@ SELECT lo_export(image.raster, "/tmp/motd") from image
PQfinish(conn);
exit(0);
}
</ProgramListing>
</Para>
</Sect1>
</Chapter>
</programlisting>
</para>
</sect1>
</chapter>
<!-- Keep this comment at the end of the file
Local variables:
mode: sgml
sgml-omittag:nil
sgml-shorttag:t
sgml-minimize-attributes:nil
sgml-always-quote-attributes:t
sgml-indent-step:1
sgml-indent-data:t
sgml-parent-document:nil
sgml-default-dtd-file:"./reference.ced"
sgml-exposed-tags:nil
sgml-local-catalogs:"/usr/lib/sgml/catalog"
sgml-local-ecat-files:nil
End:
-->
......@@ -23,7 +23,7 @@
</para>
<para>
Here is a brief summary of some of the more noticable changes:
Here is a brief summary of the more notable changes:
<variablelist>
<varlistentry>
......@@ -188,16 +188,16 @@
<para>
Because readers in 6.5 don't lock data, regardless of transaction
isolation level, data read by one transaction can be overwritten by
another. In the other words, if a row is returned by
another. In other words, if a row is returned by
<command>SELECT</command> it doesn't mean that this row really exists
at the time it is returned (i.e. sometime after the statement or
transaction began) nor that the row is protected from deletion or
update by concurrent transactions before the current transaction does
transaction began) nor that the row is protected from being deleted or
updated by concurrent transactions before the current transaction does
a commit or rollback.
</para>
<para>
To ensure the actual existance of a row and protect it against
To ensure the actual existence of a row and protect it against
concurrent updates one must use <command>SELECT FOR UPDATE</command> or
an appropriate <command>LOCK TABLE</command> statement. This should be
taken into account when porting applications from previous releases of
......@@ -205,7 +205,8 @@
</para>
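<para>
For example, a client transaction might lock the row it is about to change
with <command>SELECT FOR UPDATE</command> and only then update it. In the
sketch below the table, column, and id handling are invented for illustration.
<programlisting>
/* Sketch only: lock a row with SELECT FOR UPDATE before changing it. */
#include <stdio.h>
#include "libpq-fe.h"

int
update_if_present(PGconn *conn, int account_id)
{
    PGresult *res;
    char      query[128];
    int       found;

    PQclear(PQexec(conn, "BEGIN"));

    /* lock the row so concurrent transactions cannot delete or update it
     * before we commit */
    sprintf(query, "SELECT 1 FROM accounts WHERE id = %d FOR UPDATE", account_id);
    res = PQexec(conn, query);
    found = (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) == 1);
    PQclear(res);

    if (found)
    {
        sprintf(query, "UPDATE accounts SET balance = balance + 1 WHERE id = %d",
                account_id);
        PQclear(PQexec(conn, query));
    }

    PQclear(PQexec(conn, found ? "COMMIT" : "ROLLBACK"));
    return found ? 0 : 1;
}
</programlisting>
</para>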
<para>
Keep above in mind if you are using contrib/refint.* triggers for
Keep the above in mind if you are using
<filename>contrib/refint.*</filename> triggers for
referential integrity. Additional techniques are required now. One way is
to use <command>LOCK parent_table IN SHARE ROW EXCLUSIVE MODE</command>
command if a transaction is going to update/delete a primary key and
......@@ -2634,6 +2635,7 @@ Initial release.
<programlisting>
Time System
02:00 Dual Pentium Pro 180, 224MB, UW-SCSI, Linux 2.0.36, gcc 2.7.2.3 -O2 -m486
04:38 Sparc Ultra 1 143MHz, 64MB, Solaris 2.6
</programlisting>
</para>
......