1. 10 Feb 2011 (8 commits)
    • Track last time for statistics reset on databases and bgwriter · 4c468b37
      Magnus Hagander committed
      Tracks one counter for each database, which is reset whenever
      the statistics for any individual object inside the database are
      reset, and one counter for the background writer.
      
      Tomas Vondra, reviewed by Greg Smith
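      The new counters surface as timestamp columns; a usage sketch, assuming a server carrying this patch, where `pg_stat_database` and `pg_stat_bgwriter` gain a `stats_reset` column:

      ```sql
      -- When were each database's statistics last reset?
      SELECT datname, stats_reset FROM pg_stat_database;

      -- When were the background writer's statistics last reset?
      SELECT stats_reset FROM pg_stat_bgwriter;
      ```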
    • Use NOWAIT when including WAL in base backup · a2e61ec3
      Magnus Hagander committed
      Avoids warning and waiting for the last segment to be
      archived, which isn't necessary when we're including the
      required WAL in the backup itself.
    • Allocate all entries in the serializable xid hash up-front, so that you don't · cecb5901
      Heikki Linnakangas committed
      run out of shared memory when you try to assign an xid to a transaction.
      
      Kevin Grittner
    • Fix improper matching of resjunk column names for FOR UPDATE in subselect. · e617f0d7
      Tom Lane committed
      Flattening of subquery range tables during setrefs.c could lead to the
      rangetable indexes in PlanRowMark nodes not matching up with the column
      names previously assigned to the corresponding resjunk ctid (resp. tableoid
      or wholerow) columns.  Typical symptom would be either a "cannot extract
      system attribute from virtual tuple" error or an Assert failure.  This
      wasn't a problem before 9.0 because we didn't support FOR UPDATE below the
      top query level, and so the final flattening could never renumber an RTE
      that was relevant to FOR UPDATE.  Fix by using a plan-tree-wide unique
      number for each PlanRowMark to label the associated resjunk columns, so
      that the number need not change during flattening.
      
      Per report from David Johnston (though I'm darned if I can see how this got
      past initial testing of the relevant code).  Back-patch to 9.0.
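      The bug class involved FOR UPDATE inside a subquery that gets flattened into the parent; a minimal shape of such a query (a hypothetical reproduction sketch, not the reported query):

      ```sql
      -- 9.0+ allows FOR UPDATE below the top query level; after flattening,
      -- the resjunk ctid/tableoid columns must still match their PlanRowMark.
      SELECT sub.id
      FROM (SELECT id FROM foo FOR UPDATE) AS sub
      WHERE sub.id > 0;
      ```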
    • Fix pg_upgrade to handle extensions. · caddcb8f
      Tom Lane committed
      This follows my proposal of yesterday, namely that we try to recreate the
      previous state of the extension exactly, instead of allowing CREATE
      EXTENSION to run a SQL script that might create some entirely-incompatible
      on-disk state.  In --binary-upgrade mode, pg_dump won't issue CREATE
      EXTENSION at all, but instead uses a kluge function provided by
      pg_upgrade_support to recreate the pg_extension row (and extension-level
      pg_depend entries) without creating any member objects.  The member objects
      are then restored in the same way as if they weren't members, in particular
      using pg_upgrade's normal hacks to preserve OIDs that need to be preserved.
      Then, for each member object, ALTER EXTENSION ADD is issued to recreate the
      pg_depend entry that marks it as an extension member.
      
      In passing, fix breakage in pg_upgrade's enum-type support: somebody didn't
      fix it when the noise word VALUE got added to ALTER TYPE ADD.  Also,
      rationalize parsetree representation of COMMENT ON DOMAIN and fix
      get_object_address() to allow OBJECT_DOMAIN.
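      The enum breakage concerns the then-new ALTER TYPE syntax, and the extension side re-marks each restored object afterwards; a hedged sketch (object names illustrative):

      ```sql
      -- ALTER TYPE ... ADD gained the noise word VALUE for enum types:
      ALTER TYPE mood ADD VALUE 'meh';

      -- In --binary-upgrade mode, after members are restored as plain
      -- objects, each one is re-attached to its extension:
      ALTER EXTENSION hstore ADD FUNCTION hstore_in(cstring);
      ```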
    • Information schema views for collation support · 2e2d56fe
      Peter Eisentraut committed
      Add the views character_sets, collations, and
      collation_character_set_applicability.
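      The new views can be inspected directly (a quick look; the contents depend on the server's encoding and locale support):

      ```sql
      SELECT * FROM information_schema.character_sets;
      SELECT * FROM information_schema.collations;
      SELECT * FROM information_schema.collation_character_set_applicability;
      ```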
    • Rethink order of operations for dumping extension member objects. · 183d3cff
      Tom Lane committed
      My original idea of doing extension member identification during
      getDependencies() didn't work correctly: we have to mark member tables as
      not-to-be-dumped rather earlier than that, else their subsidiary objects
      like indexes get dumped anyway.  Rearrange code to mark them early enough.
    • Implement "ALTER EXTENSION ADD object". · 5bc178b8
      Tom Lane committed
      This is an essential component of making the extension feature usable;
      first because it's needed in the process of converting an existing
      installation containing "loose" objects of an old contrib module into
      the extension-based world, and second because we'll have to use it
      in pg_dump --binary-upgrade, as per recent discussion.
      
      Loosely based on part of Dimitri Fontaine's ALTER EXTENSION UPGRADE
      patch.
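      Converting loose pre-extension objects might then look like this (a sketch with hypothetical object names; the per-module conversion recipes were still under discussion at this point):

      ```sql
      -- Absorb already-existing objects into an extension's membership,
      -- creating only the pg_depend entries, not the objects themselves:
      ALTER EXTENSION my_ext ADD FUNCTION pair(text, text);
      ALTER EXTENSION my_ext ADD TABLE my_ext_config;
      ```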
  2. 09 Feb 2011 (10 commits)
  3. 08 Feb 2011 (6 commits)
    • Remove rare corner case for data loss when triggering standby server. · faa05505
      Simon Riggs committed
      If the standby was streaming when the trigger file arrives, also check the
      archive for additional WAL files. This is a corner case, since it is
      unlikely that we would trigger a failover while the master is still
      available and sending data to the standby, while at the same time running in
      archive mode and with the streaming standby fallen behind the archive.
      Someone would eventually be unlucky; we must plug all gaps, however small.
      
      Fujii Masao
    • Extend ALTER TABLE to allow Foreign Keys to be added without initial validation. · 722bf701
      Simon Riggs committed
      FK constraints that are marked NOT VALID may later be VALIDATED, which uses a
      ShareUpdateExclusiveLock on the constraint table and a RowShareLock on the
      referenced table. This significantly reduces lock strength and duration when
      adding FKs. The new state is visible from psql.
      
      Simon Riggs, with reviews from Marko Tiikkaja and Robert Haas
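      A usage sketch (table and constraint names are illustrative):

      ```sql
      -- Add the FK without scanning existing rows:
      ALTER TABLE orders
        ADD CONSTRAINT orders_customer_fk
        FOREIGN KEY (customer_id) REFERENCES customers (id)
        NOT VALID;

      -- Later, validate under the weaker ShareUpdateExclusiveLock:
      ALTER TABLE orders VALIDATE CONSTRAINT orders_customer_fk;
      ```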
    • Fix copy-pasto in description of pg_serial, and silence compiler warning · 7202ad7b
      Heikki Linnakangas committed
      Silences a compiler warning about an uninitialized field that you get on
      some compilers.
    • Avoid having autovacuum workers wait for relation locks. · 32896c40
      Robert Haas committed
      Waiting for relation locks can lead to starvation - it pins down an
      autovacuum worker for as long as the lock is held.  But if we're doing
      an anti-wraparound vacuum, then we still wait; maintenance can no longer
      be put off.
      
      To assist with troubleshooting, if log_autovacuum_min_duration >= 0,
      we log whenever an autovacuum or autoanalyze is skipped for this reason.
      
      Per a gripe by Josh Berkus, and ensuing discussion.
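      The logging threshold is the existing GUC; for example (a postgresql.conf fragment):

      ```
      # Log every autovacuum/autoanalyze action taking this long or longer;
      # with this patch, actions skipped due to lock conflicts are logged too.
      log_autovacuum_min_duration = 0
      ```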
    • Oops, forgot to bump catversion in the Serializable Snapshot Isolation patch. · 47082fa8
      Heikki Linnakangas committed
      I thought we didn't need that, but then I remembered that it added a new
      SLRU subdirectory, pg_serial. While we're at it, document what pg_serial is.
    • Implement genuine serializable isolation level. · dafaa3ef
      Heikki Linnakangas committed
      Until now, our Serializable mode has in fact been what's called Snapshot
      Isolation, which allows some anomalies that could not occur in any
      serialized ordering of the transactions. This patch fixes that using a
      method called Serializable Snapshot Isolation, based on research papers by
      Michael J. Cahill (see README-SSI for full references). In Serializable
      Snapshot Isolation, transactions run like they do in Snapshot Isolation,
      but a predicate lock manager observes the reads and writes performed and
      aborts transactions if it detects that an anomaly might occur. This method
      produces some false positives, i.e. it sometimes aborts transactions even
      though there is no anomaly.
      
      To track reads we implement predicate locking, see storage/lmgr/predicate.c.
      Whenever a tuple is read, a predicate lock is acquired on the tuple. Shared
      memory is finite, so when a transaction takes many tuple-level locks on a
      page, the locks are promoted to a single page-level lock, and further to a
      single relation-level lock if necessary. To lock key values with no matching
      tuple, a sequential scan always takes a relation-level lock, and an index
      scan acquires a page-level lock that covers the search key, whether or not
      there are any matching keys at the moment.
      
      A predicate lock doesn't conflict with regular locks or with other
      predicate locks in the normal sense. They're only used by the predicate lock
      manager to detect the danger of anomalies. Only serializable transactions
      participate in predicate locking, so there should be no extra overhead
      for other transactions.
      
      Predicate locks can't be released at commit, but must be remembered until
      all the transactions that overlapped with them have completed. That means we
      need to remember an unbounded number of predicate locks, so we apply a
      lossy but conservative method of tracking locks for committed transactions.
      If we run short of shared memory, we overflow to a new "pg_serial" SLRU
      pool.
      
      We don't currently allow Serializable transactions in Hot Standby mode.
      That would be hard, because even read-only transactions can cause anomalies
      that wouldn't otherwise occur.
      
      Serializable isolation mode now means the new fully serializable level.
      Repeatable Read gives you the old Snapshot Isolation level that we have
      always had.
      
      Kevin Grittner and Dan Ports, reviewed by Jeff Davis, Heikki Linnakangas and
      Anssi Kääriäinen
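      The classic anomaly SSI catches is write skew, which plain Snapshot Isolation permits; a two-session sketch (schema and values hypothetical):

      ```sql
      -- Both sessions start with: BEGIN ISOLATION LEVEL SERIALIZABLE;
      -- Session 1:                        -- Session 2:
      SELECT sum(val) FROM t               SELECT sum(val) FROM t
        WHERE class = 2;                     WHERE class = 1;
      INSERT INTO t VALUES (1, 10);        INSERT INTO t VALUES (2, 10);
      COMMIT;                              COMMIT;
      -- Under SSI, one of the two transactions is aborted with a
      -- serialization failure; under Snapshot Isolation both commit,
      -- even though no serial ordering could produce that result.
      ```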
  4. 07 Feb 2011 (4 commits)
  5. 06 Feb 2011 (3 commits)
  6. 05 Feb 2011 (5 commits)
  7. 04 Feb 2011 (4 commits)