  1. 24 Feb 2012, 5 commits
    • Stamp 8.3.18. · 82345d87
      Committed by Tom Lane
    • Last-minute release note updates. · ecabae5a
      Committed by Tom Lane
      Security: CVE-2012-0866, CVE-2012-0867, CVE-2012-0868
    • Convert newlines to spaces in names written in pg_dump comments. · a7f6cb85
      Committed by Tom Lane
      pg_dump was incautious about sanitizing object names that are emitted
      within SQL comments in its output script.  A name containing a newline
      would at least render the script syntactically incorrect.  Maliciously
      crafted object names could present a SQL injection risk when the script
      is reloaded.  (A minimal sanitization sketch appears after this list.)
      
      Reported by Heikki Linnakangas, patch by Robert Haas
      
      Security: CVE-2012-0868
    • Require execute permission on the trigger function for CREATE TRIGGER. · d1b8b8fb
      Committed by Tom Lane
      This check was overlooked when we added function execute permissions to the
      system years ago.  For an ordinary trigger function it's not a big deal,
      since trigger functions execute with the permissions of the table owner,
      so they couldn't do anything the user issuing the CREATE TRIGGER couldn't
      have done anyway.  However, if a trigger function is SECURITY DEFINER,
      that is not the case.  The lack of checking would allow another user to
      install it on his own table and then invoke it with, essentially, forged
      input data; which the trigger function is unlikely to realize, so it might
      do something undesirable, for instance insert false entries in an audit log
      table.
      
      Reported by Dinesh Kumar, patch by Robert Haas
      
      Security: CVE-2012-0866
    • Translation updates · a930226c
      Committed by Peter Eisentraut
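      Regarding the pg_dump comment fix above (a7f6cb85): the danger is that a newline
      inside an object name lets the rest of the name escape the "--" comment and run
      as SQL when the script is replayed.  A minimal sketch of the kind of conversion
      the commit title describes, using a hypothetical helper name rather than
      pg_dump's actual code:

          static void
          sanitize_comment_name(char *name)
          {
              char *p;

              /* replace newline/carriage-return so the name stays on one comment line */
              for (p = name; *p; p++)
              {
                  if (*p == '\n' || *p == '\r')
                      *p = ' ';
              }
          }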
  2. 23 Feb 2012, 1 commit
  3. 22 Feb 2012, 2 commits
    • Don't clear btpo_cycleid during _bt_vacuum_one_page. · 2c293f25
      Committed by Tom Lane
      When "vacuuming" a single btree page by removing LP_DEAD tuples, we are not
      actually within a vacuum operation, but rather in an ordinary insertion
      process that could well be running concurrently with a vacuum.  So clearing
      the cycleid is incorrect, and could cause the concurrent vacuum to miss
      removing tuples that it needs to remove.  This is a longstanding bug
      introduced by commit e6284649 of
      2006-07-25.  I believe it explains Maxim Boguk's recent report of index
      corruption, and probably some other previously unexplained reports.
      
      In 9.0 and up this is a one-line fix; before that we need to introduce a
      flag to tell _bt_delitems what to do.
    • Avoid double close of file handle in syslogger on win32 · f3ad4ca0
      Committed by Magnus Hagander
      This causes an exception when running under a debugger or in particular
      when running on a debug version of Windows.
      
      Patch from MauMau
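      A sketch of the defensive pattern involved, illustrative only and not the actual
      syslogger code: close the handle exactly once and mark it closed, so a later
      cleanup path cannot close it a second time.

          #include <stdio.h>

          static FILE *logfile = NULL;

          static void
          close_logfile(void)
          {
              if (logfile != NULL)
              {
                  fclose(logfile);
                  logfile = NULL;   /* a second call becomes a harmless no-op */
              }
          }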
  4. 21 Feb 2012, 1 commit
  5. 20 Feb 2012, 1 commit
    • Fix regex back-references that are directly quantified with *. · 35ab4284
      Committed by Tom Lane
      The syntax "\n*", that is a backref with a * quantifier directly applied
      to it, has never worked correctly in Spencer's library.  This has been an
      open bug in the Tcl bug tracker since 2005:
      https://sourceforge.net/tracker/index.php?func=detail&aid=1115587&group_id=10894&atid=110894
      
      The core of the problem is in parseqatom(), which first changes "\n*" to
      "\n+|" and then applies repeat() to the NFA representing the backref atom.
      repeat() thinks that any arc leading into its "rp" argument is part of the
      sub-NFA to be repeated.  Unfortunately, since parseqatom() already created
      the arc that was intended to represent the empty bypass around "\n+", this
      arc gets moved too, so that it now leads into the state loop created by
      repeat().  Thus, what was supposed to be an "empty" bypass gets turned into
      something that represents zero or more repetitions of the NFA representing
      the backref atom.  In the original example, in place of
      	^([bc])\1*$
      we now have something that acts like
      	^([bc])(\1+|[bc]*)$
      At runtime, the branch involving the actual backref fails, as it's supposed
      to, but then the other branch succeeds anyway.
      
      We could no doubt fix this by some rearrangement of the operations in
      parseqatom(), but that code is plenty ugly already, and what's more the
      whole business of converting "x*" to "x+|" probably needs to go away to fix
      another problem I'll mention in a moment.  Instead, this patch suppresses
      the *-conversion when the target is a simple backref atom, leaving the case
      of m == 0 to be handled at runtime.  This makes the patch in regcomp.c a
      one-liner, at the cost of having to tweak cbrdissect() a little.  In the
      event I went a bit further than that and rewrote cbrdissect() to check all
      the string-length-related conditions before it starts comparing characters.
      It seems a bit stupid to possibly iterate through many copies of an
      n-character backreference, only to fail at the end because the target
      string's length isn't a multiple of n --- we could have found that out
      before starting.  The existing coding could only be a win if integer
      division is hugely expensive compared to character comparison, but I don't
      know of any modern machine where that might be true.
      
      This does not fix all the problems with quantified back-references.  In
      particular, the code is still broken for back-references that appear within
      a larger expression that is quantified (so that direct insertion of the
      quantification limits into the BACKREF node doesn't apply).  I think fixing
      that will take some major surgery on the NFA code, specifically introducing
      an explicit iteration node type instead of trying to transform iteration
      into concatenation of modified regexps.
      
      Back-patch to all supported branches.  In HEAD, also add a regression test
      case for this.  (It may seem a bit silly to create a regression test file
      for just one test case; but I'm expecting that we will soon import a whole
      bunch of regex regression tests from Tcl, so might as well create the
      infrastructure now.)
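      For reference, the intended semantics of a directly quantified back-reference can
      be checked with the C library's POSIX regex API; this is only a behavioral
      illustration (PostgreSQL uses the Spencer engine being patched here, not the
      system regex).  With the pre-fix behavior described above, "bc" would wrongly
      match.

          #include <regex.h>
          #include <stdio.h>

          int
          main(void)
          {
              regex_t     re;
              const char *tests[] = {"b", "bb", "bbb", "bc", "cb"};
              int         i;

              /* BRE spelling of ^([bc])\1*$ */
              if (regcomp(&re, "^\\([bc]\\)\\1*$", 0) != 0)
                  return 1;

              for (i = 0; i < 5; i++)
                  printf("%-3s -> %s\n", tests[i],
                         regexec(&re, tests[i], 0, NULL, 0) == 0 ? "match" : "no match");

              regfree(&re);
              return 0;
          }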
  6. 17 Feb 2012, 1 commit
    • Fix longstanding error in contrib/intarray's int[] & int[] operator. · b0e1a4bd
      Committed by Tom Lane
      The array intersection code would give wrong results if the first entry
      of the correct output array was "1".  (I think only this value could be
      at risk, since the previous word would always be a lower-bound entry with
      that fixed value.)
      
      Problem spotted by Julien Rouhaud, initial patch by Guillaume Lelarge,
      cosmetic improvements by me.
  7. 12 Feb 2012, 1 commit
    • Fix I/O-conversion-related memory leaks in plpgsql. · 3eb2ff16
      Committed by Tom Lane
      Datatype I/O functions are allowed to leak memory in CurrentMemoryContext,
      since they are generally called in short-lived contexts.  However, plpgsql
      calls such functions for purposes of type conversion, and was calling them
      in its procedure context.  Therefore, any leaked memory would not be
      recovered until the end of the plpgsql function.  If such a conversion
      was done within a loop, quite a bit of memory could get consumed.  Fix by
      calling such functions in the transient "eval_econtext", and adjust other
      logic to match.  Back-patch to all supported versions.
      
      Andres Freund, Jan Urbański, Tom Lane
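      A minimal sketch of the pattern the fix relies on, written as hypothetical
      backend-style code rather than the actual plpgsql change: evaluate the input
      function inside a short-lived context so whatever it leaks is reclaimed at the
      next reset of that context.  InputFunctionCall() is the standard fmgr entry
      point for calling a datatype input function.

          #include "postgres.h"
          #include "fmgr.h"

          static Datum
          input_in_short_lived_context(MemoryContext shortlived,
                                       FmgrInfo *flinfo, char *str,
                                       Oid typioparam, int32 typmod)
          {
              MemoryContext oldcxt = MemoryContextSwitchTo(shortlived);
              Datum         result = InputFunctionCall(flinfo, str, typioparam, typmod);

              MemoryContextSwitchTo(oldcxt);
              /* caller copies the result out before resetting "shortlived" */
              return result;
          }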
  8. 11 Feb 2012, 2 commits
    • Fix brain fade in previous pg_dump patch. · 01f99e2d
      Committed by Tom Lane
      In pre-7.3 databases, pg_attribute.attislocal doesn't exist.  The easiest
      way to make sure the new inheritance logic behaves sanely is to assume it's
      TRUE, not FALSE.  This will result in printing child columns even when
      they're not really needed.  We could work harder at trying to reconstruct a
      value for attislocal, but there is little evidence that anyone still cares
      about dumping from such old versions, so just do the minimum necessary to
      have a valid dump.
      
      I had this correct in the original draft of the patch, but for some
      unaccountable reason decided it wasn't necessary to change the value.
      Testing against an old server shows otherwise...
    • Fix pg_dump for better handling of inherited columns. · 02e64181
      Committed by Tom Lane
      Revise pg_dump's handling of inherited columns, which was last looked at
      seriously in 2001, to eliminate several misbehaviors associated with
      inherited default expressions and NOT NULL flags.  In particular make sure
      that a column is printed in a child table's CREATE TABLE command if and
      only if it has attislocal = true; the former behavior would sometimes cause
      a column to become marked attislocal when it was not so marked in the
      source database.  Also, stop relying on textual comparison of default
      expressions to decide if they're inherited; instead, don't use
      default-expression inheritance at all, but just install the default
      explicitly at each level of the hierarchy.  This fixes the
      search-path-related misbehavior recently exhibited by Chester Young, and
      also removes some dubious assumptions about the order in which ALTER TABLE
      SET DEFAULT commands would be executed.
      
      Back-patch to all supported branches.
  9. 07 Feb 2012, 1 commit
    • Avoid problems with OID wraparound during WAL replay. · 0f5d2082
      Committed by Tom Lane
      Fix a longstanding thinko in replay of NEXTOID and checkpoint records: we
      tried to advance nextOid only if it was behind the value in the WAL record,
      but the comparison would draw the wrong conclusion if OID wraparound had
      occurred since the previous value.  Better to just unconditionally assign
      the new value, since OID assignment shouldn't be happening during replay
      anyway.
      
      The consequences of a failure to update nextOid would be pretty minimal,
      since we have long had the code set up to obtain another OID and try again
      if the generated value is already in use.  But in the worst case there
      could be significant performance glitches while such loops iterate through
      many already-used OIDs before finding a free one.
      
      The odds of a wraparound happening during WAL replay would be small in a
      crash-recovery scenario, and the length of any ensuing OID-assignment stall
      quite limited anyway.  But neither of these statements hold true for a
      replication slave that follows a WAL stream for a long period; its behavior
      upon going live could be almost unboundedly bad.  Hence it seems worth
      back-patching this fix into all supported branches.
      
      Already fixed in HEAD in commit c6d76d7c.
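      The failure mode is easiest to see with plain unsigned arithmetic.  A small,
      self-contained illustration (the numbers are invented; the real code updates the
      shared OID counter during replay):

          #include <stdint.h>
          #include <stdio.h>

          int
          main(void)
          {
              uint32_t next_oid = 4294966000u;  /* replay's counter, close to wrapping */
              uint32_t wal_oid  = 20000u;       /* value from a NEXTOID/checkpoint record,
                                                   generated after the counter wrapped */

              /* Buggy pattern: only advance when the WAL value looks "ahead".
                 After wraparound it compares as smaller, so the update is skipped. */
              if (wal_oid > next_oid)
                  next_oid = wal_oid;
              printf("conditional update:   %u (stale)\n", next_oid);

              /* Fixed pattern: adopt the WAL value unconditionally; no OIDs are
                 assigned during replay, so overwriting cannot lose anything. */
              next_oid = wal_oid;
              printf("unconditional update: %u\n", next_oid);
              return 0;
          }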
  10. 30 Jan 2012, 1 commit
    • Accept a non-existent value in "ALTER USER/DATABASE SET ..." command. · 483c8ce2
      Committed by Heikki Linnakangas
      When default_text_search_config, default_tablespace, or temp_tablespaces
      setting is set per-user or per-database, with an "ALTER USER/DATABASE SET
      ..." statement, don't throw an error if the text search configuration or
      tablespace does not exist. In case of text search configuration, even if
      it doesn't exist in the current database, it might exist in another
      database, where the setting is intended to have its effect. This behavior
      is now the same as search_path's.
      
      Tablespaces are cluster-wide, so the same argument doesn't hold for
      tablespaces, but there's a problem with pg_dumpall: it dumps "ALTER USER
      SET ..." statements before the "CREATE TABLESPACE" statements. Arguably
      that's pg_dumpall's fault - it should dump the statements in such an order
      that the tablespace is created first and then the "ALTER USER SET
      default_tablespace ..." statements after that - but it seems better to be
      consistent with search_path and default_text_search_config anyway. Besides,
      you could still create a dump that throws an error, by creating the
      tablespace, running "ALTER USER SET default_tablespace", then dropping the
      tablespace and running pg_dumpall on that.
      
      Backpatch to all supported versions.
  11. 28 Jan 2012, 1 commit
  12. 10 Jan 2012, 1 commit
    • Fix one-byte buffer overrun in contrib/test_parser. · 3852cfaf
      Committed by Tom Lane
      The original coding examined the next character before verifying that
      there *is* a next character.  In the worst case with the input buffer
      right up against the end of memory, this would result in a segfault.
      
      Problem spotted by Paul Guyot; this commit extends his patch to fix an
      additional case.  In addition, make the code a tad more readable by not
      overloading the usage of *tlen.
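      The rule the fix enforces is simply to check how much input remains before
      examining the next byte.  A self-contained sketch of that pattern (not the
      test_parser code itself):

          #include <ctype.h>
          #include <stddef.h>

          /* Length of the token starting at buf[pos], never reading past buf[len-1].
             The unsafe variant inspects buf[pos] first and checks pos < len afterwards,
             which reads one byte beyond the buffer when the token ends exactly at the
             end of the input. */
          static size_t
          token_length(const char *buf, size_t len, size_t pos)
          {
              size_t start = pos;

              while (pos < len && !isspace((unsigned char) buf[pos]))
                  pos++;
              return pos - start;
          }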
  13. 08 Jan 2012, 1 commit
    • Use __sync_lock_test_and_set() for spinlocks on ARM, if available. · aa31c350
      Committed by Tom Lane
      Historically we've used the SWPB instruction for TAS() on ARM, but this
      is deprecated and not available on ARMv6 and later.  Instead, make use
      of a GCC builtin if available.  We'll still fall back to SWPB if not,
      so as not to break existing ports using older GCC versions.
      
      Eventually we might want to try using __sync_lock_test_and_set() on some
      other architectures too, but for now that seems to present only risk and
      not reward.
      
      Back-patch to all supported versions, since people might want to use any
      of them on more recent ARM chips.
      
      Martin Pitt
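      For illustration, the builtin can back a bare-bones test-and-set lock like the
      sketch below; this shows the primitive only and is not PostgreSQL's actual
      s_lock.h definitions for ARM.

          typedef volatile int slock_t;

          static inline int
          tas(slock_t *lock)
          {
              /* atomically store 1 and return the old value (acquire barrier);
                 a nonzero result means the lock was already held */
              return __sync_lock_test_and_set(lock, 1);
          }

          static inline void
          s_unlock(slock_t *lock)
          {
              __sync_lock_release(lock);   /* release barrier, stores 0 */
          }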
  14. 07 Jan 2012, 1 commit
    • Fix pg_restore's direct-to-database mode for INSERT-style table data. · a86448e8
      Committed by Tom Lane
      In commit 6545a901, I removed the mini SQL
      lexer that was in pg_backup_db.c, thinking that it had no real purpose
      beyond separating COPY data from SQL commands, which purpose had been
      obsoleted by long-ago fixes in pg_dump's archive file format.
      Unfortunately this was in error: that code was also used to identify
      command boundaries in INSERT-style table data, which is run together as a
      single string in the archive file for better compressibility.  As a result,
      direct-to-database restores from archive files made with --inserts or
      --column-inserts fail in our latest releases, as reported by Dick Visser.
      
      To fix, restore the mini SQL lexer, but simplify it by adjusting the
      calling logic so that it's only required to cope with INSERT-style table
      data, not arbitrary SQL commands.  This allows us to not have to deal with
      SQL comments, E'' strings, or dollar-quoted strings, none of which have
      ever been emitted by dumpTableData_insert.
      
      Also, fix the lexer to cope with standard-conforming strings, which was the
      actual bug that the previous patch was meant to solve.
      
      Back-patch to all supported branches.  The previous patch went back to 8.2,
      which unfortunately means that the EOL release of 8.2 contains this bug,
      but I don't think we're doing another 8.2 release just because of that.
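      The essential job of that mini lexer is just to find statement boundaries in the
      run-together INSERT data while treating string literals as opaque.  A simplified,
      self-contained sketch of the idea, assuming standard-conforming strings where ''
      is the only escape (the real code in pg_backup_db.c tracks more state):

          #include <stddef.h>

          static size_t
          first_statement_end(const char *buf, size_t len)
          {
              int    in_literal = 0;
              size_t i;

              for (i = 0; i < len; i++)
              {
                  if (buf[i] == '\'')
                      in_literal = !in_literal;   /* a doubled '' just toggles twice */
                  else if (buf[i] == ';' && !in_literal)
                      return i + 1;               /* statement ends after this ';' */
              }
              return len;   /* no terminator yet; wait for more data */
          }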
  15. 15 Dec 2011, 1 commit
  16. 12 Dec 2011, 1 commit
    • Revert the behavior of inet/cidr functions to not unpack the arguments. · c1a03230
      Committed by Heikki Linnakangas
      I forgot to change the functions to use the PG_GETARG_INET_PP() macro,
      when I changed DatumGetInetP() to unpack the datum, like Datum*P macros
      usually do. Also, I screwed up the definition of the PG_GETARG_INET_PP()
      macro, and didn't notice because it wasn't used.
      
      This fixes the memory leak when sorting inet values, as reported
      by Jochen Erwied and debugged by Andres Freund. Backpatch to 8.3, like
      the previous patch that broke it.
  17. 02 Dec 2011, 1 commit
  18. 01 Dec 2011, 3 commits
  19. 30 Nov 2011, 2 commits
  20. 28 Nov 2011, 1 commit
  21. 19 Nov 2011, 1 commit
    • Avoid floating-point underflow while tracking buffer allocation rate. · fdaff0ba
      Committed by Tom Lane
      When the system is idle for awhile after activity, the "smoothed_alloc"
      state variable in BgBufferSync converges slowly to zero.  With standard
      IEEE float arithmetic this results in several iterations with denormalized
      values, which causes kernel traps and annoying log messages on some
      poorly-designed platforms.  There's no real need to track such small values
      of smoothed_alloc, so we can prevent the kernel traps by forcing it to zero
      as soon as it's too small to be interesting for our purposes.  This issue
      is purely cosmetic, since the iterations don't happen fast enough for the
      kernel traps to pose any meaningful performance problem, but still it seems
      worth shutting up the log messages.
      
      The kernel log messages were previously reported by a number of people,
      but kudos to Greg Matthews for tracking down exactly where they were coming
      from.
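      The shape of the fix is a clamp on an exponentially decaying average: once the
      value is too small to matter, snap it to zero instead of letting it drift into
      the denormalized range.  A self-contained illustration (the constants here are
      arbitrary, not BgBufferSync's):

          #include <float.h>
          #include <stdio.h>

          int
          main(void)
          {
              float smoothed_alloc = 1000.0f;
              int   i;

              for (i = 0; i < 10000; i++)
              {
                  /* decay toward zero, as if no buffers were being allocated */
                  smoothed_alloc -= smoothed_alloc / 16.0f;

                  /* without this clamp the value eventually becomes denormalized,
                     which some platforms handle with a slow, noisy kernel trap */
                  if (smoothed_alloc < FLT_MIN)
                      smoothed_alloc = 0.0f;
              }
              printf("final smoothed_alloc = %g\n", smoothed_alloc);
              return 0;
          }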
  22. 16 Nov 2011, 1 commit
  23. 11 Nov 2011, 1 commit
  24. 09 Nov 2011, 1 commit
  25. 05 Nov 2011, 2 commits
    • Don't assume that a tuple's header size is unchanged during toasting. · c34088fd
      Committed by Tom Lane
      This assumption can be wrong when the toaster is passed a raw on-disk
      tuple, because the tuple might pre-date an ALTER TABLE ADD COLUMN operation
      that added columns without rewriting the table.  In such a case the tuple's
      natts value is smaller than what we expect from the tuple descriptor, and
      so its t_hoff value could be smaller too.  In fact, the tuple might not
      have a null bitmap at all, and yet our current opinion of it is that it
      contains some trailing nulls.
      
      In such a situation, toast_insert_or_update did the wrong thing, because
      to save a few lines of code it would use the old t_hoff value as the offset
      where heap_fill_tuple should start filling data.  This did not leave enough
      room for the new nulls bitmap, with the result that the first few bytes of
      data could be overwritten with null flag bits, as in a recent report from
      Hubert Depesz Lubaczewski.
      
      The particular case reported requires ALTER TABLE ADD COLUMN followed by
      CREATE TABLE AS SELECT * FROM ... or INSERT ... SELECT * FROM ..., and
      further requires that there be some out-of-line toasted fields in one of
      the tuples to be copied; else we'll not reach the troublesome code.
      The problem can only manifest in this form in 8.4 and later, because
      before commit a77eaa6a, CREATE TABLE AS or
      INSERT/SELECT wouldn't result in raw disk tuples getting passed directly
      to heap_insert --- there would always have been at least a junkfilter in
      between, and that would reconstitute the tuple header with an up-to-date
      t_natts and hence t_hoff.  But I'm backpatching the tuptoaster change all
      the way anyway, because I'm not convinced there are no older code paths
      that present a similar risk.
    • Fix archive_command example · 60817575
      Committed by Peter Eisentraut
      The given archive_command example didn't use %p or %f, which wouldn't
      really work in practice.
  26. 04 Nov 2011, 1 commit
    • Fix bogus code in contrib/ tsearch dictionary examples. · 0dddbbcd
      Committed by Tom Lane
      Both dict_int and dict_xsyn were blithely assuming that whatever memory
      palloc gives back will be pre-zeroed.  This would typically work for
      just about long enough to run their regression tests, and no longer :-(.
      
      The pre-9.0 code in dict_xsyn was even lamer than that, as it would
      happily give back a pointer to the result of palloc(0), encouraging
      its caller to access off the end of memory.  Again, this would just
      barely fail to fail as long as memory contained nothing but zeroes.
      
      Per a report from Rodrigo Hjort that code based on these examples
      didn't work reliably.
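      The underlying mistake is not specific to palloc: freshly allocated memory is not
      guaranteed to be zeroed, so zero it explicitly (palloc0 in the backend, calloc or
      memset in plain C).  A small stand-alone illustration with an invented struct:

          #include <stdlib.h>
          #include <string.h>

          typedef struct DictOptions
          {
              int maxlen;
              int rejectlong;
          } DictOptions;

          static DictOptions *
          make_options(void)
          {
              DictOptions *opts = malloc(sizeof(DictOptions));

              if (opts == NULL)
                  return NULL;
              /* malloc, like palloc, returns uninitialized bytes; relying on them
                 being zero may work for a while and then fail unpredictably */
              memset(opts, 0, sizeof(DictOptions));
              return opts;
          }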
  27. 03 Nov 2011, 1 commit
  28. 02 Nov 2011, 1 commit
    • Fix race condition with toast table access from a stale syscache entry. · 7e03d284
      Committed by Tom Lane
      If a tuple in a syscache contains an out-of-line toasted field, and we
      try to fetch that field shortly after some other transaction has committed
      an update or deletion of the tuple, there is a race condition: vacuum
      could come along and remove the toast tuples before we can fetch them.
      This leads to transient failures like "missing chunk number 0 for toast
      value NNNNN in pg_toast_2619", as seen in recent reports from Andrew
      Hammond and Tim Uckun.
      
      The design idea of syscache is that access to stale syscache entries
      should be prevented by relation-level locks, but that fails for at least
      two cases where toasted fields are possible: ANALYZE updates pg_statistic
      rows without locking out sessions that might want to plan queries on the
      same table, and CREATE OR REPLACE FUNCTION updates pg_proc rows without
      any meaningful lock at all.
      
      The least risky fix seems to be an idea that Heikki suggested when we
      were dealing with a related problem back in August: forcibly detoast any
      out-of-line fields before putting a tuple into syscache in the first place.
      This avoids the problem because at the time we fetch the parent tuple from
      the catalog, we should be holding an MVCC snapshot that will prevent
      removal of the toast tuples, even if the parent tuple is outdated
      immediately after we fetch it.  (Note: I'm not convinced that this
      statement holds true at every instant where we could be fetching a syscache
      entry at all, but it does appear to hold true at the times where we could
      fetch an entry that could have a toasted field.  We will need to be a bit
      wary of adding toast tables to low-level catalogs that don't have them
      already.)  An additional benefit is that subsequent uses of the syscache
      entry should be faster, since they won't have to detoast the field.
      
      Back-patch to all supported versions.  The problem is significantly harder
      to reproduce in pre-9.0 releases, because of their willingness to flush
      every entry in a syscache whenever the underlying catalog is vacuumed
      (cf CatalogCacheFlushRelation); but there is still a window for trouble.
  29. 01 Nov 2011, 1 commit
    • Stop btree indexscans upon reaching nulls in either direction. · ff41611d
      Committed by Tom Lane
      The existing scan-direction-sensitive tests were overly complex, and
      failed to stop the scan in cases where it's perfectly legitimate to do so.
      Per bug #6278 from Maksym Boguk.
      
      Back-patch to 8.3, which is as far back as the patch applies easily.
      Doesn't seem worth sweating over a relatively minor performance issue in
      8.2 at this late date.  (But note that this was a performance regression
      from 8.1 and before, so 8.2 is being left as an outlier.)
  30. 30 Oct 2011, 1 commit
    • Fix assorted bogosities in cash_in() and cash_out(). · b52ca458
      Committed by Tom Lane
      cash_out failed to handle multiple-byte thousands separators, as per bug
      #6277 from Alexander Law.  In addition, cash_in didn't handle that either,
      nor could it handle multiple-byte positive_sign.  Both routines failed to
      support multiple-byte mon_decimal_point, which I did not think was worth
      changing, but at least now they check for the possibility and fall back to
      using '.' rather than emitting invalid output.  Also, make cash_in handle
      trailing negative signs, which formerly it would reject.  Since cash_out
      generates trailing negative signs whenever the locale tells it to, this
      last omission represents a fail-to-reload-dumped-data bug.  IMO that
      justifies patching this all the way back.
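      On the parsing side, the key change amounts to comparing against the locale's
      separator with its full length instead of assuming a single byte.  A simplified
      sketch with a hypothetical helper (not the cash_in code):

          #include <string.h>

          /* If *sp starts with the locale's thousands separator, which may be more
             than one byte (e.g. a UTF-8 non-breaking space), skip it and report
             success; comparing only the first byte would mis-parse such separators. */
          static int
          skip_thousands_sep(const char **sp, const char *thousands_sep)
          {
              size_t seplen = strlen(thousands_sep);

              if (seplen > 0 && strncmp(*sp, thousands_sep, seplen) == 0)
              {
                  *sp += seplen;
                  return 1;
              }
              return 0;
          }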