1. 01 Apr 2012 (2 commits)
    • Fix O(N^2) behavior in pg_dump when many objects are in dependency loops. · b1be1294
      Committed by Tom Lane
      Combining the loop workspace with the record of already-processed objects
      might have been a cute trick, but it behaves horridly if there are many
      dependency loops to repair: the time spent in the first step of findLoop()
      grows as O(N^2).  Instead use a separate flag array indexed by dump ID,
      which we can check in constant time.  The length of the workspace array
      is now never more than the actual length of a dependency chain, which
      should be reasonably short in all cases of practical interest.  The code
      is noticeably easier to understand this way, too.
      
      Per gripe from Mike Roest.  Since this is a longstanding performance bug,
      backpatch to all supported versions.
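
      A minimal standalone C sketch of the technique the commit describes: a boolean
      flag array indexed by dump ID makes the "have we reached this object already?"
      test O(1), while the workspace array only ever holds the current dependency
      chain.  The names here (visited, findLoopFrom, MAX_DUMP_ID) are illustrative,
      not the actual pg_dump identifiers.

          #include <stdbool.h>

          #define MAX_DUMP_ID 100000                      /* upper bound on dump IDs, for the sketch */

          typedef struct DumpableObject
          {
              int     dumpId;                             /* 1-based dump ID */
              int     nDeps;
              struct DumpableObject **deps;
          } DumpableObject;

          static bool visited[MAX_DUMP_ID + 1];           /* flag array indexed by dump ID */

          /*
           * Depth-first search for a dependency path that leads back to 'start'.
           * The constant-time visited[] test replaces scanning the workspace array
           * to see whether an object has been reached already, which is what made
           * the old coding O(N^2) when many loops had to be repaired.
           */
          static bool
          findLoopFrom(DumpableObject *obj, DumpableObject *start,
                       DumpableObject **workspace, int depth, int *looplen)
          {
              int     i;

              if (visited[obj->dumpId])
                  return false;                           /* O(1) membership check */
              visited[obj->dumpId] = true;
              workspace[depth] = obj;                     /* workspace holds only the current chain */

              for (i = 0; i < obj->nDeps; i++)
              {
                  if (obj->deps[i] == start)
                  {
                      *looplen = depth + 1;               /* found a cycle: start .. obj */
                      return true;
                  }
                  if (findLoopFrom(obj->deps[i], start, workspace, depth + 1, looplen))
                      return true;
              }
              return false;
          }
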
    • Fix O(N^2) behavior in pg_dump for large numbers of owned sequences. · 55eb2567
      Committed by Tom Lane
      The loop that matched owned sequences to their owning tables required time
      proportional to the number of owned sequences times the number of tables.
      This work was only expended in selective-dump situations, which is probably
      why the issue wasn't recognized long since.  Refactor slightly so that we
      can perform this work after the index array for findTableByOid has been
      set up, reducing the time to O(M log N).
      
      Per gripe from Mike Roest.  Since this is a longstanding performance bug,
      backpatch to all supported versions.
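
      A standalone C sketch of the refactoring's core idea: once an array of tables
      sorted by OID exists, each owned sequence can be matched to its owning table
      with a binary search rather than a linear scan, turning the M-by-N nested loop
      into O(M log N).  The types and names are illustrative, not pg_dump's.

          #include <stddef.h>

          typedef unsigned int Oid;

          typedef struct TableInfo
          {
              Oid     oid;                        /* table OID; the array is sorted on this */
              /* ... other per-table fields ... */
          } TableInfo;

          /*
           * Binary search over an array of TableInfo pointers sorted by OID, in the
           * style of pg_dump's findTableByOid index.  Each lookup costs O(log N).
           */
          static TableInfo *
          findTableByOid(TableInfo **tbls, int numTables, Oid oid)
          {
              int     low = 0;
              int     high = numTables - 1;

              while (low <= high)
              {
                  int     mid = low + (high - low) / 2;

                  if (tbls[mid]->oid == oid)
                      return tbls[mid];
                  if (tbls[mid]->oid < oid)
                      low = mid + 1;
                  else
                      high = mid - 1;
              }
              return NULL;                        /* no table with that OID */
          }
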
  2. 23 Mar 2012 (1 commit)
    • Fix GET DIAGNOSTICS for case of assignment to function's first variable. · 38350a49
      Committed by Tom Lane
      An incorrect and entirely unnecessary "safety check" in exec_stmt_getdiag()
      caused the code to treat an assignment to a variable with dno zero as a
      no-op.  Unfortunately, that's a perfectly valid dno.  This has been broken
      since GET DIAGNOSTICS was invented.  It's not terribly surprising that the
      bug went unnoticed for so long, since in most cases you probably wouldn't
      use the function's first-created variable (normally its first parameter)
      as a GET DIAGNOSTICS target.  Nonetheless, it's broken.  Per bug #6551
      from Adam Buraczewski.
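
      A small C sketch of the class of bug, not the plpgsql code itself: variable
      number zero is a legitimate datum number, so "is there a target?" must be
      expressed with a distinct sentinel (here -1) rather than by testing the dno
      for truth.  assign_to_datum() is a hypothetical helper.

          static void assign_to_datum(int dno, long value);   /* hypothetical helper */

          typedef struct DiagItem
          {
              int     target_dno;         /* datum number of the target variable, or -1 for none */
              int     kind;               /* which diagnostic to fetch */
          } DiagItem;

          static void
          exec_assign_diag(DiagItem *item, long value)
          {
              /*
               * Wrong: "if (item->target_dno)" silently skips dno zero, which is a
               * perfectly valid datum number (typically the function's first
               * parameter).  Right: reserve -1 for "no target" and accept every
               * non-negative dno.
               */
              if (item->target_dno >= 0)
                  assign_to_datum(item->target_dno, value);
          }
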
  3. 22 Mar 2012 (1 commit)
    • Don't allow CREATE TABLE AS to put relations in pg_global. · 596e1632
      Committed by Robert Haas
      This was never intended to be allowed, and is blocked for an ordinary
      CREATE TABLE, but CREATE TABLE AS slipped through the cracks.  This
      commit won't do anything to fix existing cases where this loophole has
      been exploited, but it still seems prudent to lock it down going
      forward.
      
      Back-branch commit only, as this problem has been refactored away
      on the master branch.
      
      Andres Freund
  4. 21 Mar 2012 (1 commit)
  5. 18 Mar 2012 (1 commit)
    • Honor inputdir and outputdir when converting regression files. · 206f5b08
      Committed by Andrew Dunstan
      When converting source files, pg_regress' inputdir and outputdir options were
      ignored when computing the locations of the destination files. In consequence,
      these options were effectively unusable when the regression inputs needed to
      be adjusted by pg_regress. This patch makes pg_regress put the converted files
      in the same place where these options say non-converted input and results
      files are to be found. Backpatched to all live branches.
  6. 11 Mar 2012 (1 commit)
  7. 06 Mar 2012 (1 commit)
    • Improve documentation around logging_collector and use of stderr. · 3713ca86
      Committed by Tom Lane
      In backup.sgml, point out that you need to be using the logging collector
      if you want to log messages from a failing archive_command script.  (This
      is an oversimplification, in that it will work without the collector as
      long as you're not sending postmaster stderr to /dev/null; but it seems
      like a good idea to encourage use of the collector to avoid problems
      with multiple processes concurrently scribbling on one file.)
      
      In config.sgml, do some wordsmithing of logging_collector discussion.
      
      Per bug #6518 from Janning Vygen.
  8. 24 Feb 2012 (5 commits)
    • Stamp 8.3.18. · 82345d87
      Committed by Tom Lane
    • Last-minute release note updates. · ecabae5a
      Committed by Tom Lane
      Security: CVE-2012-0866, CVE-2012-0867, CVE-2012-0868
    • Convert newlines to spaces in names written in pg_dump comments. · a7f6cb85
      Committed by Tom Lane
      pg_dump was incautious about sanitizing object names that are emitted
      within SQL comments in its output script.  A name containing a newline
      would at least render the script syntactically incorrect.  Maliciously
      crafted object names could present a SQL injection risk when the script
      is reloaded.
      
      Reported by Heikki Linnakangas, patch by Robert Haas
      
      Security: CVE-2012-0868
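
      A standalone C sketch of the sanitization step: before a name is embedded in a
      one-line "-- Name: ..." comment, copy it and replace newline and carriage-return
      characters with spaces.  The function name is illustrative, not pg_dump's.

          #include <stdlib.h>
          #include <string.h>

          /*
           * Return a malloc'd copy of "name" with newlines and carriage returns
           * replaced by spaces, so the result cannot break out of a single-line
           * SQL comment in the dump script.
           */
          static char *
          sanitize_for_comment(const char *name)
          {
              char   *copy = strdup(name);
              char   *p;

              if (copy == NULL)
                  return NULL;
              for (p = copy; *p; p++)
              {
                  if (*p == '\n' || *p == '\r')
                      *p = ' ';
              }
              return copy;
          }
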
    • Require execute permission on the trigger function for CREATE TRIGGER. · d1b8b8fb
      Committed by Tom Lane
      This check was overlooked when we added function execute permissions to the
      system years ago.  For an ordinary trigger function it's not a big deal,
      since trigger functions execute with the permissions of the table owner,
      so they couldn't do anything the user issuing the CREATE TRIGGER couldn't
      have done anyway.  However, if a trigger function is SECURITY DEFINER,
      that is not the case.  The lack of checking would allow another user to
      install it on his own table and then invoke it with, essentially, forged
      input data, which the trigger function is unlikely to realize, so it might
      do something undesirable, for instance insert false entries in an audit log
      table.
      
      Reported by Dinesh Kumar, patch by Robert Haas
      
      Security: CVE-2012-0866
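
      A hedged sketch of the kind of check being added, written against the aclchk
      helpers of that era (pg_proc_aclcheck, aclcheck_error, get_func_name); the exact
      call site and wording in the committed patch may differ, so treat this as an
      assumption-laden illustration rather than the commit's code.

          /*
           * Before creating the trigger, require EXECUTE permission on the trigger
           * function for the current user (sketch; assumes PostgreSQL's acl headers).
           */
          static void
          require_execute_on_trigger_function(Oid funcoid)
          {
              AclResult   aclresult;

              aclresult = pg_proc_aclcheck(funcoid, GetUserId(), ACL_EXECUTE);
              if (aclresult != ACLCHECK_OK)
                  aclcheck_error(aclresult, ACL_KIND_PROC, get_func_name(funcoid));
          }
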
    • Translation updates · a930226c
      Committed by Peter Eisentraut
  9. 23 Feb 2012 (1 commit)
  10. 22 Feb 2012 (2 commits)
    • Don't clear btpo_cycleid during _bt_vacuum_one_page. · 2c293f25
      Committed by Tom Lane
      When "vacuuming" a single btree page by removing LP_DEAD tuples, we are not
      actually within a vacuum operation, but rather in an ordinary insertion
      process that could well be running concurrently with a vacuum.  So clearing
      the cycleid is incorrect, and could cause the concurrent vacuum to miss
      removing tuples that it needs to remove.  This is a longstanding bug
      introduced by commit e6284649 of
      2006-07-25.  I believe it explains Maxim Boguk's recent report of index
      corruption, and probably some other previously unexplained reports.
      
      In 9.0 and up this is a one-line fix; before that we need to introduce a
      flag to tell _bt_delitems what to do.
    • Avoid double close of file handle in syslogger on win32 · f3ad4ca0
      Committed by Magnus Hagander
      This causes an exception when running under a debugger or in particular
      when running on a debug version of Windows.
      
      Patch from MauMau
  11. 21 Feb 2012 (1 commit)
  12. 20 Feb 2012 (1 commit)
    • Fix regex back-references that are directly quantified with *. · 35ab4284
      Committed by Tom Lane
      The syntax "\n*", that is a backref with a * quantifier directly applied
      to it, has never worked correctly in Spencer's library.  This has been an
      open bug in the Tcl bug tracker since 2005:
      https://sourceforge.net/tracker/index.php?func=detail&aid=1115587&group_id=10894&atid=110894
      
      The core of the problem is in parseqatom(), which first changes "\n*" to
      "\n+|" and then applies repeat() to the NFA representing the backref atom.
      repeat() thinks that any arc leading into its "rp" argument is part of the
      sub-NFA to be repeated.  Unfortunately, since parseqatom() already created
      the arc that was intended to represent the empty bypass around "\n+", this
      arc gets moved too, so that it now leads into the state loop created by
      repeat().  Thus, what was supposed to be an "empty" bypass gets turned into
      something that represents zero or more repetitions of the NFA representing
      the backref atom.  In the original example, in place of
      	^([bc])\1*$
      we now have something that acts like
      	^([bc])(\1+|[bc]*)$
      At runtime, the branch involving the actual backref fails, as it's supposed
      to, but then the other branch succeeds anyway.
      
      We could no doubt fix this by some rearrangement of the operations in
      parseqatom(), but that code is plenty ugly already, and what's more the
      whole business of converting "x*" to "x+|" probably needs to go away to fix
      another problem I'll mention in a moment.  Instead, this patch suppresses
      the *-conversion when the target is a simple backref atom, leaving the case
      of m == 0 to be handled at runtime.  This makes the patch in regcomp.c a
      one-liner, at the cost of having to tweak cbrdissect() a little.  In the
      event I went a bit further than that and rewrote cbrdissect() to check all
      the string-length-related conditions before it starts comparing characters.
      It seems a bit stupid to possibly iterate through many copies of an
      n-character backreference, only to fail at the end because the target
      string's length isn't a multiple of n --- we could have found that out
      before starting.  The existing coding could only be a win if integer
      division is hugely expensive compared to character comparison, but I don't
      know of any modern machine where that might be true.
      
      This does not fix all the problems with quantified back-references.  In
      particular, the code is still broken for back-references that appear within
      a larger expression that is quantified (so that direct insertion of the
      quantification limits into the BACKREF node doesn't apply).  I think fixing
      that will take some major surgery on the NFA code, specifically introducing
      an explicit iteration node type instead of trying to transform iteration
      into concatenation of modified regexps.
      
      Back-patch to all supported branches.  In HEAD, also add a regression test
      case for this.  (It may seem a bit silly to create a regression test file
      for just one test case; but I'm expecting that we will soon import a whole
      bunch of regex regression tests from Tcl, so might as well create the
      infrastructure now.)
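
      A standalone C sketch of the "do the arithmetic first" idea behind the
      cbrdissect() rewrite: a span can only be k copies of an n-character
      back-reference if n divides the span length and k lies within the quantifier
      bounds, so those cheap checks can run before any character comparison.  This
      is illustrative code, not the regex library's.

          #include <stdbool.h>
          #include <string.h>

          /*
           * Can the text span [begin, end) consist of between "min" and "max" copies
           * of the n-character string "ref"?  "max" < 0 means unbounded; "min" is
           * assumed non-negative.  All length checks run before memcmp().
           */
          static bool
          matches_repeated_backref(const char *begin, const char *end,
                                   const char *ref, size_t n,
                                   long min, long max)
          {
              size_t  span = (size_t) (end - begin);
              size_t  k;
              size_t  i;

              if (n == 0)
                  return span == 0;               /* copies of an empty ref match only empty text */
              if (span % n != 0)
                  return false;                   /* span length is not a multiple of n */
              k = span / n;
              if (k < (size_t) min || (max >= 0 && k > (size_t) max))
                  return false;                   /* repetition count outside the quantifier bounds */

              for (i = 0; i < k; i++)             /* only now compare characters */
              {
                  if (memcmp(begin + i * n, ref, n) != 0)
                      return false;
              }
              return true;
          }
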
  13. 17 Feb 2012 (1 commit)
    • Fix longstanding error in contrib/intarray's int[] & int[] operator. · b0e1a4bd
      Committed by Tom Lane
      The array intersection code would give wrong results if the first entry of
      the correct output array was "1".  (I think only this value could be
      at risk, since the previous word would always be a lower-bound entry with
      that fixed value.)
      
      Problem spotted by Julien Rouhaud, initial patch by Guillaume Lelarge,
      cosmetic improvements by me.
  14. 12 Feb 2012 (1 commit)
    • Fix I/O-conversion-related memory leaks in plpgsql. · 3eb2ff16
      Committed by Tom Lane
      Datatype I/O functions are allowed to leak memory in CurrentMemoryContext,
      since they are generally called in short-lived contexts.  However, plpgsql
      calls such functions for purposes of type conversion, and was calling them
      in its procedure context.  Therefore, any leaked memory would not be
      recovered until the end of the plpgsql function.  If such a conversion
      was done within a loop, quite a bit of memory could get consumed.  Fix by
      calling such functions in the transient "eval_econtext", and adjust other
      logic to match.  Back-patch to all supported versions.
      
      Andres Freund, Jan Urbański, Tom Lane
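
      A sketch of the general pattern the fix relies on, assuming PostgreSQL's fmgr
      and memory-context APIs (OutputFunctionCall, MemoryContextSwitchTo,
      MemoryContextReset, pstrdup): run the I/O function in a short-lived context,
      copy out only the result, and reset the context so any leakage is reclaimed.
      Not the committed plpgsql code.

          /*
           * Perform a datatype output conversion in a short-lived context so that
           * anything the I/O function leaks is reclaimed by the reset, instead of
           * accumulating in a long-lived procedure context.
           */
          static char *
          convert_in_short_lived_context(FmgrInfo *outfunc, Datum value,
                                         MemoryContext shortlived,
                                         MemoryContext resultcxt)
          {
              MemoryContext oldcxt;
              char       *tmp;
              char       *result;

              oldcxt = MemoryContextSwitchTo(shortlived);
              tmp = OutputFunctionCall(outfunc, value);   /* may leak into 'shortlived' */
              MemoryContextSwitchTo(resultcxt);
              result = pstrdup(tmp);                      /* keep only the result */
              MemoryContextSwitchTo(oldcxt);
              MemoryContextReset(shortlived);             /* reclaim everything else */
              return result;
          }
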
  15. 11 Feb 2012 (2 commits)
    • Fix brain fade in previous pg_dump patch. · 01f99e2d
      Committed by Tom Lane
      In pre-7.3 databases, pg_attribute.attislocal doesn't exist.  The easiest
      way to make sure the new inheritance logic behaves sanely is to assume it's
      TRUE, not FALSE.  This will result in printing child columns even when
      they're not really needed.  We could work harder at trying to reconstruct a
      value for attislocal, but there is little evidence that anyone still cares
      about dumping from such old versions, so just do the minimum necessary to
      have a valid dump.
      
      I had this correct in the original draft of the patch, but for some
      unaccountable reason decided it wasn't necessary to change the value.
      Testing against an old server shows otherwise...
    • Fix pg_dump for better handling of inherited columns. · 02e64181
      Committed by Tom Lane
      Revise pg_dump's handling of inherited columns, which was last looked at
      seriously in 2001, to eliminate several misbehaviors associated with
      inherited default expressions and NOT NULL flags.  In particular make sure
      that a column is printed in a child table's CREATE TABLE command if and
      only if it has attislocal = true; the former behavior would sometimes cause
      a column to become marked attislocal when it was not so marked in the
      source database.  Also, stop relying on textual comparison of default
      expressions to decide if they're inherited; instead, don't use
      default-expression inheritance at all, but just install the default
      explicitly at each level of the hierarchy.  This fixes the
      search-path-related misbehavior recently exhibited by Chester Young, and
      also removes some dubious assumptions about the order in which ALTER TABLE
      SET DEFAULT commands would be executed.
      
      Back-patch to all supported branches.
  16. 07 Feb 2012 (1 commit)
    • Avoid problems with OID wraparound during WAL replay. · 0f5d2082
      Committed by Tom Lane
      Fix a longstanding thinko in replay of NEXTOID and checkpoint records: we
      tried to advance nextOid only if it was behind the value in the WAL record,
      but the comparison would draw the wrong conclusion if OID wraparound had
      occurred since the previous value.  Better to just unconditionally assign
      the new value, since OID assignment shouldn't be happening during replay
      anyway.
      
      The consequences of a failure to update nextOid would be pretty minimal,
      since we have long had the code set up to obtain another OID and try again
      if the generated value is already in use.  But in the worst case there
      could be significant performance glitches while such loops iterate through
      many already-used OIDs before finding a free one.
      
      The odds of a wraparound happening during WAL replay would be small in a
      crash-recovery scenario, and the length of any ensuing OID-assignment stall
      quite limited anyway.  But neither of these statements hold true for a
      replication slave that follows a WAL stream for a long period; its behavior
      upon going live could be almost unboundedly bad.  Hence it seems worth
      back-patching this fix into all supported branches.
      
      Already fixed in HEAD in commit c6d76d7c.
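
      A standalone C demonstration of why the conditional advance misbehaves across
      wraparound: once the counter has wrapped, the value carried by the WAL record
      is numerically smaller than the stale local value, so "advance only if ahead"
      never fires, while unconditional assignment tracks the master correctly.

          #include <stdint.h>
          #include <stdio.h>

          typedef uint32_t Oid;

          int
          main(void)
          {
              Oid     localNextOid = 4294967000u;     /* stale pre-wraparound value on the standby */
              Oid     walNextOid = 20000;             /* post-wraparound value from a NEXTOID record */

              /* Old coding: advance only if the WAL value appears to be ahead. */
              if (walNextOid > localNextOid)
                  localNextOid = walNextOid;          /* never taken here: the counter stays stale */
              printf("conditional advance leaves nextOid at %u\n", (unsigned) localNextOid);

              /* Fixed coding: replay simply adopts the value from the WAL record. */
              localNextOid = walNextOid;
              printf("unconditional assignment sets nextOid to %u\n", (unsigned) localNextOid);
              return 0;
          }
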
  17. 30 Jan 2012 (1 commit)
    • Accept a non-existent value in "ALTER USER/DATABASE SET ..." command. · 483c8ce2
      Committed by Heikki Linnakangas
      When default_text_search_config, default_tablespace, or temp_tablespaces
      setting is set per-user or per-database, with an "ALTER USER/DATABASE SET
      ..." statement, don't throw an error if the text search configuration or
      tablespace does not exist. In the case of a text search configuration, even if
      it doesn't exist in the current database, it might exist in another
      database, where the setting is intended to have its effect. This behavior
      is now the same as search_path's.
      
      Tablespaces are cluster-wide, so the same argument doesn't hold for
      tablespaces, but there's a problem with pg_dumpall: it dumps "ALTER USER
      SET ..." statements before the "CREATE TABLESPACE" statements. Arguably
      that's pg_dumpall's fault - it should dump the statements in such an order
      that the tablespace is created first and then the "ALTER USER SET
      default_tablespace ..." statements after that - but it seems better to be
      consistent with search_path and default_text_search_config anyway. Besides,
      you could still create a dump that throws an error, by creating the
      tablespace, running "ALTER USER SET default_tablespace", then dropping the
      tablespace and running pg_dumpall on that.
      
      Backpatch to all supported versions.
  18. 28 Jan 2012 (1 commit)
  19. 10 Jan 2012 (1 commit)
    • Fix one-byte buffer overrun in contrib/test_parser. · 3852cfaf
      Committed by Tom Lane
      The original coding examined the next character before verifying that
      there *is* a next character.  In the worst case with the input buffer
      right up against the end of memory, this would result in a segfault.
      
      Problem spotted by Paul Guyot; this commit extends his patch to fix an
      additional case.  In addition, make the code a tad more readable by not
      overloading the usage of *tlen.
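
      A one-function C sketch of the class of fix: verify that a following character
      exists before examining it, so a token ending flush against the end of the
      input buffer cannot trigger a read past the end.  The helper name is
      illustrative, not test_parser's.

          #include <stdbool.h>
          #include <stddef.h>

          /*
           * Does the character after position i exist and equal c?  The bounds
           * check must come before the dereference; with the operands reversed,
           * input ending exactly at the buffer boundary reads one byte too far.
           */
          static bool
          next_char_is(const char *buf, size_t len, size_t i, char c)
          {
              return (i + 1 < len) && (buf[i + 1] == c);
          }
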
  20. 08 Jan 2012 (1 commit)
    • Use __sync_lock_test_and_set() for spinlocks on ARM, if available. · aa31c350
      Committed by Tom Lane
      Historically we've used the SWPB instruction for TAS() on ARM, but this
      is deprecated and not available on ARMv6 and later.  Instead, make use
      of a GCC builtin if available.  We'll still fall back to SWPB if not,
      so as not to break existing ports using older GCC versions.
      
      Eventually we might want to try using __sync_lock_test_and_set() on some
      other architectures too, but for now that seems to present only risk and
      not reward.
      
      Back-patch to all supported versions, since people might want to use any
      of them on more recent ARM chips.
      
      Martin Pitt
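
      An illustrative sketch of the shape of the change, not the actual s_lock.h
      text: use the GCC __sync builtins for test-and-set on ARM when the compiler
      offers them, and keep the historical SWPB-based tas() as the fallback.
      HAVE_GCC_INT_ATOMICS stands in for whatever configure symbol the real patch
      tests.

          typedef int slock_t;

          #if defined(__arm__) && defined(HAVE_GCC_INT_ATOMICS)
          /* GCC builtin: atomically store 1 and return the previous value. */
          #define TAS(lock)       __sync_lock_test_and_set((lock), 1)
          /* Release the lock with the matching builtin (stores 0). */
          #define S_UNLOCK(lock)  __sync_lock_release(lock)
          #else
          /* Older toolchains keep the historical SWPB-based implementation. */
          extern int tas(volatile slock_t *lock);
          #define TAS(lock)       tas(lock)
          #endif
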
  21. 07 Jan 2012 (1 commit)
    • Fix pg_restore's direct-to-database mode for INSERT-style table data. · a86448e8
      Committed by Tom Lane
      In commit 6545a901, I removed the mini SQL
      lexer that was in pg_backup_db.c, thinking that it had no real purpose
      beyond separating COPY data from SQL commands, which purpose had been
      obsoleted by long-ago fixes in pg_dump's archive file format.
      Unfortunately this was in error: that code was also used to identify
      command boundaries in INSERT-style table data, which is run together as a
      single string in the archive file for better compressibility.  As a result,
      direct-to-database restores from archive files made with --inserts or
      --column-inserts fail in our latest releases, as reported by Dick Visser.
      
      To fix, restore the mini SQL lexer, but simplify it by adjusting the
      calling logic so that it's only required to cope with INSERT-style table
      data, not arbitrary SQL commands.  This allows us to not have to deal with
      SQL comments, E'' strings, or dollar-quoted strings, none of which have
      ever been emitted by dumpTableData_insert.
      
      Also, fix the lexer to cope with standard-conforming strings, which was the
      actual bug that the previous patch was meant to solve.
      
      Back-patch to all supported branches.  The previous patch went back to 8.2,
      which unfortunately means that the EOL release of 8.2 contains this bug,
      but I don't think we're doing another 8.2 release just because of that.
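
      A standalone C sketch of the simplified lexing job described above: find
      command boundaries in a run of INSERT statements by splitting on semicolons
      that are not inside a single-quoted literal, treating '' as an escaped quote
      (standard-conforming strings).  Since dumpTableData_insert never emits
      comments, E'' strings, or dollar quotes, nothing more is needed.  Illustrative
      code, not pg_restore's.

          #include <stdbool.h>
          #include <stddef.h>

          /*
           * Return a pointer just past the semicolon that ends the first command in
           * buf[0..len), or NULL if the buffer ends mid-command.  Single-quoted
           * literals are skipped, with '' taken as an escaped quote.
           */
          static const char *
          end_of_insert_command(const char *buf, size_t len)
          {
              bool    in_quote = false;
              size_t  i;

              for (i = 0; i < len; i++)
              {
                  char    c = buf[i];

                  if (in_quote)
                  {
                      if (c == '\'')
                      {
                          if (i + 1 < len && buf[i + 1] == '\'')
                              i++;                /* '' is a literal quote: stay in the string */
                          else
                              in_quote = false;   /* closing quote */
                      }
                  }
                  else if (c == '\'')
                      in_quote = true;
                  else if (c == ';')
                      return buf + i + 1;         /* command boundary */
              }
              return NULL;                        /* incomplete command */
          }
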
  22. 15 Dec 2011 (1 commit)
  23. 12 Dec 2011 (1 commit)
    • Revert the behavior of inet/cidr functions to not unpack the arguments. · c1a03230
      Committed by Heikki Linnakangas
      I forgot to change the functions to use the PG_GETARG_INET_PP() macro,
      when I changed DatumGetInetP() to unpack the datum, like Datum*P macros
      usually do. Also, I screwed up the definition of the PG_GETARG_INET_PP()
      macro, and didn't notice because it wasn't used.
      
      This fixes the memory leak when sorting inet values, as reported
      by Jochen Erwied and debugged by Andres Freund. Backpatch to 8.3, like
      the previous patch that broke it.
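
      A from-memory sketch of how such an accessor pair is conventionally defined
      (treat the exact definitions as assumptions rather than the tree's): the
      packed ("PP") variant detoasts without forcing the datum into 4-byte-header
      form, so no throwaway copy is palloc'd, whereas PG_DETOAST_DATUM() unpacks it.

          #define DatumGetInetPP(X)       ((inet *) PG_DETOAST_DATUM_PACKED(X))
          #define PG_GETARG_INET_PP(n)    DatumGetInetPP(PG_GETARG_DATUM(n))
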
  24. 02 Dec 2011 (1 commit)
  25. 01 Dec 2011 (3 commits)
  26. 30 Nov 2011 (2 commits)
  27. 28 Nov 2011 (1 commit)
  28. 19 Nov 2011 (1 commit)
    • Avoid floating-point underflow while tracking buffer allocation rate. · fdaff0ba
      Committed by Tom Lane
      When the system is idle for a while after activity, the "smoothed_alloc"
      state variable in BgBufferSync converges slowly to zero.  With standard
      IEEE float arithmetic this results in several iterations with denormalized
      values, which causes kernel traps and annoying log messages on some
      poorly-designed platforms.  There's no real need to track such small values
      of smoothed_alloc, so we can prevent the kernel traps by forcing it to zero
      as soon as it's too small to be interesting for our purposes.  This issue
      is purely cosmetic, since the iterations don't happen fast enough for the
      kernel traps to pose any meaningful performance problem, but still it seems
      worth shutting up the log messages.
      
      The kernel log messages were previously reported by a number of people,
      but kudos to Greg Matthews for tracking down exactly where they were coming
      from.
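
      A standalone C toy showing the idea of the fix: an exponentially smoothed
      estimate decays geometrically while the system is idle, so without a cutoff it
      eventually wanders into IEEE denormal territory; snapping it to zero once it
      is too small to matter avoids that.  The threshold used here is illustrative.

          #include <stdio.h>

          int
          main(void)
          {
              float       smoothed_alloc = 1000.0f;       /* recent buffer-allocation estimate */
              const float smoothing_samples = 16.0f;
              int         cycle;

              for (cycle = 0; cycle < 400; cycle++)
              {
                  /* No new allocations while idle: the estimate decays geometrically. */
                  smoothed_alloc -= smoothed_alloc / smoothing_samples;

                  /* Clamp tiny values to zero instead of letting them denormalize. */
                  if (smoothed_alloc < 0.0001f)
                      smoothed_alloc = 0.0f;
              }
              printf("final estimate: %g\n", smoothed_alloc);
              return 0;
          }
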
  29. 16 Nov 2011 (1 commit)
  30. 11 Nov 2011 (1 commit)