1. 07 Jan 2014 (1 commit)
  2. 06 Jan 2014 (2 commits)
    • Remove bogus -K option from pg_dump. · 10a82cda
      Heikki Linnakangas committed
      I added it to the getopt call by accident in commit
      691e595d.
      
      Amit Kapila
    • Cache catalog lookup data across groups in ordered-set aggregates. · 8b49a604
      Tom Lane committed
      The initial commit of ordered-set aggregates just did all the setup work
      afresh each time the aggregate function is started up.  But in a GROUP BY
      query, the catalog lookups need not be repeated for each group, since the
      column datatypes and sort information won't change.  When there are many
      small groups, this makes for a useful, though not huge, performance
      improvement.  Per suggestion from Andrew Gierth.
      
      Profiling of these cases suggests that it might be profitable to avoid
      duplicate lookups within tuplesort startup as well; but changing the
      tuplesort APIs would have much broader impact, so I left that for
      another day.
  3. 05 Jan 2014 (3 commits)
    • Fix translatability markings in psql, and add defenses against future bugs. · 92459e7a
      Tom Lane committed
      Several previous commits have added columns to various \d queries without
      updating their translate_columns[] arrays, leading to potentially incorrect
      translations in NLS-enabled builds.  Offenders include commit 89368676
      (added prosecdef to \df+), c9ac00e6 (added description to \dc+) and
      3b17efdf (added description to \dC+).  Fix those cases back to 9.3 or
      9.2 as appropriate.
      
      Since this is evidently more easily missed than one would like, in HEAD
      also add an Assert that the supplied array is long enough.  This requires
      an API change for printQuery(), so it seems inappropriate for back
      branches, but presumably all future changes will be tested in HEAD anyway.
      
      In HEAD and 9.3, also clean up a whole lot of sloppiness in the emitted
      SQL for \dy (event triggers): lack of translatability due to failing to
      pass words-to-be-translated through gettext_noop(), inadequate schema
      qualification, and sloppy formatting resulting in unnecessarily ugly
      -E output.
      
      Peter Eisentraut and Tom Lane, per bug #8702 from Sergey Burladyan
    • Fix header comment for bitncmp(). · 5858cf8a
      Tom Lane committed
      The result is an int less than, equal to, or greater than zero, in the
      style of memcmp (and, in fact, exactly the output of memcmp in some cases).
      This comment previously said -1, 1, or 0, which was an overspecification,
      as noted by Emre Hasegeli.  All of the existing callers appear to be fine
      with the actual behavior, so just fix the comment.
      
      In passing, improve infelicitous formatting of some call sites.
    • Fix typo in comment. · 99299756
      Tom Lane committed
      classifyClauses was renamed to classifyConditions somewhere along the
      line, but this comment didn't get the memo.
      
      Ian Barwick
  4. 04 Jan 2014 (1 commit)
  5. 03 Jan 2014 (7 commits)
    • Ooops, should use double not single quotes in StaticAssertStmt(). · a3b4aeec
      Tom Lane committed
      That's what I get for testing this on an older compiler.
    • Fix calculation of maximum statistics-message size. · a7ef273e
      Tom Lane committed
      The PGSTAT_NUM_TABENTRIES macro should have been updated when new fields
      were added to struct PgStat_MsgTabstat in commit 64482890, but it wasn't.
      Fix that.
      
      Also, add a static assertion that we didn't overrun the intended size limit
      on stats messages.  This will not necessarily catch every mistake in
      computing the maximum array size for stats messages, but it will catch ones
      that have practical consequences.  (The assertion in fact doesn't complain
      about the aforementioned error in PGSTAT_NUM_TABENTRIES, because that was
      not big enough to cause the array length to increase.)
      
      No back-patch, as there's no actual bug in existing releases; this is just
      in the nature of future-proofing.
      
      Mark Dilger and Tom Lane
    • Handle 5-char filenames in SlruScanDirectory · 638cf09e
      Alvaro Herrera committed
      Original users of slru.c were all producing 4-digit filenames, so that
      was all that that code was prepared to handle.  Changes to multixact.c
      in the course of commit 0ac5ad51 made pg_multixact/members create
      5-digit filenames once a certain threshold was reached, which
      SlruScanDirectory wasn't prepared to deal with; in particular,
      5-digit-name files were not removed during truncation.  Change that
      routine to make it aware of those files, and have it process them just
      like any others.
      
      Right now, some pg_multixact/members directories will contain a mixture
      of 4-char and 5-char filenames.  A future commit is expected to fix things
      so that each slru.c user declares the correct maximum width for the
      files it produces, to avoid such unsightly mixtures.
      
      Noticed while investigating bug #8673 reported by Serge Negodyuck.
    • Wrap multixact/members correctly during extension · a50d9762
      Alvaro Herrera committed
      In the 9.2 code for extending multixact/members, the logic was very
      simple because the number of entries in a members page was a proper
      divisor of 2^32, and thus at 2^32 wraparound the logic for page switch
      was identical to that at any other page boundary.  In commit 0ac5ad51 I
      failed to realize this and introduced code that was not able to go over
      the 2^32 boundary.  Fix that by ensuring that when we reach the last
      page of the last segment we correctly zero the initial page of the
      initial segment, using correct uint32-wraparound-safe arithmetic.
      
      Noticed while investigating bug #8673 reported by Serge Negodyuck, as
      diagnosed by Andres Freund.
    • Handle wraparound during truncation in multixact/members · 722acf51
      Alvaro Herrera committed
      In pg_multixact/members, relying on modulo-2^32 arithmetic for
      wraparound handling doesn't work all that well.  Because we don't
      explicitly track wraparound of the allocation counter for members, it
      is possible that the "live" area exceeds 2^31 entries; trying to remove
      SLRU segments that are "old" according to the original logic might lead
      to removal of segments still in use.  To fix, have the truncation
      routine use a tailored SlruScanDirectory callback that keeps track of
      the live area in actual use; that way, when the live range exceeds 2^31
      entries, the oldest segments still live will not get removed untimely.
      
      This new SlruScanDir callback needs to take care not to remove segments
      that are "in the future": if new SLRU segments appear while the
      truncation is ongoing, make sure we don't remove them.  This requires
      examination of shared memory state to recheck for false positives, but
      testing suggests that this doesn't cause a problem.  The original coding
      didn't suffer from this pitfall because segments created when truncation
      is running are never considered to be removable.
      
      Per Andres Freund's investigation of bug #8673 reported by Serge
      Negodyuck.
    • Aggressively freeze tables when CLUSTER or VACUUM FULL rewrites them. · 3cff1879
      Robert Haas committed
      We haven't wanted to do this in the past on the grounds that in rare
      cases the original xmin value will be needed for forensic purposes, but
      commit 37484ad2 removes that objection,
      so now we can.
      
      Per extensive discussion, among many people, on pgsql-hackers.
    • Fix contrib/pg_upgrade to clean all the cruft made during "make check". · 4cf81b73
      Tom Lane committed
      Although these files get cleaned up if the test runs to completion,
      a failure partway through leaves trash all over the floor.  The Makefile
      ought to be bright enough to get rid of it when you say "make clean".
  6. 02 Jan 2014 (1 commit)
  7. 01 Jan 2014 (1 commit)
  8. 31 Dec 2013 (4 commits)
    • Fix broken support for event triggers as extension members. · c01bc51f
      Tom Lane committed
      CREATE EVENT TRIGGER forgot to mark the event trigger as a member of its
      extension, and pg_dump didn't pay any attention anyway when deciding
      whether to dump the event trigger.  Per report from Moshe Jacobson.
      
      Given the obvious lack of testing here, it's rather astonishing that
      ALTER EXTENSION ADD/DROP EVENT TRIGGER work, but they seem to.
    • Fix alphabetization in catalogs.sgml. · d7ee4311
      Tom Lane committed
      Some recent patches seem not to have grasped the concept that the catalogs
      are described in alphabetical order.
    • Remove dead code now that orindxpath.c is history. · f7fbf4b0
      Tom Lane committed
      We don't need make_restrictinfo_from_bitmapqual() anymore at all.
      generate_bitmap_or_paths() doesn't need to be exported, and we can
      drop its rather klugy restriction_only flag.
    • Extract restriction OR clauses whether or not they are indexable. · f343a880
      Tom Lane committed
      It's possible to extract a restriction OR clause from a join clause that
      has the form of an OR-of-ANDs, if each sub-AND includes a clause that
      mentions only one specific relation.  While PG has been aware of that idea
      for many years, the code previously only did it if it could extract an
      indexable OR clause.  On reflection, though, that seems a silly limitation:
      adding a restriction clause can be a win by reducing the number of rows
      that have to be filtered at the join step, even if we have to test the
      clause as a plain filter clause during the scan.  This should be especially
      useful for foreign tables, where the change can cut the number of rows that
      have to be retrieved from the foreign server; but testing shows it can win
      even on local tables.  Per a suggestion from Robert Haas.
      
      As a heuristic, I made the code accept an extracted restriction clause
      if its estimated selectivity is less than 0.9, which will probably result
      in accepting extracted clauses just about always.  We might need to tweak
      that later based on experience.
      
      Since the code no longer has even a weak connection to Path creation,
      remove orindxpath.c and create a new file optimizer/util/orclauses.c.
      
      There's some additional janitorial cleanup of now-dead code that needs
      to happen, but it seems like that's a fit subject for a separate commit.
  9. 30 Dec 2013 (2 commits)
    • Don't attempt to limit target database for pg_restore. · 47f50262
      Kevin Grittner committed
      There was an apparent attempt to limit the target database for
      pg_restore to version 7.1.0 or later.  Due to a leading zero this
      was interpreted as an octal number, which allowed targets with
      version numbers down to 2.87.36.  The lowest actual release above
      that was 6.0.0, so that was effectively the limit.
      
      Since the success of the restore attempt will depend primarily
      on what statements were generated by the dump run, we don't want
      pg_restore trying to guess whether a given target should be allowed
      based on version number.  Allow a connection to any version.  Since
      it is very unlikely that anyone would be using a recent version of
      pg_restore to restore to a pre-6.0 database, this has little to no
      practical impact, but it makes the code less confusing to read.
      
      Issue reported and initial patch suggestion from Joel Jacobson
      based on an article by Andrey Karpov reporting on issues found by
      PVS-Studio static code analyzer.  Final patch based on analysis by
      Tom Lane.  Back-patch to all supported branches.
    • Undo autoconf 2.69's attempt to #define _DARWIN_USE_64_BIT_INODE. · ed011d97
      Tom Lane committed
      Defining this symbol causes OS X 10.5 to use a buggy version of readdir(),
      which can sometimes fail with EINVAL if the previously-fetched directory
      entry has been deleted or renamed.  In later OS X versions that bug has
      been repaired, but we still don't need the #define because it's on by
      default.  So this is just an all-around bad idea, and we can do without it.
  10. 29 Dec 2013 (1 commit)
  11. 28 Dec 2013 (3 commits)
    • Fix whitespace · b986270b
      Peter Eisentraut committed
    • Properly detect invalid JSON numbers when generating JSON. · 29dcf7de
      Andrew Dunstan committed
      Instead of looking for characters that aren't valid in JSON numbers, we
      simply pass the output string through the JSON number parser, and if it
      fails the string is quoted. This means among other things that money and
      domains over money will be quoted correctly and generate valid JSON.
      
      Fixes bug #8676 reported by Anderson Cristian da Silva.
      
      Backpatched to 9.2 where JSON generation was introduced.
    • Fix misplaced right paren bugs in pgstatfuncs.c. · a133bf70
      Kevin Grittner committed
      The bug would only show up if the C sockaddr structure contained
      zero in the first byte for a valid address; otherwise it would
      fail to fail, which is probably why it went unnoticed for so long.
      
      Patch submitted by Joel Jacobson after seeing an article by Andrey
      Karpov in which he reports finding this through static code
      analysis using PVS-Studio.  While I was at it I moved a definition
      of a local variable referenced in the buggy code to a more local
      context.
      
      Backpatch to all supported branches.
  12. 27 Dec 2013 (1 commit)
  13. 25 Dec 2013 (1 commit)
  14. 24 Dec 2013 (4 commits)
    • Fix ANALYZE failure on a column that's a domain over a range. · 4eeda92d
      Tom Lane committed
      Most other range operations seem to work all right on domains,
      but this one not so much, at least not since commit 918eee0c.
      Per bug #8684 from Brett Neumeier.
    • Revise documentation for new freezing method. · d43760b6
      Robert Haas committed
      Commit 37484ad2 invalidated a good
      chunk of documentation, so patch it up to reflect the new state of
      play.  Along the way, patch remaining documentation references to
      FrozenXID to say instead FrozenTransactionId, so that they match the
      way we actually spell it in the code.
    • Fix portability issue in ordered-set patch. · cf63c641
      Tom Lane committed
      Overly compact coding in makeOrderedSetArgs() led to a platform dependency:
      if the compiler chose to execute the subexpressions in the wrong order,
      list_length() might get applied to an already-modified List, giving a
      value we didn't want.  Per buildfarm.
    • Support ordered-set (WITHIN GROUP) aggregates. · 8d65da1f
      Tom Lane committed
      This patch introduces generic support for ordered-set and hypothetical-set
      aggregate functions, as well as implementations of the instances defined in
      SQL:2008 (percentile_cont(), percentile_disc(), rank(), dense_rank(),
      percent_rank(), cume_dist()).  We also added mode() though it is not in the
      spec, as well as versions of percentile_cont() and percentile_disc() that
      can compute multiple percentile values in one pass over the data.
      
      Unlike the original submission, this patch puts full control of the sorting
      process in the hands of the aggregate's support functions.  To allow the
      support functions to find out how they're supposed to sort, a new API
      function AggGetAggref() is added to nodeAgg.c.  This allows retrieval of
      the aggregate call's Aggref node, which may have other uses beyond the
      immediate need.  There is also support for ordered-set aggregates to
      install cleanup callback functions, so that they can be sure that
      infrastructure such as tuplesort objects gets cleaned up.
      
      In passing, make some fixes in the recently-added support for variadic
      aggregates, and make some editorial adjustments in the recent FILTER
      additions for aggregates.  Also, simplify use of IsBinaryCoercible() by
      allowing it to succeed whenever the target type is ANY or ANYELEMENT.
      It was inconsistent that it dealt with other polymorphic target types
      but not these.
      
      Atri Sharma and Andrew Gierth; reviewed by Pavel Stehule and Vik Fearing,
      and rather heavily editorialized upon by Tom Lane
  15. 23 Dec 2013 (1 commit)
    • Change the way we mark tuples as frozen. · 37484ad2
      Robert Haas committed
      Instead of changing the tuple xmin to FrozenTransactionId, the combination
      of HEAP_XMIN_COMMITTED and HEAP_XMIN_INVALID, which were previously never
      set together, is now defined as HEAP_XMIN_FROZEN.  A variety of previous
      proposals to freeze tuples opportunistically before vacuum_freeze_min_age
      is reached have foundered on the objection that replacing xmin by
      FrozenTransactionId might hinder debugging efforts when things in this
      area go awry; this patch is intended to solve that problem by keeping
      the XID around (but largely ignoring the value to which it is set).
      
      Third-party code that checks for HEAP_XMIN_INVALID on tuples where
      HEAP_XMIN_COMMITTED might be set will be broken by this change.  To fix,
      use the new accessor macros in htup_details.h rather than consulting the
      bits directly.  HeapTupleHeaderGetXmin has been modified to return
      FrozenTransactionId when the infomask bits indicate that the tuple is
      frozen; use HeapTupleHeaderGetRawXmin when you already know that the
      tuple isn't marked committed or frozen, or want the raw value anyway.
      We currently do this in routines that display the xmin for user consumption,
      in tqual.c where it's known to be safe and important for the avoidance of
      extra cycles, and in the function-caching code for various procedural
      languages, which shouldn't invalidate the cache just because the tuple
      gets frozen.
      
      Robert Haas and Andres Freund
  16. 21 Dec 2013 (1 commit)
  17. 20 Dec 2013 (6 commits)
    • Avoid useless palloc during transaction commit · 6130208e
      Alvaro Herrera committed
      We can allocate the initial relations-to-drop array when first needed,
      instead of at function entry; this avoids allocating it when the
      function is not going to do anything, which is most of the time.
      
      Backpatch to 9.3, where this behavior was introduced by commit
      279628a0.
      
      There's more that could be done here, such as possible reworking of the
      code to avoid having to palloc anything, but that doesn't sound as
      backpatchable as this relatively minor change.
      
      Per complaint from Noah Misch in
      20131031145234.GA621493@tornado.leadboat.com
    • pg_prewarm, a contrib module for prewarming relation data. · c32afe53
      Robert Haas committed
      Patch by me.  Review by Álvaro Herrera, Amit Kapila, Jeff Janes,
      Gurjeet Singh, and others.
    • isolationtester: Ensure stderr is unbuffered, too · 6eda3e9c
      Alvaro Herrera committed
    • Move pg_upgrade_support global variables to their own include file · 527fdd9d
      Bruce Momjian committed
      Previously their declarations were spread around to avoid accidental
      access.
    • Make stdout unbuffered · 73bcb76b
      Alvaro Herrera committed
      This ensures that all stdout output is flushed immediately, to match
      stderr.  This eliminates the need for fflush(stdout) calls sprinkled all
      over the place.
      
      Per Daniel Wood in message 519A79C6.90308@salesforce.com
    • Optimize updating a row that's locked by same xid · 13aa6244
      Alvaro Herrera committed
      Updating or locking a row that was already locked by the same
      transaction under the same Xid caused a MultiXact to be created; but
      this is unnecessary, because there's no usefulness in being able to
      differentiate two locks by the same transaction.  In particular, if a
      transaction executed SELECT FOR UPDATE followed by an UPDATE that didn't
      modify columns of the key, we would dutifully represent the resulting
      combination as a multixact -- even though a single key-update is
      sufficient.
      
      Optimize the case so that only the strongest of both locks/updates is
      represented in Xmax.  This can save some Xmax's from becoming
      MultiXacts, which can be a significant optimization.
      
      This missed optimization opportunity was spotted by Andres Freund while
      investigating a bug reported by Oliver Seemann in message
      CANCipfpfzoYnOz5jj=UZ70_R=CwDHv36dqWSpwsi27vpm1z5sA@mail.gmail.com
      and also directly as a performance regression reported by Dong Ye in
      message
      d54b8387.000012d8.00000010@YED-DEVD1.vmware.com
      Reportedly, this patch fixes the performance regression.
      
      Since the missing optimization was reported as a significant performance
      regression from 9.2, backpatch to 9.3.
      
      Andres Freund, tweaked by Álvaro Herrera