1. 15 Jul 2014, 1 commit
  2. 14 Jul 2014, 1 commit
    • Prevent bitmap heap scans from showing unnecessary block info in EXPLAIN ANALYZE. · d4635b16
      Committed by Fujii Masao
      EXPLAIN ANALYZE reports the numbers of exact and lossy blocks that a
      bitmap heap scan processes. But, previously, when both numbers were
      zero, it displayed only the bare prefix "Heap Blocks:" in TEXT output
      format. This is strange and would confuse users, so this commit
      suppresses that unnecessary line.
      
      Backpatch to 9.4 where EXPLAIN ANALYZE was changed so that such information was
      displayed.
      
      Etsuro Fujita
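      To illustrate the change, a minimal sketch (table, index, and settings are hypothetical; exact plans and block counts depend on the data):

      ```sql
      -- Force a bitmap heap scan on a small test table.
      CREATE TABLE t (id int);
      INSERT INTO t SELECT generate_series(1, 100000);
      CREATE INDEX t_id_idx ON t (id);
      SET enable_seqscan = off;
      SET enable_indexscan = off;

      -- A matching predicate prints a line such as "Heap Blocks: exact=443".
      -- With a predicate matching no rows, no heap blocks are fetched;
      -- 9.4 previously printed a bare "Heap Blocks:" line in that case,
      -- which this commit suppresses.
      EXPLAIN ANALYZE SELECT * FROM t WHERE id < 0;
      ```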
  3. 12 Jul 2014, 3 commits
    • Fix decoding of consecutive MULTI_INSERTs emitted by one heap_multi_insert(). · 626bfad6
      Committed by Andres Freund
      Commit 1b86c81d fixed the decoding of toasted columns for the rows
      contained in one xl_heap_multi_insert record. But that's not enough
      on its own, because heap_multi_insert() will first toast all the
      passed-in rows and then emit several *_multi_insert records, one for
      each page it fills with tuples.
      
      Add a XLOG_HEAP_LAST_MULTI_INSERT flag which is set in
      xl_heap_multi_insert->flag denoting that this multi_insert record is
      the last emitted by one heap_multi_insert() call. Then use that flag
      in decode.c to only set clear_toast_afterwards in the right situation.
      
      Expand the number of rows inserted via COPY in the corresponding
      regression test to make sure that more than one heap page is filled
      with tuples by one heap_multi_insert() call.
      
      Backpatch to 9.4 like the previous commit.
    • Fix bug with whole-row references to append subplans. · d6858148
      Committed by Tom Lane
      ExecEvalWholeRowVar incorrectly supposed that it could "bless" the source
      TupleTableSlot just once per query.  But if the input is coming from an
      Append (or, perhaps, other cases?) more than one slot might be returned
      over the query run.  This led to "record type has not been registered"
      errors when a composite datum was extracted from a non-blessed slot.
      
      This bug has been there a long time; I guess it escaped notice because when
      dealing with subqueries the planner tends to expand whole-row Vars into
      RowExprs, which don't have the same problem.  It is possible to trigger
      the problem in all active branches, though, as illustrated by the added
      regression test.
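      The shape of query being fixed can be sketched like this (hypothetical tables; the regression test added by the commit is the authoritative reproducer):

      ```sql
      CREATE TABLE wr_a (x int, y text);
      CREATE TABLE wr_b (x int, y text);
      INSERT INTO wr_a VALUES (1, 'one');
      INSERT INTO wr_b VALUES (2, 'two');

      -- The UNION ALL arms feed the whole-row reference through an Append
      -- node, so row_to_json(u) may see more than one source slot over the
      -- query run; extracting a composite datum from a slot that was never
      -- "blessed" raised "record type has not been registered".
      SELECT row_to_json(u)
      FROM (SELECT * FROM wr_a UNION ALL SELECT * FROM wr_b) u;
      ```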
    • Fix whitespace · 80ddd04b
      Committed by Peter Eisentraut
  4. 11 Jul 2014, 2 commits
    • Implement IMPORT FOREIGN SCHEMA. · 59efda3e
      Committed by Tom Lane
      This command provides an automated way to create foreign table definitions
      that match remote tables, thereby reducing tedium and chances for error.
      In this patch, we provide the necessary core-server infrastructure and
      implement the feature fully in the postgres_fdw foreign-data wrapper.
      Other wrappers will throw a "feature not supported" error until/unless
      they are updated.
      
      Ronan Dunklau and Michael Paquier, additional work by me
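      A sketch of the new command as implemented in postgres_fdw (server, connection options, and schema names are hypothetical):

      ```sql
      CREATE EXTENSION postgres_fdw;
      CREATE SERVER remote_pg FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'remote.example.com', dbname 'sales');
      CREATE USER MAPPING FOR CURRENT_USER SERVER remote_pg
        OPTIONS (user 'app', password 'secret');

      -- Create local foreign-table definitions matching the remote tables,
      -- optionally restricted with LIMIT TO or EXCEPT.
      IMPORT FOREIGN SCHEMA public
        LIMIT TO (orders, customers)
        FROM SERVER remote_pg
        INTO local_schema;
      ```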
    • Adjust blank lines around PG_MODULE_MAGIC defines, for consistency · 6a605cd6
      Committed by Bruce Momjian
      Report by Robert Haas
  5. 09 Jul 2014, 3 commits
  6. 07 Jul 2014, 1 commit
  7. 06 Jul 2014, 2 commits
    • Fix decoding of MULTI_INSERTs when rows other than the last are toasted. · 1b86c81d
      Committed by Andres Freund
      When decoding the results of a HEAP2_MULTI_INSERT (currently only
      generated by COPY FROM) toast columns for all but the last tuple
      weren't replaced by their actual contents before being handed to the
      output plugin. The reassembled toast datums were discarded after
      every REORDER_BUFFER_CHANGE_(INSERT|UPDATE|DELETE), which is correct
      for plain inserts, updates, and deletes, but not for multi-inserts:
      there we generate several REORDER_BUFFER_CHANGE_INSERTs for a single
      xl_heap_multi_insert record.
      
      To fix the problem, add a clear_toast_afterwards boolean to the
      ReorderBufferChange union member used by row modifications. All row
      changes except multi-inserts always set it to true; a multi-insert
      sets it only on the last change it generates.
      
      Add a regression test covering decoding of multi_inserts - there was
      none at all before.
      
      Backpatch to 9.4 where logical decoding was introduced.
      
      Bug found by Petr Jelinek.
    • Consistently pass an "unsigned char" to ctype.h functions. · 333b7db8
      Committed by Noah Misch
      The isxdigit() calls relied on undefined behavior.  The isascii() call
      was well-defined, but our prevailing style is to include the cast.
      Back-patch to 9.4, where the isxdigit() calls were introduced.
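      The hazard being fixed can be shown in a standalone sketch (not PostgreSQL code): a plain char holding a byte above 0x7F may be negative, and passing a negative value other than EOF to isxdigit() is undefined behavior.

      ```c
      #include <ctype.h>
      #include <stdio.h>

      int main(void)
      {
          /* On signed-char platforms this byte is negative as plain char. */
          char c = '\xE9';

          /* isxdigit(c) would be undefined behavior here; casting to
           * unsigned char first is the prevailing (and correct) style. */
          if (isxdigit((unsigned char) c))
              printf("hex digit\n");
          else
              printf("not a hex digit\n");
          return 0;
      }
      ```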
  8. 04 Jul 2014, 2 commits
    • Don't cache per-group context across the whole query in orderedsetaggs.c. · ecd65797
      Committed by Tom Lane
      Although nodeAgg.c currently uses the same per-group memory context for
      all groups of a query, that might change in future.  Avoid assuming it.
      This costs us an extra AggCheckCallContext() call per group, but that's
      pretty cheap and is probably good from a safety standpoint anyway.
      
      Back-patch to 9.4 in case any third-party code copies this logic.
      
      Andrew Gierth
    • Redesign API presented by nodeAgg.c for ordered-set and similar aggregates. · 6f5034ed
      Committed by Tom Lane
      The previous design exposed the input and output ExprContexts of the
      Agg plan node, but work on grouping sets has suggested that we'll regret
      doing that.  Instead provide more narrowly-defined APIs that can be
      implemented in multiple ways, namely a way to get a short-term memory
      context and a way to register an aggregate shutdown callback.
      
      Back-patch to 9.4 where the bad APIs were introduced, since we don't
      want third-party code using these APIs and then having to change in 9.5.
      
      Andrew Gierth
  9. 03 Jul 2014, 3 commits
    • Smooth reporting of commit/rollback statistics. · ac46de56
      Committed by Kevin Grittner
      If a connection committed or rolled back any transactions within a
      PGSTAT_STAT_INTERVAL pacing interval without accessing any tables,
      the reporting of those statistics would be held up until the
      connection closed or until it ended a PGSTAT_STAT_INTERVAL interval
      in which it had accessed a table.  This could result in under-
      reporting of transactions for an extended period, followed by a
      spike in reported transactions.
      
      While this is arguably a bug, the impact is minimal, primarily
      affecting, and being affected by, monitoring software.  It might
      cause more confusion than benefit to change the existing behavior
      in released stable branches, so apply only to master and the 9.4
      beta.
      
      Gurjeet Singh, with review and editing by Kevin Grittner,
      incorporating suggested changes from Abhijit Menon-Sen and Tom
      Lane.
    • Rename logical decoding's pg_llog directory to pg_logical. · a36a8fa3
      Committed by Andres Freund
      The old name wasn't very descriptive of the actual contents of the
      directory, which are historical snapshots in the snapshots/
      subdirectory and mapping data for rewritten tuples in
      mappings/. There's been a fair amount of discussion about what would
      be a good name. I'm settling for pg_logical because it's likely that
      further data around logical decoding and replication will need saving
      in the future.
      
      Also add the missing entry for the directory into storage.sgml's list
      of PGDATA contents.
      
      Bumps catversion as the data directories won't be compatible.
    • Add some errdetail to checkRuleResultList(). · 7980ab30
      Committed by Tom Lane
      This function wasn't originally thought to be really user-facing,
      because converting a table to a view isn't something we expect people
      to do manually.  So not all that much effort was spent on the error
      messages; in particular, while the code will complain that you got
      the column types wrong it won't say exactly what they are.  But since
      we repurposed the code to also check compatibility of rule RETURNING
      lists, it's definitely user-facing.  It now seems worthwhile to add
      errdetail messages showing exactly what the conflict is when there's
      a mismatch of column names or types.  This is prompted by bug #10836
      from Matthias Raffelsieper, which might have been forestalled if the
      error message had reported the wrong column type as being "record".
      
      Back-patch to 9.4, but not into older branches where the set of
      translatable error strings is supposed to be stable.
  10. 02 Jul 2014, 2 commits
    • Allow CREATE/ALTER DATABASE to manipulate datistemplate and datallowconn. · fbb1d7d7
      Committed by Tom Lane
      Historically these database properties could be manipulated only by
      manually updating pg_database, which is error-prone and only possible for
      superusers.  But there seems no good reason not to allow database owners to
      set them for their databases, so invent CREATE/ALTER DATABASE options to do
      that.  Adjust a couple of places that were doing it the hard way to use the
      commands instead.
      
      Vik Fearing, reviewed by Pavel Stehule
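      The new options look like this (database names hypothetical); previously one had to UPDATE pg_database by hand:

      ```sql
      -- Instead of: UPDATE pg_database SET datistemplate = true WHERE ...
      ALTER DATABASE app_template IS_TEMPLATE true;

      -- Instead of poking datallowconn directly:
      ALTER DATABASE old_app ALLOW_CONNECTIONS false;

      -- Both properties can also be set at creation time.
      CREATE DATABASE new_template IS_TEMPLATE true ALLOW_CONNECTIONS false;
      ```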
    • Refactor CREATE/ALTER DATABASE syntax so options need not be keywords. · 15c82efd
      Committed by Tom Lane
      Most of the existing option names are keywords anyway, but we can get rid
      of LC_COLLATE and LC_CTYPE as keywords known to the lexer/grammar.  This
      immediately reduces the size of the grammar tables by about 8KB, and will
      save more when we add additional CREATE/ALTER DATABASE options in future.
      
      A side effect of the implementation is that the CONNECTION LIMIT option
      can now also be spelled CONNECTION_LIMIT.  We choose not to document this,
      however.
      
      Vik Fearing, based on a suggestion by me; reviewed by Pavel Stehule
  11. 01 Jul 2014, 1 commit
    • Avoid copying index tuples when building an index. · 9f03ca91
      Committed by Robert Haas
      The previous code, perhaps out of concern for avoiding memory leaks, formed
      the tuple in one memory context and then copied it to another memory
      context.  However, this doesn't appear to be necessary, since
      index_form_tuple and the functions it calls take precautions against
      leaking memory.  In my testing, building the tuple directly inside the
      sort context shaves several percent off the index build time.
      Rearrange things so we do that.
      
      Patch by me.  Review by Amit Kapila, Tom Lane, Andres Freund.
  12. 30 Jun 2014, 3 commits
    • Check interrupts during logical decoding more frequently. · 1cbc9480
      Committed by Andres Freund
      When reading large amounts of preexisting WAL during logical decoding
      using the SQL interface we possibly could fail to check interrupts in
      due time. Similarly the same could happen on systems with a very high
      WAL volume while creating a new logical replication slot, independent
      of the used interface.
      
      Previously these checks were performed only in xlogreader's read_page
      callbacks, while waiting for new WAL to be produced. That's not
      sufficient though, if there's never a need to wait.  Walsender's send
      loop already contains an interrupt check.
      
      Backpatch to 9.4 where the logical decoding feature was introduced.
    • Fix and enhance the assertion of no palloc's in a critical section. · 1c6821be
      Committed by Heikki Linnakangas
      The assertion failed if WAL_DEBUG or LWLOCK_STATS was enabled; fix that by
      using separate memory contexts for the allocations made within those code
      blocks.
      
      This patch introduces a mechanism for marking any memory context as allowed
      in a critical section. Previously ErrorContext was exempt as a special case.
      
      Instead of a blanket exception of the checkpointer process, only exempt the
      memory context used for the pending ops hash table.
    • Remove use_json_as_text options from json_to_record/json_populate_record. · a749a23d
      Committed by Tom Lane
      The "false" case was really quite useless since all it did was to throw
      an error; a definition not helped in the least by making it the default.
      Instead let's just have the "true" case, which emits nested objects and
      arrays in JSON syntax.  We might later want to provide the ability to
      emit sub-objects in Postgres record or array syntax, but we'd be best off
      to drive that off a check of the target field datatype, not a separate
      argument.
      
      For the functions newly added in 9.4, we can just remove the flag arguments
      outright.  We can't do that for json_populate_record[set], which already
      existed in 9.3, but we can ignore the argument and always behave as if it
      were "true".  It helps that the flag arguments were optional and not
      documented in any useful fashion anyway.
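      With the flag gone, the 9.4 functions are simply called without it, and nested values come back in JSON syntax (a sketch; the record layout is hypothetical):

      ```sql
      -- No use_json_as_text argument anymore; a json-typed output column
      -- receives the nested object in JSON syntax.
      SELECT *
      FROM json_to_record('{"a": 1, "b": {"c": [2, 3]}}')
           AS t(a int, b json);
      ```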
  13. 29 Jun 2014, 2 commits
    • Add cluster_name GUC which is included in process titles if set. · 51adcaa0
      Committed by Andres Freund
      When running several postgres clusters on one OS instance it's often
      inconveniently hard to identify which "postgres" process belongs to
      which postgres instance.
      
      Add the cluster_name GUC, whose value will be included as part of the
      process titles if set. With that, processes can more easily be
      identified using tools like 'ps'.
      
      To avoid problems with encoding mismatches between postgresql.conf,
      consoles, and individual databases, replace non-ASCII characters in the
      name with question marks. The length is limited to NAMEDATALEN to make
      it less likely to truncate important information at the end of the
      status.
      
      Thomas Munro, with some adjustments by me and review by a host of people.
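      A sketch of usage (the name and the exact title text are hypothetical):

      ```
      # postgresql.conf
      cluster_name = 'acceptance'   # appears in process titles, e.g.
                                    # "postgres: acceptance: checkpointer process"
      ```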
    • Remove Alpha and Tru64 support. · a6d488cb
      Committed by Andres Freund
      Support for running postgres on Alpha hasn't been tested for a long
      while. Due to Alpha's uniquely lax cache coherency model it's a hard
      platform to develop for (especially blindly!) and is thought to be
      unlikely to work correctly at present.
      
      As Alpha is the only supported architecture for Tru64, drop support for
      it as well. Tru64's support ended in 2012, and it had been in
      maintenance-only mode for much longer.
      
      Also remove stray references to __ksr__ and ultrix defines.
  14. 28 Jun 2014, 5 commits
    • Allow pushdown of WHERE quals into subqueries with window functions. · d222585a
      Committed by Tom Lane
      We can allow this even without any specific knowledge of the semantics
      of the window function, so long as pushed-down quals will either accept
      every row in a given window partition, or reject every such row.  Because
      window functions act only within a partition, such a case can't result
      in changing the window functions' outputs for any surviving row.
      Eliminating entire partitions in this way obviously can reduce the cost
      of the window-function computations substantially.
      
      The fly in the ointment is that it's hard to be entirely sure whether
      this is true for an arbitrary qual condition.  This patch allows pushdown
      if (a) the qual references only partitioning columns, and (b) the qual
      contains no volatile functions.  We are at risk of incorrect results if
      the qual can produce different answers for values that the partitioning
      equality operator sees as equal.  While it's not hard to invent cases
      for which that can happen, it seems to seldom be a problem in practice,
      since no one has complained about a similar assumption that we've had
      for many years with respect to DISTINCT.  The potential performance
      gains seem to be worth the risk.
      
      David Rowley, reviewed by Vik Fearing; some credit is due also to
      Thomas Mayer who did considerable preliminary investigation.
    • Have multixact be truncated by checkpoint, not vacuum · f741300c
      Committed by Alvaro Herrera
      Instead of truncating pg_multixact at vacuum time, do it only at
      checkpoint time.  The reason for doing it this way is twofold: first, we
      want it to delete only segments that we're certain will not be required
      if there's a crash immediately after the removal; and second, we want to
      do it relatively often so that older files are not left behind if
      there's an untimely crash.
      
      Per my proposal in
      http://www.postgresql.org/message-id/20140626044519.GJ7340@eldon.alvh.no-ip.org
      we now execute the truncation in the checkpointer process rather than as
      part of vacuum.  Vacuum is only in charge of maintaining in shared
      memory the value to which it's possible to truncate the files; that
      value is also stored as part of checkpoints, so upon recovery we can
      reuse the same value to re-execute the truncation and reset the
      oldest-value-still-safe-to-use to one known to remain after truncation.
      
      Per bug reported by Jeff Janes in the course of his tests involving
      bug #8673.
      
      While at it, update some comments that hadn't been updated since
      multixacts were changed.
      
      Backpatch to 9.3, where persistency of pg_multixact files was
      introduced by commit 0ac5ad51.
    • Don't allow relminmxid to go backwards during VACUUM FULL · b7e51d9c
      Committed by Alvaro Herrera
      We were allowing a table's pg_class.relminmxid value to move backwards
      when heaps were swapped by VACUUM FULL or CLUSTER.  There is a
      similar protection against relfrozenxid going backwards, which we
      neglected to clone when the multixact stuff was rejiggered by commit
      0ac5ad51.
      
      Backpatch to 9.3, where relminmxid was introduced.
      
      As reported by Heikki in
      http://www.postgresql.org/message-id/52401AEA.9000608@vmware.com
    • Fix broken Assert() introduced by 8e9a16ab8f7f0e58 · b2770576
      Committed by Alvaro Herrera
      Don't assert MultiXactIdIsRunning if the multi came from a tuple that
      had been share-locked and later copied over to the new cluster by
      pg_upgrade.  Doing that causes an error to be raised unnecessarily:
      MultiXactIdIsRunning is not open to the possibility that its argument
      came from a pg_upgraded tuple, and all its other callers are already
      checking; but such multis cannot, obviously, have transactions still
      running, so the assert is pointless.
      
      Noticed while investigating the bogus pg_multixact/offsets/0000 file
      left over by pg_upgrade, as reported by Andres Freund in
      http://www.postgresql.org/message-id/20140530121631.GE25431@alap3.anarazel.de
      
      Backpatch to 9.3, like the commit that introduced the buglet.
    • Disallow pushing volatile qual expressions down into DISTINCT subqueries. · 11470352
      Committed by Tom Lane
      A WHERE clause applied to the output of a subquery with DISTINCT should
      theoretically be applied only once per distinct row; but if we push it
      into the subquery then it will be evaluated at each row before duplicate
      elimination occurs.  If the qual is volatile this can give rise to
      observably wrong results, so don't do that.
      
      While at it, refactor a little bit to allow subquery_is_pushdown_safe
      to report more than one kind of restrictive condition without indefinitely
      expanding its argument list.
      
      Although this is a bug fix, it seems unwise to back-patch it into released
      branches, since it might de-optimize plans for queries that aren't giving
      any trouble in practice.  So apply to 9.4 but not further back.
  15. 26 Jun 2014, 2 commits
    • Rationalize error messages within jsonfuncs.c. · 798e2357
      Committed by Tom Lane
      I noticed that the functions in jsonfuncs.c sometimes printed error
      messages that claimed I'd called some other function.  Investigation showed
      that this was from repurposing code into "worker" functions without taking
      much care as to whether it would mention the right SQL-level function if it
      threw an error.  Moreover, there was a weird mishmash of messages that
      contained a fixed function name, messages that used %s for a function name,
      and messages that constructed a function name out of spare parts, like
      "json%s_populate_record" (which, quite aside from being ugly as sin, wasn't
      even sufficient to cover all the cases).  This would put an undue burden on
      our long-suffering translators.  Standardize on inserting the SQL function
      name with %s so as to reduce the number of translatable strings, and pass
      function names around as needed to make sure we can report the right one.
      Fix up some gratuitous variations in wording, too.
    • Cosmetic improvements in jsonfuncs.c. · 8d2d7ad5
      Committed by Tom Lane
      Re-pgindent, remove a lot of random vertical whitespace, remove useless
      (if not counterproductive) inline markings, get rid of unnecessary
      zero-padding of strings for hashtable searches.  No functional changes.
  16. 25 Jun 2014, 1 commit
    • Fix handling of nested JSON objects in json_populate_recordset and friends. · 57d8c127
      Committed by Tom Lane
      populate_recordset_object_start() improperly created a new hash table
      (overwriting the link to the existing one) if called at nest levels
      greater than one.  This resulted in previous fields not appearing in
      the final output, as reported by Matti Hameister in bug #10728.
      In 9.4 the problem also affects json_to_recordset.
      
      This perhaps missed detection earlier because the default behavior is to
      throw an error for nested objects: you have to pass use_json_as_text = true
      to see the problem.
      
      In addition, fix query-lifespan leakage of the hashtable created by
      json_populate_record().  This is pretty much the same problem recently
      fixed in dblink: creating an intended-to-be-temporary context underneath
      the executor's per-tuple context isn't enough to make it go away at the
      end of the tuple cycle, because MemoryContextReset is not
      MemoryContextResetAndDeleteChildren.
      
      Michael Paquier and Tom Lane
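      The fixed case can be sketched as follows (type and data hypothetical). At nest levels greater than one the hash table was recreated, so fields seen before the nested object could vanish from the output:

      ```sql
      CREATE TYPE jpop_rec AS (a text, b json);

      -- Before the fix, processing the nested object under "b" overwrote
      -- the hash table built so far, losing the already-seen field "a".
      -- (On 9.3, pass use_json_as_text = true to reach the problem.)
      SELECT *
      FROM json_populate_recordset(NULL::jpop_rec,
             '[{"a": "x", "b": {"c": {"d": 1}}}]');
      ```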
  17. 24 Jun 2014, 2 commits
    • Don't allow foreign tables with OIDs. · a87a7dc8
      Committed by Heikki Linnakangas
      The syntax doesn't let you specify "WITH OIDS" for foreign tables, but it
      was still possible with default_with_oids=true. But the rest of the system,
      including pg_dump, isn't prepared to handle foreign tables with OIDs
      properly.
      
      Backpatch down to 9.1, where foreign tables were introduced. It's possible
      that there are databases out there that already have foreign tables with
      OIDs. There isn't much we can do about that, but at least we can prevent
      them from being created in the future.
      
      Patch by Etsuro Fujita, reviewed by Hadi Moshayedi.
    • Check for interrupts during tuple-insertion loops. · c922353b
      Committed by Robert Haas
      Normally, this won't matter too much; but if I/O is really slow, for
      example because the system is overloaded, we might write many pages
      before checking for interrupts.  A single toast insertion might
      write up to 1GB of data, and a multi-insert could write hundreds
      of tuples (and their corresponding TOAST data).
  18. 23 Jun 2014, 1 commit
    • Fix bug in WAL_DEBUG. · 85ba0748
      Committed by Heikki Linnakangas
      The record header was not copied correctly to the buffer that was passed
      to the rm_desc function. Broken by my rm_desc signature refactoring patch.
  19. 21 Jun 2014, 1 commit
    • Add Asserts to verify that catalog cache keys are unique and not null. · 8b38a538
      Committed by Tom Lane
      The catcache code is effectively assuming this already, so let's insist
      that the catalog and index are actually declared that way.
      
      Having done that, the comments in indexing.h about non-unique indexes
      not being used for catcaches are completely redundant not just mostly so;
      and we didn't have such a comment for every such index anyway.  So let's
      get rid of them.
      
      Per discussion of whether we should identify primary keys for catalogs.
      We might or might not take that further step, but this change in itself
      will allow quicker detection of misdeclared catcaches, so it seems worth
      doing in any case.
  20. 20 Jun 2014, 2 commits
    • Do all-visible handling in lazy_vacuum_page() outside its critical section. · ecac0e2b
      Committed by Andres Freund
      Since fdf9e211 lazy_vacuum_page() rechecks the all-visible status
      of pages in the second pass over the heap. It does so inside a
      critical section, but both visibilitymap_test() and
      heap_page_is_all_visible() perform operations that should not happen
      inside one. The former potentially performs IO and both potentially do
      memory allocations.
      
      To fix, simply move all the all-visible handling outside the critical
      section. Doing so means that the PD_ALL_VISIBLE on the page won't be
      included in the full page image of the HEAP2_CLEAN record anymore. But
      that's fine, the flag will be set by the HEAP2_VISIBLE logged later.
      
      Backpatch to 9.3 where the problem was introduced. The bug only came
      to light due to the assertion added in 4a170ee9 and isn't likely to
      cause problems in production scenarios. The worst outcome is an
      avoidable PANIC restart.
      
      This also gets rid of the difference in the order of operations
      between master and standby mentioned in 2a8e1ac5.
      
      Per reports from David Leverton and Keith Fiske in bug #10533.
    • Don't allow to disable backend assertions via the debug_assertions GUC. · 3bdcf6a5
      Committed by Andres Freund
      The existence of the assert_enabled variable (backing the
      debug_assertions GUC) reduced the amount of knowledge some static code
      checkers (like Coverity and various compilers) could infer from the
      existence of the assertion. That could have been solved by optionally
      removing the assert_enabled variable from the Assert() et al. macros
      at compile time when some special macro is defined, but the resulting
      complication doesn't seem to be worth the gain from having
      debug_assertions. Recompiling is fast enough.
      
      The debug_assertions GUC is still available, but read-only, as it's
      useful when diagnosing problems. The command-line/client startup option
      -A, which previously also allowed enabling/disabling assertions, has
      been removed as it no longer serves a purpose.
      
      While at it, reduce code duplication in bufmgr.c and localbuf.c
      assertions checking for spurious buffer pins. That code had to be
      reindented anyway to cope with the assert_enabled removal.