1. 06 Oct, 2020 · 1 commit
  2. 25 Sep, 2020 · 3 commits
  3. 23 Mar, 2020 · 2 commits
    • Revert "Make fault injector configurable (#9532)" (#9795) · 495343e1
      Committed by Hao Wu
      This reverts commit 7f5c7da1.
    • Make fault injector configurable (#9532) · 7f5c7da1
      Committed by Hao Wu
      The FAULT_INJECTOR definition is hardcoded in a header file (pg_config_manual.h).
      The fault injector is useful, but it can introduce problems in production, such as
      runtime cost and security concerns. It is better to enable this feature during
      development and disable it in release builds.
      
      To achieve this, we add a configure option that makes the fault injector
      configurable. When the fault injector is disabled, tests that depend on it must be
      excluded from ICW runs. There are many tests under isolation2 and regress, so all
      tests in those suites that depend on the fault injector are moved into new schedule
      files named with the pattern XXX_faultinjector_schedule.
      
      **NOTE**
      All tests that depend on the fault injector are kept in the XXX_faultinjector_schedule
      files. With this rule, when the fault injector is disabled we run only the tests that
      do not depend on it.
      
      The schedule files used for fault injector are:
      src/test/regress/greenplum_faultinjector_schedule
      src/test/isolation2/isolation2_faultinjector_schedule
      src/test/isolation2/isolation2_resgroup_faultinjector_schedule
      Reviewed-by: Asim R P <apraveen@pivotal.io>
      Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>
  4. 25 Jul, 2019 · 1 commit
  5. 17 Jun, 2019 · 1 commit
  6. 07 Jun, 2019 · 1 commit
    • Remove statically defined faults from faultinjector framework · ec3cb08c
      Committed by Asim R P
      With this, a fault is now identified only by its name.  New workflow for adding
      a fault:
      
      1. Choose a name for your fault, say "brand_new_fault".  Use
         "git grep brand_new_fault" to check that the name is not already taken.
      
      2. Identify the location in the code and use SIMPLE_FAULT_INJECTOR("brand_new_fault"),
         or the more elaborate FaultInjector_InjectFaultIfSet("brand_new_fault", ...)
         function, to set up the trigger.
      
      3. Invoke gp_inject_fault("brand_new_fault", ...) from a regress, isolation or
         isolation2 test to enable the fault.
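
      A minimal sketch of steps 2 and 3 (the fault name and call site below are
      illustrative, not code from this commit):

          /* Step 2: backend code at the chosen location.  The fault is
           * identified purely by its name; nothing is registered statically. */
          SIMPLE_FAULT_INJECTOR("brand_new_fault");

      The test side (step 3) then enables it through the gp_inject_fault() UDF, for
      example something like SELECT gp_inject_fault('brand_new_fault', 'skip', dbid);
      the exact argument list depends on which gp_inject_fault variant the test uses.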
      
      I did some cleanup along the way, marking a function as static and removing an
      unused function.
      
      Reviewed by: Hao Wu, Haozhou Wang, Ashwin Agrawal, Adam Berlin, Jacob Champion
      
      Fixes GitHub issue #7788
      Discussion: https://groups.google.com/a/greenplum.org/d/topic/gpdb-dev/zmCIxrN873c/discussion
  7. 31 May, 2019 · 1 commit
    • Fix assorted header files that failed to compile standalone. · 7640f931
      Committed by Tom Lane
      We have a longstanding project convention that all .h files should
      be includable with no prerequisites other than postgres.h.  This is
      tested/relied-on by cpluspluscheck.  However, cpluspluscheck has not
      historically been applied to most headers outside the src/include
      tree, with the predictable consequence that some of them don't work.
      Fix that, usually by adding missing #include dependencies.
      
      The change in printf_hack.h might require some explanation: without
      it, my C++ compiler whines that the function is unused.  There's
      not so many call sites that "inline" is going to cost much, and
      besides all the callers are in test code that we really don't care
      about the size of.
      
      There are no actual bugs being fixed here, so I see no need to back-patch.
      
      Discussion: https://postgr.es/m/b517ec3918d645eb950505eac8dd434e@gaz-is.ru
  8. 23 May, 2019 · 2 commits
  9. 20 May, 2019 · 1 commit
  10. 19 Apr, 2019 · 1 commit
    • Fix problems with auto-held portals. · 4d5840ce
      Committed by Tom Lane
      HoldPinnedPortals() did things in the wrong order: it must not mark
      a portal autoHeld until it's been successfully held.  Otherwise,
      a failure while persisting the portal results in a server crash
      because we think the portal is in a good state when it's not.
      
      Also add a check that portal->status is READY before attempting to
      hold a pinned portal.  We have such a check before the only other
      use of HoldPortal(), so it seems unwise not to check it here.
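
      In code form, the corrected sequence amounts roughly to the following
      (a sketch of the described behavior, not the literal patch):

          /* Persist the portal first; only mark it auto-held once that has
           * succeeded, so a failure cannot leave an unheld portal marked good. */
          if (portal->status != PORTAL_READY)
              elog(ERROR, "pinned portal is not ready to be auto-held");
          HoldPortal(portal);
          portal->autoHeld = true;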
      
      Lastly, rethink the responsibility for where to call HoldPinnedPortals.
      The comment for it imagined that it was optional for any individual PL
      to call it or not, but that cannot be the case: if some outer level of
      procedure has a pinned portal, failing to persist it when an inner
      procedure commits is going to be trouble.  Let's have SPI do it instead
      of the individual PLs.  That's not a complete solution, since in theory
      a PL might not be using SPI to perform commit/rollback, but such a PL
      is going to have to be aware of lots of related requirements anyway.
      (This change doesn't cause an API break for any external PLs that might
      be calling HoldPinnedPortals per the old regime, because calling it
      twice during a commit or rollback sequence won't hurt.)
      
      Per bug #15703 from Julian Schauder.  Back-patch to v11 where this code
      came in.
      
      Discussion: https://postgr.es/m/15703-c12c5bc0ea34ba26@postgresql.org
  11. 30 Mar, 2019 · 1 commit
  12. 25 Feb, 2019 · 1 commit
  13. 27 Jan, 2019 · 1 commit
    • Change function call information to be variable length. · a9c35cf8
      Committed by Andres Freund
      Before this change, FunctionCallInfoData (the struct in which the arguments
      etc. for V1 function calls are stored) always had space for FUNC_MAX_ARGS
      (100) arguments, storing datums and their nullness in two separate arrays.
      For nearly every function call, 100 arguments is far more than needed, which
      wastes memory. Keeping arg and argnull as two separate arrays also guarantees
      that accessing a single argument touches two cachelines.
      
      Change the layout so there's a single variable-length array with pairs
      of value / isnull. That drastically reduces memory consumption for
      most function calls (on x86-64 a two argument function now uses
      64bytes, previously 936 bytes), and makes it very likely that argument
      value and its nullness are on the same cacheline.
      
      Arguments are stored in a new NullableDatum struct, which, due to
      padding, needs more memory per argument than before. But as usually
      far fewer arguments are stored, and individual arguments are cheaper
      to access, that's still a clear win.  It's likely that there's other
      places where conversion to NullableDatum arrays would make sense,
      e.g. TupleTableSlots, but that's for another commit.
      
      Because the function call information is now variable-length, allocations
      have to take the number of arguments into account. For heap allocations,
      that can be done with SizeForFunctionCallInfoData(); for on-stack
      allocations there's a new LOCAL_FCINFO(name, nargs) macro that helps to
      allocate an appropriately sized and aligned variable.
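
      A sketch of the two allocation styles described above (illustrative, not
      taken verbatim from the patch):

          /* Heap allocation: the size now depends on the argument count. */
          FunctionCallInfo fcinfo = palloc0(SizeForFunctionCallInfoData(2));

          /* On-stack allocation via the new macro. */
          LOCAL_FCINFO(fcinfo2, 2);

          /* Arguments live in a variable-length array of NullableDatum. */
          fcinfo2->args[0].value = Int32GetDatum(42);
          fcinfo2->args[0].isnull = false;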
      
      Some places that stack-allocate function call information don't know the
      number of arguments at compile time, and currently variably sized stack
      allocations aren't allowed in postgres. Therefore, allow for FUNC_MAX_ARGS
      space in these cases. They're not that common, so for now that seems
      acceptable.
      
      Because of the need to allocate FunctionCallInfo of the appropriate
      size, older extensions may need to update their code. To avoid subtle
      breakages, the FunctionCallInfoData struct has been renamed to
      FunctionCallInfoBaseData. Most code only references FunctionCallInfo,
      so that shouldn't cause much collateral damage.
      
      This change is also a prerequisite for more efficient expression JIT
      compilation (by allocating the function call information on the stack,
      allowing LLVM to optimize it away); previously the size of the call
      information caused problems inside LLVM's optimizer.
      
      Author: Andres Freund
      Reviewed-By: Tom Lane
      Discussion: https://postgr.es/m/20180605172952.x34m5uz6ju6enaem@alap3.anarazel.de
  14. 23 Jan, 2019 · 1 commit
  15. 03 Jan, 2019 · 1 commit
  16. 09 Oct, 2018 · 1 commit
    • Fix omissions in snprintf.c's coverage of standard *printf functions. · 7767aadd
      Committed by Tom Lane
      A warning on a NetBSD box revealed to me that pg_waldump/compat.c
      is using vprintf(), which snprintf.c did not provide coverage for.
      This is not good if we want to have uniform *printf behavior, and
      it's pretty silly to omit when it's a one-line function.
      
      I also noted that snprintf.c has pg_vsprintf() but for some reason
      it was not exposed to the outside world, creating another way in
      which code might accidentally invoke the platform *printf family.
      
      Let's just make sure that we replace all eight of the POSIX-standard
      printf family.
      
      Also, upgrade plperl.h and plpython.h to make sure that they do
      their undefine/redefine rain dance for all eight, not some random
      maybe-sufficient subset thereof.
  17. 27 Sep, 2018 · 3 commits
    • Clean up *printf macros to avoid conflict with format archetypes. · 8b91d258
      Committed by Tom Lane
      We must define the macro "printf" with arguments, else it can mess
      up format archetype attributes in builds where PG_PRINTF_ATTRIBUTE
      is just "printf".  Fortunately, that's easy to do now that we're
      requiring C99; we can use __VA_ARGS__.
      
      On the other hand, it's better not to use __VA_ARGS__ for the rest
      of the *printf crew, so that one can take the addresses of those
      functions without surprises.
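
      The resulting macro style looks roughly like this (the pg_*printf wrapper
      names are assumed here, not spelled out in this message):

          /* Function-like macro, so it cannot clobber "printf" used as a
           * format archetype in __attribute__((format(printf, ...))). */
          #define printf(...) pg_printf(__VA_ARGS__)

          /* Plain object-like macros for the rest, so &snprintf etc. still work. */
          #define snprintf  pg_snprintf
          #define vsnprintf pg_vsnprintf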
      
      I'd proposed doing this some time ago, but forgot to make it happen;
      buildfarm failures subsequent to 96bf88d5 reminded me.
      
      Discussion: https://postgr.es/m/22709.1535135640@sss.pgh.pa.us
      Discussion: https://postgr.es/m/20180926190934.ea4xvzhkayuw7gkx@alap3.anarazel.de
    • Implement %m in src/port/snprintf.c, and teach elog.c to rely on that. · d6c55de1
      Committed by Tom Lane
      I started out with the idea that we needed to detect use of %m format specs
      in contexts other than elog/ereport calls, because we couldn't rely on that
      working in *printf calls.  But a better answer is to fix things so that it
      does work.  Now that we're using snprintf.c all the time, we can implement
      %m in that and we've fixed the problem.
      
      This requires also adjusting our various printf-wrapping functions so that
      they ensure "errno" is preserved when they call snprintf.c.
      
      Remove elog.c's handmade implementation of %m, and let it rely on
      snprintf to support the feature.  That should provide some performance
      gain, though I've not attempted to measure it.
      
      There are a lot of places where we could now simplify 'printf("%s",
      strerror(errno))' into 'printf("%m")', but I'm not in any big hurry
      to make that happen.
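
      The simplification mentioned above would look like this (illustration only):

          /* Before: spell out strerror() explicitly. */
          fprintf(stderr, "could not open file \"%s\": %s\n", path, strerror(errno));

          /* After: %m is expanded by src/port/snprintf.c itself. */
          fprintf(stderr, "could not open file \"%s\": %m\n", path);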
      
      Patch by me, reviewed by Michael Paquier
      
      Discussion: https://postgr.es/m/2975.1526862605@sss.pgh.pa.us
    • Always use our own versions of *printf(). · 96bf88d5
      Committed by Tom Lane
      We've spent an awful lot of effort over the years in coping with
      platform-specific vagaries of the *printf family of functions.  Let's just
      forget all that mess and standardize on always using src/port/snprintf.c.
      This gets rid of a lot of configure logic, and it will allow a saner
      approach to dealing with %m (though actually changing that is left for
      a follow-on patch).
      
      Preliminary performance testing suggests that as it stands, snprintf.c is
      faster than the native printf functions for some tasks on some platforms,
      and slower for other cases.  A pending patch will improve that, though
      cases with floating-point conversions will doubtless remain slower unless
      we want to put a *lot* of effort into that.  Still, we've not observed
      that *printf is really a performance bottleneck for most workloads, so
      I doubt this matters much.
      
      Patch by me, reviewed by Michael Paquier
      
      Discussion: https://postgr.es/m/2975.1526862605@sss.pgh.pa.us
  18. 26 Sep, 2018 · 1 commit
    • Convert elog.c's useful_strerror() into a globally-used strerror wrapper. · 26e9d4d4
      Committed by Tom Lane
      elog.c has long had a private strerror wrapper that handles assorted
      possible failures or deficiencies of the platform's strerror.  On Windows,
      it also knows how to translate Winsock error codes, which the native
      strerror does not.  Move all this code into src/port/strerror.c and
      define strerror() as a macro that invokes it, so that both our frontend
      and backend code will have all of this behavior.
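
      The arrangement described above amounts to something like the following
      (the wrapper name is an assumption, not quoted from this message):

          /* In a common port header: route every strerror() call through the
           * shared wrapper so frontend and backend behave identically. */
          extern char *pg_strerror(int errnum);
          #define strerror pg_strerror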
      
      I believe this constitutes an actual bug fix on Windows, since AFAICS
      our frontend code did not report Winsock error codes properly before this.
      However, the main point is to lay the groundwork for implementing %m
      in src/port/snprintf.c: the behavior we want %m to have is this one,
      not the native strerror's.
      
      Note that this throws away the prior use of src/port/strerror.c,
      which was to implement strerror() on platforms lacking it.  That's
      been dead code for nigh twenty years now, since strerror() was
      already required by C89.
      
      We should likewise cause strerror_r to use this behavior, but
      I'll tackle that separately.
      
      Patch by me, reviewed by Michael Paquier
      
      Discussion: https://postgr.es/m/2975.1526862605@sss.pgh.pa.us
  19. 22 Sep, 2018 · 1 commit
  20. 17 Sep, 2018 · 1 commit
    • Fix out-of-tree build for transform modules. · 60f6756f
      Committed by Andrew Gierth
      Neither plperl nor plpython installed sufficient header files to
      permit transform modules to be built out-of-tree using PGXS. Fix that
      by installing all plperl and plpython header files (other than those
      with special purposes such as generated data tables), and also install
      plpython's special .mk file for mangling regression tests.
      
      (This commit does not fix the windows install, which does not
      currently install _any_ plperl or plpython headers.)
      
      Also fix the existing transform modules for hstore and ltree so that
      their cross-module #include directives work as anticipated by commit
      df163230 et seq. This allows them to serve as working examples of
      how to reference other modules when doing separate out-of-tree builds.
      
      Discussion: https://postgr.es/m/87o9ej8bgl.fsf%40news-spur.riddles.org.uk
  21. 07 Sep, 2018 · 1 commit
  22. 05 Sep, 2018 · 1 commit
    • PL/Python: Remove use of simple slicing API · f5a6509b
      Committed by Peter Eisentraut
      The simple slicing API (sq_slice, sq_ass_slice) has been deprecated
      since Python 2.0 and has been removed altogether in Python 3, so remove
      those functions from the PLyResult class.  Instead, the non-slice
      mapping functions mp_subscript and mp_ass_subscript can take slice
      objects as an index.  Since we just pass the index through to the
      underlying list object, we already support that.  Test coverage was
      already in place.
  23. 25 Aug, 2018 · 1 commit
  24. 07 Aug, 2018 · 1 commit
  25. 03 Aug, 2018 · 1 commit
  26. 02 Aug, 2018 · 1 commit
    • Merge with PostgreSQL 9.2beta2. · 4750e1b6
      Committed by Richard Guo
      This is the final batch of commits from PostgreSQL 9.2 development,
      up to the point where the REL9_2_STABLE branch was created, and 9.3
      development started on the PostgreSQL master branch.
      
      Notable upstream changes:
      
      * Index-only scan was included in the batch of upstream commits. It
        allows queries to retrieve data only from indexes, avoiding heap access.
      
      * Group commit was added to work effectively under heavy load. Previously,
        batching of commits became ineffective as the write workload increased,
        because of internal lock contention.
      
      * A new fast-path lock mechanism was added to reduce the overhead of
        taking and releasing certain types of locks which are taken and released
        very frequently but rarely conflict.
      
      * The new "parameterized path" mechanism was added. It allows inner index
        scans to use values from relations that are more than one join level up
        from the scan. This can greatly improve performance in situations where
        semantic restrictions (such as outer joins) limit the allowed join orderings.
      
      * SP-GiST (Space-Partitioned GiST) index access method was added to support
        unbalanced partitioned search structures. For suitable problems, SP-GiST can
        be faster than GiST in both index build time and search time.
      
      * Checkpoints are now performed by a dedicated background process. Formerly
        the background writer did both dirty-page writing and checkpointing. Separating
        this into two processes allows each goal to be accomplished more predictably.
      
      * Support was added for custom plans for specific parameter values even when using
        prepared statements.
      
      * The FDW API was improved to let foreign data wrappers provide multiple access
        "paths" for their tables, allowing more flexibility in join planning.
      
      * The security_barrier option was added for views to prevent optimizations that
        might allow view-protected data to be exposed to users.
      
      * Range data types were added to store a lower and an upper bound belonging to a
        base data type.
      
      * CTAS (CREATE TABLE AS / SELECT INTO) is now treated as a utility statement. The
        SELECT query is planned during execution of the utility. To conform to this
        change, GPDB executes the utility statement only on the QD and dispatches the
        plan of the SELECT query to the QEs.
      Co-authored-by: Adam Lee <ali@pivotal.io>
      Co-authored-by: Alexandra Wang <lewang@pivotal.io>
      Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
      Co-authored-by: Asim R P <apraveen@pivotal.io>
      Co-authored-by: Daniel Gustafsson <dgustafsson@pivotal.io>
      Co-authored-by: Gang Xiong <gxiong@pivotal.io>
      Co-authored-by: Haozhou Wang <hawang@pivotal.io>
      Co-authored-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
      Co-authored-by: Jesse Zhang <sbjesse@gmail.com>
      Co-authored-by: Jinbao Chen <jinchen@pivotal.io>
      Co-authored-by: Joao Pereira <jdealmeidapereira@pivotal.io>
      Co-authored-by: Melanie Plageman <mplageman@pivotal.io>
      Co-authored-by: Paul Guo <paulguo@gmail.com>
      Co-authored-by: Richard Guo <guofenglinux@gmail.com>
      Co-authored-by: Shujie Zhang <shzhang@pivotal.io>
      Co-authored-by: Taylor Vesely <tvesely@pivotal.io>
      Co-authored-by: Zhenghua Lyu <zlv@pivotal.io>
  27. 23 Jul, 2018 · 2 commits
  28. 11 Jun, 2018 · 1 commit
  29. 05 Jun, 2018 · 1 commit
  30. 22 May, 2018 · 1 commit
  31. 21 May, 2018 · 1 commit
    • Support plpython cancellation. (#4988) · bfa232ed
      Committed by Hubert Zhang
      Add a hook framework in the signal handlers (e.g. StatementCancelHandler or die)
      so that a slow function in a third-party library can be cancelled.

      Add a PLy_handle_cancel_interrupt hook to cancel PL/Python; it uses
      Py_AddPendingCall to cancel the embedded Python interpreter asynchronously.
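
      A rough sketch of the mechanism (the function bodies are illustrative; only
      the hook name and Py_AddPendingCall come from this commit):

          #include <Python.h>

          /* Pending call executed by the Python interpreter at a safe point;
           * returning -1 with an exception set aborts the running Python code. */
          static int
          PLy_abort_python(void *arg)
          {
              PyErr_SetString(PyExc_RuntimeError, "canceling statement");
              return -1;
          }

          /* Called from the signal-handler hook to interrupt embedded Python
           * asynchronously. */
          static void
          PLy_handle_cancel_interrupt(void)
          {
              Py_AddPendingCall(PLy_abort_python, NULL);
          }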
  32. 07 May, 2018 · 1 commit
  33. 06 May, 2018 · 1 commit