1. 08 Jan 2016 (1 commit)
  2. 07 Jan 2016 (9 commits)
  3. 30 Dec 2015 (3 commits)
  4. 22 Dec 2015 (1 commit)
    • VARIADIC parameters of UDF ported from PostgreSQL. · 4665a8d5
      Committed by Yu Yang
      Users can specify VARIADIC in the parameter list when defining a UDF
      that should accept a variable number of arguments. It is easier to
      write one variadic function than several same-named functions with
      different parameter lists. An example:
      create function concat(text, variadic anyarray) returns text as $$
      select array_to_string($2, $1);
      $$ language sql immutable strict;
      
      select concat('%', 1, 2, 3, 4, 5);
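      (With this definition the five integers are gathered into a single
      anyarray, so the call above returns '1%2%3%4%5'.)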
      
      NOTE: The variadic change set is ported from upstream PostgreSQL:
      commit 517ae403
      Author: Tom Lane <tgl@sss.pgh.pa.us>
      Date:   Thu Dec 18 18:20:35 2008 +0000
      
      Code review for function default parameters patch.  Fix numerous problems as
      per recent discussions.  In passing this also fixes a couple of bugs in
      the previous variadic-parameters patch.
      
      commit 6563e9e2
      Author: Tom Lane <tgl@sss.pgh.pa.us>
      Date:   Wed Jul 16 16:55:24 2008 +0000
      
      Add a "provariadic" column to pg_proc to eliminate the remarkably expensive
      need to deconstruct proargmodes for each pg_proc entry inspected by
      FuncnameGetCandidates().  Fixes function lookup performance regression
      caused by yesterday's variadic-functions patch.
      
      In passing, make pg_proc.probin be NULL, rather than a dummy value '-',
      in cases where it is not actually used for the particular type of function.
      This should buy back some of the space cost of the extra column.
      
      commit d89737d3
      Author: Tom Lane <tgl@sss.pgh.pa.us>
      Date:   Wed Jul 16 01:30:23 2008 +0000
      
      Support "variadic" functions, which can accept a variable number of arguments
      so long as all the trailing arguments are of the same (non-array) type.
      The function receives them as a single array argument (which is why they
      have to all be the same type).
      
      It might be useful to extend this facility to aggregates, but this patch
      doesn't do that.
      
      This patch imposes a noticeable slowdown on function lookup --- a follow-on
      patch will fix that by adding a redundant column to pg_proc.
      
      Conflicts:
      	src/backend/gpopt/gpdbwrappers.cpp
  5. 21 Dec 2015 (2 commits)
  6. 20 Dec 2015 (2 commits)
    • Fix issue #74: "make cluster" in gpdemo failed (initdb failed on a 32-bit VM) · 94eacb66
      Committed by Robert Mu
      Heikki suggested that we backport the atomic operations API from
      PostgreSQL 9.5 to fix this:
      
          commit b64d92f1
          Author: Andres Freund <andres@anarazel.de>
          Date:   Thu Sep 25 23:49:05 2014 +0200
      
          Add a basic atomic ops API abstracting away platform/architecture details.
      
          Several upcoming performance/scalability improvements require atomic
          operations. This new API avoids the need to splatter compiler and
          architecture dependent code over all the locations employing atomic
          ops.
      
          For several of the potential usages it'd be problematic to maintain
          both an atomics-using implementation and one using spinlocks or
          similar. In all likelihood one of the implementations would not get
          tested regularly under concurrency. To avoid that scenario the new
          API provides an automatic fallback of atomic operations to
          spinlocks. All properties of atomic operations are maintained. This
          fallback - obviously - isn't as fast as just using atomic ops, but
          it's not bad either. For one of the future users the
          atomics-on-top-of-spinlocks implementation was actually slightly
          faster than the old purely spinlock-using implementation. That's
          important because it reduces the fear of regressing older platforms
          when improving the scalability for new ones.
      
          The API, loosely modeled after the C11 atomics support, currently
          provides 'atomic flags' and 32 bit unsigned integers. If the platform
          efficiently supports atomic 64 bit unsigned integers those are also
          provided.
      
          To implement atomics support for a platform/architecture/compiler,
          32-bit compare-and-exchange needs to be implemented. If available
          and more efficient, native support for flags, 32-bit atomic
          addition, and the corresponding 64-bit operations may also be
          provided. Additional useful atomic operations are implemented
          generically on top of these.
      
          The implementations for various versions of gcc, msvc and sun
          studio have been tested. Additional existing stub implementations
          for
          * Intel icc
          * HP-UX aCC
          * IBM xlc
          are included but have never been tested. These will likely require
          fixes based on buildfarm and user feedback.
      
          As atomic operations also require barriers for some operations the
          existing barrier support has been moved into the atomics code.
      
          Author: Andres Freund with contributions from Oskari Saarenmaa
          Reviewed-By: Amit Kapila, Robert Haas, Heikki Linnakangas and Álvaro Herrera
          Discussion: CA+TgmoYBW+ux5-8Ja=Mcyuy8=VXAnVRHp3Kess6Pn3DMXAPAEA@mail.gmail.com,
              20131015123303.GH5300@awork2.anarazel.de,
              20131028205522.GI20248@awork2.anarazel.de
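
      A minimal sketch of how the backported API is used (names per
      PostgreSQL 9.5's port/atomics.h; the counter and the two functions
      below are illustrative, not GPDB code):

          #include "postgres.h"
          #include "port/atomics.h"

          /* a counter that may be bumped concurrently by many backends */
          static pg_atomic_uint32 nrequests;

          void
          requests_init(void)
          {
              /* not itself atomic; run once before the counter is shared */
              pg_atomic_init_u32(&nrequests, 0);
          }

          uint32
          requests_bump(void)
          {
              /* atomically adds 1 and returns the value before the add */
              return pg_atomic_fetch_add_u32(&nrequests, 1);
          }

      On platforms without native atomics these calls transparently fall
      back to the spinlock-based implementation described above.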
    • c8b6698f
  7. 17 Dec 2015 (3 commits)
    • Track changes to catalogs that contain data cached in the metadata cache. · 7167ac78
      Committed by Heikki Linnakangas
      ORCA uses its own metadata cache to store information about relations,
      operators etc. Currently, we always reset the cache when planning a
      query, which is slow, unless the optimizer_release_mdcache GUC is
      turned off.
      
      To make it safe to turn optimizer_release_mdcache off, use the catalog
      cache invalidation mechanism to still reset the cache when there are
      changes to the catalogs that affect the metadata cache.
      
      The ORCA-facing interface of this is the same as in the previous
      attempt: a function that returns true/false, indicating whether there
      have been any catalog changes whatsoever since the last call.
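
      The contract can be pictured as a counter bumped by the invalidation
      callbacks (a simplified sketch; the actual GPDB function and variable
      names may differ):

          #include <stdint.h>

          /* bumped from the catalog-cache invalidation path whenever a
           * catalog backing the metadata cache changes */
          static uint64_t mdcache_inval_count = 0;
          static uint64_t mdcache_last_seen = 0;

          /* ORCA-facing check: any catalog change since the last call? */
          bool
          MDCacheNeedsReset()
          {
              bool changed = (mdcache_inval_count != mdcache_last_seen);

              mdcache_last_seen = mdcache_inval_count;
              return changed;
          }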
    • Revert "Metadata Versioning feature for the ORCA Query Optimizer." · 6453dac7
      Committed by Heikki Linnakangas
      This reverts commit 6c31a3b4. Per
      discussion, we will implement the same functionality in a simpler way.
    • Fix outdated syscache information. · ddfb89a9
      Committed by Heikki Linnakangas
      Fix the fake-tidycat definitions of the OPFAMILYOID and
      OPFAMILYAMNAMENSP syscaches. Remove the reference to INHRELID; that
      syscache was removed by one of the merged upstream 8.3 patches. Re-run
      tidycat.pl to correct the "last" id number in syscache.h.
  8. 14 Dec 2015 (2 commits)
  9. 11 Dec 2015 (1 commit)
    • Refactor and clean up memory management of the new optimizer. · 9e5dd61a
      Committed by Entong Shen
      This commit eliminates the global new/delete overrides that were causing
      compatibility problems (the Allocators.(h/cpp/inl) files have been
      completely removed). The GPOS `New()` macro is retained and works the
      same way, but has been renamed `GPOS_NEW()` to avoid confusion and
      possible name collisions. `GPOS_NEW()` works only for allocating
      singleton objects. For allocating arrays, `GPOS_NEW_ARRAY()` is
      provided. Because we no longer override the global delete,
      objects/arrays allocated by `GPOS_NEW()` and `GPOS_NEW_ARRAY()` must now
      be deleted by the new functions `GPOS_DELETE()` and
      `GPOS_DELETE_ARRAY()` respectively. All code in GPOS has been
      retrofitted for these changes, but Orca and other code that depends on
      GPOS should also be changed.
      
      Note that `GPOS_NEW()` and `GPOS_NEW_ARRAY()` should both be
      exception-safe and not leak memory when a constructor throws.
      
      Closes #166
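
      A sketch of the new call pattern (assuming the GPOS headers of this
      vintage; the types used are from GPOS, but the function itself is
      illustrative):

          #include "gpos/memory/CAutoMemoryPool.h"
          #include "gpos/string/CWStringDynamic.h"

          using namespace gpos;

          void
          DemoAllocation()
          {
              // an owned pool, released when amp goes out of scope
              CAutoMemoryPool amp;
              IMemoryPool *pmp = amp.Pmp();

              // single object: GPOS_NEW replaces the old New() macro
              CWStringDynamic *pstr = GPOS_NEW(pmp) CWStringDynamic(pmp);

              // arrays get a dedicated macro
              ULONG *rgul = GPOS_NEW_ARRAY(pmp, ULONG, 8);

              // plain delete no longer works; use the matching macros
              GPOS_DELETE(pstr);
              GPOS_DELETE_ARRAY(rgul);
          }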
  10. 09 Dec 2015 (1 commit)
    • Backport the 5 second wait if a database is in use. · f042e3e8
      Committed by Heikki Linnakangas
      As promised in the previous commit. Upstream patch:
      
      commit bd0a2609
      Author: Tom Lane <tgl@sss.pgh.pa.us>
      Date:   Fri Jun 1 19:38:07 2007 +0000
      
          Make CREATE/DROP/RENAME DATABASE wait a little bit to see if other backends
          will exit before failing because of conflicting DB usage.  Per discussion,
          this seems a good idea to help mask the fact that backend exit takes nonzero
          time.  Remove a couple of thereby-obsoleted sleeps in contrib and PL
          regression test sequences.
  11. 04 Dec 2015 (5 commits)
  12. 03 Dec 2015 (1 commit)
  13. 02 Dec 2015 (2 commits)
  14. 01 Dec 2015 (2 commits)
    • Rework SSE4.2 implementation and runtime logic to be more similar to PostgreSQL 9.5 · 6c025b52
      Committed by Garrett Thornburg
      This patch merges the PostgreSQL 9.5 implementation of SSE4.2 into GPDB.
      The SSE4.2 implementation was lifted right out of PostgreSQL without
      change to make merging later PostgreSQL releases easier.
      
      Update win32 configuration to support SSE4.2 runtime checks
      
      This change was pulled from "src/include/pg_config.h.win32" from the
      commits below.
      
      The configure.in changes, the detection of whether the CPU instructions
      needed for runtime checks are available, and some of the code moved for
      the SSE4.2 port came from the following PostgreSQL commits:
      
      commit 3dc2d62d
      Author: Heikki Linnakangas <heikki.linnakangas@iki.fi>
      Date:   Tue Apr 14 17:05:03 2015 +0300
      
          Use Intel SSE 4.2 CRC instructions where available.
      
          Modern x86 and x86-64 processors with SSE 4.2 support have special
          instructions, crc32b and crc32q, for calculating CRC-32C. They greatly
          speed up CRC calculation.
      
          Whether the instructions can be used or not depends on the compiler and the
          target architecture. If generation of SSE 4.2 instructions is allowed for
          the target (-msse4.2 flag on gcc and clang), use them. If they are not
          allowed by default, but the compiler supports the -msse4.2 flag to enable
          them, compile just the CRC-32C function with -msse4.2 flag, and check at
          runtime whether the processor we're running on supports it. If it doesn't,
          fall back to the slicing-by-8 algorithm. (With the common defaults on
          current operating systems, the runtime-check variant is what you get in
          practice.)
      
          Abhijit Menon-Sen, heavily modified by me, reviewed by Andres Freund.
      
      commit b4eb2d16
      Author: Heikki Linnakangas <heikki.linnakangas@iki.fi>
      Date:   Tue Apr 14 19:56:00 2015 +0300

          On gcc and clang, the _mm_crc32_u8 and _mm_crc32_u64 intrinsics are
          not defined at all, when not building with -msse4.2. But on icc,
          they are. So we cannot assume that if those intrinsics are defined,
          we can always use them safely; we might still need the runtime
          check.

          To fix, check if the __SSE4_2__ preprocessor symbol is defined.
          That's supposed to be defined only when the compiler is targeting a
          processor that has SSE 4.2 support.

          Per buildfarm members fulmar and okapi.
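
      The runtime-check scheme boils down to a CPUID test plus a CRC loop
      built from the intrinsics (a standalone sketch, not the GPDB code; on
      gcc/clang the file containing the CRC loop is compiled with -msse4.2):

          #include <nmmintrin.h>   /* _mm_crc32_u8 */
          #include <cpuid.h>       /* __get_cpuid, gcc/clang only */
          #include <stdint.h>
          #include <stddef.h>

          /* SSE 4.2 support is reported in CPUID leaf 1, ECX bit 20. */
          static bool
          have_sse42()
          {
              unsigned int eax, ebx, ecx, edx;

              if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
                  return false;
              return ((ecx >> 20) & 1) != 0;
          }

          /* CRC-32C of a buffer, one byte at a time via crc32b. */
          static uint32_t
          crc32c_sse42(const unsigned char *p, size_t len)
          {
              uint32_t crc = 0xFFFFFFFF;

              while (len-- > 0)
                  crc = _mm_crc32_u8(crc, *p++);
              return crc ^ 0xFFFFFFFF;
          }

      When have_sse42() returns false, the caller dispatches to the
      slicing-by-8 fallback instead.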
    • Fix row estimation of some catalog tables in the planner. · 7ff79c6d
      Committed by Kenan Yao
      When the cluster has multiple segments, the planner currently
      miscalculates the row counts of catalog tables. One example: SELECT
      COUNT(*) FROM pg_class may report that the table has 200 rows, while
      EXPLAIN SELECT * FROM pg_class estimates 200 * N rows, where N is the
      number of primary segments, even if you have already run ANALYZE on
      pg_class and restarted the session (to reload the relation cache).

      Another bug affects queries like SELECT * FROM gp_dist_random('pg_class'),
      which reads the table from every segment; in this case the planner
      should assume the size of pg_class to be 200 * N, instead of the
      current 200.
  15. 24 Nov 2015 (1 commit)
  16. 21 Nov 2015 (1 commit)
    • Avoid SEGV during backend initialization due to elog(ERROR). · a0eda0cb
      Committed by Asim Praveen
      This is fixed in PostgreSQL, and parts of those fixes have already been
      pulled in; this commit pulls in the remnants. The SEGV was found at a
      customer site where default_tablespace was configured for a role. While
      new connections were being made by the role, GRANT/REVOKE on the same
      tablespace was running concurrently. The assign hook for
      default_tablespace invoked elog(ERROR), but the target for siglongjmp
      was not initialized, causing a SEGV and PANIC.
      
      Some unit tests had to be changed to accommodate the fix.
      
      Original commits from PostgreSQL:
      88f1fd29
      79ca7ffe
  17. 19 Nov 2015 (3 commits)
    • Remove tidycat.pl machinery to specify the rowtype OID of catalog tables. · 99636d49
      Committed by Heikki Linnakangas
      The only reason we did that was to keep the OIDs the same across major
      version upgrades, because the old catalog upgrade mechanism relied on
      being able to read old catalog versions with new binaries. We no longer
      try to maintain that property; future upgrades will use pg_upgrade or
      something similar that can deal with more drastic catalog changes.
      
      This allows removing the hacks in bootparse.y, toasting.c, and elsewhere,
      reducing the chance of merge conflicts in the future.
    • Remove more upgrade machinery. · bdc1d880
      Committed by Heikki Linnakangas
      No need to force particular OIDs for these catalog tables.
    • Remove gp_upgrade_mode and related machinery. · d9b60cd2
      Committed by Heikki Linnakangas
      The current plan is to use something like pg_upgrade for future in-place
      upgrades. The gpupgrade mechanism will not scale to the kind of drastic
      catalog and other data directory layout changes that are coming as we
      merge with later PostgreSQL releases.
      
      gpupgrademirror is kept for now; we need to check whether it contains
      logic worth saving for a possible pg_upgrade-based solution later.