  1. 23 December 2015 (6 commits)
  2. 22 December 2015 (5 commits)
    • 6946ee2c
    • fix dead loop issue when file is empty · 69be90c5
      Yandong Yao committed
    • fix failed orca-case: fix information_schema.out ans files · 358b99c3
      Yu Yang committed
    • VARIADIC parameters of UDF ported from PostgreSQL. · 4665a8d5
      Yu Yang committed
      Users can specify VARIADIC in the parameter list when defining a UDF if
      they want variadic parameters. It is easier to write a single variadic
      function than several same-named functions with different parameter
      lists. An example of using variadic:
      create function concat(text, variadic anyarray) returns text as $$
      select array_to_string($2, $1);
      $$ language sql immutable strict;
      
      select concat('%', 1, 2, 3, 4, 5);
      
      NOTE: The variadic change set is ported from upstream PostgreSQL:
      commit 517ae403
      Author: Tom Lane <tgl@sss.pgh.pa.us>
      Date:   Thu Dec 18 18:20:35 2008 +0000
      
      Code review for function default parameters patch.  Fix numerous problems as
      per recent discussions.  In passing this also fixes a couple of bugs in
      the previous variadic-parameters patch.
      
      commit 6563e9e2
      Author: Tom Lane <tgl@sss.pgh.pa.us>
      Date:   Wed Jul 16 16:55:24 2008 +0000
      
      Add a "provariadic" column to pg_proc to eliminate the remarkably expensive
      need to deconstruct proargmodes for each pg_proc entry inspected by
      FuncnameGetCandidates().  Fixes function lookup performance regression
      caused by yesterday's variadic-functions patch.
      
      In passing, make pg_proc.probin be NULL, rather than a dummy value '-',
      in cases where it is not actually used for the particular type of function.
      This should buy back some of the space cost of the extra column.
      
      commit d89737d3
      Author: Tom Lane <tgl@sss.pgh.pa.us>
      Date:   Wed Jul 16 01:30:23 2008 +0000
      
      Support "variadic" functions, which can accept a variable number of arguments
      so long as all the trailing arguments are of the same (non-array) type.
      The function receives them as a single array argument (which is why they
      have to all be the same type).
      
      It might be useful to extend this facility to aggregates, but this patch
      doesn't do that.
      
      This patch imposes a noticeable slowdown on function lookup --- a follow-on
      patch will fix that by adding a redundant column to pg_proc.
      
      Conflicts:
      	src/backend/gpopt/gpdbwrappers.cpp
  3. 21 December 2015 (4 commits)
  4. 20 December 2015 (3 commits)
    • Fix issue #74, make cluster in gpdemo failed (initdb failed on a 32-bit VM) · 94eacb66
      Robert Mu committed
      Heikki suggested that we backport the atomic operations support from
      PostgreSQL 9.5 to fix this:
      
          commit b64d92f1
          Author: Andres Freund <andres@anarazel.de>
          Date:   Thu Sep 25 23:49:05 2014 +0200
      
          Add a basic atomic ops API abstracting away platform/architecture details.
      
          Several upcoming performance/scalability improvements require atomic
          operations. This new API avoids the need to splatter compiler and
          architecture dependent code over all the locations employing atomic
          ops.
      
          For several of the potential usages it'd be problematic to maintain
          both, a atomics using implementation and one using spinlocks or
          similar. In all likelihood one of the implementations would not get
          tested regularly under concurrency. To avoid that scenario the new API
          provides a automatic fallback of atomic operations to spinlocks. All
          properties of atomic operations are maintained. This fallback -
          obviously - isn't as fast as just using atomic ops, but it's not bad
          either. For one of the future users the atomics ontop spinlocks
          implementation was actually slightly faster than the old purely
          spinlock using implementation. That's important because it reduces the
          fear of regressing older platforms when improving the scalability for
          new ones.
      
          The API, loosely modeled after the C11 atomics support, currently
          provides 'atomic flags' and 32 bit unsigned integers. If the platform
          efficiently supports atomic 64 bit unsigned integers those are also
          provided.
      
          To implement atomics support for a platform/architecture/compiler for
          a type of atomics 32bit compare and exchange needs to be
          implemented. If available and more efficient native support for flags,
          32 bit atomic addition, and corresponding 64 bit operations may also
          be provided. Additional useful atomic operations are implemented
          generically ontop of these.
      
          The implementation for various versions of gcc, msvc and sun studio have
          been tested. Additional existing stub implementations for
          * Intel icc
          * HUPX acc
          * IBM xlc
          are included but have never been tested. These will likely require
          fixes based on buildfarm and user feedback.
      
          As atomic operations also require barriers for some operations the
          existing barrier support has been moved into the atomics code.
      
          Author: Andres Freund with contributions from Oskari Saarenmaa
          Reviewed-By: Amit Kapila, Robert Haas, Heikki Linnakangas and Álvaro Herrera
          Discussion: CA+TgmoYBW+ux5-8Ja=Mcyuy8=VXAnVRHp3Kess6Pn3DMXAPAEA@mail.gmail.com,
              20131015123303.GH5300@awork2.anarazel.de,
              20131028205522.GI20248@awork2.anarazel.de
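      For illustration only (not part of the commit message): with this API
      backported, backend code can use a lock-free counter roughly as in the
      sketch below. The names counter_init and counter_bump are made up here;
      on platforms without native atomics the same calls transparently fall
      back to the spinlock-based implementation described above.

      #include "postgres.h"
      #include "port/atomics.h"

      static pg_atomic_uint32 counter;

      void
      counter_init(void)
      {
          /* must run once before any concurrent access to the counter */
          pg_atomic_init_u32(&counter, 0);
      }

      uint32
      counter_bump(void)
      {
          /* atomically add 1; returns the value held before the addition */
          return pg_atomic_fetch_add_u32(&counter, 1);
      }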
    • OSX commercial builds, generate i686 instructions · 4f80ebf5
      Ed Espino committed
      For OSX commercial builds, instruct the gcc compiler to produce
      i686 (CPU) instructions.  This is needed to support an upstream
      PostgreSQL merge which uses atomic instructions introduced in one of
      those later CPUs.

      Previously, there was no autoconf test for those instructions, and we
      just compiled hand-crafted assembly to use those instructions anyway,
      even though the compiler was otherwise targeting the older CPUs.
      
      reference: https://wiki.gentoo.org/wiki/GCC_optimization#-march
      
        Different CPUs have different capabilities, support different
        instruction sets, and have different ways of executing code. The
        -march flag will instruct the compiler to produce specific code for
        the system's CPU, with all its capabilities, features, instruction
        sets, quirks, and so on.
    • c8b6698f
  5. 19 December 2015 (1 commit)
  6. 18 December 2015 (3 commits)
    • Fix bug in IN-to-join conversion. · f51ff613
      Heikki Linnakangas committed
      This bug was introduced in merge commit 1f4ad703, by a mishap in merging
      the upstream changes to convert_IN_to_join. The constructed InInfoClause
      entry was accidentally added to the list of in-clauses twice. That was
      usually harmless, but if the in-clause was later pulled up from a subquery,
      the code that adjusts the varnos while merging the range tables of the
      subquery and its parent ran twice. In doing so, it adjusted the varno
      twice, pointing it to a wrong or non-existent range table entry.
    • Remove unused gnutar command lookup · b287a990
      Daniel Gustafsson committed
      While GNU tar is used throughout GPDB, this definition doesn't seem to
      be used, so avoid spending cycles performing the lookup.
    • Fixed coverity issue about not freeing a variable · 28637e54
      Nikhil Kak committed
  7. 17 December 2015 (6 commits)
    • Merge commit '978fff79' from PostgreSQL 8.3 · 4b672210
      Daniel Gustafsson committed
      Almost everything in this set of commits was already merged via the XML merge
      from 9.1 which is in the tree already. The largeobject tests included in this
      merge (parallel and serial schedule) are currently left disabled due to test
      breakage; this needs to be handled in subsequent commits. The test is altered
      to distribute around the oid column (which is also the default choice by GPDB
      in case no explicit distribution column is specified).
      
      Conflicts:
        Makefile
        doc/src/sgml/installation.sgml
        doc/src/sgml/manage-ag.sgml
        src/backend/Makefile
        src/backend/access/Makefile
        src/backend/access/common/Makefile
        src/backend/access/gin/Makefile
        src/backend/access/gist/Makefile
        src/backend/access/hash/Makefile
        src/backend/access/heap/Makefile
        src/backend/access/index/Makefile
        src/backend/access/nbtree/Makefile
        src/backend/bootstrap/Makefile
        src/backend/commands/Makefile
        src/backend/executor/Makefile
        src/backend/executor/execQual.c
        src/backend/lib/Makefile
        src/backend/libpq/Makefile
        src/backend/main/Makefile
        src/backend/nodes/Makefile
        src/backend/optimizer/Makefile
        src/backend/optimizer/geqo/Makefile
        src/backend/optimizer/path/Makefile
        src/backend/optimizer/plan/Makefile
        src/backend/optimizer/prep/Makefile
        src/backend/optimizer/util/Makefile
        src/backend/port/darwin/Makefile
        src/backend/port/nextstep/Makefile
        src/backend/port/sunos4/Makefile
        src/backend/port/win32/Makefile
        src/backend/postmaster/Makefile
        src/backend/rewrite/Makefile
        src/backend/storage/Makefile
        src/backend/storage/buffer/Makefile
        src/backend/storage/file/Makefile
        src/backend/storage/freespace/Makefile
        src/backend/storage/ipc/Makefile
        src/backend/storage/large_object/Makefile
        src/backend/storage/lmgr/Makefile
        src/backend/storage/page/Makefile
        src/backend/storage/smgr/Makefile
        src/backend/tcop/Makefile
        src/backend/utils/adt/Makefile
        src/backend/utils/adt/xml.c
        src/backend/utils/cache/Makefile
        src/backend/utils/error/Makefile
        src/backend/utils/error/elog.c
        src/backend/utils/hash/Makefile
        src/backend/utils/init/Makefile
        src/backend/utils/mb/Makefile
        src/backend/utils/mmgr/Makefile
        src/backend/utils/resowner/Makefile
        src/backend/utils/sort/Makefile
        src/backend/utils/time/Makefile
        src/bin/Makefile
        src/bin/initdb/initdb.c
        src/bin/psql/large_obj.c
        src/include/catalog/catversion.h
        src/include/utils/xml.h
        src/interfaces/Makefile
        src/interfaces/ecpg/compatlib/Makefile
        src/interfaces/ecpg/ecpglib/Makefile
        src/interfaces/ecpg/pgtypeslib/Makefile
        src/interfaces/ecpg/test/Makefile
        src/interfaces/ecpg/test/Makefile.regress
        src/test/regress/expected/xml.out
        src/test/regress/expected/xml_1.out
        src/test/regress/parallel_schedule
        src/test/regress/sql/xml.sql
    • Copy perl modules (atmsort.pm & explain.pm) to bin · b6019439
      Ed Espino committed
      Additional perl module copy target in gpMgmt/Makefile:
      supporting the commercial build, the new perl modules (atmsort.pm and
      explain.pm) used by gpdiff.pl need to be copied to the bin directory.
    • Copy perl modules (atmsort.pm & explain.pm) to bin · bf30b6a2
      Ed Espino committed
      Supporting the commercial build, the new perl modules (atmsort.pm and
      explain.pm) used by gpdiff.pl need to be copied to the bin directory.
    • Track changes to catalogs that contain data cached in the metadata cache. · 7167ac78
      Heikki Linnakangas committed
      ORCA uses its own metadata cache to store information about relations,
      operators, etc. Currently, we always reset the cache when planning a query
      (which is slow), unless the optimizer_release_mdcache GUC is turned off.

      To make it safe to turn optimizer_release_mdcache off, use the catalog
      cache invalidation mechanism to still reset the cache when there are
      changes to the catalogs that affect the metadata cache.

      The ORCA-facing interface of this is the same as in the previous attempt:
      a function that returns true/false indicating whether there have been any
      catalog changes whatsoever since the last call.
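      A minimal sketch of what such an interface could look like, using the
      backend's syscache invalidation callbacks. The names below
      (mdcache_inval_callback and friends) are illustrative, not the actual
      GPDB symbols, and the callback signature shown is the one used by recent
      upstream PostgreSQL; the 8.3-based GPDB tree may differ slightly.

      #include "postgres.h"
      #include "utils/inval.h"
      #include "utils/syscache.h"

      static bool mdcache_catalog_changed = false;

      /* fired whenever a watched catalog's syscache is invalidated */
      static void
      mdcache_inval_callback(Datum arg, int cacheid, uint32 hashvalue)
      {
          mdcache_catalog_changed = true;
      }

      /* register callbacks once, e.g. at backend startup */
      void
      mdcache_register_callbacks(void)
      {
          CacheRegisterSyscacheCallback(RELOID, mdcache_inval_callback, (Datum) 0);
          CacheRegisterSyscacheCallback(OPEROID, mdcache_inval_callback, (Datum) 0);
      }

      /* ORCA-facing: any catalog changes since the last call? */
      bool
      mdcache_has_catalog_changes(void)
      {
          bool changed = mdcache_catalog_changed;

          mdcache_catalog_changed = false;
          return changed;
      }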
    • Revert "Metadata Versioning feature for the ORCA Query Optimizer." · 6453dac7
      Heikki Linnakangas committed
      This reverts commit 6c31a3b4. Per
      discussion, we will implement the same functionality in a simpler way.
    • Fix outdated syscache information. · ddfb89a9
      Heikki Linnakangas committed
      Fix the fake-tidycat definitions of the OPFAMILYOID and OPFAMILYAMNAMENSP
      syscaches. Remove the reference to INHRELID; that syscache was removed by
      one of the merged upstream 8.3 patches. Re-run tidycat.pl to correct the
      "last" id number in syscache.h.
  8. 15 December 2015 (1 commit)
  9. 14 December 2015 (5 commits)
    • Metadata Versioning feature for the ORCA Query Optimizer. · 6c31a3b4
      George Caragea committed
      Added a generation-based Metadata Versioning mechanism
      which will be used by ORCA to cache and invalidate catalog
      data in its Metadata Cache.
      Versioning is disabled by default at this point, until the
      Metadata Cache eviction policy is completed.
    • Replace pgsql-bugs@postgresql.org with bugs@greenplum.org · b7365f58
      Andreas Scherbaum committed
      This patch replaces all occurrences of pgsql-bugs@postgresql.org with
      bugs@greenplum.org. Pointers to the pgsql-bugs mailing list in the code
      remain as they are. Regression and BugBuster tests are also patched.
    • Refactor generated test cases slightly, to speed up the tests. · acecf14f
      Heikki Linnakangas committed
      The main difference is that setting up all the test tables and types is
      done in one large transaction, to reduce the overhead of transaction
      commits. Another is that all the objects are created in a separate schema,
      and we don't bother to drop them all at the end of the tests. These
      changes shave off a few seconds from the test runtime on my laptop.
    • Move auto-generated part of alter_distribution_policy to separate file. · dc3fe37d
      Heikki Linnakangas committed
      This makes it easier to modify the script used to generate the tests (which
      I will do in the next commit).
    • Merge commit '9a83bd50' from PostgreSQL 8.3 · f41f6c10
      Daniel Gustafsson committed
      This set of commits only brings superficial changes, with all functionality
      retained from HEAD due to previous merging of the functionality.
      
      Conflicts:
            doc/TODO
            doc/src/FAQ/TODO.html
            doc/src/sgml/config.sgml
            src/backend/utils/adt/pg_lzcompress.c
            src/backend/utils/adt/version.c
            src/backend/utils/adt/xml.c
            src/backend/utils/misc/guc.c
            src/include/utils/xml.h
            src/test/regress/expected/xml.out
            src/test/regress/expected/xml_1.out
            src/test/regress/pg_regress.c
            src/test/regress/sql/xml.sql
  10. 11 December 2015 (2 commits)
    • Rename local variables to 'typename', to match upstream. · 14b78f40
      Heikki Linnakangas committed
      These were renamed to 'typname' in GPDB a long time ago because 'typename'
      is a reserved keyword in C++. That's a valid concern for header files that
      need to be included in C++ code (like src/backend/gpopt), but it's not
      necessary for local variables that appear in .c files, as long as we're
      careful to compile those files as C code, not C++. So rename local
      variables back to 'typename', to reduce diff vs. upstream and make merging
      slightly easier.
      
      In the upstream header files, the 'typename' fields were renamed to
      'typeName', in a later PostgreSQL version. In GPDB, we've renamed them
      to 'typname' instead. Would be good to adopt the upstream naming at some
      point, but not in this patch.
    • Refactor and cleanup of memory management of the new optimizer. · 9e5dd61a
      Entong Shen committed
      This commit eliminates the global new/delete overrides that were causing
      compatibility problems (the Allocators.(h/cpp/inl) files have been
      completely removed). The GPOS `New()` macro is retained and works the
      same way, but has been renamed `GPOS_NEW()` to avoid confusion and
      possible name collisions. `GPOS_NEW()` works only for allocating
      singleton objects. For allocating arrays, `GPOS_NEW_ARRAY()` is
      provided. Because we no longer override the global delete,
      objects/arrays allocated by `GPOS_NEW()` and `GPOS_NEW_ARRAY()` must now
      be deleted by the new functions `GPOS_DELETE()` and
      `GPOS_DELETE_ARRAY()` respectively. All code in GPOS has been
      retrofitted for these changes, but Orca and other code that depends on
      GPOS should also be changed.
      
      Note that `GPOS_NEW()` and `GPOS_NEW_ARRAY()` should both be
      exception-safe and not leak memory when a constructor throws.
      
      Closes #166
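      A rough usage sketch of the pattern described above (illustrative only:
      CFoo, pmp and the array size are placeholder names, and the relevant GPOS
      memory-pool headers are assumed to be in scope):

      // Allocation goes through the GPOS macros instead of global new/delete.
      CFoo *pfoo = GPOS_NEW(pmp) CFoo(/* ctor args */);
      ULONG *rgul = GPOS_NEW_ARRAY(pmp, ULONG, 16);

      // ... use the object and the array ...

      // Release with the matching macros, since the global operator delete
      // is no longer overridden.
      GPOS_DELETE(pfoo);
      GPOS_DELETE_ARRAY(rgul);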
  11. 10 December 2015 (4 commits)
    • Move remaining test queries from aggfunc to olap_group. · 1022be29
      Heikki Linnakangas committed
      I'm trying to eliminate the bugbuster test suite in the long run, so I'm
      moving any tests from there that seem genuinely useful to the main
      regression suite.
    • Remove unnecessary cruft from two bugbuster tests. · 5eec7988
      Heikki Linnakangas committed
      Some of the tests were broken because the necessary test tables were
      missing. Just remove those, as they didn't test anything that wasn't
      tested elsewhere anyway.
    • gpinitsystem -p does not produce correct postgresql.conf file · 3636746b
      Kenan Yao committed
      When the add-on config file contains GUCs which exist in postgresql.conf.sample
      and are commented out (e.g., #max_resource_queues), these GUC settings would
      not take effect because an incorrect postgresql.conf would be generated by
      gpinitsystem.
    • Add feature to backup and restore catalog statistics. · c52bb9cf
      Jamie McAtamney committed
      A --dump-stats flag has been added to gpcrondump.py to dump the
      pg_class table tuple counts and pg_statistic catalog statistics
      to a SQL file, to later be restored manually or with gpdbrestore.
      
      A --restore-stats flag has been added to gpdbrestore to restore
      the dumped statistics.  Passing it the argument "include", or
      passing it no argument, will restore the statistics in addition
      to performing a normal restore.  Passing it the argument "only"
      will restore statistics but will not restore anything else; if
      any tables do not exist, using this option will skip restoring
      statistics to those tables, print a warning for each, and then
      continue restoring statistics to other tables.
      
      If --dump-stats or --restore-stats is used in conjunction with
      other flags that include or exclude certain tables or schemas,
      the statistics dump or restore will be filtered in the same way.
      
      Authors: James McAtamney and Jimmy Yih