1. 31 March 2016, 2 commits
  2. 30 March 2016, 1 commit
  3. 29 March 2016, 2 commits
  4. 28 March 2016, 4 commits
    • Fix TupleDesc leak when COPY partition table from file, and improve the cleanup of EState · 5d9f7b46
      Committed by Kenan Yao
    • Fix DML_over_joins non-deterministic expected error condition. · 04c194d8
      Committed by Ed Espino
      Below is the error condition that was being filtered by the cdbfast test
      framework.  I have added an ignore section around the update that is
      expected to fail in the planner scenario.
      
      *** ./expected/DML_over_joins.out	2016-03-27 16:44:49.000000000 -0700
      --- ./results/DML_over_joins.out	2016-03-27 16:44:49.000000000 -0700
      ***************
      *** 1342,1348 ****
      (2 rows)
      
      update sales_par set region = 'new_region' where id in (select s.b from s, r where s.a = r.b) and day in (select a from r);
      ! ERROR:  moving tuple from partition "sales_par_1_prt_usa" to partition "sales_par_1_prt_other_regions" not supported
      select sales_par.* from sales_par where id in (select s.b from s, r where s.a = r.b) and day in (select a from r);
      id | year | month | day | region
      ----+------+-------+-----+--------
      --- 1342,1348 ----
      (2 rows)
      
      update sales_par set region = 'new_region' where id in (select s.b from s, r where s.a = r.b) and day in (select a from r);
      ! ERROR:  moving tuple from partition "sales_par_1_prt_europe" to partition "sales_par_1_prt_other_regions" not supported
      select sales_par.* from sales_par where id in (select s.b from s, r where s.a = r.b) and day in (select a from r);
      id | year | month | day | region
      ----+------+-------+-----+--------
      
      ======================================================================
    • Resolve #572: Migrate list_queries sirv_functions to ICG. · 28601ac4
      Committed by Ed Espino
      This enhances the existing "sirv_functions" ICG tests by migrating the
      "sirv_functions" test suite from the cdbfast regression "list_queries"
      test schedule to ICG.
      
      Here are some characteristics of the test suite:
      
        Schedule file: greenplum_schedule
        Schema: sirv_functions
        Set to run in parallel: yes
        Orca specific test output: no
        Test execution time: <1 minute
        Pivotal Tracker Story: #113967799
    • Resolve #571: Migrate list_queries DML_over_joins to ICG. · 8b4c3de2
      Committed by Ed Espino
      Migrate the "DML_over_joins" test suite from the cdbfast "list_queries"
      test schedule to ICG.
      
      Here are some characteristics of the test suite:
      
        Schedule file: greenplum_schedule
        Schema: DML_over_joins
        Set to run in parallel: yes
        Orca specific test output: yes
        Test execution time: <1 minute
        Pivotal Tracker Story: #113967799
  5. 26 March 2016, 2 commits
  6. 25 March 2016, 5 commits
    • Fix PR #518: arrays of composite types · 25cd73c6
      Committed by Haozhou Wang
      - unify the source code of the #518 commit with upstream
      - add regression tests (previously stored in TINC)
      - fix the parameter type of the function AddNewRelationType()
      
      Thanks to Heikki Linnakangas for the suggestion. (A SQL sketch of the
      feature follows below.)
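      A minimal SQL sketch of the kind of usage the regression tests above
      exercise; the type and table names are hypothetical:
      
          -- define a composite type, then use an array of it as a column type
          CREATE TYPE inventory_item AS (name text, price numeric);
          CREATE TABLE orders (id int, items inventory_item[]) DISTRIBUTED BY (id);
          -- build an array of composite values and read one element back
          INSERT INTO orders VALUES (1, ARRAY[ROW('widget', 9.99)::inventory_item]);
          SELECT (items[1]).name FROM orders;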
    • Adding GPDB code generation utils · 88e9baba
      Committed by Shreedhar Hardikar
      Squashed commit of the following:
      
      commit db752c0c916f36479b0c5049c671538565294c25
      Author: Nikos Armenatzoglou <narmenatzoglou@gmail.com>
      Date:   Thu Mar 24 14:01:38 2016 -0700
      
          Adding check to avoid unit test failures when sizeof(long) = 8
      
      commit 1c39c58279cf73af92811ed1b2e6f2bc22d01414
      Author: Shreedhar Hardikar <shardikar@pivotal.io>
      Date:   Thu Mar 17 19:42:47 2016 +0000
      
          --with-codegen-prefix instead of CMAKE_PREFIX_PATH
      
      commit 77978e92444580bc4e84dc3f6b3e366a09e92dba
      Author: Shreedhar Hardikar <shardikar@pivotal.io>
      Date:   Thu Mar 10 14:57:48 2016 +0000
      
          Make codegen work with GPDB build system and other minor fixes
      
            * Update CMakeLists to build SUBSYS.o
            * Update CMakeLists to build release build by default
            * Update CMakeLists to add tests in a loop.
            * Add codegen and cmake specific .gitignore
            * Update license and READMEs
            * Move codegen utils related header files back into src/backend/codegen/include/codegen
      Signed-off-by: Nikos Armenatzoglou <nikos.armenatzoglou@gmail.com>
      
      commit 1b61c12f4eb082f49a66c28957c22810e98a5074
      Author: Shreedhar Hardikar <shardikar@pivotal.io>
      Date:   Wed Mar 16 21:06:28 2016 +0000
      
          Remove codegen specific gtest source tree and use the one in gpAux.
      Signed-off-by: Nikos Armenatzoglou <nikos.armenatzoglou@gmail.com>
      
      commit 13bb2dcf3465be0ef240e316a149b2d8a125a9e9
      Author: Shreedhar Hardikar <shardikar@pivotal.io>
      Date:   Thu Mar 10 02:51:29 2016 +0000
      
          Set up GPDB to use codegen utilities
      
            * Move in codegen utility source and include files to GPDB specific locations.
            * Configure codegen with --enable-codegen and use gpcodegen namespace
      Signed-off-by: Nikos Armenatzoglou <nikos.armenatzoglou@gmail.com>
      
      commit 9a4e40f931f71cb7a82e15b51745944116bc10ab
      Author: Craig Chasseur <spoonboy42@gmail.com>
      Date:   Mon Nov 2 09:10:37 2015 -0800
      
          Introduce codegen utilities into GPDB.
      
          Support library for integrating LLVM code-generation and runtime
          compilation with Clang in a larger codebase.
      Signed-off-by: Nikos Armenatzoglou <nikos.armenatzoglou@gmail.com>
    • Default estimates for missing stats [#115534065] · e0409357
      Committed by Atri Sharma
      Change behaviour to use default estimates for relation size
      when no statistics are available.
      
      This avoids heavy statistics computation as part of query
      compilation.
      
      Customers can explicitly run ANALYZE to compute statistics before
      running queries, so that more accurate estimates are used instead
      of the defaults (see the sketch below).
      
      Currently, the default size of a table is:
      - Internal table: 100 pages
      - External table: 1000 pages
      Signed-off-by: Xin Zhang <xzhang@pivotal.io>
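      A minimal SQL sketch of the workflow described above; the table name is
      hypothetical:
      
          -- without statistics, the planner falls back to the default size estimate
          CREATE TABLE t_sales (id int, amount numeric) DISTRIBUTED BY (id);
          -- collect real statistics so the planner can use accurate estimates
          ANALYZE t_sales;
          EXPLAIN SELECT count(*) FROM t_sales;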
    • Make relation open error message user friendly · c9f97455
      Committed by Ashwin Agrawal
      Currently, when a relation open fails, it emits a very dev-centric
      error message and prints a stack trace. The message and behavior are
      confusing, as they hint at some catastrophic problem, whereas such a
      failure is usually a legitimate consequence of concurrency. Better to
      communicate that instead, and to avoid printing the stack trace.
    • Add more filespace/tablespace tests to installcheck. · 23ab1bd5
      Committed by Jimmy Yih
      This commit adds more filespace and tablespace test coverage to installcheck.
      Most of these test additions are adapted from and inspired by Pivotal's own
      internal tests. (A sketch of the objects involved follows below.)
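      A minimal sketch of the kind of objects these tests exercise, assuming a
      filespace named fastfs has already been created with the gpfilespace
      utility; all names are hypothetical:
      
          -- create a tablespace inside an existing filespace, then place a table in it
          CREATE TABLESPACE fastspace FILESPACE fastfs;
          CREATE TABLE t_in_space (id int) TABLESPACE fastspace DISTRIBUTED BY (id);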
  7. 24 March 2016, 5 commits
  8. 23 March 2016, 5 commits
  9. 22 March 2016, 14 commits
    • Fix compiler warning from commit 3ab625 · 688e9427
      Committed by Haozhou Wang
    • Remove "reorganize=true/false" from reloptions of pg_class on segments. · fd43bded
      Committed by Gang Xiong
      When an ALTER TABLE SET DISTRIBUTED command specifies "reorganize=true/false",
      it sets the reloptions of pg_class on the segments but not on the master.
      (A SQL sketch of the command follows below.)
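      A minimal SQL sketch of the command discussed above; the table and column
      names are hypothetical:
      
          -- change the distribution key; the data is redistributed across segments
          ALTER TABLE sales SET DISTRIBUTED BY (customer_id);
          -- rewrite and redistribute the data without changing the distribution policy
          ALTER TABLE sales SET WITH (REORGANIZE=true);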
    • When recovering in-doubt transactions, there were some transactions missed due to incorrect iteration logic. · 8283f748
      Committed by Gang Xiong
    • Remove leftovers from gpcheckdb · d95f7dcc
      Committed by Daniel Gustafsson
      These seem to be leftovers from when gpcheckdb lived in src/bin and
      was written in C++. Nothing references this code and the directory is
      empty, so remove it.
    • Support explicit cast for ARRAY[] constructor · 3ab62529
      Committed by Haozhou Wang
      Backport the following commits from upstream:
      
          commit adac22bf
          Author: Tom Lane <tgl@sss.pgh.pa.us>
          Date:   Fri Dec 19 05:04:35 2008 +0000
      
              When we added the ability to have zero-element ARRAY[] constructs by adding an
              explicit cast to show the intended array type, we forgot to teach ruleutils.c
              to print out such constructs properly.  Found by noting bogus output from
              recent changes in polymorphism regression test.
      
          commit 30137bde6db48a8b8c1ffc736eb239bd7381f04d
          Author: Heikki Linnakangas <heikki.linnakangas@iki.fi>
          Date:   Fri Nov 13 19:48:26 2009 +0000
      
              A better fix for the "ARRAY[...]::domain" problem. The previous patch worked,
              but the transformed ArrayExpr claimed to have a return type of "domain",
              even though the domain constraint was only checked by the enclosing
              CoerceToDomain node. With this fix, the ArrayExpr is correctly labeled with
              the base type of the domain. Per gripe by Tom Lane.
      
          commit 6b0706ac
          Author: Tom Lane <tgl@sss.pgh.pa.us>
          Date:   Thu Mar 20 21:42:48 2008 +0000
      
              Arrange for an explicit cast applied to an ARRAY[] constructor to be applied
              directly to all the member expressions, instead of the previous implementation
              where the ARRAY[] constructor would infer a common element type and then we'd
              coerce the finished array after the fact.  This has a number of benefits,
              one being that we can allow an empty ARRAY[] construct so long as its
              element type is specified by such a cast.
      
      In addition, this commit adds a 'location' field to the array-related
      structures, but these fields are not activated yet. Thanks to Heikki for
      the suggestion. (See the SQL sketch below.)
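      A minimal SQL sketch of the behavior enabled by the backported commits above:
      
          -- an empty ARRAY[] constructor is allowed when an explicit cast supplies the element type
          SELECT ARRAY[]::integer[];
          -- the cast is applied directly to the member expressions rather than to the finished array
          SELECT ARRAY['1', '2']::numeric[];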
    • Fix a bug in creating the demo cluster · 16abad91
      Committed by Andreas Scherbaum
      This bug was introduced in fd2e045f.
      
      Thanks to Shoaib Lari for pointing this out.
    • Modify the output of gpcheckcat to include the name of the test on the same line, to make grepping easier · 7673c4b4
      Committed by Chumki Roy
    • Remove ignored error message when building demo cluster · 5e0eb9f8
      Committed by Andreas Scherbaum
      This patch slightly changes the way the demo cluster is built.
      A gpinitsystem return code of 1 (warning) is ignored, and the script
      exits with return code 0. Any error code greater than that is returned.
      The Makefile no longer ignores errors.
      
      This will fix #456.
    • Make data directory and TCP ports for demo cluster configurable on the fly · fd2e045f
      Committed by Andreas Scherbaum
      The data directory and the TCP ports can be changed when "make cluster" is
      executed to build the demo cluster:
      
      DATADIRS=/tmp/gpdb-cluster MASTER_PORT=15432 PORT_BASE=25432 make cluster
      
      This fixes #441 and #442
      
      Also make the TCP port for the regression tests configurable.
    • Remove bogus function definitions. · 6d425f44
      Committed by Heikki Linnakangas
      These were added by merge commit 60f45387, but we had already backported a
      later version of this file, in which these static functions had been
      removed again.
    • Convert a few Value fields that are always of type String into "char *". · fce4fcc9
      Committed by Heikki Linnakangas
      This saves a little bit of memory when parsing massively partitioned
      CREATE TABLE statements.
    • Remove unnecessary 'need_free' argument from defGetString(). · e520d50c
      Committed by Heikki Linnakangas
      All of the callers are in places where leaking a few bytes of memory to
      the current memory context will do no harm. Either parsing, or processing
      a DDL command, or planning. So let's simplify the callers by removing the
      argument. That makes the code match the upstream again, which makes merging
      easier.
      
      These changes were originally made to reduce the memory consumption when
      doing parse analysis on a heavily partitioned table, but the previous
      commit provided a more whole-sale solution for that, so we don't need to
      nickel-and-dime every allocation anymore.
    • Use a temporary memory context while parsing parts of a CreateStmt. · 74098d27
      Committed by Heikki Linnakangas
      With a heavily partitioned table, the parse analysis of each CreateStmt
      consumes significant amounts of memory. By using a temporary memory
      context, we can reclaim some of the garbage left behind by the parsing of
      each one. In very quick testing on my laptop, this reduces the memory
      consumption of parse analysis of a massively-partitioned CREATE TABLE by
      about 10%, but YMMV.
      
      To make this work, transformAttributeEncoding() and its subroutines have
      to be more careful to not modify the input CreateStmt, as any new Nodes
      stored in it would be allocated in the temporary context and be blown away
      at the end of transformCreateStmt. That's not cool, if there are more
      partitions to process, which rely on the same CreateStmt.