1. 12 Sep 2018, 3 commits
  2. 11 Sep 2018, 9 commits
  3. 10 Sep 2018, 4 commits
  4. 08 Sep 2018, 10 commits
  5. 07 Sep 2018, 10 commits
    • 266013fa
    • Remove ojscope_relids from RestrictInfo. · 8ae102cc
      Committed by Richard Guo
      The only consumer of RestrictInfo->ojscope_relids is
      gen_implied_qual(), which uses it to tell whether a RestrictInfo is an
      outer join clause. RestrictInfo->outer_relids can be used to do the
      same job, so remove ojscope_relids from RestrictInfo and from the
      arguments of make_restrictinfo(). Upstream PostgreSQL does not have
      ojscope_relids in RestrictInfo either.
    • pg_upgrade: Fix dump file diff caused by default value with type bit varying... · 749d810a
      Committed by Paul Guo
      pg_upgrade: Fix dump file diff caused by default value with type bit varying in column of a relation. (#4823)
      
      Here is the repro case:
      psql -X -d regression -c "CREATE TABLE t111 ( a40 bit varying(5) DEFAULT '1');"
      psql -X -d regression -c "CREATE TABLE t222 ( a40 bit varying(5) DEFAULT B'1');"
      
      After pg_upgrade testing, we will see a failure, and the diff of the dump SQL files looks like this:
      
       CREATE TABLE t111 (
      -    a40 bit varying(5) DEFAULT B'1'::bit varying
      +    a40 bit varying(5) DEFAULT (B'1'::"bit")::bit varying
       ) DISTRIBUTED BY (a40);
      
      There is no diff for table t222. From a functionality perspective, the difference
      seems to mean nothing, but it is annoying for CI.
      
      This issue was found when testing pg_upgrade using the pg_regression database, which is generated by running the regression tests, so we do not need to add a test case for it.
      
      This issue also exists in the latest PostgreSQL. We have submitted a patch upstream and it is under review; we'll check this into GPDB in advance.
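      
      For reference, the deparsed default that pg_dump emits can be inspected
      directly in the catalog; here is a minimal sketch using the standard
      pg_attrdef catalog and pg_get_expr() (not part of this commit, table
      name taken from the repro above):
      
      ```
      -- Show the stored default expression as pg_dump will deparse it.
      SELECT a.attname,
             pg_get_expr(d.adbin, d.adrelid) AS default_expr
      FROM   pg_attrdef d
      JOIN   pg_attribute a
        ON   a.attrelid = d.adrelid AND a.attnum = d.adnum
      WHERE  d.adrelid = 't111'::regclass;
      ```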
      Co-authored-by: Richard Guo <riguo@pivotal.io>
      Co-authored-by: Paul Guo <pguo@pivotal.io>
    • Remove start_equiv/end_equiv support from gpdiff. · 2a5b7355
      Committed by Heikki Linnakangas
      We can manage without it. Convert them into human-oriented comments, and
      rely on the usual "compare with expected output" method for all of these
      tests.
      
      Discussion: https://groups.google.com/a/greenplum.org/d/msg/gpdb-dev/lrJFgQR-KhI/KFTnrJj2BQAJ
    • Storage in listTables function does not show anything conditionally (#5685) · d4bdd338
      Committed by BaiShaoqi
      The Storage column in the listTables function used to show nothing
      when pg_catalog.pg_class's relstorage is 'p' or 'f'.
      
      Add comment for RELSTORAGE_PARQUET in pg_class.h
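      
      For context, the underlying catalog value that the Storage column
      renders can be listed directly; a minimal sketch (the relkind filter is
      illustrative, relstorage is the GPDB-specific column discussed above):
      
      ```
      -- List each table's relstorage value, including the 'p' and 'f'
      -- cases handled by this commit.
      SELECT c.relname, c.relstorage
      FROM   pg_catalog.pg_class c
      WHERE  c.relkind = 'r'
      ORDER  BY c.relname;
      ```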
      
      Review comments from Heikki Linnakangas, Daniel Gustafsson
    • Re-enable ANY_SUBLINK pullup based on LHS input. · 0dac173a
      Committed by Richard Guo
      Previously, for the query below, we disabled ANY_SUBLINK pullup to work
      around an assertion failure caused by left_ec/right_ec not being set.
      
      ```
      select * from A where exists
           (select * from B where A.i in
               (select C.i from C where C.i = B.i));
      ```
      
      This commit sets left_ec/right_ec properly in gen_implied_qual() and
      re-enables the ANY_SUBLINK pullup shown above.
      Co-authored-by: Alexandra Wang <lewang@pivotal.io>
      Co-authored-by: Richard Guo <guofenglinux@gmail.com>
    • Fix intermittent issue with appendonly regression test · 7943d890
      Committed by Jimmy Yih
      A `CREATE TABLE AS` without a `DISTRIBUTED BY` clause will create a
      randomly distributed table when optimized by ORCA. The plan for the
      CTAS will have a redistribute motion (random) between the scan and the
      insert. Depending on your data, this style of plan could be more even,
      equally even, or less even than a hash distributed table (the kind of
      distribution usually assumed by the planner).
      
      This commit changes the test to explicitly distribute by the same column
      that the planner would guess.
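      
      A minimal sketch of the kind of change described above (table and
      column names are illustrative, not the actual test objects):
      
      ```
      -- Pin the distribution key explicitly so row placement does not
      -- depend on which optimizer (planner or ORCA) builds the CTAS plan.
      CREATE TABLE ao_copy
      WITH (appendonly = true)
      AS SELECT * FROM source_table
      DISTRIBUTED BY (id);
      ```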
      Co-authored-by: Jesse Zhang <sbjesse@gmail.com>
    • Fix typos in comments · 1fd349a7
      Committed by Daniel Gustafsson
    • Remove gp_cancel_query() and friends. · 318e6c39
      Committed by Heikki Linnakangas
      The upstream pg_cancel_backend() function should work fine on GPDB. It's
      not exactly the same: with gp_cancel_query(), you would pass the session ID
      and command ID as arguments, while pg_cancel_backend() takes a PID. But it
      seems just as good from a usability point of view. Let's avoid the
      duplicate code, and remove gp_cancel_query().
      
      Also remove the gp_cancel_query_print_log and gp_cancel_query_delay_time
      GUCs. They were not directly related to gp_cancel_query() - they would
      have an effect on cancellations caused by pg_cancel_backend() or
      statement_timeout, too. But they were marked as "developer options", and
      they printed the session and command ID, so I think they were meant to be
      used with gp_cancel_query(). I don't think anyone uses them, though, so
      let's just remove them, rather than change them to print PIDs.
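      
      For reference, a minimal sketch of the remaining upstream workflow
      (pg_stat_activity column names as in recent PostgreSQL, older releases
      use procpid/current_query; sess_id is the GPDB-specific session id
      column, and the PID value is illustrative):
      
      ```
      -- Locate the backend of interest, then cancel its query by PID.
      SELECT pid, sess_id, query
      FROM   pg_stat_activity;
      
      SELECT pg_cancel_backend(12345);  -- PID taken from the query above
      ```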
    • Fix NIL (empty) list confusion with bool · 8880c885
      Committed by Jesse Zhang
      My compiler (Clang-8) is complaining about this, and I agree:
      
      ```
      cdbgroup.c:4193:24: warning: expression which evaluates to zero treated
      as a null pointer constant of type 'List *' (aka 'struct List *')
      [-Wnon-literal-null-conversion]
      
          fref->aggdistinct = false; /* handled in preliminary aggregation */
                              ^~~~~
      ../../../src/include/c.h:195:15: note: expanded from macro 'false'
      #define false   ((bool) 0)
                      ^~~~~~~~~~
      
      ```
      
      Commit 35e60338 introduced this, most likely accidentally.
      
      Come to think of it, Postgres (until 11) isn't even using a "real
      boolean": we just `typdef char bool`, which is why we are *not* getting
      more complaints from more compilers at the default warning level.
  6. 06 Sep 2018, 4 commits
    • Handle aggr_args correctly in the parser · 06a2472b
      Committed by Taylor Vesely
      In GPDB, AGGREGATE functions can be either 'ordered' or 'hypothetical',
      and as a result the aggr_args token carries more information than in
      upstream. Except for CREATE [ORDERED] AGGREGATE, the parser extracts
      the function arguments from the aggr_args token using
      extractAggrArgTypes().
      
      The ALTER EXTENSION ADD/DROP AGGREGATE and SECURITY LIMIT syntax was
      added as part of the merge of PostgreSQL 9.0 through 9.2, so add a
      call to extract the function arguments there as well.
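      
      For reference, the syntax in question identifies the aggregate by its
      argument types, which is what the added extraction provides; a minimal
      sketch (extension and aggregate names are illustrative):
      
      ```
      -- The aggregate is resolved from its signature, which the parser
      -- builds from the aggr_args token.
      ALTER EXTENSION my_extension ADD AGGREGATE my_agg(integer);
      ALTER EXTENSION my_extension DROP AGGREGATE my_agg(integer);
      ```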
      Co-authored-by: Joao Pereira <jdealmeidapereira@pivotal.io>
    • Fix typos in comment · 956138f7
      Committed by Daniel Gustafsson
    • Tidy up pg_upgrade file header comments · 60689ddf
      Committed by Daniel Gustafsson
      In the recent refactoring which separated PostgreSQL and Greenplum
      functionality in pg_upgrade, these header comments were overlooked
      and came out of sync.
    • Integrate Gang management from portal to Dispatcher and simplify AssignGangs for init plans (#5555) · 78a4890a
      Committed by Tang Pengzhou
      * Simplify the AssignGangs() logic for init plans
      
      Previously, AssignGangs() assigned gangs for both the main plan and
      the init plans in one shot. Because init plans and the main plan are
      executed sequentially, the gangs can be reused between them; the
      function AccumSliceReq() was designed for this.
      
      This process can be simplified: since we already know that the root
      slice index will be adjusted to the corresponding init plan id, each
      init plan only needs to assign its own slices.
      
      * Integrate Gang management from portal to Dispatcher
      
      Previously, gangs were managed by the portal: freeGangsForPortal()
      was used to clean up gang resources, and DTM-related commands that
      needed a gang to dispatch a command outside of a portal used
      freeGangsForPortal() too. Multiple commands/plans/utilities may be
      executed within one portal, and every command relies on a dispatcher
      routine such as CdbDispatchCommand / CdbDispatchPlan /
      CdbDispatchUtility... to dispatch. Gangs were created by each
      dispatcher routine but, except for the primary writer gang, they were
      neither recycled nor destroyed when a routine finished. One defect of
      this is that gang resources could not be reused between dispatcher
      routines. GPDB already had an optimization for init plans: if a plan
      contained init plans, AssignGangs() was called before execution of
      any of them; it went through the whole slice tree and created the
      maximum set of gangs that both the main plan and the init plans
      needed. This was doable because init plans and the main plan are
      executed sequentially, but it also made the AssignGangs() logic
      complex, and reusing a gang that had not been cleaned up was not
      safe.
      
      Another confusing thing was that the gang and the dispatcher were
      managed separately, which caused inconsistent contexts: when a
      dispatcher state was destroyed, its gangs were not recycled, and when
      a gang was destroyed by the portal, the dispatcher state might still
      be in use and refer to the context of a destroyed gang.
      
      As described above, this commit integrates gang management into the
      dispatcher: a dispatcher state is responsible for creating and
      tracking gangs as needed and destroys them when the dispatcher state
      itself is destroyed.
      
      * Handle the case when the primary writer gang is gone
      
      When members of the primary writer gang are gone, the writer gang is
      destroyed immediately (primaryWriterGang is set to NULL) when a
      dispatcher routine (e.g. CdbDispatchCommand) finishes. So when
      dispatching a two-phase DTM/DTX related command, the QD doesn't know
      the writer gang is gone and may get unexpected errors like 'savepoint
      not exist', 'subtransaction level not match', or 'temp file not
      exist'.
      
      Previously, primaryWriterGang was not reset when DTM/DTX commands
      started even if it was pointing to invalid segments, so those DTM/DTX
      commands were not actually sent to the segments, and a normal error
      was reported on the QD, looking like 'could not connect to segment:
      initialization of segworker'.
      
      So we need a way to inform the global transaction that its writer
      gang has been lost, so that when aborting the transaction the QD can:
      1. disconnect all reader gangs; this is useful to skip dispatching
      "ABORT_NO_PREPARE".
      2. reset the session and drop temp files, because the temp files on
      the segments are gone.
      3. report an error when dispatching the "rollback savepoint" DTX,
      because the savepoint on the segments is gone.
      4. report an error when dispatching the "abort subtransaction" DTX,
      because the subtransaction is rolled back when the writer segment is
      down.