1. 05 May 2017, 16 commits
  2. 04 May 2017, 20 commits
    • H
      Clean up useless vars in prepare_plan_for_sharing · d10b9bbb
      Haisheng Yuan committed
      Remove xslice in prepare_plan_for_sharing, since it is always false. The
      cross-slice share type will be updated in shareinput_mutator_xslice_2 and
      shareinput_mutator_xslice_3.
      d10b9bbb
    • Y
      Remove extra blank line of gpstart · 65946861
      yanchaozhong committed
      65946861
    • D
      Move gpmapreduce demo from gpMgmt to gpAux · c7be472e
      Daniel Gustafsson committed
      The demo makes sense to keep, but colocated with gpmapreduce rather than in
      gpMgmt where it seems out of place. Also avoid installing it.
      c7be472e
    • D
      Remove unused gpmapreduce tests · 26470524
      Daniel Gustafsson committed
      The bundled tests no longer worked, and looking at their setup they
      probably haven't worked for some time. We already have tests for
      gpmapreduce in src/test/regress covering the use cases (and expected
      lifespan of the product) so kill these rather than resuscitate.
      26470524
    • M
      Remove figleaf · 6cc40bbd
      Marbin Tan committed
      Signed-off-by: C.J. Jameson <cjameson@pivotal.io>
      6cc40bbd
    • M
      Remove gpcoverage · 536554fd
      Marbin Tan committed
      Signed-off-by: Larry Hamel <lhamel@pivotal.io>
      536554fd
    • L
      Refactor whitespace · e0c3a8da
      Larry Hamel committed
      Signed-off-by: Marbin Tan <mtan@pivotal.io>
      e0c3a8da
    • A
      Cleanup UnpackCheckPointRecord() implementation. · eea17ddf
      Ashwin Agrawal committed
      Simplify the implementation of UnpackCheckPointRecord() by aligning it to our
      current checkpoint structure and removing all dead old code.
      eea17ddf
    • A
      Remove incorrect assertion from heap_deform_tuple. · 36b64cb6
      Ashwin Agrawal committed
      Fixes #2115.
      36b64cb6
    • A
      Fixing compiler warning for uninitialized desc. · bc973d40
      Ashwin Agrawal committed
      Purely a change to keep the compiler happy; zero impact, as
      FileRepOperationDescription_u is unused for those OperationTypes.
      bc973d40
    • L
      gpperfmon: remove filerep stat collection · 93638f09
      Larry Hamel committed
      Signed-off-by: Marbin Tan <mtan@pivotal.io>
      93638f09
    • L
      gpperfmon: remove health history and other DCA appliance code (#2340) · e5f54069
      Larry Hamel committed
      * gpperfmon: remove health history
      * gpperfmon: remove code that pertains to DCA appliances
      Signed-off-by: C.J. Jameson <cjameson@pivotal.io>
      e5f54069
    • A
      Fix flakiness for FTS test primary_sync_mirror_cannot_keepup_failover. · 7edb8ac9
      Ashwin Agrawal committed
      Set the fault to always hit using `-o 0` instead of only once. Also, drop
      running the SQL command. Instead, make it clear that the `heartbeat` operation
      between primary and mirror (triggered every minute) is what the test relies on
      to validate: if the mirror does not respond within `gp_segment_connect_timeout`,
      the primary should mark it down and transition to change tracking.
      7edb8ac9
    • H
      Remove unused argument mt_bind in functions of memtup.h and callers · 102ef73a
      Haisheng Yuan committed
      These functions are also removed:
      memtuple_set_size
      memtuple_clear_hasnull
      memtuple_clear_islarge
      memtuple_clear_hasext
      memtuple_aligned_clone
      102ef73a
    • M
      Address pylint warnings and errors (gpconfig | gpexpand) (#2348) · e72d01d0
      Marbin Tan committed
      * Address pylint warnings and errors
      
      - Fix whitespace and indentation
      - Remove unused variables
      - Fix syntax errors
      Signed-off-by: Marbin Tan <mtan@pivotal.io>
      e72d01d0
    • H
      Fix bug that partition selector may generate incomplete results for NLJ · befc063b
      Haisheng Yuan committed
      By setting material->cdb_strict to true if the child of the Material is a
      partition selector. Before this patch, the following query returned incomplete
      results quite often:
      
      create table t(id int, a int);
      create table pt(id int, b int) DISTRIBUTED BY (id)
      PARTITION BY RANGE(b) (START (0) END (5) EVERY (1));
      insert into t select i, i from generate_series(0,4) i;
      insert into pt select i, i from generate_series(0,4) i;
      analyze t;
      analyze pt;
      set enable_hashjoin=off;
      set enable_nestloop=on;
      select * from t, pt where a = b;
      
      In a 3-segment cluster, it may return different results, as shown below:
      hyuan=# select * from t, pt where a = b;
       id | a | id | b
      ----+---+----+---
        0 | 0 |  0 | 0
        1 | 1 |  1 | 1
        2 | 2 |  2 | 2
      (3 rows)
      
      hyuan=# select * from t, pt where a = b;
       id | a | id | b
      ----+---+----+---
        3 | 3 |  3 | 3
        4 | 4 |  4 | 4
      (2 rows)
      
      hyuan=# select * from t, pt where a = b;
       id | a | id | b
      ----+---+----+---
        3 | 3 |  3 | 3
        4 | 4 |  4 | 4
        0 | 0 |  0 | 0
        1 | 1 |  1 | 1
        2 | 2 |  2 | 2
      (5 rows)
      
      But only the last one is the correct result.
      
      The plan for the above query is:
      -------------------------------------------------------------------
       Gather Motion 3:1  (slice2; segments: 3)
         ->  Nested Loop  (cost=2.27..9.00 rows=2 width=16)
               Join Filter: t.a = public.pt.b
               ->  Append  (cost=0.00..5.05 rows=2 width=8)
                     ->  Result  (cost=0.00..1.01 rows=1 width=8)
                           One-Time Filter: PartSelected
                           ->  Seq Scan on pt_1_prt_1 pt
                     ->  Result  (cost=0.00..1.01 rows=1 width=8)
                           One-Time Filter: PartSelected
                           ->  Seq Scan on pt_1_prt_2 pt
                     ->  Result  (cost=0.00..1.01 rows=1 width=8)
                           One-Time Filter: PartSelected
                           ->  Seq Scan on pt_1_prt_3 pt
                     ->  Result  (cost=0.00..1.01 rows=1 width=8)
                           One-Time Filter: PartSelected
                           ->  Seq Scan on pt_1_prt_4 pt
                     ->  Result  (cost=0.00..1.01 rows=1 width=8)
                           One-Time Filter: PartSelected
                           ->  Seq Scan on pt_1_prt_5 pt
               ->  Materialize  (cost=2.27..2.42 rows=5 width=8)
                     ->  Partition Selector for pt (dynamic scan id: 1)
                           Filter: t.a
                           ->  Broadcast Motion 3:3  (slice1; segments: 3)
                                 ->  Seq Scan on t
       Settings:  enable_hashjoin=off; enable_nestloop=on
       Optimizer status: legacy query optimizer
      
      The data distribution for tables t and pt in a 3-segment environment is:
      hyuan=# select gp_segment_id, * from t;
       gp_segment_id | id | a
      ---------------+----+---
                   1 |  3 | 3
                   1 |  4 | 4
                   0 |  0 | 0
                   0 |  1 | 1
                   0 |  2 | 2
      (5 rows)
      
      hyuan=# select gp_segment_id, * from pt;
       gp_segment_id | id | b
      ---------------+----+---
                   0 |  0 | 0
                   0 |  1 | 1
                   0 |  2 | 2
                   1 |  3 | 3
                   1 |  4 | 4
      (5 rows)
      
      Tuples {0,1,2} of t and pt are in segment 0, tuples {3,4} of t and pt are in
      segment 1. Segment 2 has no data for t and pt.
      
      In this query, the planner decides to prefetch the inner child to avoid a
      deadlock hazard, and cdb_strict of the Material is set to false. Let's see how
      the query goes in segment 0.
      
      1. The inner child of the nestloop join, Material, fetches one tuple from the
      partition selector and then outputs it. Let's assume the output order of the
      partition selector/broadcast motion is {0,1,2,3,4}, so the 1st tuple output by
      the partition selector and Material is 0.
      
      2. The partition selector decides that the selected partition for table pt is
      pt_1_prt_1, because t.a = pt.b = 0 in this partition. The outer child of the
      nestloop join, Append, fetches one tuple from that partition, with pt.b=0.
      
      3. The nestloop join continues to execute the inner child's Material to fetch
      the other tuples, 1,2,3,4, but none of these tuples from t match the join
      condition, because pt.b=0. No more tuples are output by the nestloop join in
      this round of the loop. But all the partitions of pt are now matched and
      selected.
      
      4. The nestloop join then fetches another tuple from the outer child, from
      pt_1_prt_2, which is 1; it matches a tuple from the inner child, so 1 is
      output. It then fetches a tuple from pt_1_prt_3, which is 2: matched, output
      2. But pt_1_prt_4 and pt_1_prt_5 have no data in this segment, so the output
      ends with {0,1,2} in segment 0.
      
      But in segment 1, let's still assume the tuple output order of the partition
      selector/broadcast motion is {0,1,2,3,4}. Since the first tuple output from the
      inner child is 0, only pt_1_prt_1 is selected. But when the nestloop join tries
      to fetch a tuple from the outer child, which in this case reads from
      pt_1_prt_1, it gets no tuple, because pt_1_prt_1 is empty in this segment. So
      the nestloop join decides that since it can't fetch any tuple from the outer
      child, the outer side must be empty; there is no need to execute the join, so
      it returns NULL and finishes directly.
      
      Segment 2 has no data for t and pt, so no tuple is output. The final result
      gathered on the master segment is therefore {0,1,2} in this case. But if the
      broadcast motion output tuple order is {3,4,0,1,2}, the final result may be
      {3,4}. And if the broadcast motion output tuple order on segment 0 is
      {0,1,2,3,4} and on segment 1 is {3,4,0,1,2}, then the final result on the
      master is {0,1,2,3,4}, which is correct.
      
      The bug is fixed by setting cdb_strict of the Material to true when the
      planner generates a partition selector for the inner child of a nestloop join:
      the Material will then fetch all the tuples from its child and materialize
      them before emitting any tuple. This makes sure the partitions of pt are
      selected correctly.
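The strict vs. non-strict behavior described above can be illustrated with a toy Python model. This is only a sketch of the execution order the message describes, not GPDB code; it assumes partition i holds the rows with b = i, matching the EVERY (1) range partitioning of pt:

```python
def toy_nlj_segment(t_broadcast, pt_parts, strict):
    """Toy model of the plan above on one segment.

    t_broadcast: t tuples in broadcast-motion arrival order.
    pt_parts:    per-partition rows of pt on this segment; a row with
                 value v lives in pt_parts[v].
    strict:      True models cdb_strict Material (drain the inner child,
                 i.e. the partition selector, before the outer scan).
    """
    t_vals = set(t_broadcast)
    # Partitions selected before the outer Append is first scanned:
    selected = set(t_vals) if strict else {t_broadcast[0]}  # prefetch 1
    got_outer = False
    out = []
    for p in range(len(pt_parts)):      # Append walks partitions in order
        if p not in selected:           # One-Time Filter: PartSelected
            continue
        for b in pt_parts[p]:
            if not got_outer:
                got_outer = True
                # Joining the first outer tuple streams the rest of the
                # inner side through the selector, so all partitions of
                # pt become selected -- but Append never rewinds.
                selected |= t_vals
            if b in t_vals:             # join condition t.a = pt.b
                out.append(b)
    return out                          # [] models the early-exit bug

t = [0, 1, 2, 3, 4]                     # assumed broadcast order
seg0 = [[0], [1], [2], [], []]          # pt rows per partition on seg 0
seg1 = [[], [], [], [3], [4]]           # pt rows per partition on seg 1

# Non-strict: seg 1 selects only pt_1_prt_1 (empty there) and bails out.
print(toy_nlj_segment(t, seg0, strict=False))  # [0, 1, 2]
print(toy_nlj_segment(t, seg1, strict=False))  # []
# Strict: all partitions are selected up front, so nothing is lost.
print(toy_nlj_segment(t, seg1, strict=True))   # [3, 4]
```

With strict=True both segments return their local matches and the gathered result is the full {0,1,2,3,4}, mirroring the fix.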
      
      RCA by Lirong Jian <jianlirong@gmail.com>
      befc063b
    • C
      gpperfmon: link to wiki · 64b30837
      C.J. Jameson committed
      [ci skip]
      Signed-off-by: Marbin Tan <mtan@pivotal.io>
      64b30837
    • M
      gpperfmon: remove unused variable to quiet warnings · e2477236
      Marbin Tan committed
      Signed-off-by: Larry Hamel <lhamel@pivotal.io>
      e2477236
    • L
      gpperfmon: remove unused variable · 360d42c2
      Larry Hamel committed
      Signed-off-by: Marbin Tan <mtan@pivotal.io>
      360d42c2
  3. 03 May 2017, 4 commits
    • X
      ALTER RESOURCE GROUP SET CONCURRENCY N · 043511e9
      xiong-gang committed
      Increasing the 'concurrency' limit takes effect immediately, and the queued
      transactions can be woken up. Decreasing the 'concurrency' is different: if
      the new limit is smaller than the number of currently running transactions,
      the ALTER statement won't cancel running transactions to get down to the
      limit. Therefore, we use the column 'proposed' in pg_resgroupcapability to
      represent the effective limit, and the column 'value' to record the
      historical limit.
      For example, suppose we have a resource group with concurrency=3, and there
      are 3 running transactions and 3 queued transactions. If we alter the
      concurrency to 2, 'proposed' will be updated to 2 and 'value' will stay at 3.
      When one running transaction finishes, it won't wake up any transaction in
      the queue, as the current concurrency is already 2. If we execute the
      statement again to alter the concurrency to 2, it will update the 'value'
      column to 2, and 'value' is consistent with 'proposed' again.
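The value/proposed bookkeeping walked through above can be sketched as a toy Python model. The class and method names are hypothetical illustrations of the semantics in this message, not the actual catalog code:

```python
class ConcurrencyCap:
    """Toy model of the 'value'/'proposed' semantics described above."""

    def __init__(self, n):
        self.value = n      # historical limit, as recorded in the catalog
        self.proposed = n   # effective limit used for admission decisions
        self.running = 0
        self.queued = 0

    def alter(self, n):
        self.proposed = n             # takes effect immediately
        if self.running <= n:         # never cancels running transactions;
            self.value = n            # 'value' catches up only when the
                                      # new limit is already satisfiable
        self._wake()                  # an increase can wake queued txns

    def finish_one(self):
        self.running -= 1
        self._wake()

    def _wake(self):
        while self.queued and self.running < self.proposed:
            self.queued -= 1
            self.running += 1

cap = ConcurrencyCap(3)
cap.running, cap.queued = 3, 3
cap.alter(2)                  # proposed -> 2, value stays 3
cap.finish_one()              # running == proposed == 2: nobody is woken
cap.alter(2)                  # now running <= 2, so value -> 2 as well
print(cap.value, cap.proposed, cap.running, cap.queued)   # 2 2 2 3
```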
      Signed-off-by: Richard Guo <riguo@pivotal.io>
      043511e9
    • A
      Support COPY ON SEGMENT command · 49b12f18
      Adam Lee committed
      Support a COPY statement that exports the table directly from each segment
      to a local file in parallel.
      
      This commit adds a keyword "on segment" to save the copied file on
      "segment" instead of on "master".
      
      Two placeholders are used, "<SEG_DATA_DIR>" and "<SEGID>", which will be
      replaced with the segment data directory and the segment id.
      
      E.g.
      
      ```
      COPY tbl TO '/tmp/<SEG_DATA_DIR>filename<SEGID>.txt' ON SEGMENT;
      ```
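The placeholder substitution described above amounts to a simple per-segment string expansion. The following is a minimal sketch with a hypothetical helper name, not the actual COPY code path:

```python
def expand_on_segment_path(template, seg_datadir, seg_id):
    """Replace the two ON SEGMENT placeholders in a COPY target path."""
    return (template.replace('<SEG_DATA_DIR>', seg_datadir)
                    .replace('<SEGID>', str(seg_id)))

# Each segment expands the same template with its own datadir and id:
print(expand_on_segment_path('/tmp/<SEG_DATA_DIR>filename<SEGID>.txt',
                             'gpseg0/', 0))
# /tmp/gpseg0/filename0.txt
```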
      Signed-off-by: Yuan Zhao <yuzhao@pivotal.io>
      Signed-off-by: Haozhou Wang <hawang@pivotal.io>
      Signed-off-by: Adam Lee <ali@pivotal.io>
      49b12f18
    • M
      Fix links to Pivotal Network. · 558be9d8
      mkiyama committed
      558be9d8
    • D
      removing info for older, incompatible netbackup versions (#2337) · 26b50340
      David Yozie committed
      * removing info for older, incompatible netbackup versions
      
      * removing system requirements from backup topic; adding conditionalized reference to release notes for supported netbackup versions
      26b50340