- 04 May 2017, 20 commits
-
-
Committed by Haisheng Yuan
Remove xslice in prepare_plan_for_sharing, since it is always false. The cross-slice share type will be updated in the functions shareinput_mutator_xslice_2 and shareinput_mutator_xslice_3.
-
Committed by yanchaozhong
-
Committed by Daniel Gustafsson
The demo makes sense to keep, but colocate it with gpmapreduce rather than keeping it in gpMgmt, where it seems out of place. Also avoid installing it.
-
Committed by Daniel Gustafsson
The bundled tests no longer worked, and looking at their setup they probably haven't worked for some time. We already have tests for gpmapreduce in src/test/regress covering the use cases (and expected lifespan of the product), so kill these rather than resuscitate them.
-
Committed by Marbin Tan
Signed-off-by: C.J. Jameson <cjameson@pivotal.io>
-
Committed by Marbin Tan
Signed-off-by: Larry Hamel <lhamel@pivotal.io>
-
Committed by Larry Hamel
Signed-off-by: Marbin Tan <mtan@pivotal.io>
-
Committed by Ashwin Agrawal
Simplify the implementation of UnpackCheckPointRecord() by aligning it with our current checkpoint structure and removing all old dead code.
-
Committed by Ashwin Agrawal
Fixes #2115.
-
Committed by Ashwin Agrawal
Purely a change to keep the compiler happy; zero impact, as FileRepOperationDescription_u is unused for those OperationTypes.
-
Committed by Larry Hamel
Signed-off-by: Marbin Tan <mtan@pivotal.io>
-
Committed by Larry Hamel
* gpperfmon: remove health history
* gpperfmon: remove code that pertains to DCA appliances
Signed-off-by: C.J. Jameson <cjameson@pivotal.io>
-
Committed by Ashwin Agrawal
Set the fault to always hit using `-o 0` instead of only once. Also, lose the step of running a SQL command. Instead, make it clear that the `heartbeat` operation between primary and mirror (triggered every minute) is what the test relies on to validate that, if the mirror does not respond within `gp_segment_connect_timeout`, the primary should mark it down and transition to change tracking.
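A hedged sketch of how such a fault is typically armed in these tests (the fault name and trigger type below are placeholders, not taken from the commit; only the `-o 0` occurrence flag is stated above):

```
# -o 0 makes the fault fire every time it is reached, instead of only once
gpfaultinjector -f <fault_name> -y <fault_type> -o 0
```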
-
Committed by Haisheng Yuan
These functions are also removed: memtuple_set_size, memtuple_clear_hasnull, memtuple_clear_islarge, memtuple_clear_hasext, memtuple_aligned_clone.
-
Committed by Marbin Tan
* Address pylint warnings and errors
- Fix whitespace and indentation
- Remove unused variables
- Fix syntax errors
Signed-off-by: Marbin Tan <mtan@pivotal.io>
-
Committed by Haisheng Yuan
Fix by setting material->cdb_strict to true if the outer child of the Material is a Partition Selector. Before this patch, the following query returned incomplete results quite often:

```
create table t(id int, a int);
create table pt(id int, b int) DISTRIBUTED BY (id)
  PARTITION BY RANGE(b) (START (0) END (5) EVERY (1));
insert into t select i, i from generate_series(0,4) i;
insert into pt select i, i from generate_series(0,4) i;
analyze t;
analyze pt;
set enable_hashjoin=off;
set enable_nestloop=on;
select * from t, pt where a = b;
```

In a 3-segment cluster, it may return different results, as shown below:

```
hyuan=# select * from t, pt where a = b;
 id | a | id | b
----+---+----+---
  0 | 0 |  0 | 0
  1 | 1 |  1 | 1
  2 | 2 |  2 | 2
(3 rows)

hyuan=# select * from t, pt where a = b;
 id | a | id | b
----+---+----+---
  3 | 3 |  3 | 3
  4 | 4 |  4 | 4
(2 rows)

hyuan=# select * from t, pt where a = b;
 id | a | id | b
----+---+----+---
  3 | 3 |  3 | 3
  4 | 4 |  4 | 4
  0 | 0 |  0 | 0
  1 | 1 |  1 | 1
  2 | 2 |  2 | 2
(5 rows)
```

Only the last result is correct. The plan for the above query is:

```
Gather Motion 3:1 (slice2; segments: 3)
  -> Nested Loop (cost=2.27..9.00 rows=2 width=16)
       Join Filter: t.a = public.pt.b
       -> Append (cost=0.00..5.05 rows=2 width=8)
            -> Result (cost=0.00..1.01 rows=1 width=8)
                 One-Time Filter: PartSelected
                 -> Seq Scan on pt_1_prt_1 pt
            -> Result (cost=0.00..1.01 rows=1 width=8)
                 One-Time Filter: PartSelected
                 -> Seq Scan on pt_1_prt_2 pt
            -> Result (cost=0.00..1.01 rows=1 width=8)
                 One-Time Filter: PartSelected
                 -> Seq Scan on pt_1_prt_3 pt
            -> Result (cost=0.00..1.01 rows=1 width=8)
                 One-Time Filter: PartSelected
                 -> Seq Scan on pt_1_prt_4 pt
            -> Result (cost=0.00..1.01 rows=1 width=8)
                 One-Time Filter: PartSelected
                 -> Seq Scan on pt_1_prt_5 pt
       -> Materialize (cost=2.27..2.42 rows=5 width=8)
            -> Partition Selector for pt (dynamic scan id: 1)
                 Filter: t.a
                 -> Broadcast Motion 3:3 (slice1; segments: 3)
                      -> Seq Scan on t
Settings: enable_hashjoin=off; enable_nestloop=on
Optimizer status: legacy query optimizer
```

The data distribution for tables t and pt in the 3-segment environment is:

```
hyuan=# select gp_segment_id, * from t;
 gp_segment_id | id | a
---------------+----+---
             1 |  3 | 3
             1 |  4 | 4
             0 |  0 | 0
             0 |  1 | 1
             0 |  2 | 2
(5 rows)

hyuan=# select gp_segment_id, * from pt;
 gp_segment_id | id | b
---------------+----+---
             0 |  0 | 0
             0 |  1 | 1
             0 |  2 | 2
             1 |  3 | 3
             1 |  4 | 4
(5 rows)
```

Tuples {0,1,2} of t and pt are on segment 0, tuples {3,4} of t and pt are on segment 1, and segment 2 has no data for t or pt. In this query, the planner decides to prefetch the inner child to avoid a deadlock hazard, and cdb_strict of the Material is set to false. Let's see how the query runs on segment 0:

1. The inner child of the nestloop join, Material, fetches one tuple from the Partition Selector and then outputs it. Assume the output order of the Partition Selector / Broadcast Motion is {0,1,2,3,4}, so the first tuple output by the Partition Selector and Material is 0.
2. The Partition Selector decides that the selected partition for table pt is pt_1_prt_1, because t.a = pt.b = 0 in this partition. The outer child of the nestloop join, Append, fetches one tuple from that partition, with pt.b = 0.
3. The nestloop join continues to execute the Material inner child to fetch the other tuples, 1, 2, 3, 4, but none of these tuples from t match the join condition, because pt.b = 0. No more tuples are output by the nestloop join for this round of the loop, but all the partitions of pt are matched and selected.
4. The nestloop join fetches another tuple, from pt_1_prt_2, which is 1; it matches a tuple from the inner child, so 1 is output. It then fetches a tuple from pt_1_prt_3, which is 2, matched, output 2. But pt_1_prt_4 and pt_1_prt_5 have no data on this segment, so the output ends with {0,1,2} on segment 0.

On segment 1, still assume the tuple output order of the Partition Selector / Broadcast Motion is {0,1,2,3,4}. Since the first tuple output by the inner child is 0, only pt_1_prt_1 is selected. But when the nestloop join tries to fetch a tuple from the outer child, which in this case fetches from pt_1_prt_1, it returns no tuple, because pt_1_prt_1 is empty on this segment. So the nestloop join concludes that, since it can't fetch any tuple from the outer child, the outer child must be empty, there is no need to execute the join, and it returns NULL and finishes directly. Segment 2 has no data for t or pt, so no tuple is output.

The final result gathered on the master segment is therefore {0,1,2} in this case. But if the Broadcast Motion output tuple order is {3,4,0,1,2}, the final result may be {3,4}. If the Broadcast Motion output tuple order on segment 0 is {0,1,2,3,4} and on segment 1 is {3,4,0,1,2}, then the final result on the master is {0,1,2,3,4}, which is correct.

The bug is fixed by setting cdb_strict of the Material to true when the planner generates a Partition Selector for the inner child of a nestloop join: the Material will then fetch all the tuples from its child and materialize them before emitting any tuple. Thus we can make sure the partitions of pt are selected correctly.

RCA by Lirong Jian <jianlirong@gmail.com>
-
Committed by C.J. Jameson
[ci skip] Signed-off-by: Marbin Tan <mtan@pivotal.io>
-
Committed by Marbin Tan
Signed-off-by: Larry Hamel <lhamel@pivotal.io>
-
Committed by Larry Hamel
Signed-off-by: Marbin Tan <mtan@pivotal.io>
-
Committed by Jingyi Mei
Signed-off-by: Tushar Dadlani <tdadlani@pivotal.io>
-
- 03 May 2017, 13 commits
-
-
Committed by xiong-gang
Increasing the 'concurrency' limit takes effect immediately, and the queued transactions can be woken up. Decreasing 'concurrency' is different: if the new limit is smaller than the number of currently running transactions, the ALTER statement won't cancel running transactions down to the limit. Therefore, we use the column 'proposed' in pg_resgroupcapability to represent the effective limit, and the column 'value' to record the historical limit. For example, suppose we have a resource group with concurrency=3, and there are 3 running transactions and 3 queued transactions. If we alter the concurrency to 2, 'proposed' will be updated to 2 and 'value' will stay at 3. When one running transaction finishes, it won't wake up the transactions in the queue, as the current concurrency is 2. If we execute the statement again to alter the concurrency to 2, it will update the 'value' column to 2, and 'value' is consistent with 'proposed' again. Signed-off-by: Richard Guo <riguo@pivotal.io>
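The example above can be sketched in SQL (the resource group name rg_demo and the non-concurrency limits are hypothetical; only the 'value' and 'proposed' columns of pg_resgroupcapability are named in the commit):

```sql
-- Hypothetical group starting at concurrency=3
CREATE RESOURCE GROUP rg_demo WITH (concurrency=3, cpu_rate_limit=10, memory_limit=10);

-- With 3 running and 3 queued transactions, lower the limit:
ALTER RESOURCE GROUP rg_demo SET concurrency 2;
-- 'proposed' is now 2 (the effective limit); 'value' stays 3 (the historical limit)

-- Repeating the statement once no more than 2 transactions are running
-- brings 'value' back in sync with 'proposed':
ALTER RESOURCE GROUP rg_demo SET concurrency 2;
```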
-
Committed by Adam Lee
Support a COPY statement that exports a table directly from each segment to a local file in parallel. This commit adds the keywords "ON SEGMENT" to save the copied file on the segments instead of on the master. Two placeholders are used, "<SEG_DATA_DIR>" and "<SEGID>", which will be replaced with the segment data directory and the segment id. E.g.
```
COPY tbl TO '/tmp/<SEG_DATA_DIR>filename<SEGID>.txt' ON SEGMENT;
```
Signed-off-by: Yuan Zhao <yuzhao@pivotal.io>
Signed-off-by: Haozhou Wang <hawang@pivotal.io>
Signed-off-by: Adam Lee <ali@pivotal.io>
-
Committed by mkiyama
-
Committed by David Yozie
* Removing info for older, incompatible NetBackup versions
* Removing system requirements from the backup topic; adding a conditionalized reference to the release notes for supported NetBackup versions
-
Committed by Jim Doty
Signed-off-by: Tom Meyer <tmeyer@pivotal.io>
-
Committed by Daniel Gustafsson
We don't have PL/Java in gpAux/extensions anymore, and there are no proprietary modules either. [ci skip]
-
-
Committed by Larry Hamel
* Add behave test for diskspace_history
* Remove commented-out test (moved to a new tracker story to redo)
* Refactor test for qamode
Signed-off-by: C.J. Jameson <cjameson@pivotal.io>
-
Committed by Larry Hamel
Signed-off-by: Marbin Tan <mtan@pivotal.io>
Signed-off-by: C.J. Jameson <cjameson@pivotal.io>
-
Committed by Shreedhar Hardikar
-
Committed by Jingyi Mei
We made the following changes:
1. Changed make files to reference sles instead of suse, and use sles11-x86_64 instead of sles11_x86_64 in the zip artifact
2. Changed make files to use rhel6/7-x86_64 instead of RHEL6/7-x86_64 in the zip artifact
3. Changed set_bld_arch.sh to set BLD_ARCH=sles instead of suse
4. In the dependencies ivy file, added sles11_x86_64 as a configuration to multiple repos which previously only had suseXX-x86_64
5. Set multiple config paths/flags for sles11_x86_64 in the Makefile which didn't exist before
Signed-off-by: Tom Meyer <tmeyer@pivotal.io>
Signed-off-by: Kris Macoskey <kmacoskey@pivotal.io>
Signed-off-by: Jingyi Mei <jmei@pivotal.io>
-
Committed by David Yozie
-
Committed by Jane Beckman
* Add new OPERATOR_FAMILY pages
* Updates for Postgres 8.3
* Corrections to nav map
* Correct xref pointer
* XML format fix
* Standardize XML for new xrefs
* Remove duplicate link
* Change ref to Greenplum Database
* Updates for the new PostgreSQL commands ALTER/CREATE/DROP OPERATOR FAMILY, ALTER VIEW, and DISCARD
* Updates from Heikki
* Remove 8.3 comparison
* Update GET DIAGNOSTICS
-
- 02 May 2017, 5 commits
-
-
Committed by Daniel Gustafsson
While looking at other things, spotted quite a few unused functions in catalog.py. Remove these, and also remove imports of gppylib.db.catalog where unused.
-
Committed by C.J. Jameson
We think that external tables previously broke with newlines. This is no longer the case, so remove the space-replacement logic.
Signed-off-by: Marbin Tan <mtan@pivotal.io>
Signed-off-by: Larry Hamel <lhamel@pivotal.io>
-
For appendonly_read_check, set `optimizer_disable_missing_stats_collection=on` to hide the additional NOTICEs/HINTs produced by ORCA. For the uao/uaocs crash-on-update tests, set `optimizer=off`, as ORCA does not go through the code path intended for these tests.
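A minimal sketch of the settings as they would appear in the affected regress test files (the placement in test files is an assumption; the GUC names are from the message above):

```sql
-- appendonly_read_check: silence ORCA's missing-stats NOTICEs/HINTs
SET optimizer_disable_missing_stats_collection = on;

-- uao/uaocs crash-on-update tests: fall back to the legacy planner
SET optimizer = off;
```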
-
Committed by Jamie McAtamney
Authors: Karen Huddleston, Jamie McAtamney
-
Committed by David Yozie
-
- 01 May 2017, 2 commits
-
-
The sub-transaction tests inject faults at a point that is only reached in a reader gang on segments. The original query, when planned by ORCA, would lead to a plan where the reader slice executes on the master. This commit changes the test (while mostly staying true to its spirit): instead of using `INSERT INTO ... VALUES ...`, we put the value in a temporary table and do an `INSERT INTO ... SELECT ... FROM`. The new test should now pass under both ORCA and the planner.
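The rewrite described above can be sketched as follows (the table and column names are hypothetical; the commit does not quote the actual test SQL):

```sql
-- Before: under ORCA, the reader slice for this could run on the master
-- INSERT INTO target VALUES (42);

-- After: stage the value in a temp table so the reader gang runs on segments
CREATE TEMP TABLE staging (v int);
INSERT INTO staging VALUES (42);
INSERT INTO target SELECT v FROM staging;
```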
-
Committed by mkiyama
-