- 05 May 2017, 16 commits
-
-
Committed by Daniel Gustafsson
There is a lot of tab/space confusion in the code; this fixes some of the worst offenders, but there is plenty of low-hanging fruit left.
-
Committed by Daniel Gustafsson
There was a lot of unused code in the gpMgmt bash scripts that hadn't been run for quite some time. Beyond truly dead code, some codepaths were reachable but useless (like printing a non-existent version string). Remove the dead code and replace the version print with working code that pulls the version from the canonical source.
-
Committed by Ning Wu
-
Committed by Daniel Gustafsson
-
Committed by Daniel Gustafsson
Mainly fixes spacing to avoid rendering the docs as "--option =val", plus some small superfluous-whitespace cleanups.
-
Committed by Daniel Gustafsson
The option description should aptly describe what the option is and how it is used; a shortened version is not required. Also, at a quick glance the short version could be mistaken for a parameter to pass in, so remove it.
-
Committed by Bhuvnesh Chaudhary
Signed-off-by: Chris Hajas <c.hajas@pivotal.io>
-
Committed by Bhuvnesh Chaudhary
These tests do not really test gptransfer functionality, so remove them from the suite. Signed-off-by: Chris Hajas <c.hajas@pivotal.io>
-
Committed by Bhuvnesh Chaudhary
Signed-off-by: Chris Hajas <c.hajas@pivotal.io>
-
Committed by Bhuvnesh Chaudhary
The ORCA and planner answer files are similar, so we need not maintain additional ORCA answer files.
-
Committed by Omer Arap
-
Committed by Foyzur Rahman
With this port, we support multiple readers in tuplestore. We also made the following modifications: * Modified tuplestore to use GPDB's MemTuple. * Changed the BufFile API to take a file number.
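The multiple-reader support can be modeled roughly as follows (a hypothetical Python sketch, not GPDB's C implementation; the class and method names here are invented): each reader keeps its own position into a shared, append-only tuple buffer, so several consumers can scan the same stored tuples independently.

```python
# Hypothetical model of a tuplestore with multiple independent readers.
# The real tuplestore is C executor code; names are invented for illustration.
class TupleStore:
    def __init__(self):
        self.tuples = []      # shared, append-only tuple buffer
        self.read_pos = {}    # reader id -> index of next tuple to read

    def put(self, tup):
        self.tuples.append(tup)

    def open_reader(self, reader_id):
        self.read_pos[reader_id] = 0

    def get(self, reader_id):
        """Return the next tuple for this reader, or None when exhausted."""
        pos = self.read_pos[reader_id]
        if pos >= len(self.tuples):
            return None
        self.read_pos[reader_id] = pos + 1
        return self.tuples[pos]

ts = TupleStore()
for t in [(1,), (2,), (3,)]:
    ts.put(t)
ts.open_reader("a")
ts.open_reader("b")
# Each reader now consumes the same stream at its own pace.
```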
-
Committed by Jingyi Mei
Signed-off-by: Tom Meyer <tmeyer@pivotal.io>
-
Committed by Jim Doty
Also bump the ivy.xml third-party/ext 3.2 configuration to use the latest tarball. Signed-off-by: Jingyi Mei <jmei@pivotal.io>
-
Committed by Jim Doty
- Initial commit to start testing on a dev pipeline. Signed-off-by: Jingyi Mei <jmei@pivotal.io>
-
Committed by Jesse Zhang
-
- 04 May 2017, 20 commits
-
-
Committed by Haisheng Yuan
Remove xslice in prepare_plan_for_sharing, since it is always false. The cross-slice share type will be updated in the functions shareinput_mutator_xslice_2 and shareinput_mutator_xslice_3.
-
Committed by yanchaozhong
-
Committed by Daniel Gustafsson
The demo makes sense to keep, but colocate it with gpmapreduce rather than keeping it in gpMgmt where it seems out of place. Also avoid installing it.
-
Committed by Daniel Gustafsson
The bundled tests no longer worked, and judging by their setup they probably haven't worked for some time. We already have tests for gpmapreduce in src/test/regress covering the usecases (and the expected lifespan of the product), so kill these rather than resuscitate them.
-
Committed by Marbin Tan
Signed-off-by: C.J. Jameson <cjameson@pivotal.io>
-
Committed by Marbin Tan
Signed-off-by: Larry Hamel <lhamel@pivotal.io>
-
Committed by Larry Hamel
Signed-off-by: Marbin Tan <mtan@pivotal.io>
-
Committed by Ashwin Agrawal
Simplify the implementation of UnpackCheckPointRecord() by aligning it with our current checkpoint structure and removing all of the old dead code.
-
Committed by Ashwin Agrawal
Fixes #2115.
-
Committed by Ashwin Agrawal
Purely a change to keep the compiler happy; zero impact, as FileRepOperationDescription_u is unused for those OperationTypes.
-
Committed by Larry Hamel
Signed-off-by: Marbin Tan <mtan@pivotal.io>
-
Committed by Larry Hamel
* gpperfmon: remove health history * gpperfmon: remove code that pertains to DCA appliances Signed-off-by: C.J. Jameson <cjameson@pivotal.io>
-
Committed by Ashwin Agrawal
Set the fault to always hit by using `-o 0` instead of only once. Also, drop the SQL command that was being run; instead make it clear that the test relies on the `heartbeat` operation between primary and mirror (triggered every minute) to validate that, if the mirror does not respond within `gp_segment_connect_timeout`, the primary marks it down and transitions to changetracking.
-
Committed by Haisheng Yuan
These functions are also removed: memtuple_set_size, memtuple_clear_hasnull, memtuple_clear_islarge, memtuple_clear_hasext, memtuple_aligned_clone.
-
Committed by Marbin Tan
* Address pylint warnings and errors - Fix whitespace and indentation - Remove unused variables - Fix syntax errors. Signed-off-by: Marbin Tan <mtan@pivotal.io>
-
Committed by Haisheng Yuan
By setting material->cdb_strict to true if the child of the Material is a partition selector. Before this patch, the following query returned incomplete results quite often:

```
create table t(id int, a int);
create table pt(id int, b int) DISTRIBUTED BY (id)
  PARTITION BY RANGE(b) (START (0) END (5) EVERY (1));
insert into t select i, i from generate_series(0,4) i;
insert into pt select i, i from generate_series(0,4) i;
analyze t;
analyze pt;
set enable_hashjoin=off;
set enable_nestloop=on;
select * from t, pt where a = b;
```

In a 3-segment cluster, it may return different results, as shown below:

```
hyuan=# select * from t, pt where a = b;
 id | a | id | b
----+---+----+---
  0 | 0 |  0 | 0
  1 | 1 |  1 | 1
  2 | 2 |  2 | 2
(3 rows)

hyuan=# select * from t, pt where a = b;
 id | a | id | b
----+---+----+---
  3 | 3 |  3 | 3
  4 | 4 |  4 | 4
(2 rows)

hyuan=# select * from t, pt where a = b;
 id | a | id | b
----+---+----+---
  3 | 3 |  3 | 3
  4 | 4 |  4 | 4
  0 | 0 |  0 | 0
  1 | 1 |  1 | 1
  2 | 2 |  2 | 2
(5 rows)
```

Only the last one is the correct result. The plan for the above query is:

```
Gather Motion 3:1  (slice2; segments: 3)
  ->  Nested Loop  (cost=2.27..9.00 rows=2 width=16)
        Join Filter: t.a = public.pt.b
        ->  Append  (cost=0.00..5.05 rows=2 width=8)
              ->  Result  (cost=0.00..1.01 rows=1 width=8)
                    One-Time Filter: PartSelected
                    ->  Seq Scan on pt_1_prt_1 pt
              ->  Result  (cost=0.00..1.01 rows=1 width=8)
                    One-Time Filter: PartSelected
                    ->  Seq Scan on pt_1_prt_2 pt
              ->  Result  (cost=0.00..1.01 rows=1 width=8)
                    One-Time Filter: PartSelected
                    ->  Seq Scan on pt_1_prt_3 pt
              ->  Result  (cost=0.00..1.01 rows=1 width=8)
                    One-Time Filter: PartSelected
                    ->  Seq Scan on pt_1_prt_4 pt
              ->  Result  (cost=0.00..1.01 rows=1 width=8)
                    One-Time Filter: PartSelected
                    ->  Seq Scan on pt_1_prt_5 pt
        ->  Materialize  (cost=2.27..2.42 rows=5 width=8)
              ->  Partition Selector for pt (dynamic scan id: 1)
                    Filter: t.a
                    ->  Broadcast Motion 3:3  (slice1; segments: 3)
                          ->  Seq Scan on t
Settings:  enable_hashjoin=off; enable_nestloop=on
Optimizer status: legacy query optimizer
```

The data distribution for tables t and pt in the 3-segment environment is:

```
hyuan=# select gp_segment_id, * from t;
 gp_segment_id | id | a
---------------+----+---
             1 |  3 | 3
             1 |  4 | 4
             0 |  0 | 0
             0 |  1 | 1
             0 |  2 | 2
(5 rows)

hyuan=# select gp_segment_id, * from pt;
 gp_segment_id | id | b
---------------+----+---
             0 |  0 | 0
             0 |  1 | 1
             0 |  2 | 2
             1 |  3 | 3
             1 |  4 | 4
(5 rows)
```

Tuples {0,1,2} of t and pt are on segment 0, and tuples {3,4} of t and pt are on segment 1. Segment 2 has no data for t or pt. In this query, the planner decides to prefetch the inner child to avoid a deadlock hazard, and cdb_strict of the Material is set to false. Let's see how the query goes on segment 0:

1. The inner child of the nestloop join, Material, fetches one tuple from the partition selector and then outputs it. Let's assume the output order of the partition selector / broadcast motion is {0,1,2,3,4}, so the 1st tuple output by the partition selector and Material is 0.
2. The partition selector decides that the selected partition for table pt is pt_1_prt_1, because t.a = pt.b = 0 in this partition. The outer child of the nestloop join, Append, fetches 1 tuple from that partition, with pt.b=0.
3. The nestloop join continues to execute the Material inner child to fetch the other tuples, 1,2,3,4, but none of these tuples from t match the join condition, because pt.b=0. No more tuples are output by the nestloop join for this round of the loop, but all the partitions of pt are matched and selected.
4. The nestloop join fetches another tuple from pt_1_prt_2, which is 1, which matches a tuple from the inner child: output 1. It then fetches a tuple from pt_1_prt_3, which is 2: matched, output 2. But pt_1_prt_4 and pt_1_prt_5 have no data on this segment, so the output ends with {0,1,2} on segment 0.

On segment 1, let's again assume the tuple output order of the partition selector / broadcast motion is {0,1,2,3,4}. Since the first output tuple from the inner child is 0, only pt_1_prt_1 is selected. But when the nestloop join tries to fetch a tuple from the outer child, which in this case fetches from pt_1_prt_1, it returns no tuple, because pt_1_prt_1 is empty on this segment. So the nestloop join decides that, since it cannot fetch any tuple from the outer child, the outer child must be empty; there is no need to execute the join, so it returns NULL and finishes directly. Segment 2 has no data for t or pt, so no tuple is output.

The final result gathered on the master segment is therefore {0,1,2} in this case. But if the broadcast motion output tuple order is {3,4,0,1,2}, the final result may be {3,4}. If the broadcast motion output tuple order on segment 0 is {0,1,2,3,4} and on segment 1 is {3,4,0,1,2}, then the final result on the master is {0,1,2,3,4}, which is correct.

The bug is fixed by setting cdb_strict of the Material to true when the planner generates a partition selector for the inner child of a nestloop join: the Material then fetches all the tuples from its child and materializes them before emitting any tuple. Thus we can make sure the partitions of pt are selected correctly. RCA by Lirong Jian <jianlirong@gmail.com>
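The cdb_strict behavior described above can be sketched as follows (a hypothetical Python model; the real Material node is C executor code, and all names here are invented): a strict Material drains its child completely, letting the partition selector's side effects run for every value of t.a, before emitting the first tuple to the join.

```python
# Hypothetical model of Material's cdb_strict flag; names are invented.
def materialize(child_iter, strict, on_tuple=None):
    """Yield the child's tuples. In strict mode, drain the child fully
    (running any side effect, e.g. partition selection) before emitting."""
    if strict:
        buffered = []
        for tup in child_iter:
            if on_tuple:
                on_tuple(tup)   # e.g. partition selector marks a partition
            buffered.append(tup)
        yield from buffered
    else:
        for tup in child_iter:
            if on_tuple:
                on_tuple(tup)
            yield tup

selected = set()  # stands in for the set of selected partitions
gen = materialize(iter([0, 1, 2, 3, 4]), strict=True,
                  on_tuple=lambda v: selected.add(v))
first = next(gen)
# With strict=True, every partition is already selected after the first
# fetch, so the outer Append cannot miss partitions it has yet to scan.
```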
-
Committed by C.J. Jameson
[ci skip] Signed-off-by: Marbin Tan <mtan@pivotal.io>
-
Committed by Marbin Tan
Signed-off-by: Larry Hamel <lhamel@pivotal.io>
-
Committed by Larry Hamel
Signed-off-by: Marbin Tan <mtan@pivotal.io>
-
Committed by Jingyi Mei
Signed-off-by: Tushar Dadlani <tdadlani@pivotal.io>
-
- 03 May 2017, 4 commits
-
-
Committed by xiong-gang
Increasing the 'concurrency' limit can take effect immediately, and the queued transactions can be woken up. Decreasing 'concurrency' is different: if the new limit is smaller than the number of currently running transactions, the ALTER statement won't cancel running transactions down to the limit. Therefore, we use the column 'proposed' in pg_resgroupcapability to represent the effective limit, and the column 'value' to record the historical limit. For example, take a resource group with concurrency=3 that has 3 running transactions and 3 queued transactions. If we alter the concurrency to 2, 'proposed' will be updated to 2 and 'value' will stay at 3. When one running transaction finishes, it won't wake up the transactions in the queue, as the current concurrency is 2. If we execute the statement again to alter the concurrency to 2, it will update the 'value' column to 2, and 'value' is consistent with 'proposed' again. Signed-off-by: Richard Guo <riguo@pivotal.io>
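The 'value'/'proposed' semantics above can be modeled with a small sketch (hypothetical Python, not GPDB's resource-group C code; class and method names are invented): altering the limit always updates 'proposed', but 'value' only catches up once the running count fits under the new limit, and finishing a transaction wakes a queued one only while running is below 'proposed'.

```python
# Hypothetical model of the resource-group concurrency limit semantics.
class ResGroup:
    def __init__(self, concurrency):
        self.value = concurrency      # historical limit ('value' column)
        self.proposed = concurrency   # effective limit ('proposed' column)
        self.running = 0              # currently running transactions
        self.queue = []               # queued (waiting) transactions

    def alter_concurrency(self, new_limit):
        self.proposed = new_limit
        # 'value' catches up only when the running count already fits.
        if self.running <= new_limit:
            self.value = new_limit

    def finish_one(self):
        self.running -= 1
        # Wake a queued transaction only while under the effective limit.
        if self.queue and self.running < self.proposed:
            self.queue.pop(0)
            self.running += 1

g = ResGroup(3)
g.running = 3
g.queue = ["t4", "t5", "t6"]
g.alter_concurrency(2)   # proposed -> 2, value stays 3
g.finish_one()           # running drops to 2; nobody is woken (2 == proposed)
```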
-
Committed by Adam Lee
Support a COPY statement that exports the table directly from each segment to a local file, in parallel. This commit adds the keywords "ON SEGMENT" to save the copied file on the segment instead of on the master. Two placeholders are used, "<SEG_DATA_DIR>" and "<SEGID>", which will be replaced with the segment data directory and the segment id. E.g.

```
COPY tbl TO '/tmp/<SEG_DATA_DIR>filename<SEGID>.txt' ON SEGMENT;
```

Signed-off-by: Yuan Zhao <yuzhao@pivotal.io> Signed-off-by: Haozhou Wang <hawang@pivotal.io> Signed-off-by: Adam Lee <ali@pivotal.io>
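The placeholder expansion described above amounts to a simple per-segment substitution; this is a hypothetical sketch in Python (the real logic lives in GPDB's COPY code in C, and the function name and sample arguments here are invented):

```python
# Hypothetical sketch of COPY ... ON SEGMENT placeholder expansion.
def expand_on_segment_path(path: str, seg_data_dir: str, seg_id: int) -> str:
    """Replace the <SEG_DATA_DIR> and <SEGID> placeholders with the
    segment's data directory and segment id."""
    return (path
            .replace("<SEG_DATA_DIR>", seg_data_dir)
            .replace("<SEGID>", str(seg_id)))

# Each segment resolves the same template to its own local file path.
print(expand_on_segment_path("/tmp/<SEG_DATA_DIR>filename<SEGID>.txt",
                             "seg0/", 0))  # prints /tmp/seg0/filename0.txt
```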
-
Committed by mkiyama
-
Committed by David Yozie
* removing info for older, incompatible NetBackup versions * removing system requirements from the backup topic; adding a conditionalized reference to the release notes for supported NetBackup versions
-