- 14 Apr 2018 (4 commits)
-
-
Committed by Ashwin Agrawal
When the blocksize is 2MB, the function AppendOnlyStorage_GetUsableBlockSize gives out the wrong usable block size. The expected result is 2MB, but the function call returns (2M - 4). This is because the macro AOSmallContentHeader_MaxLength is defined as (2M - 1); after rounding down to 4-byte alignment, the result is (2M - 4). Without the fix, errors like the following can occur: "ERROR: Used length 2097152 greater than bufferLen 2097148 at position 8388592 in table 'xxxx'". Also removed some related but unused macro variables, just to clean up the code related to AO storage.
Co-authored-by: Lirong Jian <jian@hashdata.cn>
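The rounding arithmetic described above can be illustrated directly (a sketch, not part of the commit; the macro values come from the commit message, and `& ~3` mirrors the 4-byte round-down):

```sql
-- AOSmallContentHeader_MaxLength was defined as (2M - 1); rounding that
-- down to a 4-byte boundary loses 4 bytes instead of returning 2M exactly.
SELECT ((2 * 1024 * 1024) - 1) & ~3 AS buggy_usable,  -- 2097148
       (2 * 1024 * 1024) & ~3       AS fixed_usable;  -- 2097152
```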
-
Committed by Ashwin Agrawal
Commit b9b8831a introduced the "relation mapping" infrastructure, which stores relfilenode as "0" in pg_class for shared and nailed catalog tables. The actual relfilenode is stored in a separate file that provides the mapping from OID to relfilenode. gp_replica_check builds a hashtable of relfilenodes to look up the actual files on disk. So, while populating this hashtable, consult the relmap file to get the relfilenode for tables with relfilenode = 0. This should help avoid WARNINGs like "relfilenode XXXX not present in primary's pg_class" or "found extra unknown file on mirror:" from gp_replica_check.
Co-authored-by: David Kimura <dkimura@pivotal.io>
Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
-
Committed by Venkatesh Raghavan
The 8.4 changes to statistics computation are comprehensive. We will fix any fallouts of this change as we observe them. Current tests show that things are kosher, so the FIXME is removed.
-
Committed by Bhuvnesh Chaudhary
-
- 13 Apr 2018 (17 commits)
-
-
Committed by David Yozie
* add RETURNING clause syntax and information to DELETE reference
* add RETURNING clause syntax for INSERT; update output column description for SELECT
* more basic edits to match postgres docs; adding/updating info for AS clause
-
Committed by Bhuvnesh Chaudhary
-
Committed by Goutam Tadi
Co-authored-by: Goutam Tadi <gtadi@pivotal.io>
Co-authored-by: Jemish Patel <jpatel@pivotal.io>
-
Committed by Asim R P
Commit b3f300b9 introduced the novel idea of tracking the oldest xmin among all distributed snapshots on QEs. However, the idea is not applicable to the QD, because all distributed transactions can be found in the ProcArray on the QD. The local oldest xmin is therefore the oldest xmin among all distributed snapshots on the QD. This patch fixes the maintenance of oldest xmin on the QD by avoiding DistributedLog_AdvanceOldestXmin() and all the heavy lifting it performs. Calling this on the QD was also occasionally hitting the "local snapshot's xmin is older than recorded distributed oldestxmin" error in CI.
-
Committed by Kris Macoskey
The prefixes were commonly out of date, misleading as to whom to talk to should a job fail, and otherwise no longer an approach developers agree with for how to bin different tests. The approach being attempted now is to bin by component. Furthermore, acronyms are confusing, so the changes in this commit avoid them.
Authored-by: Kris Macoskey <kmacoskey@pivotal.io>
-
Committed by Jim Doty
-
Committed by Kris Macoskey
The convention from release pipelines is for packaging jobs to be labeled `gpdb_[RC_|]packaging_[platform]`.
Authored-by: Kris Macoskey <kmacoskey@pivotal.io>
-
Committed by Kris Macoskey
It serves no purpose anymore and doesn't aid the visibility of the flow of the pipeline.
Authored-by: Kris Macoskey <kmacoskey@pivotal.io>
-
Committed by Kris Macoskey
Acronyms are confusing, and there is no specific reason they need to exist in this context. This does mean someone will need to type out "ResourceGroups" or "Interconnect" when specifying those groups while using the gen_pipeline.py script.
Authored-by: Kris Macoskey <kmacoskey@pivotal.io>
-
Committed by Kris Macoskey
Staggered starts were introduced to improve Concourse performance by limiting the spike in jobs started at any one time. With recent Concourse changes and the reduction in the size of the pipeline, performance is no longer as much of a driving factor, so the staggered-start idea can be retired.
Authored-by: Kris Macoskey <kmacoskey@pivotal.io>
-
Committed by Kris Macoskey
The functional components are the interconnect and resource groups; this splits up the tests in MPP to fit that separation.
Authored-by: Kris Macoskey <kmacoskey@pivotal.io>
-
Committed by Kris Macoskey
Groups should represent logical components rather than teams. DP and CM both fall into the logical group of CLI components, so we create a new CLI group and add both sets of tests to it.
Co-authored-by: Kris Macoskey <kmacoskey@pivotal.io>
Co-authored-by: Jason Vigil <jvigil@pivotal.io>
-
Committed by Kris Macoskey
The teams have been renamed; let's rename the jobs as well.
Co-authored-by: Kris Macoskey <kmacoskey@pivotal.io>
Co-authored-by: Jason Vigil <jvigil@pivotal.io>
-
Committed by Kris Macoskey
Co-authored-by: Kris Macoskey <kmacoskey@pivotal.io>
Co-authored-by: Jason Vigil <jvigil@pivotal.io>
-
Committed by Kris Macoskey
With the gate_compile_end dropped, the groups needed sensible reorganization based on the platform builds each particular group requires. Additionally, we modified each group slightly to be explicit about every job associated with the group, not just the tests. A new CL group splits out Clients and Loaders, since they have a logical separation similar to the other groups.
Co-authored-by: Kris Macoskey <kmacoskey@pivotal.io>
Co-authored-by: Jason Vigil <jvigil@pivotal.io>
-
Committed by Kris Macoskey
It's not a pattern we use elsewhere, and it doesn't provide any improved discoverability in the Concourse UI.
Authored-by: Kris Macoskey <kmacoskey@pivotal.io>
-
Committed by Kris Macoskey
The end gates originally served the purpose of chaining groups to execute consecutively. This is no longer necessary, so the remaining end gates no longer serve a purpose.
Authored-by: Kris Macoskey <kmacoskey@pivotal.io>
-
- 12 Apr 2018 (3 commits)
-
-
Committed by Jacob Champion
The 8.2->8.3 upgrade of NUMERIC types was implemented for row-oriented AO tables, but not column-oriented ones. Correct that here. Store upgraded Datum data in a per-DatumStream buffer to avoid "upgrading" the same data multiple times (multiple tuples may point at the same data buffer, for example with RLE compression). Cache the column's base type in the DatumStreamRead struct.
Co-authored-by: Taylor Vesely <tvesely@pivotal.io>
-
Committed by Bhuvnesh Chaudhary
This reverts commit 8b0a7fed. Due to that commit, full join queries with a condition on varchar columns started failing with the error below. A relabel node is expected on top of varchar columns while looking up the sort operator, but the said commit removed the relabel node.

```sql
create table foo(a varchar(30), b varchar(30));

postgres=# select X.a from foo X full join (select a from foo group by 1) Y ON X.a = Y.a;
ERROR:  could not find member 1(1043,1043) of opfamily 1994 (createplan.c:4664)
```

Will reopen issue #4175, which prompted this patch.
-
Committed by Andreas Scherbaum
-
- 11 Apr 2018 (8 commits)
-
-
Committed by Pengzhou Tang
Executing a query plan containing a large number of slices may slow down the entire Greenplum cluster: each "n-gang" slice corresponds to a separate process per segment. An example of such a query is a UNION ALL atop several complex views. To prevent this situation, add a GUC gp_max_slices and refuse to execute plans whose number of slices exceeds that limit.
Signed-off-by: Jesse Zhang <sbjesse@gmail.com>
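A hedged sketch of how the new GUC might be used (the view names, the limit value, and the exact error behavior are illustrative assumptions, not taken from the commit message):

```sql
-- Cap plans at 50 slices for this session (hypothetical value).
SET gp_max_slices = 50;

-- A UNION ALL over many complex views is the kind of query whose plan
-- can exceed the slice limit; such a plan is refused rather than executed.
SELECT * FROM view_a
UNION ALL
SELECT * FROM view_b;
```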
-
Committed by xiong-gang
-
Committed by xiong-gang
This GUC determines whether curl verifies the authenticity of the gpfdist certificate.
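The commit message does not name the GUC; presumably it is `verify_gpfdist_cert`, so treat both the name and the usage below as assumptions:

```sql
-- Hypothetical session-level usage: when off, curl skips verification
-- of the gpfdist server certificate for gpfdists:// external tables.
SET verify_gpfdist_cert = off;
```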
-
Committed by Alexandra Wang
Update the template pipeline to reflect change 553b8754.
Authored-by: Alexandra Wang <lewang@pivotal.io>
-
Committed by Alexandra Wang
Re-apply 3a772cfd; this commit was accidentally overwritten when introducing the CCP 2.0 change.
Authored-by: Alexandra Wang <lewang@pivotal.io>
-
Committed by David Sharp
Authored-by: David Sharp <dsharp@pivotal.io>
-
Committed by Bhuvnesh Chaudhary
The patch 62aba765 from upstream fixed CVE-2009-4136 (a security vulnerability), with the intent to properly manage session-local state during execution of an index function by a database superuser; in some cases the vulnerability allowed remote authenticated users to gain privileges via a table with crafted index functions. Looking into the details of CVE-2009-4136 and the related CVE-2007-6600, the patch should ideally have limited its scope to the calculation of stats on index expressions, where we run functions to evaluate the expression and could potentially present a security threat. However, the patch switched the user to the table owner before collecting the sample, so even if ANALYZE was run by a superuser, the sample could not be collected when the table owner did not have sufficient privileges to access the table. With this commit, we switch back to the original user while collecting the sample, as sampling does not deal with indexes or function calls, which were the original concern of the patch. Upstream does not face this privilege issue because it does block sampling instead of issuing a query.
Signed-off-by: Sambitesh Dash <sdash@pivotal.io>
-
Committed by Abhijit Subramanya
This FIXME is two-fold:

- Handling LIMIT 0: The LIMIT is already handled in the caller, convert_EXISTS_sublink_to_join(). When an existential sublink contains an aggregate without GROUP BY or HAVING, we can safely replace it with a one-time TRUE/FALSE filter based on the type of sublink, since the result of the aggregate is always going to be one row even if its input is 0 rows. However, this assumption is incorrect when the sublink contains LIMIT/OFFSET, such as when the final limit count after applying the offset is 0.
- Rules for demoting HAVING to WHERE: Previously, simplify_EXISTS_query() disallowed demoting HAVING quals to WHERE unless they contained no aggregates. To determine this, it used query->hasAggs, which is incorrect, since hasAggs indicates that an aggregate is present either in the targetlist or in HAVING. This penalized queries in which HAVING did not contain the agg but the targetlist did (as demonstrated in the newly added test). This check is now replaced by contain_aggs_of_level(). Also, do not demote if HAVING contains volatile functions, since they need to be evaluated once per group.

Signed-off-by: Dhanashree Kashid <dkashid@pivotal.io>
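A minimal illustration of the LIMIT 0 caveat (the table name is hypothetical, not from the commit): an aggregate without GROUP BY or HAVING always yields exactly one row, so EXISTS over it is always true, but an added LIMIT 0 makes the sublink yield no rows:

```sql
-- Always true: count(*) returns exactly one row, even over zero input rows,
-- so this EXISTS can be folded to a one-time TRUE filter.
SELECT 1 WHERE EXISTS (SELECT count(*) FROM t);

-- Not foldable: LIMIT 0 makes the sublink return no rows, so EXISTS is false.
SELECT 1 WHERE EXISTS (SELECT count(*) FROM t LIMIT 0);
```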
-
- 10 Apr 2018 (8 commits)
-
-
Committed by Richard Guo
Relations with external storage, as well as AO and CO relations, should have InvalidTransactionId in relfrozenxid during upgrade. The idea is to keep the same logic as the function should_have_valid_relfrozenxid().
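The affected relations can be inspected with a query along these lines (a sketch, not from the commit; it assumes the gpdb `pg_class.relstorage` values 'a' for row-oriented AO, 'c' for column-oriented, and 'x' for external):

```sql
-- These storage types should carry InvalidTransactionId (0) in relfrozenxid.
SELECT relname, relstorage, relfrozenxid
FROM pg_class
WHERE relstorage IN ('a', 'c', 'x');
```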
-
Committed by Richard Guo
The 'DESCRIBE' attribute is added by gpdb to describe the name of a callback function. Currently pg_dump does not handle this attribute.
Co-authored-by: Paul Guo <paulguo@gmail.com>
-
Committed by Bhuvnesh Chaudhary
This reverts commit 54ee5b5c, which caused a crash at Assert(subquery) in qual_is_pushdown_safe_set_operation.
-
Committed by Bhuvnesh Chaudhary
Previously, if a subquery contained window functions, pushing down filters was banned entirely. This commit fixes the issue by pushing down filters that are not on columns projected using window functions, and adds the relevant tests. Test case: after porting the fix to gpdb master, in the case below the filter `b = 1` is pushed down:

```
explain select b from (select b, row_number() over (partition by b) from foo) f where b = 1;
                                             QUERY PLAN
----------------------------------------------------------------------------------------------------
 Gather Motion 3:1  (slice2; segments: 3)  (cost=0.00..1.05 rows=1 width=4)
   ->  Subquery Scan on f  (cost=0.00..1.05 rows=1 width=4)
         ->  WindowAgg  (cost=0.00..1.04 rows=1 width=4)
               ->  Redistribute Motion 3:3  (slice1; segments: 3)  (cost=0.00..1.03 rows=1 width=4)
                     Hash Key: foo.b
                     ->  Seq Scan on foo  (cost=0.00..1.01 rows=1 width=4)
                           Filter: b = 1
 Optimizer: legacy query optimizer
```

Currently on master the plan is as below, where the filter is not pushed down:

```
explain select b from (select b, row_number() over (partition by b) from foo) f where b = 1;
                                             QUERY PLAN
----------------------------------------------------------------------------------------------------
 Gather Motion 3:1  (slice2; segments: 3)  (cost=1.04..1.07 rows=1 width=4)
   ->  Subquery Scan on f  (cost=1.04..1.07 rows=1 width=4)
         Filter: f.b = 1
         ->  WindowAgg  (cost=1.04..1.06 rows=1 width=4)
               Partition By: foo.b
               ->  Sort  (cost=1.04..1.04 rows=1 width=4)
                     Sort Key: foo.b
                     ->  Redistribute Motion 3:3  (slice1; segments: 3)  (cost=0.00..1.03 rows=1 width=4)
                           Hash Key: foo.b
                           ->  Seq Scan on foo  (cost=0.00..1.01 rows=1 width=4)
 Optimizer: legacy query optimizer
```
-
Committed by Lisa Owen
-
Committed by Lisa Owen
* docs - remove gpcrondump/gpdbrestore/gpmfr refs, doc files
* use more generic description for gpbackup/gprestore
* gpbackup has same limitation w.r.t. leaf child parts
-
Committed by Mel Kiyama
-
Committed by Ben Christel
This is just refactoring. Related to (#4690).
Co-authored-by: Ben Christel <bchristel@pivotal.io>
Co-authored-by: Amil Khanzada <akhanzada@pivotal.io>
-