- 19 Apr 2018, 4 commits
-
-
Committed by Ning Yu
The cgroup mount point is detected during startup, but due to a bug the mount point was used even when cgroup was not mounted at all. Fixed by correcting the checking logic.
-
Committed by Ning Yu
The memory auditor was a new feature introduced to allow external components (e.g. PL/Container) to be managed by resource groups. This feature requires a new gpdb directory to be created in the cgroup memory controller; however, on the 5X branch, unless users created this directory manually, an upgrade from a previous version would fail. This commit provides backward compatibility by checking the release version:
- on the 6X and master branches the memory auditor feature is always enabled, so the new gpdb directory is mandatory;
- on the 5X branch the memory auditor feature can be enabled only if the new gpdb directory was created with proper permissions; when it is disabled, `CREATE RESOURCE GROUP WITH (memory_auditor='cgroup')` fails with guidance on how to enable it.
Binary swap tests are also provided to verify backward compatibility in future releases. Because cgroup must be configured to enable resource groups, the resource group binary swap tests are split into two parts:
- resqueue-mode-only tests, which can be triggered in the icw_gporca_centos6 pipeline job after the ICW tests; these have no cgroup requirements;
- complete resqueue and resgroup mode tests, which can be triggered in the mpp_resource_group_centos{6,7} pipeline jobs after the resgroup tests; these need cgroup to be properly configured.
-
Committed by Bhuvnesh Chaudhary
This commit includes the recently added stainherit column in the minirepro output.
-
Committed by Omer Arap
A recent postgres merge introduced a new column `stainherit` to the `pg_statistic` table. When `gpsd` dumps pg_statistic, not only is that column missing, but all the other columns are shifted and saved at the wrong column index. This causes a failure when reloading the gpsd output. This commit fixes the issue and adds the missing column.
-
- 18 Apr 2018, 1 commit
-
-
Committed by Tom Lane
We are developing on a machine with flex 2.6.4 and we couldn't build. This was fixed in 72b1e3a2 upstream (9.6 at the time) in 2016, but wasn't backported to 9.1 or older. Original commit message: > Now that we know about the %top{} trick, we can revert to building flex > lexers as separate .o files. This is worth doing for a couple of reasons > besides sheer cleanliness. We can narrow the scope of the -Wno-error flag > that's forced on scan.c. Also, since these grammar and lexer files are > so large, splitting them into separate build targets should have some > advantages in build speed, particularly in parallel or ccache'd builds. > > We have quite a few other .l files that could be changed likewise, but the > above arguments don't apply to them, so the benefit of fixing them seems > pretty minimal. Leave the rest for some other day. (cherry picked from commit 72b1e3a2) Fixes greenplum-db/gpdb#4863 Co-authored-by: Jesse Zhang <sbjesse@gmail.com> Co-authored-by: Asim R P <apraveen@pivotal.io>
-
- 17 Apr 2018, 7 commits
-
-
Committed by Jialun
1) Global shared memory is used if a query has run out of the group's shared memory. 2) The limit of memory_spill_ratio changes to [0, INT_MAX]; since global shared memory can now be allocated, a 100% cap no longer makes sense. 3) Use atomic compare-and-swap operations instead of locks for better performance. 4) Modify the test cases according to the new rules.
-
Committed by Pengzhou Tang
The 'distribution_policy' test does constraint checks on randomly distributed tables; however, attrnums = null in gp_distribution_policy is no longer sufficient to identify a randomly distributed table after the new replicated distribution policy was introduced, so add more filters to make things right.
-
Committed by David Kimura
Large objects are currently not supported in Greenplum. Rather than deceive the user with a non-functional large object API, we disable them for now. We disable the large object tests in the privileges regress test by using ignore blocks instead of commenting them out or deleting them, to reduce merge conflicts in future postgres merges. Co-authored-by: Jimmy Yih <jyih@pivotal.io>
-
Committed by Sambitesh Dash
-
Committed by Mel Kiyama
-
Committed by Mel Kiyama
* docs: add GUC verify_gpfdists_cert: added the GUC definition to the list of GUCs; added links to the GUC from the appropriate topics. PR for 5X_STABLE; will be ported to MAIN.
* docs: verify_gpfdists_cert GUC updates: add the SSL exceptions that are ignored; other minor edits.
* docs: GUC verify_gpfdists_cert: fix typos.
- 14 Apr 2018, 4 commits
-
-
Committed by Ashwin Agrawal
When the blocksize is 2MB, the function AppendOnlyStorage_GetUsableBlockSize would return the wrong usable block size. The expected result is 2MB, but the function returned (2MB - 4). This is because the macro AOSmallContentHeader_MaxLength is defined as (2MB - 1); after rounding down to 4-byte alignment, the result is (2MB - 4). Without the fix, errors like the following can occur: "ERROR: Used length 2097152 greater than bufferLen 2097148 at position 8388592 in table 'xxxx'". Also removed some related but unused macros, just to clean up the AO storage code. Co-authored-by: Lirong Jian <jian@hashdata.cn>
-
Committed by Ashwin Agrawal
Commit b9b8831a introduced the "relation mapping" infrastructure, which stores relfilenode as "0" in pg_class for shared and nailed catalog tables; the actual relfilenode is stored in a separate file that maps OIDs to relfilenodes. gp_replica_check builds a hashtable of relfilenodes to look up the actual files on disk, so while populating this hashtable, consult the relmap file to get the relfilenode for tables with relfilenode = 0. This should help avoid WARNINGs like "relfilenode XXXX not present in primary's pg_class" or "found extra unknown file on mirror:" from gp_replica_check. Co-authored-by: David Kimura <dkimura@pivotal.io> Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
-
Committed by Venkatesh Raghavan
The 8.4 changes to statistics computation are comprehensive. We will fix any fallout of those changes as we observe it. Current tests show that things are in order, so remove the FIXME.
-
Committed by Bhuvnesh Chaudhary
-
- 13 Apr 2018, 17 commits
-
-
Committed by David Yozie
* add RETURNING clause syntax and information to the DELETE reference
* add RETURNING clause syntax for INSERT; update the output column description for SELECT
* more basic edits to match the postgres docs; add/update info for the AS clause
-
Committed by Bhuvnesh Chaudhary
-
Committed by Goutam Tadi
Co-authored-by: Goutam Tadi <gtadi@pivotal.io> Co-authored-by: Jemish Patel <jpatel@pivotal.io>
-
Committed by Asim R P
Commit b3f300b9 introduced the novel idea of tracking the oldest xmin among all distributed snapshots on QEs. However, the idea is not applicable to the QD, because all distributed transactions can be found in the ProcArray on the QD; the local oldest xmin is therefore already the oldest xmin among all distributed snapshots there. This patch fixes the maintenance of the oldest xmin on the QD by avoiding DistributedLog_AdvanceOldestXmin() and all the heavy lifting it performs. Calling it on the QD was also occasionally hitting the "local snapshot's xmin is older than recorded distributed oldestxmin" error in CI.
-
Committed by Kris Macoskey
The prefixes were commonly out of date, misleading as to whom to contact should a job fail, and no longer an approach developers agree with for binning different tests. The approach now being attempted is to bin by component. Furthermore, acronyms are confusing, so the changes in this commit avoid them more. Authored-by: Kris Macoskey <kmacoskey@pivotal.io>
-
Committed by Jim Doty
-
Committed by Kris Macoskey
The convention from the release pipelines is for packaging jobs to be labeled `gpdb_[RC_|]packaging_[platform]`. Authored-by: Kris Macoskey <kmacoskey@pivotal.io>
-
Committed by Kris Macoskey
It serves no purpose anymore and doesn't aid the visibility of the flow of the pipeline. Authored-by: Kris Macoskey <kmacoskey@pivotal.io>
-
Committed by Kris Macoskey
Acronyms are confusing and there is no specific reason they need to exist in this context. This does mean someone will need to type "ResourceGroups" or "Interconnect" in full when specifying those groups with the gen_pipeline.py script. Authored-by: Kris Macoskey <kmacoskey@pivotal.io>
-
Committed by Kris Macoskey
Staggered starts were introduced to improve Concourse performance by limiting the spike in jobs started at any one time. With the Concourse changes and the reduction in the size of the pipeline, performance is no longer as much of a driving factor, so the staggered-start idea can be retired. Authored-by: Kris Macoskey <kmacoskey@pivotal.io>
-
Committed by Kris Macoskey
The functional components are the interconnect and resource groups; this splits up the tests in MPP to fit that separation. Authored-by: Kris Macoskey <kmacoskey@pivotal.io>
-
Committed by Kris Macoskey
Groups should represent logical components rather than teams. DP and CM both fall into the logical group of CLI components, so we create a new CLI group and add both sets of tests to it. Co-authored-by: Kris Macoskey <kmacoskey@pivotal.io> Co-authored-by: Jason Vigil <jvigil@pivotal.io>
-
Committed by Kris Macoskey
The teams have been renamed; let's rename the jobs as well. Co-authored-by: Kris Macoskey <kmacoskey@pivotal.io> Co-authored-by: Jason Vigil <jvigil@pivotal.io>
-
Committed by Kris Macoskey
Co-authored-by: Kris Macoskey <kmacoskey@pivotal.io> Co-authored-by: Jason Vigil <jvigil@pivotal.io>
-
Committed by Kris Macoskey
With the gate_compile_end dropped, the groups needed a sensible reorganization based on the platform builds each particular group requires. We also modified each group slightly to be explicit about every job associated with the group, not just the tests. A new CL group splits out Clients and Loaders, because they have a logical separation similar to the other groups. Co-authored-by: Kris Macoskey <kmacoskey@pivotal.io> Co-authored-by: Jason Vigil <jvigil@pivotal.io>
-
Committed by Kris Macoskey
It's not a pattern we use elsewhere and it doesn't provide any improved discoverability in the Concourse UI. Authored-by: Kris Macoskey <kmacoskey@pivotal.io>
-
Committed by Kris Macoskey
The end gates originally served the purpose of chaining groups to execute consecutively. This is no longer necessary, so the remaining end gates no longer serve a purpose. Authored-by: Kris Macoskey <kmacoskey@pivotal.io>
-
- 12 Apr 2018, 3 commits
-
-
Committed by Jacob Champion
The 8.2-to-8.3 upgrade of NUMERIC types was implemented for row-oriented AO tables, but not for column-oriented ones. Correct that here. Store upgraded Datum data in a per-DatumStream buffer to avoid "upgrading" the same data multiple times (multiple tuples may point at the same data buffer, for example with RLE compression). Cache the column's base type in the DatumStreamRead struct. Co-authored-by: Taylor Vesely <tvesely@pivotal.io>
-
Committed by Bhuvnesh Chaudhary
This reverts commit 8b0a7fed. Due to that commit, full join queries with a condition on varchar columns started failing with the error below. A RelabelType node is expected on top of varchar columns while looking up the sort operator, but the reverted commit removed it.
```sql
create table foo(a varchar(30), b varchar(30));
postgres=# select X.a from foo X full join (select a from foo group by 1) Y ON X.a = Y.a;
ERROR:  could not find member 1(1043,1043) of opfamily 1994 (createplan.c:4664)
```
Issue #4175, which the reverted patch originally addressed, will be reopened.
-
Committed by Andreas Scherbaum
-
- 11 Apr 2018, 4 commits
-
-
Committed by Pengzhou Tang
Executing a query plan containing a large number of slices may slow down the entire Greenplum cluster: each "n-gang" slice corresponds to a separate process per segment. An example of such a query is a UNION ALL atop several complex views. To prevent this situation, add a GUC gp_max_slices and refuse to execute plans whose number of slices exceeds that limit. Signed-off-by: Jesse Zhang <sbjesse@gmail.com>
-
Committed by xiong-gang
-
Committed by xiong-gang
This GUC determines whether curl verifies the authenticity of gpfdist's certificate.
-
Committed by Alexandra Wang
Update the template pipeline to reflect change 553b8754. Authored-by: Alexandra Wang <lewang@pivotal.io>
-