- 02 Aug 2017, 7 commits
-
-
Committed by Andreas Scherbaum
-
Committed by Richard Guo
Resource group memory spill is similar to 'statement_mem' in resource queues; the difference is that memory spill is calculated according to the memory quota of the resource group. The related GUCs, variables, and functions shared by both resource queues and resource groups are moved into the resource manager namespace. This commit also refactors the resource queue code relating to memory policy.
Signed-off-by: Pengzhou Tang <ptang@pivotal.io>
Signed-off-by: Ning Yu <nyu@pivotal.io>
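The per-group spill calculation described above can be sketched roughly as follows. This is an illustrative assumption, not the actual GPDB implementation: the names `group_memory_quota_mb`, `memory_spill_ratio_pct`, and `concurrency` are hypothetical stand-ins for the resource group's quota, spill ratio, and concurrency settings.

```python
def resgroup_operator_mem_kb(group_memory_quota_mb, memory_spill_ratio_pct, concurrency):
    """Sketch: derive a per-query spill threshold from the resource
    group's memory quota, analogous to statement_mem in resource queues.
    The formula and all names here are illustrative assumptions."""
    # Portion of the group's quota reserved for spilling operators
    spill_mb = group_memory_quota_mb * memory_spill_ratio_pct / 100.0
    # Split evenly across the group's concurrent statements
    per_query_mb = spill_mb / concurrency
    return int(per_query_mb * 1024)  # in KB, like statement_mem

# A 1000 MB group with a 20% spill ratio and 4 concurrent queries
print(resgroup_operator_mem_kb(1000, 20, 4))  # 51200 KB = 50 MB per query
```

The point of deriving the value from the group quota (rather than a global `statement_mem`) is that each group's queries spill relative to that group's own memory budget.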
-
Committed by Marbin Tan
We suspect that the gpperfmon test is flaky. This is not acceptable in a PR pipeline. The gpperfmon tests need more work to be stable.
-
Committed by Andreas Scherbaum
* Add sequence overflow documentation and example.
-
Committed by Michael Roth
Discussed with CJ: we'll work offline on a better way to address groups, but will temporarily accept this.
-
Committed by Taylor Vesely
The job runs FTS tests and has failed intermittently. By increasing gp_segment_connect_timeout, we reduce the chance that the test environment will cause failures.
Signed-off-by: Asim R P <apraveen@pivotal.io>
-
Committed by Andreas Scherbaum
-
- 01 Aug 2017, 7 commits
-
-
Committed by Xiaoran Wang
With 'COPY ... ON SEGMENT', STDIN/STDOUT refer to the segments' own STDIN/STDOUT, which are not available as a data stream.
Signed-off-by: Xiaoran Wang <xiwang@pivotal.io>
-
Committed by Pengzhou Tang
This fixes a typo that caused segment 0 to always be assigned as the singleton reader. It existed for a long time with no functional issue, but could cause performance problems. Besides, root->config->cdbpath_segments is tunable via the GUC gp_segments_for_planner, so gp_singleton_segindex may point to an invalid segment; we use the real segment count instead to avoid a mismatch.
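The mismatch described above can be illustrated with a small sketch. The function and variable names are hypothetical; the real logic lives in the planner's C code, where `cdbpath_segments` reflects the tunable `gp_segments_for_planner` rather than the cluster's actual size.

```python
def pick_singleton_segindex(planner_segments, real_segment_count):
    """Sketch: planner_segments (cf. root->config->cdbpath_segments) can
    be tuned via gp_segments_for_planner and may exceed the number of
    segments that actually exist, so an index derived from it must be
    clamped to the real segment count. Names here are illustrative."""
    candidate = planner_segments - 1  # e.g. the "last" segment in the planner's view
    if candidate >= real_segment_count:
        # Clamp to a segment that actually exists
        candidate = candidate % real_segment_count
    return candidate

# Planner was told there are 8 segments, but the cluster only has 3
print(pick_singleton_segindex(8, 3))  # 1, a valid index in [0, 3)
```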
-
Committed by Pengzhou Tang
This is a mistake introduced by 353a937d; it floods pg_log at a noticeable rate. Change it back to DEBUG1.
-
Committed by Marbin Tan
We would like to have the VMs up if there's a failure.
-
Committed by C.J. Jameson
Added tests to make sure only one of them needs to be specified for gpinitsystem to work.
Signed-off-by: Marbin Tan <mtan@pivotal.io>
Signed-off-by: C.J. Jameson <cjameson@pivotal.io>
-
Committed by Marbin Tan
Add a test for gppkg --migrate: gppkgs installed on the original master should be installed on the new master and all segments.
Signed-off-by: Larry Hamel <lhamel@pivotal.io>
Signed-off-by: Nadeem Ghani <nghani@pivotal.io>
-
Committed by Nadeem Ghani
Signed-off-by: Marbin Tan <mtan@pivotal.io>
Signed-off-by: Larry Hamel <lhamel@pivotal.io>
-
- 31 Jul 2017, 6 commits
-
-
Committed by Larry Hamel
Signed-off-by: Shoaib Lari <slari@pivotal.io>
-
Committed by Adam Lee
USE_PGXS=1 is mandatory, and there is no need to compile if not overwriting.
-
Committed by Zhenghua Lyu
1. Detect the cgroup mount point in test code.
2. Fix a bug when buflen is 0.
3. Check cgroup status on the master in gpconfig.
4. Fix coverity warnings.
-
Committed by Adam Lee
-
Committed by Zhenghua Lyu
When GPDB is running in a container, the swap and RAM values read via sysinfo are those of the host machine. To find the correct swap and RAM values in the container context, we take both the sysinfo values and the cgroup values into account.
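The idea above amounts to taking the smaller of the host-wide value and the cgroup limit. Here is a minimal sketch under assumed conventions: the cgroup v1 path `memory.limit_in_bytes` is used, and the "huge value means unlimited" heuristic is an assumption, not GPDB's actual code.

```python
def container_mem_limit_bytes(sysinfo_ram_bytes,
                              cgroup_path="/sys/fs/cgroup/memory/memory.limit_in_bytes"):
    """Sketch: the effective RAM inside a container is the smaller of the
    host RAM reported by sysinfo and the cgroup memory limit. The cgroup
    v1 path and the unlimited-limit heuristic are assumptions."""
    try:
        with open(cgroup_path) as f:
            cgroup_limit = int(f.read().strip())
    except (OSError, ValueError):
        return sysinfo_ram_bytes  # no usable cgroup limit available
    # cgroup v1 reports a very large number when no limit is set
    if cgroup_limit >= 1 << 60:
        return sysinfo_ram_bytes
    return min(sysinfo_ram_bytes, cgroup_limit)
```

For example, on a 16 GB host where the container's cgroup caps memory at 4 GB, the function returns the 4 GB cgroup value; with no readable cgroup file it falls back to the sysinfo value.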
-
Committed by Ming LI
Support COPY statements that import data files on segments directly, in parallel. This can be used to import data files generated by "COPY ... TO ... ON SEGMENT". This commit also supports all the data file formats that "COPY ... TO" supports, processes reject limit numbers, and logs errors accordingly. Key workflow: a) For COPY FROM, nothing is changed by this commit: dispatch the modified COPY command to the segments first, then read the data file on the master and dispatch the data to the relevant segment for processing. b) For COPY FROM ON SEGMENT: on the QD, read a dummy data file, with the other parts unchanged; on the QE, first process the (empty) data stream dispatched from the QD, then re-do the same workflow to read and process the local segment data file.
Signed-off-by: Ming LI <mli@pivotal.io>
Signed-off-by: Adam Lee <ali@pivotal.io>
Signed-off-by: Haozhou Wang <hawang@pivotal.io>
Signed-off-by: Xiaoran Wang <xiwang@pivotal.io>
-
- 29 Jul 2017, 6 commits
-
-
Committed by Marbin Tan
- Sort arguments alphabetically
- Add -I and -O descriptions
-
Committed by David Yozie
-
Committed by dyozie
-
Committed by dyozie
-
Committed by David Yozie
-
Committed by Chris Hajas
Signed-off-by: Karen Huddleston <khuddleston@pivotal.io>
-
- 28 Jul 2017, 6 commits
-
-
Committed by Lisa Owen
* docs - resource groups catalog additions/updates
* address review comments from gang, mel
* clarify rsgqueueuduration - for the current query, not all
-
Committed by Chuck Litzell
* De-conflate gpperfmon & gpcc. Conditionalize gpcc. Remove unreferenced topics.
* Update GPAdminGuide.ditaval. Remove addition to Admin Guide ditaval.
* Fixes from review.
-
Committed by Chris Hajas
-
Committed by Marbin Tan
Signed-off-by: Xin Zhang <xzhang@pivotal.io>
-
Committed by Shoaib Lari
Signed-off-by: Tushar Dadlani <tdadlani@pivotal.io>
Signed-off-by: Larry Hamel <lhamel@pivotal.io>
-
Committed by Larry Hamel
Add a behave test to verify that the checksum configuration is preserved after a segment is recovered using gprecoverseg.
Signed-off-by: Shoaib Lari <slari@pivotal.io>
-
- 27 Jul 2017, 8 commits
-
-
Committed by Kris Macoskey
This will allow a user to cancel debug_sleep and be assured that ccp_destroy will still clean up any created clusters.
-
Committed by Ashwin Agrawal
Commit d50f429c added an xlog lock record, but missed tuning it for Greenplum, which requires adding persistent table information. This caused a failure during recovery with the FATAL message "xlog record with zero persistenTID". Use xl_heaptid_set(), which calls RelationGetPTInfo(), to make sure PT info is populated for the xlog record.
-
Committed by Pengzhou Tang
The 'insufficient memory reserved' issue has existed for a long time; the root cause is that the default statement_mem (125MB) is not enough for the queries used by the gpcheckcat script when the regression database is huge. This commit adds STATEMENT_MEM in demo_cluster.sh to initialize GPDB with the required statement_mem, and sets statement_mem to 225MB in common.bash.
-
Committed by Adam Lee
It's useful and important for debugging.
-
Committed by Asim R P
-
Committed by Asim R P
The gp_inject_fault() function is now available in pg_regress, so a contrib module is not required. The test was not being run; it trips an assertion, so it is not added to greenplum_schedule.
-
Committed by Asim R P
-
Committed by Asim R P
The function gp_inject_fault() was defined in a test-specific contrib module (src/test/dtm). It is moved to a dedicated contrib module, gp_inject_fault, so all tests can now make use of it. Two pg_regress tests (dispatch and cursor) are modified to demonstrate its usage. The function is modified so that it can inject a fault in any segment, specified by dbid; there is no more invoking of the gpfaultinjector python script from SQL files. The new module is integrated into the top-level build so that it is included in make and make install.
-