- 04 Aug 2017 (3 commits)
-
-
Committed by Nadeem Ghani
This test was asserting on a higher-level mock which wasn't being used. This commit uses the correct mock, and the tests are passing. Signed-off-by: Marbin Tan <mtan@pivotal.io>
-
Committed by Marbin Tan
It looks like this is passing in Concourse; however, this test suite is having issues running properly. One of the mocks is not returning the proper behavior, causing the test to fail. Signed-off-by: Nadeem Ghani <nghani@pivotal.io>
-
Committed by C.J. Jameson
Refactor similar usage to share code with the gpperfmon behave tests. Signed-off-by: Xin Zhang <xzhang@pivotal.io>
-
- 03 Aug 2017 (15 commits)
-
-
Committed by Ning Yu
Resource group may have memory overuse in the case below:

CREATE RESOURCE GROUP rg_concurrency_test WITH (concurrency=1, cpu_rate_limit=20, memory_limit=60, memory_shared_quota=0, memory_spill_ratio=10);
CREATE ROLE role_concurrency_test RESOURCE GROUP rg_concurrency_test;
11:SET ROLE role_concurrency_test;
11:BEGIN;
21:SET ROLE role_concurrency_test;
22:SET ROLE role_concurrency_test;
21&:BEGIN;
22&:BEGIN;
ALTER RESOURCE GROUP rg_concurrency_test SET CONCURRENCY 2;
11:END;

The cause is that we did not check overall memory quota usage in the past, so pending queries could be woken up as long as the concurrency limit was not reached. In such a case, if the currently running transactions have used all the memory quota in the resource group, then the overall memory usage will be exceeded. To fix this issue, we now check both the concurrency limit and memory quota usage to decide whether to wake up pending queries. Signed-off-by: Zhenghua Lyu <zlv@pivotal.io>
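The fixed wake-up condition can be sketched as follows. This is a minimal illustration with hypothetical names (`should_wake_pending` and its parameters are not the actual GPDB symbols), not the real implementation:

```python
# Hypothetical sketch of the fixed wake-up check: a pending query is only
# woken when BOTH a concurrency slot and enough memory quota are available,
# not just a concurrency slot as before the fix.
def should_wake_pending(running, concurrency_limit,
                        quota_used, group_quota, per_query_quota):
    # Pre-fix behavior only checked this first condition.
    has_slot = running < concurrency_limit
    # The fix adds this second condition.
    has_memory = quota_used + per_query_quota <= group_quota
    return has_slot and has_memory

# With concurrency raised to 2 but all quota held by the running
# transaction, the pending query must keep waiting:
print(should_wake_pending(1, 2, 60, 60, 30))  # False
```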
-
Committed by Huan Zhang
-
Committed by Ming LI
This bug is caused by the following: for COPY FROM ON SEGMENT, the QE first processes the (empty) data stream dispatched from the QD, then re-does the same workflow to read and process the local segment data file. Before the redo, all flags need to be reset to their initial values as well; however, we missed one flag.
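The redo pattern described above can be illustrated abstractly. The flag names below are hypothetical, not the actual COPY state fields:

```python
# Hypothetical sketch of the bug pattern: the QE runs the copy workflow
# twice, and every per-pass flag must be reset before the second run.
# Forgetting one flag leaks state from the empty QD stream pass into the
# local-file pass.
INITIAL_FLAGS = {"reached_eof": False, "line_done": False, "raw_buf_index": 0}

def reset_flags(flags):
    # The fix: reset *all* per-pass flags, not all-but-one.
    flags.update(INITIAL_FLAGS)

flags = {"reached_eof": True, "line_done": True, "raw_buf_index": 42}
reset_flags(flags)          # before re-doing the workflow on the local file
print(flags["reached_eof"])  # False
```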
-
Committed by Xin Zhang
Add the checksum masking and clearly separate where that masking is applied from where the lp array masking is applied. This ensures the data in buf is only updated once. Signed-off-by: C.J. Jameson <cjameson@pivotal.io>
-
Committed by Xin Zhang
Validate every BufferPool page sent to the mirror by the primary prior to writing. Signed-off-by: Taylor Vesely <tvesely@pivotal.io>
-
Committed by Taylor Vesely
During resync, filerep copied blocks with out-of-date checksums over from the primary to the mirror. This caused checksum verification failures later on the mirror side, and also caused inconsistency between the two on-disk images of primary and mirror. The fix introduced here always recomputes the checksum during resync. The performance impact is very low, since we only recompute the checksum for changed blocks. However, for a full copy, we will recompute the checksum for all the blocks to be copied. We have to do this, because there is no easy way to guarantee there are no other changes, such as hint bit changes, during resync, since it is an online operation. Also fixed wrong comments regarding the page LSN. Signed-off-by: Xin Zhang <xzhang@pivotal.io>
-
Committed by Simon Riggs
In CLUSTER, VACUUM FULL and ALTER TABLE SET TABLESPACE, I erroneously set the checksum before log_newpage, which sets the LSN and invalidates the checksum. So set the checksum immediately *after* log_newpage. Bug report by Fujii Masao; fix and patch by Jeff Davis. (cherry picked from commit cf8dc9e1)
-
Committed by Tom Lane
Some compilers understand that this coding is safe, and some don't. (cherry picked from commit 4912385b) Signed-off-by: Taylor Vesely <tvesely@pivotal.io>
-
Committed by Simon Riggs
(cherry picked from commit 9df56f6d)
-
Committed by Asim R P
Signed-off-by: Xin Zhang <xzhang@pivotal.io>
-
Committed by Abhijit Subramanya
This test is based on ao_checksum_corruption.sql. We added a new UDF `invalidate_buffers()` to invalidate the buffers for a given relation, so that we can read the content from the corrupted file again. We tested corruptions on heap tables, toast tables, and btree and bitmap indexes. Signed-off-by: Xin Zhang <xzhang@pivotal.io>
-
Committed by Nadeem Ghani
This is a new config option, on by default. Signed-off-by: Marbin Tan <mtan@pivotal.io>
-
Committed by Simon Riggs
Checksums are set immediately prior to flush out of shared buffers and checked when pages are read in again. Hint bit setting will require a full page write when a block is dirtied, which causes various infrastructure changes. Extensive comments, docs and README. A WARNING message is thrown if a checksum fails on a non-all-zeroes page; an ERROR is thrown but can be disabled with ignore_checksum_failure = on. The feature is enabled by an initdb option, since the transition from option off to option on is long and complex and has not yet been implemented. The default is not to use checksums. The checksum used is the WAL CRC-32 truncated to 16 bits. Simon Riggs, Jeff Davis, Greg Smith. Wide input and assistance from many community members. Thank you. (cherry picked from commit 96ef3b8f)
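As a rough illustration of the checksum scheme named above (a CRC-32 truncated to 16 bits so it fits the 2-byte pd_checksum field), here is a sketch; it deliberately ignores the real page-layout details, such as excluding the checksum field itself from the computation:

```python
import zlib

# Loose illustration only, not the actual PostgreSQL page-checksum code:
# compute a CRC-32 over the page bytes and keep the low 16 bits.
def page_checksum16(page_bytes: bytes) -> int:
    return zlib.crc32(page_bytes) & 0xFFFF
```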
-
Committed by Simon Riggs
Remove use of PageSetTLI() from all page manipulation functions and adjust the README to indicate the change in the way we make changes to pages. Repurpose those bytes into the pd_checksum field and explain how that works in the comments about the page header. Refactoring ahead of the actual feature patch, which will make use of the checksum field, arriving later. Jeff Davis, with comments and doc changes by Simon Riggs. Direction suggested by Robert Haas; many others provided review comments. (cherry picked from bb7cc262)
-
Committed by Bhuvnesh Chaudhary
The relation metadata reference was added twice, due to which a memory leak is detected and PQO falls back to the planner. This patch removes the redundant AddRef for relation metadata and fixes the fallback. Signed-off-by: Ekta Khanna <ekhanna@pivotal.io>
-
- 02 Aug 2017 (9 commits)
-
-
Committed by Larry Hamel
We saw the following on macOS, and this mock patch solves it: `[Errno 13] Permission denied: '/usr/local/gpdb/share/package` Signed-off-by: C.J. Jameson <cjameson@pivotal.io>
-
Committed by Haisheng Yuan
-
Committed by Andreas Scherbaum
-
Committed by Richard Guo
Resource group memory spill is similar to 'statement_mem' in resource queue; the difference is that memory spill is calculated according to the memory quota of the resource group. The related GUCs, variables and functions shared by both resource queue and resource group are moved into the resource manager namespace. The resource queue code relating to memory policy is also refactored in this commit. Signed-off-by: Pengzhou Tang <ptang@pivotal.io> Signed-off-by: Ning Yu <nyu@pivotal.io>
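The spill idea might be sketched like this; the function name and formula are hypothetical simplifications of "derive the per-query spill limit from the group's memory quota", not the actual GPDB computation:

```python
# Hypothetical sketch: like statement_mem, but derived from the resource
# group's own memory quota scaled by its memory_spill_ratio percentage.
def memory_spill_kb(group_quota_kb: int, memory_spill_ratio_pct: int) -> int:
    return group_quota_kb * memory_spill_ratio_pct // 100

# e.g. a group with a 60 MiB quota and memory_spill_ratio=10:
print(memory_spill_kb(60 * 1024, 10))  # 6144
```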
-
Committed by Marbin Tan
We suspect the gpperfmon test is flaky. This is not acceptable in a PR pipeline; the gpperfmon tests need more work to be stable.
-
Committed by Andreas Scherbaum
* Add sequence overflow documentation and example.
-
Committed by Michael Roth
Discussed with CJ: we'll work offline on a better way to address groups, but will temporarily accept this.
-
Committed by Taylor Vesely
The job runs FTS tests and has failed intermittently. By increasing gp_segment_connect_timeout, we reduce the chance that the test environment will cause failures. Signed-off-by: Asim R P <apraveen@pivotal.io>
-
Committed by Andreas Scherbaum
-
- 01 Aug 2017 (7 commits)
-
-
Committed by Xiaoran Wang
With 'COPY ... ON SEGMENT', STDIN/STDOUT refer to the segments' own STDIN/STDOUT, which are not available for the data stream. Signed-off-by: Xiaoran Wang <xiwang@pivotal.io>
-
Committed by Pengzhou Tang
This is a typo that caused segment 0 to always be assigned as the singleton reader. It existed for a long time with no functional issue, but may result in performance issues. Besides, root->config->cdbpath_segments is tunable via the GUC gp_segments_for_planner, so gp_singleton_segindex may point to an invalid segment; we use the real segment count instead to avoid a mismatch.
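The gist of the fix, bounding the singleton-reader choice by the real segment count rather than a planner-tunable value, can be sketched as follows; the function name and the random choice are hypothetical, and the actual code selects the index differently:

```python
import random

# Hypothetical sketch: choose the singleton-reader segment index from the
# *real* segment count. Using the planner value (tunable via
# gp_segments_for_planner) could yield an index for a segment that does
# not exist.
def pick_singleton_segindex(real_segment_count: int) -> int:
    return random.randrange(real_segment_count)
```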
-
Committed by Pengzhou Tang
This is a mistake introduced by 353a937d, and it floods the pg_log at a noticeable rate; change it back to DEBUG1.
-
Committed by Marbin Tan
We would like to have the VMs up if there's a failure.
-
Committed by C.J. Jameson
Added tests to make sure only one of them needs to be specified for gpinitsystem to work. Signed-off-by: Marbin Tan <mtan@pivotal.io> Signed-off-by: C.J. Jameson <cjameson@pivotal.io>
-
Committed by Marbin Tan
Add a test for gppkg --migrate: gppkgs installed on the original master should be installed on the new master and all segments. Signed-off-by: Larry Hamel <lhamel@pivotal.io> Signed-off-by: Nadeem Ghani <nghani@pivotal.io>
-
Committed by Nadeem Ghani
Signed-off-by: Marbin Tan <mtan@pivotal.io> Signed-off-by: Larry Hamel <lhamel@pivotal.io>
-
- 31 Jul 2017 (6 commits)
-
-
Committed by Larry Hamel
Signed-off-by: Shoaib Lari <slari@pivotal.io>
-
Committed by Adam Lee
USE_PGXS=1 is mandatory, and there is no need to compile if not overwriting.
-
Committed by Zhenghua Lyu
1. Detect the cgroup mount point in test code.
2. Fix a bug when buflen is 0.
3. Check cgroup status on the master in gpconfig.
4. Fix coverity warnings.
-
Committed by Adam Lee
-
Committed by Zhenghua Lyu
When GPDB is running in a container, the swap and RAM values read via sysinfo are those of the host machine. To find the correct values of swap and RAM in the container context, we take both the value from sysinfo and the value from cgroup into account.
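Taking both readings into account reduces to using the smaller of the two; a minimal sketch, assuming hypothetical names:

```python
# Hypothetical sketch: inside a container, sysinfo reports the host's
# RAM/swap, so the container-effective value is the minimum of the
# sysinfo reading and the cgroup limit.
def effective_total(sysinfo_value: int, cgroup_limit: int) -> int:
    return min(sysinfo_value, cgroup_limit)

host_ram = 256 << 30    # sysinfo: 256 GiB on the host
cgroup_ram = 16 << 30   # cgroup memory limit: 16 GiB
print(effective_total(host_ram, cgroup_ram))  # 17179869184 (16 GiB)
```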
-
Committed by Ming LI
Support COPY statements that import data files on the segments directly, in parallel. This can be used to import data files generated by "COPY ... TO ... ON SEGMENT". This commit also supports all data file formats which "COPY ... TO" supports, processes reject limit numbers, and logs errors accordingly. Key workflow: a) For COPY FROM, nothing is changed by this commit: dispatch the modified COPY command to the segments first, then read the data file on the master and dispatch the data to the relevant segment to process. b) For COPY FROM ON SEGMENT: on the QD, read a dummy data file, with other parts unchanged; on the QE, process the (empty) data stream dispatched from the QD first, then re-do the same workflow to read and process the local segment data file. Signed-off-by: Ming LI <mli@pivotal.io> Signed-off-by: Adam Lee <ali@pivotal.io> Signed-off-by: Haozhou Wang <hawang@pivotal.io> Signed-off-by: Xiaoran Wang <xiwang@pivotal.io>
-