- 02 Aug 2017, 5 commits
-
-
Committed by Marbin Tan
We suspect the gpperfmon test is flaky. This is not acceptable in a PR pipeline; the gpperfmon tests need more work to be stable.
-
Committed by Andreas Scherbaum
* Add sequence overflow documentation and an example.
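As a minimal sketch of the documented behavior (the sequence name is hypothetical; the overflow semantics are standard PostgreSQL/Greenplum sequence behavior):

```sql
-- A sequence with a small MAXVALUE and NO CYCLE overflows quickly.
CREATE SEQUENCE demo_seq MAXVALUE 3 NO CYCLE;
SELECT nextval('demo_seq');  -- 1
SELECT nextval('demo_seq');  -- 2
SELECT nextval('demo_seq');  -- 3
SELECT nextval('demo_seq');  -- ERROR: reached maximum value of sequence "demo_seq" (3)
```

With CYCLE instead of NO CYCLE, the fourth call would wrap around to MINVALUE rather than error.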
-
Committed by Michael Roth
Discussed with CJ: we'll work offline on a better way to address groups, but will temporarily accept this.
-
Committed by Taylor Vesely
The job runs FTS tests and has failed intermittently. By increasing gp_segment_connect_timeout, we reduce the chance that the test environment will cause failures. Signed-off-by: Asim R P <apraveen@pivotal.io>
-
Committed by Andreas Scherbaum
-
- 01 Aug 2017, 7 commits
-
-
Committed by Xiaoran Wang
With 'COPY ... ON SEGMENT', STDIN / STDOUT refer to the segments' own STDIN / STDOUT, which are not available as a data stream. Signed-off-by: Xiaoran Wang <xiwang@pivotal.io>
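A sketch of the restriction (table and path are hypothetical): the ON SEGMENT variants take a per-segment file path, while the segment-local STDIN/STDOUT cannot serve as a client data stream.

```sql
-- Valid: each segment writes its own file; <SEGID> expands to the segment ID.
COPY my_table TO '/data/out/my_table_<SEGID>.csv' ON SEGMENT;

-- Invalid: a segment's STDOUT is not connected to the client.
COPY my_table TO STDOUT ON SEGMENT;  -- rejected
```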
-
Committed by Pengzhou Tang
This fixes a typo that caused segment 0 to always be assigned as the singleton reader. It existed for a long time with no functional issue, but could cause performance problems. Besides, root->config->cdbpath_segments is tunable via the GUC gp_segments_for_planner, so gp_singleton_segindex may point to an invalid segment; we use the real segment count instead to avoid a mismatch.
-
Committed by Pengzhou Tang
This was a mistake introduced by 353a937d, and it floods pg_log at a noticeable rate; change it back to DEBUG1.
-
Committed by Marbin Tan
We would like to have the VMs up if there's a failure.
-
Committed by C.J. Jameson
Added tests to make sure only one of them needs to be specified for gpinitsystem to work. Signed-off-by: Marbin Tan <mtan@pivotal.io> Signed-off-by: C.J. Jameson <cjameson@pivotal.io>
-
Committed by Marbin Tan
Add a test for gppkg --migrate: gppkgs installed on the original master should be installed on the new master and on all segments. Signed-off-by: Larry Hamel <lhamel@pivotal.io> Signed-off-by: Nadeem Ghani <nghani@pivotal.io>
-
Committed by Nadeem Ghani
Signed-off-by: Marbin Tan <mtan@pivotal.io> Signed-off-by: Larry Hamel <lhamel@pivotal.io>
-
- 31 Jul 2017, 6 commits
-
-
Committed by Larry Hamel
Signed-off-by: Shoaib Lari <slari@pivotal.io>
-
Committed by Adam Lee
USE_PGXS=1 is mandatory, and there is no need to compile if not overwriting.
-
Committed by Zhenghua Lyu
1. Detect the cgroup mount point in test code.
2. Fix a bug when buflen is 0.
3. Check cgroup status on the master in gpconfig.
4. Fix coverity warnings.
-
Committed by Adam Lee
-
Committed by Zhenghua Lyu
When GPDB runs in a container, the swap and RAM values read via sysinfo are those of the host machine. To find the correct swap and RAM values in the container context, we take both the value from sysinfo and the value from cgroup into account.
-
Committed by Ming LI
Support a COPY statement that imports data files on the segments directly, in parallel. It can be used to import data files generated by "COPY ... TO ... ON SEGMENT". This commit also supports all the data file formats that "COPY ... TO" supports, processes reject limit numbers, and logs errors accordingly. Key workflow: a) For COPY FROM, nothing is changed by this commit: dispatch the modified COPY command to the segments first, then read the data file on the master and dispatch the data to the relevant segment for processing. b) For COPY FROM ON SEGMENT: on the QD, read a dummy data file, with other parts unchanged; on the QE, first process the (empty) data stream dispatched from the QD, then re-run the same workflow to read and process the local segment data file. Signed-off-by: Ming LI <mli@pivotal.io> Signed-off-by: Adam Lee <ali@pivotal.io> Signed-off-by: Haozhou Wang <hawang@pivotal.io> Signed-off-by: Xiaoran Wang <xiwang@pivotal.io>
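A hedged sketch of the round trip this enables (table name and paths are hypothetical; the `<SEGID>` token and the reject-limit clause follow the existing COPY syntax):

```sql
-- Export: each segment writes its own local file.
COPY my_table TO '/data/out/my_table_<SEGID>.csv' ON SEGMENT;

-- Import: each segment reads back its local file, in parallel,
-- with per-segment error logging and a reject limit.
COPY my_table FROM '/data/out/my_table_<SEGID>.csv' ON SEGMENT
    LOG ERRORS SEGMENT REJECT LIMIT 10 ROWS;
```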
-
- 29 Jul 2017, 6 commits
-
-
Committed by Marbin Tan
- Sort arguments alphabetically
- Add descriptions for -I and -O
-
Committed by David Yozie
-
Committed by dyozie
-
Committed by dyozie
-
Committed by David Yozie
-
Committed by Chris Hajas
Signed-off-by: Karen Huddleston <khuddleston@pivotal.io>
-
- 28 Jul 2017, 6 commits
-
-
Committed by Lisa Owen
* docs: resource groups catalog additions/updates
* address review comments from Gang and Mel
* clarify rsgqueueuduration: applies to the current query, not all
-
Committed by Chuck Litzell
* De-conflate gpperfmon & gpcc; conditionalize gpcc; remove unreferenced topics.
* Update GPAdminGuide.ditaval; remove the addition to the Admin Guide ditaval.
* Fixes from review.
-
Committed by Chris Hajas
-
Committed by Marbin Tan
Signed-off-by: Xin Zhang <xzhang@pivotal.io>
-
Committed by Shoaib Lari
Signed-off-by: Tushar Dadlani <tdadlani@pivotal.io> Signed-off-by: Larry Hamel <lhamel@pivotal.io>
-
Committed by Larry Hamel
Add a behave test to verify that the checksum configuration is preserved after a segment is recovered using gprecoverseg. Signed-off-by: Shoaib Lari <slari@pivotal.io>
-
- 27 Jul 2017, 10 commits
-
-
Committed by Kris Macoskey
This allows a user to cancel debug_sleep and be assured that ccp_destroy will still clean up any created clusters.
-
Committed by Ashwin Agrawal
Commit d50f429c added an xlog lock record, but missed the Greenplum-specific tuning, which is to add persistent table information. This caused a failure during recovery with the FATAL message "xlog record with zero persistenTID". Using xl_heaptid_set(), which calls `RelationGetPTInfo()`, makes sure PT info is populated for the xlog record.
-
Committed by Pengzhou Tang
The 'insufficient memory reserved' issue has existed for a long time; the root cause is that the default statement_mem (125MB) is not enough for the queries used by the gpcheckcat script when the regression database is huge. This commit adds STATEMENT_MEM in demo_cluster.sh to initialize GPDB with the required statement_mem, and sets statement_mem to 225MB in common.bash.
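For reference, the same override can be applied per session; this is an illustrative fragment, not the scripts' exact wording:

```sql
-- Raise the per-statement memory budget for the current session.
SET statement_mem = '225MB';
SHOW statement_mem;
```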
-
Committed by Adam Lee
It's useful and important for debugging.
-
Committed by Asim R P
-
Committed by Asim R P
The gp_inject_fault() function is now available in pg_regress, so a contrib module is not required. The test was not being run; it trips an assertion, so it is not added to greenplum_schedule.
-
Committed by Asim R P
-
Committed by Asim R P
The function gp_inject_fault() was defined in a test-specific contrib module (src/test/dtm). It is moved to a dedicated contrib module, gp_inject_fault, so all tests can now make use of it. Two pg_regress tests (dispatch and cursor) are modified to demonstrate the usage. The function is modified so that it can inject a fault in any segment, specified by dbid; no more invoking the gpfaultinjector python script from SQL files. The new module is integrated into the top-level build so that it is included in make and make install.
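A sketch of how a test might call it, based on the description above (the fault name is hypothetical, and the exact argument list is an assumption inferred from "inject a fault in any segment, specified by dbid"; consult the gp_inject_fault module for the real signature):

```sql
-- Arm an 'error' fault on the segment with dbid 2 (arguments assumed).
SELECT gp_inject_fault('checkpoint', 'error', 2);

-- ... run the statement expected to hit the fault ...

-- Disarm the fault afterwards.
SELECT gp_inject_fault('checkpoint', 'reset', 2);
```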
-
Committed by Jesse Zhang
SharedInputScan (a.k.a. "Shared Scan" in EXPLAIN) is the operator through which Greenplum implements Common Table Expression execution. It executes in two modes: writer (a.k.a. producer) and reader (a.k.a. consumer). Writers will execute the common table expression definition and materialize the output, and readers can read the materialized output (potentially in parallel). Because of the parallel nature of Greenplum execution, slices containing Shared Scans need to synchronize among themselves to ensure that readers don't start until writers are finished writing. Specifically, a slice with readers depending on writers on a different slice will block during `ExecutorRun`, before even pulling the first tuple from the executor tree. Greenplum's Hash Join implementation will skip executing its outer ("probe side") subtree if it detects an empty inner ("hash side"), and declare all motions in the skipped subtree as "stopped" (we call this "squelching"). That means we can potentially squelch a subtree that contains a shared scan writer, leaving cross-slice readers waiting forever. 
For example, with ORCA enabled, the following query:

```sql
CREATE TABLE foo (a int, b int);
CREATE TABLE bar (c int, d int);
CREATE TABLE jazz(e int, f int);

INSERT INTO bar VALUES (1, 1), (2, 2), (3, 3);
INSERT INTO jazz VALUES (2, 2), (3, 3);

ANALYZE foo;
ANALYZE bar;
ANALYZE jazz;

SET statement_timeout = '15s';

SELECT * FROM
  (
    WITH cte AS (SELECT * FROM foo)
    SELECT * FROM (SELECT * FROM cte UNION ALL SELECT * FROM cte) AS X
    JOIN bar ON b = c
  ) AS XY
  JOIN jazz ON c = e AND b = f;
```

leads to a plan that will expose this problem:

```
 QUERY PLAN
------------------------------------------------------------------------------------------------------------
 Gather Motion 3:1  (slice2; segments: 3)  (cost=0.00..2155.00 rows=1 width=24)
   ->  Hash Join  (cost=0.00..2155.00 rows=1 width=24)
         Hash Cond: bar.c = jazz.e AND share0_ref2.b = jazz.f AND share0_ref2.b = jazz.e AND bar.c = jazz.f
         ->  Sequence  (cost=0.00..1724.00 rows=1 width=16)
               ->  Shared Scan (share slice:id 2:0)  (cost=0.00..431.00 rows=1 width=1)
                     ->  Materialize  (cost=0.00..431.00 rows=1 width=1)
                           ->  Table Scan on foo  (cost=0.00..431.00 rows=1 width=8)
               ->  Hash Join  (cost=0.00..1293.00 rows=1 width=16)
                     Hash Cond: share0_ref2.b = bar.c
                     ->  Redistribute Motion 3:3  (slice1; segments: 3)  (cost=0.00..862.00 rows=1 width=8)
                           Hash Key: share0_ref2.b
                           ->  Append  (cost=0.00..862.00 rows=1 width=8)
                                 ->  Shared Scan (share slice:id 1:0)  (cost=0.00..431.00 rows=1 width=8)
                                 ->  Shared Scan (share slice:id 1:0)  (cost=0.00..431.00 rows=1 width=8)
                     ->  Hash  (cost=431.00..431.00 rows=1 width=8)
                           ->  Table Scan on bar  (cost=0.00..431.00 rows=1 width=8)
         ->  Hash  (cost=431.00..431.00 rows=1 width=8)
               ->  Table Scan on jazz  (cost=0.00..431.00 rows=1 width=8)
                     Filter: e = f
 Optimizer status: PQO version 2.39.1
(20 rows)
```

where processes executing slice1 on the segments that have an empty `jazz` will hang. We fix this by ensuring we execute the Shared Scan writer even if it's in the subtree that we're squelching.
Signed-off-by: Melanie Plageman <mplageman@pivotal.io> Signed-off-by: Sambitesh Dash <sdash@pivotal.io>
-
Committed by Andres Freund
When heap_update needs to look for a page for the new tuple version, because the current one doesn't have sufficient free space, or when columns have to be processed by the tuple toaster, it has to release the lock on the old page during that. Otherwise there'd be lock ordering and lock nesting issues. To prevent concurrent sessions from trying to update / delete / lock the tuple while the page's content lock is released, the tuple's xmax is set to the current session's xid. That unfortunately was done without any WAL logging, thereby violating the rule that no XIDs may appear on disk without a corresponding WAL record. If the database were to crash / fail over when the page level lock is released, and some activity led to the page being written out to disk, the xid could end up being reused, potentially leading to the row becoming invisible. There might be additional risks from not having t_ctid point at the tuple itself, without having set the appropriate lock infomask fields. To fix, compute the appropriate xmax/infomask combination for locking the tuple, and perform WAL logging using the existing XLOG_HEAP_LOCK record. That allows the fix to be backpatched. This issue has existed for a long time. There appear to have been partial attempts at preventing the danger, but these were never fully implemented and were removed a long time ago, in 11919160 (cf. HEAP_XMAX_UNLOGGED). In master / 9.6, there's an additional issue, namely that the visibility map's freeze bit isn't reset at that point yet. Since that's a new issue, introduced only in a892234f, it'll be fixed in a separate commit.
Author: Masahiko Sawada and Andres Freund
Reported-By: Different aspects by Thomas Munro, Noah Misch, and others
Discussion: CAEepm=3fWAbWryVW9swHyLTY4sXVf0xbLvXqOwUoDiNCx9mBjQ@mail.gmail.com
Backpatch: 9.1/all supported versions
-