- 13 Feb 2019, 5 commits
-
-
Submitted by Adam Berlin
-
Submitted by Ashwin Agrawal
The crash_recovery_dtm test has a scenario intended to verify that if a QE undergoes crash recovery after writing the prepare record but before responding to the QD, abort processing still completes correctly. The test used the GUC `debug_abort_after_segment_prepared` to PANIC all QEs at that specific point during a DELETE, and the next test then executed a SELECT query to validate that the DELETE was aborted. The flakiness came from the SELECT query sometimes executing while PANIC processing was still underway, since the test had no way to wait until the PANIC and restart completed before running the SELECT. Now the test instead uses a fault injector to sleep at the intended point and uses `pg_ctl restart -w` to make sure recovery has completed, and only after that is the SELECT query executed. As a result, the test-only GUC `debug_abort_after_segment_prepared` and its related code are removed.
-
Submitted by Ekta Khanna
The explain_format tests validate memory-related information, but the printing of that information is not stable: it varies between ORCA and the planner, between assert-enabled and disabled builds, and between a query reusing a slice and running in a fresh session. Furthermore, future modifications unrelated to explain formatting could cause this test to fail. Hence, only the minimal explain-format validation currently known to be stable is retained for this test. A better alternative needs to be found for full validation of explain formatting; the SQL-based approach seems too fragile for it.

Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
-
Submitted by David Kimura
Currently, exclusion constraints do not work correctly in an MPP environment. For example, if the exclusion constraint is on a column that is not the table's distribution key, it is possible to get wrong results. The following statements should give 1 row, because the first tuple should exclude the second:

```
CREATE TABLE t(a int, b int, EXCLUDE (b WITH =)) DISTRIBUTED BY (a);
INSERT INTO t VALUES (1, 1), (2, 1);
```

However, that is not currently the case if the distribution key hashes the rows to different segments. This commit removes the exclusion constraints feature entirely until there is a way to coordinate exclusion constraints between the segments.

Co-authored-by: Adam Berlin <aberlin@pivotal.io>
Co-authored-by: Melanie Plageman <melanieplageman@gmail.com>
Co-authored-by: David Kimura <dkimura@pivotal.io>
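The problem can be illustrated with a small sketch. This is a toy model, not Greenplum's actual cdbhash code; the segment count and placement function are assumptions for illustration only. The two rows conflict globally on `b`, but they hash to different segments on `a`, and each segment checks the constraint against its local rows only:

```python
# Toy stand-in for hash distribution on the key column 'a'
# (NOT Greenplum's cdbhash; segment count chosen for the example).
NUM_SEGMENTS = 2

def segment_for(dist_key):
    # CPython's hash() of a small int is the int itself, so placement
    # here is deterministic for this example.
    return hash(dist_key) % NUM_SEGMENTS

rows = [(1, 1), (2, 1)]  # the (a, b) rows from the example INSERT

# Distribute rows to segments by hashing the distribution key 'a'.
placement = {}
for a, b in rows:
    placement.setdefault(segment_for(a), []).append((a, b))

def local_conflict(seg_rows):
    # A segment can only notice duplicate 'b' values among its own rows.
    bs = [b for _, b in seg_rows]
    return len(set(bs)) < len(bs)

any_local_conflict = any(local_conflict(r) for r in placement.values())

all_bs = [b for _, b in rows]
global_conflict = len(set(all_bs)) < len(all_bs)

print(any_local_conflict, global_conflict)  # -> False True
```

No single segment sees the duplicate `b = 1`, so the constraint passes locally even though it is violated globally; this is the coordination gap the commit cites as the reason for removing the feature.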
-
Submitted by David Kimura
We noticed a case in CI where it seemed to take longer than 30 seconds to promote the mirror during recovery.

Co-authored-by: Melanie Plageman <mplageman@pivotal.io>
-
- 12 Feb 2019, 19 commits
-
-
Submitted by Heikki Linnakangas
In plans with a Nested Loop join on the inner side of another Nested Loop join, the planner could produce a plan where a Motion node was rescanned. That produced an error at execution time:

ERROR: illegal rescan of motion node: invalid plan (nodeMotion.c:1604) (seg0 slice4 127.0.0.1:40000 pid=27206) (nodeMotion.c:1604)
HINT: Likely caused by bad NL-join, try setting enable_nestloop to off

Make sure we add a Materialize node to shield the Motion node from rescanning in such cases. While we're at it, add an explicit flag to MaterialPaths and plans to indicate that the Material node was added to shield the child node from rescanning. There was a weaker check in ExecInitMaterial itself, which just tested whether the immediate child was a Motion node, but that feels sketchy; what if there's a Result node in between, for example? However, I kept the direct check for a Motion node too, because I'm not sure whether there are other places that add Material nodes on top of Motions, aside from the call in create_nestloop_path() that this fixes. ORCA probably does that, at least.

Fixes https://github.com/greenplum-db/gpdb/issues/6769
Reviewed-by: Pengzhou Tang <ptang@pivotal.io>
Reviewed-by: Paul Guo <pguo@pivotal.io>
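The shielding rule described above can be sketched as follows. This is illustrative pseudocode, not the planner's actual data structures: the `Plan` class and its fields are invented for the sketch, with `shield_child_from_rescan` standing in for the explicit flag the commit adds to Material paths and plans.

```python
# Hypothetical plan-tree sketch (NOT Greenplum planner code) of the rule:
# if a rescanned subplan contains a Motion node anywhere below, wrap it
# in a Materialize node so the Motion itself is never rescanned.

class Plan:
    def __init__(self, kind, child=None, shield_child_from_rescan=False):
        self.kind = kind
        self.child = child
        self.shield_child_from_rescan = shield_child_from_rescan

def contains_motion(plan):
    # Walk the (linear, for this sketch) child chain looking for a Motion.
    # Unlike checking only the immediate child, this still finds a Motion
    # with e.g. a Result node sitting in between.
    while plan is not None:
        if plan.kind == "Motion":
            return True
        plan = plan.child
    return False

def shield_for_rescan(inner_plan):
    """Wrap in Materialize if a rescan would reach a Motion node."""
    if contains_motion(inner_plan):
        return Plan("Material", inner_plan, shield_child_from_rescan=True)
    return inner_plan
```

For example, the inner side of a nested loop that is just a `Motion` over a `SeqScan` gets wrapped in a `Material` node with the flag set, while a plain `SeqScan` is returned unchanged.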
-
Submitted by Heikki Linnakangas
update_mergeclause_eclasses() asserts that 'left_ac' and 'right_ac' are not NULL, so there is no point in calling it only when 'left_ac' is not NULL: we should call it unconditionally, like the other similar functions in the same file do. I can't come up with a test case that would trip the assertion; the ECs have been correctly set up by the time we get here. And I haven't been able to come up with a query where the lack of the call would matter, although I think it is potentially a problem. So, just fix it. Also remove the redundant assertions on left_ac and right_ac being not NULL; update_mergeclause_eclasses() asserts that, too.

Fixes https://github.com/greenplum-db/gpdb/issues/5372
Reviewed-by: Melanie Plageman <mplageman@pivotal.io>
-
Submitted by Daniel Gustafsson
These variables have not been used by any caller since commit 4eaeb7bc, so remove them.

Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>
Reviewed-by: Adam Berlin <aberlin@pivotal.io>
-
Submitted by Heikki Linnakangas
cdb_util.sql was installed by "make install", but it was not loaded by initdb.

Reviewed-by: Venkatesh Raghavan <vraghavan@pivotal.io>
Reviewed-by: Daniel Gustafsson <dgustafsson@pivotal.io>
-
Submitted by Lisa Owen
* docs - appendoptimized is an alias for the appendonly storage option
* option names lower case, add legacy, misc edits
-
Submitted by David Yozie
-
Submitted by Alexandra Wang
Just checking gp_stat_replication at the end of the test is not enough; we also need to validate that gp_segment_configuration reflects that the primary and mirror are in sync.
-
Submitted by Mel Kiyama
* docs - update compression information
  - Add a topic that lists features that support configuring compression.
  - Update the gp_workfile_compression GUC. Remove the requirement to install zstd.
* docs - update compression information based on review comments
  - Changed the description to point to the feature/utility for specific compression information.
  - Added general, Pivotal-specific, and OSS-specific information about compression requirements.
* docs - compression information - review edit
-
Submitted by Bhuvnesh Chaudhary
-
Submitted by Ashwin Agrawal
The missing_xlog test brings the mirror down and fiddles with its state. Before this commit it only validated that the mirror is up, but did not make sure it is back in sync. The tests that follow assume they start with all primaries and mirrors in sync, so every test should end leaving primaries and mirrors up and in sync.

Co-authored-by: Alexandra Wang <lewang@pivotal.io>
-
Submitted by Bhuvnesh Chaudhary
Bump ORCA version to v3.27.0

Signed-off-by: Rahul Iyer <riyer@pivotal.io>
-
Submitted by Bhuvnesh Chaudhary
-
Submitted by Ashwin Agrawal
The uao_crash_compaction_column test needs to wait for a record to be replayed on the mirror before checking. Depending on the workload generated by the tests, this can take a long time, so move the test to run at the start of the schedule to reduce that wait. Also, to reduce the flakiness seen on failure, increase the number of retries the test makes while checking whether replay_location = flush_location on the mirror, from 1000 to 5000.

Co-authored-by: Alexandra Wang <lewang@pivotal.io>
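The wait-and-retry pattern the test relies on can be sketched like this. The helper and its callback are hypothetical; the real test polls `gp_stat_replication` via SQL rather than calling a Python function.

```python
import time

def wait_for_replay(get_locations, max_retries=5000, delay=0.0):
    """Poll until the mirror's replay_location equals flush_location.

    get_locations is a hypothetical callback returning the pair
    (replay_location, flush_location); max_retries bounds the loop,
    mirroring the retry count the commit raises from 1000 to 5000.
    """
    for _ in range(max_retries):
        replay_location, flush_location = get_locations()
        if replay_location == flush_location:
            return True
        time.sleep(delay)
    return False
```

Bounding the retries keeps a permanently lagging mirror from hanging the test forever, while the larger bound gives a busy mirror enough time to catch up.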
-
Submitted by Chris Hajas
Co-authored-by: Chris Hajas <chajas@pivotal.io>
Co-authored-by: Shreedhar Hardikar <shardikar@pivotal.io>
-
Submitted by David Sharp
Co-authored-by: David Sharp <dsharp@pivotal.io>
Co-authored-by: Bradford D. Boyle <bboyle@pivotal.io>
-
Submitted by Nandish Jayaram
Co-authored-by: Nandish Jayaram <njayaram@pivotal.io>
Co-authored-by: Sambitesh Dash <sdash@pivotal.io>
-
Submitted by Ashwin Agrawal
This commit fixes a PANIC on the mirror with "WAL contains references to invalid pages" during xlog replay of a truncate record with offset 0. For AO tables, as for heap, the primary creates the file first and then writes the xlog record for the creation. Hence, if a failure happens on the primary just after creating the file, the file can exist on the primary without the xlog record ever having been written. This creates a situation where VACUUM can generate a truncate record based on an aoseg entry with EOF 0 for a file that is present on the primary but was never created on the mirror, so during replay the mirror may not have the file. Therefore, avoid adding an entry to the invalid-pages hash table for a truncate at offset zero (EOF=0). This avoids the mirror PANIC, since truncating to zero is equivalent to the file not being present anyway.

Co-authored-by: Alexandra Wang <lewang@pivotal.io>
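The decision rule of the fix can be sketched as follows. This is an illustrative model, not the actual xlog replay code; the function name and parameters are invented for the sketch.

```python
# Hypothetical sketch of the replay-time decision the commit describes:
# a truncate record at offset 0 is equivalent to the file being absent,
# so a missing file on the mirror should NOT be registered as an
# "invalid page" reference (which would PANIC at the end of recovery).

def should_register_invalid_page(record_kind, truncate_offset,
                                 file_exists_on_mirror):
    if file_exists_on_mirror:
        return False  # file is there; replay proceeds normally
    if record_kind == "truncate" and truncate_offset == 0:
        return False  # truncating to EOF=0 == file absent; safe to skip
    return True       # genuinely missing data; flag it for the PANIC check
```

Only the truncate-to-zero case is exempted; a truncate record with a nonzero offset against a missing file still indicates real WAL inconsistency and is still tracked.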
-
Submitted by Ashwin Agrawal
This was spotted by Pengzhou Tang during code inspection. I don't think it had any ill effect, but it is definitely unnecessary, so fix it.
-
Submitted by David Krieger
This commit adds an option for the caller to pass in a string to allow extra logging in RemoteOperation. This is currently unused in MASTER, but we are forward-porting it from 5X_STABLE to keep MASTER in sync.

Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
Co-authored-by: Shoaib Lari <slari@pivotal.io>

Cherry-picked from the following three 5X_STABLE commits: 38235b4b 37c8ec01 dde6ee08
-
- 11 Feb 2019, 14 commits
-
-
Submitted by Adam Berlin
Clean up the things put inside the tablespace dir, not the dir itself.
-
Submitted by Adam Berlin
Drop the database and table to fix gpcheckcat.
-
Submitted by Adam Berlin
Committed generated code.
-
Submitted by Adam Berlin
Set up the new tests in the isolation2_schedule.
-
Submitted by Adam Berlin
Use template files to find the location of the tablespace directory.
-
Submitted by Adam Berlin
Handle the force-overwrite scenario for --target-gp-dbid.
-
Submitted by Adam Berlin
-
Submitted by Adam Berlin
-
Submitted by Adam Berlin
-
Submitted by Adam Berlin
-
Submitted by Adam Berlin
The gp tablespace has already been created by this point.

Co-authored-by: Georgios Kokolatos <gkokolatos@pivotal.io>
-
Submitted by Heikki Linnakangas
The >= 9.4 version of the query to get all relations was missing the quals that filter out the auxiliary heaps and indexes of bitmap indexes. The quals were added in commit 9cafd3fe, but got lost from the latest query in the 9.4 merge. Put them back. Also add missing fields to the 9.1 query, although this is just pro forma, because there has been no released version of GPDB based on 9.1.

Fixes https://github.com/greenplum-db/gpdb/issues/6900
-
Submitted by Heikki Linnakangas
The code in CTranslatorQueryToDXL::NoteDistributionPolicyOpclasses() would sometimes crash, or throw an error leading to a fallback to the postgres planner, if a relcache invalidation happened after fetching Relation->rd_cdbpolicy into a local variable. To fix, change relcache invalidation so that it does not replace rd_cdbpolicy when there has been no actual change to the relation's policy. This is similar to how rd_rel is handled. Since we're holding a lock on the table, the policy cannot change while we're looking at it, so this doesn't just reduce the odds; it fixes the issue for real.

Also, in checkPolicyForUniqueIndex(), allocate rd_cdbpolicy in the correct memory context. All the other places that call GpPolicyReplace() to replace the rd_cdbpolicy field in the relcache entry got this right. I think we got away with this until now because a relcache invalidation happened soon afterwards in any command that called this function, which caused the rd_cdbpolicy in the wrong context to be freed and re-allocated in the correct one. That held until the previous change, which made relcache invalidation stop doing that.

Debugged-by: Ekta Khanna <ekhanna@pivotal.io>
Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>
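The invalidation rule can be sketched as follows. This is an illustrative model, not the relcache code itself; the function and its arguments are invented for the sketch.

```python
# Hypothetical sketch of the rule the commit introduces: on a relcache
# rebuild, keep the existing rd_cdbpolicy object when the policy has not
# actually changed, so a concurrent reader that already fetched the old
# pointer into a local variable keeps pointing at valid memory
# (mirroring how rd_rel is handled).

def rebuild_policy(old_policy, new_policy):
    """Return the policy object the rebuilt relcache entry should carry."""
    if old_policy is not None and old_policy == new_policy:
        return old_policy  # reuse: readers' saved pointers remain valid
    return new_policy      # real change: swap in the new policy
```

Reusing the identical object on a no-op rebuild is what closes the window where a reader's saved pointer could be freed out from under it.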
-
Submitted by Daniel Gustafsson
In Greenplum 5.X the recursive CTE feature was hidden behind a GUC, as it wasn't deemed of production quality just yet. Commit 20152cbf removed that GUC in order to make stabilization work easier, but there are still enough rough edges that recursive CTE should not be a feature that is on by default. This brings back the GUC under the same name in order to be backwards compatible, even though "prototype" is a bit misleading at this point. In order for the cluster, and the associated tools, to work, this also turns the GUC on/off as required when there are recursive queries in the toolchain. Also adds a test and tidies up a few comments in the surrounding code. This is another attempt at this; the previous coding was reverted in ea57a7aa.

Reviewed-by: Venkatesh Raghavan <vraghavan@pivotal.io>
Reviewed-by: Georgios Kokolatos <gkokolatos@pivotal.io>
-
- 09 Feb 2019, 2 commits
-
-
Submitted by Lisa Owen
-
Submitted by Mark Sliva
Previously, when running gpaddmirrors, the -a flag was ignored when the cluster was non-standard.

Co-authored-by: Jimmy Yih <jyih@pivotal.io>
-