- 28 Feb 2018, 4 commits
-
Committed by Ekta Khanna
Added a test for ORCA version v2.55.4.
Signed-off-by: Haisheng Yuan <hyuan@pivotal.io>
-
Committed by dyozie
-
Committed by Dhanashree Kashid
Signed-off-by: Bhuvnesh Chaudhary <bchaudhary@pivotal.io>
-
Committed by Lisa Owen
* docs - note OOM query failure when resource groups are active
* address comment from Simon; add info to the resource queue/resource group differences table
* edits from David
-
- 27 Feb 2018, 1 commit
-
Committed by Kris Macoskey
Older platforms lag behind on certificates after the recent upstream GitHub change to the allowable TLS versions for client connections.
-
- 26 Feb 2018, 1 commit
-
Committed by huiliang-liu
- If the data file contains "\N" as the null representation, gpload does not recognize it properly.
- Root cause: gpload quotes the nullas option with its generic quoting, which also replaces '\' with '\\'.
- Solution: add a quote_no_slash function to handle the nullas option.
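The idea behind such a helper can be sketched as follows (the name `quote_no_slash` comes from the commit message; the body here is an illustrative assumption, not gpload's actual code):

```python
def quote_no_slash(val):
    # Hypothetical sketch of the fix: quote the NULL AS value by escaping
    # only the single-quote character. The generic quoting path also
    # doubled backslashes ('\' -> '\\'), which turned a user-specified
    # null marker of '\N' into '\\N', so rows containing the literal
    # '\N' were no longer recognized as NULL.
    return "'" + val.replace("'", "''") + "'"
```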
-
- 24 Feb 2018, 3 commits
-
Committed by Max Yang
* Add PG_BASEBACKUP_FIXME: if there are new tablespaces, gpinitstandby fails. pg_basebackup can't copy tablespace data to the standby because the tablespace directory is not empty on the standby.
* Check the standby tablespace directory.
* Replace function cleanupFilespaces with function cleanup_tablespace.
Author: Max Yang <myang@pivotal.io>
Author: Xiaoran Wang <xiwang@pivotal.io>
-
Committed by Ning Yu
* resgroup: make cgroup memsw.limit_in_bytes optional.
* resgroup: retry proc migration so that rmdir can succeed.
* resgroup: add a delay in a testcase.
* resgroup: use the correct log level in cgroup ops.
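The first item can be illustrated with a small sketch (a Python stand-in for the actual cgroup ops, with an assumed function name and return convention): the memory.memsw.* interface files exist only when the kernel has swap accounting enabled, so the limit should be applied best-effort rather than failing resource group initialization when the file is missing.

```python
import os

def set_memsw_limit(memory_cgroup_dir, limit_bytes):
    # Illustrative only: treat memory.memsw.limit_in_bytes as optional.
    # On kernels without swap accounting the file does not exist, and
    # that must not be a hard error for resource groups.
    path = os.path.join(memory_cgroup_dir, "memory.memsw.limit_in_bytes")
    if not os.path.exists(path):
        return False  # optional limit skipped: no swap accounting
    with open(path, "w") as f:
        f.write(str(limit_bytes))
    return True
```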
-
Committed by Jamie McAtamney
This commit moves some cleanup code into finally blocks to prevent gpdeletesystem from hanging in error cases.
Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
Co-authored-by: Nadeem Ghani <nghani@pivotal.io>
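The fix pattern can be sketched as follows (the class and method names are hypothetical stand-ins, not gpdeletesystem's real objects):

```python
class FakeCluster:
    """Illustrative stand-in for the utility's cluster handle."""
    def __init__(self, fail=False):
        self.fail = fail
        self.cleaned_up = False

    def delete_data_dirs(self):
        if self.fail:
            raise RuntimeError("segment unreachable")

    def cleanup(self):
        self.cleaned_up = True

def delete_system(cluster):
    # Moving the cleanup into a finally block guarantees it runs even
    # when the delete step raises, so the utility cannot be left hanging
    # with resources still held on an error path.
    try:
        cluster.delete_data_dirs()
    finally:
        cluster.cleanup()
```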
-
- 23 Feb 2018, 6 commits
-
Committed by Lisa Owen
-
Committed by Haisheng Yuan
minirepro generates the following query:
```
DELETE FROM pg_statistic WHERE starelid='tpcds.catalog_returns_1_prt_27'::regclass AND staattnum=22;
INSERT INTO pg_statistic VALUES ......
```
Only catalog tables have existing stats entries in pg_statistic, so for a non-catalog table the DELETE command is useless and slows down loading the minirepro. We should NOT generate the DELETE query for non-catalog tables.
Signed-off-by: Ekta Khanna <ekhanna@pivotal.io>
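The intended behavior can be sketched like this (function and parameter names are hypothetical, not minirepro's actual code):

```python
def stats_load_statements(table, is_catalog, values_sql):
    # Illustrative sketch: in a fresh minirepro target database only
    # catalog tables can already have rows in pg_statistic, so the
    # DELETE is emitted only for them; user tables get the INSERT alone.
    stmts = []
    if is_catalog:
        stmts.append(
            "DELETE FROM pg_statistic WHERE starelid = '%s'::regclass;" % table)
    stmts.append("INSERT INTO pg_statistic VALUES (%s);" % values_sql)
    return stmts
```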
-
Committed by Bhuvnesh Chaudhary
The gp_toolkit tests run a query that checks whether any GUCs have different values across the cluster nodes. However, the test udf_exception_blocks_panic_scenarios is executed before it and injects a PANIC, which causes recovery, and it takes some time for all the GUCs to reach a consistent state across the segments. By that time gp_toolkit sees a difference and fails, so move the gp_toolkit test above it.
-
Committed by Venkatesh Raghavan
GPORCA uses NOT_EXIST_SUBLINK to implement correlated left anti semijoin, so we still need this in the executor for GPORCA plans. Example:
```sql
CREATE TABLE csq_r(a int) DISTRIBUTED BY (a);
INSERT INTO csq_r VALUES (1);
CREATE OR REPLACE FUNCTION csq_f(a int) RETURNS int AS $$ SELECT $1 $$ LANGUAGE SQL CONTAINS SQL;
EXPLAIN SELECT * FROM csq_r WHERE NOT EXISTS (SELECT * FROM csq_f(csq_r.a));
```
Physical plan:
```
+--CPhysicalMotionGather(master)   rows:1   width:34   rebinds:1   cost:882688.037301   origin: [Grp:7, GrpExpr:14]
   +--CPhysicalCorrelatedLeftAntiSemiNLJoin("" (8))   rows:1   width:34   rebinds:1   cost:882688.037287   origin: [Grp:7, GrpExpr:13]
      |--CPhysicalTableScan "csq_r" ("csq_r")   rows:1   width:34   rebinds:1   cost:431.000019   origin: [Grp:0, GrpExpr:1]
      |--CPhysicalComputeScalar   rows:1   width:1   rebinds:1   cost:0.000002   origin: [Grp:5, GrpExpr:1]
      |  |--CPhysicalConstTableGet Columns: ["" (8)] Values: [(1)]   rows:1   width:1   rebinds:1   cost:0.000001   origin: [Grp:1, GrpExpr:1]
      |  +--CScalarProjectList   origin: [Grp:4, GrpExpr:0]
      |     +--CScalarProjectElement "csq_f" (9)   origin: [Grp:3, GrpExpr:0]
      |        +--CScalarIdent "a" (0)   origin: [Grp:2, GrpExpr:0]
      +--CScalarConst (1)   origin: [Grp:8, GrpExpr:0]

                                    QUERY PLAN
---------------------------------------------------------------------------------
 Gather Motion 3:1  (slice1; segments: 3)  (cost=0.00..882688.04 rows=1 width=4)
   Output: a
   ->  Table Scan on public.csq_r  (cost=0.00..882688.04 rows=1 width=4)
         Output: a
         Filter: (SubPlan 1)
         SubPlan 1  (slice1; segments: 3)
           ->  Result  (cost=0.00..0.00 rows=1 width=1)
                 Output:
                 ->  Result  (cost=0.00..0.00 rows=1 width=1)
                       Output: $0,
                       ->  Result  (cost=0.00..0.00 rows=1 width=1)
                             Output: true
 Optimizer: PQO version 2.55.2
(13 rows)
```
-
Committed by sambitesh
Prior to this commit, on running the `RESET`/`RESET ALL` command, the GUC value was not updated on all slices (already spun-up reader slices). This commit fixes the issue.
Signed-off-by: Ekta Khanna <ekhanna@pivotal.io>
-
Committed by David Kimura
There is a scenario during AO/CO delete or update where the first row number obtained is negative. The error occurs when the first row number in the aovisimap of an AO/CO table exceeds the int maximum: the first row number is an int64, but it was retrieved from the tuple as an int. We fixed this by retrieving it as an int64.
Co-authored-by: Jimmy Yih <jyih@pivotal.io>
Co-authored-by: David Kimura <dkimura@pivotal.io>
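The truncation can be demonstrated with a small sketch (a Python stand-in for the C accessors; the function names are illustrative):

```python
import struct

def read_as_int32(first_row_number):
    # Simulates the bug: interpreting only the low 32 bits of the int64
    # visimap first-row-number as a signed int. Values past INT32_MAX
    # (2**31 - 1) wrap around to negative numbers.
    return struct.unpack("<i", struct.pack("<q", first_row_number)[:4])[0]

def read_as_int64(first_row_number):
    # The fix: keep the full 64-bit width end to end.
    return struct.unpack("<q", struct.pack("<q", first_row_number))[0]
```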
-
- 22 Feb 2018, 5 commits
-
Committed by Venkatesh Raghavan
The underlying issues were resolved in the following merged PRs, but the FIXMEs were not removed:
https://github.com/greenplum-db/gpdb/pull/4174/files
https://github.com/greenplum-db/gporca/pull/263
-
Committed by Venkatesh Raghavan
-
Committed by Venkatesh Raghavan
Looks right.
-
Committed by Venkatesh Raghavan
During the window merge, DQA was temporarily disabled with GPORCA. That was resolved by the following PR; this PR cleans up a residual outdated FIXME: https://github.com/greenplum-db/gpdb/pull/4086
The string_agg function was categorized as an ordered window aggregate in GPDB 5, so GPORCA did not support it and fell back to the planner. In GPDB 6, string_agg is a straightforward window aggregate, so GPORCA supports it.
-
Committed by Venkatesh Raghavan
-
- 21 Feb 2018, 3 commits
-
Committed by mkiyama
-
Committed by Bhuvnesh Chaudhary
-
Committed by Bhuvnesh Chaudhary
-
- 19 Feb 2018, 6 commits
-
Committed by Ashwin Agrawal
To avoid flakiness in CI due to FTS marking segments down (this test panics the segments), it is better to pause FTS during this test.
-
Committed by Ashwin Agrawal
All other tests were moved to ICW. The vacuum-related tests will soon be extinct due to the rewrite of vacuum in the 9.0 merge, or need to be rewritten.
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
These scenarios were covered in TINC a little more exhaustively. The cases are important, as they were added while fixing bugs in the DTM area that used to cause inconsistencies between master and segments, mostly due to dangling prepared transactions.
-
Committed by Ashwin Agrawal
The version in TINC has been broken ever since sub-transactions were correctly dispatched to segments. Wrote new tests forcing a gang reset during commit-prepared and crossing MAX_ON_EXITS transactions.
-
Committed by Ashwin Agrawal
The exact same tests are already covered by alter_table_aocs2.sql in ICW.
-
- 18 Feb 2018, 1 commit
-
Committed by Mel Kiyama
* docs: gpbackup/gprestore updates - remove experimental notes, update email configuration information, add return code information
* docs: gpbackup/gprestore - updated email config file and email example, fixed typos
* docs: gpbackup/gprestore GA - update options: -globals to -with-globals, -redirect to -redirect-db; missed one -backup-dir
* docs: gpbackup/gprestore email notification fixes - the default for the status section is to not send an email notification; update the email notification example based on the fixes
* docs: gpbackup/gprestore - add statement that gprestore creates a database by cloning template0
* docs: gpbackup/gprestore - review comment updates, new/changed features. Admin Guide: updated email notification information, updated gpbackup/gprestore output, added report file information, fixed the partition table filtering example, corrected gpbackup schema/table filtering support, updated the restore limitation (table indexes, rules, and triggers are restored). gpbackup doc: changed the gpbackup option name from -backupdir to -backup-dir, corrected the description of parallel backup. gprestore doc: changed the gprestore option name from -createdb to -create-db, updated the description of the restore operation, updated the restore description when the schema exists in the database, updated the restore limitation (table indexes, rules, and triggers are restored)
* docs: gpbackup/gprestore late updates - new restore options, clarified leaf partition backup/restore
* docs: fixed typos and issues from review comments
-
- 17 Feb 2018, 4 commits
-
Committed by Jamie McAtamney
Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
Co-authored-by: Chris Hajas <chajas@pivotal.io>
-
Committed by C.J. Jameson
Calling something "terraform2" or similar refers to the second cluster; anchor tags for stanzas that operate on both clusters are "two clusters".
- [ ] backport to 5X_STABLE
- [ ] ~~set-pipeline when merged~~ no applicable change to pipeline
[ci skip] Follow-on to #4546
Authored-by: C.J. Jameson <cjameson@pivotal.io>
-
Committed by mkiyama
-
Committed by Dhanashree
Disable the exhaustive optimal join order search: qp_derived_table contains join-intensive queries, and when run with ORCA the optimization time is higher than the planner's. Since there are no EXPLAIN tests in qp_derived_table, we can disable the exhaustive search in ORCA and make this test run faster.
-
- 16 Feb 2018, 6 commits
-
Committed by Jimmy Yih
After filerep, filespaces, and persistent tables were removed, we disabled the gprecoverseg Behave tests, which needed gprecoverseg to be refactored before they could run. gprecoverseg is now in a usable state, so it's time to fix these Behave tests and have them running again. Some refactoring, workarounds, and test removal were needed, detailed here.
Refactoring:
1. The tests used gpfaultinjector to cause a mirror or primary down situation. This is not really needed; just killing the segments is good enough. Also, injecting a fault on a mirror doesn't really do anything anymore, because it would not trigger while the mirror is in recovery mode.
2. The checksum test did not look correct, so we refactored it to what we think it was supposed to look like.
Workaround: Running incremental recovery after killing a primary to trigger failover will not work without pg_rewind. These calls have been changed to use full recovery, similar to how gprecoverseg rebalancing works right now. Once pg_rewind is introduced, they should be changed back to incremental recovery.
Test removal:
1. A scenario checked for failure when there were corrupted changetracking logs (if corrupted, a full recovery must be run). We delete this test since changetracking logs are no longer a thing. However, this scenario is very similar to our src/test/walrep missing_xlogs test. That might be good enough as a low-level replacement, but we may want to add a full end-to-end scenario back to these Behave tests.
2. A scenario checked that gprecoverseg would not recover segments with persistent rebuild inconsistencies. Persistent tables no longer exist, so the scenario is okay to remove.
Co-authored-by: Jimmy Yih <jyih@pivotal.io>
Co-authored-by: David Kimura <dkimura@pivotal.io>
-
Committed by Jimmy Yih
This was incorrectly modified when filespaces were removed. It was probably not caught because the gprecoverseg Behave tests were unusable. Now that they work, this issue was caught for us to fix.
Co-authored-by: Jimmy Yih <jyih@pivotal.io>
Co-authored-by: David Kimura <dkimura@pivotal.io>
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
Remove the test from TINC and add a simple check for the same in the existing appendonly and aocs tests.
-
Committed by Ashwin Agrawal
The test validates that if the order of partitions in pg_inherits doesn't match between master and segments, it doesn't introduce an OID mismatch between master and segment. Given the new way of dispatching OIDs from master with namespace and name, the order of entries in the pg_inherits table doesn't matter anyway, so the test can be deleted.
-
Committed by Ashwin Agrawal
Based on history, the test was added for a bug where a debug log would cause a buffer overflow and corrupt global memory in production when the transaction id is 9 characters or more. That code is long gone, and any such issues should be reported by Coverity and fixed through it. The current test is also pretty heavy, as it performs pg_resetxlog on each and every primary and mirror to bump the transaction id to 9 digits.
-