- 01 Mar 2018, 16 commits
-
-
Committed by Peifeng Qiu
-
Committed by Daniel Gustafsson
When building without assertions, any storage allocated for an assertion will generate a compiler warning for unused variable or function. Wrap all such locations in the USE_ASSERT_CHECKING macro to remove warnings, or remove the need for storage in the first place.
-
Committed by Daniel Gustafsson
The inline functions were single statement functions written on a single line, which breaks the unittest mocker code when preprocessor directives are added after these functions. The reason for this is that the mocker code hunts for an ending brace on its own line. As a later commit will introduce said preprocessor directive, it's time to bite the bullet and fix this.
-
Committed by Heikki Linnakangas
-
Committed by xiong-gang
listen_addresses in postgresql.conf currently doesn't take effect: backend and postmaster listen on all addresses. From a security standpoint, users should be able to specify the listen address.
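As an illustration (the addresses are hypothetical), once the GUC is honored, a postgresql.conf entry like the following restricts which interfaces the postmaster binds to:

```
# Bind only to loopback and one internal interface, instead of all addresses
listen_addresses = 'localhost,192.168.1.10'
```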
-
Committed by Daniel Gustafsson
-
Committed by Daniel Gustafsson
oldcontext may not be set here, and if it is we will be in that context already.
-
Committed by Daniel Gustafsson
-
Committed by Daniel Gustafsson
The new -Wformat-overflow warning in GCC7 fails to understand that the buffer is in fact large enough, likely because it counts on the integers possibly being negative. Move to using snprintf() instead since that accounts for buffer size, and is really what we should have done in the first place. Also touch up the consumer of the GP_CSVOPT buffer even though it's using a buffer oversized enough to probably not cause that same warning.
-
Committed by Heikki Linnakangas
They are ignored by gpdiff, but it's nice to keep them up to date anyway, because if a test fails, these differences are also printed in regression.diffs, even though they're not causing the failure. There are some more CONTEXT lines, in more complicated cases, that are not always the same from run to run, but this is a good start.
-
Committed by Heikki Linnakangas
If an error happens in the prepare phase of two-phase commit, relay the original error back to the client, instead of the fairly opaque "Abort [Prepared] broadcast failed to one or more segments" message you got previously. A lot of things that happen during the prepare phase can legitimately fail, like checking deferred constraints, as in the 'constraints' regression test. But even without that, there can be triggers, ON COMMIT actions, etc., any of which can fail. This commit consists of several parts:

* Pass 'true' for the 'raiseError' argument when dispatching the prepare dtx command in doPrepareTransaction(), so that the error is emitted to the client.
* Bubble up an ErrorData struct, with as many fields intact as possible, to the caller when dispatching a dtx command (instead of constructing a message in a StringInfo), so that we can re-throw the message to the client with its original formatting.
* Don't throw an error in performDtxProtocolCommand() if we try to abort a prepared transaction that doesn't exist. That is business as usual, if a transaction throws an error before finishing the prepare phase.
* Suppress the "NOTICE: Releasing segworker groups to retry broadcast." message when aborting a prepared transaction.

Put together, the effect is that if an error happens during the prepare phase, the client receives a message that is largely indistinguishable from the message you'd get if the same failure happened while running a normal statement.

Fixes github issue #4530.
-
Committed by Heikki Linnakangas
I overlooked that these errors were thrown at the wrong place, after COMMIT, because of the difference in the error message.
-
Committed by Daniel Gustafsson
Commit ca4c29d4 removed the error deferring from EXPLAIN such that it attempted to return results even in case of errors, but missed the DXL case. This removes the TRY/CATCH wraps from DXL EXPLAIN to match the rest of the EXPLAIN code. Also remove the last (known) callsite with an Assertion expecting that a palloc for a statsbuffer could have returned NULL.
-
Committed by Daniel Gustafsson
When generating DXL output in EXPLAIN {ANALYZE}, make sure to catch any exceptions generated on the C++ side, to avoid a server crash on queries where ORCA fails to generate a plan. A better job can be made of catching the various possible exceptions and generating nice error messages; this closes the current hole of trivial core dumps on, for example: EXPLAIN (dxl) SELECT * FROM pg_class;
-
Committed by Kevin Yeap
Install go dep by default, and remove ~/go from GOPATH.
-
Committed by Ashwin Agrawal
This test panics the segment, so we must make sure the segment has fully recovered before moving ahead. Commit 2fea28fb tried to fix a failure due to it by moving the gp_toolkit tests ahead of it, but that's not the right solution, as some other test can hit the condition, or the condition can be hit within this test itself. Hence, move it to isolation2 and add an explicit check to validate that the segment has recovered before moving ahead.
-
- 28 Feb 2018, 9 commits
-
-
Committed by Zhenghua Lyu
Different queries in the same session may be in different resource queues. The resource queue field of the stat cache in the local hash should be cleaned after the stat message has been sent out. This commit fixes GitHub issue #4582.
-
Committed by Asim R P
Co-authored-by: David Kimura <dkimura@pivotal.io>
-
Committed by Lisa Owen
* docs - dblink works with databases of the same Greenplum major version
* cannot dblink to PostgreSQL, remove ref
-
Committed by Asim R P
Co-authored-by: David Kimura <dkimura@pivotal.io>
-
Committed by Asim RP
FTS makes updates to cluster configuration metadata tables. These updates are made within a transaction. If an error is encountered, the transaction must be aborted and all locks must be released before the FTS process exits. A new ICW test is added to simulate an error during configuration update.
Co-authored-by: David Kimura <dkimura@pivotal.io>
-
Committed by Ekta Khanna
Test added for ORCA version v2.55.4
Signed-off-by: Haisheng Yuan <hyuan@pivotal.io>
-
Committed by dyozie
-
Committed by Dhanashree Kashid
Signed-off-by: Bhuvnesh Chaudhary <bchaudhary@pivotal.io>
-
Committed by Lisa Owen
* docs - note OOM query failure when resource groups are active
* address comment from Simon; add info to the rq/rg differences table
* edits from David
-
- 27 Feb 2018, 1 commit
-
-
Committed by Kris Macoskey
Older platforms have outdated certificate stores, and the recent upstream GitHub change tightened the TLS versions allowed when clients connect.
-
- 26 Feb 2018, 1 commit
-
-
Committed by huiliang-liu
- if the data file uses "\N" as the nullas value, it is not recognized properly by gpload
- root cause: gpload quotes the nullas option, replacing '\' with '\\' as well
- solution: add a quote_no_slash function to handle the nullas option
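A minimal Python sketch of the fix (the helper bodies are assumptions; only the name quote_no_slash comes from the commit): the generic quoting helper doubles backslashes, which mangles a nullas value of `\N`, while the slash-preserving variant leaves it intact.

```python
def quote(val):
    """Generic quoting (sketch): doubles backslashes as well as quotes."""
    return "'" + val.replace("\\", "\\\\").replace("'", "''") + "'"

def quote_no_slash(val):
    """Quote for the nullas option (sketch): double only single quotes,
    leaving backslashes alone so a null marker of \\N survives untouched."""
    return "'" + val.replace("'", "''") + "'"

# The generic helper turns \N into \\N, which is no longer recognized
# as the null marker; quote_no_slash preserves it.
print(quote("\\N"))           # prints '\\N'
print(quote_no_slash("\\N"))  # prints '\N'
```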
-
- 24 Feb 2018, 3 commits
-
-
Committed by Max Yang
* Add PG_BASEBACKUP_FIXME: if there are new tablespaces, gpinitstandby fails. pg_basebackup can't copy tablespace data to the standby, because the tablespace directory is not empty on the standby.
* Check the standby tablespace directory.
* Replace function cleanupFilespaces with function cleanup_tablespace.

Author: Max Yang <myang@pivotal.io>
Author: Xiaoran Wang <xiwang@pivotal.io>
-
Committed by Ning Yu
* resgroup: make cgroup memsw.limit_in_bytes optional.
* resgroup: retry proc migration for rmdir to succeed.
* resgroup: add delay in a testcase.
* resgroup: use correct log level in cgroup ops.
-
Committed by Jamie McAtamney
This commit moves some cleanup code to finally blocks, to prevent gpdeletesystem from hanging on error cases.
Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
Co-authored-by: Nadeem Ghani <nghani@pivotal.io>
-
- 23 Feb 2018, 6 commits
-
-
Committed by Lisa Owen
-
Committed by Haisheng Yuan
In minirepro, the following query is generated:

```sql
DELETE FROM pg_statistic WHERE starelid='tpcds.catalog_returns_1_prt_27'::regclass AND staattnum=22;
INSERT INTO pg_statistic VALUES ......
```

pg_statistic only has existing stats entries for catalog tables, so for a non-catalog table the DELETE command is useless, and slows down loading the minirepro. We should NOT generate the DELETE query for non-catalog tables.
Signed-off-by: Ekta Khanna <ekhanna@pivotal.io>
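A hedged Python sketch of the decision (the function is hypothetical; the constant mirrors PostgreSQL's FirstNormalObjectId): catalog tables have OIDs below 16384, so only they can already carry pg_statistic rows worth DELETEing before the INSERTs.

```python
FIRST_NORMAL_OBJECT_ID = 16384  # OIDs below this are reserved for system catalogs

def needs_stats_delete(table_oid):
    """Emit the DELETE FROM pg_statistic only for catalog tables;
    user tables in a freshly loaded minirepro have no pre-existing
    stats rows, so the DELETE would just waste time."""
    return table_oid < FIRST_NORMAL_OBJECT_ID

print(needs_stats_delete(1259))   # pg_class, a catalog table -> True
print(needs_stats_delete(16385))  # a user table -> False
```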
-
Committed by Bhuvnesh Chaudhary
The gp_toolkit tests run a query which checks whether any GUCs have a different value on a cluster node. However, the test udf_exception_blocks_panic_scenarios is executed before it and injects a PANIC, which causes recovery, and it takes some time for all the GUCs to reach a consistent state between the segments. By that time gp_toolkit identifies the difference and fails, so move the gp_toolkit test above it.
-
Committed by Venkatesh Raghavan
GPORCA uses NOT_EXIST_SUBLINK to implement correlated left anti semijoin. Therefore we still need this in the executor for GPORCA plans. Example:

```sql
CREATE TABLE csq_r(a int) distributed by (a);
INSERT INTO csq_r VALUES (1);
CREATE OR REPLACE FUNCTION csq_f(a int) RETURNS int AS $$ select $1 $$ LANGUAGE SQL CONTAINS SQL;
explain SELECT * FROM csq_r WHERE not exists (SELECT * FROM csq_f(csq_r.a));
```

Physical plan:

```
+--CPhysicalMotionGather(master) rows:1 width:34 rebinds:1 cost:882688.037301 origin: [Grp:7, GrpExpr:14]
   +--CPhysicalCorrelatedLeftAntiSemiNLJoin("" (8)) rows:1 width:34 rebinds:1 cost:882688.037287 origin: [Grp:7, GrpExpr:13]
      |--CPhysicalTableScan "csq_r" ("csq_r") rows:1 width:34 rebinds:1 cost:431.000019 origin: [Grp:0, GrpExpr:1]
      |--CPhysicalComputeScalar rows:1 width:1 rebinds:1 cost:0.000002 origin: [Grp:5, GrpExpr:1]
      |  |--CPhysicalConstTableGet Columns: ["" (8)] Values: [(1)] rows:1 width:1 rebinds:1 cost:0.000001 origin: [Grp:1, GrpExpr:1]
      |  +--CScalarProjectList origin: [Grp:4, GrpExpr:0]
      |     +--CScalarProjectElement "csq_f" (9) origin: [Grp:3, GrpExpr:0]
      |        +--CScalarIdent "a" (0) origin: [Grp:2, GrpExpr:0]
      +--CScalarConst (1) origin: [Grp:8, GrpExpr:0]
```

```
                                   QUERY PLAN
---------------------------------------------------------------------------------
 Gather Motion 3:1 (slice1; segments: 3) (cost=0.00..882688.04 rows=1 width=4)
   Output: a
   -> Table Scan on public.csq_r (cost=0.00..882688.04 rows=1 width=4)
        Output: a
        Filter: (SubPlan 1)
        SubPlan 1 (slice1; segments: 3)
          -> Result (cost=0.00..0.00 rows=1 width=1)
               Output:
               -> Result (cost=0.00..0.00 rows=1 width=1)
                    Output: $0,
                    -> Result (cost=0.00..0.00 rows=1 width=1)
                         Output: true
 Optimizer: PQO version 2.55.2
(13 rows)
```
-
Committed by sambitesh
Prior to this commit, on calling the `RESET`/`RESET ALL` command, the value of the GUC was not updated on all the slices (already spun-up reader slices). This commit fixes the issue.
Signed-off-by: Ekta Khanna <ekhanna@pivotal.io>
-
Committed by David Kimura
There is a scenario during AO/CO delete or update where the first row number obtained is negative. The error occurs when the first row number in the aovisimap of an AO/CO table exceeds INT_MAX. There was a bug where the first row number was retrieved from the tuple as an int, even though the first row number is an int64. We fixed this by retrieving it as an int64.
Co-authored-by: Jimmy Yih <jyih@pivotal.io>
Co-authored-by: David Kimura <dkimura@pivotal.io>
-
- 22 Feb 2018, 4 commits
-
-
Committed by Venkatesh Raghavan
These underlying issues were resolved in the following PRs that were merged; the FIXMEs were not removed.
https://github.com/greenplum-db/gpdb/pull/4174/files
https://github.com/greenplum-db/gporca/pull/263
-
Committed by Venkatesh Raghavan
-
Committed by Venkatesh Raghavan
Looks right.
-
Committed by Venkatesh Raghavan
During the window merge, DQA was temporarily disabled with GPORCA. This was resolved by the following PR; here I am cleaning up a residual outdated FIXME:
https://github.com/greenplum-db/gpdb/pull/4086
The string_agg function was categorized as an ordered window aggregate in GPDB 5, so GPORCA did not support it and fell back to the planner. In GPDB 6 string_agg is a straightforward window aggregate, so GPORCA supports it.
-