- 04 Feb 2017, 3 commits
-
-
Committed by David Yozie
* adding source for best practices doc
* removing pivotal copyright
* adding missing graphics files
-
Committed by Marbin Tan
-
Committed by Ashwin Agrawal
-
- 03 Feb 2017, 10 commits
-
-
Committed by Heikki Linnakangas
The query in hashjoin_spill was choosing a different plan with ORCA, where the "other" side of the join was hashed, rather than the expected one, and therefore it did not spill to disk as expected. Turn the join in the query into an OUTER JOIN to force the join order.

In the materialize_spill and sisc_mat_sort tests, the test queries are tailored to produce a particular plan with a Materialize node that spills. ORCA chose a different plan that didn't do that. I was not able to quickly formulate the query in a way that would produce the same result with and without ORCA, so just force ORCA off for the test. The point of the test is to test workfile behavior at runtime, in the executor, so that should be OK.

Likewise, the sisc_sort_spill test was producing a different plan with ORCA, without a Sort, so it wasn't testing what it was supposed to test. Disable ORCA for sisc_sort_spill too.

The helper function in the sisc_mat_sort tests was producing "insufficient memory reserved for statement" errors with ORCA. Not sure why, but I bumped statement_mem slightly to no longer hit that limit. It still spills to disk, which is the point.
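The test adjustments described above boil down to two GUC changes; a minimal sketch, assuming illustrative values (`optimizer` and `statement_mem` are real GPDB settings, the exact values here are hypothetical):

```sql
-- Force the Postgres planner instead of ORCA so the test gets the intended
-- spilling plan, and bump per-statement memory slightly to dodge the
-- "insufficient memory reserved for statement" error.
SET optimizer = off;
SET statement_mem = '1280kB';  -- exact value is illustrative
```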
-
Committed by Heikki Linnakangas
There's no need to allocate the ExecWorkFile struct in TopMemoryContext. We don't rely on a callback to clean them up anymore; they will be closed by the normal ResourceOwner and MemoryContext mechanisms. Otherwise, the memory allocated for the struct is leaked on abort. In passing, make the error handling more robust.

Also modify tuplestorenew.c to allocate everything in a MemoryContext, rather than using gp_malloc() directly. The ResourceOwner and MemoryContext mechanisms will now take care of cleanup on abort, instead of the custom end-of-xact callback.

Remove the execWorkFile mock test. The only thing it tested was that ExecWorkFile was allocated in TopMemoryContext. That didn't seem very interesting to begin with, and the assumption is now false anyway. We could've changed the test to check that it's allocated in CurrentMemoryContext instead, but I don't see the point.

Per report in private from Karthikeyan Rajaraman. Test cases also by him, with some cleanup by me.
-
Committed by Jimmy Yih
This commit cleans up some tinc.py calls, updates project names, and splits the cs-walrep test into two targets for CI test parallelization.
-
This is fixed by commit d5463511.
-
Committed by Ashwin Agrawal
-
Committed by Daniel Gustafsson
The GPOS library has been imported into the GPORCA repository so there is no need to install it separately anymore.
-
Committed by Marbin Tan
-
Committed by Jimmy Yih
This fixes a self-deadlock scenario in filerep postmaster reset. We encountered a situation in our testing where injecting a PANIC on the primary to force a postmaster reset would cause the mirror to hang while waiting for a child backend process to exit. This mirror backend process sends a message to its postmaster to start terminating backend child processes and then runs proc_exit. The mirror postmaster will quickly act on the message and start sending exit signals to its backend processes. In rare cases, the mirror backend process that messages the mirror postmaster can encounter a self-deadlock where authdie, from the SIGQUIT sent by its postmaster, is called before its natural proc_exit finishes.

This self-deadlock window can also occur when a psql connection is started and the forked backend process gets a bad status from ProcessStartupPacket. This results in a proc_exit while SIGQUIT is assigned to authdie instead of quickdie, so a very precisely timed SIGQUIT could cause the process to self-deadlock.
-
Committed by Abhijit Subramanya
The function `AppendOnlyCompaction_GetHideRatio()` incorrectly truncated the fractional part while calculating the ratio of hidden tuples, which caused the file to not be compacted (e.g., if the calculated ratio was 10.1% and the GUC was set to 10%, the segfile would not be compacted). This patch correctly rounds up the calculated value to fix this issue.
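A toy sketch of the bug above (Python, not the actual C code; the strict threshold comparison and the percentage arithmetic are assumptions for illustration):

```python
import math

THRESHOLD_PCT = 10  # stands in for the GUC; comparison assumed to be strict

def should_compact_truncated(hidden, total):
    # old behavior: the fractional part of the hide ratio is dropped
    return int(hidden * 100 / total) > THRESHOLD_PCT

def should_compact_rounded(hidden, total):
    # fixed behavior: round the ratio up before comparing
    return math.ceil(hidden * 100 / total) > THRESHOLD_PCT

# A segfile with 10.1% hidden tuples should be compacted against a 10% GUC,
# but truncation turns 10.1 into 10 and the check fails.
print(should_compact_truncated(101, 1000))  # False
print(should_compact_rounded(101, 1000))    # True
```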
-
Committed by Ashwin Agrawal
Context: gp_fastsequence is used to generate and keep track of row numbers for AO and CO tables. Row numbers for AO/CO tables act as a component of the TID, which is stored in index tuples and used during index scans to look up the intended tuple. Hence this number must be a monotonically increasing value. It also must not roll back when an insert/update transaction on an AO/CO table aborts, as reusing row numbers across aborted transactions would yield wrong results for index scans. Also, entries in gp_fastsequence must exist only for the lifespan of the corresponding table.

Change: Given those special needs, reserved entries in gp_fastsequence are now created as part of create table itself instead of deferring their creation to insert time. Insert within the same transaction as create table is the only scenario that needs coverage from these pre-created entries; the reserved entries hence mean the entry for segfile 0 (used by CTAS or ALTER) and segfile 1 (used by insert within the same transaction as create). All other entries continue to use frozen inserts to gp_fastsequence, as they can only happen after the create table transaction has committed.

This change in logic leverages MVCC to handle cleanup of gp_fastsequence entries, and lets us get rid of the special recovery and abort code performing frozen deletes. With that code gone, this fixes issues like:
1] `REINDEX DATABASE` or `REINDEX TABLE pg_class` hangs on segment nodes if it encounters an error after Prepare Transaction.
2] Dangling gp_fastsequence entries in the scenario where a transaction created an AO table, inserted tuples, and aborted after the prepare phase was complete. To clean up gp_fastsequence, we must open the relation and perform a frozen heap delete to mark the entry as invisible. But if the backend performing the abort prepared is not connected to the same database, then the delete operation cannot be done and leaves dangling entries.

This is the output of helpful interaction with Heikki Linnakangas and Asim R P.
See discussion on gpdb-dev, thread 'reindex database abort hang': https://groups.google.com/a/greenplum.org/forum/#!topic/gpdb-dev/ASml6lN0qRE
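The contract described in the commit above can be modeled with a toy allocator (a Python sketch, not GPDB code; the class and method names are invented for illustration): reservations are monotonic and are never rolled back, so row numbers are never reused across aborted transactions.

```python
class ToyFastSequence:
    """Toy model of the gp_fastsequence contract: row-number reservations
    survive transaction aborts, because aborted numbers may already be
    embedded in index TIDs."""

    def __init__(self):
        self.last = 0

    def reserve(self, n):
        # The bump is effectively persisted immediately; an abort of the
        # inserting transaction must NOT roll it back.
        start = self.last + 1
        self.last += n
        return list(range(start, start + n))

seq = ToyFastSequence()
aborted = seq.reserve(3)   # a transaction reserves 1..3, then aborts
retried = seq.reserve(2)   # a later transaction must not reuse 1..3
print(aborted)             # [1, 2, 3]
print(retried)             # [4, 5]
```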
-
- 02 Feb 2017, 3 commits
-
-
Committed by Dave Cramer
-
Committed by Daniel Gustafsson
-
Committed by Heikki Linnakangas
The old coding failed to apply the mutator to the children of a SubPlan. While at it, try to explain in the comment *why* SubPlans need special treatment in GPDB, while they don't in PostgreSQL. I'm not 100% sure I got the reasoning correct, since I'm basically just reverse-engineering the original thinking here. But it seems plausible. Fixes github issue #368.
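The class of bug fixed here can be illustrated with a toy tree mutator (a Python sketch; the nested-list "plan tree" and the `skip_subplans` flag are invented stand-ins, unrelated to the real expression-tree-mutator code):

```python
def mutate(node, f, skip_subplans=False):
    """Apply f to every leaf of a nested-list 'plan tree', bottom-up."""
    if isinstance(node, list):
        if skip_subplans and node and node[0] == "SubPlan":
            return node          # buggy: SubPlan children are never visited
        return [mutate(child, f, skip_subplans) for child in node]
    return f(node)

double = lambda n: n * 2 if isinstance(n, int) else n
tree = ["SubPlan", 1, [2, 3]]
print(mutate(tree, double))                      # ['SubPlan', 2, [4, 6]]
print(mutate(tree, double, skip_subplans=True))  # ['SubPlan', 1, [2, 3]]
```

The second call shows the failure mode: a node type treated as opaque leaves everything beneath it untransformed.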
-
- 01 Feb 2017, 6 commits
-
-
Committed by Chumki Roy
Signed-off-by: Larry Hamel <lhamel@pivotal.io>
-
Committed by dyozie
-
Committed by Marbin Tan
* This exists in 5x but not in 43x
-
Committed by Marbin Tan
5x has external options while 43x does not. This causes a diff, so we will ignore this.
-
Committed by Marbin Tan
* Change inet and cidr storage from plain to main. Authors: Chumki Roy and Marbin Tan
-
Committed by Shreedhar Hardikar
Signed-off-by: Haisheng Yuan <hyuan@pivotal.io>
-
- 31 Jan 2017, 4 commits
-
-
Committed by Jamie McAtamney
-
Committed by Shreedhar Hardikar
-
Committed by Shreedhar Hardikar
This comes from a discussion on the GPDB mailing list about the need for the prefetch_inner parameter in Joins and the deadlock case it is trying to prevent.
-
Committed by Shreedhar Hardikar
When the size of the inner tuple in a MergeJoin is large, the planner decides to place a Materialize on top of the inner Sort operator. Furthermore, if there is any Motion below the MergeJoin, mjnode->prefetch_inner is set to true. During execution of a plan that has a Motion underneath a Materialize underneath a MergeJoin, the MergeJoin retrieves one tuple from the Motion (via the Materialize) and then resets the stream (by calling ExecReScan). Since the Materialize underneath is initialized with incorrect eflags, it deletes its tuplestore instead of rewinding its cursor. This fix sets the right eflags value during Init (see comments in ExecMaterialReScan).
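The rescan bug described above can be modeled with a toy Materialize node (a Python sketch, not executor code; the class and the `rewindable` flag are invented stand-ins for the eflags behavior):

```python
class ToyMaterialize:
    """Buffers tuples from a one-shot source (standing in for the Motion)."""

    def __init__(self, source, rewindable):
        self.source = iter(source)
        self.rewindable = rewindable
        self.buffer = []
        self.pos = 0

    def fetch(self):
        if self.pos == len(self.buffer):
            self.buffer.append(next(self.source))
        tup = self.buffer[self.pos]
        self.pos += 1
        return tup

    def rescan(self):
        if self.rewindable:
            self.pos = 0        # correct: rewind the read cursor
        else:
            self.buffer = []    # bug: the tuplestore is destroyed
            self.pos = 0

good = ToyMaterialize([1, 2, 3], rewindable=True)
good.fetch()                    # prefetch one inner tuple
good.rescan()
print(good.fetch())             # 1: the prefetched tuple is replayed

bad = ToyMaterialize([1, 2, 3], rewindable=False)
bad.fetch()
bad.rescan()
print(bad.fetch())              # 2: tuple 1 was silently lost
```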
-
- 28 Jan 2017, 13 commits
-
-
Committed by Christopher Hajas
Increase the timeout to start transactions after killing a primary in the gprecoverseg test suite.
-
Committed by Christopher Hajas
* Update kerberos init file to ignore gid value
-
Committed by Christopher Hajas
-
Committed by Christopher Hajas
* Fixes missed file in 14c0a048.
* One of the steps was looking for a stale sql file.
* One of the steps was looking for a sql file in the wrong location.
-
Committed by Abhijit Subramanya
This patch ports the tests for exception blocks in UDFs from tinc.
-
Committed by Abhijit Subramanya
In GPDB, if a heap tuple was being deleted, i.e. HEAPTUPLE_DELETE_IN_PROGRESS, index rebuild would error out, except in three scenarios:
- Scenario 1: it's deleted within its own transaction
- Scenario 2: it's a system catalog tuple
- Scenario 3: a bitmap index is being rebuilt

The 3rd scenario was introduced a long time ago, in 3.3. The upstream Postgres behavior only handles the first two scenarios and never bothers with the 3rd. We cannot reproduce scenario 3 on the current code base using a concurrent update on a bitmap index with vacuum, and this scenario is already covered by the test at `src/test/tinc/tincrepo/mpp/gpdb/tests/storage/vacuum/reindex/concurrency/`. We are confident that scenario 3 is also protected, hence we adopt the upstream behavior. [ci skip] Signed-off-by: Xin Zhang <xzhang@pivotal.io>
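The decision this commit converges on can be sketched as a toy predicate (Python, not the actual C visibility code; the function and parameter names are invented for illustration):

```python
def tolerate_delete_in_progress(deleted_by_own_xact, is_catalog_tuple,
                                is_bitmap_rebuild):
    """After this change, a HEAPTUPLE_DELETE_IN_PROGRESS tuple seen during
    index rebuild is tolerated only in the two upstream scenarios; the old
    bitmap-index special case (scenario 3) no longer grants an exemption."""
    if deleted_by_own_xact or is_catalog_tuple:
        return True    # upstream scenarios 1 and 2
    return False       # scenario 3 now errors out like any other case

print(tolerate_delete_in_progress(False, False, True))  # False
```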
-
Committed by Chris Hajas
* This adds gpcheckcat to the master pipeline. This test takes about 20 minutes. Authors: Karen Huddleston and Chris Hajas
-
Committed by dyozie
-
Committed by Abhijit Subramanya
In Postgres, `remove_dbtablespaces()` is used both for `createdb()` failure and for `dropdb()` commit. In GPDB, persistent tables are used to track the state of objects like tablespaces and database files, and to properly clean them up at the end of a transaction (either commit or abort). That is handled by `AtEOXact_smgr()`. We can therefore safely delete the `remove_dbtablespaces()` function. We put the dead code under `#if 0` to make future merges easier. Signed-off-by: Xin Zhang <xzhang@pivotal.io>
-
Committed by Abhijit Subramanya
`locally_opened` was replaced by `local_estate` in Postgres by the following commit:

commit 817946bb
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date: Wed Aug 15 21:39:50 2007 +0000

    Arrange to cache a ResultRelInfo in the executor's EState for relations that are not one of the query's defined result relations, but nonetheless have triggers fired against them while the query is active. This was formerly impossible but can now occur because of my recent patch to fix the firing order for RI triggers. Caching a ResultRelInfo avoids duplicating work by repeatedly opening and closing the same relation, and also allows EXPLAIN ANALYZE to "see" and report on these extra triggers. Use the same mechanism to cache open relations when firing deferred triggers at transaction shutdown; this replaces the former one-element-cache strategy used in that case, and should improve performance a bit when there are deferred triggers on a number of relations.

The dead code was introduced by the merge with 8.3, but the code had already been removed in the original commit. Signed-off-by: Xin Zhang <xzhang@pivotal.io>
-
Committed by Daniel Gustafsson
As we have merged with upstream 8.3, update the inline links to the upstream documentation from 8.2 to their 8.3 counterparts. Also switch to HTTPS, as postgresql.org is now fully HTTPS with redirects for all HTTP requests.
-
Committed by Daniel Gustafsson
JSON and uuid are new datatypes in the 5.0 cycle, and money has been expanded to 8 byte storage.
-
Committed by Heikki Linnakangas
The qp_derived_table test has the exact same queries as bugbuster/tiny. The qp_executor test is identical to the bugbuster/executor test. The qp_subquery test has the exact same test queries as bugbuster/subquery, and more.
-
- 27 Jan 2017, 1 commit
-
-
Committed by Heikki Linnakangas
There's nothing special about this table. I think it used to be used to log errors from a subsequent COPY, but we don't even have the "error tables" feature anymore.
-