- 06 April 2017, 6 commits
-
-
Committed by David Yozie
-
Committed by David Yozie
[ci skip]
-
Committed by David Yozie
* Removing DCA v1 reference; small edit for clarity [ci skip]
* line edit
* more line edits
* more edits [ci skip]
-
Committed by mkiyama
* GPDB DOCS - add/update warnings [ci skip]
* GPDB DOCS - add/update warnings - conditionalized Pivotal information. [ci skip]
-
Committed by Christopher Hajas
The behave step assumed OID and relfilenode are equal. This was changed in commit 1fd11387.
-
Committed by Daniel Gustafsson
Revert the patch since it broke tests for gprestore and gptransfer in the main pipeline. While the test quoted in the original Jira passes, there seems to be more to it than meets the Jira. Revert for now.

Reverts: commit 1bb0b765
Author: Daniel Gustafsson <dgustafsson@pivotal.io>
Date: Wed Apr 5 13:07:10 2017 +0200

    Use backend quoting functions instead of libpq

    libpq is front-end code and shouldn't be used in backend processes. The requirement here is to correctly quote the relation name in partitioning such that pg_dump/gp_dump can create working DDL for the partition hierarchy. For this purpose, quote_literal_internal() does the same thing as PQescapeString(). The following relation definitions were hitting the previous bug fixed by applying proper quoting:
-
- 05 April 2017, 28 commits
-
-
Committed by Daniel Gustafsson
libpq is front-end code and shouldn't be used in backend processes. The requirement here is to correctly quote the relation name in partitioning such that pg_dump/gp_dump can create working DDL for the partition hierarchy. For this purpose, quote_literal_internal() does the same thing as PQescapeString(). The following relation definitions were hitting the previous bug fixed by applying proper quoting:

```sql
CREATE TABLE part_test (id int, id2 int)
PARTITION BY LIST (id2)
(
    PARTITION "A1" VALUES (1)
);

CREATE TABLE sales (trans_id int, date date)
DISTRIBUTED BY (trans_id)
PARTITION BY RANGE (date)
(
    START (date '2008-01-01') INCLUSIVE
    END (date '2009-01-01') EXCLUSIVE
    EVERY (INTERVAL '1 month')
);

ALTER TABLE sales SPLIT PARTITION FOR ('2008-01-01')
    AT ('2008-01-16')
    INTO (PARTITION jan081to15, PARTITION jan0816to31);

ALTER TABLE sales ADD DEFAULT PARTITION other;

ALTER TABLE sales SPLIT DEFAULT PARTITION
    START ('2009-01-01') INCLUSIVE
    END ('2009-02-01') EXCLUSIVE
    INTO (PARTITION jan09, PARTITION other);
```
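As a rough illustration of what both quoting paths do (a minimal sketch, not GPDB's actual C code, and ignoring backslash and encoding handling), the quoted partition name `"A1"` above depends on literal quoting that doubles embedded single quotes:

```python
def quote_literal(value: str) -> str:
    """Minimal sketch of SQL literal quoting: double any embedded single
    quote and wrap the result in single quotes. Both
    quote_literal_internal() and PQescapeString() perform this doubling
    for standard-conforming strings."""
    return "'" + value.replace("'", "''") + "'"

print(quote_literal("A1"))       # a plain value passes through unchanged
print(quote_literal("O'Brien"))  # the embedded quote is doubled
```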
-
Committed by Daniel Gustafsson
The previous cleanup of perforce integration from the makefiles inadvertently left one reference to p4client/p4 in the releng tools.mk. Remove it by partially backporting a fix for the PostGIS gppkg from the 4.3 tree.
-
Committed by Daniel Gustafsson
Commit f41552e9 removed the upgrade code from gpstart, but took some of the code to test connections with it. Reinstate the URL test, but with self.dburl instead, since it's the only remaining value to test here.
-
Committed by Daniel Gustafsson
Avoid python dependent output by using terse verbosity to make the test suite more stable across environments.
-
Committed by Daniel Gustafsson
-
Committed by Daniel Gustafsson
Commit eb1740c6 added plpython_types to the REGRESS set as part of a backport; however, the test suite was not added upstream until 9.2, which is ahead of our current version. Remove it until we have caught up with 9.2.
-
Committed by Daniel Gustafsson
Commit 644b2a9d synchronized our pl/perl with upstream and tidied up the Makefile. Add the missing init_file which that commit references for pg_regress.
-
Committed by Daniel Gustafsson
The ./configure for the ICW run in Concourse needs to have the same set of configure options as the compile job since it uses the config status to determine which tests to run.
-
Committed by Asim R P
The fix is to perform the same steps as a TRUNCATE command: set new relfiles and drop existing ones for the parent AO table as well as all its auxiliary tables. This fixes issue #913. Thank you, Tao-Ma, for reporting the issue and proposing a fix as PR #960. This commit implements Tao-Ma's idea, but the implementation differs from the original proposal.
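The TRUNCATE-style step above can be pictured as follows (an illustrative sketch with hypothetical names, not the actual GPDB code): every relation in the AO table's family gets a fresh relfilenode, and the old ones are queued for dropping.

```python
def truncate_ao_relation(parent, aux_tables, next_relfilenode):
    """Assign a new relfilenode to the parent AO table and each of its
    auxiliary tables, collecting the old relfilenodes for a deferred drop.
    `parent` and the entries of `aux_tables` are dicts with a
    'relfilenode' key; `next_relfilenode` is a callable yielding fresh
    values. All names here are illustrative stand-ins."""
    dropped = []
    for rel in [parent, *aux_tables]:
        dropped.append(rel["relfilenode"])
        rel["relfilenode"] = next_relfilenode()
    return dropped  # old relfiles to unlink when the change commits
```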
-
Committed by Asim R P
This is useful if a test wants to use gp_toolkit.__gp_{aoseg|aocsseg}* functions.
-
Committed by Jim Doty
Signed-off-by: Tom Meyer <tmeyer@pivotal.io>
-
ANALYZE collects a sample from the table; if the sample contains columns with huge values, memory usage can grow high enough to cancel the query. This commit masks wide values, i.e. `pg_column_size(col) > WIDTH_THRESHOLD (1024)`, in variable-length columns to avoid high memory usage while collecting the sample. Column values exceeding WIDTH_THRESHOLD are marked as NULL and ignored from the collected sample tuples while computing stats on the relation. In the case of expression/predicate indexes on the relation, the wide columns will be treated as NULL and will not be filtered out. It is rare to have such indexes on very wide columns, so the effect on stats (nullfrac etc.) will be minimal.

Signed-off-by: Omer Arap <oarap@pivotal.io>
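The masking rule can be sketched as follows (illustrative only; `WIDTH_THRESHOLD` is the value from the commit message, and `column_size` stands in for `pg_column_size`):

```python
WIDTH_THRESHOLD = 1024  # bytes, per the commit message

def mask_wide_values(sample_row, column_size=len):
    """Replace any variable-length value wider than WIDTH_THRESHOLD with
    None (NULL) so the ANALYZE sample never holds very wide datums in
    memory. `column_size` is a stand-in for pg_column_size()."""
    return [None if v is not None and column_size(v) > WIDTH_THRESHOLD else v
            for v in sample_row]

print(mask_wide_values(["short", "x" * 2000, None]))
```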
-
Committed by Bhuvnesh Chaudhary
The breakin/out functions should be marked as STRICT, because the underlying C functions, textin/textout, don't expect a NULL to be passed to them.

Signed-off-by: Ekta Khanna <ekhanna@pivotal.io>
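STRICT tells the executor never to invoke a function with a NULL argument, returning NULL directly instead. A hypothetical Python analogue of that contract (names illustrative, not the actual C code):

```python
def strict(fn):
    """Mimic SQL STRICT: if any argument is NULL (None), return NULL
    without calling fn, so fn never sees a NULL it cannot handle."""
    def wrapper(*args):
        if any(a is None for a in args):
            return None
        return fn(*args)
    return wrapper

@strict
def textout(value):
    # stands in for a C output function that would misbehave on NULL input
    return str(value)

print(textout(42))    # fn is called normally
print(textout(None))  # fn is never invoked
```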
-
Committed by Tom Meyer
Signed-off-by: Jim Doty <jdoty@pivotal.io>
-
Committed by Jimmy Yih
A lot of tests assumed OID == relfilenode. We updated the tests to not assume that anymore.
-
Committed by Jimmy Yih
A lot of tests assumed OID == relfilenode. We updated the tests to not assume that anymore.
-
Committed by Jimmy Yih
These tests assumed OID == relfilenode. We updated the tests to not assume it anymore.
-
Committed by Jimmy Yih
A query in gpcheckmirrorseg.pl would look up relfilenodes obtained from physically diffing segment files between primaries and mirrors. The issue with the query was that it only looked at the master's catalog, which would most likely not contain the relfilenodes of the QE segments, especially after introducing the relfilenode counter change. To correct the query, we union all the QE segment catalogs with the master's catalog and check by content id.
-
Committed by Jimmy Yih
This is needed to prevent relations from possibly overwriting each other. The O_EXCL flag is present in Postgres's mdcreate(), but for some reason we don't have it here. This adds it back.

Signed-off-by: Xin Zhang <xzhang@pivotal.io>
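The effect of O_EXCL can be sketched as follows (a simplified stand-in for mdcreate(), not the actual code): with O_CREAT|O_EXCL the create-if-absent check is atomic, so a second creation of the same file fails instead of silently reusing, and later overwriting, the first relation's data.

```python
import os

def create_relation_file(path):
    """Create a relation's data file. O_CREAT|O_EXCL makes creation
    atomic: if the file already exists, the kernel raises
    FileExistsError rather than letting two relations share one file."""
    return os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY, 0o600)
```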
-
Committed by Jimmy Yih
The master allocates an OID and provides it to segments during dispatch. The segments then check if they can use this OID as the relation's relfilenode. If a segment cannot use the preassigned OID as the relation's relfilenode, it will generate a new relfilenode value via the nextOid counter. This can result in a race condition between the generation of the new OID and the segment file being created on disk after being added to persistent tables. To combat this race condition we have a small OID cache, but we have found in testing that it was not enough to prevent the issue.

To fully solve the issue, we decouple OID and relfilenode on both QD and QE segments by introducing a nextRelfilenode counter, which is similar to the nextOid counter. The QD segment generates the OIDs and its own relfilenodes. The QE segments only use the preassigned OIDs from the QD dispatch and generate relfilenode values from their own nextRelfilenode counter.

Current sequence generation is always done on the QD sequence server, and assumes the OID is always the same as the relfilenode when handling sequence client requests from QE segments. It is hard to change this assumption, so we have a special OID/relfilenode sync for sequence relations for GP_ROLE_DISPATCH and GP_ROLE_UTILITY.

Reference gpdb-dev thread: https://groups.google.com/a/greenplum.org/forum/#!topic/gpdb-dev/lv6Sb4I6iSI

Signed-off-by: Xin Zhang <xzhang@pivotal.io>
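The decoupling can be pictured as two independent counters per node (an illustrative sketch; the class and method names are hypothetical and do not match the C code):

```python
import itertools

class NodeCounters:
    """Each node keeps a nextOid-style counter and, after this change,
    its own nextRelfilenode-style counter, so OID and relfilenode values
    no longer have to coincide."""
    def __init__(self, start=16384):
        self._next_oid = itertools.count(start)
        self._next_relfilenode = itertools.count(start)

    def next_oid(self):
        # On the QD: OIDs are preassigned here and dispatched to QEs.
        return next(self._next_oid)

    def next_relfilenode(self):
        # On every node: relfilenodes come from the local counter.
        return next(self._next_relfilenode)

# The QD preassigns the OID; each segment draws its own relfilenode.
qd, qe = NodeCounters(), NodeCounters()
oid = qd.next_oid()
qe_relfilenode = qe.next_relfilenode()
```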
-
Committed by Marbin Tan
Authors: Marbin Tan & Karen Huddleston
-
Committed by C.J. Jameson
Signed-off-by: Marbin Tan <mtan@pivotal.io>
-
Committed by Marbin Tan
Signed-off-by: C.J. Jameson <cjameson@pivotal.io>
-
Committed by Marbin Tan
* gpugpart.pl is a utility to upgrade 3.1 -> 3.2

Signed-off-by: Melanie Plageman <mplageman@pivotal.io>
-
Committed by Melanie Plageman
* Removing deprecated utility. This utility was used in the 3.3 -> 4.0 partition upgrade.

Signed-off-by: Marbin Tan <mtan@pivotal.io>
-
Committed by David Yozie
* removing some duplicate files
* correctly excluding pivotal condition from build; updating Gemfile.lock to use latest version of bookbinder
-
Signed-off-by: Ekta Khanna <ekhanna@pivotal.io>
-
Committed by Tushar Dadlani
[#137900495](https://www.pivotaltracker.com/n/projects/1321816)
-
- 04 April 2017, 6 commits
-
-
Committed by Daniel Gustafsson
[ci skip]
-
Committed by Ashwin Agrawal
Initialize TransactionXmin to avoid situations where scanning pg_authid or other tables, mostly in BuildFlatFiles() via SnapshotNow, may try to chase down pg_subtrans for an older "sub-committed" transaction. The file corresponding to it may not, and is not supposed to, exist. Setting TransactionXmin will avoid calling SubTransGetParent() in TransactionIdDidCommit() for older XIDs. Also, along the way, initialize RecentGlobalXmin, as the heap access method needs it set.

Repro for the record of one such case:
```
CREATE ROLE foo;
BEGIN;
SAVEPOINT sp;
DROP ROLE foo;
RELEASE SAVEPOINT sp;  -- this is key: it marks the xact in clog as sub-committed
kill or gpstop -air
< N transactions, to at least cross the pg_subtrans single-file limit,
  roughly CLOG_XACTS_PER_BYTE * BLCKSZ * SLRU_PAGES_PER_SEGMENT >
restart  -- recovery errors with missing pg_subtrans
```
-
Committed by Daniel Gustafsson
The extension for executable binaries is defined in X; replace the old (and now defunct) references to EXE_EXT. Also remove a commented-out dead gpfdist rule in gpMgmt from before the move to core.
-
Committed by Daniel Gustafsson
[ci skip]
-
This reverts commit ab4398dd. [#142986717]
-
Similar to ea818f0e, we remove the sensitivity to segment count in test `dml_oids_delete`. Without this, the test was passing for the wrong reason:

1. The table `dml_heap_r` was set up with 3 tuples, whose values in the distribution column `a` are 1, 2, NULL respectively. On a 2-segment system, the 1-tuple and 2-tuple are on distinct segments, and because of a quirk of our local OID counter synchronization, they will get the same OIDs.
2. The table `tempoid` will be distributed randomly under ORCA, with tuples copied from `dml_heap_r`.
3. The intent of the final assertion is checking that the OIDs are not changed by the DELETE. Also hidden in the assumption is that the tuples stay on the same segment as the source table.
4. However, the compounding effect of that "same oid" with a randomly distributed `tempoid` will lead to a passing test when we have two segments!

This commit fixes that, so the test will pass for the right reason, and also on any segment count.
-