- 05 April 2017, 27 commits
-
-
Committed by Daniel Gustafsson
The previous cleanup of Perforce integration from the makefiles inadvertently left one reference to p4client/p4 in the releng tools.mk. Remove it by partially backporting a fix for the PostGIS gppkg from the 4.3 tree.
-
Committed by Daniel Gustafsson
Commit f41552e9 removed the upgrade code from gpstart, but took some of the connection-testing code with it. Reinstate the url test, but with self.dburl, since it's the only remaining value to test here.
-
Committed by Daniel Gustafsson
Avoid Python-dependent output by using terse verbosity, to make the test suite more stable across environments.
-
Committed by Daniel Gustafsson
-
Committed by Daniel Gustafsson
Commit eb1740c6 added plpython_types to the REGRESS set as part of a backport; however, the test suite was not added upstream until 9.2, which is ahead of our current merge level. Remove it until we have caught up with 9.2.
-
Committed by Daniel Gustafsson
Commit 644b2a9d synchronized our pl/perl with upstream and tidied up the Makefile. Add the missing init_file which that commit references for pg_regress.
-
Committed by Daniel Gustafsson
The ./configure for the ICW run in Concourse needs to have the same set of configure options as the compile job since it uses the config status to determine which tests to run.
-
Committed by Asim R P
The fix is to perform the same steps as a TRUNCATE command: set new relfilenodes and drop the existing ones for the parent AO table as well as all its auxiliary tables. This fixes issue #913. Thank you, Tao-Ma, for reporting the issue and proposing a fix as PR #960. This commit implements Tao-Ma's idea, but the implementation differs from the original proposal.
-
Committed by Asim R P
This is useful if a test wants to use gp_toolkit.__gp_{aoseg|aocsseg}* functions.
-
Committed by Jim Doty
Signed-off-by: Tom Meyer <tmeyer@pivotal.io>
-
Analyze collects a sample from the table; if the sample contains columns with very long values, memory usage may spike, cancelling the query. This commit masks wide values (i.e. pg_column_size(col) > WIDTH_THRESHOLD (1024)) in variable-length columns to avoid high memory usage while collecting the sample. Column values exceeding WIDTH_THRESHOLD are marked as NULL and are ignored from the collected sample tuples while computing stats on the relation. In case of expression/predicate indexes on the relation, the wide columns will be treated as NULL and will not be filtered out. It is rare to have such indexes on very wide columns, so the effect on stats (nullfrac etc.) will be minimal. Signed-off-by: Omer Arap <oarap@pivotal.io>
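As a rough illustration of the masking step (a Python sketch, not the actual C code in the analyze path; the threshold value and the pg_column_size comparison are taken from the commit text, the helper names are hypothetical):

```python
WIDTH_THRESHOLD = 1024  # bytes; the threshold named in the commit


def mask_wide_values(sample_rows, column_size=len):
    """Replace over-wide variable-length values with None (NULL) so they
    are excluded from stats computation, keeping sample memory bounded."""
    masked = []
    for row in sample_rows:
        masked.append(tuple(
            None if val is not None and column_size(val) > WIDTH_THRESHOLD
            else val
            for val in row
        ))
    return masked


rows = [("short", "x" * 2000), ("tiny", "y")]
masked = mask_wide_values(rows)
# the 2000-byte value is masked to None; narrow values pass through
```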
-
Committed by Bhuvnesh Chaudhary
The breakin/out functions should be marked STRICT, because the underlying C functions, textin/textout, don't expect a NULL to be passed to them. Signed-off-by: Ekta Khanna <ekhanna@pivotal.io>
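For context, STRICT means the executor returns NULL without ever invoking the function when any argument is NULL, which is exactly what shields textin/textout here. A hypothetical Python analogue of that contract:

```python
import functools


def strict(func):
    """Mimic a STRICT SQL function: if any argument is None (NULL),
    skip the body entirely and return None."""
    @functools.wraps(func)
    def wrapper(*args):
        if any(a is None for a in args):
            return None
        return func(*args)
    return wrapper


@strict
def textout(value):
    # stand-in for a C function that would misbehave on NULL input
    return str(value)
```

With the decorator, `textout(None)` yields None without the body ever running, just as a STRICT-marked function is never called with a NULL argument.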
-
Committed by Tom Meyer
Signed-off-by: Jim Doty <jdoty@pivotal.io>
-
Committed by Jimmy Yih
A lot of tests assumed OID == relfilenode. We updated the tests to not assume that anymore.
-
Committed by Jimmy Yih
A lot of tests assumed OID == relfilenode. We updated the tests to not assume that anymore.
-
Committed by Jimmy Yih
These tests assumed OID == relfilenode. We updated the tests to not assume it anymore.
-
Committed by Jimmy Yih
A query in gpcheckmirrorseg.pl would look up relfilenodes obtained from physically diffing segment files between primaries and mirrors. The issue with the query was that it only looked at the master's catalog, which would most likely not contain the relfilenodes of the QE segments, especially after introducing the relfilenode counter change. Correct the query by unioning all the QE segment catalogs with the master's catalog and checking by content id.
-
Committed by Jimmy Yih
This is needed to prevent relations from possibly overwriting each other. The O_EXCL flag is present in Postgres's mdcreate(), but for some reason we don't have it here; this adds it back. Signed-off-by: Xin Zhang <xzhang@pivotal.io>
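The guard being restored is the O_EXCL flag on file creation: creating the relation file fails if one already exists, rather than silently reusing (and later clobbering) it. A minimal sketch of the semantics, assuming POSIX open() behavior (the function name below echoes mdcreate() but is illustrative):

```python
import os
import tempfile


def md_create(path):
    """Create a new relation file; fail if one already exists,
    mirroring the O_CREAT | O_EXCL combination in mdcreate()."""
    fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY, 0o600)
    os.close(fd)


with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "16384")
    md_create(path)           # first create succeeds
    try:
        md_create(path)       # second create must not silently succeed
        collided = False
    except FileExistsError:
        collided = True
```

Without O_EXCL, the second open would succeed and two relations could end up sharing (and overwriting) the same file.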
-
Committed by Jimmy Yih
The master allocates an OID and provides it to segments during dispatch. The segments then check if they can use this OID as the relation's relfilenode. If a segment cannot use the preassigned OID as the relation's relfilenode, it will generate a new relfilenode value via the nextOid counter. This can result in a race condition between the generation of the new OID and the segment file being created on disk after being added to persistent tables. To combat this race condition, we have a small OID cache, but we have found in testing that it was not enough to prevent the issue.

To fully solve the issue, we decouple OID and relfilenode on both QD and QE segments by introducing a nextRelfilenode counter, similar to the nextOid counter. The QD segment generates the OIDs and its own relfilenodes. The QE segments only use the preassigned OIDs from the QD dispatch and generate relfilenode values from their own nextRelfilenode counter.

Current sequence generation is always done on the QD sequence server, and assumes the OID is always the same as the relfilenode when handling sequence client requests from QE segments. It is hard to change this assumption, so we have a special OID/relfilenode sync for sequence relations for GP_ROLE_DISPATCH and GP_ROLE_UTILITY.

Reference gpdb-dev thread: https://groups.google.com/a/greenplum.org/forum/#!topic/gpdb-dev/lv6Sb4I6iSI
Signed-off-by: Xin Zhang <xzhang@pivotal.io>
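The decoupling described above can be sketched as two independent counters. This is a toy Python model, not the C implementation: beyond nextOid/nextRelfilenode, which the commit names, the class and starting values are assumptions for illustration.

```python
class OidDispenser:
    """Sketch of the QD/QE split: the QD assigns OIDs for everyone,
    while each node draws relfilenodes from its own local counter."""

    def __init__(self, start_oid=16384, start_relfilenode=16384):
        self.next_oid = start_oid                   # QD-only counter
        self.next_relfilenode = start_relfilenode   # per-node counter

    def assign_oid(self):
        # Done on the QD; the result is dispatched to the QEs.
        oid = self.next_oid
        self.next_oid += 1
        return oid

    def assign_relfilenode(self):
        # Done locally on the QD and on each QE.
        rfn = self.next_relfilenode
        self.next_relfilenode += 1
        return rfn


qd = OidDispenser()
qe = OidDispenser(start_relfilenode=20000)

oid = qd.assign_oid()              # preassigned OID travels with the dispatch
qe_rfn = qe.assign_relfilenode()   # QE no longer reuses the OID as relfilenode
```

The point of the split is that a QE can never race against the QD's OID generation for its on-disk file name: its relfilenodes come purely from its own counter.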
-
Committed by Marbin Tan
Authors: Marbin Tan & Karen Huddleston
-
Committed by C.J. Jameson
Signed-off-by: Marbin Tan <mtan@pivotal.io>
-
Committed by Marbin Tan
Signed-off-by: C.J. Jameson <cjameson@pivotal.io>
-
Committed by Marbin Tan
* gpugpart.pl is a utility to upgrade 3.1 -> 3.2 Signed-off-by: Melanie Plageman <mplageman@pivotal.io>
-
Committed by Melanie Plageman
* Removing deprecated utility. This utility was used in the 3.3 -> 4.0 partition upgrade. Signed-off-by: Marbin Tan <mtan@pivotal.io>
-
Committed by David Yozie
* removing some duplicate files
* correctly excluding pivotal condition from build; updating Gemfile.lock to use latest version of bookbinder
-
Signed-off-by: Ekta Khanna <ekhanna@pivotal.io>
-
Committed by Tushar Dadlani
[#137900495](https://www.pivotaltracker.com/n/projects/1321816)
-
- 04 April 2017, 12 commits
-
-
Committed by Daniel Gustafsson
[ci skip]
-
Committed by Ashwin Agrawal
Initialize TransactionXmin to avoid situations where scanning pg_authid or other tables (mostly in BuildFlatFiles() via SnapshotNow) may try to chase down pg_subtrans for an older "sub-committed" transaction, whose file may not, and is not supposed to, exist. Setting TransactionXmin avoids calling SubTransGetParent() in TransactionIdDidCommit() for older XIDs. Also, along the way, initialize RecentGlobalXmin, as the heap access method needs it set. Repro for the record of one such case:
```
CREATE ROLE foo;
BEGIN;
SAVEPOINT sp;
DROP ROLE foo;
RELEASE SAVEPOINT sp;  -- this is key, marks it in clog as sub-committed
kill or gpstop -air
< run N transactions, at least enough to cross the pg_subtrans single-file
  limit, roughly CLOG_XACTS_PER_BYTE * BLCKSZ * SLRU_PAGES_PER_SEGMENT >
restart  -- recovery errors with missing pg_subtrans
```
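The single-file limit mentioned in the repro can be worked out from the stock PostgreSQL constants. The values below are the usual upstream defaults, assumed here rather than taken from this tree:

```python
# Usual PostgreSQL defaults (assumed): clog stores 2 bits of status per
# transaction, so 4 transactions fit in each byte.
CLOG_XACTS_PER_BYTE = 4
BLCKSZ = 8192                # default block size in bytes
SLRU_PAGES_PER_SEGMENT = 32  # pages per SLRU segment file

xacts_per_segment = CLOG_XACTS_PER_BYTE * BLCKSZ * SLRU_PAGES_PER_SEGMENT
# roughly a million transactions before a second segment file is needed
```

So the repro needs on the order of a million transactions before the old sub-committed XID falls outside the first segment file, which is why the bug only surfaces after long activity.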
-
Committed by Daniel Gustafsson
The extension for executable binaries is defined in X; replace the old (and now defunct) references to EXE_EXT. Also remove a commented-out, dead gpfdist rule in gpMgmt from before the move to core.
-
Committed by Daniel Gustafsson
[ci skip]
-
This reverts commit ab4398dd. [#142986717]
-
Similar to ea818f0e, we remove the sensitivity to segment count in the test `dml_oids_delete`. Without this, the test was passing for the wrong reason:

0. The table `dml_heap_r` was set up with 3 tuples, whose values in the distribution column `a` are 1, 2, and NULL respectively. On a 2-segment system, the 1-tuple and the 2-tuple land on distinct segments, and because of a quirk of our local OID counter synchronization, they will get the same OIDs.
0. The table `tempoid` will be distributed randomly under ORCA, with tuples copied from `dml_heap_r`.
0. The intent of the final assertion is to check that the OIDs are not changed by the DELETE. Hidden in that assumption is that the tuples stay on the same segment as in the source table.
0. However, the compounding effect of that "same OID" quirk with a randomly distributed `tempoid` leads to a passing test when we have exactly two segments!

This commit fixes that, so the test will pass for the right reason, on any segment count.
-
Committed by Todd Sedano
-
Committed by Todd Sedano
Signed-off-by: Chris Hajas <chajas@pivotal.io>
-
Committed by mkiyama
[ci skip]
-
Committed by Haisheng Yuan
While trying to understand how Orca generates a plan for a CTE using shared input scan, I found that the shared input scan is generated during the CTE producer & consumer DXL node to PlannedStmt translation stage, not during the Expr to DXL stage inside Orca. It turns out CDXLPhysicalSharedScan is not used anywhere, so remove all the related dead code.
-
Committed by Heikki Linnakangas
It's an error in standard C - at least in older standards - to typedef the same type more than once, even if the definition is the same. Newer versions of gcc don't complain about it, but you can see the warnings with -pedantic (among a ton of other warnings; search for "redefinition"). To fix, remove the duplicate typedefs.

The ones in src/backend/gpopt and src/include/gpopt were actually OK, because a duplicate typedef is OK in C++, and those files are compiled with a C++ compiler. But many of the typedefs in those files were not used for anything, so I nevertheless removed the duplicate ones that caught my eye there too.

In gpmon.h, we were redefining apr_*_t types when postgres.h had been included. But as far as I can tell, that was always the case - all the files that included gpmon.h included postgres.h directly or indirectly before it. Search & replace the references to apr_*_t types in that file with the postgres equivalents, to make it clearer what they actually are.
-
Committed by Heikki Linnakangas
CdbCellBuf was only used in hash aggregates, and it only used a fraction of the functionality. In essence, it was using it as a very simple memory allocator, where each allocation was fixed size, and the only way to free was to reset the whole cellbuf. But the same code was using a different, but similar, mpool_* mechanism for allocating other things stored in the hash buckets. We might as well use mpool_alloc for the HashAggEntry struct as well, and get rid of all the cellbuf code. CdbPtrBuf was completely unused.
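The allocation pattern that made CdbCellBuf redundant here is the classic arena/pool style: fixed-size allocations, no per-item free, reclaim everything at once with a reset. A small Python sketch of the idea (illustrative only; this is not the mpool_* API):

```python
class FixedSizePool:
    """Arena-style allocator: hands out fixed-size cells, offers no
    per-cell free; reset() reclaims everything at once."""

    def __init__(self, cell_size):
        self.cell_size = cell_size
        self.cells = []

    def alloc(self):
        cell = bytearray(self.cell_size)
        self.cells.append(cell)
        return cell

    def reset(self):
        # The only way to "free", matching how the hash-agg code used
        # its cellbuf: drop every cell in one shot.
        self.cells.clear()


pool = FixedSizePool(64)
a, b = pool.alloc(), pool.alloc()
live = len(pool.cells)    # two live cells before the reset
pool.reset()
after = len(pool.cells)   # zero after the wholesale reset
```

Since hash aggregation only ever needed this reset-the-world behavior, a single general-purpose pool (mpool_alloc) covers both the HashAggEntry structs and everything else in the buckets.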
-
- 03 April 2017, 1 commit
-
-
Committed by Dave Cramer
* We don't ship jdbc or odbc. For building the installers, this repo is not gone, just unlinked from gpdb5.
* remove references to odbc and jdbc
* remove more references to jdbc and odbc, as well as client documentation
* correctly remove Windows-specific code
-