- May 6, 2016 (1 commit)
-
Committed by Amil Khanzada
-
- May 5, 2016 (5 commits)
-
Committed by Kuien Liu
Files changed:
  modified: src/backend/cdb/motion/ic_common.c
  modified: src/backend/executor/spi.c
  modified: src/backend/nodes/outfuncs.c
  modified: src/backend/optimizer/path/costsize.c
  modified: src/backend/storage/file/compress_zlib.c
Note: The warning in function _outScanInfo() of outfuncs.c is temporarily fixed; the code is treated as dead code and will be removed soon. Thanks to Heikki Linnakangas' comments.
-
Committed by Nikos Armenatzoglou
-
Committed by Larry Hamel
* A bug was introduced by a feature that looked for partition tables that were moved to other schemas.
* Fixed here by qualifying with the schema name, tightening the criteria.
* Added a Behave test.
-
-
Committed by Omer Arap and Xin Zhang
Refactor test_with_orca.py and build_with_orca.py. Extract the common piece out as the GporcaCommon package. Disable gpfdist tests with the --disable-gpfdist option for configure, because GPORCA doesn't impact management utilities like gpfdist.
-
- May 4, 2016 (4 commits)
-
Committed by Heikki Linnakangas
-
Committed by Daniel Gustafsson
-
Committed by Omer Arap and Xin Zhang
-
Committed by Pengcheng Tang
When a user dumps a database to a Data Domain Boost server, the storage unit and backup directory must already be created and specified. Previously, we hard-coded the storage unit to "GPDB" and the user had no option to use others. This commit adds the --ddboost-storage-unit option, which allows the user to dynamically specify a storage unit for dump and restore. It also allows the user to have storage unit information statically saved in a configuration file on their cluster host, and adds the storage unit option to gpmfr for replicating and recovering dump copies, in which case it uses an identical storage unit and backup directory between the primary and secondary DDBoost servers. The --ddboost-storage-unit option takes priority over a statically configured storage unit. Authors: Pengcheng Tang, Marbin Tan, Nikhil Kak, Lawrence Hamel, Stephen Wu, Chris Hajas, Chumki Roy
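The precedence rule described above can be sketched as follows. This is a minimal illustration, not the utility's actual code; the function name is hypothetical, and only the "GPDB" default and the command-line-over-config precedence come from the commit.

```python
def effective_storage_unit(cli_option=None, config_value=None):
    # --ddboost-storage-unit on the command line wins over the value
    # statically saved in the configuration file; "GPDB" is the old
    # hard-coded fallback.
    return cli_option or config_value or "GPDB"
```

So a cluster with no configuration at all keeps the old behavior, while either the config file or the new option can override it.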
-
- May 3, 2016 (10 commits)
-
Committed by Adam Lee
s3ext can now recognize and decompress gzip-encoded files automatically; it doesn't require any extra parameter, configuration, or extended filename.
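The automatic recognition can be illustrated by sniffing the two-byte gzip magic number instead of trusting the filename. This is a minimal Python sketch of the idea, not s3ext's actual implementation:

```python
import gzip

GZIP_MAGIC = b"\x1f\x8b"

def read_maybe_gzipped(data: bytes) -> bytes:
    # Decide by content, not by file extension or an extra parameter.
    if data[:2] == GZIP_MAGIC:
        return gzip.decompress(data)
    return data
```

Plain files pass through untouched, gzip-encoded ones are transparently decompressed.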
-
Committed by Heikki Linnakangas
I don't see anything in this test that would require a huge number of rows. Half a million should be more than enough to show up in reltuples/relpages.
-
Committed by Heikki Linnakangas
Almost all tests in bfv_legacy were also in qp_misc_rio. I kept the ones in qp_misc_rio, except for a few that have different output with ORCA. This way, only bfv_legacy needs to have an alternative expected output file for ORCA.
-
Committed by Heikki Linnakangas
Test 31 was identical to test 25.
-
Committed by Heikki Linnakangas
This exact same test case, with some extra EXPLAINs, is in the co_nestloop_idxscan regression test.
-
Committed by Heikki Linnakangas
Instead of pushing the responsibility of rescanning down to each different kind of external table, implement rescanning in fileam.c in a generic fashion, by closing and reopening the underlying "url". This gets us rescan support for custom and EXECUTE-type external tables, which was missing before, and also makes the code simpler.

There are no known cases where the rescan support is currently needed (hence no test case included), because the planner puts Materialize nodes on top of external scans, but in principle every plan node is supposed to be rescannable. I tested this by reverting the previous patch that fixed using external scans in a subplan; without that patch, an external table in a subplan would get rescanned.
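The close-and-reopen approach can be sketched generically. This Python toy stands in for fileam.c's C logic; the class and method names are invented for illustration:

```python
import io

class UrlScan:
    """Generic rescan: instead of teaching every external-table flavor
    to rescan itself, close the underlying "url" and reopen it."""

    def __init__(self, open_url):
        self._open_url = open_url   # factory that (re)opens the source
        self._src = open_url()

    def read(self):
        return self._src.read()

    def rescan(self):
        self._src.close()           # close and reopen: that's all
        self._src = self._open_url()

scan = UrlScan(lambda: io.StringIO("row1\nrow2\n"))
first = scan.read()
scan.rescan()
second = scan.read()                # same rows again after the rescan
```

Because the rescan logic only touches the factory, any source that can be reopened (custom protocol, EXECUTE pipe, plain file) gets rescan support for free.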
-
Committed by Heikki Linnakangas
ParallelizeCorrelatedSubPlanMutator() turns each Scan on a base relation into a "Result - Material - Broadcast - Scan" pattern, but it missed ExternalScans. External tables are supposed to be treated as distributed, i.e. each segment holds a different part of the external table, so they need to be treated like regular tables.
-
Committed by Heikki Linnakangas
A long time ago, a hack was put in place in GPDB to use the "scan" slot, instead of the "outer" slot used in the upstream, to hold the result of an Agg or Window plan node's child. It's not clear to me why that was done. There was even a comment in fix_upper_expr() saying we wouldn't need it if we just fixed the executor to not contain that hack, and there was also a TODO comment in CMappingColIdVarPlStmt.cpp about it.

Everything seems to work without those hacks, so revert this back to the way it works in the upstream. This is simpler in its own right, and also reduces our diff vs. upstream, which will make merging easier in the future.
-
Committed by Daniel Gustafsson
This was raised in #691 and was identified as a bug in upstream as well. The patch has now been committed to upstream; this is a backport with the Greenplum versions of Flex/Perl maintained. See below for the upstream commit message.

    commit 7d7b1292
    Author: Tom Lane <tgl@sss.pgh.pa.us>
    Date: Mon May 2 11:18:10 2016 -0400

    Fix configure's incorrect version tests for flex and perl.

    awk's equality-comparison operator is "==" not "=". We got this right
    in many places, but not in configure's checks for supported version
    numbers of flex and perl. It hadn't been noticed because unsupported
    versions are so old as to be basically extinct in the wild, and because
    the only consequence is whether or not a WARNING flies by during
    configure. Daniel Gustafsson noted the problem with respect to the test
    for flex; I found the other by reviewing other awk calls.

This closes #697
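The awk pitfall that the upstream commit fixes is easy to reproduce: in a pattern, "=" assigns (and the non-empty result is always true), while "==" actually compares. The version strings below are made up for illustration:

```shell
# With "=", $2 is overwritten and the pattern is always true:
echo "flex 2.5.4" | awk '$2 = "2.5.35" { print "assigned, always matches" }'
# With "==", the pattern only matches on real equality:
echo "flex 2.5.4" | awk '$2 == "2.5.35" { print "compared" }'   # prints nothing
```

This is exactly why the buggy configure check never warned: the mistyped "=" made the version test vacuously pass.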
-
- Apr 30, 2016 (3 commits)
-
This closes #682. Without GUC_GPDB_ADOPT, the values for the GUC are not dispatched to the QE processes.
-
Committed by Marbin Tan
Specifically the persistent check: gp_persistent_relation_node <=> filesystem. Authors: Marbin Tan & Larry Hamel
-
Committed by Marbin Tan
A batch size of 8 is too low, and each cluster may have a different system configuration, so we would like to determine a default batch size before running gpcheckcat.
* Add a unit test for batch size.
* Truncate the batch size to be, at maximum, the number of primaries. The batch size can no longer be larger than the number of primaries.
* Refactor: create a main() method for gpcheckcat. In order to improve unit testing, move functionality from '__main__' to a method.
Authors: Marbin Tan, Larry Hamel, Nikhil Kak
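The truncation rule can be sketched as follows; determine_batch_size is a hypothetical name, and only the cap-at-primaries behavior comes from the commit:

```python
def determine_batch_size(requested, num_primaries):
    # Never run more concurrent batches than there are primary
    # segments; the old fixed default of 8 ignored cluster size.
    return min(requested, num_primaries)
```

A small cluster gets its batch size clamped down, while a large cluster is unaffected by the cap.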
-
- Apr 29, 2016 (3 commits)
-
Committed by Kuien Liu
* Remove a warning when compiling aosegfiles.c. Use strlcpy instead of strncat to stop the compiler complaining. The original call, "strncat(segnumArray, tmp, sizeof(tmp))", is simple and safe, but the compiler complains loudly [-Wstrncat-size].
Changes to be committed:
  modified: src/backend/access/aocs/aocssegfiles.c
  modified: src/backend/access/appendonly/aosegfiles.c
* Speed up a bit by replacing strncat with strlcpy.
-
Committed by Kuien Liu
Files changed:
  modified: src/backend/access/external/url.c
  modified: src/backend/cdb/cdbhash.c
  modified: src/backend/cdb/cdbmutate.c
  modified: src/pl/plpgsql/src/pl_funcs.c
Skipped: catcoretable.c:104:26: CatCoreType_int4_array, because it is an enumerated constant to ensure integrity.
-
Committed by Chumki Roy
In a previous commit, f569c1d1, gpcheckcat was modified to display a list of tables with missing attributes. This commit adds the ability to list tables with extraneous attributes. Authors: Chumki Roy and James McAtamney
-
- Apr 28, 2016 (1 commit)
-
Committed by Daniel Gustafsson
Fixed a typo in the --with-codegen-prefix description and corrected spelling in the CMake check error message. No functional changes.
-
- Apr 27, 2016 (2 commits)
-
Committed by Shreedhar Hardikar
-
Committed by Shreedhar Hardikar
-
- Apr 26, 2016 (2 commits)
-
Committed by Nikhil Kak
-
Committed by Daniel Gustafsson
-
- Apr 25, 2016 (7 commits)
-
Committed by Heikki Linnakangas
atpxPartAddList() needs a CreateStmt that represents the parent table, but instead of creating it already in the parser, and adding more details to it in analyze.c, it's simpler to create it later, in atpxPartAddList(), where it's actually needed.
-
Committed by Heikki Linnakangas
The code in InitPostgres() was refactored in PostgreSQL 9.0 so that it no longer uses FindMyDatabaseByOid() function. We had backpatched the InitPostgres() changes already, so backpatch the removal of FindMyDatabaseByOid() as well. Silences a compiler warning.
-
Committed by Pengzhou Tang
In _SPI_execute_plan, debug_query_string was set directly to plan->query, which is not memory-context safe; it means debug_query_string has a chance to refer to an invalid address when a FATAL/PANIC level error occurs.
-
Committed by Heikki Linnakangas
A common source of bugs has been that an object gets assigned a different OID in the master and in segments. A segment should normally never have to allocate an OID (for catalog objects) on its own; all OIDs should be allocated in the master and sent over to the segments. To make such bugs easier to catch, add a WARNING if an OID is allocated in a segment. There were some DEBUG1 elogs for the same thing in place already, but the list of catalogs that need synchronized OIDs wasn't up-to-date, and this new place for the elog() is less invasive anyway.
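The invariant being enforced can be sketched like this. The function and its signature are hypothetical; only the rule itself (segments never allocate catalog OIDs on their own, and doing so now warns) comes from the commit:

```python
import warnings

def allocate_catalog_oid(next_oid, is_master):
    # Catalog OIDs must be allocated on the master and dispatched to
    # segments; a segment allocating one is a likely synchronization bug.
    if not is_master:
        warnings.warn("catalog OID allocated in a segment; it should "
                      "have been dispatched from the master")
    return next_oid
```

The allocation still succeeds either way; the point of the WARNING is only to make master/segment OID mismatches visible early.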
-
Committed by Heikki Linnakangas
When I merged the operator family patch, I missed dispatching the new DDL commands to segments. Because of that, the segments didn't have information about operator families. Some operator families would be created implicitly by CREATE OPERATOR CLASS, but you wouldn't necessarily get the same configuration of families and classes as in the master. Things worked pretty well despite that, because operator families and classes are used for planning, and planning happens in the master. Nevertheless, we really should have the operator family information in segments too, in case you run queries in maintenance mode directly on the segments, or if you execute functions in segments that need to evaluate expressions that depend on them. Also, there were no regression tests for the new DDL commands.
-
Committed by Heikki Linnakangas
If you do CREATE OPERATOR, with a commutator or negator operator that doesn't exist yet, the system creates a "shell" entry for the non-existent operator. But those shell operators didn't get the same OID in all segments, which could lead to strange errors later. I couldn't find a test case demonstrating actual bugs from that, but it sure seems sketchy. Given that we take care to synchronize the OID of the primary created operator, surely we should do the same for all operators.
-
Committed by Heikki Linnakangas
The out/readfuncs.c support for AlterTableStmt.comptypeArrayOid was missing. Because of that, the segments didn't get the OID of the composite type's array type from master, and allocated it on their own.
-
- Apr 23, 2016 (1 commit)
-
Committed by Marbin Tan
We were using "complex" as a user defined type, which is no longer the case once #605 was checked in.
-
- Apr 22, 2016 (1 commit)
-
Committed by Pengzhou Tang
-