- 16 Nov 2018, 2 commits
-
-
Committed by Adam Berlin
-
Committed by Lisa Owen
* docs - pxf troubleshooting - address jdbc timezone error
* jdbc driver returns the error
* can set other options as well
* Update troubleshooting_pxf.html.md.erb
-
- 15 Nov 2018, 9 commits
-
-
Committed by Daniel Gustafsson
When setting gp_role to utility mode on the command line using the PGOPTIONS='-c gp_role=utility' construction, the role is set during init processing, before the interconnect has been set up. There is, however, a disconnect/reconnect step in assign_gp_role() intended to ensure that a role switch is correctly propagated, which in this case dumps core as it tries to tear down a connection that has never been established. Fix by only reconnecting during normal processing, not during init mode processing.
Reported-by: Adam Berlin on Github #6231
Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
-
Committed by Heikki Linnakangas
For consistency with upstream. Sum up the child reltuples at planning time in ORCA instead. This adds some overhead to planning partitioned tables with lots of partitions with ORCA, but we hardly care about the startup time of ORCA anyway. This effectively reverts recent commit 3f3869d6. The tests that were added to vacuum_gp in that commit don't make much sense anymore, but instead of removing them altogether, rewrite them into something that is marginally useful. We probably have tests for reltuples/relpages elsewhere, in the 'analyze' test file at least, but it seems good to have a smaller summary of the intended behavior, in the form of a test like this.
Reviewed-by: Venkatesh Raghavan <vraghavan@pivotal.io>
-
Committed by Heikki Linnakangas
Commit 30e5d11b disabled the support for mark/restore on SeqScan. This removes the code that was made unused by that. The same was done in upstream, in commit adbfab11, in PostgreSQL 9.5. The upstream commit also removed mark/restore support in the TidScan and ValuesScan nodes, but we have not done that in GPDB yet.
-
Committed by Daniel Gustafsson
-
Committed by Lisa Owen
* docs - pxf internal/user config directory split
* edits requested by david
* install java on the master also
-
Committed by Bhuvnesh Chaudhary
Whenever `eval_part_qual` is invoked, a new qualList object is allocated and executed to evaluate a bool result. Once the qualList is used, the memory is not freed, and we continue allocating memory for qualList for all the levels. This can lead to OOM errors. Instead of allocating the memory in `eval_part_qual`, it's better to initialize it in `ExecInitPartitionSelector` to avoid such leaks.
Signed-off-by: Alexandra Wang <lewang@pivotal.io>
Signed-off-by: Hans Zeller <hzeller@pivotal.io>
-
Committed by Heikki Linnakangas
It's only available in proprietary builds, which is not cool. For the purposes of testing incremental analyze, zlib works just as well. (Compression probably doesn't matter for these tests at all...)
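A minimal sketch of the zlib alternative such tests can use, with invented table and column names; zlib is available in every build, unlike the proprietary compresstype this replaces:
```
-- Hypothetical append-only table compressed with zlib.
CREATE TABLE ao_sample (a int, b text)
WITH (appendonly=true, compresstype=zlib, compresslevel=1)
DISTRIBUTED BY (a);
```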
-
Committed by Jason Vigil
Co-authored-by: Jason Vigil <jvigil@pivotal.io>
Co-authored-by: Bradford Boyle <bboyle@pivotal.io>
-
Committed by David Krieger
-
- 14 Nov 2018, 13 commits
-
-
Committed by Daniel Gustafsson
When VACUUM processes a partitioning root, vacuum_rel will consider it in isolation and set the statistics to zero, as there is no data to be counted. This means that the root statistics were reset on VACUUM, which can cause unsuitable plans to be generated. Rather than resetting, opt for retaining the statistics for root partitions during VACUUM, as that's all we can do without breaking the one-rel-at-a-time semantics of VACUUM. This adds a testcase to ensure reltuples are maintained. Also add an assertion on QD operation to the function, to indicate when reading the code that it may make assumptions that are only valid on the QD. Backport to 5X_STABLE.
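A hedged sketch of the behavior the testcase guards, assuming a simple list-partitioned table; all names and data are illustrative:
```
-- After ANALYZE populates reltuples on the root, a subsequent VACUUM of
-- the root should retain the statistics instead of resetting them to zero.
CREATE TABLE sales (id int, region text)
DISTRIBUTED BY (id)
PARTITION BY LIST (region)
  (PARTITION usa VALUES ('usa'),
   PARTITION asia VALUES ('asia'));

INSERT INTO sales
SELECT i, CASE WHEN i % 2 = 0 THEN 'usa' ELSE 'asia' END
FROM generate_series(1, 1000) i;

ANALYZE sales;
VACUUM sales;

SELECT relname, reltuples FROM pg_class WHERE relname = 'sales';
```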
-
Committed by Daniel Gustafsson
Erroring out via ereport(ERROR ..) will clean up resources allocated during execution, so explicitly freeing right before is not useful (unless the allocation is in TopMemoryContext). Remove pfree() calls for lower allocations, and reorder one to happen just after a conditional ereport instead, to make for slightly easier debugging when breaking on the error.
Reviewed-by: Jacob Champion <pchampion@pivotal.io>
-
Committed by Daniel Gustafsson
-
Committed by Daniel Gustafsson
The --enable-testutils mechanism was broken in the 9.3 merge, with fixing it left as a FIXME. Remove the invocation of the option from the gpAux targets, as they otherwise fail to compile the tree. This also expands the FIXME comment to indicate that these should be turned on again when fixed.
Reviewed-by: Jacob Champion <pchampion@pivotal.io>
-
Committed by Daniel Gustafsson
--with-krb5 and --with-docdir are no longer supported and generate warnings when building with these targets (which Concourse does). The --with-krb5 invocation should possibly be replaced with gssapi options, but since I don't know where or when these builds are used, the only thing this commit contains is cleaning up the warnings.
Reviewed-by: Jacob Champion <pchampion@pivotal.io>
-
Committed by ZhangJackey
This commit fixes Github issue #6215. When executing the following plan:
```
Gather Motion 1:1  (slice1; segments: 1)
  ->  Merge Full Join
        ->  Seq Scan on int4_tbl a
        ->  Seq Scan on int4_tbl b
```
Greenplum will raise an assertion failure. The root cause is that we poorly support mark/restore in `heapam`. We decided not to touch the code there because later versions of Postgres have removed it. We simply force a Material node for the above plan.
Co-authored-by: Shujie Zhang <shzhang@pivotal.io>
Co-authored-by: Zhenghua Lyu <zlv@pivotal.io>
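A hedged reproduction sketch, assuming a table shaped like the regression int4_tbl and forcing the merge join by disabling the alternatives:
```
CREATE TABLE int4_tbl (f1 int4);
INSERT INTO int4_tbl VALUES (0), (123456), (-123456);

SET enable_hashjoin = off;
SET enable_nestloop = off;

-- With the fix, a Material node is forced above the inner side instead of
-- relying on heapam mark/restore for the Merge Full Join.
SELECT * FROM int4_tbl a FULL JOIN int4_tbl b ON a.f1 = b.f1;
```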
-
Committed by Heikki Linnakangas
Commit 576690f2 added tracking of which segments have been enlisted in a distributed transaction. However, it registered every query dispatched with CdbDispatch*() in the distributed transaction, even if the query was dispatched to the segments without the DF_NEED_TWO_PHASE flag. Without DF_NEED_TWO_PHASE, the QE will believe it's not part of a distributed transaction, and will throw an error when the QD tries to prepare it for commit: ERROR: Distributed transaction 1542107193-0000000232 not found (cdbtm.c:3031). This can occur if a command updates a partially distributed table (a table with gp_distribution_policy.numsegments smaller than the cluster size) and uses one of the backend functions, like pg_relation_size(), that dispatches an internal query to all segments. Fix the confusion by only registering commands that are dispatched with the DF_NEED_TWO_PHASE flag in the distributed transaction.
Reviewed-by: Pengzhou Tang <ptang@pivotal.io>
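A hedged sketch of the failing shape described above; the table name is invented, and how it ends up with numsegments smaller than the cluster size is elided here:
```
CREATE TABLE narrow_t (a int) DISTRIBUTED BY (a);
-- (assume narrow_t's gp_distribution_policy.numsegments is smaller than
--  the cluster size; getting into that state is elided)
BEGIN;
UPDATE narrow_t SET a = a + 1;        -- enlists only the table's segments
SELECT pg_relation_size('narrow_t');  -- internal query dispatched to all segments
COMMIT;  -- previously failed at prepare: "Distributed transaction ... not found"
```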
-
Committed by Huiliang.liu
* Add the external table encoding option as a condition for finding a reusable table. Get the database default encoding if ENCODING is not set in the config file. Look up the encoding code from the encoding string, and then add the encoding code as one of the conditions for finding a reusable table.
-
Committed by Chuck Litzell
* docs - note differences between file/gpfdist and gphdfs/s3/pxf protocols. How to set permissions on each type (see the sketch below).
* review comments - pxf create protocol happens when installing extension
* Changes from review
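For the gpfdist side of that distinction, permissions are granted on the protocol itself; a hedged sketch with invented role names:
```
-- SELECT governs readable external tables, INSERT writable ones.
GRANT SELECT ON PROTOCOL gpfdist TO etl_reader;
GRANT INSERT ON PROTOCOL gpfdist TO etl_writer;
```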
-
Committed by Heikki Linnakangas
This enhances gp_execute_on_segment() so that it can be used to execute queries that return a result, with the result passed back to the caller. With that, we can avoid the pl/python hack to connect to segments in utility mode in the reshuffle_ao test. I imagine that this would be useful for many other tests too, but I haven't yet searched around for more use cases.
Reviewed-by: Asim R P <apraveen@pivotal.io>
Reviewed-by: Zhenghua Lyu <zlv@pivotal.io>
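A hedged usage sketch; the signature (segment content id plus query text, now returning the query's result) is assumed from the description above:
```
-- Run a query on segment 0 and get the result back on the QD.
SELECT gp_execute_on_segment(0, 'SELECT pg_backend_pid()');
```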
-
Committed by Heikki Linnakangas
When dispatching the pg_relation_size() call to the segments, we were not passing through the fork name. As a result, it always calculated the size of the 'main' fork, regardless of the fork argument. This has been wrong since the 8.4 merge, which introduced relation forks.
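An illustration of the fixed behavior, with a placeholder table name:
```
SELECT pg_relation_size('some_table', 'main');  -- main data fork
SELECT pg_relation_size('some_table', 'fsm');   -- previously also returned the 'main' size
```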
-
Committed by Heikki Linnakangas
We had added the same check in GPDB, and the 9.2 merge added it again from upstream. Harmless, but let's be tidy.
-
Committed by Heikki Linnakangas
I added these tests in commit f8a80aeb, but forgot to add them to the test schedule. Add them now. The OID of the template0 database isn't stable across versions, so change the test to use template1 instead, which has the hard-coded OID 1.
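The stability argument in one query: template1's OID is fixed at 1, while template0's is assigned at initdb time:
```
SELECT oid, datname FROM pg_database WHERE datname IN ('template0', 'template1');
```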
-
- 13 Nov 2018, 14 commits
-
-
Committed by Daniel Gustafsson
-
Committed by Daniel Gustafsson
The work_mem usage statistics previously handled the non-TEXT formatting options in EXPLAIN, but the information on the work_mem required to avoid spilling did not. This refactors the work_mem handling into a single place, under the assumption that workmemwanted won't be set unless workmemused is also set, and adds full formatting support. Below is an example of the TEXT as well as the JSON output for the same query where spilling occurred:
```
work_mem: 855261kB  Segments: 3  Max: 285087kB (segment 0)  Workfile: (3 spilling)
Work_mem wanted: 575507K bytes avg, 575652K bytes max (seg1) to lessen workfile I/O affecting 3 workers.
```
```
"work_mem": {
  "Used": 855261,
  "Segments": 3,
  "Max Memory": 285087,
  "Max Memory Segment": 0,
  "Workfile Spilling": 3,
  "Max Memory Wanted": 575652,
  "Max Memory Wanted Segment": 1,
  "Avg Memory Wanted": 575507,
  "Segments Affected": 3
},
```
Reviewed-by: Shaoqi Bai <sbai@pivotal.io>
Reviewed-by: Jesse Zhang <jzhang@pivotal.io>
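The JSON form above can be requested as below; the query and table are placeholders chosen so the join is likely to spill:
```
EXPLAIN (ANALYZE, FORMAT JSON)
SELECT * FROM big_t a JOIN big_t b USING (id);
```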
-
Committed by Daniel Gustafsson
Support for event triggers on external protocols was added in commit d920bf31, but the oclass wasn't marked as supported for event triggers. This makes little difference in practice, as the only codepath affected is operations on dependent objects, and none exists for external protocols. Also add a small addition to the testcase to use the protocol in a WRITABLE external table. In passing, also mark the compression oclass as not supported, as we don't have operations for creating/dropping compression objects yet.
Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
-
Committed by Daniel Gustafsson
The isolationtester framework was backported into Greenplum, and via the merges of later upstream releases the installcheck-prepared-txns target was duplicated, leaving a harmless yet annoying warning about target redefinition. Remove the duplication from the backport, leaving the originally committed lines in place.
-
Committed by Paul Guo
Greenplum's bitmap-related scan differs from that in PG upstream, since Greenplum supports on-disk bitmap indexes and the code has also been largely refactored to support "streaming" bitmap ops. Due to potentially lossy storage for a page, a recheck flag is used to indicate whether the executor needs to recheck each tuple. This patch fixes some recheck logic errors in the bitmap ops code and also adds some tests to cover the Greenplum-specific code.
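A hedged sketch of exercising the on-disk bitmap index path; names and data are illustrative:
```
CREATE TABLE events (id int, status text) DISTRIBUTED BY (id);
CREATE INDEX events_status_bmp ON events USING bitmap (status);

INSERT INTO events
SELECT i, CASE i % 3 WHEN 0 THEN 'new' WHEN 1 THEN 'open' ELSE 'done' END
FROM generate_series(1, 100000) i;

-- The OR combines bitmaps; on lossy pages the executor must recheck the
-- qual against each tuple.
SET enable_seqscan = off;
EXPLAIN SELECT count(*) FROM events WHERE status = 'open' OR status = 'new';
```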
-
Committed by Pengzhou Tang
-
Committed by Jinbao Chen
In 'copy (select statement) to file', we generate a query plan, set its dest receiver to copy_dest_receiver, and run the dest receiver on the QD. In 'copy (select statement) to file on segment', we modify the query plan, delete the Gather Motion, and let the dest receiver run on the QEs. Change 'isCtas' in Query to 'parentStmtType' to be able to mark the type of the parent utility statement. Add a CopyIntoClause node to store copy information, and add copyIntoClause to PlannedStmt. In Postgres, we don't need to make a different query plan for a query contained in a utility statement, but in Greenplum we do, so we use a field to indicate whether the query is contained in a utility statement, and the type of that utility statement. The behavior of 'copy (select statement) to file on segment' is actually very similar to 'SELECT ... INTO ...' and 'CREATE TABLE ... AS SELECT ...': we use the distribution policy inherent in the query result as the final data distribution policy, and if there is none, we use the first column in the target list as the key and redistribute. The only difference is that we use 'copy_dest_receiver' instead of 'intorel_dest_receiver'.
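A hedged usage sketch of the ON SEGMENT form; the path and table are placeholders, and <SEGID> is expanded per segment:
```
-- Each segment writes its own slice of the query result to a local file.
COPY (SELECT * FROM sales WHERE region = 'usa')
TO '/data/export/sales_<SEGID>.csv' ON SEGMENT CSV;
```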
-
Committed by Pengzhou Tang
We have two ways to notify the QE senders to stop providing tuples: 1) send a stop message through the interconnect; 2) set the QueryFinishPending flag through libpq. Method 1 cannot resolve the hang issue in 2c011ce4 if the test case in that PR also contains a rescanned RESULT node. For method 2, the current problem is that once a QE sender is stuck waiting for interconnect data acks because it is out of sending buffers, or stuck waiting for EOS acks, it has no chance to process the QueryFinishPending flag. The fix breaks the loops and sets stillActive to false in a rude way, so more assertions and filtering are added for safety.
-
Committed by Pengzhou Tang
This commit fixes wrong results with correlated queries (#6074). The issue was introduced by commit 2c011ce4, which made a bad decision to call ExecSquelchNode() in the RESULT node. The original thought in 2c011ce4 was: once the one-time filter qual of a RESULT node is evaluated to false, we no longer need to fetch tuples from the child node, so it's safe to send a stop message to the source QE senders. However, if the RESULT node is rescanned and the one-time filter is re-evaluated to true, the RESULT node needs to fetch tuples from the child node again, but the QE senders have been stopped and some tuples are missed. So the first step is reverting the changes in 2c011ce4, and then adding a new test case.
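A hedged sketch of the failing shape: a correlated qual that references only the outer row can become a one-time filter on a Result node inside the subplan, re-evaluated on every rescan. All names and data are invented:
```
CREATE TABLE outer_t (a int) DISTRIBUTED BY (a);
CREATE TABLE inner_t (b int) DISTRIBUTED BY (b);
INSERT INTO outer_t SELECT generate_series(1, 10);
INSERT INTO inner_t SELECT generate_series(1, 10);

-- 'o.a > 5' does not depend on inner_t, so it may be evaluated once per
-- rescan of the subplan; stopping the senders on a false result lost
-- tuples for later outer rows where it evaluates to true.
SELECT a FROM outer_t o
WHERE EXISTS (SELECT 1 FROM inner_t i WHERE i.b = o.a AND o.a > 5);
```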
-
Committed by Heikki Linnakangas
These are missing from the upstream. While not strictly necessary, it seems weird that these can be used as DISTRIBUTED BY columns, for hash distribution, but we cannot do hash joins or hash aggregates on them. My longer-term goal is to get rid of the cdbhash mechanism, and replace that with regular hash opclasses. Once we do that, we'll need the hash opclass, or these datatypes can no longer be used in DISTRIBUTED BY. That's one reason to add these now.
-
Committed by Heikki Linnakangas
We already had the implementation for the hash function in pg_proc, but for some reason it was never turned into an operator class, so the planner didn't know it could use it for e.g. hash joins and hash aggregates.
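A generic sketch of what such wiring looks like; the type and function names are hypothetical stand-ins, not the ones from this commit:
```
-- Tie an existing hash function to the equality operator so the planner
-- can consider hash joins and hash aggregates on the type.
CREATE OPERATOR CLASS mytype_hash_ops
    DEFAULT FOR TYPE mytype USING hash AS
        OPERATOR 1 =,
        FUNCTION 1 mytype_hash(mytype);
```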
-
Committed by Heikki Linnakangas
PostgreSQL doesn't have one, but we actually had to jump through some hoops in the planner to work around that, converting TIDs to int8s when constructing plans with a "deduplication" step. Let's just bite the bullet and implement the hash opclass.
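A minimal illustration of what the opclass enables; the table name is a placeholder:
```
-- Grouping on ctid can now be planned as a HashAggregate directly,
-- without first casting TIDs to int8.
EXPLAIN SELECT ctid, count(*) FROM some_table GROUP BY ctid;
```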
-
Committed by Jacob Champion
This test has already been rewritten twice due to upstream format changes. It's better to just hardcode the numerics we want to test in the original GPDB4 format (thanks to Heikki for this idea!). As a result, we no longer need the convert_to_v4() and numeric_force_long_format() helper functions. We also remove the GPDB_93_MERGE_FIXME that was added recently. The reason this test changed output format is that the mangled GPDB4 numerics had extra digits past the display scale. This is not supposed to happen during normal operation, and in upstream commit 5cb0e335 -- which was merged as part of the 9.3 iteration -- get_str_from_var() was changed to no longer perform rounding, which leads to truncation for these "corrupted" numerics.
Co-authored-by: Alexandra Wang <lewang@pivotal.io>
-
Committed by Jim Doty
We saw a Fortran file in our source tree that was not being compiled. After some discussion, we decided against removing the Fortran file, and instead prioritized keeping the vendored package intact. This commit brings us up to date with the latest version, and restores the license file to the vendored code.
-
- 12 Nov 2018, 2 commits
-
-
Committed by Zhenghua Lyu
This commit does three things:
- move reshuffle tests to greenplum_schedule
- add cases to test that reshuffle can correctly abort
- remove some redundant cases (I think in this regress test it is enough to just test expanding from 2 to 3)
-
Committed by Pengzhou Tang
Previously, when creating an APPEND node for an inheritance table, if the subpaths had different numbers of segments in gp_distribution_policy, the whole APPEND node might be assigned a wrong numsegments, so some segments could not get plans and data was lost from the results.
-