- 23 Feb 2018 (1 commit)

Committed by Bhuvnesh Chaudhary:
gp_toolkit's tests run a query which checks whether any GUCs have different values across the cluster nodes. However, the test udf_exception_blocks_panic_scenarios is executed before it and injects a PANIC, which causes recovery, and it takes some time after that for all the GUCs to reach a consistent state across the segments. By that time gp_toolkit has already identified a difference and fails, so move the gp_toolkit test above it.

- 19 Feb 2018 (1 commit)

Committed by Ashwin Agrawal:
The version in TINC had been broken for a long time, ever since sub-transactions were correctly dispatched to segments. Wrote new tests forcing a reset of the gang during commit-prepared, and exceeding MAX_ON_EXITS transactions.

- 09 Feb 2018 (1 commit)

Committed by Ashwin Agrawal:
Simplified the tests by setting the GUCs at session level instead of via reloads. Also removed creating tables with indexes, sequences, etc., as they are not needed for these tests. The same coverage is now achieved with ICW in 15 seconds, versus the older TINC version taking 15 minutes.

- 01 Feb 2018 (1 commit)

Committed by Xin Zhang:
It uses a serializable backend, which conflicts with append-optimized tables.

Author: Xin Zhang <xzhang@pivotal.io>
Author: Asim R P <apraveen@pivotal.io>

- 30 Jan 2018 (2 commits)

Committed by Wang Hao:
The hook is called for:
- each query Submit/Start/Finish/Abort/Error
- each plan node, on executor Init/Start/Finish

Author: Wang Hao <haowang@pivotal.io>
Author: Zhang Teng <tezhang@pivotal.io>

Committed by Wang Hao:
On postmaster start, additional space in Shmem is allocated for Instrumentation slots and a header. The number of slots is controlled by a cluster-level GUC; the default is 5MB (approximately 30K slots). The default number is estimated as 250 concurrent queries * 120 nodes per query. If the slots are exhausted, instruments are allocated in local memory as a fallback.

These slots are organized as a free list:
- The header points to the first free slot.
- Each free slot points to the next free slot.
- The last free slot's next pointer is NULL.

ExecInitNode calls GpInstrAlloc to pick an empty slot from the free list:
- The free slot pointed to by the header is picked.
- The picked slot's next pointer is assigned to the header.
- A spinlock on the header prevents concurrent writing.
- When the GUC gp_enable_query_metrics is off, Instrumentation is allocated in local memory instead.

Slots are recycled by a resource owner callback function. Benchmark results with TPC-DS show that the performance impact of this commit is less than 0.1%.

To improve the performance of instrumenting, the following optimizations are added:
- Introduce instrument_option to skip CDB info collection.
- Narrow tuplecount in Instrumentation from double to uint64.
- Replace the instrument tuple entry/exit functions with macros.
- Add need_timer to Instrumentation, to allow eliminating timing overhead.

This ports part of upstream commit:
------------------------------------------------------------------------
commit af7914c6
Author: Robert Haas <rhaas@postgresql.org>
Date: Tue Feb 7 11:23:04 2012 -0500

    Add TIMING option to EXPLAIN, to allow eliminating of timing overhead.
------------------------------------------------------------------------

Author: Wang Hao <haowang@pivotal.io>
Author: Zhang Teng <tezhang@pivotal.io>
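The free-list discipline described above can be sketched in a few lines. This is an illustrative Python simulation of the slot accounting only, not the actual GpInstrAlloc C code; the class and method names are invented for the sketch, and a Python lock stands in for the spinlock on the header.

```python
import threading

class SlotPool:
    """Simulates the shmem Instrumentation slot free list: the header
    points to the first free slot, and each free slot points to the next."""
    def __init__(self, nslots):
        self.next = list(range(1, nslots)) + [None]  # slot i -> next free slot
        self.head = 0 if nslots > 0 else None        # header: first free slot
        self.lock = threading.Lock()                 # stands in for the spinlock

    def alloc(self):
        """Pop a slot off the free list; return None when exhausted
        (the caller then falls back to local-memory allocation)."""
        with self.lock:
            slot = self.head
            if slot is not None:
                self.head = self.next[slot]
            return slot

    def free(self, slot):
        """Recycle a slot (in GPDB this happens via a resource owner callback)."""
        with self.lock:
            self.next[slot] = self.head
            self.head = slot

pool = SlotPool(3)
a, b = pool.alloc(), pool.alloc()
pool.free(a)
```

The spinlock matters because many backends call ExecInitNode concurrently; popping the head and rewriting it must be atomic, exactly as in the sketch's `alloc`.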

- 22 Jan 2018 (1 commit)

Committed by Richard Guo:

- 18 Jan 2018 (1 commit)

Committed by Jesse Zhang:
This patch removes codegen wholesale from Greenplum. In addition to reverting the commits involving codegen, we also removed miscellaneous references to the feature and its GUC. The following commits from 5.0.0 were reverted (topologically ordered):

f38c9064 Support avg aggregate function in codegen
87dcae4c Capture and print error messages from llvm::verifyFunction
65137540 libgpcodegen assert() overrides use GPDB Assert()
81d378b4 GenerateExecVariableList calls regular slot_getattr when the code generation of the latter fails
05a28211 Update Google Test source path used by codegen
22a35fcc Call ereport when code generating float operators
79517271 Support overflow checks for doubles.
b5373a1e Fix codegen unittest link problem
7a1a98c9 Print filename and lineno in codegened CreateElog
58eda293 Fix wrong virtual tuple check in codegened slot_getattr
bc6faa08 Set llvm_isNull_ptr to false when the result of codegened expression evaluation is not null
8bbbd63f Enhance codegened advance_aggregates with support for null attributes
e1fd6072 Abort code generation of expression evaluation trees with unsupported ExprState types
509460ee Support null attributes in codegen expression evaluation framework
739d978d Move enrollment of codegened ExecVariableList to ExecInitNode
c05528d1 Fix CHECK_SYMBOL_DEFINED macro in CMakeLists.txt
12cfd7bd Support offset calculation when there is null in the tuple and codegen is enabled
40a2631e Use slot_getattr wrapper function for regular version. (#1194)
613e9fbb Revert "Fix codegen issue by moving slot_getattr to heaptuple.c similar to"
ee03f799 Fix cpplint error on advance aggregate
2a65b0aa Fix slot function nullptr issue in expr eval.
3107fc0e Fix for !(expr1 & expr2) and hasnull logic in slot_getatrr codegen.
c940c7f6 Fix codegen issue by moving slot_getattr to heaptuple.c similar to Postgres.
c8125736 Introduce guc to enable/disable all generator.
a4f39507 Ensure that codegen unittests pass with GCC 6.2.0 (#1177)
682d0b28 Allow overriding assert() functionality in libgpcodegen in DEBUG mode
3209258a Organize codegen source files and unit tests
d2ba88a9 Fix codegen unittests using DatumCastGenerator
020b71a5 Generate code for COUNT aggregate function
0eeec886 Fixing codegen related bugs in InitializeSupportedFunction
41055352 Rewrite PGGenericFuncGenerator to support variable number of types
e5318b6b Add AdvanceAggregatesCodegenEnroll mock function
87521715 Codegen advance_aggregates for SUM transition function
2043d95b Use string messages in codegened slot_getattr fallback block
f160fa5a Add new GUC to control Codegen Optimization level.
697ffc1a Fix cpplint errors
c5e3aed4 Support Virtual Tuples and Memtuples in SlotGetAttrCodegen
5996aaa7 Keep a reference to the CodegenManager in code generators
6833b3c2 Remove unused header and just include what you use in codegen
ab1eda87 Allow setting codegen guc to ON only if code generation is supported by the build
dcd40712 Use PGFuncGeneratorInfo to codegen pg functions
83869d1c Replace dynamic_cast with dyn_cast for llvm objects
23007017 Decide what to generate for ExecEvalExpr based on PlanState
387d8ce8 Add EXPLAIN CODEGEN to print generated IR to the client
c4a9bd27 Introduce Datum to cpp cast, cpp type to Datum cast and normal cast. (#944)
66158dfd Record external function names for useful debugging
adab9120 Support variable length attributes in SlotGetAttrCodegen.
335e09aa Proclaim that the codegen'ed version of generated functions are called in debug build
50fd9140 Fix cpplint errors
88a04412 Use ExprTreeGeneratorInfo for expression tree generation.
3b4af3bb Split code generation of ExecVariableList from slot_getattr
8cd9ed9f Support <= operator for date/timestamp data types and some minor refactors.
e4dccf46 Implement InlineFunction to force inline at call site
71170942 Mock postmaster.o for codegen_framework_unittest.t
09f00581 Codegen expressions that contain plus, minus and mul operators on float8 data types
d7fb2f6d Fix codegen unittests on Linux and various compiler warnings while building codegen.
45f2aa96 Fix test and update coding style (This closes #874)
1b26fbfc Enrolled targetlist for scan and aggregate
ebd1d014 Enhance codegen framework to support arbitrary expr tree
ec626ce6 Generate and enroll naive ExecEvalExpr in Scan's quals
1186001e Revert "Create naive code-generated version of ExecQual"
6f928a65 Replace RegisterExternalFunction with GetOrRegisterExternalFunction using an unordered_map in codegen_utils
6ae0085b Move ElogWrapper to GpCodegenUtils.
d3f80b45 Add verifyfunction for generated llvm function.
7bcf094a Fix codegen compiler error with 8.3 merge
aae0ad3d Create naive code-generated version of ExecQual
dce266ad Minor code quality fixes in src/backend/codegen
d281f340 Support null attributes in code generated ExecVariableList
82fd418e Address a number of cpplint errors in codegen_utils_unittest.cc
887aa48d Add check for CodegenUtils::GetType for bool arrays
bb9b92c6 Enhance Base Codegen to do clean up when generation fails
b9ef5e3f Fix build error for Annotated types
a5cfefd9 Add support for array types in codegen_utils.
2b883384 Fix static_assert call
7b75d9ea This commit generates code for code path: ExecVariableList > slot_getattr > _slot_getsomeattrs > slot_deform_tuple. This code path is executed during scan in simple select queries that do not have a where clause (e.g., select bar from foo;).
6d0a06e8 Fix CodeGen typos and CodeGeneratorManagerCreate function signature in gpcodegen_mock.c
4916a606 Add support for registering vararg external functions with codegen utils.
ae4a7754 Integrate codegen framework and make simple external call to slot deform tuple. (This closes #649)
ee5fb851 Renaming code_generator to codegen_utils and CodeGenerator to CodegenUtils. (This closes #648)
88e9baba Adding GPDB code generation utils

Signed-off-by: Jesse Zhang <sbjesse@gmail.com>
Signed-off-by: Melanie Plageman <mplageman@pivotal.io>
Signed-off-by: Sambitesh Dash <sdash@pivotal.io>

- 13 Jan 2018 (2 commits)

Committed by Heikki Linnakangas:

Committed by Ashwin Agrawal:

- 13 Dec 2017 (2 commits)

Committed by Andreas Scherbaum:
Make SPI work with 64-bit counters:
* Fix GET DIAGNOSTICS
* Remove the earlier introduced SPI_processed64 variable

This includes the following upstream patches:
https://github.com/greenplum-db/gpdb/commit/23a27b039d94ba359286694831eafe03cd970eef
https://github.com/greenplum-db/gpdb/commit/f3f3aae4b7841f4dc51129691a7404a03eb55449
https://github.com/greenplum-db/gpdb/commit/ab737f6ba9fc0a26d32a95b115d5cd0e24a63191

https://github.com/greenplum-db/gpdb/commit/74a379b984d4df91acec2436a16c51caee3526af is not yet included, because repalloc_huge() is not yet backported.

Committed by Shreedhar Hardikar:
We don't want to use the optimizer for planning queries in SQL, PL/pgSQL, etc. functions when that is done on the segments. ORCA excels at complex queries, most of which will access distributed tables. We can't run such queries from the segment slices anyway, because they require dispatching a query within another query, which is not allowed in GPDB. Note that this restriction also applies to non-QD master slices. Furthermore, ORCA doesn't currently support PL/* statements (relevant when they are planned on the segments). For these reasons, restrict using ORCA to the master QD processes only.

Also revert commit d79a2c7f ("Fix pipeline failures caused by 0dfd0ebc.") and separate out the gporca fault injector tests into the newly added gporca_faults.sql, so that the rest can run in a parallel group.

Signed-off-by: Jesse Zhang <sbjesse@gmail.com>

- 04 Dec 2017 (1 commit)

Committed by Shreedhar Hardikar:
Move the gporca regression test out of the parallel group so that the gp_fault_injector functionality works correctly. Also, as it turns out, ORCA is sometimes used to run PL/pgSQL queries even when the optimizer GUC is set to off. So when gporca sets up the gp_fault_injector, the fault gets activated later on, in the parallel group that the qp_functions_in_from test is part of. So, reset the fault in gporca just in case.

- 02 Dec 2017 (1 commit)

Committed by Ivan Leskin:
Add a new compression option for append-optimized tables, "zstd". It is generally faster than zlib or quicklz, and compresses better. Or at least it can be faster or compress better, if not both at the same time, by adjusting the compression level. A major advantage of Zstandard is the wide range of tuning available to choose the trade-off between compression speed and ratio.

Update documentation to mention "zstd" alongside "zlib" and "quicklz". More could be done; all the examples still use zlib or quicklz, for example, and I think we want to emphasize Zstandard more in the docs over those other options. But this is the bare minimum to keep the docs factually correct.

Using the new option requires building the server with the libzstd library. A new --with-zstd configure option is added for that. The default is to build without libzstd, for now, but we should probably change the default to on after we have had a chance to update all the buildfarm machines to have libzstd.

Patch by Ivan Leskin, Dmitriy Pavlov, Anton Chevychalov. Test case, docs changes, and some minor editorialization by Heikki Linnakangas.

- 24 Nov 2017 (1 commit)

Committed by Heikki Linnakangas:
This is a backport from PostgreSQL 9.4. It brings back functionality that we lost with the rip-out and replace of the window function implementation. I left out all the code and tests related to COLLATE, because we don't have that feature. Will need to put that back when we merge collation support, in 9.1.

commit 8d65da1f
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date: Mon Dec 23 16:11:35 2013 -0500

    Support ordered-set (WITHIN GROUP) aggregates.

    This patch introduces generic support for ordered-set and hypothetical-set aggregate functions, as well as implementations of the instances defined in SQL:2008 (percentile_cont(), percentile_disc(), rank(), dense_rank(), percent_rank(), cume_dist()). We also added mode() though it is not in the spec, as well as versions of percentile_cont() and percentile_disc() that can compute multiple percentile values in one pass over the data.

    Unlike the original submission, this patch puts full control of the sorting process in the hands of the aggregate's support functions. To allow the support functions to find out how they're supposed to sort, a new API function AggGetAggref() is added to nodeAgg.c. This allows retrieval of the aggregate call's Aggref node, which may have other uses beyond the immediate need. There is also support for ordered-set aggregates to install cleanup callback functions, so that they can be sure that infrastructure such as tuplesort objects gets cleaned up.

    In passing, make some fixes in the recently-added support for variadic aggregates, and make some editorial adjustments in the recent FILTER additions for aggregates. Also, simplify use of IsBinaryCoercible() by allowing it to succeed whenever the target type is ANY or ANYELEMENT. It was inconsistent that it dealt with other polymorphic target types but not these.

    Atri Sharma and Andrew Gierth; reviewed by Pavel Stehule and Vik Fearing, and rather heavily editorialized upon by Tom Lane

Also includes this fixup:

commit cf63c641
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date: Mon Dec 23 20:24:07 2013 -0500

    Fix portability issue in ordered-set patch.

    Overly compact coding in makeOrderedSetArgs() led to a platform dependency: if the compiler chose to execute the subexpressions in the wrong order, list_length() might get applied to an already-modified List, giving a value we didn't want. Per buildfarm.
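As a reminder of the semantics being backported, percentile_cont() sorts its input and linearly interpolates between the two adjacent values around the requested fraction. A rough Python illustration of that behavior follows; it is not GPDB code (the real implementation runs through tuplesort and the aggregate support functions described above), and the function name here just mirrors the SQL aggregate.

```python
def percentile_cont(fraction, values):
    """Continuous percentile: sort the input, then linearly interpolate
    between the two adjacent values around fraction * (n - 1)."""
    vals = sorted(values)
    idx = fraction * (len(vals) - 1)
    lo = int(idx)
    hi = min(lo + 1, len(vals) - 1)
    return vals[lo] + (idx - lo) * (vals[hi] - vals[lo])

# The median of an even-sized input falls between the two middle values.
print(percentile_cont(0.5, [10, 20, 30, 40]))  # 25.0
```

percentile_disc(), by contrast, returns an actual input value rather than an interpolated one.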

- 23 Nov 2017 (1 commit)

Committed by Heikki Linnakangas:
The last 8.4 merge commit introduced support for DISTINCT with hashing, and refactored the way grouping_planner() works with the path keys. That broke DISTINCT with window functions, because the new distinct_pathkeys field was not set correctly. In commit 474f1db0, I moved some GPDB-added tests from the 'aggregates' test, to a new 'gp_aggregates' test. But I forgot to add the new test file to the test schedule, so it was not run. Oops. Add it to the schedule now. The tests in 'gp_aggregates' cover this bug.

- 08 Nov 2017 (1 commit)

Committed by Heikki Linnakangas:
This test was simply large, and also took quite a long time to run, so it's nice to split it up. Furthermore, only some of the tests produce different output with ORCA. Split the test so that the tests that use EXPLAIN, or produce different output with ORCA for some other reason, go to a new 'bfv_partition_plans' test, and the rest remain in 'bfv_partition'.

- 07 Nov 2017 (1 commit)

Committed by Daniel Gustafsson:
The storage/basic/partition tests were a collection of bugfixes, primarily from older versions of Greenplum. This moves the valuable tests over to ICW and removes the ones which are already covered in existing ICW suites. The decision for each individual test is elaborated on in the list below:

* MPP-3379 was testing an ancient bug, and MPP-3553 a hypothetical bug from before the current partition code was written. Combined the two, since the added clause will still cover the ancient 3379 issue, and removed the other from TINC.
* MPP-3625 was mostly already covered by existing tests in partition and bfv_partition. Added a test for splitting a non-existing default, as that was the only thing covered.
* MPP-6297 tests the pg_get_partition_def() function after the partition hierarchy has been amended via ALTER TABLE, which is already covered by the partition suite. Additionally it tested running pg_dump, which uses pg_get_partition_def() heavily, and this is covered by our pg_upgrade tests.
* MPP-6379 tested that partitions inherited unique indexes; the test was moved to the partition_indexing suite.
* MPP-6489 tested ALTER TABLE .. SET DISTRIBUTED BY for subpartitions, which didn't seem covered in ICW, so it was moved to the alter_distribution_policy suite.
* MPP-7661 tested for an old bug where pg_dump incorrectly handled partition hierarchies created with the EVERY syntax. pg_upgrade tests in ICW will run this test on hierarchies from the partition suites, so remove.
* MPP-7734 tested for excessive memory consumption in the case of a very deep partitioning hierarchy. The issue in question was fixed in 4.0 and the test clocks in at ~1.5 minutes, so remove the test to save time in the test pipeline. The test was more of a stress test than a regression test at this point, and while that's not unimportant, we should run stress tests deliberately, not accidentally.
* MPP-6537/6538 tested that new partitions introduced in the hierarchy correctly inherited the original owner. Test moved to the partition suite.
* MPP-7002 tested that splitting partitions worked after a column had been dropped. Since we already had a similar test in the partition suite, extended that test to also cover splitting.
* MPP-7164 tests for partition rank after DDL operations on the partition hierarchy. I can see that we are testing something along these lines in ICW, so pulled it across. In the process, fixed the test to actually run the SQL properly and not error out on a syntax error. Also removed a duplicated file for setting up the views.
* MPP-9548 tested inserting data into a column which was dropped and then re-added. Since we already had that covered in the partition suite, simply extended the comment on that case and added another projection to cover it completely.
* MPP-7232 tests that pg_get_partition_def() supports renamed partitions correctly. Added to ICW, but the pg_dump test removed.
* MPP-17740 tested the .. WITH (tablename = <t> ..) syntax, which we already covered to some extent, but added a separate case to test permutations of it.
* MPP-18673 covers an old intermittent bug where concurrent sessions splitting partitions would corrupt the relcache. The test in TINC didn't, however, run concurrently, so the underlying bug wasn't being tested; remove the test. If we want a similar test at some point, it should be an isolationtester suite.
* MPP-17761 tests an old bug where splitting an added CO partition failed. Since we didn't have much in terms of testing CO splitting, extended the block doing split testing with that.
* MPP-17707-* were essentially the same test but with varying storage options. While a lot of this is covered elsewhere, these tests were really easy to read, so rolled them all into a new suite called partition_storage.
* MPP-12775 covered yet another split/exchange scenario. Added a short variant to the partition suite.
* MPP-17110 tests for an old regression in attribute encoding for added columns in partitioning hierarchies. Removed the part of the test that checked compression ratio, as AO compression should be tested elsewhere.
* The partition_ddl2 test was moved over as partition_ddl, more or less unchanged.

This also removes the unused answer files mpp8031.ans.orca and query99, for which there were no corresponding tests, as well as the data file used for copying data into the tests (a copy of which already exists in src/test/regress/data).

- 06 Nov 2017 (1 commit)

Committed by Heikki Linnakangas:
This is mostly in preparation for changes soon to be merged from PostgreSQL 8.4, commit a77eaa6a to be more precise. Currently GPDB's ExecInsert uses the ExecSlotFetch*() functions to get the tuple from the slot, while upstream makes a modifiable copy with ExecMaterializeSlot(). That's OK as the code stands, because there is always a "junk filter" that ensures that the slot doesn't point directly to an on-disk tuple. But commit a77eaa6a will change that, so we have to start being more careful.

This does fix an existing bug, namely that if you UPDATE an AO table with OIDs, the OIDs currently change (github issue #3732). Add a test case for that.

More detailed breakdown of the changes:

* In ExecInsert, create a writable copy of the tuple when we're about to modify it, by calling ExecMaterializeSlot(), so that we don't accidentally modify an existing on-disk tuple.
* In ExecInsert, track the OID of the tuple we're about to insert in a local variable when we call the BEFORE ROW triggers, because we don't have a "tuple" yet.
* Add the ExecMaterializeSlot() function, like in the upstream, because we now need it in ExecInsert. Refactor ExecFetchSlotHeapTuple to use ExecMaterializeSlot(), like in upstream.
* Cherry-pick bug fix commit 3d02cae3 from upstream. We would get that soon anyway as part of the merge, but we'll soon have test failures if we don't fix it immediately.
* Change the API of appendonly_insert() so that it takes the new OID as an argument, instead of extracting it from the passed-in MemTuple. With this change, appendonly_insert() is guaranteed not to modify the passed-in MemTuple, so we don't need the equivalent of ExecMaterializeSlot() for MemTuples.
* Also change the API of appendonly_insert() so that it returns the new OID of the inserted tuple, like heap_insert() does. Most callers ignore the return value, so this way they don't need to pass a dummy pointer argument.
* Add a test case for the case that a BEFORE ROW trigger sets the OID of a tuple we're about to insert.

This is based on earlier patches against the 8.4 merge iteration3 branch by Jacob and Max.
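The reason ExecMaterializeSlot() matters here can be pictured with a toy model. This is an illustrative Python stand-in, not the executor code; the class and field names are invented for the sketch. The point is that a slot which still points at shared, on-disk storage must be given its own private copy before anyone writes through it.

```python
class Slot:
    """Toy tuple slot: may point directly at an 'on-disk' tuple, or own
    a private, writable copy of it."""
    def __init__(self, disk_tuple):
        self.disk = disk_tuple   # shared storage, must never be modified in place
        self.own = None          # private writable copy, once materialized

    def materialize(self):
        """Like ExecMaterializeSlot(): ensure the slot owns a writable copy,
        making one only on the first call."""
        if self.own is None:
            self.own = dict(self.disk)
        return self.own

on_disk = {"oid": 1001, "a": 42}
slot = Slot(on_disk)
tup = slot.materialize()
tup["oid"] = 2002          # safe: only the private copy changes
print(on_disk["oid"])      # 1001 -- the on-disk tuple is untouched
```

Without the materialize step, assigning a new OID through the slot would scribble on the shared tuple, which is exactly the class of bug the commit guards against.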

- 30 Oct 2017 (1 commit)

Committed by Pengzhou Tang:

- 24 Oct 2017 (1 commit)

Committed by Heikki Linnakangas:
Aside from being a good thing that we should do sooner or later anyway, this silences some current test failures caused by the error message differences introduced in commit ec3370b0.

- 22 Oct 2017 (1 commit)

Committed by Heikki Linnakangas:
I'm not convinced we really need all these different combinations. But let's at least move the test into the main test suite, so that it's easier to work with. These tests did reveal the bug fixed in commit c159ec72 (github issue #3577), so there is at least some value in them.

- 19 Oct 2017 (3 commits)

Committed by Heikki Linnakangas:
How I made the conversion:

1. Concatenated all the *.sql files together.
2. Removed all the @author, @date, etc. tags.
3. There was one stray "SELECT truncate_dml_heap_r();" command in one of the setup SQL files. I could not find that function anywhere, so removed it.
4. Removed all the "set gp_enable_column_oriented_table=on" and "set optimizer=on" commands.
5. Instead of re-creating the test tables between every test, only create the tables once at the top, and reuse them. In order to keep the results the same even though the tables are modified by the tests, wrap every test (except those that ERROR out on purpose) in a BEGIN-ROLLBACK block. Since we were just dropping the tables after each test before, this provides as much coverage.

Committed by Heikki Linnakangas:
I'm not entirely sure what the bug was that these tests cover, or whether we have coverage for it already, but the tests are cheap enough that I don't mind just adding them.

Committed by Heikki Linnakangas:
There was probably coverage for some of these in the existing tests already, but I couldn't quickly pinpoint the exact tests, so let's at least move these out of TINC.

- 07 Oct 2017 (1 commit)

Committed by Kavinder Dhaliwal:
This commit introduces the gp_recursive_cte test file, with a variety of tests concentrated on exercising references to a recursive CTE from within a subquery.

- 09 Sep 2017 (2 commits)

Committed by Xin Zhang:
Originally, if the REORGANIZE option was used and the source and target tables had the same partition distribution policy, the redistribution was skipped. This is because the planner will skip the reshuffle if the source and target distribution policies match. However, if both the source and target distribution policies are random, the planner will generate a redistribution motion to balance the tuples across the cluster. Leveraging that, we added a new code path that temporarily sets the source table's distribution policy to random while executing the CTAS query, so that the optimizer generates the proper query plan with a redistribute motion. We restore the policy after creating the temporary table.

Test cases are added to showcase the scenario where a table is loaded with data inconsistent with its distribution policy when using COPY ... ON SEGMENT. The second case catches a regression when using SET WITH (REORGANIZE = TRUE) DISTRIBUTED RANDOMLY.

Signed-off-by: Jacob Champion <pchampion@pivotal.io>

Committed by Heikki Linnakangas:

- 05 Sep 2017 (1 commit)

Committed by Ning Yu:
Simplify tuple serialization in Motion nodes.

There is a fast path for tuples that contain no toasted attributes, which writes the raw tuple almost as is. However, the slow path is significantly more complicated, calling each attribute's binary send/receive functions (although there's a fast path for a few built-in datatypes). I don't see any need for calling I/O functions here. We can just write the raw Datum on the wire. If that works for tuples with no toasted attributes, it should work for all tuples, if we just detoast any toasted attributes first.

This makes the code a lot simpler, and also fixes a bug with data types that don't have binary send/receive routines. We used to call the regular (text) I/O functions in that case, but didn't handle the resulting cstring correctly.

Diagnosis and test case by Foyzur Rahman.

Signed-off-by: Haisheng Yuan <hyuan@pivotal.io>
Signed-off-by: Ning Yu <nyu@pivotal.io>
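The simplification can be pictured as follows. This is a Python sketch of the idea only, not the Motion-node C code; the function name and the way a "toasted" attribute is modeled (as a zero-arg callable yielding its full value) are invented for the illustration. Instead of routing every attribute through its type's send/receive function, detoast first and then ship the raw bytes.

```python
def serialize_tuple(attrs):
    """Sketch: detoast any 'toasted' attribute, then write the raw value.
    Raw attributes are bytes; a toast pointer is modeled as a callable
    that returns the full, detoasted bytes."""
    out = []
    for attr in attrs:
        value = attr() if callable(attr) else attr  # "detoast" if needed
        out.append(value)                           # write the raw Datum as-is
    return b"".join(out)

toasted = lambda: b"very long decompressed value"   # stand-in for a toast pointer
wire = serialize_tuple([b"plain", toasted])
print(wire)  # b'plainvery long decompressed value'
```

With this shape there is a single code path for all tuples, which is why the per-type binary I/O machinery (and its text-I/O fallback bug) can go away.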

- 31 Aug 2017 (1 commit)

Committed by Heikki Linnakangas:
Rather than appending to a StringInfo, return a string; the caller can append that to a StringInfo if desired. And instead of passing a prefix as an argument, the caller can prepend that too. Both callers passed the same format string, so just embed that in the function itself. Don't append a trailing "; "; it's easier for the caller to append it, if preferred, than to remove it afterwards.

Also add a regression test for the 'gp_enable_fallback_plan' GUC. There were none before. The error message you get with that GUC disabled uses the gp_guc_list_show function.

- 25 Aug 2017 (1 commit)

Committed by Ashwin Agrawal:
Signed-off-by: Xin Zhang <xzhang@pivotal.io>

- 24 Aug 2017 (2 commits)

Committed by Heikki Linnakangas:
Commit 0be2e5b0 added a regression test to check that if a view contains an ORDER BY, that ORDER BY is obeyed when selecting from the view. However, it forgot to add the test to the schedule, so it was never run. Add it.

There is actually a problem with the test as written: gpdiff masks out differences in row order, so this test won't catch the problem it was originally written for. Nevertheless, it seems better to enable the test and run it than not to run it at all. But add a comment to point that out.

Committed by Heikki Linnakangas:
In order to test this, modify the gpdiff tool to not suppress duplicate WARNINGs if the duplicates don't have a segment number attached to them. The bug was easy to see by setting work_mem, which emits a WARNING in GPDB, except that suppressing the duplicates hid the bug. Even though we have a few tests that set work_mem, add one that does so explicitly for the purpose of testing this bug, with a comment explaining its purpose.

While we're at it, move the GPDB-specific test queries out of the 'guc' test into a new 'guc_gp' test, to keep the upstream 'guc' test pristine.

Fixes github issue #3008, reported by @guofengrichard.

- 21 Aug 2017 (1 commit)

Committed by Daniel Gustafsson:
The way ORCA was tied into the planner, running a planner_hook was not supported in the intended way. This commit moves ORCA into standard_planner() instead of planner(), and leaves the hook for extensions to make use of, with or without ORCA. Since the intention with the optimizer GUC is to replace the planner in postgres while keeping the planning process, this allows planner extensions to cooperate with that.

In order to reduce the Greenplum footprint in upstream postgres source files for future merges, the ORCA functions are moved to their own file. This also adds a memaccounting class for planner hooks, since they otherwise ran in the planner scope, as well as a test for using planner_hooks.

- 18 Aug 2017 (1 commit)

Committed by Heikki Linnakangas:
I repurposed the existing psql_gpdb_du test for this. It was very small; I think we can put tests for other \d commands in the same file. In passing, fix a typo in a comment there, and move the expected output from output/ to expected/, because it doesn't need any string replacements.

- 09 Aug 2017 (1 commit)

Committed by Heikki Linnakangas:
* Turn it into an extension, for easier installation.
* Add a simpler variant of the gp_inject_fault function, with fewer options. This is applicable to almost all the calls in the regression suite, so it's nice to make them less verbose.
* Change the dbid argument from smallint to int4, for convenience, so that you don't need a cast when calling the function.

- 27 Jul 2017 (3 commits)

Committed by Asim R P:

Committed by Asim R P:
The function gp_inject_fault() was defined in a test-specific contrib module (src/test/dtm). It is moved to a dedicated contrib module, gp_inject_fault, so that all tests can now make use of it. Two pg_regress tests (dispatch and cursor) are modified to demonstrate the usage. The function is modified so that it can inject a fault in any segment, specified by dbid; no more invoking the gpfaultinjector python script from SQL files. The new module is integrated into the top-level build so that it is included in make and make install.

Committed by Jesse Zhang:
SharedInputScan (a.k.a. "Shared Scan" in EXPLAIN) is the operator through which Greenplum implements Common Table Expression execution. It executes in two modes: writer (a.k.a. producer) and reader (a.k.a. consumer). Writers will execute the common table expression definition and materialize the output, and readers can read the materialized output (potentially in parallel).

Because of the parallel nature of Greenplum execution, slices containing Shared Scans need to synchronize among themselves to ensure that readers don't start until writers are finished writing. Specifically, a slice with readers depending on writers on a different slice will block during `ExecutorRun`, before even pulling the first tuple from the executor tree.

Greenplum's Hash Join implementation will skip executing its outer ("probe side") subtree if it detects an empty inner ("hash side"), and declare all motions in the skipped subtree as "stopped" (we call this "squelching"). That means we can potentially squelch a subtree that contains a shared scan writer, leaving cross-slice readers waiting forever.
For example, with ORCA enabled, the following query:

```SQL
CREATE TABLE foo (a int, b int);
CREATE TABLE bar (c int, d int);
CREATE TABLE jazz(e int, f int);

INSERT INTO bar VALUES (1, 1), (2, 2), (3, 3);
INSERT INTO jazz VALUES (2, 2), (3, 3);

ANALYZE foo;
ANALYZE bar;
ANALYZE jazz;

SET statement_timeout = '15s';

SELECT * FROM
(
  WITH cte AS (SELECT * FROM foo)
  SELECT * FROM (SELECT * FROM cte UNION ALL SELECT * FROM cte) AS X
  JOIN bar ON b = c
) AS XY
JOIN jazz ON c = e AND b = f;
```

leads to a plan that will expose this problem:

```
                                                 QUERY PLAN
------------------------------------------------------------------------------------------------------------
 Gather Motion 3:1  (slice2; segments: 3)  (cost=0.00..2155.00 rows=1 width=24)
   ->  Hash Join  (cost=0.00..2155.00 rows=1 width=24)
         Hash Cond: bar.c = jazz.e AND share0_ref2.b = jazz.f AND share0_ref2.b = jazz.e AND bar.c = jazz.f
         ->  Sequence  (cost=0.00..1724.00 rows=1 width=16)
               ->  Shared Scan (share slice:id 2:0)  (cost=0.00..431.00 rows=1 width=1)
                     ->  Materialize  (cost=0.00..431.00 rows=1 width=1)
                           ->  Table Scan on foo  (cost=0.00..431.00 rows=1 width=8)
               ->  Hash Join  (cost=0.00..1293.00 rows=1 width=16)
                     Hash Cond: share0_ref2.b = bar.c
                     ->  Redistribute Motion 3:3  (slice1; segments: 3)  (cost=0.00..862.00 rows=1 width=8)
                           Hash Key: share0_ref2.b
                           ->  Append  (cost=0.00..862.00 rows=1 width=8)
                                 ->  Shared Scan (share slice:id 1:0)  (cost=0.00..431.00 rows=1 width=8)
                                 ->  Shared Scan (share slice:id 1:0)  (cost=0.00..431.00 rows=1 width=8)
                     ->  Hash  (cost=431.00..431.00 rows=1 width=8)
                           ->  Table Scan on bar  (cost=0.00..431.00 rows=1 width=8)
         ->  Hash  (cost=431.00..431.00 rows=1 width=8)
               ->  Table Scan on jazz  (cost=0.00..431.00 rows=1 width=8)
                     Filter: e = f
 Optimizer status: PQO version 2.39.1
(20 rows)
```

where processes executing slice1 on the segments that have an empty `jazz` will hang. We fix this by ensuring we execute the Shared Scan writer even if it's in the subtree that we're squelching.
Signed-off-by: Melanie Plageman <mplageman@pivotal.io>
Signed-off-by: Sambitesh Dash <sdash@pivotal.io>
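The fix can be sketched as a tree walk. This is an illustrative Python simulation, not the executor's C implementation; the node kinds and the dict-based tree encoding are invented for the sketch. Squelching normally just stops every node in the skipped subtree, but a Shared Scan writer is still executed so that cross-slice readers are not left waiting.

```python
def squelch(node, executed):
    """Walk a subtree being skipped. Normally just stop each node, but
    force Shared Scan *writers* to run so readers on other slices can
    proceed instead of blocking forever."""
    if node["kind"] == "shared_scan_writer":
        executed.append(node["id"])      # the fix: still materialize the CTE
    for child in node.get("children", []):
        squelch(child, executed)

# A skipped probe-side subtree containing a CTE writer and a motion.
subtree = {"kind": "append", "children": [
    {"kind": "shared_scan_writer", "id": "share0"},
    {"kind": "motion", "id": "slice1_motion"},
]}
executed = []
squelch(subtree, executed)
print(executed)  # ['share0'] -- the writer ran even though the subtree was skipped
```

Before the fix, the walk above would have stopped the writer along with everything else, reproducing the hang in the example plan.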

- 22 Jul 2017 (1 commit)