- 22 May 2020, 4 commits
-
-
Committed by Pengzhou Tang
This is mainly to resolve slow responses to sequence requests under the TCP interconnect. Sequence requests are sent through libpq from QEs to the QD (we call these dispatcher connections). In the past, under the TCP interconnect, the QD checked for events on dispatcher connections only every 2 seconds, which is obviously inefficient. Under UDPIFC mode, the QD also monitors the dispatcher connections while receiving tuples from QEs, so it can process sequence requests in time; this commit applies the same logic to the TCP interconnect.

Reviewed-by: Hao Wu <gfphoenix78@gmail.com>
Reviewed-by: Ning Yu <nyu@pivotal.io>
-
Committed by Hubert Zhang
* Fix flaky test: refresh matview should use 2PC commit.

We have a refresh-matview-with-unique-index-check test case:

CREATE TABLE mvtest_foo(a, b) AS VALUES(1, 10);
CREATE MATERIALIZED VIEW mvtest_mv AS SELECT * FROM mvtest_foo distributed by(a);
CREATE UNIQUE INDEX ON mvtest_mv(a);
INSERT INTO mvtest_foo SELECT * FROM mvtest_foo;
REFRESH MATERIALIZED VIEW mvtest_mv;

Only one segment contains tuples and will fail the unique index check. Without 2PC, the other segments will just commit their transactions successfully. Since one segment errors out, the QD sends a cancel signal to all the segments, and if those segments have not finished the commit process, they report the warning: "DETAIL: The transaction has already changed locally, it has to be replicated to standby."

Reviewed-by: Paul Guo <paulguo@gmail.com>
-
Committed by Hubert Zhang
We introduced functions that run on an INITPLAN in commit a21ff2. INITPLAN functions are designed to support "CTAS select * from udf();". Since udf() runs on the entryDB, and the entryDB is always a read gang that cannot do dispatch work, the query would fail if the function contains a DDL statement, etc. The idea of an INITPLAN function is to run the function on an INITPLAN, which is in fact the QD, and store the result in a tuplestore. Later, the FunctionScan on the entryDB just reads tuples from the tuplestore instead of running the real function.

But the life cycle management is a little tricky. In the original commit, we hacked around it by closing the tuplestore in the INITPLAN without deleting the file, and letting the entryDB reader delete the file after finishing the tuple fetch. This introduces a file leak if the transaction aborts before the entryDB runs. This commit adds a postprocess_initplans() in ExecutorEnd() of the main plan to clean up the tuplestore created by preprocess_initplans() in ExecutorStart() of the main plan. Note that postprocess_initplans() must be placed after the dispatch work is finished, i.e. after mppExecutorFinishup(). Upstream doesn't need this function since it always uses scalar PARAMs to communicate between an INITPLAN and the main plan.
-
Committed by Chris Hajas
Commit 649ee57d "Build ORCA with C++14: Take Two (#10068)" left behind a major FIXME: a hard-coded CXXFLAGS in gporca.mk. At the very least this looks completely out of place aesthetically. But more importantly, it is problematic in several ways:

1. It leaves the language mode for part of the code base (src/backend/gpopt, the "ORCA translator") unspecified. The ORCA translator collaborates closely with ORCA and directly uses much of ORCA's interfaces, so there is a non-hypothetical risk of non-subtle incompatibilities. This is obscured by the fact that GCC and upstream Clang both default to gnu++14 after their respective 6.0 releases. Apple Clang from Xcode 11, however, reacts much as if the default were still gnu++98:

> In file included from CConfigParamMapping.cpp:20:
> In file included from ../../../../src/include/gpopt/config/CConfigParamMapping.h:19:
> In file included from ../../../../src/backend/gporca/libgpos/include/gpos/common/CBitSet.h:15:
> In file included from ../../../../src/backend/gporca/libgpos/include/gpos/common/CDynamicPtrArray.h:15:
> ../../../../src/backend/gporca/libgpos/include/gpos/common/CRefCount.h:68:24: error: expected ';' at end of declaration list
> virtual ~CRefCount() noexcept(false)
> ^
> ;

2. It potentially conflicts with other parts of the code base. Namely, when configured with gpcloud, we might have -std=c++11 and -std=gnu++14 in the same breath, which is highly undesirable or an outright error.

3. Idiomatically, language standard selection should modify CXX, not CXXFLAGS, in the same vein as how AC_PROG_CC_C99 modifies CC.

We already had a precedent of setting the compiler up in C++11 mode (albeit for the less-used gpcloud component). This patch leverages the same mechanism to set up CXX in C++14 mode.

Authored-by: Chris Hajas <chajas@pivotal.io>
-
- 21 May 2020, 6 commits
-
-
Committed by Lisa Owen
* docs - document the new plcontainer --use_local_copy option
* add more words around when to use the option

Co-authored-by: David Yozie <dyozie@pivotal.io>
-
Committed by Lisa Owen
* docs - restructure the platform requirements extensions table
* list newer versions first
* remove MADlib superscript/note, add to Additional Info
* update plcontainer pkg versions to 2.1.2, R image to 3.6.3
-
Committed by Jinbao Chen
Currently the system table pg_statistics only has data on the master, but the hash join skew optimization needs some of the information in pg_statistics. So we need to dispatch the ANALYZE command to the segments, so that the system table pg_statistics has data on the segments as well.
-
Committed by Huiliang.liu
GPHOME_LOADERS and greenplum_loaders_path.sh are still required by some users, so export GPHOME_LOADERS as GPHOME_CLIENTS and link greenplum_loaders_path.sh to greenplum_clients_path.sh for compatibility.
-
Committed by Jesse Zhang
Commit 73ca7e77 (greenplum-db/gpdb#7195) introduced a simple suite that exercises auto_explain. However, it went beyond a simple "EXPLAIN" and instead ran a muted version of EXPLAIN ANALYZE. Comparing the output of "EXPLAIN ANALYZE" in a regress test is always error-prone and flaky. To wit, commit 1348afc0 (#7608) had to be done shortly after 73ca7e77 because the output was time-sensitive. More recently, commit 2e4d99fa (#10032) was forced to memorize the work_mem value, likely compiled with --enable-cassert, and missed the case where we are configured without asserts, failing the test with a smaller-than-expected work_mem. This commit puts a band-aid on it by masking the numeric values of work_mem in the auto_explain output.
-
- 20 May 2020, 6 commits
-
-
Committed by xiong-gang
-
Committed by xiong-gang
-
Committed by 盏一
Just like `man strtol` says: > the calling program should set errno to 0 before the call, and then determine if > an error occurred by checking whether errno has a nonzero value after the call.
-
Committed by Mel Kiyama
-
Committed by Lisa Owen
* docs - enhance the pxf supported platforms section
* vendor -> bundle
-
Committed by Tyler Ramer
- kernel.sem final value has become 4096, down from 40960
- vm.dirty_bytes and vm.dirty_background_bytes removed
- Adjust vm.dirty_ratio and vm.dirty_background_ratio to be in line with systems of less than 64 GB of memory

Co-authored-by: Tyler Ramer <tramer@pivotal.io>
Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
-
- 19 May 2020, 6 commits
-
-
Committed by (Jerome)Junfeng Yang
The pg_exttable catalog was removed because we now use FDW to implement external tables. But other extensions may still rely on the pg_exttable catalog, so we create a view based on this UDF to extract pg_exttable catalog info.

Signed-off-by: Adam Lee <ali@pivotal.io>
-
Committed by Adam Lee
-
Committed by Adam Lee
-
Committed by Adam Lee
Keep the syntax of external tables, but store them as foreign tables with options in the catalog. When one is used, transform the foreign table options into an ExtTableEntry, which is compatible with external_table_shim, PXF and other custom protocols. Also, add `DISTRIBUTED` syntax support for `CREATE FOREIGN TABLE` if the foreign server indicates it's an external table.

Notes:
1. All permissions handling from pg_authid is still in place, to keep the external tables' GRANT queries working; we will revisit that in another PR if possible.
2. Multiple URI locations are stored in foreign table options, separated by `|`.

Co-authored-by: Daniel Gustafsson <dgustafsson@pivotal.io>
-
Committed by Denis Smirnov
Currently, when we finish a query with a bitmap index scan and destroy its portal, we release bitmap resources in the wrong order. First we should release the bitmap iterator (a bitmap wrapper), and only after that close down the subplans (the bitmap index scan with its allocated bitmap). Before this commit these operations were done in the reverse order, causing access to a freed bitmap in the iterator's close function. Under the hood, pfree() is a wrapper around malloc's free(). free() doesn't return memory to the OS in most cases, and often doesn't immediately corrupt data in a freed chunk, so it is possible to access a freed chunk's data right after its deallocation. That is why we only get a segfault under concurrent workloads, when malloc's arena does return memory to the OS.
-
Committed by Jesse Zhang
This patch makes the minimal changes to build ORCA with C++14. This should address the grievance that ORCA cannot build with the default Xerces-C++ headers from Debian (3.2 or newer, built with GCC 8.3 in its default C++14 mode). I've kept the CMake build system in sync with the main Makefile, and made sure that all ORCA tests pass. This patch set also enables ORCA in Travis so the community gets compilation coverage.

=== FIXME / near-term TODOs. What's _not_ included in this patch, but would be nice to have soon (in descending order of importance):

1. -std=gnu++14 ought to be done in "configure", not in a Makefile. This is not a pedantic aesthetic issue; sooner or later we'll run into this problem, especially if we're mixing multiple things built in C++.
2. Clean up the Makefiles and move most CXXFLAGS overrides into autoconf.
3. Those noexcept(false) seem excessive; we should benefit from conditionally marking more code "noexcept", at least in production.
4. Detect whether Xerces was generated (either by autoconf or CMake) with a compiler that's effectively running post-C++11.
5. Work around a GCC 9.2 bug that crashes the loading of minidumps (I've tested with GCC 6 through 10). Last I checked, the bug has been fixed in GCC releases 10.1 and 9.3.

[resolves #9923] [resolves #10047]

Co-authored-by: Melanie Plageman <mplageman@pivotal.io>
Reviewed-by: Hans Zeller <hzeller@pivotal.io>
Reviewed-by: Ashuka Xue <axue@pivotal.io>
Reviewed-by: David Kimura <dkimura@pivotal.io>
-
- 18 May 2020, 9 commits
-
-
Committed by Jesse Zhang
This commit fixes up a host of top_builddir vs top_srcdir confusion, uncovered by running a VPATH build (with ORCA enabled, of course). I've also taken this opportunity to slightly reduce duplication, using Makefile inclusion. After this commit, a VPATH build should compile. This resolves #10071.
-
Committed by Jesse Zhang
This fixes up quite a bit of confusion around the use of top_srcdir vs top_builddir. Amusingly, most of this was uncovered by running merely a "make clean" on a VPATH build. One notable change (at least to me) is changing a "." into srcdir in the gpfdist Makefile. This is really easy to miss (again, to me). While we're here, I've taken the liberty to fix a surrounding mistake, CPPFLAGS vs CFLAGS in src/test/regress. This should let a VPATH build with ORCA disabled compile successfully.
-
Committed by Jinbao Chen
We are not allowed to modify the distribution of a partition leaf table after GPDB 6, because the distribution of the root table and the leaf tables must be the same. But a reorg on only a partition leaf table does not cause this problem, so we enable reorg when the distribution is not changed.
-
- 16 May 2020, 4 commits
-
-
Committed by Mel Kiyama
Most functions have been updated to use regclass (oid or table name).
-
Committed by Mel Kiyama
* docs - clarify/fix CREATE TABLE syntax for partitioned tables. Also add more partitioned table examples.
* doc - minor updates to partitioned table syntax.
* docs - minor fix to syntax diagram
-
Committed by Mel Kiyama
* docs - update bloat best practices information from dev.
  -- Remove copying or redistributing table data as alternatives to VACUUM FULL.
  -- Mention that VACUUM (without FULL) maintenance is for both heap and AO tables.
  Also reorganized the information and clarified that the ACCESS EXCLUSIVE lock is the reason users cannot access a table during VACUUM FULL.
* docs - updates based on review comments.
* docs - removed warning about stopping VACUUM FULL.
-
Committed by mkiyama
-
- 15 May 2020, 5 commits
-
-
Committed by Tom Lane
Negative availMemLessRefund would be problematic. It's not entirely clear whether the case can be hit in the code as it stands, but this seems like good future-proofing in any case. While we're at it, insist that the value be not merely positive but not tiny, so as to avoid doing a lot of repalloc work for little gain.

Peter Geoghegan

Discussion: <CAM3SWZRVkuUB68DbAkgw=532gW0f+fofKueAMsY7hVYi68MuYQ@mail.gmail.com>
-
Committed by Heikki Linnakangas
Now that ShareInputScan manages its own tuplestore, Material doesn't need the extra features that tuplestorenew.c provides.

Reviewed-by: Georgios Kokolatos <gkokolatos@pivotal.io>
-
Committed by Heikki Linnakangas
Reviewed-by: Soumyadeep Chakraborty <sochakraborty@pivotal.io>
Reviewed-by: Georgios Kokolatos <gkokolatos@pivotal.io>
-
Committed by Heikki Linnakangas
Seems a bit silly to have the Material node involved. Just create and manage the tuplestore in the ShareInputScan node itself, and leave out the Material node.

Reviewed-by: Georgios Kokolatos <gkokolatos@pivotal.io>
-
Committed by Heikki Linnakangas
ShareInputScan no longer tries to share the sort tapes between processes, so all this infrastructure to track multiple read positions is no longer needed.

Reviewed-by: Georgios Kokolatos <gkokolatos@pivotal.io>
-