- 22 May 2020, 1 commit
-
-
Committed by Chris Hajas
Commit 649ee57d "Build ORCA with C++14: Take Two (#10068)" left behind a major FIXME: a hard-coded CXXFLAGS in gporca.mk. At the very least this looks completely out of place aesthetically, but more importantly it is problematic in several ways:

1. It leaves the language mode for part of the code base (the src/backend/gpopt "ORCA translator") unspecified. The ORCA translator collaborates closely with ORCA and directly uses much of ORCA's interfaces, so there is a non-hypothetical risk of non-subtle incompatibilities. This is obscured by the fact that GCC and upstream Clang both default to gnu++14 after their respective 6.0 releases. Apple Clang from Xcode 11, however, reacts as if the default were still gnu++98:

> In file included from CConfigParamMapping.cpp:20:
> In file included from ../../../../src/include/gpopt/config/CConfigParamMapping.h:19:
> In file included from ../../../../src/backend/gporca/libgpos/include/gpos/common/CBitSet.h:15:
> In file included from ../../../../src/backend/gporca/libgpos/include/gpos/common/CDynamicPtrArray.h:15:
> ../../../../src/backend/gporca/libgpos/include/gpos/common/CRefCount.h:68:24: error: expected ';' at end of declaration list
> virtual ~CRefCount() noexcept(false)
> ^
> ;

2. It potentially conflicts with other parts of the code base. Namely, when configured with gpcloud, we might have -std=c++11 and -std=gnu++14 in the same breath, which is highly undesirable or an outright error.

3. Idiomatically, language standard selection should modify CXX, not CXXFLAGS, in the same vein as how AC_PROG_CC_C99 modifies CC.

We already had a precedent of setting the compiler up in C++11 mode (albeit for a less-used component, gpcloud). This patch leverages the same mechanism to set up CXX in C++14 mode.

Authored-by: Chris Hajas <chajas@pivotal.io>
-
- 21 May 2020, 6 commits
-
-
Committed by Lisa Owen
* docs - document the new plcontainer --use_local_copy option
* add more words around when to use the option

Co-authored-by: David Yozie <dyozie@pivotal.io>
-
Committed by Lisa Owen
* docs - restructure the platform requirements extensions table
* list newer versions first
* remove MADlib superscript/note, add to Additional Info
* update plcontainer pkg versions to 2.1.2, R image to 3.6.3
-
Committed by Jinbao Chen
Currently the system table pg_statistics only has data on the master, but the hash join skew optimization needs some of the information in pg_statistics. So we need to dispatch the ANALYZE command to the segments, so that the system table pg_statistics also has data on the segments.
-
Committed by Huiliang.liu
GPHOME_LOADERS and greenplum_loaders_path.sh are still required by some users, so export GPHOME_LOADERS as GPHOME_CLIENTS and link greenplum_loaders_path.sh to greenplum_clients_path.sh for compatibility.
-
Committed by Jesse Zhang
Commit 73ca7e77 (greenplum-db/gpdb#7195) introduced a simple suite that exercises auto_explain. However, it went beyond a simple "EXPLAIN" and instead ran a muted version of EXPLAIN ANALYZE. Comparing the output of "EXPLAIN ANALYZE" in a regress test is always error-prone and flaky. To wit, commit 1348afc0 (#7608) had to be done shortly after 73ca7e77 because the feature was time-sensitive. More recently, commit 2e4d99fa (#10032) was forced to hard-code the work_mem value, likely as compiled with --enable-cassert, and missed the case where we were configured without asserts, failing the test with a smaller-than-expected work_mem. This commit puts a band-aid on it by masking the numeric values of work_mem in auto_explain output.
-
- 20 May 2020, 6 commits
-
-
Committed by xiong-gang
-
Committed by xiong-gang
-
Committed by 盏一
As `man strtol` says:
> the calling program should set errno to 0 before the call, and then determine if
> an error occurred by checking whether errno has a nonzero value after the call.
-
Committed by Mel Kiyama
-
Committed by Lisa Owen
* docs - enhance the pxf supported platforms section
* vendor -> bundle
-
Committed by Tyler Ramer
- kernel.sem final value has become 4096, down from 40960
- vm.dirty_bytes and vm.dirty_background_bytes removed
- Adjust vm.dirty_ratio and vm.dirty_background_ratio to be in line with systems with less than 64 GB of memory

Co-authored-by: Tyler Ramer <tramer@pivotal.io>
Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
-
- 19 May 2020, 6 commits
-
-
Committed by (Jerome)Junfeng Yang
The pg_exttable catalog was removed because we now use the FDW framework to implement external tables. But other extensions may still rely on the pg_exttable catalog, so we create a view based on a UDF that extracts the pg_exttable catalog info.

Signed-off-by: Adam Lee <ali@pivotal.io>
-
Committed by Adam Lee
-
Committed by Adam Lee
-
Committed by Adam Lee
Keep the syntax of external tables, but store them as foreign tables with options in the catalog. When used, the foreign table options are transformed into an ExtTableEntry, which is compatible with external_table_shim, PXF, and other custom protocols. Also, add `DISTRIBUTED` syntax support for `CREATE FOREIGN TABLE` if the foreign server indicates it's an external table.

Note:
1. All permissions handling from pg_authid is still in place, to keep the external tables' GRANT queries working; we will address that in another PR if possible.
2. Multiple URI locations are stored in the foreign table options, separated by `|`.

Co-authored-by: Daniel Gustafsson <dgustafsson@pivotal.io>
-
Committed by Denis Smirnov
Currently, when we finish a query with a bitmap index scan and destroy its portal, we release the bitmap resources in the wrong order. First we should release the bitmap iterator (a bitmap wrapper), and only after that close down the subplans (the bitmap index scan with an allocated bitmap). Before this commit these operations were done in the reverse order, which caused access to a freed bitmap in the iterator's close function. Under the hood, pfree() is a wrapper around malloc's free(). free() doesn't return memory to the OS in most cases, and doesn't immediately corrupt the data in a freed chunk, so it is possible to access a freed chunk's data right after its deallocation. That is why we get a segfault only under concurrent workloads, when malloc's arena does return memory to the OS.
-
Committed by Jesse Zhang
This patch makes the minimal changes needed to build ORCA with C++14. This should address the grievance that ORCA cannot build with the default Xerces-C++ headers from Debian (3.2 or newer, which are built with GCC 8.3 in the default C++14 mode). I've kept the CMake build system in sync with the main Makefile, and I've made sure that all ORCA tests pass. This patch set also enables ORCA in Travis so the community gets compilation coverage.

=== FIXME / near-term TODOs: What's _not_ included in this patch, but would be nice to have soon (in descending order of importance):

1. -std=gnu++14 ought to be done in "configure", not in a Makefile. This is not a pedantic aesthetic issue; sooner or later we'll run into this problem, especially if we're mixing multiple things built in C++.
2. Clean up the Makefiles and move most CXXFLAGS overrides into autoconf.
3. Those noexcept(false) seem excessive; we should benefit from conditionally marking more code "noexcept", at least in production.
4. Detect whether Xerces was generated (either by autoconf or CMake) with a compiler that's effectively running post-C++11.
5. Work around a GCC 9.2 bug that crashes the loading of minidumps (I've tested with GCC 6 to 10). Last I checked, the bug has been fixed in GCC releases 10.1 and 9.3.

[resolves #9923] [resolves #10047]

Co-authored-by: Melanie Plageman <mplageman@pivotal.io>
Reviewed-by: Hans Zeller <hzeller@pivotal.io>
Reviewed-by: Ashuka Xue <axue@pivotal.io>
Reviewed-by: David Kimura <dkimura@pivotal.io>
-
- 18 May 2020, 9 commits
-
-
Committed by Jesse Zhang
This commit fixes up a host of top_builddir vs top_srcdir confusion, uncovered by running a VPATH build (with ORCA enabled, of course). I've also taken this opportunity to eliminate some duplication, using Makefile inclusion. After this commit, a VPATH build should compile. This resolves #10071.
-
Committed by Jesse Zhang
This fixes up quite a bit of confusion around the use of top_srcdir vs top_builddir. Amusingly, most of this was uncovered by running merely a "make clean" on a VPATH build. One notable change (at least to me) is changing a "." into srcdir in the gpfdist Makefile. This is really easy to miss (again, to me). While we're here, I've taken the liberty to fix surrounding mistakes like CPPFLAGS vs CFLAGS in src/test/regress. This should let a VPATH build with ORCA disabled compile successfully.
-
Committed by Jinbao Chen
Since GPDB 6 we are not allowed to modify the distribution of a partition leaf table, because the distribution of the root table and the leaf tables must be the same. But reorganizing only a partition leaf table does not cause this problem, so we enable reorg on a leaf table when the distribution is not changed.
-
- 16 May 2020, 4 commits
-
-
Committed by Mel Kiyama
most functions have been updated to use regclass (oid or table name)
-
Committed by Mel Kiyama
* docs - clarify/fix CREATE TABLE syntax for partitioned tables. Also add more partitioned table examples.
* doc - minor updates to partitioned table syntax.
* docs - minor fix to syntax diagram
-
Committed by Mel Kiyama
* docs - update bloat best practices information from dev.
  - Remove copying or redistributing table data as alternatives to VACUUM FULL.
  - Mention that VACUUM (without FULL) maintenance is for both heap and AO tables.
  Also reorganized the information, and clarified that the ACCESS EXCLUSIVE lock is the reason users cannot access a table during VACUUM FULL.
* docs - updates based on review comments.
* docs - removed warning about stopping VACUUM FULL.
-
Committed by mkiyama
-
- 15 May 2020, 8 commits
-
-
Committed by Tom Lane
Negative availMemLessRefund would be problematic. It's not entirely clear whether the case can be hit in the code as it stands, but this seems like good future-proofing in any case. While we're at it, insist that the value be not merely positive but not tiny, so as to avoid doing a lot of repalloc work for little gain.

Peter Geoghegan

Discussion: <CAM3SWZRVkuUB68DbAkgw=532gW0f+fofKueAMsY7hVYi68MuYQ@mail.gmail.com>
-
Committed by Heikki Linnakangas
Now that ShareInputScan manages its own tuplestore, Material doesn't need the extra features that tuplestorenew.c provides.

Reviewed-by: Georgios Kokolatos <gkokolatos@pivotal.io>
-
Committed by Heikki Linnakangas
Reviewed-by: Soumyadeep Chakraborty <sochakraborty@pivotal.io>
Reviewed-by: Georgios Kokolatos <gkokolatos@pivotal.io>
-
Committed by Heikki Linnakangas
Seems a bit silly to have the Material node involved. Just create and manage the tuplestore in the ShareInputScan node itself, and leave out the Material node.

Reviewed-by: Georgios Kokolatos <gkokolatos@pivotal.io>
-
Committed by Heikki Linnakangas
ShareInputScan no longer tries to share the sort tapes between processes, so all this infrastructure to track multiple read positions is no longer needed.

Reviewed-by: Georgios Kokolatos <gkokolatos@pivotal.io>
-
Committed by Heikki Linnakangas
Previously, ShareInputScan could cooperate with a Sort node to share the final sort tape directly with other processes. Remove that support. If a Sort node is shared, we now put a Materialize node on top of the Sort, like all other nodes. That is obviously less performant than sharing the sort tape directly. However, I don't believe that is significant in practice.

Firstly, if you consider how a ShareInputScan is used, having a Sort below the ShareInputScan should be rare. A ShareInputScan is used to implement CTEs, and in order to have a Sort node just below the ShareInputScan, you need to have an ORDER BY in the CTE. For example (from the 'sisc_sort_spill' test):

  select avg(i3) from (
    with ctesisc as (select * from testsisc order by i2)
    select t1.i3, t2.i2
    from ctesisc as t1, ctesisc as t2
    where t1.i1 = t2.i2
  ) foo;

However, in a query like that, the ORDER BY is actually useless; the order is not guaranteed to be preserved. In fact, ORCA optimizes it away completely.

Secondly, even if you have a query like that, I don't think optimizing away the Material is very significant. If the number of rows is small enough to fit in memory, the Sort can be performed in memory, so you're still writing it to disk only once, in the Material node. If it's large enough to spill, the Material node will shield the Sort node from needing to support random access, which enables the "on-the-fly" final merge optimization in the tuplesort. So I believe you'll do roughly the same amount of I/O in that case, too. One way to think about this is that the final merge will be written out to the Material's tuplestore instead of the tuplesort's file.

There is one drawback: the Material node won't be able to reuse the disk space used by the sort tapes as the final merge is performed, so you'll momentarily need twice the disk space. I think that's acceptable. If you don't like that, don't put superfluous ORDER BYs in your WITH queries.

Reviewed-by: Georgios Kokolatos <gkokolatos@pivotal.io>
-
Committed by Wen Lin
While gpload is loading data, if the configuration file contains "error_table" and doesn't contain "preload", an error of no attribute "staging_table" or "fast_path" occurs.
-
Committed by David Yozie
-