1. 21 May 2020, 6 commits
    • docs - document the new plcontainer --use_local_copy option (#10021) · c038b824
      Committed by Lisa Owen
      * docs - document the new plcontainer --use_local_copy option
      
      * add more words around when to use the option
      Co-authored-by: David Yozie <dyozie@pivotal.io>
      c038b824
    • docs - restructure the platform requirements extensions table (#10027) · 4353accf
      Committed by Lisa Owen
      * docs - restructure the platform requirements extensions table
      
      * list newer version first
      
      * remove MADlib superscript/note, add to Additional Info
      
      * update plcontainer pkg versions to 2.1.2, R image to 3.6.3
      4353accf
    • Dispatch analyze to segments · 0c27e42a
      Committed by Jinbao Chen
      Currently the system table pg_statistics only has data on the master, but
      the hash join skew optimization needs some information from
      pg_statistics. So we need to dispatch the ANALYZE command to the
      segments, so that pg_statistics also has data on the segments.
      0c27e42a
    • Revert "Fix the bug that GPFDIST_WATCHDOG_TIMER doesn't work." · dc8349d3
      Committed by Paul Guo
      This reverts commit 6e76d8a1.
      
      Saw pipeline failures in the Ubuntu jobs. It does not look like flakiness.
      dc8349d3
    • Setup GPHOME_LOADERS environment and script (#10107) · 9e9917ab
      Committed by Huiliang.liu
      GPHOME_LOADERS and greenplum_loaders_path.sh are still required by some
      users, so export GPHOME_LOADERS as GPHOME_CLIENTS and link
      greenplum_loaders_path.sh to greenplum_clients_path.sh for compatibility.
      9e9917ab
    • Mask out work_mem in auto_explain tests · d9bf9e69
      Committed by Jesse Zhang
      Commit 73ca7e77 (greenplum-db/gpdb#7195) introduced a simple suite
      that exercises auto_explain. However, it went beyond a simple "EXPLAIN",
      and instead it ran a muted version of EXPLAIN ANALYZE.
      
      Comparing the output of "EXPLAIN ANALYZE" in a regress test is always
      error-prone and flaky. To wit, commit 1348afc0 (#7608) had to be
      done shortly after 73ca7e77 because the feature was time
      sensitive. More recently, commit 2e4d99fa (#10032) was forced to
      hard-code the work_mem value, likely from a build compiled with
      --enable-cassert, and missed the case where we were configured without
      asserts, failing the test with a smaller-than-expected work_mem.
      
      This commit puts a band-aid on it by masking the numeric values of
      work_mem in the auto_explain output.
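The masking idea can be sketched as a simple substitution. This is a hedged illustration only: the function name, the exact `work_mem: <number><unit>` pattern, and the `###` placeholder are assumptions, not the actual mechanism the regress suite uses for output matching.

```python
import re

def mask_work_mem(explain_output):
    # Replace the numeric work_mem value (e.g. "work_mem: 4096kB") with a
    # stable placeholder so the test output no longer depends on build flags.
    return re.sub(r"work_mem: \d+\w*", "work_mem: ###", explain_output)

print(mask_work_mem("Settings: work_mem: 4096kB"))
```

With masking applied, builds configured with and without --enable-cassert produce identical expected output.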
      d9bf9e69
  2. 20 May 2020, 6 commits
  3. 19 May 2020, 6 commits
    • Provide a pg_exttable view for extension compatibility · 3aad307c
      Committed by (Jerome)Junfeng Yang
      The pg_exttable catalog was removed because external tables are now
      implemented with FDW. But other extensions may still rely on the
      pg_exttable catalog.
      
      So we create a view based on this UDF to extract the pg_exttable
      catalog info.
      Signed-off-by: Adam Lee <ali@pivotal.io>
      3aad307c
    • 044c4c0d
    • gpcloud: fix compilation issue due to pg_exttable removal · 6ef8d1cc
      Committed by Adam Lee
      6ef8d1cc
    • Remove the pg_exttable catalog · e4b499aa
      Committed by Adam Lee
      Keep the syntax of external tables, but store as foreign tables with
      options into the catalog.
      
      While using it, transform the foreign table options to an ExtTableEntry,
      which is compatible with external_table_shim, PXF and other custom
      protocols.
      
      Also, add `DISTRIBUTED` syntax support for `CREATE FOREIGN TABLE` if
      the foreign server indicates it's an external table.
      
      Note:
      1. All permissions handling from pg_authid is kept as-is, to preserve
      the external tables' GRANT queries; that will be done in another PR if
      possible.
      2. Multiple URI locations are stored in the foreign table options,
      separated by `|`.
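Note 2 can be illustrated with a minimal round-trip sketch. The helper names are hypothetical; the only fact taken from the commit is that multiple locations share one option string with `|` as the separator.

```python
# Hypothetical illustration: several external-table URI locations packed
# into a single foreign-table option value, separated by '|'.
def pack_locations(locations):
    return "|".join(locations)

def unpack_locations(option_value):
    return option_value.split("|")

urls = ["gpfdist://etl1:8081/data1.txt", "gpfdist://etl2:8081/data2.txt"]
packed = pack_locations(urls)
print(packed)
```

This implies `|` cannot appear inside an individual location URI, which is a safe assumption for gpfdist-style URLs.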
      Co-authored-by: Daniel Gustafsson <dgustafsson@pivotal.io>
      e4b499aa
    • Fix bitmap segfault during portal cleanup · ef36338d
      Committed by Denis Smirnov
      Currently, when we finish a query with a bitmap index scan and
      destroy its portal, we release the bitmap resources in the wrong order.
      We should first release the bitmap iterator (a bitmap wrapper)
      and only after that close down the subplans (the bitmap index scan with
      an allocated bitmap). Before this commit these operations were done
      in reverse order, which caused access to a freed bitmap in the
      iterator's closing function.
      
      Under the hood, pfree() is a wrapper around malloc's free(). free()
      doesn't return memory to the OS in most cases, and doesn't even
      immediately corrupt data in a freed chunk, so it is possible to access
      a freed chunk's data right after its deallocation. That is why we get
      the segfault only under concurrent workloads, when malloc's arena
      returns memory to the OS.
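The ordering bug can be sketched abstractly. All names here are hypothetical stand-ins for the C structures involved; the point is only that the iterator borrows the bitmap owned by the subplan, so the wrapper must be closed before its underlying resource is released.

```python
# Minimal sketch (hypothetical names) of the teardown-order fix.
class Bitmap:
    def __init__(self):
        self.freed = False

class BitmapIterator:
    """Wrapper that borrows, but does not own, a Bitmap."""
    def __init__(self, bitmap):
        self.bitmap = bitmap

    def close(self):
        # The iterator touches the bitmap on close; if the subplan already
        # freed it, this models the use-after-free that segfaulted.
        if self.bitmap.freed:
            raise RuntimeError("use-after-free: bitmap already released")

def shutdown_portal(iterator, bitmap):
    iterator.close()     # release the wrapper first (the fixed order) ...
    bitmap.freed = True  # ... then let the subplan free the bitmap

bmp = Bitmap()
shutdown_portal(BitmapIterator(bmp), bmp)
```

In C the reversed order often appears to work because free() rarely hands memory back to the OS immediately, which is exactly why the crash only surfaced under concurrent load.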
      ef36338d
    • Build ORCA with C++14: Take Two (#10068) · 649ee57d
      Committed by Jesse Zhang
      This patch makes the minimal changes to build ORCA with C++14. This
      should address the grievance that ORCA cannot build with the default
      Xerces C++ (3.2 or newer, which is built with GCC 8.3 in the default
      C++14 mode) headers from Debian. I've kept the CMake build system in
      sync with the main Makefile. I've also made sure that all ORCA tests
      pass.
      
      This patch set also enables ORCA in Travis so the community gets
      compilation coverage.
      
      === FIXME / near-term TODOs:
      
      What's _not_ included in this patch, but would be nice to have soon (in
      descending order of importance):
      
      1. -std=gnu++14 ought to be done in "configure", not in a Makefile. This
      is not a pedantic aesthetic issue; sooner or later we'll run into real
      problems, especially if we're mixing multiple things built in C++.
      
      2. Clean up the Makefiles and move most CXXFLAGS overrides into autoconf.
      
      3. Those noexcept(false) seem excessive; we should benefit from
      conditionally marking more code "noexcept", at least in production.
      
      4. Detect whether Xerces was generated (either by autoconf or CMake)
      with a compiler that's effectively running post-C++11.
      
      5. Work around a GCC 9.2 bug that crashes the loading of minidumps (I've
      tested with GCC 6 to 10). Last I checked, the bug has been fixed in GCC
      releases 10.1 and 9.3.
      
      [resolves #9923]
      [resolves #10047]
      Co-authored-by: Melanie Plageman <mplageman@pivotal.io>
      Reviewed-by: Hans Zeller <hzeller@pivotal.io>
      Reviewed-by: Ashuka Xue <axue@pivotal.io>
      Reviewed-by: David Kimura <dkimura@pivotal.io>
      649ee57d
  4. 18 May 2020, 9 commits
  5. 16 May 2020, 4 commits
  6. 15 May 2020, 8 commits
    • Guard against possible memory allocation botch in batchmemtuples(). · 706f7483
      Committed by Tom Lane
      Negative availMemLessRefund would be problematic.  It's not entirely
      clear whether the case can be hit in the code as it stands, but this
      seems like good future-proofing in any case.  While we're at it,
      insist that the value be not merely positive but not tiny, so as to
      avoid doing a lot of repalloc work for little gain.
      
      Peter Geoghegan
      
      Discussion: <CAM3SWZRVkuUB68DbAkgw=532gW0f+fofKueAMsY7hVYi68MuYQ@mail.gmail.com>
      706f7483
    • Change Material back to using upstream tuplestore. · 2e4d99fa
      Committed by Heikki Linnakangas
      Now that ShareInputScan manages its own tuplestore, Material doesn't need
      the extra features that tuplestorenew.c provides.
      Reviewed-by: Georgios Kokolatos <gkokolatos@pivotal.io>
      2e4d99fa
    • Have ShareInputScan manage the tuplestore by itself. · 5bdf2c8b
      Committed by Heikki Linnakangas
      Seems a bit silly to have the Material node involved. Just create and
      manage the tuplestore in ShareInputScan node itself, and leave out the
      Material node.
      Reviewed-by: Georgios Kokolatos <gkokolatos@pivotal.io>
      5bdf2c8b
    • Remove GPDB changes in tuplesort that were used by ShareInputScans. · 0b233b59
      Committed by Heikki Linnakangas
      ShareInputScan no longer tries to share the sort tapes between processes,
      so all this infrastructure to track multiple read positions is no longer
      needed.
      Reviewed-by: Georgios Kokolatos <gkokolatos@pivotal.io>
      0b233b59
    • Remove optimization to share Sort tapes directly in ShareInputScan. · 611fa500
      Committed by Heikki Linnakangas
      Previously, ShareInputScan could co-operate with a Sort node to share
      the final sort tape directly with other processes. Remove that support. If
      a Sort node is shared, we now put a Materialize node on top of the Sort,
      like all other nodes.
      
      That is obviously less performant than sharing the sort tape directly.
      However, I don't believe that is significant in practice. Firstly, if you
      consider how a ShareInputScan is used, having a Sort below the
      ShareInputScan should be rare. A ShareInputScan is used to implement CTEs,
      and in order to have a Sort node just below the ShareInputScan, you need
      to have an ORDER BY in the CTE. For example (from the 'sisc_sort_spill'
      test):
      
          select avg(i3) from (
            with ctesisc as (select * from testsisc order by i2)
            select t1.i3, t2.i2
            from ctesisc as t1, ctesisc as t2
            where t1.i1 = t2.i2
          ) foo;
      
      However, in a query like that, the ORDER BY is actually useless; the
      order is not guaranteed to be preserved. In fact, ORCA optimizes it away
      completely.
      
      Secondly, even if you have a query like that, I don't think optimizing
      away the Material is very significant. If the number of rows is small
      enough to fit in memory, the Sort can be performed in memory, so you're
      still writing it to disk only once, in the Material node. If it's large
      enough to spill, the Material node will shield the Sort node from needing
      to support random access, which enables the "on-the-fly" final merge
      optimization in the tuplesort. So I believe you'll do roughly the same
      amount of I/O in that case, too. One way to think about this is that the
      final merge will be written out to the Material's tuplestore instead of
      the tuplesort's file.
      
      There is one drawback to that: the Material node won't be able to reuse
      the disk space used by the sort tapes, as the final merge is performed,
      so you'll momentarily need twice the disk space. I think that's
      acceptable. If you don't like that, don't put superfluous ORDER BYs in
      your WITH queries.
      Reviewed-by: Georgios Kokolatos <gkokolatos@pivotal.io>
      611fa500
    • Fix the gpload error "no attribute 'staging_table' or 'fast_path'" · 524f3105
      Committed by Wen Lin
      While gpload is loading data, if the configuration file contains
      "error_table" and doesn't contain "preload", an error of no attribute
      "staging_table" or "fast_path" occurs.
      524f3105
    • Docs - remove Beta designation from Greenplum R docs · 7e4bbafb
      Committed by David Yozie
      7e4bbafb
  7. 14 May 2020, 1 commit
    • Remove gpsys1 · eee440e6
      Committed by Tyler Ramer
      I'm not quite sure of the purpose of this utility, nor, apparently, is
      any readme or historical repo.
      
      Apart from a small fix provided in
      commit 71d67305,
      there has been no modification to this file since at least 2008. More
      importantly, I'm not quite sure of any reasonable use for this file. The
      supported platforms are only linux, darwin, or sunos5, and the listed
      use, of printing the memory size in bytes, is trivial on any of those
      systems without resorting to some python script that wraps a command line
      call.
      
      Given that it hasn't been updated since 2008, it still targets some
      ancient version of Python, which means that it's yet another file to
      upgrade to Python 3. In this case, let's drop the program rather than
      bother upgrading it.
      Authored-by: Tyler Ramer <tramer@pivotal.io>
      eee440e6