1. 26 May 2020 (1 commit)
  2. 25 May 2020 (4 commits)
    • Add comment on test. · d3809fcc
      Committed by Heikki Linnakangas
      The commit message of commit a7412321 explained the reason well, but
      let's also add it as a comment in the test itself.
    • Always use <, <=, >, >= for the comparison of gxid · 7de102cd
      Committed by 盏一
      The distributed xid is just a plain counter; there is no wraparound in
      it, so the ordinary comparison operators are sufficient (see the sketch
      below).
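      For context, local PostgreSQL xids do wrap around and are therefore
      compared with modular arithmetic, while a plain counter needs no such
      treatment. A minimal sketch of the contrast in C; the helper names and
      the DistributedXid typedef are illustrative, not the actual Greenplum
      definitions:

        #include <stdbool.h>
        #include <stdint.h>

        typedef uint32_t TransactionId;   /* local xid: wraps around at 2^32 */
        typedef uint64_t DistributedXid;  /* distributed xid: plain counter (illustrative) */

        /* Wraparound-aware comparison, in the style of TransactionIdPrecedes():
         * the signed difference handles the wrap of local xids. */
        static bool
        local_xid_precedes(TransactionId a, TransactionId b)
        {
            return (int32_t) (a - b) < 0;
        }

        /* A distributed xid never wraps, so the plain operator is correct. */
        static bool
        distributed_xid_precedes(DistributedXid a, DistributedXid b)
        {
            return a < b;
        }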
    • Fix the \dm empty output error (#10163) · 999f187e
      Committed by Jinbao Chen
      The psql client ignored relstorage when it built the query for the \dm
      command, so the output of \dm was empty. Add the correct relstorage
      check to the command.
    • Fix a hang caused by gp_interconnect_id disorder · 644bde25
      Committed by Pengzhou Tang
      This issue was exposed by an experiment that removed the special
      "eval_stable_functions" handling in evaluate_function(): the
      qp_functions_in_* test cases sometimes got stuck, and the cause turned
      out to be a gp_interconnect_id disorder.

      Under the UDPIFC interconnect, gp_interconnect_id is used to
      distinguish the executions of MPP-fied plans within the same session.
      On the receiver side, packets with a smaller gp_interconnect_id are
      treated as 'past' packets, and the receiver tells the senders to stop
      sending them.
      
      The root cause of the hang:
      1. The QD calls InitSliceTable() to advance the gp_interconnect_id and
      stores it in the slice table.
      2. In CdbDispatchPlan->exec_make_plan_constant(), the QD finds a stable
      function that needs to be simplified to a const, so it executes that
      function first.
      3. The function contains SQL, so the QD initializes another slice table
      and advances the gp_interconnect_id again, then dispatches and executes
      the new plan.
      4. After the function is simplified to a const, the QD continues to
      dispatch the previous plan, but its gp_interconnect_id is now the older
      one. When a packet arrives before the receiver has set up the
      interconnect, the packet is handled by handleMismatch(); it is treated
      as a 'past' packet and the senders are stopped prematurely by the
      receiver. When the receiver later finishes setting up the interconnect,
      it cannot get any packets from the senders and gets stuck.
      
      To resolve this, advance the gp_interconnect_id only when a plan is
      actually dispatched. Plans are dispatched sequentially, so a
      later-dispatched plan gets a higher gp_interconnect_id (see the sketch
      below).

      Also limit the usage of gp_interconnect_id in the rx thread of UDPIFC;
      we prefer to use sliceTable->ic_instance_id in the main thread.
      Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
      Reviewed-by: Asim R P <apraveen@pivotal.io>
      Reviewed-by: Hubert Zhang <hzhang@pivotal.io>
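      A minimal sketch of the ordering idea behind the fix, with hypothetical
      types and names rather than the actual Greenplum dispatcher code: the
      per-session interconnect id is assigned at dispatch time, so a nested
      dispatch triggered while constant-folding a stable function cannot
      leave the outer plan holding a stale id.

        #include <stdint.h>

        /* Hypothetical per-session state; the real counter lives in the
         * Greenplum session/interconnect layer. */
        typedef struct SessionState
        {
            uint32_t ic_instance_counter;  /* monotonically increasing per session */
        } SessionState;

        typedef struct SliceTable
        {
            uint32_t ic_instance_id;       /* id receivers use to match packets */
        } SliceTable;

        /* Conceptually, before the fix the id was assigned when the slice
         * table was built, so a nested dispatch during planning could grab a
         * newer id than the outer plan dispatched afterwards.  After the fix
         * the id is assigned here, at dispatch time; dispatches are
         * sequential, so a later dispatch always gets the larger id. */
        static void
        dispatch_plan(SessionState *session, SliceTable *st)
        {
            st->ic_instance_id = ++session->ic_instance_counter;
            /* ... send the plan and slice table to the segments ... */
        }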
  3. 22 May 2020 (7 commits)
    • Use foreign data wrapper routines to replace external insert in COPY FROM. (#10169) · 23f8671a
      Committed by (Jerome)Junfeng Yang
      Enable COPY FROM for foreign tables to remove the external table
      dependency in copy.c.
      This commit backports a small part of commit 3d956d95 from Postgres.

      Remove the fileam.h include from non-external code, so we can extract
      the external table into an extension later.
      Move the function external_set_env_vars to the URL component, since
      extvar_t is defined in url.h.
      Implement the external table FDW's BeginForeignInsert and
      EndForeignInsert, so COPY FROM goes through the FDW routine instead of
      the external insert (see the sketch below).
      Reviewed-by: Heikki Linnakangas <heikki.linnakangas@iki.fi>
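      For orientation, a foreign data wrapper exposes its hooks through an
      FdwRoutine returned by a handler function. A minimal sketch of wiring
      up the insert-related callbacks; the handler and callback names are
      illustrative, not the actual gpdb external-table FDW code, and only the
      FdwRoutine fields come from the PostgreSQL FDW API:

        #include "postgres.h"
        #include "fmgr.h"
        #include "foreign/fdwapi.h"
        #include "nodes/execnodes.h"

        /* Illustrative callbacks: the real external-table FDW would open and
         * close the external destination here, so COPY FROM can reuse the
         * same path as INSERT into a foreign table. */
        static void
        ext_begin_foreign_insert(ModifyTableState *mtstate, ResultRelInfo *rinfo)
        {
            /* set up per-relation insert state, e.g. open the URL/protocol */
        }

        static void
        ext_end_foreign_insert(EState *estate, ResultRelInfo *rinfo)
        {
            /* tear down the insert state opened above */
        }

        PG_FUNCTION_INFO_V1(ext_fdw_handler);

        Datum
        ext_fdw_handler(PG_FUNCTION_ARGS)
        {
            FdwRoutine *routine = makeNode(FdwRoutine);

            routine->BeginForeignInsert = ext_begin_foreign_insert;
            routine->EndForeignInsert = ext_end_foreign_insert;
            /* ... scan- and modify-related callbacks omitted ... */

            PG_RETURN_POINTER(routine);
        }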
    • Fix pkill path in sreh test (#10171) · 36b1c7d0
      Committed by Huiliang.liu
      pkill is not in the /bin/ folder on Ubuntu, so gpfdist couldn't be
      killed in the sreh test, which made the gpfdist regression test fail.
    • Fix bug when UNION ALL combines replicated tables with different numsegments (#10135) · 49749292
      Committed by Hao Wu
      `select c.c1, c.c2 from d1 c union all select a.c1, a.c2 from d2 a;`
      Both d1 and d2 are replicated tables, but their `numsegments` values in
      gp_distribution_policy differ. This can happen during gpexpand. The bug
      is in the function cdbpath_create_motion_path: both `subpath->locus` and
      `locus` are SegmentGeneral, but the locuses are not equal (see the
      sketch below).
      Co-authored-by: Pengzhou Tang <ptang@pivotal.io>
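      A rough sketch of the equality check involved, with a simplified
      stand-in for Greenplum's CdbPathLocus (illustrative only): two
      SegmentGeneral locuses are interchangeable only when their numsegments
      match, so the motion-path code must not treat them as equal merely
      because the locus type matches.

        #include <stdbool.h>

        /* Simplified stand-in for CdbPathLocus (illustrative only). */
        typedef enum { LOCUS_SEGMENT_GENERAL, LOCUS_OTHER } LocusType;

        typedef struct Locus
        {
            LocusType type;
            int       numsegments;  /* how many segments the data lives on */
        } Locus;

        /* Two replicated (SegmentGeneral) locuses are equivalent only if they
         * span the same number of segments; during gpexpand one table may be
         * replicated on fewer segments than the other, so a motion is still
         * required. */
        static bool
        locus_equal(Locus a, Locus b)
        {
            return a.type == b.type && a.numsegments == b.numsegments;
        }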
    • Monitor dispatcher connections when receiving from the TCP interconnect · c1d45e9e
      Committed by Pengzhou Tang
      This mainly resolves slow responses to sequence requests under the TCP
      interconnect. Sequence requests are sent over libpq connections from
      the QEs to the QD (we call them dispatcher connections). Previously,
      under the TCP interconnect, the QD checked for events on the dispatcher
      connections only every 2 seconds, which is obviously inefficient.

      Under UDPIFC mode, the QD already monitors the dispatcher connections
      while receiving tuples from the QEs, so it can process sequence
      requests promptly; this commit applies the same logic to the TCP
      interconnect (see the sketch below).
      Reviewed-by: Hao Wu <gfphoenix78@gmail.com>
      Reviewed-by: Ning Yu <nyu@pivotal.io>
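      Conceptually, the receive loop waits on the interconnect socket and the
      dispatcher sockets at the same time instead of waking up on a periodic
      timeout. A minimal poll(2)-based sketch with hypothetical descriptors;
      the real code works on Greenplum's dispatcher and interconnect
      structures, not raw fds:

        #include <poll.h>

        /* Wait on the interconnect socket and the dispatcher (libpq) sockets
         * together, so a sequence request arriving on a dispatcher
         * connection is noticed immediately rather than on the next
         * 2-second timeout. */
        static void
        wait_for_traffic(int ic_sock, const int *disp_socks, int ndisp)
        {
            struct pollfd fds[1 + 64];
            int nfds = 0;

            fds[nfds].fd = ic_sock;
            fds[nfds].events = POLLIN;
            nfds++;

            for (int i = 0; i < ndisp && nfds < 1 + 64; i++)
            {
                fds[nfds].fd = disp_socks[i];
                fds[nfds].events = POLLIN;
                nfds++;
            }

            /* Block until either tuples or a dispatcher message is readable;
             * the caller then drains the interconnect and services any
             * pending dispatcher messages (e.g. sequence requests). */
            (void) poll(fds, nfds, -1);
        }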
    • Fix flaky test: refresh matview should use 2PC commit. (#10028) · 4b504304
      Committed by Hubert Zhang
      * Fix flaky test: refresh matview should use 2PC commit.

      We have a refresh-matview-with-unique-index-check test case:
      CREATE TABLE mvtest_foo(a, b) AS VALUES(1, 10);
      CREATE MATERIALIZED VIEW mvtest_mv AS SELECT * FROM mvtest_foo distributed by(a);
      CREATE UNIQUE INDEX ON mvtest_mv(a);
      INSERT INTO mvtest_foo SELECT * FROM mvtest_foo;
      REFRESH MATERIALIZED VIEW mvtest_mv;

      Only one segment contains tuples and will fail the unique index check.
      Without 2PC, the other segments just commit their transactions
      successfully. Since one segment errors out, the QD sends a cancel
      signal to all segments, and if those segments have not finished the
      commit process, they report the warning:
      DETAIL: The transaction has already changed locally,
      it has to be replicated to standby.
      Reviewed-by: Paul Guo <paulguo@gmail.com>
    • Refactor the resource management in INITPLAN functions · f669acf7
      Committed by Hubert Zhang
      We introduced functions that run on an INITPLAN in commit a21ff2.
      INITPLAN functions are designed to support "CTAS select * from udf();".
      Since udf() runs on the entryDB, and the entryDB is always a read gang
      that cannot do dispatch work, the query would fail if the function
      contains a DDL statement, etc.

      The idea of an INITPLAN function is to run the function on an INITPLAN,
      which is in fact the QD, and store the result in a tuplestore. Later
      the FunctionScan on the entryDB just reads tuples from the tuplestore
      instead of running the real function.

      But the life-cycle management is a little tricky. In the original
      commit, we hacked the INITPLAN to close the tuplestore without deleting
      the file, and let the entryDB reader delete the file after finishing
      the tuple fetch. This leaks the file if the transaction aborts before
      the entryDB runs.

      This commit adds a postprocess_initplans() call in ExecutorEnd() of the
      main plan to clean up the tuplestore created by preprocess_initplans()
      in ExecutorStart() of the main plan. Note that postprocess_initplans()
      must be placed after the dispatch work has finished, i.e. after
      mppExecutorFinishup() (see the sketch below). Upstream does not need
      this function, since it always uses a scalar PARAM to communicate
      between the INITPLAN and the main plan.
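      A minimal sketch of the intended shutdown ordering; the function names
      preprocess_initplans, postprocess_initplans and mppExecutorFinishup
      come from the commit message, everything else is simplified and
      illustrative:

        #include "postgres.h"
        #include "utils/tuplestore.h"

        /* Illustrative stand-in for the cleanup step placed at the end of
         * ExecutorEnd() of the main plan. */
        static void
        cleanup_initplan_result(Tuplestorestate *initplan_result_store)
        {
            /*
             * Safe only after mppExecutorFinishup(): the dispatched slices,
             * including the entryDB FunctionScan that reads this tuplestore,
             * have finished.  Closing the tuplestore removes its temp file;
             * doing it earlier could race with the entryDB reader, while the
             * old approach (leaving deletion to the reader) leaked the file
             * whenever the transaction aborted before the entryDB ran.
             */
            tuplestore_end(initplan_result_store);
        }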
    • Let configure set C++14 mode (#10147) · b371e592
      Committed by Chris Hajas
      Commit 649ee57d "Build ORCA with C++14: Take Two (#10068)" left
      behind a major FIXME: a hard-coded CXXFLAGS in gporca.mk. At the very
      least this looks completely out of place aesthetically. But more
      importantly, it is problematic in several ways:

      1. It leaves the language mode for part of the code base
      (src/backend/gpopt, the "ORCA translator") unspecified. The ORCA
      translator collaborates closely with ORCA and directly uses much of
      ORCA's interfaces. There is a non-hypothetical risk of non-subtle
      incompatibilities. This is obscured by the fact that GCC and upstream
      Clang both default to gnu++14 after their respective 6.0 releases.
      Apple Clang from Xcode 11, however, reacts much as if the default were
      still gnu++98:
      
      > In file included from CConfigParamMapping.cpp:20:
      > In file included from ../../../../src/include/gpopt/config/CConfigParamMapping.h:19:
      > In file included from ../../../../src/backend/gporca/libgpos/include/gpos/common/CBitSet.h:15:
      > In file included from ../../../../src/backend/gporca/libgpos/include/gpos/common/CDynamicPtrArray.h:15:
      > ../../../../src/backend/gporca/libgpos/include/gpos/common/CRefCount.h:68:24: error: expected ';' at end of declaration list
      >                         virtual ~CRefCount() noexcept(false)
      >                                             ^
      >                                             ;
      
      2. It potentially conflicts with other parts of the code base. Namely,
      when configured with gpcloud, we might have -std=c++11 and -std=gnu++14
      in the same breath, which is highly undesirable or an outright error.

      3. Idiomatically, language-standard selection should modify CXX, not
      CXXFLAGS, in the same vein as AC_PROG_CC_C99 modifies CC.

      We already had a precedent for setting the compiler up in C++11 mode
      (albeit for a less-used component, gpcloud). This patch leverages the
      same mechanism to set up CXX in C++14 mode.
      Authored-by: Chris Hajas <chajas@pivotal.io>
  4. 21 May 2020 (6 commits)
    • docs - document the new plcontainer --use_local_copy option (#10021) · c038b824
      Committed by Lisa Owen
      * docs - document the new plcontainer --use_local_copy option

      * add more words around when to use the option
      Co-authored-by: David Yozie <dyozie@pivotal.io>
    • docs - restructure the platform requirements extensions table (#10027) · 4353accf
      Committed by Lisa Owen
      * docs - restructure the platform requirements extensions table

      * list newer versions first

      * remove the MADlib superscript/note, add it to Additional Info

      * update plcontainer pkg versions to 2.1.2, R image to 3.6.3
    • Dispatch ANALYZE to segments · 0c27e42a
      Committed by Jinbao Chen
      Currently the system table pg_statistics only has data on the master,
      but hash join skew handling needs some of the information in
      pg_statistics. So we need to dispatch the ANALYZE command to the
      segments, so that the system table pg_statistics can also have data on
      the segments.
    • Revert "Fix the bug that GPFDIST_WATCHDOG_TIMER doesn't work." · dc8349d3
      Committed by Paul Guo
      This reverts commit 6e76d8a1.

      Saw pipeline failures in the Ubuntu jobs; they do not look like
      flakiness.
    • Set up GPHOME_LOADERS environment and script (#10107) · 9e9917ab
      Committed by Huiliang.liu
      GPHOME_LOADERS and greenplum_loaders_path.sh are still required by some
      users, so export GPHOME_LOADERS as GPHOME_CLIENTS and link
      greenplum_loaders_path.sh to greenplum_clients_path.sh for
      compatibility.
    • Mask out work_mem in auto_explain tests · d9bf9e69
      Committed by Jesse Zhang
      Commit 73ca7e77 (greenplum-db/gpdb#7195) introduced a simple suite that
      exercises auto_explain. However, it went beyond a simple "EXPLAIN" and
      instead ran a muted version of EXPLAIN ANALYZE.

      Comparing the output of "EXPLAIN ANALYZE" in a regress test is always
      error-prone and flaky. To wit, commit 1348afc0 (#7608) had to be done
      shortly after 73ca7e77 because the feature was time sensitive. More
      recently, commit 2e4d99fa (#10032) was forced to hard-code the expected
      work_mem, likely from a build compiled with --enable-cassert, and
      missed the case where we are configured without asserts, failing the
      test with a smaller-than-expected work_mem.

      This commit puts a band-aid on it by masking the numeric values of
      work_mem in the auto_explain output.
  5. 20 May 2020 (6 commits)
  6. 19 May 2020 (6 commits)
    • Provide a pg_exttable view for extension compatibility · 3aad307c
      Committed by (Jerome)Junfeng Yang
      The pg_exttable catalog was removed because we now use FDW to implement
      external tables, but other extensions may still rely on the pg_exttable
      catalog.

      So we create a view based on this UDF to extract the pg_exttable
      catalog info.
      Signed-off-by: Adam Lee <ali@pivotal.io>
    • 044c4c0d
    • gpcloud: fix compilation issue due to pg_exttable removal · 6ef8d1cc
      Committed by Adam Lee
    • Remove the pg_exttable catalog · e4b499aa
      Committed by Adam Lee
      Keep the syntax of external tables, but store them as foreign tables
      with options in the catalog.

      When using such a table, transform the foreign table options into an
      ExtTableEntry, which is compatible with external_table_shim, PXF and
      other custom protocols.

      Also, add `DISTRIBUTED` syntax support for `CREATE FOREIGN TABLE` if
      the foreign server indicates it's an external table.

      Note:
      1. All permission handling from pg_authid is kept as-is, to keep the
      external tables' GRANT queries working; that will be revisited in
      another PR if possible.
      2. Multiple URI locations are stored in the foreign table options,
      separated by `|`.
      Co-authored-by: Daniel Gustafsson <dgustafsson@pivotal.io>
    • Fix bitmap segfault during portal cleanup · ef36338d
      Committed by Denis Smirnov
      Currently, when we finish a query with a bitmap index scan and destroy
      its portal, we release the bitmap resources in the wrong order. First
      we should release the bitmap iterator (a bitmap wrapper), and only
      after that close down the subplans (the bitmap index scan with its
      allocated bitmap). Before this commit these operations were done in
      the reverse order, which causes access to a freed bitmap in the
      iterator's close function (see the sketch below).

      Under the hood, pfree() is a wrapper around malloc's free(). In most
      cases free() doesn't return memory to the OS, and it doesn't
      immediately corrupt the data in a freed chunk, so it is often possible
      to access a freed chunk right after its deallocation. That is why we
      only get a segfault under concurrent workloads, when malloc's arena
      does return memory to the OS.
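      A minimal sketch of the teardown-order point using the PostgreSQL
      tidbitmap API; the surrounding executor shutdown is simplified and
      illustrative, not the literal Greenplum portal-cleanup path:

        #include "postgres.h"
        #include "nodes/tidbitmap.h"

        /* Illustrative teardown of a bitmap scan's resources. */
        static void
        shutdown_bitmap_scan(TBMIterator *iterator, TIDBitmap *bitmap)
        {
            /* 1. End the iterator first: it wraps the bitmap and may touch
             *    it while shutting down. */
            if (iterator != NULL)
                tbm_end_iterate(iterator);

            /* 2. Only then free the bitmap itself (in the real code this
             *    happens when the subplan that allocated it is closed).
             *    In the opposite order the iterator would be shut down
             *    against already-freed memory, which crashes once malloc
             *    actually reuses or unmaps that chunk. */
            if (bitmap != NULL)
                tbm_free(bitmap);
        }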
    • Build ORCA with C++14: Take Two (#10068) · 649ee57d
      Committed by Jesse Zhang
      This patch makes the minimal changes needed to build ORCA with C++14.
      It should address the grievance that ORCA cannot be built against the
      default Xerces-C++ headers from Debian (3.2 or newer, built with GCC
      8.3 in its default C++14 mode). I've kept the CMake build system in
      sync with the main Makefile. I've also made sure that all ORCA tests
      pass.

      This patch set also enables ORCA in Travis so the community gets
      compilation coverage.

      === FIXME / near-term TODOs:

      What's _not_ included in this patch, but would be nice to have soon (in
      descending order of importance):

      1. -std=gnu++14 ought to be set in "configure", not in a Makefile. This
      is not a pedantic aesthetic issue; sooner or later we'll run into
      problems, especially if we're mixing multiple things built in C++.

      2. Clean up the Makefiles and move most CXXFLAGS overrides into
      autoconf.

      3. Those noexcept(false) annotations seem excessive; we should benefit
      from conditionally marking more code "noexcept", at least in
      production.

      4. Detect whether Xerces was generated (either by autoconf or CMake)
      with a compiler that's effectively running post-C++11.

      5. Work around a GCC 9.2 bug that crashes the loading of minidumps
      (I've tested with GCC 6 to 10). Last I checked, the bug has been fixed
      in GCC releases 10.1 and 9.3.

      [resolves #9923]
      [resolves #10047]
      Co-authored-by: Melanie Plageman <mplageman@pivotal.io>
      Reviewed-by: Hans Zeller <hzeller@pivotal.io>
      Reviewed-by: Ashuka Xue <axue@pivotal.io>
      Reviewed-by: David Kimura <dkimura@pivotal.io>
  7. 18 May 2020 (9 commits)
  8. 16 May 2020 (1 commit)