1. 29 Jul, 2017 — 2 commits
  2. 28 Jul, 2017 — 6 commits
  3. 27 Jul, 2017 — 11 commits
    • K
      Ensure ccp_destroy on debug_sleep · c36f83cb
      Committed by Kris Macoskey
      This allows a user to cancel debug_sleep and be assured that
      ccp_destroy will still clean up any created clusters.
      c36f83cb
    • A
      Use xl_heaptid_set() in heap_update_internal. · f1d1d55b
      Committed by Ashwin Agrawal
      Commit d50f429c added an xlog lock record, but
      missed the Greenplum-specific tuning, which is to add persistent table
      information. This caused a failure during recovery with the FATAL
      message "xlog record with zero persistenTID". Using xl_heaptid_set(),
      which calls `RelationGetPTInfo()`, makes sure PT info is populated for
      the xlog record.
      f1d1d55b
    • P
      Fix flaky 'insufficient memory reserved' issue in pipeline · a7cce539
      Committed by Pengzhou Tang
      The 'insufficient memory reserved' issue has existed for a long time.
      The root cause is that the default statement_mem (125MB) is not enough
      for the queries used by the gpcheckcat script when the regression
      database is huge.
      
      This commit adds STATEMENT_MEM in demo_cluster.sh to initialize gpdb
      with the required statement_mem, and sets statement_mem to 225MB in common.bash
      a7cce539
    • A
      Log gpload threads' termination · 4b71d480
      Committed by Adam Lee
      It's useful and important for debugging.
      4b71d480
    • A
      Fix error in schedule file · 138141f8
      Committed by Asim R P
      138141f8
    • A
      Move dtm test to pg_regress from its own contrib module · c10e75fd
      Committed by Asim R P
      The gp_inject_fault() function is now available in pg_regress, so a contrib
      module is not required.  The test was not being run because it trips an
      assertion, so it is not added to greenplum_schedule.
      c10e75fd
    • A
      Update fsync test to use SQL UDF to inject faults · 9bd14bd3
      Committed by Asim R P
      9bd14bd3
    • A
      Make SQL-based fault injection function available to all tests. · b23680d6
      Committed by Asim R P
      The function gp_inject_fault() was defined in a test specific contrib module
      (src/test/dtm).  It is moved to a dedicated contrib module gp_inject_fault.
      All tests can now make use of it.  Two pg_regress tests (dispatch and cursor)
      are modified to demonstrate the usage.  The function is modified so that it can
      inject fault in any segment, specified by dbid.  No more invoking
      gpfaultinjector python script from SQL files.
      
      The new module is integrated into top level build so that it is included in
      make and make install.
      b23680d6
    • J
      Ensure Execution of Shared Scan Writer On Squelch [#149182449] · 9fbd2da5
      Committed by Jesse Zhang
      SharedInputScan (a.k.a. "Shared Scan" in EXPLAIN) is the operator
      through which Greenplum implements Common Table Expression execution. It
      executes in two modes: writer (a.k.a. producer) and reader (a.k.a.
      consumer). Writers will execute the common table expression definition
      and materialize the output, and readers can read the materialized output
      (potentially in parallel).
      
      Because of the parallel nature of Greenplum execution, slices containing
      Shared Scans need to synchronize among themselves to ensure that readers
      don't start until writers are finished writing. Specifically, a slice
      with readers depending on writers on a different slice will block during
      `ExecutorRun`, before even pulling the first tuple from the executor
      tree.
      
      Greenplum's Hash Join implementation will skip executing its outer
      ("probe side") subtree if it detects an empty inner ("hash side"), and
      declare all motions in the skipped subtree as "stopped" (we call this
      "squelching"). That means we can potentially squelch a subtree that
      contains a shared scan writer, leaving cross-slice readers waiting
      forever.
      
      For example, with ORCA enabled, the following query:
      
      ```SQL
      CREATE TABLE foo (a int, b int);
      CREATE TABLE bar (c int, d int);
      CREATE TABLE jazz(e int, f int);
      
      INSERT INTO bar  VALUES (1, 1), (2, 2), (3, 3);
      INSERT INTO jazz VALUES (2, 2), (3, 3);
      
      ANALYZE foo;
      ANALYZE bar;
      ANALYZE jazz;
      
      SET statement_timeout = '15s';
      
      SELECT * FROM
              (
              WITH cte AS (SELECT * FROM foo)
              SELECT * FROM (SELECT * FROM cte UNION ALL SELECT * FROM cte)
              AS X
              JOIN bar ON b = c
              ) AS XY
              JOIN jazz on c = e AND b = f;
      ```
      leads to a plan that will expose this problem:
      
      ```
                                                       QUERY PLAN
      ------------------------------------------------------------------------------------------------------------
       Gather Motion 3:1  (slice2; segments: 3)  (cost=0.00..2155.00 rows=1 width=24)
         ->  Hash Join  (cost=0.00..2155.00 rows=1 width=24)
               Hash Cond: bar.c = jazz.e AND share0_ref2.b = jazz.f AND share0_ref2.b = jazz.e AND bar.c = jazz.f
               ->  Sequence  (cost=0.00..1724.00 rows=1 width=16)
                     ->  Shared Scan (share slice:id 2:0)  (cost=0.00..431.00 rows=1 width=1)
                           ->  Materialize  (cost=0.00..431.00 rows=1 width=1)
                                 ->  Table Scan on foo  (cost=0.00..431.00 rows=1 width=8)
                     ->  Hash Join  (cost=0.00..1293.00 rows=1 width=16)
                           Hash Cond: share0_ref2.b = bar.c
                           ->  Redistribute Motion 3:3  (slice1; segments: 3)  (cost=0.00..862.00 rows=1 width=8)
                                 Hash Key: share0_ref2.b
                                 ->  Append  (cost=0.00..862.00 rows=1 width=8)
                                       ->  Shared Scan (share slice:id 1:0)  (cost=0.00..431.00 rows=1 width=8)
                                       ->  Shared Scan (share slice:id 1:0)  (cost=0.00..431.00 rows=1 width=8)
                           ->  Hash  (cost=431.00..431.00 rows=1 width=8)
                                 ->  Table Scan on bar  (cost=0.00..431.00 rows=1 width=8)
               ->  Hash  (cost=431.00..431.00 rows=1 width=8)
                     ->  Table Scan on jazz  (cost=0.00..431.00 rows=1 width=8)
                           Filter: e = f
       Optimizer status: PQO version 2.39.1
      (20 rows)
      ```
      where processes executing slice1 on the segments that have an empty
      `jazz` will hang.
      
      We fix this by ensuring we execute the Shared Scan writer even if it is
      in the subtree that we're squelching.
      Signed-off-by: Melanie Plageman <mplageman@pivotal.io>
      Signed-off-by: Sambitesh Dash <sdash@pivotal.io>
      9fbd2da5
    • A
      Fix torn-page, unlogged xid and further risks from heap_update(). · d50f429c
      Committed by Andres Freund
      When heap_update needs to look for a page for the new tuple version,
      because the current one doesn't have sufficient free space, or when
      columns have to be processed by the tuple toaster, it has to release the
      lock on the old page during that. Otherwise there'd be lock ordering and
      lock nesting issues.
      
      To prevent concurrent sessions from trying to update / delete / lock the
      tuple while the page's content lock is released, the tuple's xmax is set
      to the current session's xid.
      
      That unfortunately was done without any WAL logging, thereby violating
      the rule that no XIDs may appear on disk without a corresponding WAL
      record.  If the database were to crash or fail over while the page level
      lock is released, and some activity led to the page being written out
      to disk, the xid could end up being reused; potentially leading to the
      row becoming invisible.
      
      There might be additional risks by not having t_ctid point at the tuple
      itself, without having set the appropriate lock infomask fields.
      
      To fix, compute the appropriate xmax/infomask combination for locking
      the tuple, and perform WAL logging using the existing XLOG_HEAP_LOCK
      record. That allows the fix to be backpatched.
      
      This issue has existed for a long time. There appears to have been
      partial attempts at preventing dangers, but these never have fully been
      implemented, and were removed a long time ago, in
      11919160 (cf. HEAP_XMAX_UNLOGGED).
      
      In master / 9.6, there's an additional issue, namely that the
      visibilitymap's freeze bit isn't reset at that point yet. Since that's a
      new issue, introduced only in a892234f, that'll be fixed in a
      separate commit.
      
      Author: Masahiko Sawada and Andres Freund
      Reported-By: Different aspects by Thomas Munro, Noah Misch, and others
      Discussion: CAEepm=3fWAbWryVW9swHyLTY4sXVf0xbLvXqOwUoDiNCx9mBjQ@mail.gmail.com
      Backpatch: 9.1/all supported versions
      d50f429c
    • K
      Generate a table file during a filtered backup · a6d36d7d
      Committed by Karen Huddleston
      This file contains a list of schema-qualified tablenames in the backup
      set.  It is not used in the restore process; it is there solely to allow
      users to determine which tables were dumped in that backup set.
      Signed-off-by: Jamie McAtamney <jmcatamney@pivotal.io>
      Signed-off-by: Chris Hajas <chajas@pivotal.io>
      a6d36d7d
  4. 26 Jul, 2017 — 1 commit
  5. 25 Jul, 2017 — 6 commits
    • D
      Mark local functions as static where appropriate · 684fe68f
      Committed by Daniel Gustafsson
      Set local functions as static and include a prototype. This fixes a
      multitude of warnings for missing prototypes in clang like this one:
      
      gpcheckcloud.cpp:32:6: warning: no previous prototype for function
                             'registerSignalHandler' [-Wmissing-prototypes]
      void registerSignalHandler() {
      		     ^
      684fe68f
    • D
      Install libevent in Travis CI builds · ac52d9ed
      Committed by Daniel Gustafsson
      Since Travis CI was upgraded to use Ubuntu Trusty as the base
      Linux [0], libevent-dev is no longer available out of the box.
      Install it manually with the Apt addon, since we don't want to
      use sudo due to longer instance bootup time. While at it,
      remove the macOS section since we never actually supported
      Travis for macOS builds (it was a leftover from an attempt).
      
      [0] https://blog.travis-ci.com/2017-07-11-trusty-as-default-linux-is-coming
      ac52d9ed
    • I
      Update branding on sample config · aa4c8075
      Committed by Ivan Novick
      aa4c8075
    • N
      Fix resgroup ICW failures · 4165a543
      Committed by Ning Yu
      * Fix the resgroup assert failure on CREATE INDEX CONCURRENTLY syntax.
      
      When resgroup is enabled, an assertion failure is encountered with the
      case below:
      
          SET gp_create_index_concurrently TO true;
          DROP TABLE IF EXISTS concur_heap;
          CREATE TABLE concur_heap (f1 text, f2 text, dk text) distributed by (dk);
          CREATE INDEX CONCURRENTLY concur_index1 ON concur_heap(f2,f1);
      
      The root cause is that we assumed on the QD that a command is
      dispatched to the QEs whenever it is assigned to a resgroup, but this is
      false for the CREATE INDEX CONCURRENTLY syntax.
      
      To fix it we have to make the necessary checks and cleanup on the QEs.
      
      * Do not assign a resource group in SIGUSR1 handler.
      
      When assigning a resource group on the master it might call WaitLatch()
      to wait for a free slot. However, as WaitLatch() expects to be woken by
      the SIGUSR1 signal, it will run into an endless wait when SIGUSR1 is
      blocked.
      
      One scenario is the catch-up handler. The catch-up handler is triggered
      and executed directly in the SIGUSR1 handler, so during its execution
      SIGUSR1 is blocked. And as the catch-up handler begins a transaction, it
      will try to assign a resource group and trigger the endless wait.
      
      To fix this we add a check to not assign a resource group when running
      inside the SIGUSR1 handler. As signal handlers are supposed to be light,
      short, and safe, skipping resource group assignment in such a case is
      reasonable.
      4165a543
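      The endless-wait mechanism described above can be sketched in Python: a
      pending signal is not delivered while it is blocked (as it is during the
      execution of its own handler), so any wait that relies on that delivery
      never wakes up. This is an illustrative Unix-only sketch, not Greenplum
      code:

```python
import os
import signal
import time

# Record deliveries of SIGUSR1.
fired = []
signal.signal(signal.SIGUSR1, lambda signum, frame: fired.append(signum))

# Simulate "running inside the SIGUSR1 handler": the signal is blocked.
signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGUSR1})

os.kill(os.getpid(), signal.SIGUSR1)  # SIGUSR1 is now pending, not delivered
assert fired == []  # a WaitLatch()-style wait placed here would block forever

# Once the "handler" returns, the mask is restored and the signal arrives.
signal.pthread_sigmask(signal.SIG_UNBLOCK, {signal.SIGUSR1})
deadline = time.time() + 2
while not fired and time.time() < deadline:
    time.sleep(0.01)
assert fired == [signal.SIGUSR1]
```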
    • J
      Best Practices update (#2797) · f7acb99f
      Committed by Jane Beckman
      * Initial updates for Best Practices
      
      * Additional comments from review
      
      * Changes from Craig Sylvester
      
      * David's comments from PR, changes to code output from reviewers
      
      * Review tweaks
      f7acb99f
    • C
      gpperfmon overview improvements (#2785) · a1110fc7
      Committed by Chuck Litzell
      * gpperfmon overview improvements
      
      * Add a link to the log rotation section.
      
      * Edits from review
      a1110fc7
  6. 24 Jul, 2017 — 2 commits
    • X
      Use non-blocking recv() in internal_cancel() · 23e5a5ee
      Committed by xiong-gang
      The issue of hanging on recv() in internal_cancel() has been reported
      several times: the socket status is shown as 'ESTABLISHED' on the master,
      while the peer process on the segment has already exited. We are not
      sure how exactly this happens, but we are able to reproduce the hang
      by dropping packets or rebooting the system on the segment.
      
      This patch uses poll() to do a non-blocking recv() in internal_cancel();
      the timeout of poll() is set to the max value of authentication_timeout
      to make sure the process on the segment has already exited before
      attempting another retry, and we expect a retry on connect() to detect
      the network issue.
      Signed-off-by: Ning Yu <nyu@pivotal.io>
      23e5a5ee
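      The poll-before-recv pattern can be illustrated with a short Python
      sketch (`recv_with_timeout` and the local socket pair are made-up
      illustrations of the idea, not libpq's actual C code):

```python
import select
import socket

def recv_with_timeout(sock, nbytes, timeout_ms):
    """Wait for readability with poll() before calling recv(), so a
    vanished peer results in a timeout instead of an indefinite hang."""
    poller = select.poll()
    poller.register(sock.fileno(), select.POLLIN)
    if not poller.poll(timeout_ms):
        raise TimeoutError("no data within timeout; peer may be gone")
    return sock.recv(nbytes)

# A local socket pair stands in for the master/segment connection.
master_end, segment_end = socket.socketpair()
segment_end.sendall(b"ok")
assert recv_with_timeout(master_end, 2, 1000) == b"ok"
```

With no further data on the socket, a subsequent call raises TimeoutError
instead of blocking, which is the behavior the patch wants from
internal_cancel().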
    • Z
      Detect cgroup mount point at runtime. (#2790) · 1b1b3a11
      Committed by Zhenghua Lyu
      In the past, we used the hard-coded path "/sys/fs/cgroup" as the cgroup
      mount point. This can be wrong when 1) running on old kernels or 2) the
      customer has special cgroup mount points.
      
      Now we detect the mount point at runtime by checking /proc/self/mounts.
      Signed-off-by: Ning Yu <nyu@pivotal.io>
      1b1b3a11
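      The runtime detection amounts to scanning the mounts table for a cgroup
      filesystem entry. A minimal Python sketch of the idea, assuming the
      standard /proc/self/mounts line format (`find_cgroup_mount` is an
      illustrative name, not the actual C implementation):

```python
def find_cgroup_mount(mounts_text, subsystem):
    """Find the mount point of a cgroup v1 subsystem (e.g. 'cpu') in the
    content of /proc/self/mounts, where each line has the form
    '<device> <mountpoint> <fstype> <options> <dump> <pass>'."""
    for line in mounts_text.splitlines():
        parts = line.split()
        if len(parts) < 4:
            continue
        _device, mountpoint, fstype, options = parts[:4]
        if fstype == "cgroup" and subsystem in options.split(","):
            return mountpoint
    return None  # subsystem not mounted

sample = (
    "proc /proc proc rw,nosuid,nodev,noexec 0 0\n"
    "cgroup /custom/cgroup/cpu cgroup rw,nosuid,cpu,cpuacct 0 0\n"
)
assert find_cgroup_mount(sample, "cpu") == "/custom/cgroup/cpu"
assert find_cgroup_mount(sample, "memory") is None
```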
  7. 22 Jul, 2017 — 10 commits
  8. 21 Jul, 2017 — 2 commits
    • J
      Improve partition selection logging (#2796) · 038aa959
      Committed by Jesse Zhang
      Partition Selection is the process of determining at runtime ("execution
      time") which leaf partitions we can skip scanning. Three types of Scan
      operators benefit from partition selection: DynamicTableScan,
      DynamicIndexScan, and BitmapTableScan.
      
      Currently, there is a minimal amount of logging about which partitions
      are selected, but it is scattered between DynamicIndexScan and
      DynamicTableScan (and so we missed BitmapTableScan).
      
      This commit moves the logging into the PartitionSelector operator
      itself, when it exhausts its inputs. This also brings the nice side
      effect of more granular information: the log now attributes the
      partition selection to individual partition selectors.
      038aa959
    • V
      Fix rtable index of FunctionScan when translating GPORCA plan. · 3b24a561
      Committed by Venkatesh Raghavan
      Arguments to the function scan can themselves have a subquery
      that can create new rtable entries. Therefore, first translate all
      arguments of the FunctionScan before setting the scanrelid of the
      FunctionScan.
      3b24a561