1. 03 Jan 2019 (2 commits)
    • docs - discuss the global deadlock detector (#6538) · ea5a0a1c
      Committed by Lisa Owen
      * docs - discuss the global deadlock detector
      
      * some of the edits requested by David
      
      * move opening paragraph to release note
      
      * reorg content, add a bit about local deadlock
      
      * GUC can be reloaded
      
      * concurrent UPDATE and DELETE
    • Revive lost optimization from detoast memtuple refactor · 3b9bdb3b
      Committed by David Kimura
      This change brings back an optimization that reuses the slot's buffer
      when PRIVATE_tts_memtuple is NULL but PRIVATE_tts_mtup_buf is not. Here
      we pass inline_toast=false into memtuple_form_to, so the earlier
      refactor that de-toasts either MemTuple or HeapTuple in SendTuple (from
      commit bf5e3d5d) is preserved.
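
      The gist of the buffer reuse, as a minimal standalone C sketch; the
      struct and function names below are hypothetical stand-ins, not GPDB's
      actual slot layout or memtuple_form_to signature:

          #include <stdint.h>
          #include <stdlib.h>
          #include <string.h>

          /* Hypothetical stand-in for a slot that caches a tuple buffer. */
          typedef struct
          {
              void     *mtup_buf;     /* buffer from a previous tuple, reusable */
              uint32_t  mtup_buf_len;
          } Slot;

          /* Form a tuple into the slot's cached buffer, allocating or
           * growing it only when the cached buffer is missing or too small,
           * instead of allocating a fresh buffer for every tuple. */
          static void *
          form_into_buffer(Slot *slot, const void *data, uint32_t len)
          {
              if (slot->mtup_buf == NULL || slot->mtup_buf_len < len)
              {
                  free(slot->mtup_buf);
                  slot->mtup_buf = malloc(len);
                  slot->mtup_buf_len = len;
              }
              memcpy(slot->mtup_buf, data, len);
              return slot->mtup_buf;
          }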
  2. 31 Dec 2018 (7 commits)
  3. 29 Dec 2018 (2 commits)
  4. 28 Dec 2018 (19 commits)
  5. 27 Dec 2018 (6 commits)
    • Fix the answer file for test rpt · 2351a0e9
      Committed by Pengzhou Tang
      This was changed unexpectedly by b120194a when resolving a conflict
      with master.
    • Do not expose gp_segment_id to users for replicated table (#6342) · b120194a
      Committed by Tang Pengzhou
      * Do not expose system columns to users for replicated tables
      
      For a replicated table, all replicas in the segments should always be
      the same, which makes gp_segment_id ambiguous. Instead of making an
      effort to keep gp_segment_id the same in all segments, I would like to
      just hide gp_segment_id for replicated tables to make things simpler.
      
      This commit only makes gp_segment_id invisible to users at the
      parsetree-transformation stage; in the underlying storage, each segment
      still stores a different value for the system column gp_segment_id.
      Operations like SPLITUPDATE might also use gp_segment_id to do an
      explicit redistribution motion; this is fine as long as the
      user-visible columns have the same data.
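
      As a rough sketch, the parse-stage check can take a shape like the
      following PostgreSQL-style fragment; rel_is_replicated() and the exact
      call site are hypothetical stand-ins, not the commit's actual code:

          /* Hypothetical guard during column resolution: behave as if
           * system columns do not exist on replicated tables, so a
           * reference to gp_segment_id fails to resolve at the parser
           * stage. */
          if (attnum < 0 && rel_is_replicated(rte->relid))
              ereport(ERROR,
                      (errcode(ERRCODE_UNDEFINED_COLUMN),
                       errmsg("column \"%s\" does not exist", colname)));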
      
      * Fixup reshuffle* test cases
      
      The reshuffle.* cases used gp_segment_id to test that a replicated
      table is actually expanded to new segments; now that gp_segment_id is
      invisible to users, they report errors.
      
      Because a replicated table stores its data in triplicate, we can get
      enough info from the total count even without grouping by
      gp_segment_id. It's not 100% accurate but should be good enough.
      
      * Do dependencies check when converting a table to replicated table
      
      System columns of a replicated table should not be exposed to users,
      and we do the check at the parser stage. However, if users create
      views or rules that involve the system columns of a hash-distributed
      table, and that table is later converted to a replicated table, users
      can still access the system columns of the replicated table through
      those views and rules, because they bypass the check in the parser
      stage. To resolve this, we add a dependency check when altering a
      table to a replicated table; users need to drop the views or rules
      first.
      
      I tried to add a recheck in later stages like the planner or executor,
      but it seems that system columns like CTID are used internally for
      basic DMLs like UPDATE/DELETE, and those columns are added even before
      the rewrite of views and rules, so adding a recheck would block basic
      DMLs too.
    • Provide more accurate debugging information for demo create failure. (#6558) · 38e28194
      Committed by Paul Guo
      There is a case where a postmaster process is killed with -9 and thus
      has no chance to remove its temporary files /tmp/.s.PGSQL.${PORT}*,
      which the demo create script uses to check port usability. In that
      case, re-creating the demo cluster fails, but the error message below
      is misleading, since the port is apparently not in use.
      
        Check to see if the port is free by using :  'netstat -an | grep 16432'.
      
      Fix this by providing more information to users.
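
      For illustration only (the script itself relies on netstat), a stale
      socket file can be told apart from a live one by attempting to connect
      to it; a minimal C sketch, with the socket path hard-coded as an
      assumption:

          #include <stdio.h>
          #include <string.h>
          #include <unistd.h>
          #include <sys/socket.h>
          #include <sys/un.h>

          /* Returns 1 if a server is listening on the socket path, 0 if
           * the file is stale, e.g. left behind by a postmaster killed
           * with -9 (the misleading case described above). */
          static int
          socket_is_live(const char *path)
          {
              int                fd = socket(AF_UNIX, SOCK_STREAM, 0);
              struct sockaddr_un addr;
              int                live;

              memset(&addr, 0, sizeof(addr));
              addr.sun_family = AF_UNIX;
              strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
              live = connect(fd, (struct sockaddr *) &addr, sizeof(addr)) == 0;
              close(fd);
              return live;
          }

          int
          main(void)
          {
              printf("%d\n", socket_is_live("/tmp/.s.PGSQL.16432"));
              return 0;
          }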
      
      Reviewed by Ashwin Agrawal
    • Update continuous-integration to gp-continuous-integration · 8003b1d6
      Committed by Karen Huddleston
      The repo name was changed after the GitHub migration. The previous paths
      will not work correctly for new clones of the repository.
      Authored-by: Karen Huddleston <khuddleston@pivotal.io>
    • Ignore LockAcquire() return value in gp_expand_lock_catalog() · fb7c92d0
      Committed by Ning Yu
      Here are the return values of `LockAcquire()`:
      
          * LOCKACQUIRE_NOT_AVAIL       lock not available, and dontWait=true
          * LOCKACQUIRE_OK              lock successfully acquired
          * LOCKACQUIRE_ALREADY_HELD    incremented count for lock already held
      
      In our case `dontWait` is false, so `LOCKACQUIRE_NOT_AVAIL` is never
      returned, and the other two values both indicate that the lock was
      successfully acquired. So simply ignore the return value.
      
      This issue was detected by Coverity; here is the original report:
      
          *** CID 190295:  Error handling issues  (CHECKED_RETURN)
          /tmp/build/0e1b53a0/gpdb_src/src/backend/utils/misc/gpexpand.c: 98 in gp_expand_lock_catalog()
          92      *
          93      * This should only be called by gpexpand.
          94      */
          95     Datum
          96     gp_expand_lock_catalog(PG_FUNCTION_ARGS)
          97     {
          >>>     CID 190295:  Error handling issues  (CHECKED_RETURN)
          >>>     Calling "LockAcquire" without checking return value (as is done elsewhere 14 out of 16 times).
          98      LockAcquire(&gp_expand_locktag, AccessExclusiveLock, false, false);
          99
          100             PG_RETURN_VOID();
          101     }
          102
          103     /*
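
      The fix is simply to discard the result; a (void) cast, as sketched
      below, is the conventional way to make that intent explicit (whether
      the actual commit uses the cast is an assumption here):

          /* dontWait=false: the call either blocks until it succeeds
           * (LOCKACQUIRE_OK) or reports the lock was already held
           * (LOCKACQUIRE_ALREADY_HELD); neither is an error, so the
           * return value can be safely ignored. */
          (void) LockAcquire(&gp_expand_locktag, AccessExclusiveLock, false, false);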
    • Bump ORCA version to 3.18.0 · 1f3cbd73
      Committed by Abhijit Subramanya
  6. 26 Dec 2018 (2 commits)
  7. 25 Dec 2018 (2 commits)
    • Fix expression type in FAST_PATH_SET_HOLD_TILL_END_XACT() · 4ea0e419
      Committed by Ning Yu
      In the macro `FAST_PATH_SET_HOLD_TILL_END_XACT()` we expect an
      expression to produce a uint64 value; however, the actual type is
      decided by the argument `bits`. When `bits` is only 32 bits wide, the
      shift might not produce a correct result.
      
      Fixed by adding the necessary type conversion in the macro.
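
      A minimal standalone C illustration of the problem and the fix; the
      variable names mirror those in the Coverity report below, but the
      actual macro lives in lock.c:

          #include <stdint.h>
          #include <stdio.h>

          int
          main(void)
          {
              uint32_t lockbits = 5;   /* three lock-mode bits per slot */
              int      f = 14;         /* fast-path slot index */

              /* Broken: lockbits is 32 bits wide, so the shift is
               * evaluated in 32-bit arithmetic; shifting by
               * 3 * 14 = 42 >= 32 bits is undefined behavior in C:
               *
               *     uint64_t bad = (lockbits & 7U) << (3U * f);
               */

              /* Fixed: widen the operand to 64 bits before shifting,
               * which is what the added conversion in the macro does. */
              uint64_t good = ((uint64_t) (lockbits & 7U)) << (3U * f);

              printf("%llu\n", (unsigned long long) good);
              return 0;
          }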
      
      This issue was detected by Coverity; here is the original report:
      
          *** CID 190294:  Integer handling issues  (BAD_SHIFT)
          /tmp/build/0e1b53a0/gpdb_src/src/backend/storage/lmgr/lock.c: 4557 in setFPHoldTillEndXact()
          4551
          4552                    if (proc->fpRelId[f] != relid ||
          4553                            (lockbits = FAST_PATH_GET_BITS(proc, f)) == 0)
          4554                            continue;
          4555
          4556                    /* one relid only occupies one slot. */
          >>>     CID 190294:  Integer handling issues  (BAD_SHIFT)
          >>>     In expression "(lockbits & 7U) << 3U * f", left shifting by more than 31 bits has undefined behavior.  The shift amount, "3U * f", is as much as 45.
          4557                    FAST_PATH_SET_HOLD_TILL_END_XACT(proc, f, lockbits);
          4558                    result = true;
          4559                    break;
          4560            }
          4561
          4562            LWLockRelease(proc->backendLock);
          4563
          4564            return result;
    • Make SendTuple do de-toasting for heap as well as memtuples (#6527) · bf5e3d5d
      Committed by David Kimura
      * Make SendTuple do de-toasting for heap as well as memtuples
      
      The Motion node didn't handle consistently where MemTuples and
      HeapTuples are de-toasted. This change delays de-toasting of MemTuples
      in the motion sender until SerializeTupleDirect(), which is where
      HeapTuples are de-toasted.
      
      This change also corrects a bug in de-toasting of memtuples in the
      motion sender. De-toasting of memtuples marked the slot containing the
      de-toasted memtuple as virtual. When a TupleTableSlot is marked
      virtual, its values array should point to addresses within the
      memtuple. The bug was that the values array was populated *before*
      de-toasting, and the memory used by the toasted memtuple was freed.
      Subsequent usages of this slot suffered from dereferencing invalid
      memory through the pointers in the values array.
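
      The faulty pattern, reduced to a standalone C sketch; the Slot struct
      here is a hypothetical miniature of a TupleTableSlot, not the actual
      executor code:

          #include <stdio.h>
          #include <stdlib.h>
          #include <string.h>

          /* Hypothetical miniature of a slot: values[] point into tuple. */
          typedef struct
          {
              char *tuple;       /* backing memory for the (mem)tuple */
              char *values[1];   /* attribute pointers into the buffer */
          } Slot;

          int
          main(void)
          {
              Slot  slot;
              char *detoasted;

              slot.tuple = strdup("toasted");
              slot.values[0] = slot.tuple;   /* populated BEFORE de-toasting */

              /* The buggy sequence: de-toast into a new buffer and free
               * the old one, leaving values[0] dangling at freed memory. */
              detoasted = strdup("detoasted");
              free(slot.tuple);              /* slot.values[0] now dangles */
              slot.tuple = detoasted;

              /* Reading slot.values[0] here would be a use-after-free; the
               * commit avoids this by deferring de-toasting to the point
               * of serialization instead of rewriting the slot. */
              slot.values[0] = slot.tuple;   /* repair for this toy example */
              printf("%s\n", slot.values[0]);
              free(slot.tuple);
              return 0;
          }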
      
      This change corrects the issue by simplifying ExecFetchSlotMemTuple()
      so that it no longer manages the memory required to de-toast, which
      also aligns the code structure more closely with
      ExecFetchSlotMinimalTuple() in Postgres. An alternative solution we
      considered was to continue de-toasting in ExecFetchSlotMemTuple() and
      clear the virtual bit before freeing the tuple. A disadvantage of that
      approach is that it modifies an existing slot and complicates the
      contract between memtuple and virtual table slots. Also, it doesn't
      match where HeapTuples are de-toasted, so the contract becomes less
      clear.
      
      The bug is hard to discover because the motion node in the executor is
      the only place where a memtuple is de-toasted, just before sending it
      over the interconnect. Only a few plan nodes make use of a tuple that
      has already been returned to the parent node, e.g. Unique (DISTINCT
      ON()) and SetOp (INTERSECT / EXCEPT). Example plan that manifests the
      bug:
      
       Gather Motion 3:1
         Merge Key: a, b
         ->  Unique
               Group By: a, b
               ->  Sort
                     Sort Key (Distinct): a, b
                     ->  Seq Scan
      
      The Unique node takes the previous tuple, de-toasted by the Gather
      Motion sender, and compares it with the new tuple obtained from the
      Sort node to eliminate duplicates.
      Co-authored-by: Asim R P <apraveen@pivotal.io>