1. 29 Jan 2019: 5 commits
    • pg_rewind: fix -R and -S options · 3ed05465
      Committed by Jacob Champion
      A character transposition in the getopt_long() option string meant that
      the argument specifier intended for -S was being applied to -R:
      
          pg_rewind: option requires an argument -- R
      
      Fix that.
      3ed05465
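      A minimal sketch of the bug pattern, assuming hypothetical option names
      (the long-option table below is illustrative, not pg_rewind's actual
      one). In a getopt_long() optstring, a colon marks the preceding letter
      as taking an argument, so a transposed colon turns -R into an
      argument-taking option and strips the argument from -S:

          #include <getopt.h>
          #include <stdio.h>

          int
          main(int argc, char **argv)
          {
              /* Hypothetical long options, for illustration only. */
              static struct option long_options[] = {
                  {"write-recovery-conf", no_argument, NULL, 'R'},
                  {"slot", required_argument, NULL, 'S'},
                  {NULL, 0, NULL, 0}
              };
              int c;

              /* Buggy optstring: "R:S" attaches the colon to the wrong letter,
               * so -R demands an argument and -S accepts none.
               * Fixed optstring: "RS:". */
              while ((c = getopt_long(argc, argv, "RS:", long_options, NULL)) != -1)
              {
                  switch (c)
                  {
                      case 'R':   /* plain flag, no argument */
                          puts("-R set");
                          break;
                      case 'S':   /* consumes optarg */
                          printf("-S = %s\n", optarg);
                          break;
                  }
              }
              return 0;
          }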
    • Add notes to qualify lack of Large Object support. (#6798) · 89f53441
      Committed by David Yozie
      * Add notes to qualify lack of large object support.
      
      * Replace the large object nonsupport note with a more general description and a link to the PostgreSQL docs
      89f53441
    • Docs: update pg_class relkind entries (#6799) · 0c24af63
      Committed by David Yozie
      * Update pg_class relkind entries
      
      * Remove duplicate entry for composite type
      
      * Add info for missing columns: reloftype, relallvisible, relpersistence, relhastriggers
      0c24af63
    • Remove FIXME, there's nothing to do here. · d18b0e4f
      Committed by Heikki Linnakangas
      The point of this FIXME was that the code before the 9.2 merge was
      possibly broken, because it was missing this code to get the input slot.
      I think it was missing before the 9.2 merge because of a bungled merge
      of commit 7fc0f062 during the 9.0 merge, but now the code in GPDB
      master is identical to upstream, and there's nothing to do. Also,
      comparing the 8.2 and 5X_STABLE code, it looks correct in 5X_STABLE as
      well, so there's nothing to do there either.
      d18b0e4f
    • Use single-byte Boyer-Moore-Horspool search even with multibyte encodings. · 6ffa7140
      Committed by Heikki Linnakangas
      This is a backport of upstream commit 9556aa01 and Tom Lane's follow-up
      commit 6119060d. Cherry-picked now, to avoid the 256 MB limit on
      strings. We used to have an old workaround for that issue in GPDB, but lost
      it as part of the 9.1 merge.
      
      Upstream commit:
      
      commit 9556aa01
      Author: Heikki Linnakangas <heikki.linnakangas@iki.fi>
      Date:   Fri Jan 25 16:25:05 2019 +0200
      
          Use single-byte Boyer-Moore-Horspool search even with multibyte encodings.
      
          The old implementation first converted the input strings to arrays of
          wchars, and performed the search on those. However, the conversion is
          expensive, and for a large input string, consumes a lot of memory.
          Allocating the large arrays also meant that these functions could not be
          used on strings larger than 1 GB / pg_encoding_max_length() (256 MB for UTF-8).
      
          Avoid the conversion, and instead use the single-byte algorithm even with
          multibyte encodings. That can get fooled, if there is a matching byte
          sequence in the middle of a multi-byte character, so to eliminate false
          positives like that, we verify any matches by walking the string character
          by character with pg_mblen(). Also, if the caller needs the position of
          the match, as a character-offset, we also need to walk the string to count
          the characters.
      
          Performance testing shows that walking the whole string with pg_mblen() is
          somewhat slower than converting the whole string to wchars. It's still
          often a win, though, because we don't need to do it if there is no match,
          and even when there is, we only need to walk up to the point where the
          match is, not the whole string. Even in the worst case, there would be
          room for optimization: Much of the CPU time in the current loop with
          pg_mblen() is function call overhead, and could be improved by inlining
          pg_mblen() and/or the encoding-specific mblen() functions. But I didn't
          attempt to do that as part of this patch.
      
          Most of the callers of text_position_setup/next functions were actually
          not interested in the position of the match, counted in characters. To
          cater for them, refactor the text_position_next() interface into two
          parts: searching for the next match (text_position_next()), and returning
          the current match's position as a pointer (text_position_get_match_ptr())
          or as a character offset (text_position_get_match_pos()). Getting the
          pointer to the match is a more convenient API for many callers, and with
          UTF-8, it allows skipping the character-walking step altogether, because
          UTF-8 can't have false matches even when treated like raw byte strings.
      
          Reviewed-by: John Naylor
          Discussion: https://www.postgresql.org/message-id/3173d989-bc1c-fc8a-3b69-f24246f73876%40iki.fi
      6ffa7140
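      A hedged sketch of the verification step described above, assuming the
      PostgreSQL backend environment (pg_mblen() from mb/pg_wchar.h); this is
      illustrative, not the actual code from commit 9556aa01:

          #include "postgres.h"
          #include "mb/pg_wchar.h"

          /*
           * After the single-byte BMH search reports a candidate match at
           * 'match', verify that it begins on a character boundary.  Walk the
           * haystack one whole character at a time with pg_mblen(): landing
           * exactly on 'match' means it starts a character; overshooting means
           * the candidate began in the middle of a multibyte character and is
           * a false positive.
           */
          static bool
          match_starts_on_char_boundary(const char *haystack, const char *match)
          {
              const char *p = haystack;

              while (p < match)
                  p += pg_mblen(p);

              return (p == match);
          }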
  2. 28 Jan 2019: 3 commits
  3. 27 Jan 2019: 2 commits
  4. 26 Jan 2019: 10 commits
    • Fix expected output for \di output change. · a2c7ccc9
      Committed by Heikki Linnakangas
      After commit 56bb376c, \di no longer prints the Storage column. I failed
      to change the 'bfv_partition' test's expected output accordingly.
      a2c7ccc9
    • Fix assertion failure in \di+ and add tests. · 56bb376c
      Committed by Heikki Linnakangas
      The 'translate_columns' array must be at least as large as the number
      of columns in the result set being passed to printQuery(). We had added
      one column, "Storage", in GPDB, so we must enlarge the array, too.

      This is a bit fragile, and would go wrong if there were any translated
      columns after the GPDB-added column. But there aren't, and we don't
      really do translation in GPDB anyway, so this seems good enough.

      The Storage column isn't actually interesting for indexes, so omit it
      for \di.

      Add a bunch of tests: for the \di+ case that was hitting the assertion,
      as well as for \d commands, to test the Storage column.
      
      Fixes github issue https://github.com/greenplum-db/gpdb/issues/6792
      Reviewed-by: Melanie Plageman <mplageman@pivotal.io>
      Reviewed-by: Jimmy Yih <jyih@pivotal.io>
      Reviewed-by: Jesse Zhang <jzhang@pivotal.io>
      56bb376c
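      A hedged sketch of the invariant described above; the column count and
      layout are illustrative, not copied from describe.c. printQuery()
      consults one translate_columns[] entry per result column, so the array
      must cover every column, including the GPDB-added "Storage" one:

          #include <stdbool.h>

          /* Upstream's column count, plus the "Storage" column added in GPDB.
           * The counts here are made up for illustration. */
          #define NUM_UPSTREAM_COLS  8
          #define NUM_RESULT_COLS    (NUM_UPSTREAM_COLS + 1)

          /*
           * One entry per result column; true means the column header is run
           * through gettext translation.  If the array covered only
           * NUM_UPSTREAM_COLS entries, printQuery() would read past its end
           * for the extra column, which is the failure this commit fixes.
           * Since "Storage" comes last and is untranslated, its entry can
           * simply stay false.
           */
          static const bool translate_columns[NUM_RESULT_COLS] =
              {false, false, true, false, false, false, false, false, false};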
    • CI: change gpexpand job dependency to icw_gporca_centos6. · 322c4602
      Committed by Ashwin Agrawal
      The icw_gporca_centos6 job generates the icw_gporca_centos6_dump
      artifact. gpexpand takes icw_gporca_centos6_dump as input, so make it
      depend on just that job instead of on all the ICW jobs. This makes the
      gpexpand job consistent with the pg_upgrade job and, importantly,
      records the real dependency instead of a perceived one.
      322c4602
    • gpexpand: need to start and stop the primaries with convertMasterDataDirToSegment. · c596cca7
      Committed by Ashwin Agrawal
      This partially reverts commit b597bfa8,
      as the primaries need to be started once using
      convertMasterDataDirToSegment.
      c596cca7
    • Fix mem leak · d2d1c209
      Committed by Bhuvnesh Chaudhary
      d2d1c209
    • Remove BitmapHeapPath and UniquePath check in cdbpath_cost_motion() · 014a9d6a
      Committed by Alexandra Wang
      For cost estimation of a MotionPath node, we calculate the rows as
      (subpath->rows * cdbpath_segments) for CdbPathLocus_IsReplicated() when
      the subpath is not an IndexPath, BitmapHeapPath, UniquePath, or
      BitmapAppendOnlyPath (the latter was removed entirely in db516347).
      Previously, for the above-mentioned nodes we always calculated the rows
      as subpath->rows. Why those Paths were treated specially is unknown;
      the logic has always been there. It used to live in cdbpath_rows() and
      was refactored as part of commit b2411b59. Therefore, remove the checks
      altogether and calculate the rows the same way for all
      CdbPathLocus_IsReplicated() subpaths. We had already removed the
      IndexPath check as part of the 94_STABLE merge.

      With this update, we see only one plan change in ICG.
      Co-authored-by: Ekta Khanna <ekhanna@pivotal.io>
      014a9d6a
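      A hedged sketch of the simplified estimate, using the identifiers named
      in the message; the surrounding function is illustrative, not the
      actual cdbpath_cost_motion(), and it assumes GPDB's 9.4-era planner
      headers:

          #include "postgres.h"
          #include "nodes/relation.h"
          #include "cdb/cdbpathlocus.h"

          /*
           * Row estimate for a motion's subpath, with the old
           * IndexPath/BitmapHeapPath/UniquePath special cases removed: a
           * Replicated motion delivers every input row to every segment, so
           * the estimate scales by the segment count; any other motion just
           * carries the subpath's row count through.
           */
          static double
          motion_row_estimate(Path *subpath, CdbPathLocus locus, int cdbpath_segments)
          {
              if (CdbPathLocus_IsReplicated(locus))
                  return subpath->rows * cdbpath_segments;

              return subpath->rows;
          }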
    • Remove tinc references. · b6af8d7b
      Committed by Ashwin Agrawal
      b6af8d7b
    • Remove TINC framework and tests. · 7fa85902
      Committed by Ashwin Agrawal
      All the tests have been ported out of this framework and nothing runs
      these tests in CI anymore.
      7fa85902
    • Remove tinc from concourse and pipeline files. · cd733c64
      Committed by Ashwin Agrawal
      This also removes the last remaining walrep job that used tinc from the
      pipeline file. Those tests are broken anyway and can't be run. The plan
      is to port the relevant ones to regress or behave.
      cd733c64
    • gpexpand: remove redundant creation of mirrors. · b597bfa8
      Committed by Ashwin Agrawal
      gpexpand runs `_gp_expand.sync_new_mirrors()` at the end, after updating
      the catalog; that step runs `gprecoverseg -aF`. But it was also calling
      `buildSegmentInfoForNewSegment()` as part of `add_segments()`, which
      creates the primaries, and calling `ConfigureNewSegment()` for the
      mirrors, which ran pg_basebackup internally. So the end result was that
      each mirror was created twice: first by pg_basebackup, and then again by
      `gprecoverseg -aF`.

      Hence, modify gpexpand to create only the primaries as part of
      `_gp_expand.add_segments()` and let `_gp_expand.sync_new_mirrors()` do
      the mirror creation. Spotted the redundancy while browsing the code.
      b597bfa8
  5. 25 Jan 2019: 13 commits
  6. 24 Jan 2019: 7 commits