1. 30 Jan 2019 (10 commits)
  2. 29 Jan 2019 (16 commits)
    • H
      Don't print WARNING for every temporary file being deleted at recovery. · d2d3c8ac
      Committed by Heikki Linnakangas
      Before this, you would get warnings like this in the log at crash
      recovery, for every temporary file that's deleted:
      
      2019-01-28 20:20:45.702848 EET,,,p7513,th-1570674304,,,,0,,,seg1,,,,,"WARNING","01000","could not open directory ""base/pgsql_tmp/pgsql_tmpslice1_tuplestore5876.0"": No such file or directory",,,,,,,,"pgfnames","pgfnames.c",43,
      
      To fix, backport the changes from PostgreSQL v11 that added support for
      removing temporary directories upstream: commit dc6c4c9d, and the
      follow-up commits 561885db and eeb3c2df.
      Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>
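The backported behavior can be sketched in Python (a minimal sketch of the idea only; the actual implementation is C in the cited commits). The point is that a path that has already vanished is normal at recovery and should not produce a WARNING:

```python
import errno
import os
import shutil

def remove_tmp_path(path):
    """Remove a temporary file or directory tree during recovery.

    An already-missing path is not an error: it may have been deleted
    by an earlier pass, so ENOENT is ignored silently instead of being
    logged as a WARNING.
    """
    try:
        if os.path.isdir(path):
            shutil.rmtree(path)  # temp directories are removed recursively
        else:
            os.unlink(path)
    except OSError as e:
        if e.errno != errno.ENOENT:
            raise  # real errors still surface
```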
    • P
      Fix FIXME in is_dummy_plan_walker() and also refactor the code a bit. (#6830) · 379fceda
      Committed by Paul Guo
      For a LockRows node, if its outer plan is dummy, there will be no
      rows to lock, so the LockRows node can be dummy as well.
      Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
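The rule the fix encodes can be sketched with hypothetical Python stand-ins for the C planner structures (the real `is_dummy_plan_walker()` lives in GPDB's C planner code):

```python
class Plan:
    """Minimal stand-in for a planner node (hypothetical)."""
    def __init__(self, node_type, outer=None, is_dummy=False):
        self.node_type = node_type
        self.outer = outer          # the outer (left) subplan
        self._is_dummy = is_dummy   # e.g. a Result with a constant-false filter

def is_dummy_plan(plan):
    """A plan is dummy if it is known to produce no rows."""
    if plan._is_dummy:
        return True
    # A LockRows node locks the rows produced by its outer plan; if
    # that outer plan is dummy, there is nothing to lock, so the
    # LockRows node is dummy too.
    if plan.node_type == "LockRows" and plan.outer is not None:
        return is_dummy_plan(plan.outer)
    return False
```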
    • P
      StartTransaction reads segment configuration for GP_ROLE_DISPATCH process · 227046fc
      Committed by Pengzhou Tang
      Previously, we took a snapshot of the newest segment configuration at the
      start of a global transaction and never changed it until the end of that
      transaction, even if a segment went down in the middle, which kept things
      simple. The problem is that some backends, like FTS and GDD, are never
      part of a distributed transaction, so they miss the chance to update the
      segment snapshot. FTS is not problematic now, because it explicitly
      destroys the segment snapshot and fetches a new one on every iteration so
      that it always resolves the newest hostnames, but GDD and other such
      backends are still problematic.
      
      On reflection, there is no harm in updating the segment snapshot even for
      a local transaction, except that we must take care with a local
      transaction started before any database is selected. Another option was
      to have GDD do an explicit update on every loop, but that would be easy
      to forget whenever a similar backend is added.
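The described control flow might be sketched like this (all names are hypothetical; the real change is in GPDB's C transaction machinery):

```python
class Session:
    """Hypothetical stand-in for a backend's state."""
    def __init__(self, role, database):
        self.role = role
        self.database = database
        self.segment_snapshot = None

    def read_segment_configuration(self):
        # Stand-in for reading gp_segment_configuration from the catalog.
        return {"seg0": "up", "seg1": "up"}

def start_transaction(session):
    """Refresh the segment-configuration snapshot for every
    GP_ROLE_DISPATCH process at transaction start -- not only for
    global (distributed) transactions -- so backends such as GDD that
    never join a distributed transaction still see configuration changes.
    """
    if session.role == "GP_ROLE_DISPATCH":
        # A local transaction may start before any database is selected;
        # reading the catalog requires one, so skip the refresh then.
        if session.database is not None:
            session.segment_snapshot = session.read_segment_configuration()
```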
    • D
      1067a219
    • M
      docs - remove gptransfer from docs (#6821) · 89c5c3fc
      Committed by Mel Kiyama
      * docs - remove gptransfer from docs
      --removed gptransfer topics, references to gptransfer, and images.
      --also updated text in gpcopy-migrate as a rough update for 6.0
      
      * docs - remove gptransfer from docs - review updates
    • C
      docs - REPEATABLE READ xact mode is supported. (#6717) · fe719bf5
      Committed by Chuck Litzell
      * docs - REPEATABLE READ xact mode is supported. SERIALIZABLE falls back to REPEATABLE READ.
      
      * Note that GPDB doesn't implement PGSQL SSI transactions
      
      * Review comments
    • M
      docs - updates for online expand (#6719) · dd5bb58b
      Committed by Mel Kiyama
      * docs - updates for online expand
      
      * docs - online expand - edits based on review comments.
      updated catalog table information.
      removed draft comments.
    • B
      Concourse: Add missing libquicklz resources to compile_gpdb_binary_swap_centos6 · d9782cae
      Committed by Bradford D. Boyle
      These were previously added to the task, but missed from the
      pipeline.
      Co-authored-by: Bradford D. Boyle <bboyle@pivotal.io>
      Co-authored-by: David Sharp <dsharp@pivotal.io>
    • B
      8ca7d806
    • D
      Switch quicklz_compressor extensions from gpaddon to gpcontrib · 0a8afa67
      Committed by David Sharp
      And configure GPDB with --with-quicklz on RHEL
      
      This commit removes quicklz_compressor from all platforms except
      RHEL/CentOS. Other platforms will be re-enabled in the future.
      Co-authored-by: David Sharp <dsharp@pivotal.io>
      Co-authored-by: Ben Christel <bchristel@pivotal.io>
    • D
      6de2a3cf
    • J
      pg_rewind: fix -R and -S options · 3ed05465
      Committed by Jacob Champion
      A character transposition in the getopt_long() option string meant
      that the argument-taking colon intended for -S was applied to -R
      instead:
      
          pg_rewind: option requires an argument -- R
      
      Fix that.
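Python's getopt module uses the same optstring convention as C's getopt_long() (a ':' after a letter means that option takes an argument), so this class of bug can be demonstrated without asserting anything about pg_rewind's actual options:

```python
import getopt

# Suppose -R is a flag and -S takes an argument, but the colon has
# been transposed onto R.

# Buggy optstring "R:S": a bare -R now demands an argument.
err = None
try:
    getopt.getopt(["-R"], "R:S")
except getopt.GetoptError as e:
    err = str(e)  # reports that -R requires an argument -- the symptom seen

# Corrected optstring "RS:": -R is a flag, -S takes the argument.
opts, rest = getopt.getopt(["-R", "-S", "slot1"], "RS:")
```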
    • D
      Add notes to qualify lack of Large Object support. (#6798) · 89f53441
      Committed by David Yozie
      * Add notes to qualify lack of large object support.
      
      * Replacing large object nonsupport note with more general description and link to postgresql docs
    • D
      Docs: update pg_class relkind entries (#6799) · 0c24af63
      Committed by David Yozie
      * update pg_class relkind entries
      
      * Remove duplicate entry for composite type
      
      * Add info for missing columns: reloftype, relallvisible, relpersistence, relhastriggers
    • H
      Remove FIXME, there's nothing to do here. · d18b0e4f
      Committed by Heikki Linnakangas
      The point of this FIXME was that the code before the 9.2 merge was
      possibly broken, because it was missing this code to get the input slot.
      I think it was missing before the 9.2 merge because of a bungled merge
      of commit 7fc0f062 during the 9.0 merge, but the code in GPDB master is
      now identical to upstream, so there's nothing to do. Also, comparing the
      8.2 and 5X_STABLE code, it looks correct in 5X_STABLE as well, so
      there's nothing to do there either.
    • H
      Use single-byte Boyer-Moore-Horspool search even with multibyte encodings. · 6ffa7140
      Committed by Heikki Linnakangas
      This is a backport of upstream commit 9556aa01 and Tom Lane's follow-up
      commit 6119060d. Cherry-picked now to avoid the 256 MB limit on strings:
      we used to have an old workaround for that issue in GPDB, but lost it as
      part of the 9.1 merge.
      
      Upstream commit:
      
      commit 9556aa01
      Author: Heikki Linnakangas <heikki.linnakangas@iki.fi>
      Date:   Fri Jan 25 16:25:05 2019 +0200
      
          Use single-byte Boyer-Moore-Horspool search even with multibyte encodings.
      
          The old implementation first converted the input strings to arrays of
          wchars, and performed the conversion on those. However, the conversion is
          expensive, and for a large input string, consumes a lot of memory.
          Allocating the large arrays also meant that these functions could not be
          used on strings larger than 1 GB / pg_encoding_max_length() (256 MB for UTF-8).
      
          Avoid the conversion, and instead use the single-byte algorithm even with
          multibyte encodings. That can get fooled, if there is a matching byte
          sequence in the middle of a multi-byte character, so to eliminate false
          positives like that, we verify any matches by walking the string character
          by character with pg_mblen(). Also, if the caller needs the position of
          the match, as a character-offset, we also need to walk the string to count
          the characters.
      
          Performance testing shows that walking the whole string with pg_mblen() is
          somewhat slower than converting the whole string to wchars. It's still
          often a win, though, because we don't need to do it if there is no match,
          and even when there is, we only need to walk up to the point where the
          match is, not the whole string. Even in the worst case, there would be
          room for optimization: Much of the CPU time in the current loop with
          pg_mblen() is function call overhead, and could be improved by inlining
          pg_mblen() and/or the encoding-specific mblen() functions. But I didn't
          attempt to do that as part of this patch.
      
          Most of the callers of text_position_setup/next functions were actually
          not interested in the position of the match, counted in characters. To
          cater for them, refactor the text_position_next() interface into two
          parts: searching for the next match (text_position_next()), and returning
          the current match's position as a pointer (text_position_get_match_ptr())
          or as a character offset (text_position_get_match_pos()). Getting the
          pointer to the match is a more convenient API for many callers, and with
          UTF-8, it allows skipping the character-walking step altogether, because
          UTF-8 can't have false matches even when treated like raw byte strings.
      
          Reviewed-by: John Naylor
          Discussion: https://www.postgresql.org/message-id/3173d989-bc1c-fc8a-3b69-f24246f73876%40iki.fi
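A Python sketch of the approach, using UTF-8 for illustration (`bytes.find` stands in for the Boyer-Moore-Horspool search, and `utf8_mblen` plays the role of pg_mblen(); note that, as the commit message explains, UTF-8 cannot actually produce false matches, but the verification walk shows the mechanism needed for encodings that can):

```python
def utf8_mblen(b, i):
    """Byte length of the UTF-8 character starting at b[i].
    Only ever called at a character start during the walk."""
    c = b[i]
    if c < 0x80:
        return 1
    if c < 0xE0:
        return 2
    if c < 0xF0:
        return 3
    return 4

def find_multibyte(haystack: str, needle: str):
    """Return the byte offset of the first real match of needle in
    haystack. Search the raw bytes, then verify each candidate hit by
    walking the string character by character from the start: a
    byte-level hit that lands mid-character is a false positive."""
    hb, nb = haystack.encode("utf-8"), needle.encode("utf-8")
    pos = hb.find(nb)
    while pos != -1:
        i = 0
        while i < pos:
            i += utf8_mblen(hb, i)
        if i == pos:          # the walk lands exactly on the hit:
            return pos        # it starts on a character boundary
        pos = hb.find(nb, pos + 1)
    return -1
```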
  3. 28 Jan 2019 (3 commits)
  4. 27 Jan 2019 (2 commits)
  5. 26 Jan 2019 (9 commits)