1. 16 Dec 2017 — 1 commit
  2. 15 Dec 2017 — 5 commits
    • D
      Fix typos in pg_upgrade · e84a5fb5
      Daniel Gustafsson committed
      e84a5fb5
    • M
      Remove "chown gpdb_src" from CI scripts (#4144) · 10c80c88
      Michael Roth committed
      * Switching the chown to a more overlay-friendly chmod on directories

        - Initial work is to remove the chown from the gpadmin user setup and
          replace it with a chmod a+w on the directories.  This is sufficient
          for ICW, TINC and behave to run in most cases.

        - Gpload2 needs the datafile to be owned by gpadmin

        - Change to gpcloud as it was chowning the full directory

        - PXF tests needed to be able to write to the pxf_automation_src
          directory.  Updated tests to set directories world-writable instead
          of recursively chowning.  Singlenode needs to be owned by gpadmin.

      TODO: Change gpload2 to no longer need the datafile to be owned by gpadmin
      TODO: Clean up singlenode ownership for the PXF test
      10c80c88
    • D
      Fix PG_PRINTF_ATTRIBUTE backport · e0050bcb
      Daniel Gustafsson committed
      Commit 94eacb66 backported pg_attribute_printf() but missed
      the part which defines PG_PRINTF_ATTRIBUTE. Since the function
      wasn't used in the backport the missing declaration didn't yield
      any compiler warnings. The upcoming update of src/timezone to
      track the current PostgreSQL version will however use this and
      cause warnings.
      
      This is a partial backport of the below commit from PostgreSQL
      9.5. One hunk of the referenced commit was a fix for a previous
      attempt which was skipped since we never merged that fix (yet).
      
        commit b779168f
        Author: Noah Misch <noah@leadboat.com>
        Date:   Sun Nov 23 09:34:03 2014 -0500
      
          Detect PG_PRINTF_ATTRIBUTE automatically.
      
          This eliminates gobs of "unrecognized format function type" warnings
          under MinGW compilers predating GCC 4.4.
      e0050bcb
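      The effect of the backported macro can be sketched as follows. This is an
      illustrative, self-contained analog, not the actual GPDB source: the
      `log_fmt` helper is hypothetical, and the macro body is the simplified
      GCC-only case (upstream's configure check additionally picks the right
      archetype, e.g. `gnu_printf` on MinGW, and stores it in
      PG_PRINTF_ATTRIBUTE).

      ```c
      #include <stdarg.h>
      #include <stdio.h>

      /* Simplified sketch: on GCC, attach a printf-style format attribute so
       * -Wformat can check the variadic arguments against the format string. */
      #if defined(__GNUC__)
      #define pg_attribute_printf(f, a) __attribute__((format(printf, f, a)))
      #else
      #define pg_attribute_printf(f, a)
      #endif

      static int log_fmt(char *buf, size_t buflen, const char *fmt, ...)
                  pg_attribute_printf(3, 4);

      static int
      log_fmt(char *buf, size_t buflen, const char *fmt, ...)
      {
          va_list ap;
          int     n;

          va_start(ap, fmt);
          n = vsnprintf(buf, buflen, fmt, ap);
          va_end(ap);
          return n;
      }

      int
      main(void)
      {
          char buf[64];

          log_fmt(buf, sizeof(buf), "replayed %d records", 42);  /* types match */
          /* log_fmt(buf, sizeof(buf), "%d", "oops");  -- would warn under -Wformat */
          puts(buf);
          return 0;
      }
      ```

      Without the attribute (the broken state this commit fixes), the mismatched
      call in the comment would compile silently instead of producing a warning.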
    • M
      behave: gprecoverseg - ensure that gpfaultinjector gets triggered · acaccc6e
      Marbin Tan committed
      Ensure that we're actually triggering the `gpfaultinjector`.
      There are cases where, even though we have the `gpfaultinjector` set up,
      the transaction still does not block properly.

      By creating a database, we ensure that all segments get contacted
      and FTS will detect the issue that we created with gpfaultinjector.
      acaccc6e
    • X
      Add segwalrep jobs to master pipeline · 64d9c8d5
      Xin Zhang committed
      This will protect against regressions in walrep features on the master branch.
      64d9c8d5
  3. 14 Dec 2017 — 13 commits
    • T
      Support gptransfer only schema of databases or tables · d5852e91
      Tingfang Bao committed
      This makes gptransfer able to transfer only the schema of databases or
      tables, like "--schema-only -d foo" or "--schema-only -t bar.public.t1".
      It could actually do that before, but forgot to set the success flag.
      Signed-off-by: Adam Lee <ali@pivotal.io>
      d5852e91
    • P
      Ignore undetermined output within cursor.sql · fc8a30e9
      Pengzhou Tang committed
      When declaring a cursor, the QD will not block until the QEs have got a
      snapshot and set up the interconnect, so we cannot guarantee
      that the QD always sees that 'qe_got_snapshot_and_interconnect' was
      triggered. For the test case itself, if the writer QE can get
      slower than the reader QE without the help of the fault injector,
      that's even better.
      fc8a30e9
    • P
      Fix use-after-free in PyGreSQL (#4128) · 705124e6
      Peifeng Qiu committed
      pg_query is the underlying workhorse for db.query in
      Python. For INSERT queries, it returns a string containing
      the number of rows successfully inserted.

      PQcmdTuples() parses a PGresult returned by PQexec; if it is an insert
      count result, it returns a pointer to the count. However, this pointer
      points into the internal buffer of the PGresult, so it must not be used
      after PQclear(), even though most of the time its content remains
      accessible and unchanged. PyString_FromString makes a copy of the
      string, so moving PQclear() to after PyString_FromString is safe.

      This fixes the problem of gpload sometimes getting an unprintable
      insert count.
      705124e6
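      The safe ordering can be illustrated with a self-contained analog: here
      `FakeResult`, `run_insert()`, `cmd_tuples()` and `free_result()` are
      hypothetical stand-ins for PGresult, PQexec(), PQcmdTuples() and
      PQclear(), and `strdup()` plays the role of PyString_FromString().

      ```c
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>

      /* Stand-in for a PGresult whose internal buffer backs the string
       * returned by the cmd-tuples accessor. */
      typedef struct { char cmdtuples[16]; } FakeResult;

      static FakeResult *
      run_insert(void)
      {
          FakeResult *r = malloc(sizeof(*r));
          strcpy(r->cmdtuples, "3");        /* "3 rows inserted" */
          return r;
      }

      /* Like PQcmdTuples(): returns a pointer INTO the result object. */
      static const char *cmd_tuples(FakeResult *r) { return r->cmdtuples; }

      /* Like PQclear(): frees the result, invalidating that pointer. */
      static void free_result(FakeResult *r) { free(r); }

      int
      main(void)
      {
          FakeResult *res = run_insert();
          const char *count = cmd_tuples(res);

          /* WRONG order (the original bug): calling free_result(res) here
           * would leave `count` dangling before the copy is made. */

          char *copy = strdup(count);       /* deep copy, like PyString_FromString */
          free_result(res);                 /* safe: copy no longer aliases res */

          printf("%s\n", copy);
          free(copy);
          return 0;
      }
      ```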
    • M
      Resolve some GPDB_84_MERGE_FIXMEs in bitmap as well as a minor · f1fdc75a
      Max Yang committed
      possible memory leak.
      
      Author: Xiaoran Wang <xiwang@pivotal.io>
      f1fdc75a
    • M
      Fix internal_bpchar_pattern_compare compare logic to keep it · 35b53cbf
      Max Yang committed
      the same as upstream.

      In upstream, internal_bpchar_pattern_compare compares its inputs while
      ignoring trailing spaces, but in GPDB it compared the whole strings. The
      bug didn't surface before the PG 8.4 merge because GPDB used a TableScan
      when executing the following query; after the merge an IndexScan is used,
      and internal_bpchar_pattern_compare is used for the index:
      create table tbl(id int4, v char(10));
      create index tbl_v_idx_bpchar on tbl using btree(v bpchar_pattern_ops);
      insert into tbl values (1, 'abc');
      explain select * from tbl where v = 'abc '::char(20);
      select * from tbl where v = 'abc '::char(20);
      
      Author: Xiaoran Wang <xiwang@pivotal.io>
      35b53cbf
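      The corrected compare logic can be sketched like this; `bc_truelen` and
      `bpchar_pattern_cmp` below are simplified stand-ins for upstream's
      bcTruelen and internal_bpchar_pattern_compare, not the actual source.

      ```c
      #include <stdio.h>
      #include <string.h>

      /* Length of a bpchar value with trailing spaces ignored,
       * mirroring upstream's bcTruelen(). */
      static int
      bc_truelen(const char *s, int len)
      {
          while (len > 0 && s[len - 1] == ' ')
              len--;
          return len;
      }

      /* Compare the space-stripped values byte-wise over the shorter
       * length, then break ties on the stripped lengths. */
      static int
      bpchar_pattern_cmp(const char *a, int alen, const char *b, int blen)
      {
          int la = bc_truelen(a, alen);
          int lb = bc_truelen(b, blen);
          int cmp = memcmp(a, b, la < lb ? la : lb);

          if (cmp != 0)
              return cmp;
          return la - lb;
      }

      int
      main(void)
      {
          /* 'abc ' and 'abc' compare equal once trailing spaces are ignored,
           * which is why the index lookup in the commit's example now matches. */
          printf("%d\n", bpchar_pattern_cmp("abc ", 4, "abc", 3));
          return 0;
      }
      ```

      A whole-string memcmp, by contrast, would treat 'abc ' and 'abc' as
      different keys, which is exactly the mismatch the IndexScan exposed.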
    • X
      fix FIXME in transformStorageEncodingClause · 8cb28ee6
      Xiaoran Wang committed
      The namespace parameter passed to the transformRelOptions routine
      takes only two values: 'toast' or NULL.
      The 'toast' value filters for toast reloptions; NULL
      gets the non-toast reloptions.

      The transformStorageEncodingClause routine only needs to get the
      non-toast reloptions.
      
      Author: Max Yang <myang@pivotal.io>
      8cb28ee6
    • X
      Update missing_xlog to make it more deterministic · b69a8209
      Xin Zhang committed
      Before we shut down the mirror, we create a checkpoint and also run an
      empty transaction to ensure the restart point is created on the mirror.

      This will speed up the mirror recovery because we don't have to replay all
      the xlog records from the beginning.
      
      Author: Xin Zhang <xzhang@pivotal.io>
      Author: Taylor Vesely <tvesely@pivotal.io>
      b69a8209
    • T
      Add db_in_standby_mode to update control file · 6a546701
      Taylor Vesely committed
      Since commit c9e2693c, the control
      file is updated on the mirror when a restart point is created.
      However, unlike upstream, GPDB runs the mirror in
      DB_IN_STANDBY_MODE rather than upstream's DB_IN_ARCHIVE_RECOVERY mode.
      Hence, the control file was never updated when creating a restart point.
      
      Author: Xin Zhang <xzhang@pivotal.io>
      Author: Taylor Vesely <tvesely@pivotal.io>
      6a546701
    • B
      Set the max size of join order threshold to 12 · 426bf31f
      Bhuvnesh Chaudhary committed
      Signed-off-by: Haisheng Yuan <hyuan@pivotal.io>
      426bf31f
    • X
      Make fsync=off default on create-demo-cluster · 1ee171fa
      Xin Zhang committed
      Also remove the redundant fsync=off from the related pipeline files.

      It can be overridden with BLDWRAP_POSTGRES_CONF_ADDONS.
      
      Author: Xin Zhang <xzhang@pivotal.io>
      Author: Ashwin Agrawal <aagrawal@pivotal.io>
      1ee171fa
    • S
      Correct typo · ced25b39
      Shreedhar Hardikar committed
      ced25b39
    • L
      5c210c49
    • A
      Remove Startup Pass 4 PT verification code. · 5361041d
      Ashwin Agrawal committed
      This code is hidden behind a GUC and never turned on, so there is no point
      keeping it. It was written in the past due to some inconsistency issues
      which have not surfaced for a long time now. Plus, the PT needs to go away
      soon anyway.
      5361041d
  4. 13 Dec 2017 — 16 commits
  5. 12 Dec 2017 — 5 commits
    • H
      Fix accounting for memory usage of the new skew bucket in hash join. · ec466f41
      Heikki Linnakangas committed
      This was left out during the merge because, when we started merging this,
      we used to track the amount of memory used by the hash table differently.
      It was changed back to the way it is in upstream in commit f4d60673,
      before this code was merged in, but I forgot to restore these calls.
      
      Also re-enable increasing # of batches in hash join, when adding to skew
      bucket.
      ec466f41
    • H
      Cosmetic cleanup, to reduce diff vs. upstream. · 35ff159c
      Heikki Linnakangas committed
      35ff159c
    • D
      Replace usage of deprecated error codes · fd0a1b75
      Daniel Gustafsson committed
      These error codes were marked as deprecated in September 2007, but
      the code didn't get the memo. Extend the deprecation into the code
      and actually replace the usage. Ten years seems like long enough notice,
      so also remove the renames; the odds of anyone using these in code
      which compiles against a 6X tree should be low (and easily fixed).
      fd0a1b75
    • D
      Fix typo · 5bea949d
      Daniel Gustafsson committed
      5bea949d
    • H
      Remove bloom filter from hash joins. · 1fd4d7ee
      Heikki Linnakangas committed
      It was added as a performance optimization back in 2007, but it's not
      doing much at the moment. Some quick testing on my laptop suggests that
      it's a wash at best, and makes the join slightly slower in some scenarios.
      
      This will reduce our diff vs upstream, reducing the chance of merge
      conflicts in the future.
      1fd4d7ee