1. June 28, 2019 (8 commits)
    • ec8c18f1
    • Remove deadcode · 18f44ef3
      Committed by Jesse Zhang
      Two empty test files were introduced in commit 364cbce7. Must have been a
      think-o.
    • Revert "Update sysctl settings to match new recommendations" · e859186d
      Committed by David Krieger
      This reverts commit 559d9d9f.
    • Update sysctl settings to match new recommendations · 559d9d9f
      Committed by Shoaib Lari
      The recommended sysctl settings for user clusters are out of date, and
      after some investigation we've discovered a good minimal set of
      recommended defaults. These will be updated in the documentation, but
      we also want to set them in the VMs used for our CLI Behave tests so
      that we use values similar to those that users will have in their
      environments. This commit therefore adds a section to the Concourse
      cluster task file to do so.
      Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
      Co-authored-by: Kalen Krempely <kkrempely@pivotal.io>
    • Fix an issue caused by mismatched targetlist of Motion node · a565622e
      Committed by Pengzhou Tang
      A Motion node may have a subplan whose targetlist needs to be expanded
      so that all entries of the subplan's hashExpr are present in that
      targetlist. In that case, the Motion node should also update its own
      targetlist; otherwise the mismatch between the two targetlists makes
      the Motion node parse tuples (especially MemTuples) with the wrong
      tuple description, producing wrong results or even crashing.
      
      For a subplan, if we can get an Expr for a distribution-key column from
      the targetlist, that Expr is either already in the targetlist or
      references Vars in the targetlist. The old logic was that, if the Expr
      was valid, an Expr that merely referenced Vars was also added to the
      targetlist. For example, if a subplan has targetlist (c1) and a locus
      whose distribution key is (c1::float8), the returned Expr references a
      Var in the targetlist, and the old targetlist would be extended to
      (c1, c1::float8). That extension is unnecessary: the Expr can be
      evaluated from the Vars, and it only makes the Motion transfer more
      columns per tuple.
      Co-authored-by: Adam Lee <ali@pivotal.io>
      Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
      Reviewed-by: Richard Guo <rguo@pivotal.io>
      Reviewed-by: Daniel Gustafsson <dgustafsson@pivotal.io>
    • Fix wrong results for aggs containing a correlated subquery · 96d92667
      Committed by Shreedhar Hardikar
      This commit handles a missed case in the previous commit: "Fix
      algebrization of subqueries in queries with complex GROUP BYs".
      
      The logic inside RunExtractAggregatesMutator's Var case was intended to
      fix top-level Vars inside subqueries in the targetlist, but also
      incorrectly fixed top-level Vars in subqueries inside of aggregates.
    • Behave: Use UTF-8 for gpconfig tests · 4ba11535
      Committed by Shoaib Lari
      The gpconfig test was failing on UTF-8 characters when run on Ubuntu
      because our test containers use the POSIX locale. In this commit we set
      `LC_CTYPE` to `en_US.UTF-8` for the gpconfig test so that it behaves the
      same on all platforms.
      Co-authored-by: Jacob Champion <pchampion@pivotal.io>
    • gen_pipeline: Do not call `suggested_*()` functions in `gen_pipeline()`. · 8ed17a81
      Committed by Shoaib Lari
      The `gen_pipeline()` function called `suggested_git_remote()` and
      `suggested_git_branch()` to provide default values for the `git_remote`
      and `git_branch` parameters. For the `prod` pipeline, `gen_pipeline()` is
      called with the GPDB repo and `BASE_BRANCH`. However, because the
      `suggested_*()` calls sit in the `gen_pipeline()` function definition,
      they were still evaluated and raised an error, as they are not
      applicable to the production branches.
      
      Therefore, in this commit we have used `None` as the default and call the
      `suggested_*()` functions only if the corresponding parameters are not provided
      by the caller.
      Co-authored-by: Jacob Champion <pchampion@pivotal.io>
  2. June 27, 2019 (11 commits)
  3. June 26, 2019 (4 commits)
  4. June 25, 2019 (8 commits)
    • Default tablespace GUC tests · f7287795
      Committed by Adam Berlin
      - Ensure that indexes are placed into the default tablespace on the master and segments.
      - The default_tablespace GUC should not affect where a temporary table or its indexes are placed.
      - The default tablespace should override an explicit database tablespace configuration.
      - ALTER DATABASE SET TABLESPACE should not impact SET default_tablespace TO 'some_tablespace'.
      - Objects created while default_tablespace is set should end up in 'some_tablespace'.
    • Re-fly the gpdb_master/gpdb_master_without_asserts pipelines · ea7e00ca
      Committed by Tingfang Bao
      Add the Windows client testing jobs to the pipelines.
      Authored-by: Tingfang Bao <bbao@pivotal.io>
    • Add gpdb client test cases on windows pipeline (#7926) · 12f7c139
      Committed by Tingfang Bao
      1) Run gpload's TEST_REMOTE.py
      2) Run the psql SSL and gpfdist Windows pipe tests
      Co-authored-by: Peifeng Qiu <pqiu@pivotal.io>
      Co-authored-by: Tingfang Bao <bbao@pivotal.io>
      Co-authored-by: Xiaoran Wang <xiwang@pivotal.io>
      Co-authored-by: Xiaodong Huo <xhuo@pivotal.io>
    • Fix a bug when setting globalxmin · 7b1d67c1
      Committed by xiong-gang
      The bug was introduced in commit 'e00098'; it happens in a case like the following:
      
      Tx 1: generate xid 1000 on segment 0 (xmin=1000, globalxmin=1000)
      Tx 2: generate xid 2000 on segment 0 (xmin=1000, globalxmin=1000)
      Tx 1: finish
      Tx 3: generate xid 3000 on segment 0 (xmin=2000, globalxmin=1000)
      Tx 2: finish
      Tx 4: generate xid 4000 on segment 0 (xmin=3000, globalxmin=2000)
      
      In this scenario, tx4 can advance DistributedLogShared->oldestXmin to 2000,
      but tx3 should use its own globalxmin of 1000 instead of the value of
      DistributedLogShared->oldestXmin.
    • Bump ORCA version to 3.51.0 (#7996) · ee407f14
      Committed by Chris Hajas
      - Improve PciIntervalFromScalarBoolOr() to use faster algorithm
      - Only allow const expr evaluation for (const cmp const) ignoring casts
      - Remove FDisjointLeft check in CRange::PrngExtend()
      Co-authored-by: Chris Hajas <chajas@pivotal.io>
      Co-authored-by: Shreedhar Hardikar <shardikar@pivotal.io>
    • Remove unused QAUTILS_FILES.txt · 3e29b708
      Committed by Mark Sliva
      It is superseded by NON_PRODUCTION_FILES.txt.
      Co-authored-by: Mark Sliva <msliva@pivotal.io>
      Co-authored-by: Amil Khanzada <akhanzada@pivotal.io>
    • postinit: refuse dispatch-mode connections to QEs · ebc26b04
      Committed by Jamie McAtamney
      Prior to this commit, standard connections made to a primary segment
      would hang forever if that segment was not started in utility mode,
      because the call to cdb_setup() waits for a state that will never
      arrive. The hang exists in 6X as well, for slightly different reasons
      (an infinite wait on a semaphore).
      
      Some utilities make use of pg_isready to verify segment status, and a
      hung backend will report an incorrect status for the segment, which is
      how this bug was discovered. The hung backend sticks around forever in
      startup state as well.
      
      Add an explicit check on QE segments that the incoming session does not
      expect a dispatcher role, and bail out at the same place that we would
      normally complain about a mismatched utility-mode connection. (If the
      incoming session is set to the executor role, we'll raise a FATAL for
      other reasons, but at least we won't hang.)
      
      Add a regress test for the new behavior. We co-opt the new
      internal_connection test file for this and rename it gp_connections.
      We also add a more end-to-end regression test using the same
      reproduction case as the original bug. A rough sketch of the role check
      follows this entry.
      Co-authored-by: Jacob Champion <pchampion@pivotal.io>
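      As a rough illustration only (this is not the actual postinit.c change;
      names such as Gp_role, GP_ROLE_DISPATCH, GpIdentity.segindex, and
      MASTER_CONTENT_ID are approximations of GPDB identifiers and are not
      verified against the source here), the check amounts to refusing the
      dispatcher role on a segment before reaching any code path that would
      otherwise block:

          /* Hedged sketch: on a QE segment, reject a session that asks for the
           * dispatcher role instead of letting it wait forever in cdb_setup(). */
          static void
          reject_dispatch_mode_on_qe(void)
          {
              if (Gp_role == GP_ROLE_DISPATCH && GpIdentity.segindex != MASTER_CONTENT_ID)
                  ereport(FATAL,
                          (errcode(ERRCODE_PROTOCOL_VIOLATION),
                           errmsg("dispatch-mode connections are only allowed on the master"),
                           errhint("Connect to this segment in utility mode instead.")));
          }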
    • basebackup: correctly terminate readlink() buffer · dc0a3cec
      Committed by Nikolaos Kalampalikis
      The buffer corruption was caused by not NUL-terminating the buffer filled
      by readlink(), as of commit ccfa3ab7. We restored a previous line of code
      that terminates it correctly; a minimal sketch of the pattern follows
      this entry.
      Co-authored-by: Jacob Champion <pchampion@pivotal.io>
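      The underlying C pitfall is that readlink() never NUL-terminates the
      buffer it fills, so the caller must do so explicitly. A minimal,
      self-contained sketch of the pattern (illustrative only, not the
      basebackup.c code; the helper name is hypothetical):

          #include <unistd.h>

          /* Fill buf with the link target and NUL-terminate it explicitly,
           * since readlink() does not. Returns the target length, or -1. */
          static ssize_t
          read_link_terminated(const char *path, char *buf, size_t buflen)
          {
              ssize_t rllen = readlink(path, buf, buflen - 1);

              if (rllen < 0)
                  return -1;          /* caller can inspect errno */
              buf[rllen] = '\0';      /* conceptually, the restored line */
              return rllen;
          }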
  5. June 24, 2019 (1 commit)
  6. June 22, 2019 (3 commits)
  7. June 21, 2019 (5 commits)
    • docs - add info about configuring hive access via jdbc (#7931) · 706d7774
      Committed by Lisa Owen
      * docs - add info about configuring hive access via jdbc
      
      * add About to title
      
      * remove some nesting, misc edits from review
      
      * better table column names? clearer Authenticated User values
      
      * edits requested by alex, adjust heading levels
    • Implement one-phase optimization · b5871009
      Committed by xiong-gang
      If a transaction has only updated one QE, we can do one-phase commit
      there.
      
      If one-phase commit transactions don't write pg_distributedlog, tuple
      visibility will be checked only against the local snapshot, which
      produces incorrect results at the REPEATABLE READ isolation level.
      
      For example:
      create table t(a int);
      tx 1: BEGIN ISOLATION LEVEL REPEATABLE READ;
      tx 2: insert into t values(1);
      tx 1: select * from t where a = 1;
      tx 2: insert into t values(1);
      tx 2: insert into t values(2);
      tx 1: select * from t;
      
      The first SELECT of tx1 will create a distributed snapshot on QD and a local
      snapshot on segment 1, and the later SELECT of tx1 will create a local snapshot
      on segment 2. In this way, the later SELECT sees the first and the third tuple
      but not the second one.
      Co-authored-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
      Co-authored-by: Hubert Zhang <hzhang@pivotal.io>
      Co-authored-by: Gang Xiong <gxiong@pivotal.io>
    • Update the comments in makeHashAggEntryForInput() · 78429ffc
      Committed by Adam Lee
      The code doesn't call ExecCopySlotMemTupleTo() as the comments claimed,
      and the two calls to memtuple_form_to() are confusing, at least to me,
      so update the comments.
    • Fix small memory leak in partial-aggregate deserialization functions. · 498255a7
      Committed by Adam Lee
      This backports upstream commit bd1693e8, slightly modified to resolve a
      conflict in the comments. A minimal sketch of the allocation pattern the
      upstream fix describes follows this entry.
      
      	Author: Tom Lane <tgl@sss.pgh.pa.us>
      	Date:   Thu Jun 23 10:55:59 2016 -0400
      
      	    Fix small memory leak in partial-aggregate deserialization functions.
      
      	    A deserialize function's result is short-lived data during partial
      	    aggregation, since we're just going to pass it to the combine function
      	    and then it's of no use anymore.  However, the built-in deserialize
      	    functions allocated their results in the aggregate state context,
      	    resulting in a query-lifespan memory leak.  It's probably not possible for
      	    this to amount to anything much at present, since the number of leaked
      	    results would only be the number of worker processes.  But it might become
      	    a problem in future.  To fix, don't use the same convenience subroutine for
      	    setting up results that the aggregate transition functions use.
      
      	    David Rowley
      
      	    Report: <10050.1466637736@sss.pgh.pa.us>
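      To make the allocation point concrete, here is a minimal sketch of the
      pattern described above, using the standard AggCheckCallContext() and
      MemoryContextSwitchTo() APIs; MyAggState and both helper functions are
      hypothetical stand-ins rather than the real numeric.c code:

          #include "postgres.h"
          #include "fmgr.h"

          typedef struct MyAggState { int64 count; } MyAggState;   /* hypothetical */

          /* Transition-function style: the state must survive the whole
           * aggregation, so it is allocated in the long-lived aggregate
           * memory context. */
          static MyAggState *
          make_trans_state(FunctionCallInfo fcinfo)
          {
              MemoryContext aggcontext;
              MemoryContext oldcontext;
              MyAggState  *state;

              if (!AggCheckCallContext(fcinfo, &aggcontext))
                  elog(ERROR, "called in non-aggregate context");
              oldcontext = MemoryContextSwitchTo(aggcontext);
              state = (MyAggState *) palloc0(sizeof(MyAggState));
              MemoryContextSwitchTo(oldcontext);
              return state;
          }

          /* Deserialize-function style after the fix: the result is consumed
           * right away by the combine function, so allocating it in the
           * short-lived CurrentMemoryContext avoids the query-lifespan leak. */
          static MyAggState *
          make_deserialize_result(void)
          {
              return (MyAggState *) palloc0(sizeof(MyAggState));
          }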