1. 14 April 2018 (4 commits)
    • Remove AppendOnlyStorage_GetUsableBlockSize(). · 0a119de3
      Authored by Ashwin Agrawal
      When the block size is 2MB, AppendOnlyStorage_GetUsableBlockSize
      returned the wrong usable block size. The expected result is 2MB, but
      the function returned (2MB - 4). This is because the macro
      AOSmallContentHeader_MaxLength is defined as (2MB - 1); after rounding
      down to 4-byte alignment, the result is (2MB - 4).
      
      Without the fix, errors like the following can occur: "ERROR: Used
      length 2097152 greater than bufferLen 2097148 at position 8388592 in
      table 'xxxx'".
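      The arithmetic behind the bug can be sketched as follows (a minimal illustration, not the actual GPDB code):

      ```python
      # Minimal sketch of the faulty computation (illustrative, not GPDB code).
      def round_down_4(n):
          """Round n down to a 4-byte boundary."""
          return n & ~3

      BLOCK_SIZE = 2 * 1024 * 1024                           # 2MB = 2097152
      AO_SMALL_CONTENT_HEADER_MAX_LENGTH = BLOCK_SIZE - 1    # the (2MB - 1) macro value

      # Capping the block size by the (2MB - 1) macro and aligning down
      # yields 2097148 (2MB - 4) instead of the expected 2097152.
      usable = round_down_4(min(BLOCK_SIZE, AO_SMALL_CONTENT_HEADER_MAX_LENGTH))
      print(usable)  # 2097148
      ```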
      
      Also removed some related but unused macros, to clean up the AO storage
      code.
      Co-authored-by: Lirong Jian <jian@hashdata.cn>
    • Use relmapping file in gp_replica_check. · da277082
      Authored by Ashwin Agrawal
      Commit b9b8831a introduced the "relation mapping"
      infrastructure, which stores relfilenode as 0 in pg_class for shared
      and nailed catalog tables. The actual relfilenode is stored in a
      separate file that maps OIDs to relfilenodes. gp_replica_check builds a
      hashtable of relfilenodes to look up the actual files on disk, so while
      populating this hashtable, consult the relmap file to get the
      relfilenode for tables with relfilenode = 0.
      
      This should help avoid seeing WARNINGs like "relfilenode XXXX not present in
      primary's pg_class" or "found extra unknown file on mirror:" for
      gp_replica_check.
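      The lookup logic can be sketched like this (hypothetical data; the real code reads the binary relmap file):

      ```python
      # Sketch: resolve the on-disk relfilenode for a relation, consulting a
      # relmap-style mapping when pg_class stores 0 (mapped catalog tables).
      pg_class_relfilenode = {1259: 0, 16384: 16384}  # oid -> relfilenode; 0 = mapped
      relmap = {1259: 12345}                          # oid -> actual relfilenode

      def resolve_relfilenode(oid):
          node = pg_class_relfilenode[oid]
          # For mapped relations, the true relfilenode lives in the relmap file.
          return relmap[oid] if node == 0 else node
      ```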
      Co-authored-by: David Kimura <dkimura@pivotal.io>
      Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
    • Remove FIXME in statistic computation of planner · b7abfcbf
      Authored by Venkatesh Raghavan
      The 8.4 changes to statistics computation are comprehensive.
      We will fix any fallout from these changes as we observe it.
      Current tests show that things are kosher, so the FIXME is removed.
    • Bump ORCA to v2.55.19 · 9a70b244
      Authored by Bhuvnesh Chaudhary
  2. 13 April 2018 (17 commits)
  3. 12 April 2018 (3 commits)
    • Implement NUMERIC upgrade for AOCS versions < 8.3 · 54895f54
      Authored by Jacob Champion
      8.2->8.3 upgrade of NUMERIC types was implemented for row-oriented AO
      tables, but not column-oriented. Correct that here.
      
      Store upgraded Datum data in a per-DatumStream buffer, to avoid
      "upgrading" the same data multiple times (multiple tuples may be
      pointing at the same data buffer, for example with RLE compression).
      Cache the column's base type in the DatumStreamRead struct.
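      The caching idea can be illustrated with a small sketch (convert_numeric is a hypothetical stand-in for the real conversion routine):

      ```python
      # Sketch: upgrade each shared data buffer once, not once per tuple.
      conversions = 0

      def convert_numeric(raw):
          """Hypothetical stand-in for the pre-8.3 -> 8.3 NUMERIC rewrite."""
          global conversions
          conversions += 1
          return raw

      upgrade_cache = {}   # per-DatumStream buffer cache, keyed by buffer identity

      def upgrade_datum(buf_id, raw):
          if buf_id not in upgrade_cache:
              upgrade_cache[buf_id] = convert_numeric(raw)
          return upgrade_cache[buf_id]

      # Three tuples pointing at the same RLE-compressed buffer: one conversion.
      for _ in range(3):
          upgrade_datum(1, b"\x01\x02")
      ```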
      Co-authored-by: Taylor Vesely <tvesely@pivotal.io>
    • Revert "Fix bug that planner generates redundant motion for joins on distribution key" · 2a326e59
      Authored by Bhuvnesh Chaudhary
      This reverts commit 8b0a7fed.
      
      Due to this commit, full join queries with a condition on varchar
      columns started failing with the error below. A RelabelType node is
      expected on top of varchar columns while looking up the sort operator,
      but the said commit removed that node.
      
      ```sql
      create table foo(a varchar(30), b varchar(30));
      postgres=# select X.a from foo X full join (select a from foo group by 1) Y ON X.a = Y.a;
      ERROR:  could not find member 1(1043,1043) of opfamily 1994 (createplan.c:4664)
      ```
      
      Will reopen issue #4175, which motivated this patch.
    • Add missing #ifdef block in aset.c (#4704) · 3ddbb283
      Authored by Andreas Scherbaum
  4. 11 April 2018 (8 commits)
    • Add a GUC to limit the number of slices for a query · d716a92f
      Authored by Pengzhou Tang
      Executing a query plan containing a large number of slices may slow
      down the entire Greenplum cluster: each "n-gang" slice corresponds to a
      separate process per segment. An example of such a query is a UNION ALL
      atop several complex views. To prevent this situation, add a GUC
      gp_max_slices and refuse to execute plans whose number of slices
      exceeds that limit.
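      Conceptually the check looks like this (a sketch; the error wording and the 0-means-unlimited convention are assumptions, not the commit's exact behavior):

      ```python
      gp_max_slices = 100   # the new GUC; 0 is assumed here to mean "no limit"

      def check_slice_limit(num_slices):
          # Refuse to execute plans whose slice count exceeds the GUC.
          if gp_max_slices > 0 and num_slices > gp_max_slices:
              raise RuntimeError(
                  f"query plan has {num_slices} slices, "
                  f"exceeding gp_max_slices ({gp_max_slices})")
      ```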
      Signed-off-by: Jesse Zhang <sbjesse@gmail.com>
    • Add missing header file of gppc · adba45a9
      Authored by xiong-gang
    • Add GUC verify_gpfdists_cert · d66a7a1f
      Authored by xiong-gang
      This GUC determines whether curl verifies the authenticity of
      gpfdist's certificate.
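      The effect of such a flag can be illustrated with Python's stdlib ssl module (an analogue of libcurl's CURLOPT_SSL_VERIFYPEER; the function name here is illustrative, not GPDB code):

      ```python
      import ssl

      def make_client_context(verify_gpfdists_cert: bool) -> ssl.SSLContext:
          """Build a TLS context that optionally skips certificate verification."""
          ctx = ssl.create_default_context()
          if not verify_gpfdists_cert:
              # Disabling verification accepts any certificate the server presents.
              ctx.check_hostname = False
              ctx.verify_mode = ssl.CERT_NONE
          return ctx
      ```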
    • CI: Use larger instance type for icw sles12 tests · ba5cfdbd
      Authored by Alexandra Wang
      Update template pipeline to reflect this change: 553b8754
      Authored-by: Alexandra Wang <lewang@pivotal.io>
    • CI: Replace top level `*_anchor:`s with a single list of `anchors` · 26d69d7e
      Authored by Alexandra Wang
      Re-apply 3a772cfd; this commit was accidentally overwritten when
      introducing the CCP 2.0 change.
      Authored-by: Alexandra Wang <lewang@pivotal.io>
    • Concourse: Make setup_gpadmin_user.bash sourceable · 91bb0d6f
      Authored by David Sharp
      Authored-by: David Sharp <dsharp@pivotal.io>
    • Fix Analyze privilege issue when executed by superuser · 3c139b9f
      Authored by Bhuvnesh Chaudhary
      Upstream patch 62aba765 fixed CVE-2009-4136, a
      security vulnerability, with the intent to properly manage
      session-local state during execution of an index function by a
      database superuser; in some cases the vulnerability allowed remote
      authenticated users to gain privileges via a table with crafted index
      functions.
      
      Looking into the details of CVE-2009-4136 and the related
      CVE-2007-6600, the patch should ideally have limited its scope to
      computing stats on index expressions, where we run functions to
      evaluate the expression, which could potentially present a security
      threat.
      
      However, the patch changed the user to the table owner before
      collecting the sample, due to which, even if ANALYZE was run by a
      superuser, the sample could not be collected when the table owner did
      not have sufficient privileges to access the table. With this commit,
      we switch back to the original user while collecting the sample, as
      sampling does not deal with indexes or function calls, which were the
      original concern of the patch.
      
      Upstream does not face the privilege issue, as it performs block
      sampling instead of issuing a query.
      Signed-off-by: Sambitesh Dash <sdash@pivotal.io>
    • Address GPDB_84_MERGE_FIXME in simplify_EXISTS_query() · 99450728
      Authored by Abhijit Subramanya
      This FIXME is two-fold:
      - Handling LIMIT 0
        The LIMIT is already handled in the caller,
        convert_EXISTS_sublink_to_join(): when an existential sublink
        contains an aggregate without GROUP BY or HAVING, we can safely
        replace it by a one-time TRUE/FALSE filter based on the type of
        sublink, since the result of an aggregate is always going to be one
        row even if its input is zero rows. However, this assumption is
        incorrect when the sublink contains LIMIT/OFFSET, for example if the
        final limit count after applying the offset is 0.
      
      - Rules for demoting HAVING to WHERE
        Previously, simplify_EXISTS_query() only allowed demoting HAVING
        quals to WHERE if the query did not contain any aggregates. To
        determine this, it used query->hasAggs, which is incorrect since
        hasAggs indicates that an aggregate is present either in the
        targetlist or in HAVING. This penalized queries where HAVING did not
        contain the aggregate but the targetlist did (as demonstrated in the
        newly added test). This check is now replaced by
        contain_aggs_of_level(). Also, do not demote if HAVING contains
        volatile functions, since they need to be evaluated once per group.
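      The LIMIT 0 point can be illustrated in miniature (plain Python standing in for the planner's reasoning, not GPDB code):

      ```python
      # An aggregate without GROUP BY/HAVING yields exactly one row even for
      # zero input rows, so EXISTS(...) over it is always TRUE -- unless a
      # LIMIT/OFFSET discards that row.
      input_rows = []                      # empty input
      agg_output = [sum(input_rows)]       # one output row: [0]

      exists_without_limit = len(agg_output) > 0       # always True
      exists_with_limit_0 = len(agg_output[:0]) > 0    # False: LIMIT 0 drops the row
      ```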
      Signed-off-by: Dhanashree Kashid <dkashid@pivotal.io>
  5. 10 April 2018 (8 commits)
    • Fix setting relfrozenxid during upgrade. · aa646fe7
      Authored by Richard Guo
      Relations with external storage, as well as AO and CO tables, should
      have InvalidTransactionId in relfrozenxid during upgrade.
      
      The idea is to keep the logic consistent with the function
      should_have_valid_relfrozenxid().
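      The rule can be sketched as follows (the storage codes below are illustrative, not the exact GPDB relstorage values):

      ```python
      INVALID_TRANSACTION_ID = 0

      def relfrozenxid_for(relstorage, freeze_xid):
          """Mirror the idea of should_have_valid_relfrozenxid(): external,
          AO-row and CO relations get InvalidTransactionId during upgrade."""
          if relstorage in ('x', 'a', 'c'):    # external, AO row, CO (illustrative codes)
              return INVALID_TRANSACTION_ID
          return freeze_xid                    # heap relations keep a valid xid
      ```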
    • When dumping function, append callback function attribute if needed. · d5842dc3
      Authored by Richard Guo
      The 'DESCRIBE' attribute is added by GPDB to specify the name of a
      callback function. Currently pg_dump does not handle this attribute.
      Co-authored-by: Paul Guo <paulguo@gmail.com>
    • Revert "Fix pushing down of quals in subqueries contains window funcs" · cc11b40e
      Authored by Bhuvnesh Chaudhary
      This reverts commit 54ee5b5c.
      
      qual_is_pushdown_safe_set_operation crashed at Assert(subquery).
    • Fix pushing down of quals in subqueries contains window funcs · 54ee5b5c
      Authored by Bhuvnesh Chaudhary
      Previously, if a subquery contained window functions, pushing down
      filters was banned entirely. This commit fixes the issue by pushing
      down filters that are not on columns projected using window functions.
      
      Also adds relevant tests.
      
      Test case: after porting the fix to gpdb master, in the case below the
      filter `b = 1` is pushed down:
      ```
      explain select b from (select b, row_number() over (partition by b) from foo) f  where b = 1;
                                                   QUERY PLAN
      ----------------------------------------------------------------------------------------------------
       Gather Motion 3:1  (slice2; segments: 3)  (cost=0.00..1.05 rows=1 width=4)
         ->  Subquery Scan on f  (cost=0.00..1.05 rows=1 width=4)
               ->  WindowAgg  (cost=0.00..1.04 rows=1 width=4)
                     ->  Redistribute Motion 3:3  (slice1; segments: 3)  (cost=0.00..1.03 rows=1 width=4)
                           Hash Key: foo.b
                           ->  Seq Scan on foo  (cost=0.00..1.01 rows=1 width=4)
                                 Filter: b = 1
       Optimizer: legacy query optimizer
      ```
      
      Currently on master, the plan is as below; the filter is not pushed down.
      ```
      explain select b from (select b, row_number() over (partition by b) from foo) f  where b = 1;
                                                   QUERY PLAN
      ----------------------------------------------------------------------------------------------------
       Gather Motion 3:1  (slice2; segments: 3)  (cost=1.04..1.07 rows=1 width=4)
         ->  Subquery Scan on f  (cost=1.04..1.07 rows=1 width=4)
               Filter: f.b = 1
               ->  WindowAgg  (cost=1.04..1.06 rows=1 width=4)
                     Partition By: foo.b
                     ->  Sort  (cost=1.04..1.04 rows=1 width=4)
                           Sort Key: foo.b
                           ->  Redistribute Motion 3:3  (slice1; segments: 3)  (cost=0.00..1.03 rows=1 width=4)
                                 Hash Key: foo.b
                                 ->  Seq Scan on foo  (cost=0.00..1.01 rows=1 width=4)
       Optimizer: legacy query optimizer
      ```
    • d33426eb
    • docs - remove gpcrondump/gpdbrestore/gpmfr refs, doc files (#4830) · 82dd8ad2
      Authored by Lisa Owen
      * docs - remove gpcrondump/gpdbrestore/gpmfr refs, doc files
      
      * use more generic description for gpbackup/gprestore
      
      * gpbackup has same limitation w.r.t. leaf child parts
    • docs: minor edits to ALTER EXTERNAL TABLE. (#4840) · 27d7d718
      Authored by Mel Kiyama
    • Add IDLE_RESOURCES_NEVER_TIME_OUT macro for magic 0 value · feb34228
      Authored by Ben Christel
      This is just refactoring, related to #4690.
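      The refactor amounts to naming the magic value; a sketch (the surrounding function and parameter names are illustrative, not the GPDB code):

      ```python
      IDLE_RESOURCES_NEVER_TIME_OUT = 0    # named constant replacing a bare 0

      def should_release_idle_resources(idle_timeout, idle_elapsed):
          # A timeout of IDLE_RESOURCES_NEVER_TIME_OUT means "never time out".
          if idle_timeout == IDLE_RESOURCES_NEVER_TIME_OUT:
              return False
          return idle_elapsed >= idle_timeout
      ```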
      Co-authored-by: Ben Christel <bchristel@pivotal.io>
      Co-authored-by: Amil Khanzada <akhanzada@pivotal.io>