1. 14 Mar 2019, 1 commit
  2. 01 Oct 2018, 1 commit
  3. 28 Sep 2018, 3 commits
    • Order active window clauses for greater reuse of Sort nodes. · 3f0d46f7
      Committed by Daniel Gustafsson
      This is a backport of the below commit from postgres 12dev, which in turn
      is a patch that was influenced by an optimization from the previous version
      of the Greenplum Window code. The idea is to order the Sort nodes based on
      sort prefixes, such that sorts can be reused by subsequent nodes.
      
      As this uses EXPLAIN in the test output, a new expected file is added for
      ORCA output even though the patch only touches the postgres planner.
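
      For illustration, a hedged sketch of the kind of query that benefits
      (table, column, and window names are hypothetical): with the clauses
      ordered so that the longer sort key comes first, output sorted by
      (a, b) is already sorted by (a), so the second window needs no extra
      Sort node.

      ```sql
      -- Hypothetical example: w1's key (a) is a prefix of w2's key (a, b),
      -- so sorting once for w2 lets w1 reuse the same ordering.
      SELECT sum(x) OVER w1,
             sum(x) OVER w2
      FROM t
      WINDOW w1 AS (ORDER BY a),
             w2 AS (ORDER BY a, b);
      ```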
      
        commit 728202b6
        Author: Andrew Gierth <rhodiumtoad@postgresql.org>
        Date:   Fri Sep 14 17:35:42 2018 +0100
      
          Order active window clauses for greater reuse of Sort nodes.
      
          By sorting the active window list lexicographically by the sort clause
          list but putting longer clauses before shorter prefixes, we generate
          more chances to elide Sort nodes when building the path.
      
          Author: Daniel Gustafsson (with some editorialization by me)
          Reviewed-by: Alexander Kuzmenkov, Masahiko Sawada, Tom Lane
          Discussion: https://postgr.es/m/124A7F69-84CD-435B-BA0E-2695BE21E5C2%40yesql.se
      3f0d46f7
    • Reimplement DISTINCT for window aggregates. · 6523b432
      Committed by Heikki Linnakangas
      GPDB 5 supported DISTINCT in window aggregates, e.g.:
      
      COUNT(DISTINCT x) OVER (PARTITION BY y)
      
      However, PostgreSQL does not support that, and GPDB lost the capability
      as part of the window functions rewrite, too. The upstream has an
      explicit check that rejects it, but that check was lost in the window
      function rewrite, so the parser accepted the DISTINCT while executing
      the query as if it were not there. There were also no tests that would
      return a different result with the DISTINCT than without, which is why
      no one noticed.
      
      To fix, implement the DISTINCT support, to the same extent that the old
      implementation supported it. The new implementation adds a little sort +
      deduplicate step for each DISTINCT aggregate. I'm not sure how this
      compares with the old implementation, performance-wise, but at least it
      works now.
      
      Also, add the missing tests.
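
      A hedged sketch of the re-enabled syntax (table and column names are
      hypothetical); each row gets the count of distinct x values within its
      y-partition:

      ```sql
      -- DISTINCT-qualified window aggregate, supported again by this fix
      SELECT y,
             COUNT(DISTINCT x) OVER (PARTITION BY y) AS distinct_x_in_y
      FROM t;
      ```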
      6523b432
    • Allow tables to be distributed on a subset of segments · 4eb65a53
      Committed by ZhangJackey
      There was an assumption in gpdb that a table's data is always
      distributed on all segments. However, this is not always true: for
      example, when a cluster is expanded from M segments to N (N > M), all
      the tables are still on M segments. To work around the problem, we used
      to have to alter all the hash-distributed tables to randomly
      distributed to get correct query results, at the cost of bad
      performance.
      
      Now we support distributing table data on a subset of segments.
      
      A new column `numsegments` is added to the catalog table
      `gp_distribution_policy` to record how many segments a table's data is
      distributed on. By doing so we can allow DML on M-segment tables;
      joins between M-segment and N-segment tables are also supported.
      
      ```sql
      -- t1 and t2 are both distributed on (c1, c2),
      -- one on 1 segment, the other on 2 segments
      select localoid::regclass, attrnums, policytype, numsegments
          from gp_distribution_policy;
       localoid | attrnums | policytype | numsegments
      ----------+----------+------------+-------------
       t1       | {1,2}    | p          |           1
       t2       | {1,2}    | p          |           2
      (2 rows)
      
      -- t1 and t1 have exactly the same distribution policy,
      -- join locally
      explain select * from t1 a join t1 b using (c1, c2);
                         QUERY PLAN
      ------------------------------------------------
       Gather Motion 1:1  (slice1; segments: 1)
         ->  Hash Join
               Hash Cond: a.c1 = b.c1 AND a.c2 = b.c2
               ->  Seq Scan on t1 a
               ->  Hash
                     ->  Seq Scan on t1 b
       Optimizer: legacy query optimizer
      
      -- t1 and t2 are both distributed on (c1, c2),
      -- but as they have different numsegments,
      -- one has to be redistributed
      explain select * from t1 a join t2 b using (c1, c2);
                                QUERY PLAN
      ------------------------------------------------------------------
       Gather Motion 1:1  (slice2; segments: 1)
         ->  Hash Join
               Hash Cond: a.c1 = b.c1 AND a.c2 = b.c2
               ->  Seq Scan on t1 a
               ->  Hash
                     ->  Redistribute Motion 2:1  (slice1; segments: 2)
                           Hash Key: b.c1, b.c2
                           ->  Seq Scan on t2 b
       Optimizer: legacy query optimizer
      ```
      4eb65a53
  4. 19 Sep 2018, 1 commit
  5. 31 Aug 2018, 1 commit
    • Replace GPDB versions of some numeric aggregates with upstream's. · 325e6fcd
      Committed by Heikki Linnakangas
      Among other things, this fixes the inaccuracy of integer avg() and sum()
      functions. (i.e. fixes https://github.com/greenplum-db/gpdb/issues/5525)
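
      A hedged illustration of the class of queries affected (names and
      values hypothetical): integer sum() and avg() now use an exact 128-bit
      or numeric transition state instead of a lossy one.

      ```sql
      -- With the upstream aggregates these results are exact, even when the
      -- running sum grows beyond what a float transition value tracks safely.
      SELECT sum(x) AS total, avg(x) AS mean
      FROM generate_series(1, 1000000) AS t(x);
      ```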
      
      The upstream versions are from PostgreSQL 9.6, using the 128-bit math
      from the following commit:
      
      commit 959277a4
      Author: Andres Freund <andres@anarazel.de>
      Date:   Fri Mar 20 10:26:17 2015 +0100
      
          Use 128-bit math to accelerate some aggregation functions.
      
          On platforms where we support 128bit integers, use them to implement
          faster transition functions for sum(int8), avg(int8),
          var_*(int2/int4),stdev_*(int2/int4). Where not supported continue to use
          numeric as a transition type.
      
          In some synthetic benchmarks this has been shown to provide significant
          speedups.
      
          Bumps catversion.
      
          Discussion: 544BB5F1.50709@proxel.se
          Author: Andreas Karlsson
          Reviewed-By: Peter Geoghegan, Petr Jelinek, Andres Freund,
              Oskari Saarenmaa, David Rowley
      325e6fcd
  6. 18 Aug 2018, 1 commit
  7. 03 Aug 2018, 1 commit
  8. 02 Aug 2018, 1 commit
    • Merge with PostgreSQL 9.2beta2. · 4750e1b6
      Committed by Richard Guo
      This is the final batch of commits from PostgreSQL 9.2 development,
      up to the point where the REL9_2_STABLE branch was created, and 9.3
      development started on the PostgreSQL master branch.
      
      Notable upstream changes:
      
      * Index-only scan was included in the batch of upstream commits. It
        allows queries to retrieve data from indexes alone, avoiding heap
        access (a sketch follows this entry).
      
      * Group commit was added to work effectively under heavy load. Previously,
        batching of commits became ineffective as the write workload increased,
        because of internal lock contention.
      
      * A new fast-path lock mechanism was added to reduce the overhead of
        taking and releasing certain types of locks which are taken and released
        very frequently but rarely conflict.
      
      * The new "parameterized path" mechanism was added. It allows inner index
        scans to use values from relations that are more than one join level up
        from the scan. This can greatly improve performance in situations where
        semantic restrictions (such as outer joins) limit the allowed join orderings.
      
      * SP-GiST (Space-Partitioned GiST) index access method was added to support
        unbalanced partitioned search structures. For suitable problems, SP-GiST can
        be faster than GiST in both index build time and search time.
      
      * Checkpoints now are performed by a dedicated background process. Formerly
        the background writer did both dirty-page writing and checkpointing. Separating
        this into two processes allows each goal to be accomplished more predictably.
      
      * Custom plans are now supported for specific parameter values even
        when using prepared statements.
      
      * The FDW API was improved to let FDWs provide multiple access "paths"
        for their tables, allowing more flexibility in join planning.
      
      * The security_barrier option was added for views, to prevent
        optimizations that might allow view-protected data to be exposed to
        users.
      
      * Range data type was added to store a lower and upper bound belonging to its
        base data type.
      
      * CTAS (CREATE TABLE AS / SELECT INTO) is now treated as a utility
        statement. The SELECT query is planned during execution of the
        utility. To conform to this change, GPDB executes the utility
        statement only on the QD and dispatches the plan of the SELECT query
        to the QEs.
      Co-authored-by: Adam Lee <ali@pivotal.io>
      Co-authored-by: Alexandra Wang <lewang@pivotal.io>
      Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
      Co-authored-by: Asim R P <apraveen@pivotal.io>
      Co-authored-by: Daniel Gustafsson <dgustafsson@pivotal.io>
      Co-authored-by: Gang Xiong <gxiong@pivotal.io>
      Co-authored-by: Haozhou Wang <hawang@pivotal.io>
      Co-authored-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
      Co-authored-by: Jesse Zhang <sbjesse@gmail.com>
      Co-authored-by: Jinbao Chen <jinchen@pivotal.io>
      Co-authored-by: Joao Pereira <jdealmeidapereira@pivotal.io>
      Co-authored-by: Melanie Plageman <mplageman@pivotal.io>
      Co-authored-by: Paul Guo <paulguo@gmail.com>
      Co-authored-by: Richard Guo <guofenglinux@gmail.com>
      Co-authored-by: Shujie Zhang <shzhang@pivotal.io>
      Co-authored-by: Taylor Vesely <tvesely@pivotal.io>
      Co-authored-by: Zhenghua Lyu <zlv@pivotal.io>
      4750e1b6
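
      As referenced above, a hedged sketch of an index-only scan (all names
      hypothetical; the actual plan depends on statistics and the visibility
      map):

      ```sql
      CREATE TABLE events (id bigint, payload text);
      CREATE INDEX events_id_idx ON events (id);
      VACUUM events;  -- sets visibility-map bits, enabling index-only scans

      EXPLAIN SELECT id FROM events WHERE id BETWEEN 100 AND 200;
      -- expected to show a line like:
      --   Index Only Scan using events_id_idx on events
      ```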
  9. 25 Apr 2018, 1 commit
    • Fix pushing of quals containing window functions · a802eb2f
      Committed by Bhuvnesh Chaudhary
      Fix qual_is_pushdown_safe_set_operation to correctly resolve the qual
      vars and identify whether there are any window references in the top
      level of the set operation's left or right subqueries. Before commit
      b8002a9, instead of starting with the RTE of the level where the qual
      is attached, we started scanning the RTEs of the subqueries of the set
      operation's left and right args to identify the qual. Because of this,
      the varno didn't match the corresponding RTE, so the quals couldn't be
      resolved to a winref and were incorrectly pushed down. This caused an
      error during execution.
      a802eb2f
  10. 10 Apr 2018, 2 commits
    • Revert "Fix pushing down of quals in subqueries containing window funcs" · cc11b40e
      Committed by Bhuvnesh Chaudhary
      This reverts commit 54ee5b5c.

      In qual_is_pushdown_safe_set_operation, it crashed on
      Assert(subquery).
      cc11b40e
    • Fix pushing down of quals in subqueries containing window funcs · 54ee5b5c
      Committed by Bhuvnesh Chaudhary
      Previously, if a subquery contained window functions, pushing down of
      filters was banned altogether. This commit fixes the issue by pushing
      down filters that are not on the columns projected using window
      functions.

      Relevant tests are added.
      
      Test case:
      After porting the fix to gpdb master, in the below case the filter
      `b = 1` is pushed down:
      ```
      explain select b from (select b, row_number() over (partition by b) from foo) f  where b = 1;
                                                   QUERY PLAN
      ----------------------------------------------------------------------------------------------------
       Gather Motion 3:1  (slice2; segments: 3)  (cost=0.00..1.05 rows=1 width=4)
         ->  Subquery Scan on f  (cost=0.00..1.05 rows=1 width=4)
               ->  WindowAgg  (cost=0.00..1.04 rows=1 width=4)
                     ->  Redistribute Motion 3:3  (slice1; segments: 3)  (cost=0.00..1.03 rows=1 width=4)
                           Hash Key: foo.b
                           ->  Seq Scan on foo  (cost=0.00..1.01 rows=1 width=4)
                                 Filter: b = 1
       Optimizer: legacy query optimizer
      ```
      
      Currently on master, the plan is as below, where the filter is not
      pushed down.
      ```
      explain select b from (select b, row_number() over (partition by b) from foo) f  where b = 1;
                                                   QUERY PLAN
      ----------------------------------------------------------------------------------------------------
       Gather Motion 3:1  (slice2; segments: 3)  (cost=1.04..1.07 rows=1 width=4)
         ->  Subquery Scan on f  (cost=1.04..1.07 rows=1 width=4)
               Filter: f.b = 1
               ->  WindowAgg  (cost=1.04..1.06 rows=1 width=4)
                     Partition By: foo.b
                     ->  Sort  (cost=1.04..1.04 rows=1 width=4)
                           Sort Key: foo.b
                           ->  Redistribute Motion 3:3  (slice1; segments: 3)  (cost=0.00..1.03 rows=1 width=4)
                                 Hash Key: foo.b
                                 ->  Seq Scan on foo  (cost=0.00..1.01 rows=1 width=4)
       Optimizer: legacy query optimizer
      ```
      54ee5b5c
  11. 07 Mar 2018, 1 commit
    • Window merge changed the plan generated by Planner · b2e8652f
      Committed by Venkatesh Raghavan
      Instead of a single Window operator, the Planner (similar to GPORCA)
      now generates a plan with a cascade of Window operations. The sort
      operations needed are based on the order in which the window functions
      are specified in the query. This is an expected behavior change, so the
      fixmes are removed.
      b2e8652f
  12. 01 Feb 2018, 1 commit
  13. 18 Jan 2018, 1 commit
    • Fix whitespace in tests, mostly in expected output. · 06a2bb64
      Committed by Heikki Linnakangas
      Commit ce3153fa, about to be merged from PostgreSQL 9.0 soon, removes
      the -w option from pg_regress's "diff" invocation. That commit will fix
      all the PostgreSQL regression tests to pass without it, but we need to
      also fix all the GPDB tests. That's what this commit does.
      06a2bb64
  14. 10 Jan 2018, 1 commit
  15. 24 Nov 2017, 5 commits
    • Updating plans for olap_window_seq to match upstream · 107d9c34
      Committed by Ekta Khanna
      Signed-off-by: Haisheng Yuan <hyuan@pivotal.io>
      107d9c34
    • Silence errors about ambiguous NULL · 7000ad10
      Committed by Heikki Linnakangas
      These test queries produced this error:

      ERROR:  could not determine polymorphic type because input has type "unknown"

      because it's ambiguous what datatype the NULL has.
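
      A hedged sketch of the usual fix (names hypothetical): give the NULL an
      explicit type so the polymorphic argument can be resolved.

      ```sql
      -- An uncast NULL has type "unknown", which a polymorphic function
      -- cannot resolve; an explicit cast avoids the error.
      SELECT lead(NULL)      OVER (ORDER BY a) FROM t;  -- fails as above
      SELECT lead(NULL::int) OVER (ORDER BY a) FROM t;  -- works
      ```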
      7000ad10
    • Support ordered-set (WITHIN GROUP) aggregates. · fd6212ce
      Committed by Heikki Linnakangas
      This is a backport from PostgreSQL 9.4. It brings back functionality
      that we lost with the rip-out & replace of the window function
      implementation.
      
      I left out all the code and tests related to COLLATE, because we don't have
      that feature. Will need to put that back when we merge collation support, in
      9.1.
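
      A hedged sketch of the re-enabled syntax (table and column names are
      hypothetical):

      ```sql
      -- An ordered-set aggregate and a hypothetical-set aggregate, per
      -- SQL:2008, as brought back by this backport.
      SELECT percentile_cont(0.5) WITHIN GROUP (ORDER BY salary) AS median,
             rank(40000)          WITHIN GROUP (ORDER BY salary) AS rank_of_40k
      FROM employees;
      ```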
      
      commit 8d65da1f
      Author: Tom Lane <tgl@sss.pgh.pa.us>
      Date:   Mon Dec 23 16:11:35 2013 -0500
      
          Support ordered-set (WITHIN GROUP) aggregates.
      
          This patch introduces generic support for ordered-set and hypothetical-set
          aggregate functions, as well as implementations of the instances defined in
          SQL:2008 (percentile_cont(), percentile_disc(), rank(), dense_rank(),
          percent_rank(), cume_dist()).  We also added mode() though it is not in the
          spec, as well as versions of percentile_cont() and percentile_disc() that
          can compute multiple percentile values in one pass over the data.
      
          Unlike the original submission, this patch puts full control of the sorting
          process in the hands of the aggregate's support functions.  To allow the
          support functions to find out how they're supposed to sort, a new API
          function AggGetAggref() is added to nodeAgg.c.  This allows retrieval of
          the aggregate call's Aggref node, which may have other uses beyond the
          immediate need.  There is also support for ordered-set aggregates to
          install cleanup callback functions, so that they can be sure that
          infrastructure such as tuplesort objects gets cleaned up.
      
          In passing, make some fixes in the recently-added support for variadic
          aggregates, and make some editorial adjustments in the recent FILTER
          additions for aggregates.  Also, simplify use of IsBinaryCoercible() by
          allowing it to succeed whenever the target type is ANY or ANYELEMENT.
          It was inconsistent that it dealt with other polymorphic target types
          but not these.
      
          Atri Sharma and Andrew Gierth; reviewed by Pavel Stehule and Vik Fearing,
          and rather heavily editorialized upon by Tom Lane
      
      Also includes this fixup:
      
      commit cf63c641
      Author: Tom Lane <tgl@sss.pgh.pa.us>
      Date:   Mon Dec 23 20:24:07 2013 -0500
      
          Fix portability issue in ordered-set patch.
      
          Overly compact coding in makeOrderedSetArgs() led to a platform dependency:
          if the compiler chose to execute the subexpressions in the wrong order,
          list_length() might get applied to an already-modified List, giving a
          value we didn't want.  Per buildfarm.
      fd6212ce
    • Centralize the logic for detecting misplaced aggregates, window funcs, etc. · fc8f849d
      Committed by Heikki Linnakangas
      This cherry-picks the following commit. This is needed because subsequent
      commits depend on this one.
      
      I took the EXPR_KIND_PARTITION_EXPRESSION value from PostgreSQL v10,
      where it's also used for partition-related things. Seems like a good
      idea, even though our partitioning implementation is completely
      different.
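
      A hedged illustration of the class of errors this centralizes (exact
      wording can vary by version):

      ```sql
      -- A window function in WHERE is a misplaced construct; the expression-
      -- kind check rejects it with a specific error message.
      SELECT * FROM t WHERE row_number() OVER () = 1;
      -- ERROR:  window functions are not allowed in WHERE
      ```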
      
      commit eaccfded
      Author: Tom Lane <tgl@sss.pgh.pa.us>
      Date:   Fri Aug 10 11:35:33 2012 -0400
      
          Centralize the logic for detecting misplaced aggregates, window funcs, etc.
      
          Formerly we relied on checking after-the-fact to see if an expression
          contained aggregates, window functions, or sub-selects when it shouldn't.
          This is grotty, easily forgotten (indeed, we had forgotten to teach
          DefineIndex about rejecting window functions), and none too efficient
          since it requires extra traversals of the parse tree.  To improve matters,
          define an enum type that classifies all SQL sub-expressions, store it in
          ParseState to show what kind of expression we are currently parsing, and
          make transformAggregateCall, transformWindowFuncCall, and transformSubLink
          check the expression type and throw error if the type indicates the
          construct is disallowed.  This allows removal of a large number of ad-hoc
          checks scattered around the code base.  The enum type is sufficiently
          fine-grained that we can still produce error messages of at least the
          same specificity as before.
      
          Bringing these error checks together revealed that we'd been none too
          consistent about phrasing of the error messages, so standardize the wording
          a bit.
      
          Also, rewrite checking of aggregate arguments so that it requires only one
          traversal of the arguments, rather than up to three as before.
      
          In passing, clean up some more comments left over from add_missing_from
          support, and annotate some tests that I think are dead code now that that's
          gone.  (I didn't risk actually removing said dead code, though.)
      
      Author: Heikki Linnakangas <hlinnakangas@pivotal.io>
      Author: Ekta Khanna <ekhanna@pivotal.io>
      fc8f849d
    • Wholesale rip out and replace Window planner and executor code · f62bd1c6
      Committed by Heikki Linnakangas
      This adds some limitations, and removes some functionality that the
      old implementation had. These limitations will be lifted, and missing
      functionality will be added back, in subsequent commits:
      
      * You can no longer have variables in start/end offsets
      
      * RANGE is not implemented (except for UNBOUNDED)
      
      * If you have multiple window functions that require a different sort
        ordering, the planner is not smart about placing them in a way that
        minimizes the number of sorts.
      
      This also lifts some limitations that the GPDB implementation had:
      
      * LEAD/LAG offset can now be negative (see the sketch after this
        list). In qp_olap_windowerr, a lot of queries that used to throw a
        "ROWS parameter cannot be negative" error are now passing. That
        error was an artifact of the way LEAD/LAG were implemented: those
        queries contain window function calls like "LEAD(col1, col2 - col3)",
        and sometimes, with suitable values in col2 and col3, the second
        argument went negative, which caused the error. The new
        implementation of LEAD/LAG is OK with a negative argument.
      
      * Aggregate functions with no prelimfn or invprelimfn are now supported as
        window functions
      
      * Window functions, e.g. rank(), no longer require an ORDER BY. (The
        output will vary from one invocation to another, though, because the
        order is then not well defined. This is more annoying on GPDB than on
        PostgreSQL, because in GPDB the row order tends to vary as the rows
        are spread out across the cluster and arrive at the master in
        unpredictable order.)
      
      * NTILE doesn't require the argument expression to be in PARTITION BY
      
      * A window function's arguments may contain references to an outer query.
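
      A hedged sketch of the lifted LEAD/LAG restriction mentioned above
      (table and column names hypothetical):

      ```sql
      -- The offset expression may now evaluate to a negative value at
      -- runtime; a negative LEAD offset looks backwards, like LAG.
      SELECT col1,
             LEAD(col1, col2 - col3) OVER (ORDER BY col1) AS shifted
      FROM t;
      ```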
      
      This changes the OIDs of the built-in window functions to match
      upstream. Unfortunately, the OIDs had been hard-coded in ORCA, so to
      work around that until those hard-coded values are fixed in ORCA, the
      ORCA translator code contains a hack to map the old OIDs to the new
      ones.
      f62bd1c6
  16. 19 Nov 2017, 1 commit
    • Use the new 8.3 style when printing Access Privileges in psql. · ef408151
      Committed by Heikki Linnakangas
      A long time ago, when updating our psql version to 8.3 (or something
      higher), we decided to keep the old single-line style when displaying
      access privileges, to avoid having to update regression tests. It's
      time to move forward, update the tests, and use the nicer 8.3 style for
      displaying access privileges.
      
      Also, \d on a view no longer prints the View Definition. You need to use
      the verbose \d+ option for that. (I'm not a big fan of that change myself:
      when I want to look at a view I'm almost always interested in the View
      Definition. But let's not second-guess decisions made almost 10 years ago
      in the upstream.)
      
      Note: psql still defaults to the "old-ascii" style when printing multi-line
      fields. The new style was introduced only later, in 9.0, so to avoid
      changing all the expected output files, we should stick to the old style
      until we reach that point in the merge. This commit only changes the style
      for Access privileges, which is different from the multi-line style.
      ef408151
  17. 07 Oct 2017, 1 commit
  18. 25 Sep 2017, 1 commit
    • Remove the concept of window "key levels". · b1651a43
      Committed by Heikki Linnakangas
      It wasn't very useful. ORCA and Postgres both just stack WindowAgg nodes
      on top of each other, and no-one's been unhappy about that, so we might as
      well do that, too. This reduces the difference between GPDB and the upstream
      implementation, and will hopefully make it smoother to switch.
      
      Rename the Window Plan node type to WindowAgg, to match upstream, now
      that it is fairly close to the upstream version.
      b1651a43
  19. 12 Sep 2017, 1 commit
    • Split WindowSpec into separate before and after parse-analysis structs. · 789f443d
      Committed by Heikki Linnakangas
      In the upstream, two different structs are used to represent a window
      definition: WindowDef in the grammar, which is transformed into a
      WindowClause during parse analysis. In GPDB, we've been using the same
      struct, WindowSpec, in both stages. Split it up, to match the upstream.
      
      The representation of the window frame, i.e. "ROWS/RANGE BETWEEN ...",
      was different between the upstream implementation and the GPDB one. We
      now use the upstream frameOptions+startOffset+endOffset representation
      in the raw WindowDef parse node, but it's still converted to the
      WindowFrame representation for the later stages, so WindowClause still
      uses that. I will switch the rest of the codebase over to the upstream
      representation in a separate patch.
      
      Also, refactor WINDOW clause deparsing to be closer to upstream.
      
      One notable difference is that the old WindowSpec.winspec field
      corresponds to the winref field in WindowDef and WindowClause, except
      that the new 'winref' is 1-based, while the old field was 0-based.
      
      Another noteworthy thing is that this forbids specifying "OVER (w
      ROWS/RANGE BETWEEN ...)" if the window "w" already specified a window
      frame, i.e. a different ROWS/RANGE BETWEEN. There was one such case in
      the regression suite, in window_views, and this updates the expected
      output of that to be an error.
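
      A hedged sketch of the now-forbidden case (names hypothetical; the
      exact error wording may differ):

      ```sql
      -- "w" already carries a frame clause, so re-specifying ROWS/RANGE when
      -- referencing it is rejected.
      SELECT sum(x) OVER (w ROWS BETWEEN 1 PRECEDING AND CURRENT ROW)
      FROM t
      WINDOW w AS (ORDER BY x ROWS UNBOUNDED PRECEDING);
      -- ERROR:  cannot copy window "w" because it has a frame clause
      ```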
      789f443d
  20. 08 Sep 2017, 2 commits
    • Remove pessimization of trivial SubqueryScans over Window nodes. · 72490505
      Committed by Heikki Linnakangas
      This hack, to refrain from removing a trivial SubqueryScan if the subnode
      was a Window node, was added back in 2009 along with the test case. I'm
      not sure what the problem was, but this must've been just a quick band-aid
      over whatever the real problem was. It doesn't seem to be needed anymore,
      as everything works without it.
      
      Expand the comment in the test case a little bit to explain what the point
      of the query is. The original commit message said just "Fix MPP-4840", so
      this is all the information I could find about it, unfortunately.
      72490505
    • Refactor window function syntax checks to match upstream. · f819890b
      Committed by Heikki Linnakangas
      Mostly, move the responsibilities of the check_call() function to the
      callers, transformAggregateCall() and transformWindowFuncCall().
      
      This fixes one long-standing, albeit harmless, bug. Previously, you
      got an "Unexpected internal error" if you tried to use a window
      function in the WHERE clause of a DELETE statement, instead of a
      user-friendly syntax error. Add a test case for that.
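
      A hedged sketch of that case (names hypothetical; exact wording may
      differ):

      ```sql
      -- Formerly an "Unexpected internal error"; now rejected with a
      -- user-friendly message by the refactored checks.
      DELETE FROM t WHERE row_number() OVER () > 10;
      -- ERROR:  window functions are not allowed in WHERE
      ```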
      
      Move a few similar tests from 'olap_window_seq' to 'qp_olap_windowerr'.
      Seems like a more appropriate place for them. Also, 'olap_window_seq' has
      an alternative expected output file for ORCA, so it's nice to keep tests
      that produce the same output with or without ORCA out of there. Also add a
      test query for creating an index on an expression containing a window
      function. There was a test for that already, but it was missing parens
      around the expression, and therefore produced an error already in the
      grammar.
      f819890b
  21. 04 Sep 2017, 2 commits
  22. 29 Aug 2017, 1 commit
  23. 17 Jun 2017, 1 commit
    • Set a flag when window's child is exhausted. · 18c61de4
      Committed by Kavinder Dhaliwal
      There is an assert failure when Window's child is a HashJoin operator
      and, while filling its buffer, Window receives a NULL tuple. In this
      case HashJoin will call ExecEagerFreeHashJoin(), since it is done
      returning tuples. However, Window, once it has returned all the tuples
      in its input buffer, will call ExecProcNode() on HashJoin. This causes
      an assert failure in HashJoin stating that ExecHashJoin() should not be
      called if HashJoin's hashtable has already been released.
      
      This commit fixes the above issue by setting a flag in WindowState
      when Window encounters a null tuple while filling its buffer. This flag
      then guards any subsequent call to ExecProcNode() from
      fetchCurrentRow().
      18c61de4
  24. 06 Jun 2016, 1 commit
    • Backport b153c092 from PostgreSQL 8.4 · 78b0a42e
      Committed by Heikki Linnakangas
      This is a partial backport of a larger body of work which has also
      already been partially backported.
      
      Remove the GPDB-specific "breadcrumbs" mechanism from the parser. It is
      made obsolete by the upstream mechanism. We lose context information from
      a few errors, which is unfortunate, but seems acceptable. Upstream doesn't
      have context information for those errors either.
      
      The backport was originally done by Daniel Gustafsson, on top of the
      PostgreSQL 8.3 merge. I tweaked it to apply it to master, before the
      merge.
      
      Upstream commit:
      
        commit b153c092
        Author: Tom Lane <tgl@sss.pgh.pa.us>
        Date:   Mon Sep 1 20:42:46 2008 +0000
      
          Add a bunch of new error location reports to parse-analysis error messages.
          There are still some weak spots around JOIN USING and relation alias lists,
          but most errors reported within backend/parser/ now have locations.
      78b0a42e
  25. 02 Jun 2016, 1 commit
  26. 03 Mar 2016, 1 commit
  27. 12 Feb 2016, 1 commit
    • Fixing buffer overwrite in window operator · 071eb2c2
      Committed by Nikos Armenatzoglou
      We have a list of window values, one per level, that the window
      functions of each level process. We extract these window values from
      the tuple store. To do the extraction, we use a temporary buffer,
      called serial_array, to serialize and deserialize the tuples. During
      this process, we obtain winvalues (the list of input values for a
      level).

      If the data type of the winvalues for a level is byref, the level ends
      up holding a pointer into serial_array. However, we had only one
      serial_array for the entire window operator, so with more than one
      byref level we could overwrite serial_array while processing another
      level, corrupting the earlier level's byref datum pointers. To fix
      this, we now have one serial_array per level.
      Signed-off-by: Foyzur Rahman <foyzur@gmail.com>
      071eb2c2
  28. 06 Dec 2015, 1 commit
  29. 28 Oct 2015, 1 commit