1. 15 Jun 2020 (4 commits)
    • Retry more for replication synchronization waiting to avoid isolation2 test flakiness. (#10281) · ca360700
      Committed by Paul Guo
      Some test cases have been failing because they retry too few times. Let's increase
      the retry counts and also create a common UDF for reuse.
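
      A minimal sketch of the kind of shared wait-and-retry UDF described here; the
      function name, polling predicate, and retry budget are hypothetical, not the
      exact ones the commit adds:

          -- Hypothetical helper: poll until every walsender reports a
          -- streaming state, retrying up to max_retries times.
          create or replace function wait_for_mirror_sync(max_retries int)
          returns boolean as $$
          declare
              i int;
          begin
              for i in 1 .. max_retries loop
                  if not exists (select 1 from gp_stat_replication
                                 where state is distinct from 'streaming') then
                      return true;        -- all mirrors are in sync
                  end if;
                  perform pg_sleep(0.5);  -- back off before the next probe
              end loop;
              return false;               -- let the caller report the timeout
          end;
          $$ language plpgsql;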
      Reviewed-by: Hubert Zhang <hzhang@pivotal.io>
      Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>
    • Fix flakiness of "select 1" output after master reset due to injected panic fault before_read_command (#10275) · 02ad1fc4
      Committed by Paul Guo
      
      Several tests inject a panic in before_read_command to trigger a master reset.
      Previously we ran "select 1" after the fault-injection query to verify the reset,
      but the output is sometimes not deterministic, i.e. sometimes we do not see the line

      PANIC:  fault triggered, fault name:'before_read_command' fault type:'panic'

      This was actually observed in the crash_recovery_redundant_dtx test, per its commit
      message and test comment.  That test ignores the output of "select 1", but we probably
      still want the output to verify that the fault is encountered.
      
      It's still mysterious why the PANIC message is sometimes missing. I spent some
      time digging but reckon I cannot root-cause it in a short time. One guess
      is that the PANIC message was sent to the frontend in errfinish(), but the
      kernel-buffered data was dropped after the abort() triggered by ereport(PANIC).
      Another guess is something wrong related to the libpq protocol (not saying it's a
      libpq bug).  In any case, this does not deserve much more time for the tests
      alone, so simply mask the PANIC message to make the test result deterministic
      without affecting the purpose of the tests.
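
      The injection pattern these tests use, sketched below with the usual
      isolation2 idiom for targeting the master; the exact test statements may differ:

          -- Panic the master (content = -1) so the next command resets it.
          select gp_inject_fault('before_read_command', 'panic', dbid)
          from gp_segment_configuration
          where content = -1 and role = 'p';

          -- The verification query; its PANIC line is now masked in the output.
          select 1;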
      Reviewed-by: Hubert Zhang <hzhang@pivotal.io>
    • Move to a resource group with memory_limit 0 · 37a19376
      Committed by xiong-gang
      When moving a query to a resource group whose memory_limit is 0, the available
      memory is whatever global shared memory is currently available.
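
      A sketch of the scenario, with a hypothetical group name and pid;
      pg_resgroup_move_query() moves a running query between groups:

          -- A group with memory_limit 0: queries moved into it draw from
          -- global shared memory rather than a group-local quota.
          create resource group rg_shared_only
              with (cpu_rate_limit = 20, memory_limit = 0);

          -- Move a running session's query into it; the pid comes from
          -- pg_stat_activity.
          select pg_resgroup_move_query(12345, 'rg_shared_only');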
    • Fix a recursive AbortTransaction issue · b5c4fdc0
      Committed by xiong-gang
      When an error happens after ProcArrayEndTransaction, control recurses back into
      AbortTransaction; we need to make sure this does not generate an extra WAL record
      or fail the assertions.
  2. 13 Jun 2020 (2 commits)
  3. 12 Jun 2020 (2 commits)
    • Create external table fdw extension under gpcontrib. (#10187) · d86f32e5
      Committed by (Jerome)Junfeng Yang
      Remove pg_exttable.h since the catalog no longer exists.
      Move the function declarations from pg_exttable.h into external.h.
      Extract related code into external.c, which holds all the code that
      cannot be moved into an external table fdw extension.
      
      Also, move the external table orca interface into external.c as a workaround;
      we may provide an orca fdw routine in the future.
      
      Extract the external table's execution logic into the external table fdw
      extension.
      
      Create the gp_exttable_fdw extension during gpinitsystem to allow
      creating system external tables.
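
      A usage sketch; the location URL is a placeholder:

          -- gpinitsystem now creates this extension; shown here for a manually
          -- initialized database.
          create extension if not exists gp_exttable_fdw;

          -- External table DDL is unchanged; execution now goes through the fdw.
          create external table ext_events (id int, payload text)
          location ('gpfdist://etlhost:8081/events.txt')
          format 'text' (delimiter '|');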
    • a4e230b2
  4. 11 Jun 2020 (5 commits)
    • Revert "Fix flaky test exttab1" · f538f4b6
      Committed by Hubert Zhang
      This reverts commit 026e4595, which breaks the pxf test cases.
      We need to handle that first.
    • Fix flaky test terminate_in_gang_creation · 63b5adf9
      Committed by Hubert Zhang
      The test case restarts all primaries and expects the old session
      to fail on its next query, since gangs are cached.
      But the restart may last more than 18s, which is the maximum time
      idle QEs may live. In that case, the next query in the old
      session just fetches a new gang without the expected errors.
      Set gp_vmem_idle_resource_timeout to 0 to fix this flaky test.
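
      The test-side fix, sketched; 0 disables idle-resource reaping, so the cached
      gang outlives the restart window and the next query fails as the test expects:

          set gp_vmem_idle_resource_timeout = 0;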
      Reviewed-by: Paul Guo <pguo@pivotal.io>
    • Fix flaky test exttab1 · 026e4595
      Committed by Hubert Zhang
      The flaky case happens when selecting from an external table created with the
      option "fill missing fields". Attaching gdb to the QE shows that the value is
      sometimes not false on the QE. In ProcessCopyOptions we used intVal(defel->arg)
      to parse the boolean value, which is not correct; use defGetBoolean
      instead.
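
      A sketch of the table shape that exercised the bug (host and file are
      placeholders); "fill missing fields" is the boolean option that was
      mis-parsed:

          -- Rows shorter than the column list get NULLs for the missing
          -- trailing columns when "fill missing fields" is set.
          create external table ext_short_rows (a int, b text, c text)
          location ('gpfdist://etlhost:8081/short_rows.txt')
          format 'text' (delimiter '|' fill missing fields);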
    • Add a new line feed and fix a bad file name · f281ac17
      Committed by J·Y
    • docs - graph analytics new page (#10138) · 6d7b949c
      Committed by Lena Hunter
      * clarifying pg_upgrade note
      
      * graph edits
      
      * graph analytics updates
      
      * menu edits and code spacing
      
      * graph further edits
      
      * insert links for modules
  5. 10 Jun 2020 (4 commits)
  6. 09 Jun 2020 (2 commits)
  7. 08 Jun 2020 (3 commits)
    • Remove unused variable and rel_is_external_table() call. · 71e50c75
      Committed by Heikki Linnakangas
      They were left unused by commit e4b499aa, which removed the pg_exttable catalog
      table.
    • Fix lateral PANIC issue when subquery contains limit or group by. · 8d1bb5a8
      Committed by Zhenghua Lyu
      The previous commit 62579728 fixed a lateral panic issue but did
      not handle all the bad cases, because it only checked whether the query
      tree contains a limit clause. A bad case, for example: if the subquery
      is like `q1 union all (q2 limit 1)`, the whole query tree
      does not contain a limit clause.
      
      Another bad case: the lateral subquery may contain a group by,
      like:
      
          select * from t1_lateral_limit t1 cross join lateral
          (select (c).x+t2.a, sum(t2.a+t2.b) from t2_lateral_limit t2
           group by (c).x+t2.a)x;
      
      When planning the lateral subquery we do not know where the param
      is in the subquery's query tree, so it is complicated
      to resolve this issue precisely and efficiently.
      
      This commit adopts a simple method to fix the panic: it just
      checks the subquery's query tree for any group-by or limit clause
      and, if one is present, forces gathering each relation and
      materializing it. This is not the best plan we might get, but let's
      make it correct first; in the future we should seriously consider
      how to fully and efficiently support lateral.
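
      The union-all bad case written out as a runnable sketch against the tables
      from the example above (referencing t1.a is an assumption about the schema):

          -- The limit hides inside one branch of the union all, so checking
          -- only the top-level query tree for a limit clause misses it.
          select *
          from t1_lateral_limit t1 cross join lateral
               (select t2.a + t1.a as s from t2_lateral_limit t2
                union all
                (select t2.a + t1.a from t2_lateral_limit t2 limit 1)) x;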
    • Retire guc gp_session_role (#9396) · f6297b96
      Committed by Paul Guo
      Use only the guc gp_role now, folding the functionality of guc gp_session_role
      into it. Previously we had both gucs. The difference between the two (copied
      from a code comment) is:
      
       * gp_session_role
       *
       * - does not affect the operation of the backend, and
       * - does not change during the lifetime of PostgreSQL session.
       *
       * gp_role
       *
       * - determines the operating role of the backend, and
       * - may be changed by a superuser via the SET command.
      
      This is not friendly for coding. For example, Gp_role and Gp_session_role are
      both set to GP_ROLE_DISPATCH in the postmaster and many aux processes on all
      nodes (even QE nodes) of a cluster, so to tell a QD postmaster from a QE
      postmaster, current gpdb uses an additional -E option in the postmaster
      arguments. This makes it confusing for developers writing role-related code,
      given we have three related variables.  Also some related code is even buggy
      now (e.g. 'set gp_role' even FATAL quits).
      
      With this patch we just have gp_role now. Some changes which might be
      interesting in the patch are:
      
      1. For postmaster, we should specify '-c gp_role=' (e.g. via pg_ctl argument) to
         determine the role else we assume the utility role.
      
      2. For stand-alone backend, utility role is enforced (no need to specify by
         users).
      
      3. Could still connect to QE/QD nodes in utility mode with PGOPTIONS, etc. as
         before (see the sketch after this list).
      
      4. Remove the '-E' gpdb hacking and align the '-E' usage with upstream.
      
      5. Move pm_launch_walreceiver out of the fts-related shmem, given the latter is
         not used on QEs.
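
      Item 3 in practice, as a sketch (host and port are placeholders):

          -- Connect to a single segment in utility mode, now spelled with
          -- gp_role instead of gp_session_role:
          --   PGOPTIONS='-c gp_role=utility' psql -h seghost -p 6001 postgres
          -- Once connected, the role is visible (and, per this patch, settable
          -- only by a superuser):
          show gp_role;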
      Reviewed-by: Bhuvnesh Chaudhary <bchaudhary@pivotal.io>
      Reviewed-by: Gang Xiong <gxiong@pivotal.io>
      Reviewed-by: Hao Wu <gfphoenix78@gmail.com>
      Reviewed-by: Yandong Yao <yyao@pivotal.io>
  8. 06 Jun 2020 (1 commit)
  9. 05 Jun 2020 (7 commits)
    • docs - update some xrefs and upgrade info (#10237) · 07c2029c
      Committed by Lisa Owen
    • Fix wrong data type introduced in bf36fb3b · a251127b
      Committed by Asim R P
    • Fix user specification in matview_ao test · 24df496d
      Committed by bzhaoopenstack
      This patch creates a dedicated test role to execute the test.
    • Fix flaky test in recoverseg_from_file · 1ee9998b
      Committed by Hubert Zhang
      1. After stopping the primary with content=1, we should check promotion
      status with a 1U: connection.
      2. After manually updating the dbid, we should trigger an fts probe and
      wait for the mirror promotion as well (sketched below).
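
      Point 2 as a sketch; gp_request_fts_probe_scan() forces an immediate FTS
      probe, after which the test can poll for the promotion:

          -- Force an FTS probe instead of waiting for the next scheduled scan.
          select gp_request_fts_probe_scan();

          -- Poll until the mirror for content 1 has been promoted to primary.
          select role, preferred_role, status
          from gp_segment_configuration
          where content = 1;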
      Reviewed-by: Paul Guo <pguo@pivotal.io>
    • Support "NDV-preserving" function and op property (#10247) · a4362cba
      Committed by Hans Zeller
      Orca uses this property for cardinality estimation of joins.
      For example, a join predicate foo join bar on foo.a = upper(bar.b)
      will have a cardinality estimate similar to foo join bar on foo.a = bar.b.
      
      Other functions, like foo join bar on foo.a = substring(bar.b, 1, 1)
      won't be treated that way, since they are more likely to have a greater
      effect on join cardinalities.
      
      Since this is specific to ORCA, we use logic in the translator to determine
      whether a function or operator is NDV-preserving. Right now we consider
      a very limited set of operators; we may add more at a later time.
      
      Let's assume that we join tables R and S and that f is a function or
      expression that refers to a single column and does not preserve
      NDVs. Let's also assume that p is a function or expression that also
      refers to a single column and that does preserve NDVs:
      
      join predicate       card. estimate                         comment
      -------------------  -------------------------------------  -----------------------------
      col1 = col2          |R| * |S| / max(NDV(col1), NDV(col2))  build an equi-join histogram
      f(col1) = p(col2)    |R| * |S| / NDV(col2)                  use NDV-based estimation
      f(col1) = col2       |R| * |S| / NDV(col2)                  use NDV-based estimation
      p(col1) = col2       |R| * |S| / max(NDV(col1), NDV(col2))  use NDV-based estimation
      p(col1) = p(col2)    |R| * |S| / max(NDV(col1), NDV(col2))  use NDV-based estimation
      otherwise            |R| * |S| * 0.4                        this is an unsupported pred
      Note that adding casts to these expressions is ok, as well as switching left and right side.
      
      Here is a list of expressions that we currently treat as NDV-preserving:
      
      coalesce(col, const)
      col || const
      lower(col)
      trim(col)
      upper(col)
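
      For instance, given hypothetical tables foo(a) and bar(b), this predicate now
      gets the max-NDV estimate from the table above instead of the default 0.4
      selectivity:

          -- upper() is NDV-preserving, so the cardinality estimate is
          -- |foo| * |bar| / max(NDV(foo.a), NDV(bar.b)), as for plain equality.
          select *
          from foo
          join bar on foo.a = upper(bar.b);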
      
      One more note: We need the NDVs of the inner side of Semi and
      Anti-joins for cardinality estimation, so only normal columns and
      NDV-preserving functions are allowed in that case.
      
      This is a port of these GPDB 5X and GPOrca PRs:
      https://github.com/greenplum-db/gporca/pull/585
      https://github.com/greenplum-db/gpdb/pull/10090
      
      This is take 2, after reverting the first attempt due to a merge conflict that
      caused a test to fail.
    • Docs - update gpcc compatibility info for 6.8 · 560ffcb1
      Committed by David Yozie
    • docs - add pxf v5.12 to supported platforms (#10235) · fe3464af
      Committed by Lisa Owen
  10. 04 Jun 2020 (3 commits)
    • Fix the logic in pg_lock_status() to keep track of which row to return. · 90634b6a
      Committed by Heikki Linnakangas
      The logic with 'whichrow' and 'whichresultset' introduced in commit
      991273b2 was slightly wrong. The last row (or first? not sure) of each
      result set was returned twice, and the corresponding number of rows at the
      end of the last result set were omitted.
      
      For example:
      
      postgres=# select gp_segment_id, * from pg_locks;
       gp_segment_id |  locktype  | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction |  pid  |      mode       | granted | fastpath | mppsessionid | mppiswriter | gp_segment_id
      ---------------+------------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+-------+-----------------+---------+----------+--------------+-------------+---------------
                  -1 | relation   |    13200 |    11869 |      |       |            |               |         |       |          | 1/8                | 28748 | AccessShareLock | t       | t        |            6 | t           |            -1
                  -1 | virtualxid |          |          |      |       | 1/8        |               |         |       |          | 1/8                | 28748 | ExclusiveLock   | t       | t        |            6 | t           |            -1
                   0 | virtualxid |          |          |      |       | 1/7        |               |         |       |          | 1/7                | 28750 | ExclusiveLock   | t       | t        |            6 | t           |             0
                   1 | virtualxid |          |          |      |       | 1/7        |               |         |       |          | 1/7                | 28751 | ExclusiveLock   | t       | t        |            6 | t           |             1
                   1 | virtualxid |          |          |      |       | 1/7        |               |         |       |          | 1/7                | 28751 | ExclusiveLock   | t       | t        |            6 | t           |             1
      (5 rows)
      
      Note how the last row is duplicated. And the row for 'virtualxid' from
      segment 2 is omitted.
      
      I noticed this while working on the PostgreSQL v12 merge: the 'lock'
      regression test was failing because of this. I'm not entirely sure why we
      haven't seen failures on 'master'. I think it's pure chance that none of
      the lines that the test prints have been omitted on 'master'. But because
      that test has already been failing, I don't feel the need to add more tests for this.
    • Revert "Support "NDV-preserving" function and op property (#10225)" · 898e66b8
      Committed by Jesse Zhang
      Regression test "gporca" started failing after merging d565edac.
      
      This reverts commit d565edac.
    • Support "NDV-preserving" function and op property (#10225) · d565edac
      Committed by Hans Zeller
      Orca uses this property for cardinality estimation of joins.
      For example, a join predicate foo join bar on foo.a = upper(bar.b)
      will have a cardinality estimate similar to foo join bar on foo.a = bar.b.
      
      Other functions, like foo join bar on foo.a = substring(bar.b, 1, 1)
      won't be treated that way, since they are more likely to have a greater
      effect on join cardinalities.
      
      Since this is specific to ORCA, we use logic in the translator to determine
      whether a function or operator is NDV-preserving. Right now we consider
      a very limited set of operators; we may add more at a later time.
      
      Let's assume that we join tables R and S and that f is a function or
      expression that refers to a single column and does not preserve
      NDVs. Let's also assume that p is a function or expression that also
      refers to a single column and that does preserve NDVs:
      
      join predicate       card. estimate                         comment
      -------------------  -------------------------------------  -----------------------------
      col1 = col2          |R| * |S| / max(NDV(col1), NDV(col2))  build an equi-join histogram
      f(col1) = p(col2)    |R| * |S| / NDV(col2)                  use NDV-based estimation
      f(col1) = col2       |R| * |S| / NDV(col2)                  use NDV-based estimation
      p(col1) = col2       |R| * |S| / max(NDV(col1), NDV(col2))  use NDV-based estimation
      p(col1) = p(col2)    |R| * |S| / max(NDV(col1), NDV(col2))  use NDV-based estimation
      otherwise            |R| * |S| * 0.4                        this is an unsupported pred
      Note that adding casts to these expressions is ok, as well as switching left and right side.
      
      Here is a list of expressions that we currently treat as NDV-preserving:
      
      coalesce(col, const)
      col || const
      lower(col)
      trim(col)
      upper(col)
      
      One more note: We need the NDVs of the inner side of Semi and
      Anti-joins for cardinality estimation, so only normal columns and
      NDV-preserving functions are allowed in that case.
      
      This is a port of these GPDB 5X and GPOrca PRs:
      https://github.com/greenplum-db/gporca/pull/585
      https://github.com/greenplum-db/gpdb/pull/10090
  11. 03 Jun 2020 (6 commits)
    • Remove unnecessary projections from duplicate sensitive Distribute(s) in ORCA · c02fd5a1
      Committed by Shreedhar Hardikar
      Duplicate sensitive HashDistribute Motions generated by ORCA get
      translated to Result nodes with hashFilter cols set. However, if the
      Motion needs to distribute based on a complex expression (rather than
      just a Var), the expression must be added into the targetlist of the
      Result node and then referenced in hashFilterColIdx.
      
      However, this can affect other operators above the Result node. For
      example, a Hash operator expects the targetlist of its child node to
      contain only elements that are to be hashed. Additional expressions here
      can cause issues with memtuple bindings that can lead to errors.
      
      (E.g. the attached test case, when run without our fix, gives the
      error: "invalid input syntax for integer:")
      
      This PR fixes the issue by adding an additional Result node on top of
      the duplicate sensitive Result node to project only the elements from
      the original targetlist in such cases.
    • Squash me: address concerns in code review · bf36fb3b
      Committed by Asim R P
      Remember whether the select call was interrupted, and act on it after emitting
      debug logs and checking for cancel requests from the dispatcher.
    • Check errno as early as possible · 9fd138da
      Committed by Asim R P
      Previously, the result of the select() system call and the errno set by it were
      checked only after performing several function calls, including checking
      for interrupts and checkForCancelFromQD.  That made it very likely for
      errno to change, losing the original value that was set by
      select().
      
      This patch fixes it so that errno is checked immediately after the
      system call.  This should address intermittent failures in CI with
      error messages like this:
      
          ERROR","58M01","interconnect error: select: Success"
    • Add "FILL_MISSING_FIELDS" option for gpload. · cb76c301
      Committed by Wen Lin
    • Allow CLUSTER on append-optimized tables · 179feb77
      Committed by Andrey Borodin
      Cluster on AO tables is implemented by sorting the entire AO table with the
      tuplesort framework, according to a btree index defined on the table.

      A faster way to cluster is to scan the tuples in index order, but this
      requires index-scan support.  Append-optimized tables do not currently
      support index scans; when that support is added, the cluster
      operation can be enhanced accordingly.
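
      A minimal usage sketch (table and index names hypothetical):

          -- An append-optimized table with a btree index to cluster by.
          create table ao_sales (id int, amount numeric)
              with (appendonly = true) distributed by (id);
          create index ao_sales_id_idx on ao_sales (id);

          -- Now permitted: rewrites the table in index order via tuplesort.
          cluster ao_sales using ao_sales_id_idx;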
      
      Author: Andrey Borodin <amborodin@acm.org>
      Reviewed and slightly edited by: Asim R P <pasim@vmare.com>
      
      Merges GitHub PR #9996
    • Refactoring the DbgPrint and OsPrint methods (#10149) · b3fdede6
      Committed by Hans Zeller
      * Make DbgPrint and OsPrint methods on CRefCount
      
      Create a single DbgPrint() method on the CRefCount class. Also create
      a virtual OsPrint() method, making some objects derived from CRefCount
      easier to print from the debugger.
      
      Note that not all the OsPrint methods had the same signatures; some
      additional OsPrintxxx() methods have been generated to handle that.
      
      * Making print output easier to read, print some stuff on demand
      
      Required columns in required plan properties are always the same
      for a given group. Also, equivalent expressions in required distribution
      properties are important in certain cases, but in most cases they
      disrupt the display and make it harder to read.
      
      Added two traceflags, EopttracePrintRequiredColumns and
      EopttracePrintEquivDistrSpecs that have to be set to print this
      information. If you want to go back to the old display, use these
      options when running gporca_test: -T 101016 -T 101017
      
      * Add support for printing alternative plans
      
      A new method, CEngine::DbgPrintExpr(), can be called from
      COptimizer::PexprOptimize to allow printing the best plan
      for different contexts. This is only enabled in debug builds.
      
      To use this:
      
      - run an MDP using gporca_test, using a debug build
      - print out memo after optimization (-T 101006 -T 101010)
      - set a breakpoint near the end of COptimizer::PexprOptimize()
      - if, after looking at the contents of memo, you want to see
        the optimal plan for context c of group g, do the following:
        p eng.DbgPrintExpr(g, c)
      
      You could also get the same info from the memo printout, but it
      would take a lot longer.
  12. 02 Jun 2020 (1 commit)
    • Clarify code that parses DISTRIBUTED BY, with new DistributionKeyElem node. · 506ebada
      Committed by Heikki Linnakangas
      Introduce a new DistributionKeyElem to hold each element in the list of
      columns (and optionally their opclasses) in DISTRIBUTED BY (<col> [opclass],
      ...) syntax. Previously, we have used IndexElem, which conveniently also
      holds a column name and its opclass, but it was not a very good fit because
      IndexElem also contains many other fields that are not needed. Using a
      new node type specifically for DISTRIBUTED BY makes the code dealing with
      distribution key lists clearer. To compare, PostgreSQL v10 uses a struct
      called PartitionElem for similar purposes in the PARTITION BY clause. (But that
      is not to be confused with the PartitionElem struct in GPDB 6 and below,
      which is also related to partitioning syntax but is quite different!)
      
      Unlike IndexElem, the new node type includes a 'location' field, to
      provide error position information in error messages. This can be seen in
      the error message changes in 'gp_create_table' test.
      
      While we're at it, remove the quotes around DISTRIBUTED BY in the
      "column <col> named in DISTRIBUTED BY clause does not exist" message, for
      consistency with the same error message thrown with CREATE TABLE AS from the
      setQryDistributionPolicy() function, and with the "duplicate column in
      DISTRIBUTED BY clause" error. The error thrown in the CTAS case was not
      covered by existing tests, so also add a test for that.
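
      The newly covered CTAS error path, sketched:

          -- CREATE TABLE AS with a DISTRIBUTED BY column that is missing from
          -- the query's output now reports the unquoted message.
          create table t_ctas as
          select 1 as a
          distributed by (no_such_col);
          -- ERROR:  column "no_such_col" named in DISTRIBUTED BY clause does not exist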
      Reviewed-by: Zhenghua Lyu <zlv@pivotal.io>