1. 21 Sep 2018, 1 commit
  2. 20 Sep 2018, 6 commits
    • Fix ao_upgrade tests and add them to the schedule · 2b96acd2
      Committed by Jacob Champion
      9.1 added a new, more compact "short" format to the numeric datatype.
      This format wasn't handled by the ao_upgrade test in isolation2, so it
      failed -- but the pipeline was still green because I forgot to add the
      new test to the schedule in 54895f54.
      
      To fix the issue, add a new helper which will force any Numeric back to
      the legacy long format, and call that from convertNumericToGPDB4() in
      the ao_upgrade test. And add the test to the schedule, so we don't have
      to do this again.
      Co-authored-by: Kalen Krempely <kkrempely@pivotal.io>
    • Fix and clean up db/relation/tablespace size functions. · f8a80aeb
      Committed by Heikki Linnakangas
      This fixes several small bugs:
      
      - Schema-qualify the functions in all queries.
      
      - Quote database and tablespace names correctly in the dispatched
        queries.
      
      - In the variants that take OID, also dispatch the OID rather than the
        resolved name. This avoids having to deal with quoting schema and table
        names in the query, and seems like the right thing to do anyway.
      
      - Dispatch the pg_table_size() and pg_indexes_size() variants. These were
        added in PostgreSQL 9.0, but we missed modifying them in the merge, the
        same way that we have modified all the other variants.
      
      Also, refactor the internal function used to dispatch the pg_*_size()
      calls to use CdbDispatchCommand directly, instead of using SPI and the
      gp_dist_random('gp_id') trick. Seems more straightforward, although I
      believe that trick should've worked, too.
      
      Add tests. We didn't have any bespoke tests for these functions, although
      we used some of the variants in other tests.
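      The quoting fix above boils down to the usual identifier-quoting rule. A minimal sketch (this mirrors PostgreSQL's quote_identifier behavior for the always-quote case; it is not the actual helper used in the patch):

```python
def quote_identifier(name: str) -> str:
    """Double-quote an SQL identifier, doubling any embedded quotes."""
    return '"' + name.replace('"', '""') + '"'

# A MixedCase or quote-containing database/tablespace name survives
# interpolation into the dispatched query text only when quoted this way.
quoted = quote_identifier('my"db')
```

      Dispatching the OID instead of the resolved name, as the commit does for the OID-taking variants, sidesteps this problem entirely.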
      Reviewed-by: Daniel Gustafsson <dgustafsson@pivotal.io>
    • Remove the printing of overflowed value, when scale is exceeded. · 4b31e46f
      Committed by Heikki Linnakangas
      Printing the value was added in GPDB, back in 2007. The commit message of
      that change (in the historical pre-open-sourcing git repository) said:
      
          Merge forward from Release-3_0_0-branch. Update comment block.
          Tidy numeric_to_pos_int8_trunc.
      
      That wasn't very helpful...
      
      Arguably, printing the value can be useful, but if so, we should submit
      this change to upstream. I don't think it's worth the trouble, though, so
      I suggest that we just revert this to the way it is in the upstream. The
      reason I'm doing this now is that this caused merge conflicts in the 9.3
      merge, which we're working on right now. We could probably fix the conflict
      in a way that keeps the extra message, but it's simpler to just drop it.
    • Remove some leftover initGpmonPkt* functions. · 46b8293f
      Committed by Heikki Linnakangas
      Commit c1690010 removed most of these, but missed these few in GPDB-
      specific executor nodes. These are no longer needed, just like all the
      ones that were removed in commit c1690010.
    • Fix subquery with column alias `zero` producing wrong result (#5790) · b70b0086
      Committed by BaiShaoqi
      Reviewed by Heikki, who also contributed ideas.
    • Enable 'gp_types' test. · a90e92d9
      Committed by Heikki Linnakangas
      The test was added in commit 3672f912, but I forgot to add it to the
      schedule. Oops.
  3. 19 Sep 2018, 6 commits
    • Fix "could not find pathkey item to sort" error with MergeAppend plans. · 1722adb8
      Committed by Heikki Linnakangas
      When building a Sort node to represent the ordering that is preserved
      by a Motion node, in make_motion(), the call to make_sort_from_pathkeys()
      would sometimes fail with "could not find pathkey item to sort". This
      happened when the ordering was over a UNION ALL operation. When building
      Motion nodes for MergeAppend subpaths, the path keys that represented the
      ordering referred to the items in the append rel's target list, not the
      subpaths. In create_merge_append_plan(), where we do a similar thing for
      each subpath, we correctly passed the 'relids' argument to
      prepare_sort_from_pathkeys(), so that prepare_sort_from_pathkeys() can
      match the target list entries of the append relation with the entries of
      the subpaths. But when creating the Motion nodes for each subpath, we
      were passing NULL as 'relids' (via make_sort_from_pathkeys()).
      
      At a high level, the fix is straightforward: we need to pass the correct
      'relids' argument to prepare_sort_from_pathkeys(), in
      cdbpathtoplan_create_motion_plan(). However, the current code structure
      makes that not so straightforward, so this required some refactoring of
      the make_motion() and related functions:
      
      Previously, make_motion() and make_sorted_union_motion() would take a path
      key list as argument, to represent the ordering, and it called
      make_sort_from_pathkeys() to extract the sort columns, operators etc.
      After this patch, those functions take arrays of sort columns, operators,
      etc. directly as arguments, and the caller is expected to do the call to
      make_sort_from_pathkeys() to get them, or build them through some other
      means. In cdbpathtoplan_create_motion_plan(), call
      prepare_sort_from_pathkeys() directly, rather than the
      make_sort_from_pathkeys() wrapper, so that we can pass the 'relids'
      argument. Because prepare_sort_from_pathkeys() is marked as 'static', move
      cdbpathtoplan_create_motion_plan() from cdbpathtoplan.c to createplan.c,
      so that it can call it.
      
      Add test case. It's a slightly reduced version of a query that we already
      had in 'olap_group' test, but seems better to be explicit. Revert the
      change in expected output of 'olap_group', made in commit 28087f4e,
      which memorized the error in the expected output.
      
      Fixes https://github.com/greenplum-db/gpdb/issues/5695.
      Reviewed-by: Pengzhou Tang <ptang@pivotal.io>
      Reviewed-by: Melanie Plageman <mplageman@pivotal.io>
    • Simplify 'partindex_test', by using OUT parameters and pg_get_expr(). · fcc7898b
      Committed by Heikki Linnakangas
      OUT parameters make calling the function much less awkward, as you don't
      need to specify the output columns in every invocation. Also, change the
      datatype of a few columns to pg_node_tree, so that you can use pg_get_expr()
      to pretty-print them.
      
      I'm doing this to hide the trivial differences in the internal string
      representation of expressions. This came up in the 9.3 merge, which added
      a new field to FuncExpr again, which would cause the test to fail. Using
      pg_get_expr() to print the fields in human-readable format, which is not
      sensitive to small changes like that, will avoid that problem.
    • Fix Greenplum acronym in comment · f879cc9b
      Committed by Daniel Gustafsson
      s/GDPB/GPDB/
    • Attempt to make olap_window_seq query deterministic · e6af34f0
      Committed by Daniel Gustafsson
      One of the queries in the olap_window_seq suite was non-deterministic, as
      it was ordering on a column which had duplicate values. Switch to the
      _ord version of the table and use the sequence column, which yields
      deterministic results.
      Reviewed-by: Venkatesh Raghavan <vraghavan@pivotal.io>
    • Update optimizer output test files and bump orca to v2.75.0 · 35697528
      Committed by Bhuvnesh Chaudhary
      This commit adds the tests corresponding to the change described below,
      introduced in PQO. For an Insert on a randomly distributed relation, a
      random / redistribute motion must exist to ensure randomness of the
      inserted data; otherwise all the data will be skewed onto one segment.
      
      Consider the below scenario:
      Scenario 1:
      ```
      CREATE TABLE t1_random (a int) DISTRIBUTED RANDOMLY;
      INSERT INTO t1_random VALUES (1), (2);
      SET optimizer_print_plan=on;
      
      EXPLAIN INSERT INTO t1_random VALUES (1), (2);
      Physical plan:
      +--CPhysicalDML (Insert, "t1_random"), Source Columns: ["column1" (0)], Action: ("ColRef_0001" (1))   rows:2   width:4  rebinds:1   cost:0.020884   origin: [Grp:1, GrpExpr:2]
         +--CPhysicalComputeScalar
            |--CPhysicalMotionRandom  ==> Random Motion Inserted (2)
            |  +--CPhysicalConstTableGet Columns: ["column1" (0)] Values: [(1); (2)] ==> Delivers Universal Spec (1)
            +--CScalarProjectList
               +--CScalarProjectElement "ColRef_0001" (1)
                  +--CScalarConst (1)
      ",
                                             QUERY PLAN
      ----------------------------------------------------------------------------------------
       Insert  (cost=0.00..0.02 rows=1 width=4)
         ->  Result  (cost=0.00..0.00 rows=1 width=8)
               ->  Result  (cost=0.00..0.00 rows=1 width=4) ==> Random Motion Converted to a Result Node with Hash Filter to avoid duplicates (4)
                     ->  Values Scan on "Values"  (cost=0.00..0.00 rows=1 width=4) ==> Delivers Universal Spec (3)
       Optimizer: PQO version 2.70.0
      ```
      
      When an Insert is requested on t1_random from a Universal Source, the
      optimization framework does add a Random Motion / CPhysicalMotionRandom
      (see (2) above) to redistribute the data. However, since
      CPhysicalConstTableGet / Values Scan delivers a Universal Spec, the motion
      is converted to a Result node with a hash filter, to avoid duplicates, in
      the DXL to Planned Statement translation (see (4) above). Now, since there
      is no redistribute motion to spray the data randomly across the segments,
      the result node's hash filter allows data from only one segment to
      propagate further, which is then inserted by the DML node. This results in
      all the data being inserted on one segment only.
      
      Scenario 2:
      ```
      CREATE TABLE t1_random (a int) DISTRIBUTED RANDOMLY;
      CREATE TABLE t2_random (a int) DISTRIBUTED RANDOMLY;
      EXPLAIN INSERT INTO t1_random SELECT * FROM t1_random;
      Physical plan:
      +--CPhysicalDML (Insert, "t1_random"), Source Columns: ["a" (0)], Action: ("ColRef_0008" (8))   rows:1   width:34  rebinds:1   cost:431.010436   origin: [Grp:1, GrpExpr:2]
         +--CPhysicalComputeScalar   rows:1   width:1  rebinds:1   cost:431.000011   origin: [Grp:5, GrpExpr:1]
            |--CPhysicalTableScan "t2_random" ("t2_random")   rows:1   width:34  rebinds:1   cost:431.000006   origin: [Grp:0, GrpExpr:1]
            +--CScalarProjectList   origin: [Grp:4, GrpExpr:0]
               +--CScalarProjectElement "ColRef_0008" (8)   origin: [Grp:3, GrpExpr:0]
                  +--CScalarConst (1)   origin: [Grp:2, GrpExpr:0]
      ",
                                              QUERY PLAN
      ------------------------------------------------------------------------------------------
       Insert  (cost=0.00..431.01 rows=1 width=4)
         ->  Result  (cost=0.00..431.00 rows=1 width=8)
               ->  Table Scan on t1_random  (cost=0.00..431.00 rows=1 width=4)
       Optimizer: PQO version 2.70.0
      ```
      
      When an Insert is requested on t1_random (a randomly distributed table)
      from another randomly distributed table, the optimization framework does
      not add a redistribute motion, because CPhysicalDML requested a Random
      Spec and the source t2_random delivers a random distribution spec
      matching / satisfying the requested spec.
      
      So, in summary, there are 2 causes of skew in the data inserted into a
      randomly distributed table:
      Scenario 1: the Random Motion is converted into a Result Node with Hash Filters.
      Scenario 2: the requested and derived specs match.
      
      This patch fixes the issue by ensuring that if an insert is performed on a
      randomly distributed table, there must exist a random motion that has been
      enforced as a true motion.  This is achieved in 2 steps:
      1. CPhysicalDML / Insert on a Randomly Distributed table must request a Strict Random Spec
      2. The enforcer appended for the Random Spec must track whether the enforced
      motion node has a universal child; if it has, set the bool flag to false, else true
      
      Characteristics of Strict Random Spec:
      1. Strict Random Spec matches / satisfies Strict Random Spec only
      2. A Random Spec enforced by a true motion matches / satisfies Strict Random Spec
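      The matching rules above can be modeled as a toy predicate (names and representation invented for illustration; the real logic lives in Orca's distribution-spec matching code):

```python
def satisfies_strict_random(spec: str, enforced_by_true_motion: bool) -> bool:
    """Toy model: which delivered specs satisfy a Strict Random Spec request."""
    if spec == "strict_random":
        return True  # rule 1: Strict Random satisfies Strict Random
    # rule 2: plain Random satisfies it only when enforced by a true motion
    return spec == "random" and enforced_by_true_motion
```

      Under this model, a Universal source that was never passed through a true motion fails the request, which is exactly what forces the extra CPhysicalMotionRandom in the plans below.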
      
      CPhysicalDML always has a CPhysicalComputeScalar node below it, which
      projects the additional column indicating whether it's an Insert. The
      request for Strict Motion is not propagated down the
      CPhysicalComputeScalar node; CPhysicalComputeScalar requests a Random Spec
      from its child instead. This mandates that a true random motion either
      exists among the children of the CPhysicalDML node or is inserted between
      CPhysicalDML and CPhysicalComputeScalar. If there is any motion in the
      same group holding CPhysicalComputeScalar, it should also request a Random
      Spec from its children.
      
      Plans after the fix
      Scenario 1:
      ```
      EXPLAIN INSERT INTO t1_random VALUES (1), (2);
      Physical plan:
      +--CPhysicalDML (Insert, "t1_random"), Source Columns: ["column1" (0)], Action: ("ColRef_0001" (1))   rows:2   width:4  rebinds:1   cost:0.020884   origin: [Grp:1, GrpExpr:2]
         +--CPhysicalMotionRandom   rows:2   width:1  rebinds:1   cost:0.000051   origin: [Grp:5, GrpExpr:2] ==> Strict Random Motion Enforcing true randomness
            +--CPhysicalComputeScalar   rows:2   width:1  rebinds:1   cost:0.000034   origin: [Grp:5, GrpExpr:1]
               |--CPhysicalMotionRandom   rows:2   width:4  rebinds:1   cost:0.000029   origin: [Grp:0, GrpExpr:2]
               |  +--CPhysicalConstTableGet Columns: ["column1" (0)] Values: [(1); (2)]   rows:2   width:4  rebinds:1   cost:0.000008   origin: [Grp:0, GrpExpr:1]
               +--CScalarProjectList   origin: [Grp:4, GrpExpr:0]
                  +--CScalarProjectElement "ColRef_0001" (1)   origin: [Grp:3, GrpExpr:0]
                     +--CScalarConst (1)   origin: [Grp:2, GrpExpr:0]
      ",
                                             QUERY PLAN
      ----------------------------------------------------------------------------------------
       Insert  (cost=0.00..0.02 rows=1 width=4)
         ->  Redistribute Motion 3:3  (slice1; segments: 3)  (cost=0.00..0.00 rows=1 width=8) ==> Strict Random Motion Enforcing true randomness
               ->  Result  (cost=0.00..0.00 rows=1 width=8)
                     ->  Result  (cost=0.00..0.00 rows=1 width=4)
                           ->  Values Scan on "Values"  (cost=0.00..0.00 rows=1 width=4)
       Optimizer: PQO version 2
      
      ```
      
      Scenario 2:
      ```
      EXPLAIN INSERT INTO t1_random SELECT * FROM t1_random;
      Physical plan:
      +--CPhysicalDML (Insert, "t1_random"), Source Columns: ["a" (0)], Action: ("ColRef_0008" (8))   rows:2   width:34  rebinds:1   cost:431.020873   origin: [Grp:1, GrpExpr:2]
         +--CPhysicalMotionRandom   rows:2   width:1  rebinds:1   cost:431.000039   origin: [Grp:5, GrpExpr:2] ==> Strict Random Motion Enforcing True Randomness
            +--CPhysicalComputeScalar   rows:2   width:1  rebinds:1   cost:431.000023   origin: [Grp:5, GrpExpr:1]
               |--CPhysicalTableScan "t1_random" ("t1_random")   rows:2   width:34  rebinds:1   cost:431.000012   origin: [Grp:0, GrpExpr:1]
               +--CScalarProjectList   origin: [Grp:4, GrpExpr:0]
                  +--CScalarProjectElement "ColRef_0008" (8)   origin: [Grp:3, GrpExpr:0]
                     +--CScalarConst (1)   origin: [Grp:2, GrpExpr:0]
      ",
                                              QUERY PLAN
      ------------------------------------------------------------------------------------------
       Insert  (cost=0.00..431.02 rows=1 width=4)
         ->  Redistribute Motion 3:3  (slice1; segments: 3)  (cost=0.00..431.00 rows=1 width=8) ==> Strict Random Motion Enforcing true randomness
               ->  Result  (cost=0.00..431.00 rows=1 width=8)
                     ->  Table Scan on t1_random  (cost=0.00..431.00 rows=1 width=4)
       Optimizer: PQO version 2.70.0
      (5 rows)
      ```
      
      Note: If an Insert is performed on a Randomly Distributed Table from a
      Hash Distributed Table, an additional redistribute motion is not enforced
      between CPhysicalDML and CPhysicalComputeScalar, as there already exists a
      true random motion due to the mismatch of the random vs hash distribution
      specs.
      ```
      pivotal=# explain insert into t1_random select * from t1_hash;
      LOG:  statement: explain insert into t1_random select * from t1_hash;
      LOG:  2018-08-20 13:31:01:135438 PDT,THD000,TRACE,"
      Physical plan:
      +--CPhysicalDML (Insert, "t1_random"), Source Columns: ["a" (0)], Action: ("ColRef_0008" (8))   rows:100   width:34  rebinds:1   cost:432.043222   origin: [Grp:1, GrpExpr:2]
         +--CPhysicalComputeScalar   rows:100   width:1  rebinds:1   cost:431.001555   origin: [Grp:5, GrpExpr:1]
            |--CPhysicalMotionRandom   rows:100   width:34  rebinds:1   cost:431.001289   origin: [Grp:0, GrpExpr:2]
            |  +--CPhysicalTableScan "t1_hash" ("t1_hash")   rows:100   width:34  rebinds:1   cost:431.000623   origin: [Grp:0, GrpExpr:1]
            +--CScalarProjectList   origin: [Grp:4, GrpExpr:0]
               +--CScalarProjectElement "ColRef_0008" (8)   origin: [Grp:3, GrpExpr:0]
                  +--CScalarConst (1)   origin: [Grp:2, GrpExpr:0]
      ",
                                                 QUERY PLAN
      -------------------------------------------------------------------------------------------------
       Insert  (cost=0.00..432.04 rows=34 width=4)
         ->  Result  (cost=0.00..431.00 rows=34 width=8)
               ->  Redistribute Motion 3:3  (slice1; segments: 3)  (cost=0.00..431.00 rows=34 width=4)
                     ->  Table Scan on t1_hash  (cost=0.00..431.00 rows=34 width=4)
       Optimizer: PQO version 2.70.0
      (5 rows)
      
      Time: 69.309 ms
      ```
    • Add error position to error on coercing partition boundaries. · a0e41dc7
      Committed by Heikki Linnakangas
      I think this is clearer. Now that the error position is displayed with the
      error, we don't need to work so hard to deparse the value and the
      partition type. The error position makes that clear.
      
      This came up during the 9.3 merge. Passing InvalidOid to
      deparse_context_for() will no longer work in 9.3, so we'd need to change
      this somehow in the 9.3 merge, anyway.
  4. 18 Sep 2018, 10 commits
    • Remove mock test for cdbappendonlystorage.c · 33fc5f17
      Committed by Heikki Linnakangas
      Looking at the test, it seems pretty uninteresting. It basically tests
      that there is a call to IsStandbyMode() in appendonly_redo(), and that the
      switch-case statement works. I don't think we need bespoke tests for
      those things.
      
      This came up while working on the 9.3 merge. 9.3 moves all the *_desc()
      functions to separate files, so after that, only the appendonly_redo()
      function would remain in cdbappendonlystorage.c. It would make sense to
      move it into cdbappendonlyxlog.c, but we couldn't do that, because this
      mock test got in the way. We could fix that in the merge, but I'm opening
      a separate PR for this removal, to give more visibility to the decision.
    • Fix typos in comments · 8eca01c2
      Committed by Daniel Gustafsson
      Several instances of s/funciton/function/
    • Remove dead code following ereport(ERROR.. call · 46cf24ee
      Committed by Daniel Gustafsson
      The relation_close() call directly following ereport(ERROR.. will never be
      reached, as the ereport won't return. While closing and cleaning up any
      used resources is a good thing, that is handled automatically by the error
      handler, so remove the call.
      
      Also editorialized the error message to fit the error message style guide
      and fixed test fallout.
      Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
    • Remove unused tables from test suites · a1412c88
      Committed by Daniel Gustafsson
      The util tables in the OLAP suites were never used in any query, just
      created, populated and subsequently dropped. Remove them, and fix a DROP
      FUNCTION statement which had the wrong syntax while at it.
      Reviewed-by: Venkatesh Raghavan <vraghavan@pivotal.io>
    • Revise expected outputs for non-equivalence clause tests. · ec993bdb
      Committed by Richard Guo
    • Add more checks when deducing quals from non-equivalence clauses. · 72a325b6
      Committed by Richard Guo
      When we deduce new quals from non-equivalence clauses, we should only use
      operators from the same opfamily, so that the operators have consistent
      semantics. In addition, the datatype and expression should also be
      checked.
    • Explicitly select the correct side in restrictinfo for copathkey generation in cdbpath_match_preds_to_partkey_tail(). (#5728) · 2bb10247
      Committed by Paul Guo
      
      Just like what we do in cdbpath_partkeys_from_preds(). This is friendlier
      for code reading and also better for code maintenance. The previous code
      could potentially introduce bugs, since it's the relids, not the
      equivalence class, that can determine the side for comparison accurately.
    • Resolve compiler warnings caused by volatile variable. · 6be0f8c5
      Committed by Richard Guo
      procarray.c: In function ‘getAllDistributedXactStatus’:
      procarray.c:1616:15: warning: passing argument 1 of ‘strlen’ discards
      ‘volatile’ qualifier from pointer target type
    • Move test query to where it is in the upstream. · a113cda9
      Committed by Heikki Linnakangas
      The "Check that whole-row Vars ..." test query was backported earlier,
      as part of catching up to 8.3.23, before the GPDB 5 release. In the 9.0
      merge, it was accidentally moved to a different location than where it
      is in the upstream. Move it back.
    • Restore upstream test query on UPDATE on a view. · 9db18178
      Committed by Heikki Linnakangas
      It was removed years ago, presumably because GPDB didn't support updates
      on views. But it seems to work fine now, so put it back.
  5. 17 Sep 2018, 3 commits
    • Move 'am_startup' variable to a better place. · 20c82fea
      Committed by Heikki Linnakangas
      I think startup.c is a more natural place for it. Also, xlog.c is so large
      and heavily modified from upstream that anything we can do to reduce that
      diff helps.
      
      Just refactoring, no user-visible change.
    • resgroup: dump cgroup comp dirs at a later time. · 44667177
      Committed by Ning Yu
      Comp dirs were detected very early, before timezone initialization, and
      logs should not be generated at that time.  Move the logging from Probe()
      to Bless(); the latter is executed late enough, and is only executed when
      resgroup is enabled.
    • resgroup: detect gpdb cgroup comp dirs automatically. · dcc746a1
      Committed by Ning Yu
      Take cpu for example: by default we expect the gpdb dir to be located at
      cgroup/cpu/gpdb.  But we'll also check the cgroup dirs of the init
      process (pid 1), e.g. cgroup/cpu/custom, and then look for the gpdb dir
      at cgroup/cpu/custom/gpdb; if it's found and has good permissions, it can
      be used instead of the default one.
      
      If any of the gpdb cgroup component dirs cannot be found under the init
      process' cgroup dirs, or has bad permissions, we fall back to the default
      dirs for all the gpdb cgroup components.
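      The detection described above can be sketched roughly as follows (the /proc/1/cgroup parsing and path layout are assumptions about cgroup v1 conventions, not the actual resgroup code):

```python
def candidate_gpdb_dir(proc1_cgroup_line: str,
                       mount_root: str = "/sys/fs/cgroup") -> str:
    """Derive the candidate gpdb dir for one controller from a /proc/1/cgroup
    line such as "4:cpu:/custom".

    Falls out to <root>/<controller>/gpdb when init is in the root
    cgroup ("/"), matching the default layout described above.
    """
    _, controller, path = proc1_cgroup_line.strip().split(":", 2)
    return "%s/%s%s/gpdb" % (mount_root, controller, path.rstrip("/"))
```

      A real implementation would then stat each candidate and check permissions before preferring it over the default.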
      
      NOTE: This auto detection will look for memory & cpuset gpdb dirs even
      on 5X.
      
      (cherry picked from commit f3dc101a)
  6. 16 Sep 2018, 1 commit
  7. 15 Sep 2018, 2 commits
    • Fix typos in comments · 57f8b52e
      Committed by Daniel Gustafsson
      Also do minor style fixups such as line length and capitalization to
      some of the affected lines.
    • pg_dumpall: improve DB role setting order · dd1ee895
      Committed by Jacob Champion
      Follow-up to 6beb8eb3. In addition to ordering by the role and database,
      order the settings themselves to minimize spurious diffs.
      
      As an example, a role with both search_path and statement_mem set could
      be dumped in two ways:
      
          ALTER ROLE role_setting_test_2 IN DATABASE regression SET search_path TO common_schema;
          ALTER ROLE role_setting_test_2 IN DATABASE regression SET statement_mem TO '150MB';
      or
          ALTER ROLE role_setting_test_2 IN DATABASE regression SET statement_mem TO '150MB';
          ALTER ROLE role_setting_test_2 IN DATABASE regression SET search_path TO common_schema;
      
      Now they are dumped in alphabetical order.
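      The idea reduces to emitting each role's settings in a stable sort order; a sketch with hypothetical data (not pg_dumpall's actual implementation):

```python
# Settings for one (role, database) pair, as (name, value) pairs, in
# whatever order the catalog happened to return them.
settings = [
    ("statement_mem", "'150MB'"),
    ("search_path", "common_schema"),
]

# Sorting by setting name makes repeated dumps byte-identical.
dump_lines = [
    "ALTER ROLE role_setting_test_2 IN DATABASE regression SET %s TO %s;" % kv
    for kv in sorted(settings)
]
```

      With the sort in place, search_path always precedes statement_mem, so diffs between dumps only show real changes.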
  8. 14 Sep 2018, 2 commits
    • Revert "resgroup: detect gpdb cgroup comp dirs automatically." · 941d8ac5
      Committed by Jesse Zhang
      Long story short: the tests for gpinitsystem scan the startup log for a
      timezone, and the logging introduced in commit f3dc101a broke that.
      
      This reverts commit f3dc101a.
    • Use StringInfo when logging distributed snapshot info · 89553ad2
      Committed by Jesse Zhang
      Under GCC 8, I get a warning (and rumor has it that you get this under
      GCC 7 with `-Wrestrict`):
      
      ```
      sharedsnapshot.c: In function ‘LogDistributedSnapshotInfo’:
      sharedsnapshot.c:924:11: warning: passing argument 1 to
      restrict-qualified parameter aliases with argument 4 [-Wrestrict]
        snprintf(message, MESSAGE_LEN, "%s, In progress array: {",
                 ^~~~~~~
           message);
           ~~~~~~~
      sharedsnapshot.c:930:13: warning: passing argument 1 to
      restrict-qualified parameter aliases with argument 4 [-Wrestrict]
          snprintf(message, MESSAGE_LEN, "%s, (dx%d)",
                   ^~~~~~~
             message, ds->inProgressXidArray[no]);
             ~~~~~~~
      sharedsnapshot.c:933:13: warning: passing argument 1 to
      restrict-qualified parameter aliases with argument 4 [-Wrestrict]
          snprintf(message, MESSAGE_LEN, "%s (dx%d)",
                   ^~~~~~~
             message, ds->inProgressXidArray[no]);
             ~~~~~~~
      ```
      
      Upon further inspection, the compiler is right: according to C99, it is
      undefined behavior to pass aliased arguments to the "str" argument of
      `snprintf` (`restrict`-qualified function parameters, to be pedantic).
      
      To make this safer and more readable, this patch switches to using the
      StringInfo API. This change might come with a teeny tiny bit of
      performance cost because of:
      
      1. stack vs heap allocation
      2. larger initial allocation size of StringInfo
      
      But this area of the code is *never* a hot spot, and `appendStringInfo`
      and friends are arguably faster than our old call patterns of
      `snprintf`, so I won't sweat it.
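      The shape of the fix, appending to a growable buffer rather than re-formatting the whole message through itself, can be sketched with a rough Python analogue of the StringInfo API (illustrative only; the real API is PostgreSQL's C StringInfo facility):

```python
class StringInfo:
    """Minimal analogue of PostgreSQL's StringInfo growable string buffer."""

    def __init__(self):
        self._parts = []

    def appendf(self, fmt, *args):
        # Like appendStringInfo(): format only the NEW text and append it,
        # never passing the existing buffer back through the formatter
        # (the aliasing that made the C snprintf calls undefined behavior).
        self._parts.append(fmt % args if args else fmt)

    @property
    def data(self):
        return "".join(self._parts)

buf = StringInfo()
buf.appendf("In progress array: {")
for xid in (7, 8):
    buf.appendf(" (dx%d)", xid)
buf.appendf(" }")
```

      Each append touches only new text, so there is no destination/source overlap at any point.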
  9. 13 Sep 2018, 2 commits
    • resgroup: detect gpdb cgroup comp dirs automatically. · f3dc101a
      Committed by Ning Yu
      Take cpu for example: by default we expect the gpdb dir to be located at
      cgroup/cpu/gpdb.  But we'll also check the cgroup dirs of the init
      process (pid 1), e.g. cgroup/cpu/custom, and then look for the gpdb dir
      at cgroup/cpu/custom/gpdb; if it's found and has good permissions, it can
      be used instead of the default one.
      
      If any of the gpdb cgroup component dirs cannot be found under the init
      process' cgroup dirs, or has bad permissions, we fall back to the default
      dirs for all the gpdb cgroup components.
      
      NOTE: This auto detection will look for memory & cpuset gpdb dirs even
      on 5X.
    • Show motion path and path locus type in print_path(). (#5741) · 128d32a8
      Committed by BaiShaoqi
      * Show motion path and path locus type in print_path().
      
      This helps gpdb planner debugging.
      Co-authored-by: Paul Guo <paulguo@gmail.com>
      Co-authored-by: Shaoqi Bai <sbai@pivotal.io>
      
      * Review comments from Heikki Linnakangas
  10. 12 Sep 2018, 5 commits
    • gdd: disable flaky test case non-lock-107. · 7667af9b
      Committed by Ning Yu
      This case has been flaky since it was added; disable it to keep the
      pipelines green.  We'll keep working on it and re-enable it once we find
      a way to improve it.
    • Fix intermittent dispatch test case failures · 1803e1eb
      Committed by Pengzhou Tang
      In the dispatch test cases, we need a way to put a segment into
      in-recovery status, to test the gang recreation logic of the dispatcher.
      
      We used to trigger a panic fault on a segment and suspend quickdie() to
      simulate in-recovery status. To avoid the segment staying in recovery mode
      for a long time, we used a 'sleep' fault instead of 'suspend' in
      quickdie(), so the segment could accept new connections after 5 seconds.
      5 seconds works fine most of the time, but is still not stable enough, so
      we decided to use a more straightforward means of simulating in-recovery
      mode: report POSTMASTER_IN_RECOVERY_MSG directly in
      ProcessStartupPacket(). To avoid affecting other backends, we create a new
      database, so the fault injectors only affect the dispatch test cases.
    • Fix walrep TINC test · fb895a05
      Committed by Joao Pereira
      The walrep TINC test is currently failing because of a fix made to
      subpartitioning grammar parsing. Just needed to remove a comma.
      Co-authored-by: Jimmy Yih <jyih@pivotal.io>
    • Ensure that `SUBPARTITION TEMPLATE` only appears after `SUBPARTITION BY` · b2f85691
      Committed by Joao Pereira
      This commit changed the way the grammar parses the subpartition
      information, to ensure that the templates cannot exist without a
      `SUBPARTITION BY`.
      
      Test coverage was added for the case where the `SUBPARTITION TEMPLATE`
      expression is written before a `SUBPARTITION BY`.
      
      The error displayed when the template appeared after a Partition was
      changed to point to the TEMPLATE.
      Co-authored-by: Ekta Khanna <ekhanna@pivotal.io>
      Co-authored-by: Jesse Zhang <sbjesse@gmail.com>
      Co-authored-by: Adam Berlin <aberlin@pivotal.io>
    • Function to execute a command on a specified segment · ac5c1319
      Committed by Asim R P
      This function is intended for tests to run DDL/DML commands on a
      specific GPDB segment.
      Co-authored-by: Joao Pereira <jdealmeidapereira@pivotal.io>
  11. 11 Sep 2018, 2 commits