1. 09 March 2017, 8 commits
    • Fix 'null reference before check' Coverity defects in Backup code · 9bb61e8b
      Committed by Christopher Hajas
      Authors: Chris Hajas and Todd Sedano
      9bb61e8b
    • Fix incorrect datatype in Backup code identified by Coverity · fda2c07c
      Committed by Christopher Hajas
      read_bytes will be negative if read() returns a negative value on error.
      Therefore, this should be a signed int.
      fda2c07c
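      A minimal sketch of the pattern behind this fix, using invented names rather
      than the actual Backup code: read() returns ssize_t, so storing the result in
      an unsigned variable makes the error check unreachable.
      ```
      #include <stdio.h>
      #include <unistd.h>

      /* Hypothetical helper, for illustration only. */
      static int
      copy_chunk(int fd, char *buf, size_t buflen)
      {
          /* BUG: an unsigned count can never be < 0, so read() failures
           * (which return -1) would go unnoticed:
           *     unsigned int read_bytes = read(fd, buf, buflen);
           */

          /* FIX: keep the sign so the -1 error value survives. */
          ssize_t read_bytes = read(fd, buf, buflen);

          if (read_bytes < 0)
          {
              perror("read");
              return -1;
          }
          return (int) read_bytes;
      }
      ```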
    • Correctly exit dump on error. · 2618229e
      Committed by Christopher Hajas
      This is a backup-breaking failure and the dump should exit rather than
      simply breaking from the loop.
      2618229e
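      A rough sketch of the control-flow change described above, with made-up
      function names (not the real Backup code): on a backup-breaking failure the
      program should terminate instead of merely leaving the loop.
      ```
      #include <stdio.h>
      #include <stdlib.h>

      /* Stub standing in for the real per-segment dump routine. */
      static int
      dump_one_segment(int segno)
      {
          (void) segno;
          return 0;               /* pretend success */
      }

      int
      main(void)
      {
          for (int segno = 0; segno < 4; segno++)
          {
              if (dump_one_segment(segno) != 0)
              {
                  /* Previously: break;  -- the loop stopped, but execution
                   * continued and produced a silently incomplete backup. */
                  fprintf(stderr, "dump failed on segment %d, aborting\n", segno);
                  exit(EXIT_FAILURE);     /* fatal: abandon the dump */
              }
          }
          return 0;
      }
      ```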
    • Set timezone to PST and updated .sql/.ans files · 64eae8ca
      Committed by Ashwin Agrawal
      These .sql files assumed that the machines running the tests would have their
      timezone set to PST/PDT. This commit explicitly sets the timezone to PST in
      the .sql files and updates the .ans files to match the new behavior.
      64eae8ca
    • Fix the wrong comparison for sort space type and the typo (closes #1976) · e906fad8
      Committed by Karthikeyan Jambu Rajaraman
      This resolves #1880. Thanks to Lirong Jian for the initial PR #1952.
      
      Details:
      Fixed a string-comparison typo in cdbexplain.
      Fixed the typo in UNINITIALIZED_SORT.
      Added a new method to convert the sort space type string to an enum.
      e906fad8
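      A rough sketch, with invented identifiers, of what such a string-to-enum
      helper could look like; the actual cdbexplain names and string literals are
      assumptions, not quotes from the source.
      ```
      #include <string.h>

      /* Hypothetical enum; the real cdbexplain definitions differ. */
      typedef enum SortSpaceType
      {
          SORT_SPACE_TYPE_UNINITIALIZED = 0,
          SORT_SPACE_TYPE_MEMORY,
          SORT_SPACE_TYPE_DISK
      } SortSpaceType;

      /*
       * Map the textual sort space type to an enum in one place, so callers
       * compare enum values instead of repeating (and mistyping) strcmp()
       * calls against string literals.
       */
      static SortSpaceType
      sort_space_type_from_string(const char *s)
      {
          if (s == NULL)
              return SORT_SPACE_TYPE_UNINITIALIZED;
          if (strcmp(s, "Memory") == 0)
              return SORT_SPACE_TYPE_MEMORY;
          if (strcmp(s, "Disk") == 0)
              return SORT_SPACE_TYPE_DISK;
          return SORT_SPACE_TYPE_UNINITIALIZED;
      }
      ```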
    • Move test case for a more complex partitioning scheme from TINC. · 8e8ad0fd
      Committed by Heikki Linnakangas
      These were not very exciting tests, because ORCA falls back to the Postgres
      planner for these queries. But it's a reasonable test case of partition
      pruning in general, after adding a WHERE clause to the query, so add one
      such test case to the main test suite.
      8e8ad0fd
    • Add more tests for CREATEEXTTABLE privileges. · b8bf866c
      Committed by Heikki Linnakangas
      These tests are from the legacy cdbfast test suite, from
      writable_ext_tbl/test_* files. They're not very exciting, and we already
      had tests for some variants of these commands, but looking at the way all
      these privileges are handled in the code, perhaps it's indeed best to have
      tests for many different combinations. The tests run in under a second, so
      it's not too much of a burden.
      
      Compared to the original tests, I removed the SELECTs and INSERTs that
      tested that you can also read/write the external tables, after successfully
      creating one. Those don't seem very useful; they all basically test that
      the owner of an external table can read/write it. By getting rid of those
      statements, these tests don't need a live gpfdist server to be running,
      which makes them a lot simpler and faster to run.
      
      Also move the existing tests on these privileges from the 'external_table'
      test to the new file.
      b8bf866c
    • Skip pg_index.indcheckxmin column in gpcheckcat. · 79caf1c0
      Committed by Shoaib Lari
      The gpcheckcat 'inconsistent' check should skip the pg_index.indcheckxmin column
      for now because, due to the HOT feature, the value of this column can differ
      between the master and the segments.
      
      A longer-term fix will resolve the underlying issue that causes the
      indcheckxmin column value to differ between the master and the segments.
      79caf1c0
  2. 08 March 2017, 14 commits
    • Bump Orca version to 2.10.0 · 0f21b57c
      Committed by Omer Arap
      0f21b57c
    • resgroup: add rolresgroup to pg_roles. · 400280c0
      Committed by Pengzhou Tang
      rolresgroup was added to pg_authid in previous commits; we should also
      add it to pg_roles.
      Signed-off-by: Ning Yu <nyu@pivotal.io>
      400280c0
    • Implement ALTER ROLE RESOURCE GROUP syntax. · 45507229
      Committed by Kenan Yao
      The same restrictions as for CREATE ROLE apply: only a
      superuser can be assigned to the admin_group.
      
          -- assign the role r1 to resource group g1
          ALTER ROLE r1 RESOURCE GROUP g1;
      Signed-off-by: Ning Yu <nyu@pivotal.io>
      45507229
    • Implement CREATE ROLE RESOURCE GROUP syntax. · 8c0c0c4d
      Committed by Kenan Yao
      There are two default resource groups, default_group and admin_group.
      Unless a group is explicitly specified, roles are assigned to one of them
      depending on whether they are superusers. Only a superuser can be
      assigned to the admin_group.
      
          -- create a role r1 and assign it to resgroup g1
          CREATE ROLE r1 RESOURCE GROUP g1;
      Signed-off-by: Ning Yu <nyu@pivotal.io>
      8c0c0c4d
    • Fix incorrect column names and missing schema in docs · 3fb8f35d
      Committed by Eamon
      [ci skip]
      3fb8f35d
    • Enable dblink in GPDB · 976d1569
      Committed by Haozhou Wang
      dblink is a built-in module of PostgreSQL; this patch enables it in GPDB.
      
      1. Enable compiling dblink with GPDB
      2. Update and enable dblink test cases
      
      Known issue:
      
      1. The following functions are not supported by GPDB and are temporarily
         disabled because GPDB behavior has changed:
         dblink_send_query()
         dblink_is_busy()
         dblink_get_result()
      
      2. Anonymous connections and connections by name are not supported in a nested
         statement that both writes and reads, e.g.
         INSERT INTO tblname SELECT * FROM dblink(['connName'],'SELECT * FROM otherdb.tblname') as tbl;
         Currently, only connections using a connection string are supported in such statements.
      Signed-off-by: Peifeng Qiu <pqiu@pivotal.io>
      Signed-off-by: Yuan Zhao <yuzhao@pivotal.io>
      Signed-off-by: Jianxia Chen <jchen@pivotal.io>
      Signed-off-by: Adam Lee <ali@pivotal.io>
      
      976d1569
    • Update pull_request resource to get non-shallow clones · 1247a734
      Committed by David Sharp
      v22 of the pull_request resource removes the default behavior of making
      shallow clones. A shallow clone will not fetch tags, since no tagged
      commits are walked over during the fetch. We need tags to be present to
      build and to determine the version string.
      Signed-off-by: Jingyi Mei <jmei@pivotal.io>
      1247a734
    • Add timeout for PR pipeline for each job · ec96e755
      Committed by Jingyi Mei
      Signed-off-by: David Sharp <dsharp@pivotal.io>
      ec96e755
    • Transaction id support (#1935) · 5fb1b625
      Committed by Jane Beckman
      * Transaction id support
      
      * Removed extra line
      [ci skip]
      5fb1b625
    • Test persistent table rebuild after cluster rebalance (tinc) · db4dd09e
      Committed by Marbin Tan
      - Also remove unwanted test: the abort test should be skipped,
        based on feedback from Storage team
      Signed-off-by: C.J. Jameson <cjameson@pivotal.io>
      db4dd09e
    • Add persistent table rebuild behave tests · a2881a59
      Committed by Marbin Tan
      a2881a59
    • Remove redundant tests on SAVEPOINTs. · 0fcaf449
      Committed by Heikki Linnakangas
      These are covered by the 'transactions' and 'uao_dml/uao_dml_column' tests
      in the main test suite.
      0fcaf449
    • Fix representation of CONST in explain plan · cc3c1ab3
      Committed by Bhunvesh Chaudhary
      When `explain` traces a VAR to a CONST, it assumes (rightly so) that
      it's most likely the only constant being projected at that operator,
      e.g.
      
      ```
      EXPLAIN SELECT a AS c FROM (SELECT 1 AS a) AS bar ORDER BY 1;
                         QUERY PLAN
      ------------------------------------------------
       Sort  (cost=0.03..0.04 rows=1 width=4)
         Sort Key: (1)
         ->  Result  (cost=0.00..0.01 rows=1 width=0)
       Optimizer status: legacy query optimizer
      (4 rows)
      
      ```
      
      For a query containing the `VALUES` construct, the planner would generate a
      `ValuesScan`, whereas ORCA generates an `Append` with each constant as a
      `Result` node underneath. Notice how the ORCA plan shape breaks the
      assumption of the `EXPLAIN` rendering logic, resulting in somewhat
      confusing output like this:
      
      ```
      explain select * from foo, (values (1), (2)) f(i) where foo.a=f.i;
                                        QUERY PLAN
      ------------------------------------------------------------------------------
       Gather Motion 3:1  (slice1; segments: 3)  (cost=0.00..431.00 rows=1 width=8)
         ->  Hash Join  (cost=0.00..431.00 rows=1 width=8)
               Hash Cond: (1) = a
               ->  Append  (cost=0.00..0.00 rows=1 width=4)
                     ->  Result  (cost=0.00..0.00 rows=1 width=4)
                           ->  Result  (cost=0.00..0.00 rows=1 width=4)
                     ->  Result  (cost=0.00..0.00 rows=1 width=4)
                           ->  Result  (cost=0.00..0.00 rows=1 width=4)
               ->  Hash  (cost=431.00..431.00 rows=1 width=4)
                     ->  Table Scan on foo  (cost=0.00..431.00 rows=1 width=4)
       Settings:  optimizer=on
       Optimizer status: PQO version 2.7.0
      (12 rows)
      ```
      In the above case, `(1)` is less than helpful because that is not the
      only constant in the relation.
      
      This commit renders a `VAR` node that traces back to a `CONST` and has a
      non-empty `resname` using the `resname` instead of the constant value.
      
      An unintended (but good) consequence of this change is that some
      constants are now rendered as their projected column instead of the
      constant value, e.g.:
      
      ```
      EXPLAIN SELECT a AS c FROM (SELECT 1 AS a) AS bar ORDER BY 1;
                         QUERY PLAN
      ------------------------------------------------
       Sort  (cost=0.03..0.04 rows=1 width=4)
         Sort Key: c
         ->  Result  (cost=0.00..0.01 rows=1 width=0)
       Optimizer status: legacy query optimizer
      (4 rows)
      ```
      Signed-off-by: Jesse Zhang <sbjesse@gmail.com>
      Signed-off-by: Bhunvesh Chaudhary <bchaudhary@pivotal.io>
      cc3c1ab3
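      To make the rendering rule concrete, here is a small self-contained sketch
      with invented types and names; the real logic operates on PostgreSQL
      Var/Const/TargetEntry nodes inside the EXPLAIN deparsing code and is not
      reproduced here.
      ```
      #include <stdio.h>

      /* Simplified stand-in for a plan target entry; the real PostgreSQL
       * TargetEntry node has many more fields. */
      typedef struct FakeTargetEntry
      {
          const char *resname;    /* column alias, may be NULL or "" */
      } FakeTargetEntry;

      /*
       * Decide how to label a Var that traces back to a Const: if the target
       * entry carries a non-empty resname, print that alias; otherwise fall
       * back to the constant's textual value.
       */
      static const char *
      label_for_const_var(const FakeTargetEntry *tle, const char *const_text)
      {
          if (tle->resname != NULL && tle->resname[0] != '\0')
              return tle->resname;
          return const_text;
      }

      int
      main(void)
      {
          FakeTargetEntry aliased = { "c" };
          FakeTargetEntry unaliased = { NULL };

          printf("Sort Key: %s\n", label_for_const_var(&aliased, "(1)"));   /* -> c   */
          printf("Sort Key: %s\n", label_for_const_var(&unaliased, "(1)")); /* -> (1) */
          return 0;
      }
      ```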
  3. 07 March 2017, 18 commits