1. 17 March 2017, 10 commits
  2. 16 March 2017, 5 commits
    • auto-completion for CREATE/DROP RESOURCE GROUP. · 0c62d8b0
      Pengzhou Tang authored
      Signed-off-by: Ning Yu <nyu@pivotal.io>
    • Implement DROP RESOURCE GROUP syntax. · 01565b0a
      Ning Yu authored
      Any resgroup created with the CREATE RESOURCE GROUP syntax can be
      dropped with the DROP RESOURCE GROUP syntax; the default resgroups,
      default_group and admin_group, can't be dropped; only a superuser can
      drop resgroups; resgroups with roles bound to them can't be dropped.
      
          -- drop a resource group
          DROP RESOURCE GROUP rg1;
      
      *NOTE*: this commit only implements the DROP RESOURCE GROUP syntax; the
      actual resource management is not yet supported and will be provided
      later on top of these syntax commits.
      
      *NOTE*: test cases are provided for both CREATE and DROP syntax.
      Signed-off-by: Pengzhou Tang <ptang@pivotal.io>
    • Implement CREATE RESOURCE GROUP syntax. · 8ab1ccdb
      Pengzhou Tang authored
      There are two default resource groups, 'default_group' and
      'admin_group'; to create more, use the CREATE RESOURCE GROUP command.
      Group options can be specified with the WITH clause. At the moment
      'cpu_rate_limit' and 'memory_limit' are mandatory; all other options
      are optional.
      
          -- create a resource group
          CREATE RESOURCE GROUP rg1 WITH (
              concurrency=1,
              cpu_rate_limit=.2,
              memory_limit=.2
          );
          -- query the resource group
          SELECT oid FROM pg_resgroup WHERE rsgname='rg1';
          SELECT * from gp_toolkit.gp_resgroup_config
              WHERE groupname='rg1';
          -- create/alter a role and assign it to this group
          CREATE ROLE r1 RESOURCE GROUP rg1;
          ALTER ROLE r2 RESOURCE GROUP rg1;
      
      *NOTE*: this commit only implements the SQL syntax; the actual resource
      limits do not take effect yet, as resource groups are still under
      development.

      *NOTE*: test cases are not included in this commit: once a testing
      resgroup is created it can't be dropped, due to the lack of DROP syntax,
      so the test cases couldn't be re-run and would leave side effects in the
      system. It's better to provide test cases after DROP RESOURCE GROUP is
      implemented.
      Signed-off-by: Ning Yu <nyu@pivotal.io>
    • Restrict the gpfdist sslclean workaround to cmdline only · 78c29ff4
      Adam Lee authored
      gpfdist waits 5 seconds before closing SSL sessions to work around a
      system-related issue on Solaris; this commit restricts the workaround to
      cmdline only.
      Signed-off-by: Adam Lee <ali@pivotal.io>
      Signed-off-by: Yuan Zhao <yuzhao@pivotal.io>
    • Make git submodules clonable recursively without github keys · eb888c69
      Tushar Dadlani authored
      - Previously, if you wanted the entire gpdb source code including
        submodules, you could not clone it, because you would need your
        github key to clone the submodules.
  3. 15 March 2017, 7 commits
    • rebased to upstream tag REL8_4_22 (#2011) · 614ca952
      Dave Cramer authored
      fix tests, add init_file to ignore GPDB segment output
      use init_file from main regression tests, fix expected results
      Requires GPDB to be configured with --with-openssl to pass
    • Fix spelling of commandline option in gpfdist · bcac963c
      Daniel Gustafsson authored
    • Use the new optimizer_trace_fallback GUC to detect ORCA fallbacks. · bb3e4b4f
      Heikki Linnakangas authored
      qp_olap_group2 did an elaborate dance with optimizer_log=on,
      client_min_messages=on, and gpdiff rules to detect whether the queries
      fell back to the traditional planner. Replace all that with the new simple
      optimizer_trace_fallback GUC.
      
      Also enable optimizer_trace_fallback in the 'gp_optimizer' test. Since this
      test is all about testing ORCA, it seems appropriate to record which queries
      currently fall back and which do not, so that we detect regressions where
      we start to fall back on queries that ORCA used to be able to plan. There
      was one existing test that explicitly set client_min_messages=on, like the
      tests in qp_olap_group2, to detect fallback. I kept those extra logging
      GUCs for that one case, so that we have some coverage for that too, although
      I'm not sure how worthwhile it is anymore.

      In passing, in the one remaining test in gp_optimizer that sets
      client_min_messages='log', don't assume that log_statement is set to 'all'.

      Setting optimizer_trace_fallback=on for 'gp_optimizer' caught the issue
      I fixed in the previous commit: one of the ANALYZE queries still used ORCA.
    • Disable ORCA also for the internal query to get index size. · 702d5683
      Heikki Linnakangas authored
      For the same reasons we disabled ORCA in all the other ANALYZE queries.
      Missed this one.
      
      I believe we don't want to use ORCA for any of ANALYZE's internal
      queries, current or future ones, so move the disabling of ORCA one level
      up, into a wrapper around analyze_rel().
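
      The change boils down to saving the GUC that selects ORCA, forcing the
      traditional planner, calling analyze_rel(), and restoring the GUC
      afterwards. A minimal sketch of that pattern follows; the wrapper name,
      the `optimizer` variable, and the analyze_rel() argument list are
      assumptions made for illustration, not GPDB's actual code.

          /*
           * Illustrative only: identifiers and signatures are assumed; real
           * code would also go through the proper GUC machinery.
           */
          static void
          analyze_rel_forcing_planner(Oid relid, VacuumStmt *vacstmt)
          {
              bool    save_optimizer = optimizer; /* assumed bool behind the
                                                   * optimizer GUC */

              optimizer = false;  /* run ANALYZE's internal queries with the
                                   * traditional planner */
              PG_TRY();
              {
                  analyze_rel(relid, vacstmt);    /* arguments abridged */
              }
              PG_CATCH();
              {
                  optimizer = save_optimizer;
                  PG_RE_THROW();
              }
              PG_END_TRY();

              optimizer = save_optimizer;
          }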
    • Fix error in pgbouncer doc example · 2be49a69
      Eamon authored
      [ci skip]
    • Doc linkcheck (#2040) · 06062962
      Chuck Litzell authored
      * docs: fix broken links and permanent redirects
      
      * fix doubled slashes in URLs
    • Fix planning of a table function with an aggregate. · be4d9ca4
      Heikki Linnakangas authored
      add_second_stage_agg() builds a SubQueryScan and moves the "current" plan
      underneath it. The SS_finalize_plan() call was misplaced: it needs to be
      called before constructing the new upper-level range table, because any
      references to the range table in the subplan refer to the original range
      table, not the new dummy one that only contains an entry for the
      SubQueryScan. It looks like the only part of SS_finalize_plan() processing
      that accesses the range table is the handling of a TableFunctionScan. That
      led to an assertion failure or crash if a table function was used in a
      query with an aggregate.
      
      Fixes github issue #2033.
  4. 14 March 2017, 12 commits
  5. 13 March 2017, 6 commits
    • Right way to test in Makefile · 14f8c0da
      Adam Lee authored
      `-[ "$str" = "foo" ] && $(MAKE) -C bar` ignores errors while making bar;
      it's better to use a full `if-then` clause.
    • Move test case for querying pg_views, while a view is dropped. · 59eb4ac1
      Heikki Linnakangas authored
      I modified the test slightly to eliminate the sleep, making the test run
      faster.
    • Remove redundant test on dropping partition constraint. · c8bb60d6
      Heikki Linnakangas authored
      We have a test for this in the main test suite, in the 'partition' test.
      This one:
      
      create table rank_damage (i int, j int)  partition by range(j) (start(1)
      end(10) every(1));
      -- [other tests]
      alter table rank_damage_1_prt_1 drop constraint rank_damage_1_prt_1_check;
    • Not to compile gpfdist/ext in OSes other than Windows · 59833476
      Adam Lee authored
      > src/bin/gpfdist/ext is a submodule, from
      > https://github.com/greenplum-db/gpfdist-ext. It contains tarballs of libapr,
      > libevent, openssl and zlib, and a bunch of patches. Why do we build gpfdist
      > like that? On my laptop, I never use those ancient, patched, versions, I
      > link with the normal libraries that come with my distribution, and it works
      > fine.
      >
      > I think that has something to do with Windows, because I also found this in
      > src/interfaces/libpq/Makefile:
      >
      > ifeq ($(PORTNAME), win32)
      > SHLIB_LINK += -L../../../../gpAux/ext/win32/kfw-3-2-2/lib
      > -L../../../src/bin/gpfdist/ext/lib -lgssapi32 -lshfolder -lwsock32 -lws2_32
      > -lsecur32 $(filter -leay32 -lssleay32 -lcomerr32 -lkrb5_32, $(LIBS))
      > endif
      >
      > But if that's only for Windows builds, why do we try to build it on Linux?
      >
      > - Heikki
      
      Thanks Heikki for bringing this up. We created a story to review the
      gpfdist/ext submodule; this commit removes it from the build on other
      OSes first.
    • Fix a coverity issue in dblink · 8a6d38d6
      Adam Lee authored
      It's actually safe, because the functions we created take two arguments
      at most. Fix it anyway, in case functions with three arguments are
      introduced in the future (a sketch of the defensive pattern follows the
      Coverity excerpt below).
      
      /tmp/build/0e1b53a0/gpdb_src/contrib/dblink/dblink.c: 241 in dblink_connect()
      
      ________________________________________________________________________________________________________
      *** CID 163942:  Null pointer dereferences  (FORWARD_NULL)
      /tmp/build/0e1b53a0/gpdb_src/contrib/dblink/dblink.c: 241 in dblink_connect()
      235      * Create a persistent connection to another database
      236      */
      237     PG_FUNCTION_INFO_V1(dblink_connect);
      238     Datum
      239     dblink_connect(PG_FUNCTION_ARGS)
      240     {
      >>>     CID 163942:  Null pointer dereferences  (FORWARD_NULL)
      >>>     Assigning: "connstr" = "NULL".
      241             char       *connstr = NULL;
      242             char       *connname = NULL;
      243             char       *msg;
      244             PGconn     *conn = NULL;
      245             remoteConn *rconn = NULL;
      246
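
      The shape of the fix is simply to guarantee that connstr is assigned on
      every code path before it is used. A sketch of that defensive pattern is
      below; it uses the standard fmgr macros and text_to_cstring() for
      brevity, so the committed diff may differ in detail.

          /*
           * Defensive argument handling: connstr can no longer stay NULL,
           * even if a variant with another argument count is ever added.
           */
          if (PG_NARGS() == 2)
          {
              connname = text_to_cstring(PG_GETARG_TEXT_PP(0));
              connstr  = text_to_cstring(PG_GETARG_TEXT_PP(1));
          }
          else if (PG_NARGS() == 1)
              connstr = text_to_cstring(PG_GETARG_TEXT_PP(0));
          else
              elog(ERROR, "unexpected number of arguments: %d", PG_NARGS());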
      Signed-off-by: Adam Lee <ali@pivotal.io>
      Signed-off-by: Yuan Zhao <yuzhao@pivotal.io>
    • Backport to fix dblink_connect() password verification · caeddd3d
      Adam Lee authored
      	commit cae7ad90
      	Author: Tom Lane <tgl@sss.pgh.pa.us>
      	Date:   Mon Sep 22 13:55:14 2008 +0000
      
      	    Fix dblink_connect() so that it verifies that a password is supplied in the
      	    conninfo string *before* trying to connect to the remote server, not after.
      	    As pointed out by Marko Kreen, in certain not-very-plausible situations
      	    this could result in sending a password from the postgres user's .pgpass file,
      	    or other places that non-superusers shouldn't have access to, to an
      	    untrustworthy remote server.  The cleanest fix seems to be to expose libpq's
      	    conninfo-string-parsing code so that dblink can check for a password option
      	    without duplicating the parsing logic.
      
      	    Joe Conway, with a little cleanup by Tom Lane
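
      For reference, the check this backport enables looks roughly like the
      sketch below: parse the conninfo string with libpq's PQconninfoParse()
      and reject it when no password option is present (in dblink this is only
      enforced for non-superusers). The helper name and error wording are
      illustrative only, and the real dblink code may use a different libpq
      entry point for the parsing.

          #include "postgres.h"
          #include "libpq-fe.h"

          /* Illustrative helper: require a password option in a conninfo string. */
          static void
          require_password_in_conninfo(const char *connstr)
          {
              PQconninfoOption *options;
              PQconninfoOption *opt;
              bool              found = false;

              options = PQconninfoParse(connstr, NULL);
              if (options == NULL)
                  elog(ERROR, "invalid connection string");

              for (opt = options; opt->keyword != NULL; opt++)
              {
                  if (strcmp(opt->keyword, "password") == 0 &&
                      opt->val != NULL && opt->val[0] != '\0')
                      found = true;
              }
              PQconninfoFree(options);

              if (!found)
                  elog(ERROR, "password is required in the connection string");
          }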