1. 25 September 2017, 3 commits
    • Add pipeline support for AIX clients and loaders · 68362b41
      Committed by Peifeng Qiu
      Concourse doesn't support AIX natively, so we clone the repo at the
      corresponding commit on a remote machine, compile the packages, and
      download them back to the Concourse container as output.
      
      Testing clients and loaders on a platform without a GPDB server is
      another challenge. We set up a GPDB server on the Concourse container
      just like most installcheck tests do, and use SSH tunnels to forward
      ports to and from the remote host. This way both the client/loader
      tools and the GPDB server behave as if they were on the same machine,
      and the tests can run normally.
    • Report COPY PROGRAM's error output · 2b51c16b
      Committed by Adam Lee
      Replace popen() with popen_with_stderr(), which is also used by
      external web tables, to collect the stderr output of the program.

      Since popen_with_stderr() forks a `sh` process, it is almost always
      successful, so this commit also catches errors that happen in fwrite().

      Also pass variables the same way external web tables do.
      Signed-off-by: Xiaoran Wang <xiwang@pivotal.io>
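      As a rough illustration (the table name and program are hypothetical),
      a failing program's stderr should now surface in the COPY error:

          -- hedged sketch; assumes GPDB's COPY ... PROGRAM support on table t1
          COPY t1 FROM PROGRAM 'gunzip -c /data/missing.csv.gz' CSV;
          -- the reported error now includes the program's stderr output,
          -- e.g. "gunzip: can't open '/data/missing.csv.gz'"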
    • Fix cgroup mount point detect in gpconfig. · 37e3e66d
      Committed by Zhenghua Lyu
      The previous code used the Python package psutil to get the system's
      mount information, which reads the contents of /etc/mtab. In some
      environments, /etc/mtab does not contain the cgroup mount point
      information. In this commit we scan /proc/self/mounts instead to find
      the cgroup mount points.
  2. 23 September 2017, 9 commits
    • Coverity fix: elog string formatting · d4a707c7
      Committed by Kavinder Dhaliwal
    • Add a long living account for Relinquished Memory · 1822c826
      Committed by Kavinder Dhaliwal
      There are cases where, during execution, a memory-intensive (MI)
      operator may not use all the memory allocated to it. The extra memory
      (quota - allocated) can then be relinquished for other MI nodes to use
      during execution of the statement. For example:
      
      ->  Hash Join
               ->  HashAggregate
               ->  Hash
      In the above plan fragment the Hash Join operator has an MI operator in
      both its inner and outer subtrees. If the Hash node uses much less
      memory than was given as its quota, it will now call
      MemoryAccounting_DeclareDone(), and the difference between its quota
      and allocated amount will be added to the allocated amount of the
      RelinquishedPool. This enables HashAggregate to request memory from the
      RelinquishedPool if it exhausts its quota, to prevent spilling.
      
      This PR adds two new APIs to the MemoryAccounting framework:
      
      MemoryAccounting_DeclareDone(): Add the difference between a memory
      account's quota and its allocated amount to the long living
      RelinquishedPool
      
      MemoryAccounting_RequestQuotaIncrease(): Retrieve all relinquished
      memory by incrementing an operator's operatorMemKb and setting the
      RelinquishedPool to 0
      
      Note: This PR introduces the facility for Hash to relinquish memory to
      the RelinquishedPool memory account, and for the Agg operator
      (specifically HashAgg) to request an increase to its quota before it
      builds its hash table. This commit does not generally apply this
      paradigm to all MI operators.
      Signed-off-by: Sambitesh Dash <sdash@pivotal.io>
      Signed-off-by: Melanie Plageman <mplageman@pivotal.io>
    • Cherry-pick 'ae47eb1' from upstream to fix Nested CTE errors (#3360) · 009b1809
      Committed by sambitesh

      Before this cherry-pick, the query below would have errored out:
      
      WITH outermost(x) AS (
        SELECT 1
        UNION (WITH innermost as (SELECT 2)
               SELECT * FROM innermost
               UNION SELECT 3)
      )
      SELECT * FROM outermost;
      Signed-off-by: Melanie Plageman <mplageman@pivotal.io>
    • Update 5.json with catalog changes (amgetmulti -> amgetbitmap) · 4daa7c5f
      Committed by Tom Meyer
      To update 5.json, we ran:
      
      cat src/include/catalog/*.h | perl src/backend/catalog/process_foreign_keys.pl > gpMgmt/bin/gppylib/data/5.json
      Signed-off-by: Jacob Champion <pchampion@pivotal.io>
    • Tightens readme · bb022db3
      Committed by Todd Sedano
    • Add gp_stat_replication view · 1546ec3b
      Committed by Taylor Vesely
      In order to view the primary segments' replication stream data from
      their pg_stat_replication views, we currently need to connect to each
      primary segment individually via utility mode. To make life easier, we
      introduce a function that fetches each primary segment's replication
      stream data and wraps it in a view named gp_stat_replication. It is now
      possible to view all of the cluster's replication information from the
      master in a regular psql session.
      
      Authors: Taylor Vesely and Jimmy Yih
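      A hedged usage sketch, run on the master; the column names are
      assumptions mirroring upstream pg_stat_replication plus a segment id:

          SELECT gp_segment_id, state, sent_location
          FROM gp_stat_replication;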
    • Bump ORCA to 2.46.2 · 4e9da061
      Committed by Bhuvnesh Chaudhary
      Signed-off-by: Bhuvnesh Chaudhary <bchaudhary@pivotal.io>
    • PXF CI: run pxf_automation suite as part of pxf regression test job (#3347) · facc4df1
      Committed by Alexander Denissov
      * run pxf_automation as part of pxf regression test
      Signed-off-by: Alexander Denissov <adenissov@pivotal.io>
      
      * added missing input
      Signed-off-by: Lav Jain <ljain@pivotal.io>
      
      * Fix symbolic links
      
      * create extension pxf before running automation tests
      
      * Hack python psi module by copying it from system to gpdb python
      
      * remove if exists for extension
      
      * Run pxf_automation before regression tests
      
      * Change owner to gpadmin before running tests
      
      * Generalize copying of PSI package
      
      * Generalize install path using GPHOME
    • Edits to make 'resource management' using resource queues, resources … (#3353) · 8652cbe5
      Committed by David Yozie

      * Edits to make 'resource management' using resource queues, resource groups, consistent throughout. This is to distinguish between resource management and general workload profiles, as well as to avoid confusion with the Workload Manager product.
      
      * Edits from Lisa's review
  3. 22 September 2017, 6 commits
    • Merge amgetbitmap AM functions. · 1e39a91e
      Committed by Daniel Gustafsson
      This merges and backports the upstream commits which replace the
      amgetmulti AM function with amgetbitmap, which performs the whole
      indexscan in one call (this applies to HashBitmap; StreamBitmaps are
      not affected). GPDB was more or less already doing this, as the
      upstream patch was originally submitted from Greenplum. This commit
      refactors the AM function to mimic the upstream behavior, while
      keeping the GPDB API for the callsites.
      
      The below commits are included either in full, or in part:
      
        commit 4e82a954
        Author: Tom Lane <tgl@sss.pgh.pa.us>
        Date:   Thu Apr 10 22:25:26 2008 +0000
      
          Replace "amgetmulti" AM functions with "amgetbitmap", in which the whole
          indexscan always occurs in one call, and the results are returned in a
          TIDBitmap instead of a limited-size array of TIDs.  This should improve
          speed a little by reducing AM entry/exit overhead, and it is necessary
          infrastructure if we are ever to support bitmap indexes.
      
          In an only slightly related change, add support for TIDBitmaps to preserve
          (somewhat lossily) the knowledge that particular TIDs reported by an index
          need to have their quals rechecked when the heap is visited.  This facility
          is not really used yet; we'll need to extend the forced-recheck feature to
          plain indexscans before it's useful, and that hasn't been coded yet.
          The intent is to use it to clean up 8.3's horrid @@@ kluge for text search
          with weighted queries.  There might be other uses in future, but that one
          alone is sufficient reason.
      
          Heikki Linnakangas, with some adjustments by me.
      
        commit 1dcf6fdf
        Author: Teodor Sigaev <teodor@sigaev.ru>
        Date:   Sat Aug 23 10:37:24 2008 +0000
      
          Fix possible duplicate tuples while  GiST scan. Now page is processed
          at once and ItemPointers are collected in memory.
      
          Remove tuple's killing by killtuple() if tuple was moved to another
          page - it could produce unaceptable overhead.
      
          Backpatch up to 8.1 because the bug was introduced by GiST's concurrency support.
      
        commit b9856b67
        Author: Teodor Sigaev <teodor@sigaev.ru>
        Date:   Wed Oct 22 12:53:56 2008 +0000
      
          Fix GiST's killing tuple: GISTScanOpaque->curpos wasn't
          correctly set. As result, killtuple() marks as dead
          wrong tuple on page. Bug was introduced by me while fixing
          possible duplicates during GiST index scan.
    • Enable ORCA to be tracked by Mem Accounting · 669dd279
      Committed by Kavinder Dhaliwal
      Before this commit all memory allocations made by ORCA/GPOS were a
      black box to GPDB. However, the groundwork had been in place to allow
      GPDB's Memory Accounting Framework to track memory consumption by ORCA.
      This commit introduces two new functions, Ext_OptimizerAlloc and
      Ext_OptimizerFree, which pass their parameters through to gp_malloc and
      gp_free and do some bookkeeping against the Optimizer memory account.
      This introduces very little overhead to the GPOS memory management
      framework.
      Signed-off-by: Melanie Plageman <mplageman@pivotal.io>
      Signed-off-by: Sambitesh Dash <sdash@pivotal.io>
    • resgroup isolation2: increase memory limits for 8.4 · 39bb8145
      Committed by Jacob Champion
      8.4 seems to use more memory during this test. To get master green
      again, we're checking in these changes to the memory limits for the
      resource group tests. Follow-up should be on issue #3345; there's a good
      chance this will not be our final solution to this test failure.
      Signed-off-by: Tom Meyer <tmeyer@pivotal.io>
    • Fix comment, rendered incorrect by commit f7101d98. · a133901a
      Committed by Heikki Linnakangas
      We can encounter tuples that belong to later batches even after the
      first pass. Revert the comment to the way it is in upstream; I forgot
      to update it correctly in that commit.
    • Merge with commit 'f260edb1', from PostgreSQL 8.4devel. · 3b4cd788
      Committed by Heikki Linnakangas
      Noteworthy changes that were not totally straightforward to merge:
      
      * Changes in the hash function. This replaces the contents of hashfunc.c
        directly with REL8_4_STABLE, not just changes otherwise included in the
        merge batch. That includes later changes to the hash algorithm used. I
        didn't feel like trying to fix it to an intermediate state that we would
        just rewrite again later.
      
        The hash function had been replaced in GPDB, too, but I couldn't quite
        figure out what the GPDB algorithm was, or whether it was better and
        how. In any case, I believe the new PostgreSQL algorithm is decent, so
        let's just use that. I'm not very impressed by the old code; there was
        weird stuff going on with the little and big endianness handling. And
        at the top, WORDS_BIGENDIAN was misspelled as WORS_BIGENDIAN, so it
        never worked as intended on big endian systems.
      
        Note that GPDB uses a completely different set of hash functions for
        calculating the DISTRIBUTED BY key, so this doesn't affect pg_upgrade.
        This does invalidate hash indexes, but they're not supported on GPDB
        anyway. And we don't support hash partitioning either.
      
      * Pattern selectivity functions had been heavily modified in GPDB, but
        this replaces them with the upstream version. It was not clear to us
        what the purpose of the GPDB changes was. That ought to be revisited,
        and there's a GPDB_84_MERGE_FIXME comment about it.
      
      * Commit 95c238d9, to make COPY of CSV files faster, was not merged.
        The function had been heavily modified in GPDB, and it was not
        immediately clear how to resolve the conflicts. That commit was just a
        performance enhancement, so we can revisit that later. Added a
        GPDB_84_MERGE_FIXME comment about that too.
      
      * Resurrect the MyXactAccessedTempRel global variable. It's not used for
        anything in GPDB, as noted in the comment in PrepareTransaction. We had
        #ifdef'd out the variable, and all the places that set the variable. To
        reduce future merge conflicts, it seems better to have the variable and
        keep all the places where it's set unmodified from the upstream, and only
        comment out the place where it's checked in PrepareTransaction.
      
      * heap_release_fetch was removed in upstream, because it was unused.
        However, it was still used in one GPDB-specific function, in nbtree.c.
        Replace the call in nbtree.c with a ReleaseBuffer() + heap_fetch(), and
        add a GPDB_84_MERGE_FIXME to revisit.
      
      * This merge included an upstream change to add the USE_SEGMENTED_FILES
        flag, but it was removed later in the 8.4 dev cycle. Cherry-pick the
        change that removes it now, to avoid having to make it work just to
        remove it later. (commit 3c6248a8)
      
      * This adds support for enum-type GUCs, but we do not yet take advantage
        of that in the GPDB-specific GUCs, except for a few that shared code
        with client_min_messages and log_min_messages.
      
      * Reshuffle a few OIDs to avoid a collision. We had reserved OID 1980
        for the int8_ops opclass. But that is now used for the
        numeric_div_trunc() function, which we just merged in. In the
        upstream, OID 3124 is reserved for the opclass, but only since
        version 9.2. Before that, it used whatever was free at initdb time.
        But we have been using OID 3124 for the GPDB-specific
        pg_proc_callback system table.

        To resolve this mess, change the OID of pg_proc_callback from 3124 to
        7176, to make 3124 available. And then use 3124 for int8_ops. That
        leaves 1980 for the numeric_div_trunc() function, like in upstream.
      
      * TRUNCATE triggers now work, and to make that work, I made some
        changes to the way statement-level triggers are fired in general. The
        goal with statement-level triggers is to always execute them on the
        dispatcher, but they have been broken and unsupported before. At
        first, I thought these changes would be enough to do that for all
        statement-level triggers, but testing shows that they are not quite
        enough. So statement-level triggers remain broken, as they were
        before, even though we pass the truncate-trigger tests now.
      
      This has been a joint effort between Heikki Linnakangas, Daniel Gustafsson,
      Jacob Champion and Tom Meyer.
    • docs - add suse11 swapaccount req to resgroup cgroup cfg (#3323) · 430e7343
      Committed by Lisa Owen
      * docs - add suse11 swapaccount req to resgroup cgroup cfg
      
      * must reboot after setting boot parameters
  4. 21 September 2017, 14 commits
    • Mask out differences in plperl.c line numbers in errors. · 8b153171
      Committed by Heikki Linnakangas
      Ideally, we would use proper error codes, or find some other way to prevent
      the useless "(plperl.c:2118)" from appearing in PL/perl errors. Later
      versions of PostgreSQL do that, so we'll get that eventually. In the
      meanwhile, silence errors caused by code movement in that file. Same as
      we had done for plperl's own tests already.
    • Use autoconf for resolving PXF library dependency · 6f1ca717
      Committed by Daniel Gustafsson
      Leverage the core autoconf scaffolding for resolving the dependency
      on libcurl. Enabling PXF in autoconf now automatically adds libcurl
      as a dependency. Coupled with the recent commit which relaxes the
      curl version requirement on macOS, we can remove the library copying
      from the PXF makefile as well.
    • Fix bug in handling re-scan of a hash join. · f7101d98
      Committed by Heikki Linnakangas
      The WITH RECURSIVE test case in 'join_gp' would miss some rows, if
      the hash algorithm (src/backend/access/hash/hashfunc.c) was replaced
      with the one from PostgreSQL 8.4, or if statement_mem was lowered from
      1000 kB to 700 kB. This is what happened:
      
      1. A tuple belongs to batch 0, and is kept in memory during processing
         batch 0.
      
      2. The outer scan finishes, and we spill the inner batch 0 from memory
         to a file, with SpillFirstBatch, and start processing batch 1.
      
      3. While processing batch 1, the number of batches is increased, and
         the tuple that belonged to batch 0, and was already written to the
         batch 0's file, is moved, to a later batch.
      
      4. After the first scan is complete, the hash join is re-scanned
      
      5. We reload the batch file 0 into memory. While reloading, we encounter
         the tuple that now doesn't seem to belong to batch 0, and throw it
         away.
      
      6. We perform the rest of the re-scan. We have missed any matches to the
         tuple that was thrown away. It was not part of the later batch files,
         because in the first pass, it was handled as part of batch 0. But in
         the re-scan, it was not handled as part of batch 0, because nbatch was
         now larger, so it didn't belong there.
      
      To fix this: when, while reloading a batch file, we see a tuple that
      actually belongs to a later batch file, we write it to that later file.
      To avoid adding it there multiple times when the hash join is
      re-scanned repeatedly, if any tuples are moved while reloading a batch
      file, we destroy the batch file and re-create it with just the
      remaining tuples.

      This is made a bit complicated by the fact that BFZ temp files don't
      support appending to a file that has already been rewound for reading.
      So what we actually do is always re-create the batch file, even if
      there have been no changes to it. I left comments about that. Ideally,
      we would either support re-appending to BFZ files, or stop using BFZ
      workfiles for this altogether (I'm not convinced they're any better
      than plain BufFiles). But that can be done later.
      
      Fixes github issue #3284
    • Don't double-count inner tuples reloaded from file. · 429ff8c4
      Committed by Heikki Linnakangas
      ExecHashTableInsert also increments the counter, so we don't need to do
      it here. This is harmless AFAICS; the counter isn't used for anything
      but instrumentation at the moment, but it confused me while debugging.
    • Fix CURRENT OF to work with PL/pgSQL cursors. · 91411ac4
      Committed by Heikki Linnakangas
      It only worked for cursors declared with DECLARE CURSOR before. You got
      a "there is no parameter $0" error if you tried. This moves the
      decision on whether a plan is "simply updatable" from the parser to the
      planner. Doing it in the parser was awkward, because we only want to do
      it for queries that are used in a cursor, and for SPI queries we don't
      know that at that time yet.

      For some reason, the copy, out, and read functions of CurrentOfExpr
      were missing the cursor_param field. While we're at it, reorder the
      code to match upstream.
      
      This only makes the required changes to the Postgres planner. ORCA has never
      supported updatable cursors. In fact, it will fall back to the Postgres
      planner on any DECLARE CURSOR command, so that's why the existing tests
      have passed even with optimizer=off.
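      A minimal sketch of what now works (the table and function are
      hypothetical):

          CREATE FUNCTION zero_first_row() RETURNS void AS $$
          DECLARE
              c CURSOR FOR SELECT * FROM accounts;
              r RECORD;
          BEGIN
              OPEN c;
              FETCH c INTO r;
              -- previously failed with: there is no parameter $0
              UPDATE accounts SET balance = 0 WHERE CURRENT OF c;
              CLOSE c;
          END;
          $$ LANGUAGE plpgsql;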
    • Remove now-unnecessary code from gp_read_error_log to dispatch the call. · 4035881e
      Committed by Heikki Linnakangas
      There was code in gp_read_error_log() to "manually" dispatch the call
      to all the segments, if it was executed on the dispatcher. This was
      previously necessary because, even though the function was marked with
      prodataaccess='s', the planner did not guarantee that it would be
      executed on the segments when called in the targetlist, like "SELECT
      gp_read_error_log('tab')". Now that we have the EXECUTE ON ALL SEGMENTS
      syntax, and are more rigorous about enforcing that in the planner, this
      hack is no longer required.
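      With the hack removed, a call like the following is assumed to still be
      dispatched to every segment by the planner itself ('my_ext_table' is a
      hypothetical external table name):

          SELECT * FROM gp_read_error_log('my_ext_table');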
    • Refactor resource group source code, part 2. · a2cf9bdf
      Committed by Ning Yu
      * resgroup: provide helper funcs for memory usage updates.
      
      We used to have complex, duplicated logic to update group and slot
      memory usage in different contexts; now we provide two helper functions
      to increase or decrease memory usage in the group and slot.

      The two badly named functions `attachToSlot()` and `detachFromSlot()`
      are retired now.
      
      * resgroup: provide helper function to unassign a dropped resgroup.
      
      * resgroup: move complex checks into helper functions.
      
      Many helper functions were added with descriptive names to increase
      readability of lots of complex checks.
      
      Also added a pointer to resource group slot in self.
      
      * resgroup: add helper functions for wait queue operations.
    • Fix aix7_ppc_64 making script · 15c04803
      Committed by Adam Lee
          $ make -j -s install
          ...
          --- subprocess32, Linux only
          /bin/sh: line 3: [: =: unary operator expected
          --- stream
          ...
          Greenplum Database installation complete.
      
      When `$(BLD_ARCH)` is empty, the check becomes `[ = 'aix7_ppc_64' ]`,
      which produces the "unary operator expected" error. (Quoting the
      expansion in the test avoids that failure mode.)
    • Make gp_replication.conf for USE_SEGWALREP only. · b7ce6930
      Committed by Ashwin Agrawal
      The intent of this extra configuration file is to control the
      synchronization between primary and mirror for WALREP.

      gp_replication.conf is not designed to work with filerep; for example,
      scripts like gpexpand will fail, since they directly modify the
      configuration files instead of going through initdb.
      Signed-off-by: Xin Zhang <xzhang@pivotal.io>
    • d60e2389
    • Take advantage of the new EXECUTE ON syntax in gp_toolkit. · 9a039e4f
      Committed by Heikki Linnakangas
      Also change a few regression tests to use the new syntax, instead of
      gp_toolkit's __gp_localid and __gp_masterid functions.
    • Add support for CREATE FUNCTION EXECUTE ON [MASTER | ALL SEGMENTS] · aa148d2a
      Committed by Heikki Linnakangas
      We already had a hack for the EXECUTE ON ALL SEGMENTS case, by setting
      prodataaccess='s'. This exposes the functionality to users via DDL, and adds
      support for the EXECUTE ON MASTER case.
      
      There was discussion on gpdb-dev about also supporting ON MASTER AND ALL
      SEGMENTS, but that is not implemented yet. There is no handy "locus" in the
      planner to represent that. There was also discussion about making a
      gp_segment_id column implicitly available for functions, but that is also
      not implemented yet.
      
      The old behavior was that if a function was marked as IMMUTABLE, it
      could be executed anywhere; otherwise it was always executed on the
      master. For backwards compatibility, this keeps that behavior for
      EXECUTE ON ANY (the default), so even if a function is marked as
      EXECUTE ON ANY, it will always be executed on the master unless it's
      IMMUTABLE.
      
      There is no support for these new options in ORCA. Using any ON MASTER
      or ON ALL SEGMENTS function in a query causes ORCA to fall back. This
      is the same as with the prodataaccess='s' hack that this replaces, but
      now that it is more user-visible, it would be nice to teach ORCA about
      it.
      
      The new options are only supported for set-returning functions, because for
      a regular function marked as EXECUTE ON ALL SEGMENTS, it's not clear how
      the results should be combined. ON MASTER would probably be doable, but
      there's no need for that right now, so punt.
      
      Another restriction is that a function with ON ALL SEGMENTS or ON MASTER can
      only be used in the FROM clause, or in the target list of a simple SELECT
      with no FROM clause. So "SELECT func()" is accepted, but "SELECT func() FROM
      foo" is not. "SELECT * FROM func(), foo" works, however. EXECUTE ON ANY
      functions, which is the default, work the same as before.
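      A hedged sketch of the new DDL and the placement rules described above
      (the function and its body are stand-ins):

          CREATE FUNCTION seg_hello() RETURNS SETOF text AS
          $$ SELECT 'hello'::text $$
          LANGUAGE sql EXECUTE ON ALL SEGMENTS;

          SELECT * FROM seg_hello();       -- OK: used in the FROM clause
          SELECT seg_hello();              -- OK: simple SELECT with no FROM
          -- SELECT seg_hello() FROM foo;  -- rejected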
    • Fix multistage aggregation plan targetlists · 41640e69
      Committed by Bhuvnesh Chaudhary
      If an aggregation query uses aliases that are the same as the table's
      actual columns, the aliases are propagated up from subqueries, and
      grouping is applied on the column alias, the aggregation plan may end
      up with inconsistent targetlists, causing a crash. For example:
      
      	CREATE TABLE t1 (a int) DISTRIBUTED RANDOMLY;
      	SELECT substr(a, 2) as a
      	FROM
      		(SELECT ('-'||a)::varchar as a
      			FROM (SELECT a FROM t1) t2
      		) t3
      	GROUP BY a;
  5. 20 September 2017, 8 commits
    • Dump more detailed info for memory usage in gp_resgroup_status · 2816fe67
      Committed by Pengzhou Tang
      In this commit, we add more detailed memory metrics to the
      'memory_usage' column of gp_resgroup_status, including current and
      available memory usage for the group, for each slot, and for the
      shared part.
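      For instance (assuming the view lives in the gp_toolkit schema, like
      the other resource group views):

          SELECT rsgname, memory_usage
          FROM gp_toolkit.gp_resgroup_status;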
    • resource group: refine ResGroupSlotAcquire · 4646bbc6
      Committed by Gang Xiong
      Previously, waiters on a dropped resource group needed to be reassigned
      to a new group; to achieve that, ResGroupSlotAcquire had grown
      complicated and hard to understand. This commit refines it.
      
      Author: Gang Xiong <gxiong@pivotal.io>
    • resgroup: Allow concurrency to be zero. · 77007ff6
      Committed by Pengzhou Tang
      Allow CREATE RESOURCE GROUP and ALTER RESOURCE GROUP to set concurrency
      to 0, so that after some time there will be no running queries and the
      resource group can be dropped. On drop, all pending queries are moved
      to the new resource group assigned to the role; if the role is also
      dropped, the pending queries are all canceled. Note that we do not
      allow setting the concurrency of the admin group to zero: superusers
      run under the admin group and only a superuser can alter resource
      groups, so once its concurrency was set to zero there would be no
      chance to set it again.
      Signed-off-by: Ning Yu <nyu@pivotal.io>
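      A sketch of the drop workflow this enables (the group name is
      hypothetical):

          ALTER RESOURCE GROUP rg_batch SET CONCURRENCY 0; -- admit no new queries
          -- ... wait for the running queries to finish ...
          DROP RESOURCE GROUP rg_batch; -- pending queries move to the role's new group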
    • Report error when 'COPY (SELECT ...) TO' with 'ON SEGMENT' · cbddcc86
      Committed by Ming LI
      Because we don't know where the result of the SELECT query is located,
      ON SEGMENT is forbidden for it.
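      A sketch of the distinction (table and paths hypothetical):

          COPY t1 TO '/tmp/t1_<SEGID>.csv' ON SEGMENT;           -- OK: table data lives on segments
          COPY (SELECT * FROM t1) TO '/tmp/out.csv' ON SEGMENT;  -- now reports an error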
    • Remove the restriction on sum of memory_spill_ratio and memory_shared_quota. · c5a5780a
      Committed by Richard Guo
      This commit does two changes:
      1. Remove the restriction that sum of memory_spill_ratio and memory_shared_quota
      must be no larger than 100.
      2. Change the range of memory_spill_ratio to be [0, 100].
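      For example, a definition like the following (the values are purely
      illustrative) is assumed to be accepted now, even though the two
      settings sum to more than 100:

          CREATE RESOURCE GROUP rg_test WITH (
              concurrency=10, cpu_rate_limit=20, memory_limit=30,
              memory_shared_quota=80, memory_spill_ratio=40);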
    • Fix warning of passing const to non-const parameter. · f4417c50
      Committed by Hubert Zhang
      The function FaultInjectorIdentifierStringToEnum(faultName) passed a
      const string to a non-const parameter, which caused a build warning.
      But on second thought, we already support injecting a fault by fault
      name without a corresponding fault identifier, so it's better to use
      the fault name instead of the fault enum identifier in the ereport.
    • Developer version of gpstart for WALRep · dc549c2f
      Committed by Taylor Vesely
      Adds a clusterstart command to gpsegwalrep.py to allow a user to start
      a cluster with WALRep configured. This is a developer utility that
      assumes all cluster replicas are present on localhost, and thus it is
      not intended for production use.