1. 19 Jun, 2018 (3 commits)
  2. 11 Jun, 2018 (2 commits)
  3. 06 Jun, 2018 (3 commits)
    • J
      Fix potential bugs reported by CoverityScan (#5105) · eb3d124b
      Committed by Jialun
      - Change strncpy to StrNCpy to make sure the dest string is terminated.
      - Initialize some variables before using them.
      eb3d124b
    • A
      Pass relstorage type to smgr layer. · 85fee736
      Committed by Ashwin Agrawal
      Without this patch the storage layout is not known in the md and smgr
      layers. Due to the lack of this info, sub-optimal operations need to be
      performed generically for all table types. For example, heap-specific
      functions like ForgetRelationFsyncRequests() and DropRelFileNodeBuffers()
      get called even for AO and CO tables.
      
      Add a new RelFileNodeWithStorageType struct to pass the storage type to
      the md and smgr layers. XLOG_XACT_COMMIT and XLOG_XACT_ABORT WAL records
      use the new structure, which carries both the RelFileNode and the
      storage type.
      Co-authored-by: David Kimura <dkimura@pivotal.io>
      85fee736
    • A
      Optimize and correct copy_append_only_data(). · b3aff72d
      Committed by Ashwin Agrawal
      Altering a tablespace needs to copy all underlying files of a table from
      one tablespace to another. When persistent tables were removed, this was
      implemented for AO/CO tables using a full directory scan to find and
      copy the files. This gets very inefficient, and its performance varies
      with the number of files present in the directory. Instead, use the same
      optimization logic as `mdunlink_ao()`, leveraging the known file layout
      of AO/CO tables.
      
      Also, the old logic had a couple of bugs:
      - It missed copying the base or .0 file, which means data loss if the table was altered in the past.
      - It xlogged even for temp tables.
      
      These are fixed as well with this patch. Additional tests are added to
      cover those missing scenarios. Also, the AO-specific code is moved out
      of tablecmds.c into aomd.c to reduce conflicts with upstream. (A sketch
      of the operation this optimizes follows.)
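      
      A minimal sketch of the operation this commit optimizes; the table and
      tablespace names are hypothetical, and the tablespace path is assumed
      to exist:
      
      ```sql
      -- Hypothetical append-optimized table and target tablespace.
      CREATE TABLESPACE fast_ts LOCATION '/data/fast_ts';
      CREATE TABLE ao_sales (id int, amount numeric)
          WITH (appendonly = true) DISTRIBUTED BY (id);
      
      -- Moving the table copies every underlying segment file; the patch
      -- walks the known AO/CO file layout instead of scanning the directory.
      ALTER TABLE ao_sales SET TABLESPACE fast_ts;
      ```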
      b3aff72d
  4. 05 Jun, 2018 (1 commit)
    • J
      Implement CPUSET (#5023) · 0c0782fe
      Committed by Jialun
      * Implement CPUSET, a new way of managing cpu resources in resource
      groups, which can reserve the specified cores exclusively for a
      specified resource group. This ensures that there are always cpu
      resources available for a group that has CPUSET set. The most common
      scenario is allocating fixed cores for short queries.
      
      - One can use it by executing CREATE RESOURCE GROUP xxx WITH (
        cpuset='0-1', xxxx); 0-1 are the cpu cores reserved for this
        group. Or execute ALTER RESOURCE GROUP xxx SET CPUSET '0,1' to
        modify the value (see the SQL sketch at the end of this message).
      - The syntax of CPUSET is a comma-separated list of tuples, where
        each tuple is a single core number or an interval of core
        numbers, e.g. 0,1,2-3. All the cores in CPUSET must be available
        in the system, and the core numbers of different groups cannot
        overlap.
      - CPUSET and CPU_RATE_LIMIT are mutually exclusive. One cannot
        create a resource group with both CPUSET and CPU_RATE_LIMIT.
        However, a group can be freely switched between CPUSET and
        CPU_RATE_LIMIT with an ALTER operation; once one feature is set,
        the other is disabled.
      - The cpu cores are returned to GPDB when the group is dropped,
        the CPUSET value is changed, or CPU_RATE_LIMIT is set.
      - If some cores have been allocated to a resource group, then the
        CPU_RATE_LIMIT of the other groups indicates a percentage of
        only the remaining cpu cores.
      - Even if GPDB is busy and all the other cores, i.e. those not
        allocated exclusively to any resource group through CPUSET,
        have been used up, the cores in a CPUSET are still not handed
        out to other groups.
      - The cpu cores in CPUSET are used exclusively only at the GPDB
        level; other non-GPDB processes in the system may still use them.
      - Add test cases for this new feature; since the test environment
        must contain at least two cpu cores, we upgrade the instance_type
        configuration of the resource_group jobs.
      
      * - Handle the case where the cgroup directory cpuset/gpdb
        does not exist.
      - Implement pg_dump support for cpuset & memory_auditor.
      - Fix a typo.
      - Change the default cpuset value from an empty string to -1,
        because the code in 5X assumes that every default value in
        resource groups is an integer; a non-integer value would make
        the system fail to start.
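      
      A minimal sketch of the syntax described above; the group name, core
      numbers, and extra options are hypothetical, and the set of required
      options may vary by version:
      
      ```sql
      -- Reserve cores 0-1 exclusively for a hypothetical group.
      CREATE RESOURCE GROUP short_queries
          WITH (cpuset='0-1', concurrency=10, memory_limit=10);
      
      -- Change the reserved cores; the old ones are returned to GPDB.
      ALTER RESOURCE GROUP short_queries SET CPUSET '0,1';
      
      -- Switching to CPU_RATE_LIMIT disables CPUSET for this group.
      ALTER RESOURCE GROUP short_queries SET CPU_RATE_LIMIT 20;
      ```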
      0c0782fe
  5. 04 Jun, 2018 (2 commits)
  6. 02 Jun, 2018 (1 commit)
    • A
      Fix some incorrectly merged code (#5084) · 50903a55
      Committed by Ashwin Agrawal
      * Remove redundant copy of toast and its index in ATExecSetTableSpace()
      
      Commit f70f49fe introduced this double copy of
      toast and its index. Let's fix it.
      
      * Fix mismerged lines in src/interfaces/libpq/Makefile.
      
      Author: Ashwin Agrawal <aagrawal@pivotal.io>
      50903a55
  7. 01 Jun, 2018 (4 commits)
    • T
      Add tests for GPDB specific collation creation · e0845912
      Committed by Taylor Vesely
      Unlike upstream, GPDB needs to keep collations in sync between multiple
      databases. Add tests for GPDB-specific collation behavior.
      
      These tests need to import a system locale, so add a @syslocale@ variable to
      gpstringstubs.pl in order to test the creation/deletion of collations from
      system locales.
      Co-authored-by: Jim Doty <jdoty@pivotal.io>
      e0845912
    • T
      Add dispatch to collation creation commands · d73a185b
      Committed by Taylor Vesely
      Make CREATE COLLATION and pg_import_system_collations() parallel aware by
      dispatching collation creation to the QEs.
      
      In order for collations to work correctly, we need to be sure that every
      collation created on the QD is also installed on the QEs, and that the
      OID matches in every database. We take advantage of two-phase commit to
      prevent a collation from being created if there is a problem adding it
      on any QE. In upstream, collations are created during initdb, but this
      won't work for GPDB, because while initdb is running there is no way to
      be sure that every segment has the same locales installed.
      
      We disable collation creation during initdb and make it the
      responsibility of the system administrator to initialize any needed
      collations, by either running a CREATE COLLATION command or running
      the pg_import_system_collations() UDF (a sketch follows).
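      
      A hedged sketch of those two initialization paths; the collation name
      and locale are hypothetical, and the pg_import_system_collations()
      signature is assumed to match the newer upstream version pulled in
      below:
      
      ```sql
      -- Create a single collation; the command is dispatched to the QEs so
      -- the OID matches in every database (hypothetical name and locale).
      CREATE COLLATION american (LOCALE = 'en_US.utf8');
      
      -- Or import every locale reported by "locale -a" into pg_catalog;
      -- the function returns the number of collations it added.
      SELECT pg_import_system_collations('pg_catalog');
      ```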
      Co-authored-by: Jim Doty <jdoty@pivotal.io>
      d73a185b
    • T
      Updated version of pg_import_system_collations() · 91d65139
      Committed by Tom Lane
      Pull in a more recent version of pg_import_system_collations() from
      upstream. We have not pulled in the ICU collations, so wholesale
      remove the sections of code that deal with them.
      
      This commit is primarily a cherry-pick of 0b13b2a7, but also pulls
      in prerequisite changes for CollationCreate().
      
      	Rethink behavior of pg_import_system_collations().
      
      	Marco Atzeri reported that initdb would fail if "locale -a" reported
      	the same locale name more than once.  All previous versions of Postgres
      	implicitly de-duplicated the results of "locale -a", but the rewrite
      	to move the collation import logic into C had lost that property.
      	It had also lost the property that locale names matching built-in
      	collation names were silently ignored.
      
      	The simplest way to fix this is to make initdb run the function in
      	if-not-exists mode, which means that there's no real use-case for
      	non if-not-exists mode; we might as well just drop the boolean argument
      	and simplify the function's definition to be "add any collations not
      	already known".  This change also gets rid of some odd corner cases
      	caused by the fact that aliases were added in if-not-exists mode even
      	if the function argument said otherwise.
      
      	While at it, adjust the behavior so that pg_import_system_collations()
      	doesn't spew "collation foo already exists, skipping" messages during a
      	re-run; that's completely unhelpful, especially since there are often
      	hundreds of them.  And make it return a count of the number of collations
      	it did add, which seems like it might be helpful.
      
      	Also, re-integrate the previous coding's property that it would make a
      	deterministic selection of which alias to use if there were conflicting
      	possibilities.  This would only come into play if "locale -a" reports
      	multiple equivalent locale names, say "de_DE.utf8" and "de_DE.UTF-8",
      	but that hardly seems out of the question.
      
      	In passing, fix incorrect behavior in pg_import_system_collations()'s
      	ICU code path: it neglected CommandCounterIncrement, which would result
      	in failures if ICU returns duplicate names, and it would try to create
      	comments even if a new collation hadn't been created.
      
      	Also, reorder operations in initdb so that the 'ucs_basic' collation
      	is created before calling pg_import_system_collations() not after.
      	This prevents a failure if "locale -a" were to report a locale named
      	that.  There's no reason to think that that ever happens in the wild,
      	but the old coding would have survived it, so let's be equally robust.
      
      	Discussion: https://postgr.es/m/20c74bc3-d6ca-243d-1bbc-12f17fa4fe9a@gmail.com
      	(cherry picked from commit 0b13b2a7)
      91d65139
    • P
      Add function to import operating system collations · 7dee9e44
      Committed by Peter Eisentraut
      Move this logic out of initdb into a user-callable function.  This
      simplifies the code and makes it possible to update the standard
      collations later on if additional operating system collations appear.
      Reviewed-by: Andres Freund <andres@anarazel.de>
      Reviewed-by: Euler Taveira <euler@timbira.com.br>
      (cherry picked from commit aa17c06f)
      7dee9e44
  8. 29 May, 2018 (2 commits)
    • N
      Support RETURNING for replicated tables. · fb7247b9
      Committed by Ning Yu
      * rpt: reorganize data when ALTER from/to replicated.
      
      There was a bug where altering a table from/to replicated had no
      effect; the root cause is that we neither changed gp_distribution_policy
      nor reorganized the data.
      
      Now we perform the data reorganization by creating a temp table with the
      new dist policy and transferring all the data to it (a sketch follows).
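      
      A minimal sketch of the ALTER paths this fixes, assuming GPDB's
      DISTRIBUTED REPLICATED syntax; the table is hypothetical:
      
      ```sql
      -- Hypothetical table, initially hash-distributed.
      CREATE TABLE bar (c1 int, c2 int) DISTRIBUTED BY (c1);
      
      -- Altering to replicated now updates gp_distribution_policy and
      -- reorganizes the data through a temp table.
      ALTER TABLE bar SET DISTRIBUTED REPLICATED;
      
      -- And back again.
      ALTER TABLE bar SET DISTRIBUTED BY (c1);
      ```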
      
      * rpt: support RETURNING for replicated tables.
      
      This is to support the syntax below (suppose foo is a replicated table):
      
      	INSERT INTO foo VALUES(1) RETURNING *;
      	UPDATE foo SET c2=c2+1 RETURNING *;
      	DELETE FROM foo RETURNING *;
      
      A new motion type, EXPLICIT GATHER MOTION, is introduced in EXPLAIN
      output; in this motion type, data is received from one explicit sender.
      
      * rpt: fix motion type under explicit gather motion.
      
      Consider the query below:
      
      	INSERT INTO foo SELECT f1+10, f2, f3+99 FROM foo
      	  RETURNING *, f1+112 IN (SELECT q1 FROM int8_tbl) AS subplan;
      
      We used to generate a plan like this:
      
      	Explicit Gather Motion 3:1  (slice2; segments: 3)
      	  ->  Insert
      	        ->  Seq Scan on foo
      	        SubPlan 1  (slice2; segments: 3)
      	          ->  Gather Motion 3:1  (slice1; segments: 1)
      	                ->  Seq Scan on int8_tbl
      
      A gather motion is used for the subplan, which is wrong and will cause a
      runtime error.
      
      A correct plan looks like this:
      
      	Explicit Gather Motion 3:1  (slice2; segments: 3)
      	  ->  Insert
      	        ->  Seq Scan on foo
      	        SubPlan 1  (slice2; segments: 3)
      	          ->  Materialize
      	                ->  Broadcast Motion 3:3  (slice1; segments: 3)
      	                      ->  Seq Scan on int8_tbl
      
      * rpt: add test cases with both PRIMARY KEY and UNIQUE.
      
      On a replicated table we can set both PRIMARY KEY and UNIQUE
      constraints; test cases are added to protect this behavior during
      future development.
      
      (cherry picked from commit 72af4af8)
      fb7247b9
    • N
      Preserve persistence when reorganizing temp tables. · 0ce07109
      Committed by Ning Yu
      When altering a table's distribution policy we might need to reorganize
      the data by creating a __temp__ table, copying the data into it, and
      then swapping the underlying relation files. However, we always created
      the __temp__ table as permanent, so when the original table was temp the
      underlying files could not be found by later queries:
      
      	CREATE TEMP TABLE t1 (c1 int, c2 int) DISTRIBUTED BY (c1);
      	ALTER TABLE t1 SET DISTRIBUTED BY (c2);
      	SELECT * FROM t1;
      0ce07109
  9. 28 5月, 2018 2 次提交
    • N
      Revert "Support RETURNING for replicated tables." · a74875cd
      Committed by Ning Yu
      This reverts commit 72af4af8.
      a74875cd
    • N
      Support RETURNING for replicated tables. · 72af4af8
      Committed by Ning Yu
      * rpt: reorganize data when ALTER from/to replicated.
      
      There was a bug where altering a table from/to replicated had no
      effect; the root cause is that we neither changed gp_distribution_policy
      nor reorganized the data.
      
      Now we perform the data reorganization by creating a temp table with the
      new dist policy and transferring all the data to it.
      
      
      * rpt: support RETURNING for replicated tables.
      
      This is to support the syntax below (suppose foo is a replicated table):
      
      	INSERT INTO foo VALUES(1) RETURNING *;
      	UPDATE foo SET c2=c2+1 RETURNING *;
      	DELETE FROM foo RETURNING *;
      
      A new motion type, EXPLICIT GATHER MOTION, is introduced in EXPLAIN
      output; in this motion type, data is received from one explicit sender.
      
      
      * rpt: fix motion type under explicit gather motion.
      
      Consider the query below:
      
      	INSERT INTO foo SELECT f1+10, f2, f3+99 FROM foo
      	  RETURNING *, f1+112 IN (SELECT q1 FROM int8_tbl) AS subplan;
      
      We used to generate a plan like this:
      
      	Explicit Gather Motion 3:1  (slice2; segments: 3)
      	  ->  Insert
      	        ->  Seq Scan on foo
      	        SubPlan 1  (slice2; segments: 3)
      	          ->  Gather Motion 3:1  (slice1; segments: 1)
      	                ->  Seq Scan on int8_tbl
      
      A gather motion is used for the subplan, which is wrong and will cause a
      runtime error.
      
      A correct plan looks like this:
      
      	Explicit Gather Motion 3:1  (slice2; segments: 3)
      	  ->  Insert
      	        ->  Seq Scan on foo
      	        SubPlan 1  (slice2; segments: 3)
      	          ->  Materialize
      	                ->  Broadcast Motion 3:3  (slice1; segments: 3)
      	                      ->  Seq Scan on int8_tbl
      
      
      * rpt: add test cases with both PRIMARY KEY and UNIQUE.
      
      On a replicated table we can set both PRIMARY KEY and UNIQUE
      constraints; test cases are added to protect this behavior during
      future development.
      72af4af8
  10. 26 May, 2018 (1 commit)
  11. 25 May, 2018 (1 commit)
    • J
      Fix an issue with COPY FROM for partition tables · 01a22423
      Committed by Jimmy Yih
      The Postgres 9.1 merge introduced a problem where issuing a COPY FROM
      to a partition table could result in an unexpected error, "ERROR:
      extra data after last expected column", even though the input file was
      correct. This would happen if the partition table had partitions where
      the relnatts were not all the same (e.g. ALTER TABLE DROP COLUMN,
      ALTER TABLE ADD COLUMN, and then ALTER TABLE EXCHANGE PARTITION). The
      internal COPY logic would always use the COPY state's relation, the
      partition root, instead of the actual partition's relation to obtain
      the relnatts value. In fact, the only reason this was seen only
      intermittently is that the COPY logic, when working on a leaf
      partition whose relation has a different relnatts value, was reading
      beyond a boolean array's allocated memory and getting a phony value
      that evaluated to TRUE. (A reproduction sketch follows.)
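      
      A hedged reproduction sketch of the scenario above, with hypothetical
      table names; dropping and re-adding a column makes the exchanged
      partition's relnatts differ from the root's:
      
      ```sql
      -- Hypothetical partitioned table.
      CREATE TABLE pt (a int, b int, c int)
          DISTRIBUTED BY (a)
          PARTITION BY RANGE (b) (START (0) END (10) EVERY (5));
      
      ALTER TABLE pt DROP COLUMN c;
      ALTER TABLE pt ADD COLUMN c int;
      
      -- Exchange in a freshly created table: same visible columns, but a
      -- different relnatts (no dropped-column slot).
      CREATE TABLE ex (a int, b int, c int) DISTRIBUTED BY (a);
      ALTER TABLE pt EXCHANGE PARTITION FOR (0) WITH TABLE ex;
      
      -- Before the fix this could spuriously fail with
      -- "extra data after last expected column".
      COPY pt FROM '/tmp/pt.csv' CSV;
      ```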
      Co-authored-by: Jimmy Yih <jyih@pivotal.io>
      Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
      01a22423
  12. 24 May, 2018 (1 commit)
    • J
      Minimize the time sensitivity in autovacuum regression test · f437fe4f
      Committed by Jimmy Yih
      To verify that autovacuum actually freezes template0, we used to just
      busy wait for about two minutes, expecting to observe the change of
      pg_database.datfrozenxid. While this "usually works", it's too sensitive
      to the amount of time it takes to vacuum freeze template0. Specifically,
      in some of our very I/O-deprived environments, this process sometimes
      takes slightly longer than two minutes.
      
      This patch introduces a fault injector to help us observe the expected
      vacuuming. The wait-in-a-loop is still there, but the bulk of the
      uncertain timing is now before the loop, not during it. (The check the
      loop performs is sketched below.)
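      
      A minimal sketch of the catalog check the test loop relies on, using
      the real pg_database column:
      
      ```sql
      -- template0's datfrozenxid advances once autovacuum has frozen it.
      SELECT datname, datfrozenxid
      FROM pg_database
      WHERE datname = 'template0';
      ```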
      Co-authored-by: Jesse Zhang <sbjesse@gmail.com>
      Co-authored-by: Jimmy Yih <jyih@pivotal.io>
      f437fe4f
  13. 19 May, 2018 (1 commit)
    • A
      Introduce RelationIsAppendOptimized() macro. · 958a672a
      Committed by Ashwin Agrawal
      Many places in the code need to check whether a table uses row- or
      column-oriented storage, which basically means whether or not it is an
      append-optimized table. Currently this is done by combining the two
      macros RelationIsAoRows() and RelationIsAoCols(). Simplify this with
      the new macro RelationIsAppendOptimized().
      958a672a
  14. 17 May, 2018 (1 commit)
    • A
      COPY: expand the type of numcompleted to 64 bits · 8d40268b
      Committed by Adam Lee
      Without this, integer overflow occurs when more than 2^31 rows are
      copied in `COPY ON SEGMENT` mode.
      
      Errors happen when the value is cast to uint64, the type of `processed`
      in `CopyStateData`: a third-party Postgres driver that reads it as an
      int64 then fails with an out-of-range error. (An example of the mode
      follows.)
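      
      A hedged example of the GPDB command that exercises this counter; the
      table and path are hypothetical, and `<SEGID>` is expanded per segment:
      
      ```sql
      -- Each segment writes its own file; the per-segment row counts are
      -- summed into the (now 64-bit) completed-rows total.
      COPY big_table TO '/tmp/big_table_<SEGID>.csv' CSV ON SEGMENT;
      ```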
      8d40268b
  15. 16 May, 2018 (1 commit)
    • J
      Correctly set pg_exttable.logerrors (#4985) · a33b8fc6
      Committed by Jesse Zhang
      Consider the following SQL; we expect error logging to be turned off
      for table `ext_error_logging_off`:
      
      ```sql
      create external table ext_error_logging_off (a int, b int)
          location ('file:///tmp/test.txt') format 'text'
          segment reject limit 100;
      \d+ ext_error_logging_off
      ```
      And then in this next case we expect error logging to be turned on for
      table `ext_t2`:
      
      ```sql
      create external table ext_t2 (a int, b int)
          location ('file:///tmp/test.txt') format 'text'
          log errors segment reject limit 100;
      \d+ ext_t2
      ```
      
      Before this patch, we were making two mistakes in handling these
      external table DDLs:
      
      1. We intended to enable error logging *whenever* the user specified the
      `SEGMENT REJECT` clause, completely ignoring whether he or she specified
      `LOG ERRORS`. 2. Even then, we made the mistake of implicitly coercing
      the OID (an unsigned 32-bit integer) to a bool (which is really just a C
      `char`): that means 255/256 of the time (99.6%) the result is `true`,
      and 0.4% of the time we get a `false` instead.
      
      The `OID` to `bool` implicit conversion could have been caught by a
      `-Wconversion` GCC/Clang flag. It's most likely a leftover from commit
      8f6fe2d6.
      
      This bug manifests itself in the `dsp` regression test mysteriously
      failing about once every 200 runs -- with the only diff on a `\d+` of an
      external table that should have error logging turned on, but the
      returned definition has it turned off.
      
      While working on this we discovered that all of our existing external
      tables have both `LOG ERRORS` and `SEGMENT REJECT`, which is why this
      bug wasn't caught in the first place.
      
      This patch fixes the issue by properly setting the catalog column
      `pg_exttable.logerrors` according to the user input.
      
      While we were at it, we also cleaned up a few dead pieces of code and
      made the `dsp` test a bit friendlier to debug.
      Co-authored-by: Jesse Zhang <sbjesse@gmail.com>
      Co-authored-by: David Kimura <dkimura@pivotal.io>
      a33b8fc6
  16. 12 May, 2018 (2 commits)
  17. 11 May, 2018 (2 commits)
    • N
      resgroup: refactor memory auditor implementation. · 86b0c56b
      Committed by Ning Yu
      We used to implement the memory auditor feature differently on master
      and 5X: on master the attribute is stored in pg_resgroup, while on 5X
      it's stored in pg_resgroupcapability. This increases the maintenance
      effort significantly, so we refactor the feature on master to minimize
      the difference between the two branches.
      
      - Revert "resgroup: fix an access to uninitialized address."
        This reverts commit 56c20709.
      - Revert "ic: Mark binary_swap_gpdb as optional input for resgroup jobs."
        This reverts commit 9b3d0cfc.
      - Revert "resgroup: fix a boot failure when cgroup is not mounted."
        This reverts commit 4c8f28b0.
      - Revert "resgroup: backward compatibility for memory auditor"
        This reverts commit f2f86174.
      - Revert "Show memory statistics for cgroup audited resource group."
        This reverts commit d5fb628f.
      - Revert "Fix resource group test failure."
        This reverts commit 78b885ec.
      - Revert "Support cgroup memory auditor for resource group."
        This reverts commit 6b3d0f66.
      - Apply "resgroup: backward compatibility for memory auditor"
        This cherry-picks commit 23cd8b1e
      - Apply "ic: Mark binary_swap_gpdb as optional input for resgroup jobs."
        This cherry-picks commit c86652e6
      - Apply "resgroup: fix an access to uninitialized address."
        This cherry-picks commit b257b344
      86b0c56b
    • A
      Teach heap_truncate_one_rel to handle AO tables as well · f490c566
      Committed by Ashwin Agrawal
      Upstream commit cab9a065 introduced an optimization to truncate tables
      in scenarios that permit "unsafe" operations, where we don't have to
      churn the relfilenode of the underlying tables. AO tables got a free
      ride, but for the wrong reason.
      
      This patch teaches heap_truncate_one_rel() to perform the unsafe /
      optimal truncation on AO tables as well. This allows us to converge the
      callers back to how they look in Postgres 9.0.
      
      Specifically, we're now able to inline TruncateRelfiles() back into
      ExecuteTruncate().
      
      One caveat introduced by this patch, though, is that the "optimal" /
      unsafe truncation of an AO table can potentially leak some disk space:
      we do not perform a real file-level truncate, we merely seek back to
      offset 0 on the next write (because the aoseg auxiliary table is
      truncated), so the space after the EOF mark is wasted in some sense.
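      
      A minimal sketch of a case the optimization targets, with a
      hypothetical table name, assuming the in-place path applies when the
      table was created in the same transaction:
      
      ```sql
      BEGIN;
      -- Hypothetical AO table created in this transaction.
      CREATE TABLE ao_t (a int) WITH (appendonly = true) DISTRIBUTED BY (a);
      INSERT INTO ao_t SELECT generate_series(1, 1000);
      -- No other session can see ao_t yet, so it may be truncated in place
      -- instead of swapping in a new relfilenode.
      TRUNCATE ao_t;
      COMMIT;
      ```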
      Co-authored-by: Jesse Zhang <sbjesse@gmail.com>
      f490c566
  18. 08 May, 2018 (1 commit)
  19. 07 May, 2018 (1 commit)
    • N
      resgroup: change max spill_ratio back to 100. · 533e4d0c
      Committed by Ning Yu
      In 3be31490 we changed the max spill_ratio
      to INT_MAX, but it will not take effect until more of the related logic
      is changed accordingly.
      
      Until all that work is done, we change the max spill_ratio back
      to 100 (illustrated below).
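      
      A hedged illustration of the restored bound, using the resource group
      attribute named in the commits below; the group name is hypothetical:
      
      ```sql
      -- Accepted again: spill ratio within [0, 100].
      ALTER RESOURCE GROUP rg_demo SET MEMORY_SPILL_RATIO 50;
      
      -- Rejected again after this change: values above 100.
      ALTER RESOURCE GROUP rg_demo SET MEMORY_SPILL_RATIO 200;
      ```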
      533e4d0c
  20. 19 Apr, 2018 (2 commits)
    • J
      User query can use global shared memory across resource groups when available (#4866) · 3be31490
      Committed by Jialun
      (Same as PR 4843, just with the test cases made stable.)
      
      1) Global shared memory is used if a query has run out of the
      	group's shared memory.
      2) The limit of memory_spill_ratio changes to [0, INT_MAX]; since
      	global shared memory can be allocated, a 100% cap no longer
      	makes sense.
      3) Use atomic "compare and swap" operations instead of a lock for
      	higher performance.
      4) Modify the test cases according to the new rules.
      3be31490
    • N
      resgroup: backward compatibility for memory auditor · f2f86174
      Committed by Ning Yu
      Memory auditor is a new feature introduced to allow external components
      (e.g. pl/container) to be managed by resource groups. This feature
      requires a new gpdb dir to be created in the cgroup memory controller;
      however, on the 5X branch, unless users had created this new dir
      manually, the upgrade from a previous version would fail.
      
      In this commit we provide backward compatibility by checking the release
      version:
      
      - on the 6X and master branches the memory auditor feature is always
        enabled, so the new gpdb dir is mandatory;
      - on the 5X branch the memory auditor feature can be enabled only if
        the new gpdb dir is created with proper permissions; when it's
        disabled, `CREATE RESOURCE GROUP ... WITH (memory_auditor='cgroup')`
        fails with guidance on how to enable it (see the sketch after this
        list);
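      
      A hedged sketch of the guarded command, with a hypothetical group name
      and option values:
      
      ```sql
      -- On 5X this fails with guidance when the cgroup gpdb dir is missing;
      -- it succeeds once the dir is created with proper permissions.
      CREATE RESOURCE GROUP plcontainer_rg WITH (
          memory_auditor = 'cgroup',
          memory_limit = 10,
          cpu_rate_limit = 10,
          concurrency = 0
      );
      ```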
      
      Binary swap tests are also provided to verify backward compatibility in
      future releases. As cgroup needs to be configured to enable resgroup, we
      split the resgroup binary swap tests into two parts:
      
      - resqueue-mode-only tests, which can be triggered in the
        icw_gporca_centos6 pipeline job after the ICW tests; these parts have
        no requirements on cgroup;
      - complete resqueue & resgroup mode tests, which can be triggered in the
        mpp_resource_group_centos{6,7} pipeline jobs after the resgroup tests;
        these parts need cgroup to be properly configured.
      f2f86174
  21. 17 Apr, 2018 (2 commits)
  22. 11 Apr, 2018 (1 commit)
    • B
      Fix Analyze privilege issue when executed by superuser · 3c139b9f
      Committed by Bhuvnesh Chaudhary
      The patch 62aba765 from upstream fixed
      CVE-2009-4136 (a security vulnerability) with the intent to properly
      manage session-local state during execution of an index function by a
      database superuser; the vulnerability in some cases allowed remote
      authenticated users to gain privileges via a table with crafted index
      functions.
      
      Looking into the details of CVE-2009-4136 and the related CVE-2007-6600,
      the patch should ideally have limited its scope to the calculation of
      stats on index expressions, where we run functions to evaluate the
      expressions, which could potentially present a security threat.
      
      However, the patch changed the user to the table owner before collecting
      the sample, so even if ANALYZE was run by a superuser the sample could
      not be collected when the table owner did not have sufficient privileges
      to access the table. With this commit, we switch back to the original
      user while collecting the sample, as sampling does not deal with indexes
      or function calls, which were the original concern of the patch.
      
      Upstream does not face this privilege issue, as it does block sampling
      instead of issuing a query. (A reproduction sketch follows.)
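      
      A hedged sketch of the failing scenario, with hypothetical role, schema,
      and table names; the owner cannot reach the table because it was never
      granted USAGE on the schema:
      
      ```sql
      CREATE ROLE analyst;
      CREATE SCHEMA locked;
      CREATE TABLE locked.t (a int);
      ALTER TABLE locked.t OWNER TO analyst;
      
      -- Run as a superuser: this used to fail because the sampling query was
      -- issued as the table owner (analyst); now it runs as the original
      -- user.
      ANALYZE locked.t;
      ```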
      Signed-off-by: Sambitesh Dash <sdash@pivotal.io>
      3c139b9f
  23. 08 Apr, 2018 (1 commit)
  24. 02 Apr, 2018 (1 commit)
  25. 30 Mar, 2018 (1 commit)
    • D
      Remove spclocation field from pg_tablespace · 4483b7d3
      Committed by Daniel Gustafsson
      This is a backport of the below commit from upstream 9.3:
      
        commit 16d8e594
        Author: Magnus Hagander <magnus@hagander.net>
        Date:   Wed Dec 7 10:35:00 2011 +0100
      
          Remove spclocation field from pg_tablespace
      
          Instead, add a function pg_tablespace_location(oid) used to return
          the same information, and do this by reading the symbolic link.
      
          Doing it this way makes it possible to relocate a tablespace when the
          database is down by simply changing the symbolic link.
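      
      A hedged usage example of the replacement function, which resolves the
      location by reading the tablespace's symbolic link:
      
      ```sql
      -- spclocation is gone from the catalog; resolve paths on demand.
      SELECT spcname, pg_tablespace_location(oid) AS location
      FROM pg_tablespace;
      ```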
      4483b7d3