1. 06 Jun 2018, 1 commit
  2. 05 Jun 2018, 4 commits
    • SPI 64 bit changes for pl/Python (#4154) · ce22b327
      Committed by Andreas Scherbaum
      SPI 64 bit changes for pl/Python
      
      Includes fault injection tests
    • Implement CPUSET (#5023) · 0c0782fe
      Committed by Jialun
      * Implement CPUSET, a new way to manage CPU resources in resource
      groups: it reserves the specified cores exclusively for a resource
      group, so that CPU resources are always available to that group.
      The most common scenario is allocating fixed cores for short
      queries.
      
      - One can use it by executing CREATE RESOURCE GROUP xxx WITH (
        cpuset='0-1', xxxx), where 0-1 are the CPU cores reserved for
        this group, or modify the value with ALTER RESOURCE GROUP xxx
        SET CPUSET '0,1' (see the sketch after this list).
      - The CPUSET value is a comma-separated list of entries, where
        each entry is a single core number or an interval of core
        numbers, e.g. 0,1,2-3. All cores in CPUSET must be available in
        the system, and the core numbers of different groups must not
        overlap.
      - CPUSET and CPU_RATE_LIMIT are mutually exclusive: one cannot
        create a resource group with both. However, a group can be
        freely switched between them with ALTER RESOURCE GROUP; setting
        one disables the other.
      - The CPU cores are returned to GPDB when the group is dropped,
        when the CPUSET value is changed, or when CPU_RATE_LIMIT is set.
      - If some cores have been allocated to a resource group through
        CPUSET, then CPU_RATE_LIMIT in the other groups indicates a
        percentage of the remaining cores only.
      - Even when GPDB is busy and all cores not reserved through
        CPUSET are exhausted, the cores in a CPUSET are still not
        allocated to other groups.
      - The cores in CPUSET are exclusive only at the GPDB level;
        non-GPDB processes on the system may still use them.
      - Add test cases for this new feature. The test environment must
        contain at least two CPU cores, so we upgrade the instance_type
        configuration of the resource_group jobs.
      
      * Follow-up fixes:
      - Handle the case where the cgroup directory cpuset/gpdb does not
        exist
      - Implement pg_dump support for cpuset & memory_auditor
      - Fix a typo
      - Change the default cpuset value from an empty string to -1,
        because the 5X code assumes every default value in a resource
        group is an integer; a non-integer default would make the
        system fail to start
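
      A minimal usage sketch of the syntax above; the group name is
      illustrative, memory_limit stands in for the group's other
      options, and cores 0 and 1 are assumed to exist on the system:

      	CREATE RESOURCE GROUP rg_short_query WITH (cpuset='0-1', memory_limit=10);
      	ALTER RESOURCE GROUP rg_short_query SET CPUSET '0,1';
      	-- switching to CPU_RATE_LIMIT disables CPUSET and returns the cores
      	ALTER RESOURCE GROUP rg_short_query SET CPU_RATE_LIMIT 10;
      	DROP RESOURCE GROUP rg_short_query;  -- also returns the reserved cores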
    • Remove incorrect fixme about temp tables · 41dad081
      Committed by Asim R P
      Temp tables must be included in PREPARE and COMMIT records in GPDB
      because, unlike in upstream, they are not exempt from 2PC.
      Co-authored-by: David Kimura <dkimura@pivotal.io>
    • Always generate relfilenode for about-to-be-created relations · a549a53c
      Committed by Asim R P
      We have found the culprit causing relfilenode collisions to be VACUUM
      FULL on a mapped relation.  The code was reusing the OID as the
      relfilenode for the temporary table created by VACUUM FULL, without
      bumping the relfilenode counter.  The patch fixes this so that a new
      relfilenode is always generated, even for mapped relations.
      
      With this, we believe that a possibility of collision still exists in
      the way sequence OIDs are generated.  That needs to be fixed in a
      separate patch; the fixme in GetNewRelFileNode() should be sufficient
      to note it.
      Co-authored-by: David Kimura <dkimura@pivotal.io>
  3. 04 Jun 2018, 3 commits
  4. 02 Jun 2018, 3 commits
  5. 01 Jun 2018, 5 commits
    • Add tests for GPDB specific collation creation · e0845912
      Committed by Taylor Vesely
      Unlike upstream, GPDB needs to keep collations in sync between multiple
      databases. Add tests for GPDB specific collation behavior.
      
      These tests need to import a system locale, so add a @syslocale@
      variable to gpstringsubs.pl in order to test the creation/deletion of
      collations from system locales.
      Co-authored-by: Jim Doty <jdoty@pivotal.io>
    • Add dispatch to collation creation commands · d73a185b
      Committed by Taylor Vesely
      Make CREATE COLLATION and pg_import_system_collations() parallel aware by
      dispatching collation creation to the QEs.
      
      In order for collations to work correctly, we need to be sure that every
      collation that is created on the QD is also installed on the QEs, and that the
      OID matches in every database. We take advantage of two-phase commit to
      prevent a collation from being created if there is a problem adding it
      on any QE. In upstream, collations are created during initdb, but this
      won't work for
      GPDB, because while initdb is running there is no way to be sure that every
      segment has the same locales installed.
      
      We disable collation creation during initdb, and make it the responsibility of
      the system administrator to initialize any needed collations by either running
      a CREATE COLLATION command, or running the pg_import_system_collations() UDF.
      Co-authored-by: Jim Doty <jdoty@pivotal.io>
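
      A hedged sketch of the two initialization paths described above;
      the collation name and locale are illustrative, assume the
      de_DE.utf8 locale is installed on every segment host, and assume
      the single-argument pg_import_system_collations() signature from
      the upstream rework pulled in alongside this work:

      	CREATE COLLATION german (locale = 'de_DE.utf8');
      	SELECT pg_import_system_collations('pg_catalog'::regnamespace);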
    • Updated version of pg_import_system_collations() · 91d65139
      Committed by Tom Lane
      Pull in a more recent version of pg_import_system_collations() from
      upstream. We have not pulled in the ICU collations, so wholesale
      remove the sections of code that deal with them.
      
      This commit is primarily a cherry-pick of 0b13b2a7, but also pulls
      in prerequisite changes for CollationCreate().
      
      	Rethink behavior of pg_import_system_collations().
      
      	Marco Atzeri reported that initdb would fail if "locale -a" reported
      	the same locale name more than once.  All previous versions of Postgres
      	implicitly de-duplicated the results of "locale -a", but the rewrite
      	to move the collation import logic into C had lost that property.
      	It had also lost the property that locale names matching built-in
      	collation names were silently ignored.
      
      	The simplest way to fix this is to make initdb run the function in
      	if-not-exists mode, which means that there's no real use-case for
      	non if-not-exists mode; we might as well just drop the boolean argument
      	and simplify the function's definition to be "add any collations not
      	already known".  This change also gets rid of some odd corner cases
      	caused by the fact that aliases were added in if-not-exists mode even
      	if the function argument said otherwise.
      
      	While at it, adjust the behavior so that pg_import_system_collations()
      	doesn't spew "collation foo already exists, skipping" messages during a
      	re-run; that's completely unhelpful, especially since there are often
      	hundreds of them.  And make it return a count of the number of collations
      	it did add, which seems like it might be helpful.
      
      	Also, re-integrate the previous coding's property that it would make a
      	deterministic selection of which alias to use if there were conflicting
      	possibilities.  This would only come into play if "locale -a" reports
      	multiple equivalent locale names, say "de_DE.utf8" and "de_DE.UTF-8",
      	but that hardly seems out of the question.
      
      	In passing, fix incorrect behavior in pg_import_system_collations()'s
      	ICU code path: it neglected CommandCounterIncrement, which would result
      	in failures if ICU returns duplicate names, and it would try to create
      	comments even if a new collation hadn't been created.
      
      	Also, reorder operations in initdb so that the 'ucs_basic' collation
      	is created before calling pg_import_system_collations() not after.
      	This prevents a failure if "locale -a" were to report a locale named
      	that.  There's no reason to think that that ever happens in the wild,
      	but the old coding would have survived it, so let's be equally robust.
      
      	Discussion: https://postgr.es/m/20c74bc3-d6ca-243d-1bbc-12f17fa4fe9a@gmail.com
      	(cherry picked from commit 0b13b2a7)
    • Add function to import operating system collations · 7dee9e44
      Committed by Peter Eisentraut
      Move this logic out of initdb into a user-callable function.  This
      simplifies the code and makes it possible to update the standard
      collations later on if additional operating system collations appear.
      Reviewed-by: Andres Freund <andres@anarazel.de>
      Reviewed-by: Euler Taveira <euler@timbira.com.br>
      (cherry picked from commit aa17c06f)
    • Bump ORCA to v2.60.0 · f67b3948
      Committed by Omer Arap
  6. 31 May 2018, 1 commit
  7. 30 May 2018, 5 commits
    • Refine the fault injector framework (#5013) · 723e5848
      Committed by Tang Pengzhou
      * Refine the fault injector framework
      
      * Add counting feature so a fault can be triggered N times.
      * Add a simpler version named gp_inject_fault_infinite.
      * Refine and clean up the code, including renaming sleepTimes
        to extraArg so it can be used by other fault types.
      
      Three functions are now provided (see the sketch at the end of this
      entry):
      
      1. gp_inject_fault(faultname, type, ddl, database, tablename,
                         start_occurrence, end_occurrence, extra_arg, db_id)
         start_occurrence: the nth occurrence at which the fault starts
         triggering.
         end_occurrence: the nth occurrence at which the fault stops
         triggering; -1 means the fault keeps triggering until it is reset.
      
      2. gp_inject_fault(faultname, type, db_id)
         a simpler version for a fault triggered only once.
      
      3. gp_inject_fault_infinite(faultname, type, db_id)
         a simpler version for a fault always triggered until it is reset.
      
      * Fix the bgwriter_checkpoint case
      
      * Use gp_inject_fault_infinite instead of gp_inject_fault so that the
        pg_proc cache entry containing gp_inject_fault_infinite is loaded
        before the checkpoint and the following gp_inject_fault_infinite
        call does not dirty the buffer again.
      * Add a matchsubs rule to ignore whether fsync_counter is hit 5 or 6
        times.
      
      * Fix the flaky twophase_tolerance_with_mirror_promotion test
      
      * Use different sessions for Scenario 2 and Scenario 3 because the
        gang of session 2 is no longer valid.
      * Wait for the wanted fault to be triggered so that no unexpected
        error occurs.
      
      * Add more segment status info to identify errors quickly
      
      Some cases run right after the FTS test cases. If the segments are
      not in the desired status, those cases fail unexpectedly; this commit
      adds more debug info at the beginning of the test cases to help
      identify issues quickly.
      
      * Enhance cases to skip the FTS probe for sure
      
      * Issue the FTS probe request twice to guarantee the fts error is
        triggered
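
      A hedged usage sketch of the three variants above; the fault name
      'checkpoint', fault type 'skip', and dbid 2 are illustrative:

      	-- trigger on the 2nd through 4th occurrences, no DDL/database/table filter
      	SELECT gp_inject_fault('checkpoint', 'skip', '', '', '', 2, 4, 0, 2);
      	-- trigger exactly once
      	SELECT gp_inject_fault('checkpoint', 'skip', 2);
      	-- trigger until reset
      	SELECT gp_inject_fault_infinite('checkpoint', 'skip', 2);
      	-- clear the fault
      	SELECT gp_inject_fault('checkpoint', 'reset', 2);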
    • Add -Werror=implicit-function-declaration in CFLAGS (#5012) · a3104caa
      Committed by Paul Guo
      This kind of error can sometimes lead to serious problems.
    • docs - gpbackup/gprestore plugin for DD Boost (#5039) · 45c8312b
      Committed by Mel Kiyama
      * docs - gpbackup/gprestore plugin for DD Boost
      
      - Also a minor update to the S3 plugin: change the YAML file
        parameter backupdir to folder.
      
      * docs - review updates for the gpbackup plugin for DD Boost;
        also an update for S3: change the keyword backupdir -> folder
        in the example.
      
      * docs - more review updates for the gpbackup plugin for DD Boost
      
      - Updated files per review comments.
      - Also updated the HTML on the review site.
      
      * docs - another set of review updates for the gpbackup plugin
        for DD Boost
      - Edits/updates.
      - Change the TOC: move the plugin API up a level.
      - Tweak the format of the parameter definitions.
      - Also update the S3 plugin doc to parallel the gpbackup doc.
      
      * docs - add the Pivotal-only attribute to the topic.
    • docs - misc updates to gpbackup/gprestore plugin api docs (#5047) · a196a471
      Committed by Lisa Owen
      * docs - misc updates to gpbackup/gprestore plugin api docs
      
      * Address review comments from David.
    • pipeline: --keep-going during ICW · 86c30215
      Committed by Jacob Champion
      The 9.1 merge brought upstream support for `make -k` when running
      installcheck-world. Use it in the master pipeline.
  8. 29 May 2018, 3 commits
    • Update RETURNING test cases of replicated tables. · 97fff0e1
      Committed by Ning Yu
      Some error messages changed during the 9.1 merge; update the answer
      files for the RETURNING test cases of replicated tables.
    • Support RETURNING for replicated tables. · fb7247b9
      Committed by Ning Yu
      * rpt: reorganize data when ALTER from/to replicated.
      
      There was a bug where altering a table from/to replicated had no
      effect; the root cause is that we neither changed
      gp_distribution_policy nor reorganized the data.
      
      Now we perform the data reorganization by creating a temp table with
      the new dist policy and transferring all the data to it.
      
      * rpt: support RETURNING for replicated tables.
      
      This supports the syntax below (assuming foo is a replicated table):
      
      	INSERT INTO foo VALUES(1) RETURNING *;
      	UPDATE foo SET c2=c2+1 RETURNING *;
      	DELETE FROM foo RETURNING *;
      
      A new motion type EXPLICIT GATHER MOTION is introduced in EXPLAIN
      output, data will be received from one explicit sender in this motion
      type.
      
      * rpt: fix motion type under explicit gather motion.
      
      Consider the query below:
      
      	INSERT INTO foo SELECT f1+10, f2, f3+99 FROM foo
      	  RETURNING *, f1+112 IN (SELECT q1 FROM int8_tbl) AS subplan;
      
      We used to generate a plan like this:
      
      	Explicit Gather Motion 3:1  (slice2; segments: 3)
      	  ->  Insert
      	        ->  Seq Scan on foo
      	        SubPlan 1  (slice2; segments: 3)
      	          ->  Gather Motion 3:1  (slice1; segments: 1)
      	                ->  Seq Scan on int8_tbl
      
      A gather motion is used for the subplan, which is wrong and will cause a
      runtime error.
      
      A correct plan is like below:
      
      	Explicit Gather Motion 3:1  (slice2; segments: 3)
      	  ->  Insert
      	        ->  Seq Scan on foo
      	        SubPlan 1  (slice2; segments: 3)
      	          ->  Materialize
      	                ->  Broadcast Motion 3:3  (slice1; segments: 3)
      	                      ->  Seq Scan on int8_tbl
      
      * rpt: add test case with both PRIMARY KEY and UNIQUE.
      
      On a replicated table we can set both PRIMARY KEY and UNIQUE
      constraints; test cases are added to keep this working during future
      development.
      
      (cherry picked from commit 72af4af8)
    • Preserve persistence when reorganizing temp tables. · 0ce07109
      Committed by Ning Yu
      When altering a table's distribution policy we might need to reorganize
      the data by creating a __temp__ table, copying the data to it, and then
      swapping the underlying relation files.  However, we always created the
      __temp__ table as permanent, so when the original table was temp the
      underlying files could not be found by later queries:
      
      	CREATE TEMP TABLE t1 (c1 int, c2 int) DISTRIBUTED BY (c1);
      	ALTER TABLE t1 SET DISTRIBUTED BY (c2);
      	SELECT * FROM t1;
  9. 28 May 2018, 2 commits
    • Revert "Support RETURNING for replicated tables." · a74875cd
      Committed by Ning Yu
      This reverts commit 72af4af8.
    • Support RETURNING for replicated tables. · 72af4af8
      Committed by Ning Yu
      * rpt: reorganize data when ALTER from/to replicated.
      
      There was a bug where altering a table from/to replicated had no
      effect; the root cause is that we neither changed
      gp_distribution_policy nor reorganized the data.
      
      Now we perform the data reorganization by creating a temp table with
      the new dist policy and transferring all the data to it.
      
      
      * rpt: support RETURNING for replicated tables.
      
      This supports the syntax below (assuming foo is a replicated table):
      
      	INSERT INTO foo VALUES(1) RETURNING *;
      	UPDATE foo SET c2=c2+1 RETURNING *;
      	DELETE FROM foo RETURNING *;
      
      A new motion type EXPLICIT GATHER MOTION is introduced in EXPLAIN
      output, data will be received from one explicit sender in this motion
      type.
      
      
      * rpt: fix motion type under explicit gather motion.
      
      Consider the query below:
      
      	INSERT INTO foo SELECT f1+10, f2, f3+99 FROM foo
      	  RETURNING *, f1+112 IN (SELECT q1 FROM int8_tbl) AS subplan;
      
      We used to generate a plan like this:
      
      	Explicit Gather Motion 3:1  (slice2; segments: 3)
      	  ->  Insert
      	        ->  Seq Scan on foo
      	        SubPlan 1  (slice2; segments: 3)
      	          ->  Gather Motion 3:1  (slice1; segments: 1)
      	                ->  Seq Scan on int8_tbl
      
      A gather motion is used for the subplan, which is wrong and will cause a
      runtime error.
      
      A correct plan is like below:
      
      	Explicit Gather Motion 3:1  (slice2; segments: 3)
      	  ->  Insert
      	        ->  Seq Scan on foo
      	        SubPlan 1  (slice2; segments: 3)
      	          ->  Materialize
      	                ->  Broadcast Motion 3:3  (slice1; segments: 3)
      	                      ->  Seq Scan on int8_tbl
      
      
      * rpt: add test case with both PRIMARY KEY and UNIQUE.
      
      On a replicated table we can set both PRIMARY KEY and UNIQUE
      constraints; test cases are added to keep this working during future
      development.
  10. 26 May 2018, 1 commit
  11. 25 May 2018, 3 commits
    • Run unittests as part of Travis builds · e4c8d80a
      Committed by Daniel Gustafsson
      Running the full ICW under Travis was problematic, but the unittests
      don't require a running cluster, so let's run them to boost our test
      coverage a little.
      
      This changes the invocation of mocker.py to accommodate how Travis
      needs it to be invoked, without changing its effect.
    • Fix crash issues in global deadlock detector. · 91110afc
      Committed by Ning Yu
      * gdd: alloc MyProcPort on stack.
      
      We used to allocate MyProcPort with malloc() without checking the
      result; this was reported to trigger a crash in the gprecoverseg test:
      
      	(gdb) bt
      	0x00007f56b506794f in __strlen_sse42 () from /lib64/libc.so.6
      	0x00000000009f7894 in write_message_to_server_log ()
      	0x00000000009fc200 in send_message_to_server_log ()
      	0x00000000009ffb85 in EmitErrorReport ()
      	0x00000000009fccf5 in errfinish ()
      	0x0000000000acd99f in FtsTestSegmentDBIsDown ()
      
      Now we allocate it directly on the stack.
      
      We could not find a way to construct a test case for this issue, but
      according to the report it should be covered by existing tests.
      
      * gdd: store name of super user on stack.
      
      We used to store the superuser name in a buffer allocated by strdup()
      without checking the result; this was reported to trigger a crash in
      the gprecoverseg test.
      
      Now we store it in a buffer on the stack.
      
      A fatal error is raised if no superuser can be found.
      
      We could not figure out a way to construct a test case for this
      change, but according to the report it should be covered by existing
      tests.
    • Fix an issue with COPY FROM for partition tables · 01a22423
      Committed by Jimmy Yih
      The Postgres 9.1 merge introduced a problem where issuing a COPY FROM
      to a partition table could result in an unexpected error, "ERROR:
      extra data after last expected column", even though the input file was
      correct. This would happen if the partition table had partitions where
      the relnatts were not all the same (e.g. ALTER TABLE DROP COLUMN,
      ALTER TABLE ADD COLUMN, and then ALTER TABLE EXCHANGE PARTITION). The
      internal COPY logic would always use the COPY state's relation, the
      partition root, instead of the actual partition's relation to obtain
      the relnatts value. In fact, the only reason this is seen only
      intermittently is that the COPY logic, when working on a leaf
      partition whose relation has a different relnatts value, read beyond a
      boolean array's allocated memory and picked up a garbage value that
      evaluated to TRUE.
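
      A hypothetical reproduction sketch of that sequence (table names,
      partition layout, and file path are illustrative):

      	CREATE TABLE pt (a int, b int, c int)
      	    DISTRIBUTED BY (a)
      	    PARTITION BY RANGE (b) (START (0) END (2) EVERY (1));
      	ALTER TABLE pt DROP COLUMN c;
      	ALTER TABLE pt ADD COLUMN c int;
      	-- a freshly created exchange table lacks the dropped-column slot,
      	-- so its relnatts differs from the root's
      	CREATE TABLE ex (a int, b int, c int) DISTRIBUTED BY (a);
      	ALTER TABLE pt EXCHANGE PARTITION FOR (0) WITH TABLE ex;
      	COPY pt FROM '/tmp/pt.csv' CSV;  -- previously could fail spuriously
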
      Co-authored-by: Jimmy Yih <jyih@pivotal.io>
      Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
  12. 24 May 2018, 5 commits
  13. 23 May 2018, 4 commits
    • Syncing python-dependencies with pythonsrc-ext · 16bcc71c
      Committed by Todd Sedano
      Fixed typo
      
      [ci skip]
      Authored-by: Todd Sedano <tsedano@pivotal.io>
    • Make wait for postmaster.pid file 5 seconds as before. · 445ca7ea
      Committed by Ashwin Agrawal
      This wait was missed when commit
      7d59d215ec065983c666b80bc2c982d13e476c48 converted waits to seconds;
      the 5 second wait is most likely needed for the gpexpand jobs.
    • Speed up pg_ctl stop, start, restart by reducing the 1 sec sleep. · b2909e49
      Committed by Ashwin Agrawal
      This is a cherry-pick of upstream commit c61559ec.
      ---------------------
      Reduce pg_ctl's reaction time when waiting for postmaster start/stop.
      
      pg_ctl has traditionally waited one second between probes for whether
      the start or stop request has completed.  That behavior was embodied
      in the original shell script written in 1999 (commit 5b912b08) and
      I doubt anyone's questioned it since.  Nowadays, machines are a lot
      faster, and the shell script is long since replaced by C code, so it's
      fair to reconsider how long we ought to wait.
      
      This patch adjusts the coding so that the wait time can be any even
      divisor of 1 second, and sets the actual probe rate to 10 per second.
      That's based on experimentation with the src/test/recovery TAP tests,
      which include a lot of postmaster starts and stops.  This patch alone
      reduces the (non-parallelized) runtime of those tests from ~4m30s to
      ~3m5s on my machine.  Increasing the probe rate further doesn't help
      much, so this seems like a good number.
      
      In the real world this probably won't have much impact, since people
      don't start/stop production postmasters often, and the shutdown checkpoint
      usually takes nontrivial time too.  But it makes development work and
      testing noticeably snappier, and that's good enough reason for me.
      
      Also, by reducing the dead time in postmaster restart sequences, this
      change has made it easier to reproduce some bugs that have been lurking
      for awhile.  Patches for those will follow.
      
      Discussion: https://postgr.es/m/18444.1498428798@sss.pgh.pa.us
      ---------------------