1. 12 Apr 2017 (13 commits)
  2. 11 Apr 2017 (14 commits)
    • Transform the pg_lock_status() ids in tinc cases. · 63f62b59
      Committed by Pengzhou Tang
      The ids in the "(relation,*)" lines are not transformed, which causes
      failures when the test is executed on segments, as shown below:
      
          SELECT pg_lock_status() FROM foo
                                       pg_lock_status
           ----------------------------------------------------------------------
          - (relation,151955,151957,,,,,,,,2/4,29461,AccessShareLock,t,1117,t,0)
          - (relation,151955,151957,,,,,,,,2/4,29461,AccessShareLock,t,1117,t,0)
          - (relation,151955,151957,,,,,,,,2/4,29462,AccessShareLock,t,1117,t,1)
          - (relation,151955,151957,,,,,,,,2/4,29462,AccessShareLock,t,1117,t,1)
          + (relation,151955,151957,,,,,,,,2/4,29453,AccessShareLock,t,1116,t,0)
          + (relation,151955,151957,,,,,,,,2/4,29453,AccessShareLock,t,1116,t,0)
          + (relation,151955,151957,,,,,,,,2/4,29454,AccessShareLock,t,1116,t,1)
          + (relation,151955,151957,,,,,,,,2/4,29454,AccessShareLock,t,1116,t,1)
            (virtualxid,,,,,VIRTUAL/XID,,,,,VIRTUAL/XID,PID,ExclusiveLock,t,SESSIONID,t,0)
            (virtualxid,,,,,VIRTUAL/XID,,,,,VIRTUAL/XID,PID,ExclusiveLock,t,SESSIONID,t,0)
            (virtualxid,,,,,VIRTUAL/XID,,,,,VIRTUAL/XID,PID,ExclusiveLock,t,SESSIONID,t,1)
            (virtualxid,,,,,VIRTUAL/XID,,,,,VIRTUAL/XID,PID,ExclusiveLock,t,SESSIONID,t,1)
           (8 rows)
      
      So we should transform them in the same way as the "(virtualxid,*)" lines.
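
      As a rough illustration (not the actual tinc matching code), the kind of
      substitution needed could look like the Python sketch below; the field
      positions are an assumption based on the sample output above:

          # Hypothetical normalizer for pg_lock_status() rows in test output.
          def normalize_lock_row(line):
              if not line.lstrip().startswith("(relation,"):
                  return line
              fields = line.strip().lstrip("(").rstrip(")").split(",")
              fields[10] = "VIRTUAL/XID"   # virtualtransaction, e.g. "2/4"
              fields[11] = "PID"           # backend pid
              fields[14] = "SESSIONID"     # mpp session id
              return "(" + ",".join(fields) + ")"

          print(normalize_lock_row(
              "(relation,151955,151957,,,,,,,,2/4,29461,AccessShareLock,t,1117,t,0)"))
          # (relation,151955,151957,,,,,,,,VIRTUAL/XID,PID,AccessShareLock,t,SESSIONID,t,0)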
      Signed-off-by: Ning Yu <nyu@pivotal.io>
      63f62b59
    • Fix two NULL-related warnings in resource queue/groups · 770ad742
      Committed by Daniel Gustafsson
      Passing InvalidOid as a HeapTuple argument causes a warning about a
      NULL pointer constant; replace InvalidOid with NULL in the call to
      AlterResqueueCapabilityEntry() to avoid it. Also initialize the owner
      in GetResGroupIdForRole(), since we otherwise read an uninitialized
      value in case CurrentResourceOwner was set.
      770ad742
    • Remap transient typmods on receivers instead of on senders. · d8ac3308
      Committed by Pengzhou Tang
      The QD used to send a transient-types table to the QEs, and the QEs
      would remap the tuples with this table before sending them back to the
      QD. However, in complex queries the QD cannot discover all the
      transient types, so tuples cannot be remapped correctly on the QEs.
      One example:
      
          SELECT q FROM (SELECT MAX(f1) FROM int4_tbl
                         GROUP BY f1 ORDER BY f1) q;
          ERROR:  record type has not been registered
      
      To fix this issue we changed the underlying logic: instead of sending
      the possibly incomplete transient-types table from the QD to the QEs,
      we now send the tables from motion senders to motion receivers and do
      the remapping on the receivers. Receivers maintain a remap table for
      each motion, so tuples from different senders can be remapped
      accordingly. In this way, queries containing multiple slices can also
      handle transient record types correctly between two QEs.
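
      A simplified sketch of the receiver-side bookkeeping (Python pseudocode
      for illustration only; the names and structure here are assumptions,
      not the actual C implementation in the motion layer):

          # Each receiver keeps one remap table per (motion, sender), translating
          # the sender's transient record typmods into locally registered ones.
          class MotionReceiver:
              def __init__(self):
                  self.remap = {}          # (motion_id, sender_id) -> {remote typmod: local typmod}
                  self.local_types = {}    # local transient-type registry
                  self.next_typmod = 0

              def register_local(self, tupledesc):
                  typmod = self.next_typmod
                  self.next_typmod += 1
                  self.local_types[typmod] = tupledesc
                  return typmod

              def on_typmod_message(self, motion_id, sender_id, remote_typmod, tupledesc):
                  # A sender announces a transient record type before shipping tuples using it.
                  table = self.remap.setdefault((motion_id, sender_id), {})
                  table[remote_typmod] = self.register_local(tupledesc)

              def remap_typmod(self, motion_id, sender_id, remote_typmod):
                  # Recursive remapping of nested record/array fields is omitted here.
                  return self.remap[(motion_id, sender_id)][remote_typmod]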
      
      The remap logic is derived from executor/tqueue.c in upstream
      PostgreSQL. It supports composite/record types and arrays, as well as
      range types; however, since range types are not yet supported in GPDB,
      that part is placed under a conditional-compilation macro and should,
      in theory, be enabled automatically once range types are supported in
      GPDB.
      
      One side effect of this approach is a performance penalty on the
      receivers, since the remapping requires recursive checks on each tuple
      of a record type. However, an optimization keeps this cost minimal for
      non-record types.
      
      The old logic of building the transient-types table on the QD and
      sending it to the QEs has been retired.
      Signed-off-by: Gang Xiong <gxiong@pivotal.io>
      Signed-off-by: Ning Yu <nyu@pivotal.io>
      d8ac3308
    • Remove unite_ccle_gpdb5 from main pipeline · 3dc200ca
      Committed by Adam Lee
      All test cases it used to run are covered by ICW now.
      3dc200ca
    • Fix and refactor blockdirectory update for AO and CO. · fe1b7616
      Committed by Ashwin Agrawal
      ALTER TABLE ADD COLUMN on a CO table completely missed updating the
      block directory when the default value for the column is larger than
      blockSize. In this case one large-content block is created, followed
      by small-content blocks containing the actual column value. Failing to
      update the block directory produces wrong results during index scans
      after such an ALTER. This commit fixes the issue by updating the block
      directory for this case, accompanied by a test to validate it.
      
      Also, while fixing this, refactor the code:
      - rename lastWriteBeginPosition to logicalBlockStartOffset for better
        clarity, based on its usage
      - centralize the block-directory insert in the datumstream block
        read/write routine
      - remove the redundant buildBlockDirectory flag
      fe1b7616
    • Record correct offset in block-directory for CO. · ace8b8a5
      Committed by Ashwin Agrawal
      An incorrect block offset was recorded in the block directory when
      inserting a column value larger than blocksize into a CO table. In
      such a case the column value is divided into multiple small-content
      blocks, stitched together by a large-content block at the start.
      Hence the block directory should record the offset of the
      large-content block, as that is the logical start of the block;
      instead, the offset of the last small-content block was recorded.
      
      Fix the issue by not touching `lastWriteBeginPosition` inside
      `AppendOnlyStorageWrite_FinishBuffer()`, as that function is called to
      complete every physical block (in the large-content case, for the
      large-content block plus every following small-content block in its
      chain). Instead, move the responsibility of updating
      `lastWriteBeginPosition` to the callers of
      `AppendOnlyStorageWrite_FinishBuffer()`, the finish-block routines,
      which clearly know the logical block boundary; this also makes it
      consistent throughout the codebase.
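
      A toy illustration of the offset rule (the constants and helper below
      are made up for this sketch): the directory entry must point at the
      large-content header block, the logical start of the row, not at
      whichever small-content block happened to be finished last:

          BLOCK_SIZE = 32768        # assumed AO blocksize for this sketch
          HEADER_LEN = 64           # hypothetical large-content header block size

          def write_large_datum(file_offset, datum_len):
              """Return (directory_offset, next_file_offset) for a datum > BLOCK_SIZE."""
              logical_block_start = file_offset       # offset of the large-content header
              file_offset += HEADER_LEN
              remaining = datum_len
              while remaining > 0:
                  chunk = min(remaining, BLOCK_SIZE)
                  file_offset += chunk                # finishing each small-content block
                  remaining -= chunk                  # must not move the directory offset
              # Record the logical start, not the last small-content block's offset.
              return logical_block_start, file_offset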
      ace8b8a5
    • Checkpoint after 1st phase in gpcheckmirrorseg script. · 8ec4658d
      Committed by Ashwin Agrawal
      For some tests (mainly filerep tests) an inconsistency between primary
      and mirror was reported by gpcheckmirrorseg.pl, the script used to
      detect inconsistencies between the two. The script performs a
      checkpoint at the start, but if the first phase finds diffs it
      performs some catalog lookups via the persistent tables. With heap
      page pruning it is possible that a page gets modified during this
      scan. Hence checkpoint again after the first phase to make sure the
      change reaches the mirror, and only then move ahead with further
      comparisons, avoiding false positives.
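
      Roughly, the resulting flow of the script (run_sql() and
      compare_primary_mirror() below are stand-ins for what
      gpcheckmirrorseg.pl actually does, sketched in Python):

          def run_sql(stmt):
              print("SQL:", stmt)          # stub; the real script goes through psql

          def compare_primary_mirror(detailed=False):
              return []                    # stub; returns mismatching files, if any

          def check_mirror_consistency():
              run_sql("CHECKPOINT")        # flush pending changes to the mirror first
              diffs = compare_primary_mirror()
              if diffs:
                  # Phase 1 catalog lookups via persistent tables may dirty heap
                  # pages (page pruning), so checkpoint again before the detailed
                  # comparison to avoid false positives.
                  run_sql("CHECKPOINT")
                  diffs = compare_primary_mirror(detailed=True)
              return diffs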
      8ec4658d
    • Always show chain statistics in EXPLAIN ANALYZE · 6e1f6d5f
      Committed by Shreedhar Hardikar
      This is useful information in EXPLAIN ANALYZE even when there is no
      spilling. Also refactor this code into a separate function.
      6e1f6d5f
    • Use the hashtable to maintain parent shift · 5412efbc
      Committed by Shreedhar Hardikar
      This shift value is used to alter the hash function when reloading
      spilled entries. The hash table uses only one such value at a time, so
      it can be maintained in the hash table to simplify access methods.
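
      A toy sketch of how such a shift can alter the bucket choice when
      spilled entries are reloaded (the exact scheme used by the HHashTable
      code may differ; this is an assumption for illustration):

          def bucket_for(hashvalue, nbuckets, parent_shift):
              # Discard the hash bits already consumed by parent spill levels so
              # that entries which collided into the same spill file spread out
              # when they are reloaded.
              return (hashvalue >> parent_shift) % nbuckets

          # parent_shift starts at 0 and is bumped each time a spill file is
          # reloaded; since only one value is live at a time, it can be stored
          # in the hash table itself instead of being threaded through the
          # access methods.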
      5412efbc
    • Clean up HHashTable implementation · 8baaa67a
      Committed by Shreedhar Hardikar
      The commit contains a number of minor refactors and fixes.
      - Fix indentation and spelling
      - Remove unused variable (pass)
      - Reset bloom filter after spilling HashAgg
      - Remove dead code in ExecAgg
      - Move AggStatus to AggState as HashAggStatus, because the state-change
        algorithm is implemented in nodeAgg
      8baaa67a
    • Add behave test to cover gpperfmon installation · 2bd99ddd
      Committed by C.J. Jameson
      Signed-off-by: Larry Hamel <lhamel@pivotal.io>
      2bd99ddd
    • Integrate gpperfmon into top-level make system · f3b523eb
      Committed by Marbin Tan
      - add new configure option `--enable-gpperfmon`
      - include gpperfmon libraries into configure
      Signed-off-by: Larry Hamel <lhamel@pivotal.io>
      Signed-off-by: Chumki Roy <croy@pivotal.io>
      Signed-off-by: Melanie Plageman <mplageman@pivotal.io>
      Signed-off-by: Larry Hamel <lhamel@pivotal.io>
      Signed-off-by: C.J. Jameson <cjameson@pivotal.io>
      f3b523eb
    • Remove net-snmp references · 6985ba3a
      Committed by Chumki Roy
      6985ba3a
    • Add prototypes to avoid compilation warnings · 4d3874e8
      Committed by Larry Hamel
      Signed-off-by: Chumki Roy <croy@pivotal.io>
      4d3874e8
  3. 10 Apr 2017 (4 commits)
  4. 08 Apr 2017 (3 commits)
  5. 07 Apr 2017 (6 commits)
    • Use backend quoting functions instead of libpq · 44d06dee
      Committed by Daniel Gustafsson
      libpq is frontend code and shouldn't be used in backend processes.
      The requirement here is to correctly quote the relation name in
      partitioning such that pg_dump/gp_dump can create working DDL for
      the partition hierarchy. For this purpose, quote_literal_internal()
      does the same thing as PQescapeString(). The following relation
      definitions were hitting the bug that is now fixed by applying proper
      quoting:
      
      CREATE TABLE part_test (id int, id2 int)
        PARTITION BY LIST (id2) (
          PARTITION "A1" VALUES (1)
        );
      
      CREATE TABLE sales (trans_id int, date date)
        DISTRIBUTED BY (trans_id)
        PARTITION BY RANGE (date) (
          START (date '2008-01-01') INCLUSIVE
          END (date '2009-01-01') EXCLUSIVE
          EVERY (INTERVAL '1 month')
        );
      
      ALTER TABLE sales
        SPLIT PARTITION FOR ('2008-01-01')
        AT ('2008-01-16')
        INTO (PARTITION jan081to15, PARTITION jan0816to31);
      ALTER TABLE sales
        ADD DEFAULT PARTITION other;
      ALTER TABLE sales
        SPLIT DEFAULT PARTITION
        START ('2009-01-01') INCLUSIVE
        END ('2009-02-01') EXCLUSIVE
        INTO (PARTITION jan09, PARTITION other);
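
      As a rough illustration of what the quoting accomplishes (a
      simplification; the real functions also have to care about backslashes
      and encoding), sketched in Python:

          def quote_literal(s):
              # Double embedded single quotes and wrap the value, which is the
              # core of what both PQescapeString() and quote_literal_internal()
              # provide for string literals embedded in generated DDL.
              return "'" + s.replace("'", "''") + "'"

          print(quote_literal("2008-01-01"))   # '2008-01-01'
          print(quote_literal("it's"))         # 'it''s'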
      
      This commit was previously pushed and reverted due to test failures
      in the build pipeline. It seems the errors were caused by another
      patch that went in at the same time, so this commit is being
      re-applied.
      44d06dee
    • Make wiki discoverable from README · 729c0454
      Committed by C.J. Jameson
      729c0454
    • Update docs and README with new locations · 48eaee75
      Committed by Daniel Gustafsson
      Set the correct wiki URL to use for the project; we have moved over
      to using the GitHub wiki on the main repo rather than the wiki on the
      website repo.

      Remove references to the Pivotal Greenplum documentation in the
      README and stick to greenplum.org/docs/ there, as those are the
      applicable docs. While at it, update README.md to actually match
      reality: the docs are now in the main repo.
      48eaee75
    • Rename commands/resgroup.c to commands/resgroupcmds.c, to distinguish it from utils/resgroup/resgroup.c. · bb15e06e
      Committed by Kenan Yao
      
      Signed-off-by: Richard Guo <riguo@pivotal.io>
      Signed-off-by: Gang Xiong <gxiong@pivotal.io>
      bb15e06e
    • Define a GUC 'max_resource_groups' to constrain the number of resource groups that can be created in a database. · 3dc4703f
      Committed by Kenan Yao
      
      Signed-off-by: Richard Guo <riguo@pivotal.io>
      Signed-off-by: Gang Xiong <gxiong@pivotal.io>
      3dc4703f
    • Implement concurrency limit of resource group. · d0c6a352
      Committed by Kenan Yao
      The work includes:
      * define structures used by resource groups in shared memory;
      * insert/remove the shared memory object on Create/Drop Resource Group;
      * clean up and restore when Create/Drop Resource Group fails;
      * implement the concurrency slot acquire/release functionality;
      * sleep when no concurrency slot is available, and wake up others when
        releasing a concurrency slot if necessary;
      * handle signals in resource groups properly.
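
      A minimal sketch of the concurrency-slot idea, using a counting
      semaphore (Python for illustration only; the real implementation lives
      in shared memory and uses the backend wait/wakeup machinery):

          import threading

          class ResourceGroup:
              """Toy model of per-group concurrency slots."""
              def __init__(self, concurrency):
                  self.slots = threading.Semaphore(concurrency)

              def acquire_slot(self):
                  # Sleep until a slot is free; the releaser wakes one waiter.
                  self.slots.acquire()

              def release_slot(self):
                  self.slots.release()

          # A query in the group holds a slot for its whole duration.
          group = ResourceGroup(concurrency=2)
          group.acquire_slot()
          try:
              pass  # run the query
          finally:
              group.release_slot()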
      
      Signed-off-by: Richard Guo <riguo@pivotal.io>
      Signed-off-by: Gang Xiong <gxiong@pivotal.io>
      d0c6a352