1. 11 Nov 2017 (5 commits)
    • Add unit tests for append-optimized usage of invalid_page_tab hash table · 74d07939
      Committed by Jimmy Yih
      cdbappendonlystorage_test.c:
      Test that calling appendonly_redo with XLOG_APPENDONLY_INSERT or
      XLOG_APPENDONLY_TRUNCATE will call ao_insert_replay or
      ao_truncate_replay respectively.
      
      xlog_mm_test.c:
      Test that an MMXLOG_REMOVE_FILE AO transaction log record will call
      XLogAODropSegmentFile.
      
      cdbmirroredappendonly_test.c:
      Test that ao_insert_replay or ao_truncate_replay will call
      XLogAOSegmentFile when we cannot find the AO segment file.
      
      xlogutils_test.c:
      Test that the invalid_page_tab hash table is properly used when
      invalid AO segment files are encountered.
      
      These four pieces combined test the case where AO XLOG records are
      replayed but the corresponding segment files are missing from the
      filesystem. The invalid file is entered into the invalid_page_tab hash
      table until a corresponding MMXLOG_REMOVE_FILE XLOG record comes along
      to remove the entry.
      
      Currently these unit tests are under the --enable-segwalrep configure
      flag in the Makefile, but they can be moved to the actual TARGET if
      non-segwalrep tests are added and need to be tested.
      
      Reference:
      https://github.com/greenplum-db/gpdb/commit/b659d0479fa57c383e7dbe15043cd2ca9d46f77f
    • CreateFakeRelcacheEntry: remove dead standby-filespace code · c7d0ee16
      Committed by Jacob Champion
      The directory creation code here seems to be duplicated in multiple
      places. Since tests are passing now, we're not sure if it's necessary.
      It seems to only apply to a standby master, when using non-default
      filespaces, and we haven't found a good test case to reproduce the
      initial issue that it was supposed to solve -- it seems like originally,
      there was a problem where the filespace hashtable was not initialized on
      the standby.
      
      Since things seem to be working as intended now, get rid of the dead
      code.
      Signed-off-by: Jacob Champion <pchampion@pivotal.io>
      Signed-off-by: Asim R P <apraveen@pivotal.io>
    • XLogReadBuffer: resolve 8.4 merge FIXME · ea7eabb0
      Committed by Asim R P
      Add a FIXME to ao_insert_replay to check if it can work similar to
      replay of heap insert xlog records.
    • Align simplify_EXISTS_query with upstream · c823e7c6
      Committed by Dhanashree Kashid
      This function had diverged a lot from upstream after the subselect
      merge. One of the main reasons is that upstream has a lot of restrictive
      checks which prevent pull-up of EXISTS/NOT EXISTS. GPDB handles them
      differently, producing a join, an initplan, or a one-time filter.
      
      The cases that GPDB handles and for which we have not ported the checks
      from upstream are as follows:
      
      - AGG with limit count with/without offset
      - HAVING clause without AGG
      - AGG without HAVING clause
      
      For other conditions, we bail out as upstream does. Hence we have added
      the checks for HAVING and aggregates differently inside
      simplify_EXISTS_query. The rest of the code is similar to upstream.
      Signed-off-by: Ekta Khanna <ekhanna@pivotal.io>
    • Fix error message on DECLARE CURSOR ... SELECT INTO. · 98d3c182
      Committed by Heikki Linnakangas
      The error message was changed accidentally in commit 78b0a42e. Probably
      a copy-paste mistake. Change it back.
  2. 10 Nov 2017 (4 commits)
    • resgroup: remove cpu test and part of memory test. · 65fef74e
      Committed by Ning Yu
      These tests are not stable enough, so remove them for now. We will add
      them back after improvements.
    • Back-patch libpq support for TLS versions beyond v1 · a7c6138c
      Committed by Daniel Gustafsson
      This was in part already supported, as the backend part of the commit
      below was already backported. This brings in the frontend changes as
      well as the follow-up commit to remove backend support for SSLv3, since
      it no longer makes any sense to keep it around.
      
        commit 4dddf8552801ef013c40b22915928559a6fb22a0
        Author: Tom Lane <tgl@sss.pgh.pa.us>
        Date:   Thu May 21 20:41:55 2015 -0400
      
          Back-patch libpq support for TLS versions beyond v1.
      
          Since 7.3.2, libpq has been coded in such a way that the only SSL protocol
          it would allow was TLS v1.  That approach is looking increasingly obsolete.
          In commit 820f08ca we fixed it to allow TLS >= v1, but did not
          back-patch the change at the time, partly out of caution and partly because
          the question was confused by a contemporary server-side change to reject
          the now-obsolete SSL protocol v3.  9.4 has now been out long enough that
          it seems safe to assume the change is OK; hence, back-patch into 9.0-9.3.
      
          (I also chose to back-patch some relevant comments added by commit
          326e1d73, but did *not* change the server behavior; hence, pre-9.4
          servers will continue to allow SSL v3, even though no remotely modern
          client will request it.)
      
          Per gripe from Jan Bilek.
      
        commit 326e1d73
        Author: Tom Lane <tgl@sss.pgh.pa.us>
        Date:   Fri Jan 31 17:51:07 2014 -0500
      
          Disallow use of SSL v3 protocol in the server as well as in libpq.
      
          Commit 820f08ca claimed to make the server
          and libpq handle SSL protocol versions identically, but actually the server
          was still accepting SSL v3 protocol while libpq wasn't.  Per discussion,
          SSL v3 is obsolete, and there's no good reason to continue to accept it.
          So make the code really equivalent on both sides.  The behavior now is
          that we use the highest mutually-supported TLS protocol version.
      
          Marko Kreen, some comment-smithing by me
    • resgroup: fix the slot memory quota size on QE, and misc changes. · fe118987
      Committed by Ning Yu
      * resgroup: correct the memory quota size on QE.
        A QE might have different caps than the QD, so the runtime status must
        also be considered when deciding the per-slot memory quota.
      * resgroup: retire sessionId in slot.
      * resgroup: also update memQuotaUsed on QEs.
        This value is necessary on ALTER RESOURCE GROUP to decide the new
        memory capabilities, but it was not updated on QEs in the past, so the
        memory quota might be over-released on QEs. This does not lead to
        runtime errors, and the memory quota can be regained later, but
        performance can be affected in the worst case.
        To fix this, we now update this value on QEs as well.
      * resgroup: reduce the duplicated code in groupApplyMemCaps().
        Move the duplicated logic into mempoolAutoRelease().
      * resgroup: validate that proc is in the right resgroup wait queue.
        We used to check the proc's queuing status; now we also validate that
        it is in the specific resgroup's wait queue. This check is expensive,
        so it is only enabled in debugging builds.
      * resgroup: retire MyProc->resWaiting.
        resWaiting is only true when the proc is in the wait queue and only
        false when it is not, so it is a redundant flag. Checking the proc's
        queuing status directly serves the same purpose, so resWaiting can be
        safely retired.
        The helper functions procIsWaiting() and procIsInWaitQueue() are also
        merged into one.
      * and other misc changes.
    • misc cleanup on resource group cpu/memory codes · 936ab367
      Committed by Richard Guo
      * Set/Unset doMemCheck in Attach/Detach slot.
      * Use IsResGroupEnabled() in ResGroupReleaseMemory().
      * Remove AssertImply in Attach/Detach slot.
      * Replace magic number 0 with macro to represent cgroup top dir.
      * Remove static global variable cpucores.
      * Dump more info in ResGroupDumpMemoryInfo.
  3. 09 Nov 2017 (7 commits)
    • Automatically regenerate gppylib catalog JSON file · 28e897e6
      Committed by Daniel Gustafsson
      Instead of having to remember to manually update the gppylib JSON
      file (which has frequently been forgotten), hook the generation
      into the src/backend/catalog "all" target such that it's generated
      automatically when needed and thus can be removed from the repo
      (removing the risk of using a stale file).
      
      Also updates the documentation and makes some minor comment fixes to
      the process_foreign_key script.
    • resgroup: move UnassignResGroup into AtEOXact_ResGroup · 1f54f7d3
      Committed by Pengzhou Tang
      * Do UnassignResGroup within PrepareTransaction too; PrepareTransaction()
      puts the QE out of any transaction temporarily until the second commit
      command arrives, so any failure in this gap will leak resource group
      resources, including slots.
      * Clean up the code: move UnassignResGroup() into AtEOXact_ResGroup() so
      resource-group-related code is not spread across prepare/commit/abort
      functions.
      * Do not call callback functions in PrepareTransaction because the
      transaction is not truly committed.
    • Bring back c3d8a92e which was reverted by accident · eaae4a5b
      Committed by Pengzhou Tang
    • Replace int with uint64 in grow_unsorted_array · 74ecedb4
      Committed by Melanie Plageman
      In situations in which our available memory is much larger than the
      memory in our sortcontext, it was previously possible to overflow the
      maxNumEntries variable.
      Signed-off-by: Haisheng Yuan <hyuan@pivotal.io>
    • ATExecAddColumn: resolve FIXME and remove old JIRA references · 2db2231e
      Committed by Asim R P
      Unfortunately we can't remove the code referenced by the FIXME yet, but
      we've pulled the existing context into the comment and moved to GitHub
      for tracking.
      
      [ci skip]
      Signed-off-by: Jacob Champion <pchampion@pivotal.io>
      Signed-off-by: Asim R P <apraveen@pivotal.io>
    • Fixes order when splitting nested partitions · b12a4c0d
      Committed by Taylor Vesely
      Running ALTER TABLE PARTITION SPLIT on range subpartitions resulted in
      both new partitions incorrectly having the same partition order value
      (parruleord in pg_partition_rule). ALTER TABLE PARTITION SPLIT is
      accomplished by running multiple DDLs in sequence:
      
        1. CREATE TEMP TABLE to match the data type/orientation of the partition we are
           splitting
      
        2. ALTER TABLE PARTITION EXCHANGE the partition with the new temporary table.
           The temporary table now contains the partition data, and the partition
           table is now empty.
      
        3. ALTER TABLE DROP PARTITION on the exchanged partition (the new empty table)
      
        3a. Drop the partitioning rule on the empty partition
      
        3b. DROP TABLE on the empty partition
      
      At this point (in the old behavior) we remove the partition rule from
      the in-memory copy of the partition metadata. We need to remove it from
      the context here, or ADD PARTITION will believe that a partition for
      the split range already exists and will fail to create a new partition.
      
      Now, create two new partitions in the place of the old one. For each partition:
      
        4a. CREATE TABLE for the new range
      
        4b. ADD PARTITION - Search for a hole in the partition order to place the
            partition. Open up a hole in the parruleord if needed.
      
      When adding a subpartition, ADD PARTITION relies on the partition rules passed
      to it in order to find any holes in the partition range. Previously, the
      metadata was not refreshed when adding the second partition, and this resulted
      in the ADD PARTITION command creating both tables with the same partition rule
      order (parruleord). This commit resolves the issue by refreshing the partition
      metadata (PgPartRule) passed to the CREATE TABLE/ADD PARTITION commands upon
      each iteration.
  4. 08 Nov 2017 (15 commits)
  5. 07 Nov 2017 (2 commits)
    • Support hash subpartitions in PartitionSelector node · f685a09e
      Committed by Daniel Gustafsson
      When inserting tuples into a partitioned table with Orca, where the
      tuple belongs in a hashed subpartition, the PartitionSelector node
      errored out on an unknown partition type. Extend partition selection
      for equality comparison to handle hash partitions. Uncovered when
      adding the partition_ddl test from Tinc into ICW.
    • merge passwordcheck plugin from pg (#3689) · f9e6c5e1
      Committed by Weinan WANG
      * merge passwordcheck plugin from pg (c742b795)
      ```
      original commit message:
      Add a hook to CREATE/ALTER ROLE to allow an external module to check the
      strength of database passwords, and create a sample implementation of
      such a hook as a new contrib module "passwordcheck".
      
      Laurenz Albe, reviewed by Takahiro Itagaki
      
      ```
      * merge reason: some customers are waiting for it
  6. 06 Nov 2017 (7 commits)
    • Remove pointless mock test. · 09416670
      Committed by Heikki Linnakangas
      There's a direct call to MemoryAccounting_Reset() in MemoryContextInit().
      This test was merely testing that the call is indeed there. That seems
      completely uninteresting to me.
      
      I tested what happens if the call is removed. Postmaster segfaults at
      startup. So if that call somehow accidentally gets removed, we will find
      out quickly even without this test.
    • Remove Assert on hashList length related to nodeResult. · 44352db9
      Committed by Max Yang
      We don't need to check
      Assert(list_length(resultNode->hashList) <= resultSlot->tts_tupleDescriptor->natts),
      because the optimizer can be smart enough to reuse columns for queries
      like the following:
      create table tbl(a int, b int, p text, c int) distributed by(a, b);
      create function immutable_generate_series(integer, integer) returns setof integer as
      'generate_series_int4' language internal immutable;
      set optimizer=on;
      insert into tbl select i, i, i || 'SOME NUMBER SOME NUMBER', i % 10 from immutable_generate_series(1, 1000) i;
      The hashList specified by the planner is (1, 1), which references
      immutable_generate_series for (a, b), and resultSlot->tts_tupleDescriptor
      only contains immutable_generate_series. That is fine, so we don't need
      to check again. And slot_getattr(resultSlot, attnum, &isnull) will check
      attnum <= resultSlot->tts_tupleDescriptor->natts for us.
      Signed-off-by: Xiaoran Wang <xiwang@pivotal.io>
    • Improve resource group codes. · 54ef8c21
      Committed by Richard Guo
      * Set the resource group id to InvalidOid in pg_stat_activity when the
        transaction ends.
      * Change the mode of ResGroupLock to LW_SHARED in decideResGroup().
      * Add necessary comments for resource group functions.
      * Change some function names and variable names.
    • Symlink libpq files for backend and optimize the makefiles · 1e9cd7d9
      Committed by Adam Lee
      src/backend's makefiles have their own rules; this commit symlinks the
      libpq files for the backend to leverage them, which is canonical and
      much simpler.
      
      What are the rules?
      
      1. src/backend compiles each SUBDIR, lists the OBJS in the
      sub-directories' objfiles.txt, then links them all into postgres.
      
      2. mock.mk links all OBJS, but filters out the objects that are mocked
      by test cases.
    • Revert "Update mock makefiles to work with libpq objfiles" · e1d673dc
      Committed by Adam Lee
      This reverts commit 0ed93335.
    • Fix memory tuple binding leak when calling tuplestore_putvalues for tuplestore. · b9058a4d
      Committed by Max Yang
      For instance, take this simple query:
      create table spilltest2 (a integer);
      insert into spilltest2 select a from generate_series(1,40000000) a;
      The current optimizer generates a FunctionTableScan for generate_series,
      which stores the result of generate_series in a tuple store. A memory
      leak occurred because the memory tuple binding was constructed every
      time in tuplestore_putvalues. This is not only a memory leak problem
      but also a performance problem, because we don't need to construct the
      memory tuple binding for every row, just once. This fix changes the
      interface of tuplestore_putvalues: it now receives the memory tuple
      binding as input, constructed once outside.
      Signed-off-by: Xiaoran Wang <xiwang@pivotal.io>
    • Fix some indentation and remove extra trailing whitespace in the following files · 645b6473
      Committed by Max Yang
      before we make some changes to them. Just splitting indentation and
      code changes into separate commits to make review easier:
      pg_exttable.c
      prepare.c
      execQual.c
      plpgsql.h
      Signed-off-by: Xiaoran Wang <xiwang@pivotal.io>