1. 06 Nov 2017, 16 commits
    • Remove Assert on hashList length related to nodeResult. · 44352db9
      Max Yang committed
      We don't need to check
      Assert(list_length(resultNode->hashList) <= resultSlot->tts_tupleDescriptor->natts),
      because the optimizer can be smart enough to reuse columns, as in the following queries:
      create table tbl(a int, b int, p text, c int) distributed by(a, b);
      create function immutable_generate_series(integer, integer) returns setof integer as
      'generate_series_int4' language internal immutable;
      set optimizer=on;
      insert into tbl select i, i, i || 'SOME NUMBER SOME NUMBER', i % 10 from immutable_generate_series(1, 1000) i;
      The hashList specified by the planner is (1, 1), which references immutable_generate_series for (a, b), and
      resultSlot->tts_tupleDescriptor only contains immutable_generate_series. That is fine, so we don't need to
      check again; slot_getattr(resultSlot, attnum, &isnull) already checks
      attnum <= resultSlot->tts_tupleDescriptor->natts for us, as sketched below.
      Signed-off-by: Xiaoran Wang <xiwang@pivotal.io>
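      A minimal caller-side sketch of that point (simplified, not the literal GPDB code):

          /* slot_getattr() validates attnum against the slot's tuple
           * descriptor itself, so an extra Assert above it is redundant. */
          bool  isnull;
          Datum d = slot_getattr(resultSlot, attnum, &isnull);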
    • Silence Bison deprecation warnings · 9fce2f7a
      Adam Lee committed
          commit 55fb759a
          Author: Peter Eisentraut <peter_e@gmx.net>
          Date:   Tue Jun 3 22:36:35 2014 -0400
      
              Silence Bison deprecation warnings
      
              Bison >=3.0 issues warnings about
      
                  %name-prefix="base_yy"
      
              instead of the now preferred
      
                  %name-prefix "base_yy"
      
              but the latter doesn't work with Bison 2.3 or less.  So for now we
              silence the deprecation warnings.
    • Fix compiler warnings in pg_dump · e020e03d
      Adam Lee committed
      In 6a76c5d0, Heikki back-ported two commits from upstream to fix this,
      but they hung gp_dump_agent and were reverted.
      
      This time I backport just one of them, which is enough, as I tested.
      
          commit d923125b
          Author: Peter Eisentraut <peter_e@gmx.net>
          Date:   Fri Mar 2 22:30:01 2012 +0200
      
              Fix incorrect uses of gzFile
      
              gzFile is already a pointer, so code like
      
              gzFile *handle = gzopen(...)
      
              is wrong.
      
              This used to pass silently because gzFile used to be defined as void*,
              and you can assign a void* to a void**.  But somewhere between zlib
              versions 1.2.3.4 and 1.2.6, the definition of gzFile was changed to
              struct gzFile_s *, and with that new definition this usage causes
              compiler warnings.
      
              So remove all those extra pointer decorations.
      
              There is a related issue in pg_backup_archiver.h, where
      
              FILE       *FH;             /* General purpose file handle */
      
              is used throughout pg_dump as sometimes a real FILE* and sometimes a
              gzFile handle, which also causes warnings now.  This is not yet fixed
              here, because it might need more code restructuring.
      
      GitHub issue #447
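      A minimal sketch of the pattern the upstream commit fixes ("dump.gz" is illustrative):

          #include <zlib.h>

          void
          open_dump(void)
          {
              /* Wrong (pre-fix): gzFile is already a pointer type, so
               *     gzFile *fp = gzopen("dump.gz", "rb");
               * declares a pointer-to-pointer and now triggers a warning. */

              /* Right (post-fix): declare the handle as plain gzFile. */
              gzFile fp = gzopen("dump.gz", "rb");

              if (fp != NULL)
                  gzclose(fp);
          }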
    • Fix libpq C type warnings · 7e46c187
      Adam Lee committed
      C files need to include postgres.h in the backend, or postgres_fe.h in the
      frontend, to adopt our C types.
      
      fe-protocol3.c: In function ‘pqParseInput3’:
      fe-protocol3.c:256:22: warning: passing argument 1 of ‘pqGetInt64’ from incompatible pointer type [-Wincompatible-pointer-types]
             if (pqGetInt64(&(ao->tupcount), conn))
                            ^
      In file included from fe-protocol3.c:21:0:
      libpq-int.h:629:14: note: expected ‘int64 * {aka long int *}’ but argument is of type ‘long long int *’
       extern int64 pqGetInt64(int64 *result, PGconn *conn);  /* GPDB only */
                    ^~~~~~~~~~
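      A sketch of the fix pattern, following the standard PostgreSQL convention that
      every frontend .c file includes postgres_fe.h before anything else:

          /* Must be the first include in any frontend source file, so that
           * types such as int64 match the declarations in libpq-int.h. */
          #include "postgres_fe.h"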
    • Fix cdb_ddboost_util compile warnings · 92f83f54
      Adam Lee committed
      cdb_ddboost_util.c: In function ‘createFakeRestoreFile’:
      cdb_ddboost_util.c:52:82: warning: pointer type mismatch in conditional expression
       #define DDDOPEN(path, mode, compress) (((compress) == 1) ? (GZDOPEN(path, mode)) : (fdopen(path, mode)))
                                                                                        ^
      cdb_ddboost_util.c:1856:10: note: in expansion of macro ‘DDDOPEN’
         ddfp = DDDOPEN(fd[0], "r", isCompress);
                ^~~~~~~
      cdb_ddboost_util.c:51:80: warning: pointer type mismatch in conditional expression
       #define DDOPEN(path, mode, compress) (((compress) == 1) ? (GZOPEN(path, mode)) : (fopen(path, mode)))
                                                                                      ^
      cdb_ddboost_util.c:1864:14: note: in expansion of macro ‘DDOPEN’
         ddfpTemp = DDOPEN(dd_options->to_file, "w", isCompress);
                    ^~~~~~
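      The warning arises because the two arms of the conditional have different
      pointer types (gzFile vs. FILE *). A hedged sketch of one way to silence it,
      not necessarily the exact fix applied here, is to cast both arms to a common
      type:

          /* Both arms become void * so the ?: expression has one type. */
          #define DDOPEN(path, mode, compress) \
              (((compress) == 1) ? (void *) GZOPEN(path, mode) \
                                 : (void *) fopen(path, mode))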
    • Allow pg_config --configure. · 193c128d
      Heikki Linnakangas committed
      Despite the comment, I see no reason to hide it.
      
      Fixes github issue #2685.
    • resgroup: collect debug info more frequently in cpu test. · 522f5797
      Ning Yu committed
      The resgroup cpu test is still flaky on concourse, but the failure is
      hard to trigger manually, so we have to put more debug info in the test
      logs.
    • Improve resource group code. · 54ef8c21
      Richard Guo committed
      * Set the resource group id to InvalidOid in pg_stat_activity
        when the transaction ends.
      * Change the mode of ResGroupLock to LW_SHARED in decideResGroup().
      * Add necessary comments for resource group functions.
      * Change some function and variable names.
    • Retire resource group test case resgroup_verify_guc. · fae1feb3
      Richard Guo committed
      Test case resgroup_verify_guc verifies the setting of the
      GUC statement_mem. Since this GUC does not take effect
      under resource groups, remove the test case.
    • Symlink libpq files for backend and optimize the makefiles · 1e9cd7d9
      Adam Lee committed
      src/backend's makefiles have their own rules; this commit symlinks the libpq
      files into the backend to leverage them, which is canonical and much simpler.
      
      What are the rules?
      
      1. src/backend compiles each SUBDIR, lists the OBJS in the subdirectories'
      objfiles.txt, then links them all into postgres.
      
      2. mock.mk links all OBJS, but filters out the objects that are mocked by
      test cases.
    • Revert "Update mock makefiles to work with libpq objfiles" · e1d673dc
      Adam Lee committed
      This reverts commit 0ed93335.
    • Fix memory tuple binding leak when calling tuplestore_putvalues for tuplestore. · b9058a4d
      Max Yang committed
      For instance, take the simple query:
      create table spilltest2 (a integer);
      insert into spilltest2 select a from generate_series(1,40000000) a;
      The current optimizer generates a FunctionTableScan for generate_series, which stores
      the result of generate_series in a tuplestore. A memory leak occurs because a memory
      tuple binding is constructed on every call to tuplestore_putvalues. This is not only a
      memory leak but also a performance problem, because we don't need to construct the
      memory tuple binding for every row, just once. The fix changes the interface of
      tuplestore_putvalues to receive the memory tuple binding as input, constructed once
      by the caller, as sketched below.
      Signed-off-by: Xiaoran Wang <xiwang@pivotal.io>
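      A hedged sketch of the reworked call pattern (the exact GPDB signature may
      differ; create_memtuple_binding and the parameter order are assumptions):

          /* Build the memtuple binding once, outside the per-row loop... */
          MemTupleBinding *bind = create_memtuple_binding(tupdesc);

          for (i = 0; i < nrows; i++)
          {
              /* ...fill values[] / nulls[] for one row... */
              tuplestore_putvalues(tupstore, tupdesc, bind, values, nulls);
          }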
    • Fix some indentation and remove trailing whitespace in the following files · 645b6473
      Max Yang committed
      before we make changes to them. Splitting the indentation and code changes
      into separate commits makes review easier:
      pg_exttable.c
      prepare.c
      execQual.c
      plpgsql.h
      Signed-off-by: Xiaoran Wang <xiwang@pivotal.io>
    • Make qp_dml_oids test case more robust. · 8f5e111a
      Heikki Linnakangas committed
      I frequently saw failures when running the test on my laptop with ORCA.
      The reason is that the OIDs on a user table are not guaranteed to be
      unique across segments in GPDB. Depending on concurrent activity and
      timing, the OID counters on different segments are not always in sync,
      and can produce duplicates when looked at across all nodes. In fact, I
      think the only reason this currently passes so reliably on the pipeline
      is that the test is run in parallel with other tests that also create
      objects, which creates enough noise in the OID allocations.
      
      To fix, modify the test data in the test so that all the initial test rows
      reside on the same segment. Within a segment, the OIDs are unique.
    • Refactoring around ExecInsert. · d0fbecf6
      Heikki Linnakangas committed
      This is mostly in preparation for changes soon to be merged from PostgreSQL
      8.4, commit a77eaa6a to be more precise. Currently GPDB's ExecInsert
      uses ExecSlotFetch*() functions to get the tuple from the slot, while in
      the upstream, it makes a modifiable copy with ExecMaterializeSlot(). That's
      OK as the code stands, because there's always a "junk filter" that ensures
      that the slot doesn't point directly to an on-disk tuple. But commit
      a77eaa6a will change that, so we have to start being more careful.
      
      This does fix an existing bug, namely that if you UPDATE an AO table with
      OIDs, the OIDs currently change (github issue #3732). Add a test case for
      that.
      
      More detailed breakdown of the changes:
      
      * In ExecInsert, create a writeable copy of the tuple when we're about
        to modify it, by calling ExecMaterializeSlot(), so that we don't
        accidentally modify an existing on-disk tuple (see the sketch below).
      
      * In ExecInsert, track the OID of the tuple we're about to insert in a
        local variable, when we call the BEFORE ROW triggers, because we don't
        have a "tuple" yet.
      
      * Add ExecMaterializeSlot() function, like in the upstream, because we now
        need it in ExecInsert. Refactor ExecFetchSlotHeapTuple to use
        ExecMaterializeSlot(), like in upstream.
      
      * Cherry-pick bug fix commit 3d02cae3 from upstream. We would get that
        soon anyway as part of the merge, but we'll soon have test failures if
        we don't fix it immediately.
      
      * Change the API of appendonly_insert(), so that it takes the new OID as
        argument, instead of extracting it from the passed-in MemTuple. With this
        change, appendonly_insert() is guaranteed to not modify the passed-in
        MemTuple, so we don't need the equivalent of ExecMaterializeSlot() for
        MemTuples.
      
      * Also change the API of appendonly_insert() so that it returns the new OID
        of the inserted tuple, like heap_insert() does. Most callers ignore the
        return value, so this way they don't need to pass a dummy pointer
        argument.
      
      * Add test case for the case that a BEFORE ROW trigger sets the OID of
        a tuple we're about to insert.
      
      This is based on earlier patches against the 8.4 merge iteration3 branch by
      Jacob and Max.
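      A minimal sketch of the materialize-before-modify pattern described above
      (simplified, not the literal GPDB diff):

          /* ExecMaterializeSlot() gives the slot its own private copy of the
           * tuple, so modifying it cannot touch an on-disk tuple. */
          HeapTuple tuple = ExecMaterializeSlot(slot);

          HeapTupleSetOid(tuple, newOid);   /* safe: only the local copy changes */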
  2. 04 Nov 2017, 8 commits
  3. 03 Nov 2017, 7 commits
    • Fix memory leak in PL/python. · 13cd7529
      Heikki Linnakangas committed
      Commit eb1740c6 backported a bunch of code from upstream, but the
      backported code was structured slightly differently. That added a pstrdup()
      call with no corresponding pfree(), which led to a memory leak in the
      executor memory context. The leak adds up if, e.g., the conversion of the
      PL/Python function's return value to a Postgres type is very complicated.
      
      This was revealed by a test case that returns a huge array. Converting the
      Python array to a PostgreSQL array leaked the string representation of
      every element.
      
      create or replace function gen_array(x int)
      returns float8[]
      as
      $$
          from random import random
          return [random() for _ in range(x)]
      $$language plpythonu;
      
      EXPLAIN ANALYZE select gen_array(120000000);
      
      I did not add that test case to the regression suite, as there is no
      convenient place to add it to. A memory leak just means that it consumes
      a lot of memory, which would be difficult to test reliably.
      
      Fixes github issue #3654.
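      A minimal sketch of the leak pattern (names are illustrative): a per-element
      pstrdup() in the executor context with no pfree() leaves one copy behind for
      every array element until the context is reset.

          char *str = pstrdup(element_text);     /* allocated in executor context */
          /* ...convert str to the target Postgres type... */
          pfree(str);                            /* the backported code missed this */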
    • Resgroup: Add hook for resource group assignment (#3592) · 42321b63
      David Sharp committed
      If set, resgroup_assign_hook is called during transaction setup, and the
      transaction is assigned to the resource group corresponding to the returned
      Oid. This allows an extension to change how transactions are assigned to
      resource groups.
      
      Also adds the Makefile and .c file necessary for running CMockery tests for
      resgroup.c, as well as unit tests over the added code, which can be run with:
      
          cd test && make -C .. clean all && make && ./resgroup.t
      
      We set the CurrentResourceOwner in GetResGroupIdForName so it can be called
      from decideResGroupId, which is called outside a transaction when
      CurrentResourceOwner is not set.
      Signed-off-by: Amil Khanzada <akhanzada@pivotal.io>
      Signed-off-by: David Sharp <dsharp@pivotal.io>
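      A hedged sketch of how an extension might install the hook (the hook's exact
      signature is an assumption here, and myextension_assign is hypothetical):

          #include "postgres.h"

          /* Assumed shape: return the Oid of the resource group to assign. */
          static Oid
          myextension_assign(void)
          {
              return InvalidOid;    /* assumption: fall back to default assignment */
          }

          void
          _PG_init(void)
          {
              resgroup_assign_hook = myextension_assign;
          }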
    • Revert "Bump Orca version to 2.48.3" · 42786dfe
      Haisheng Yuan committed
      This reverts commit a59d8338.
    • Bump Orca version to 2.48.3 · a59d8338
      Haisheng Yuan committed
    • Bump CCP to 4.0.1.1 for bug fix · d623e161
      Kris Macoskey committed
      Signed-off-by: Divya Bhargov <dbhargov@pivotal.io>
    • Don't fire INSERT or DELETE triggers on UPDATE. · 07a88acc
      Heikki Linnakangas committed
      With ORCA, an UPDATE is actually implemented as a DELETE + INSERT. Don't
      fire INSERT or DELETE triggers in that case.
      
      This was broken by commit 740f304e. I wonder if we should somehow fire
      UPDATE triggers in that case, but I don't see any existing code to do that
      either.
    • docs - add palloc/malloc discussion (#3681) · 19e44d63
      Lisa Owen committed
  4. 02 Nov 2017, 9 commits
    • Change the order in which gpsmon sends packets to gpmmon · b0a68ad6
      Nadeem Ghani committed
      Before this commit, gpmmon expected to see QLOG packets before QUERYSEG packets.
      Out-of-order packets were quietly dropped. This behavior was causing
      intermittent test failures with the message: No segments for CPU skew
      calculation.
      
      This commit changes the order of packet sends in gpsmon to fix these failures.
      Signed-off-by: Jacob Champion <pchampion@pivotal.io>
    • Remove Solaris conditions and use PYTHONHOME only when set (#3698) · e97e8b1a
      Larry Hamel committed
      - Remove the Solaris special cases; we don't support Solaris anymore.
      
      - When PYTHONHOME is not set, don't use it. PYTHONHOME should remain
      at its default (unset) and not be used as a variable, unless a bundled
      Python is available and preferred.
      
      - Use LD_LIBRARY_PATH only. LD_LIBRARY_PATH has been supported since
      macOS 10.5, so remove the conditionals for Darwin, discarding
      DYLD_LIBRARY_PATH in favor of the standard LD_LIBRARY_PATH.
    • Wake up faster if a segment returns an error. · 3bbedbe9
      Heikki Linnakangas committed
      Previously, if a segment reported an error after starting up the
      interconnect, it would take up to 250 ms for the main thread in the QD
      process to wake up and poll the dispatcher connections, and to see that
      there was an error. Shorten that time, by waking up immediately if the
      QD->QE libpq socket becomes readable while we're waiting for data to
      arrive in a Motion node.
      
      This isn't a complete solution, because this will only wake up if one
      arbitrarily chosen connection becomes readable, and we still rely on
      polling for the others. But this greatly speeds up many common scenarios.
      In particular, the "qp_functions_in_select" test now runs in under 5 s
      on my laptop, when it took about 60 seconds before.
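      A hedged sketch of the idea (names are assumed, not the literal GPDB code):
      include the QD->QE libpq socket in the wait, so an error wakes us immediately
      instead of on the next 250 ms poll.

          int rc = WaitLatchOrSocket(&ic_latch,
                                     WL_LATCH_SET | WL_SOCKET_READABLE | WL_TIMEOUT,
                                     qd_qe_libpq_socket,
                                     250 /* ms, fallback poll interval */);

          if (rc & WL_SOCKET_READABLE)
              handle_dispatcher_error();   /* hypothetical error-check routine */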
    • Use a Postgres latch for wakeup of main thread from interconnect RX thread. · 2dc14878
      Heikki Linnakangas committed
      The problem with pthread wait conditions is that there is no way to wait
      for the wakeup from another thread and for other events, like a socket
      becoming readable, at the same time. We currently rely on polling for the
      other events, which leads to unnecessary delays. In particular, if a QE
      throws an ERROR, we wait up to 250 milliseconds, until the timeout is
      reached, before waking up the QD main thread to process the error.
      
      This commit doesn't actually address that problem yet, just changes the
      signaling mechanism between the RX thread and the main thread. I'll make
      the changes to avoid that delay as a separate commit, for easier review.
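      A minimal sketch of the new signaling (simplified; the latch name is assumed):
      a latch, unlike a pthread condition variable, can be waited on together with
      timeouts and other events.

          /* Interconnect RX thread, after queueing a received packet: */
          SetLatch(&ic_latch);

          /* QD main thread: */
          int rc = WaitLatch(&ic_latch, WL_LATCH_SET | WL_TIMEOUT, 250 /* ms */);

          if (rc & WL_LATCH_SET)
              ResetLatch(&ic_latch);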
    • Change the way QEs alloc/free resource group slots. · b5e1f43a
      Richard Guo committed
      Previously, the QD dispatched a resource group slot id to the QEs,
      and each QE got a slot according to that slot id.
      
      The problem with this approach is that if the QD exits before a QE and
      then dispatches the same slot id in a new session, two different
      sessions on the QE may share the same slot.
      
      With this commit, the QD no longer dispatches a slot id. Each QE
      allocs/frees its resource group slot from its own slot pool.
      Signed-off-by: xiong-gang <gxiong@pivotal.io>
    • Run tinc task as gpadmin in place of centos · e5fe076b
      Kris Macoskey committed
      Signed-off-by: Divya Bhargov <dbhargov@pivotal.io>
    • CI - update pipeline external worker tags · 76d59be1
      Kris Macoskey committed
      Signed-off-by: Divya Bhargov <dbhargov@pivotal.io>
    • Make the default user centos for run_tinc task · 5408ef0e
      Divya Bhargov committed
      The current user of the container is 'root'. This does not work for
      ssh'ing into CCP AWS clusters because 'root' is explicitly disabled for
      ssh. This is a standard pattern across any AWS AMI.
      
      This is a quick fix to unblock the pipeline. Some refactors may follow
      this that change the run_tinc PRETEST_SCRIPT pattern.
      Signed-off-by: Kris Macoskey <kmacoskey@pivotal.io>