1. 04 Nov 2017, 3 commits
  2. 03 Nov 2017, 7 commits
    • H
      Fix memory leak in PL/python. · 13cd7529
      Committed by Heikki Linnakangas
      Commit eb1740c6 backported a bunch of code from upstream, but the
      backported code was structured slightly differently. That added a pstrdup()
      call, with no corresponding pfree(). That led to a memory leak in the
      executor memory context, which adds up if e.g. the conversion of the
      PL/Python function's return value to a Postgres type is very complicated.
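      
      The leak pattern, as a hedged sketch (the variable and field names here are
      illustrative, not the actual PL/Python source):
      
          PyObject *so  = PyObject_Str(obj);              /* element as a Python string */
          char     *str = pstrdup(PyString_AsString(so)); /* copy in executor context */
          Datum     d   = InputFunctionCall(&arg->typfunc, str,
                                            arg->typioparam, arg->typmod);
          pfree(str);  /* the backport lacked this pfree(), leaking one copy per element */
          Py_DECREF(so);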
      
      This was revealed by a test case that returns a huge array. Converting the
      Python array to a PostgreSQL array leaked the string representation of
      every element.
      
      create or replace function gen_array(x int)
      returns float8[]
      as
      $$
          from random import random
          return [random() for _ in range(x)]
      $$ language plpythonu;
      
      EXPLAIN ANALYZE select gen_array(120000000);
      
      I did not add that test case to the regression suite, as there is no
      convenient place for it. A memory leak just means that the process consumes
      a lot of memory, which would be difficult to test for reliably.
      
      Fixes github issue #3654.
      13cd7529
    • D
      Resgroup: Add hook for resource group assignment (#3592) · 42321b63
      Committed by David Sharp
      If set, resgroup_assign_hook is called during transaction setup, and the
      transaction is assigned to the resource group corresponding to the returned
      Oid. This allows an extension to change how transactions are assigned to
      resource groups.
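      
      For illustration, an extension might install the hook roughly like this (a
      hedged sketch: the hook's exact signature and the argument list of
      GetResGroupIdForName are assumptions, not taken from the commit):
      
          #include "postgres.h"
          
          /* hypothetical assignment policy: route everything to one named group */
          static Oid
          my_resgroup_assign(void)
          {
              return GetResGroupIdForName("batch_group");  /* args simplified */
          }
          
          void
          _PG_init(void)
          {
              resgroup_assign_hook = my_resgroup_assign;
          }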
      
      Also adds the Makefile and .c file necessary for running CMockery tests for
      resgroup.c, as well as unit tests over the added code, which can be run with:
      
          cd test && make -C .. clean all && make && ./resgroup.t
      
      We set the CurrentResourceOwner in GetResGroupIdForName so it can be called
      from decideResGroupId, which is called outside a transaction when
      CurrentResourceOwner is not set.
      Signed-off-by: Amil Khanzada <akhanzada@pivotal.io>
      Signed-off-by: David Sharp <dsharp@pivotal.io>
      42321b63
    • H
      Revert "Bump Orca version to 2.48.3" · 42786dfe
      Committed by Haisheng Yuan
      This reverts commit a59d8338.
      42786dfe
    • H
      Bump Orca version to 2.48.3 · a59d8338
      Committed by Haisheng Yuan
      a59d8338
    • K
      Bump CCP to 4.0.1.1 for bug fix · d623e161
      Committed by Kris Macoskey
      Signed-off-by: Divya Bhargov <dbhargov@pivotal.io>
      d623e161
    • H
      Don't fire INSERT or DELETE triggers on UPDATE. · 07a88acc
      Committed by Heikki Linnakangas
      With ORCA, an UPDATE is actually implemented as a DELETE + INSERT. Don't
      fire INSERT or DELETE triggers in that case.
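      
      A minimal sketch of the guard, with assumed names (isSplitUpdate is
      illustrative, not necessarily the real flag):
      
          /* skip row-level INSERT triggers when this INSERT is really the
           * second half of an ORCA split update */
          if (resultRelInfo->ri_TrigDesc && !isSplitUpdate)
              ExecARInsertTriggers(estate, resultRelInfo, tuple);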
      
      This was broken by commit 740f304e. I wonder if we should somehow fire
      UPDATE triggers in that case, but I don't see any existing code to do that
      either.
      07a88acc
    • L
      docs - add palloc/malloc discussion (#3681) · 19e44d63
      Committed by Lisa Owen
      19e44d63
  3. 02 Nov 2017, 19 commits
    • N
      Change the order in which gpsmon sends packets to gpmmon · b0a68ad6
      Committed by Nadeem Ghani
      Before this commit, gpmmon expected to see QLOG packets before QUERYSEG
      packets. Out-of-order packets were quietly dropped. This behavior was
      causing intermittent test failures with the message: "No segments for CPU
      skew calculation".
      
      This commit changes the order in which gpsmon sends the packets, to fix
      these failures.
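      
      Conceptually the fix is just reordering two sends (a sketch; the helper
      name is hypothetical):
      
          /* gpmmon drops QUERYSEG packets that arrive before the matching
           * QLOG packet, so send QLOG first */
          send_to_gpmmon(sock, &qlog_pkt);
          send_to_gpmmon(sock, &queryseg_pkt);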
      Signed-off-by: Jacob Champion <pchampion@pivotal.io>
      b0a68ad6
    • L
      Remove Solaris conditions and use PYTHONHOME only when set (#3698) · e97e8b1a
      Committed by Larry Hamel
      - Remove Solaris special cases; we don't support Solaris anymore.
      
      - When PYTHONHOME is not set, don't use it. PYTHONHOME should remain
        unset by default, and not be used as a variable, unless a bundled
        Python is available and preferred.
      
      - Use LD_LIBRARY_PATH only. LD_LIBRARY_PATH has been supported since
        macOS 10.5, so remove the conditionals for Darwin, discarding
        DYLD_LIBRARY_PATH in favor of the standard LD_LIBRARY_PATH.
      e97e8b1a
    • H
      Wake up faster, if a segment returns an error. · 3bbedbe9
      Committed by Heikki Linnakangas
      Previously, if a segment reported an error after starting up the
      interconnect, it could take up to 250 ms for the main thread in the QD
      process to wake up, poll the dispatcher connections, and see that
      there was an error. Shorten that time by waking up immediately if the
      QD->QE libpq socket becomes readable while we're waiting for data to
      arrive in a Motion node.
      
      This isn't a complete solution, because this will only wake up if one
      arbitrarily chosen connection becomes readable, and we still rely on
      polling for the others. But this greatly speeds up many common scenarios.
      In particular, the "qp_functions_in_select" test now runs in under 5 s
      on my laptop, when it took about 60 seconds before.
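      
      Building on the latch mechanism from the next commit (2dc14878), the wait
      looks roughly like this (a hedged sketch; variable names are assumed):
      
          /* wait for motion data, but also watch the QD->QE libpq socket so
           * an ERROR from the QE wakes us immediately, not after the 250 ms
           * poll timeout */
          int rc = WaitLatchOrSocket(&ic_latch,
                                     WL_LATCH_SET | WL_SOCKET_READABLE | WL_TIMEOUT,
                                     qe_libpq_sock, 250 /* ms */);
          if (rc & WL_SOCKET_READABLE)
              checkDispatchResult();  /* hypothetical: surface the error now */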
      3bbedbe9
    • H
      Use a Postgres latch for wakeup of main thread from interconnect RX thread. · 2dc14878
      Committed by Heikki Linnakangas
      The problem with pthread wait conditions is that there is no way to wait
      for the wakeup from another thread and for other events, like a socket
      becoming readable, at the same time. We currently rely on polling for the
      other events, which leads to unnecessary delays. In particular, if a QE
      throws an ERROR, we wait up to 250 milliseconds, until the timeout
      is reached, before waking up the QD main thread to process the error.
      
      This commit doesn't actually address that problem yet; it just changes the
      signaling mechanism between the RX thread and the main thread. I'll make
      the changes to avoid that delay in a separate commit, for easier review.
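      
      Roughly, the signaling changes from a condition variable to a latch (a
      hedged sketch; the actual variable names differ):
      
          /* RX thread, on receiving data (was pthread_cond_signal): */
          SetLatch(&ic_latch);
          
          /* main thread (was pthread_cond_timedwait with a 250 ms timeout): */
          int rc = WaitLatch(&ic_latch, WL_LATCH_SET | WL_TIMEOUT, 250 /* ms */);
          if (rc & WL_LATCH_SET)
              ResetLatch(&ic_latch);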
      2dc14878
    • R
      Change the way QEs alloc/free resource group slots. · b5e1f43a
      Committed by Richard Guo
      Previously the QD dispatched a resource group slot id to the QEs,
      and each QE got a slot according to that slot id.
      
      The problem with this approach is that if the QD exits before a QE
      and then dispatches the same slot id in a new session, two
      different sessions on the QE may share the same slot.
      
      With this commit, the QD no longer dispatches slot ids. Each QE
      allocates/frees resource group slots from its own slot pool.
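      
      A hedged sketch of the QE-side flow (function names are assumed, not
      verified against the commit):
      
          /* each QE now draws from its own pool instead of trusting a slot
           * id dispatched by the QD */
          ResGroupSlotData *slot = slotpoolAllocSlot();  /* QE-local pool */
          sessionSetSlot(slot);                          /* bind to session */
          /* ... and on leaving the group: */
          slotpoolFreeSlot(slot);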
      Signed-off-by: xiong-gang <gxiong@pivotal.io>
      b5e1f43a
    • K
      Run tinc task as gpadmin in place of centos · e5fe076b
      Committed by Kris Macoskey
      Signed-off-by: Divya Bhargov <dbhargov@pivotal.io>
      e5fe076b
    • K
      CI - update pipeline external worker tags · 76d59be1
      Committed by Kris Macoskey
      Signed-off-by: Divya Bhargov <dbhargov@pivotal.io>
      76d59be1
    • D
      Make the default user centos for run_tinc task · 5408ef0e
      Committed by Divya Bhargov
      The current user of the container is 'root'. This does not work for
      ssh'ing into CCP AWS clusters because 'root' is explicitly disabled for
      ssh. This is a standard pattern across any AWS AMI.
      
      This is a quick fix to unblock the pipeline. Some refactors may follow
      this that change the run_tinc PRETEST_SCRIPT pattern.
      Signed-off-by: Kris Macoskey <kmacoskey@pivotal.io>
      5408ef0e
    • H
      Move tests on partition pruning after DROP COLUMN from TINC. · 170e70a6
      Committed by Heikki Linnakangas
      The indexes in these tests were quite useless. In 'alter_tab_drop_col_2',
      the DROP COLUMN also dropped the index, so we got no chance to use it in
      any queries. In 'alter_tab_drop_col_3', ORCA doesn't know how to use the
      index for the IS FALSE query. In the translation from TINC, I removed the
      former case, but kept the latter one, and also added another query where
      ORCA does know how to use the index.
      
      This removes the last tests from the QP_optimizer-functional test suite,
      so remove all traces of that from the Makefiles and the Concourse pipeline.
      170e70a6
    • H
      Move tests on partition pruning from TINC. · 98c9cfb4
      Committed by Heikki Linnakangas
      We had very similar tests in the "partition_pruning" test, even with the
      same test table, so that's a natural home for this too.
      
      I left out the "alter_tab_add_pkey" tests, because they were identical to
      the "alter_tab_add_uniquekey" tests, except that the index was created for
      a PRIMARY KEY rather than a UNIQUE constraint. I don't see the point of
      that. Furthermore, the test queries in "alter_tab_add_pkey" didn't actually
      make use of the index.
      98c9cfb4
    • H
      Remove unnecessary TINC test. · 0e0744a0
      Committed by Heikki Linnakangas
      There are plenty of tests for the case that some but not all partitions
      have an index, in the "partition_pruning" pg_regress test case.
      0e0744a0
    • H
      Remove obsolete and broken executor tracing facility. · 8e58e725
      Committed by Heikki Linnakangas
      This #ifdef'd code would not compile, because GroupState isn't defined,
      and hasn't been for as long as the git history goes. That hints that no one
      has used this facility in years, so it seems safe to remove it. You can
      use a debugger, or custom temporary elog()s, if you need to trace a particular
      query like this.
      8e58e725
    • H
      Create OID correctly when inserting toasted AO tuples. · 0ae7b86f
      Committed by Heikki Linnakangas
      UPDATE on an AO table still changes the OID of the tuple, which is wrong,
      but that's a separate issue and harder to fix.
      
      Fixes github issue #3702.
      0ae7b86f
    • H
      Don't bother inlining toasted values, before UPDATE on AO table. · 4b2616f3
      Committed by Heikki Linnakangas
      I see no reason to put together a MemTuple with all the values detoasted,
      before calling appendonly_update(). appendonly_update() calls
      appendonly_insert(), and it knows how to detoast and re-toast the values.
      4b2616f3
    • H
      Fix confusion on different kinds of "tuples". · 740f304e
      Committed by Heikki Linnakangas
      The code in ExecInsert, ExecUpdate, and CopyFrom was confused about what
      kind of tuple the "tuple" variable might hold at different times. In
      particular, if you had a trigger on an append-only table, they would pass
      a MemTuple to the Exec*Triggers() functions, which expect a HeapTuple.
      
      To fix, refactor the code so that it's always clear what kind of a tuple
      we're dealing with. The compiler will now throw warnings if they are
      conflated.
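      
      The shape of the fix, as a hedged sketch (the conversion helper names are
      assumed for illustration):
      
          /* triggers need a genuine HeapTuple; convert explicitly when the
           * tuple came from an AO table as a MemTuple */
          HeapTuple trigtuple;
          if (is_memtuple(gtuple))
              trigtuple = memtuple_to_heaptuple(mt_bind, (MemTuple) gtuple);
          else
              trigtuple = (HeapTuple) gtuple;
          ExecARInsertTriggers(estate, resultRelInfo, trigtuple);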
      
      We cannot, in fact, support ON UPDATE or ON DELETE triggers on AO tables
      in a sane way. GetTupleForTrigger() is hopelessly heap-specific. We could
      perhaps change it to do a lookup in the append only table's visibility map
      instead of looking at a heap tuple's xmin/xmax, but looking up the original
      tuple in an AO table would be fairly expensive anyway. As far as I can see,
      that never worked correctly, but let's add a check in CREATE TRIGGER to
      forbid that.
      
      ON INSERT triggers now work, also on AOCS tables. There were previously
      checks to throw an error if an AOCS table had a trigger, but I see no
      reason to forbid that particular case.
      
      Fixes github issue #3680.
      740f304e
    • H
      Assume that ORCA will fall back on BEFORE ROW triggers. · e5fbc544
      Committed by Heikki Linnakangas
      Put an assertion for that, and re-indent the code to match upstream.
      e5fbc544
    • H
      Don't pass around MemTuples as HeapTuples. · c02f93da
      Committed by Heikki Linnakangas
      Invent a new pointer type, GenericTuple, for when we might be dealing with
      either a MemTuple or a HeapTuple. The old practice of holding a MemTuple
      in a HeapTuple-typed variable, or passing a MemTuple to a function that's
      declared to take a HeapTuple parameter, seemed dangerous.
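      
      The idea, in a minimal sketch (the actual definition may differ):
      
          /* an opaque pointer type for "MemTuple or HeapTuple"; the struct is
           * never defined, so using it as either kind requires a deliberate
           * cast, and silent conflation no longer compiles */
          typedef struct GenericTupleData *GenericTuple;
          
          extern void takes_heap(HeapTuple ht);
          
          GenericTuple gt = fetch_some_tuple();  /* hypothetical accessor */
          takes_heap(gt);              /* compiler: incompatible pointer type */
          takes_heap((HeapTuple) gt);  /* allowed only via an explicit cast */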
      c02f93da
    • L
      c1b579bf
  4. 01 Nov 2017, 11 commits
    • S
      Adding Conduct, Contribution and Issues template to fit community practices (#3687) · 99b4d3ca
      Committed by Scott Kahler
      * Adding Conduct, Contribution and Issues templates to fit community best practices
      
      * Adding more information to be gathered and ways to gather it
      99b4d3ca
    • H
      Remove over-zealous Assertion in workfile manager. · 448aaa2c
      Committed by Heikki Linnakangas
      During abort processing, AbortTransaction releases the resources in each
      resource owner, one resource owner at a time. It's true that after having
      processed all of them, there should not be any open workfile sets left,
      which is what this assertion tried to test. But it only holds after having
      released all resource owners.
      
      I just happened to trip this assertion while hacking on something else.
      448aaa2c
    • H
      Remove "boundary" TINC tests with 10 MB datums, improve "toast" test. · 0ce11e42
      Committed by Heikki Linnakangas
      These so-called boundary tests didn't actually go very close to the maximum
      size of a text/bytea/varchar column. The maximum size of a single datum is
      1 GB, and the value used here was only 10 MB.
      
      I was tempted to just remove the tests, but since someone felt that we need
      to test largish values, let's try to keep the coverage. So to compensate
      for the removal of the TINC tests, add similar test queries to the "toast"
      pg_regress test. Also make one of the large columns the distribution key,
      so that we test detoasting of the distribution key.
      
      I don't think using the subquery in the INSERT and UPDATE was the point of
      the test, so I replaced that with a much faster function to generate test
      data.
      
      We had tests of a varchar's length limit in the 'varchar' test already, so
      that aspect of the dml_boundary_varchar test also didn't seem interesting.
      
      This removes the last tests from the "optimizer_functional_part2" set of
      tests. Remove that from the Makefile and concourse pipeline file.
      0ce11e42
    • P
      6609c512
    • N
      resgroup: be more verbose in cpu test. · e1f1b0df
      Committed by Ning Yu
      Save the raw CPU status information in the result file for debugging.
      e1f1b0df
    • P
      Move mpp-interconnect from pulse to CCP · 39d3a593
      Committed by Pengzhou Tang
      * Abandon Pulse and use the CCP framework
      * Add a new TINC target for mpp_interconnect
      * Put the ickm source code alongside the interconnect test scripts
      39d3a593
    • D
      Fix compilation of backend on getpeereid() enabled platforms · 8aa21268
      Committed by Daniel Gustafsson
      When getpeereid() is available on the platform, we don't need it
      from ports; always invoking it causes a compilation failure, since
      the object file will be missing. Fix by using the proper lookup
      list from autoconf, which contains the list of required objects.
      8aa21268
    • Z
      Add more dumps for resgroup. · bb018413
      Committed by Zhenghua Lyu
      * Add some ResGroupControl fields to the JSON dump.
      * Add key validation in tests.
      * Improve code style.
      bb018413
    • D
      Docs: Updating/removing "experimental" status for resource groups feature (#3705) · 7a0415ba
      Committed by David Yozie
      * Docs - editing resource groups warning for RHEL 6
      
      * Removing most resource group "experimental" warnings; addressing the remaining SuSE issue and the experimental status on that platform
      7a0415ba
    • H
      Minirepro: Dump statistics information for catalog table · 0854d2c7
      Committed by Haisheng Yuan
      Previously, if the input query contained a catalog table, minirepro would
      not dump statistics for it. We could then generate a different plan than
      the customer's environment, because of the missing statistics. With this
      patch, statistics for catalog tables used in the input query are also
      dumped.
      0854d2c7
    • F
      9537bfa7