1. 27 Apr 2017 (5 commits)
  2. 26 Apr 2017 (20 commits)
  3. 25 Apr 2017 (15 commits)
    • H
      Remove unused field. · 175f1f3b
      Committed by Heikki Linnakangas
      Commit fb93e7e7 removed this field, but commit 9a817d45 accidentally
      resurrected it. Remove it again.
    • A
      c2beefe7
    • O
      Fixing partition pruning test · 16aac4b4
      Committed by Omer Arap
    • A
      Check for empty exclude list before freeing the memory. · 57e3989f
      Committed by Abhijit Subramanya
      `build_exclude_list()` will return a statically allocated empty string if the
      number of exclude arguments is zero. We should check for this in the caller
      before freeing the pointer returned by it.
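      The pattern described above can be sketched as follows. This is a hedged, self-contained illustration, not the actual gpdb code: the builder name and the sentinel are hypothetical stand-ins for `build_exclude_list()` and its static empty string.

      ```c
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>

      /* Static sentinel returned when there is nothing to build.
       * It must never be passed to free(). */
      static const char *EMPTY = "";

      /* Hypothetical sketch of a build_exclude_list()-style function. */
      static char *build_exclude_list_sketch(int nexcludes)
      {
          if (nexcludes == 0)
              return (char *) EMPTY;          /* static storage: do NOT free */
          return strdup("--exclude=foo");     /* heap storage: caller frees */
      }

      int main(void)
      {
          char *list = build_exclude_list_sketch(0);

          /* The fix: free only if the builder actually allocated. */
          if (list != EMPTY)
              free(list);

          printf("ok\n");
          return 0;
      }
      ```

      The key point is that ownership depends on the return path, so the caller must distinguish the static sentinel from heap memory before freeing.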
    • T
      Add environment for kerberos test for SLES11 · 00f1ab9d
      Committed by Tushar Dadlani
      Signed-off-by: Tom Meyer <tmeyer@pivotal.io>
    • J
      Performance tests for loading into AO/CO tables · 4ad401d6
      Committed by Jimmy Yih
      Remove the old, defunct performance directory and replace it with a simple
      way to test AO/CO table loading. This will help measure performance
      changes when we start generating XLOGs for AO/CO tables and implement
      WAL replication between primaries and mirrors. The current
      implementation is a bit hacky, but it gets the job done and does not
      rely on TPC (which we unfortunately can't have in our source).
      
      Current implementation:
      1. Create csv data file (specified by NUM_COPIES Makefile variable)
      2. Host the csv data file in an external table to load data into a
         base table faster (and to work around a known memory leak bug when
         doing many inserts in a LOOP)
      3. Create the AO and AOCO tables and start testing loading times
         sequentially and concurrently
      4. Parse the output into a csv to allow separate comparison analysis
    • A
      Prevent potential overruns of fixed-size buffers. · bbbe3042
      Committed by Abhijit Subramanya
      Coverity identified a number of places in which it couldn't prove that a
      string being copied into a fixed-size buffer would fit.  We believe that
      most, perhaps all of these are in fact safe, or are copying data that is
      coming from a trusted source so that any overrun is not really a security
      issue.  Nonetheless it seems prudent to forestall any risk by using
      strlcpy() and similar functions.
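      The `strlcpy()` idiom above can be sketched in isolation. Since `strlcpy()` is a BSD extension that is not available in every libc, this hedged example defines a minimal equivalent (`my_strlcpy`, a hypothetical name) just to show the truncate-and-terminate semantics being relied on:

      ```c
      #include <stdio.h>
      #include <string.h>

      /* Minimal strlcpy-style helper: never overruns dst, always
       * NUL-terminates (when dstsize > 0), and returns the length of
       * src so the caller can detect truncation. */
      static size_t my_strlcpy(char *dst, const char *src, size_t dstsize)
      {
          size_t srclen = strlen(src);

          if (dstsize > 0)
          {
              size_t n = (srclen >= dstsize) ? dstsize - 1 : srclen;
              memcpy(dst, src, n);
              dst[n] = '\0';
          }
          return srclen;   /* > dstsize - 1 means the copy was truncated */
      }

      int main(void)
      {
          char buf[8];
          size_t need = my_strlcpy(buf, "a string longer than buf", sizeof(buf));

          /* buf holds a truncated but NUL-terminated copy. */
          printf("%s %zu\n", buf, need);
          return 0;
      }
      ```

      Unlike `strcpy()`, a too-long source can at worst truncate, never overrun the fixed-size buffer, which is exactly the property the commit relies on.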
      
      Fixes by Peter Eisentraut and Jozef Mlich based on Coverity reports.
      
      In addition, fix a potential null-pointer-dereference crash in
      contrib/chkpass.  The crypt(3) function is defined to return NULL on
      failure, but chkpass.c didn't check for that before using the result.
      The main practical case in which this could be an issue is if libc is
      configured to refuse to execute unapproved hashing algorithms (e.g.,
      "FIPS mode").  This ideally should've been a separate commit, but
      since it touches code adjacent to one of the buffer overrun changes,
      I included it in this commit to avoid last-minute merge issues.
      This issue was reported by Honza Horak.
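      The chkpass fix boils down to checking the return value before dereferencing it. The sketch below uses a hypothetical stand-in for `crypt(3)` (which really can return NULL, e.g. when libc refuses a hashing algorithm in FIPS mode), since calling the real function portably in a small example is awkward:

      ```c
      #include <stdio.h>
      #include <string.h>

      /* Hypothetical stand-in for crypt(3): returns NULL on failure,
       * just as the real function is documented to do. */
      static const char *crypt_maybe(const char *key, const char *salt)
      {
          (void) key;
          return (strcmp(salt, "xx") == 0) ? "xxHashedValue" : NULL;
      }

      int main(void)
      {
          const char *salts[] = { "xx", "!!" };

          for (int i = 0; i < 2; i++)
          {
              const char *hash = crypt_maybe("secret", salts[i]);

              /* The check chkpass.c was missing: test for NULL before use. */
              if (hash == NULL)
                  printf("salt %s: crypt failed\n", salts[i]);
              else
                  printf("salt %s: %s\n", salts[i], hash);
          }
          return 0;
      }
      ```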
      
      Security: CVE-2014-0065 for buffer overruns, CVE-2014-0066 for crypt()
      
      (cherry picked from commit e3208fec32586076cd1a24cae19ded158a60e15a)
      Signed-off-by: Xin Zhang <xzhang@pivotal.io>
    • A
      Fix coverity issues in pg_basebackup.c · 6d944693
      Committed by Abhijit Subramanya
      CID 130131: Copy into fixed size buffer (STRING_OVERFLOW)
      Coverity reported that we might overrun the buffer `current_path` while trying
      to copy the return value of `PQgetvalue()` since we don't check its length.
      Fix this by using `strlcpy()` in order to avoid buffer overruns. This is also how
      it has been fixed upstream.
      
      CID 157318: Resource leak (RESOURCE_LEAK)
      `build_exclude_list()` returns a string allocated with malloc, but since we
      never stored the return value, the allocated memory could never be freed.
      Fix this by storing the result of the function in a separate variable and
      freeing it once it has been used.
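      A minimal sketch of that leak fix, with a hypothetical builder standing in for `build_exclude_list()`:

      ```c
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>

      /* Hypothetical builder returning malloc'd memory, like the
       * non-empty path of build_exclude_list(). */
      static char *build_list_sketch(void)
      {
          return strdup("--exclude=foo");
      }

      int main(void)
      {
          /* Before the fix: the function was called and its pointer
           * discarded, leaking the allocation. After the fix, the
           * result is captured and freed once it has been used: */
          char *list = build_list_sketch();

          printf("using: %s\n", list);
          free(list);
          return 0;
      }
      ```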
    • H
      Transform small Array constants to ArrayExprs. · 9a817d45
      Committed by Heikki Linnakangas
      ORCA can do some optimizations - partition pruning at least - if it can
      "see" into the elements of an array in a ScalarArrayOpExpr. For example, if
      you have a qual like "column IN (1, 2, 3)", and the table is partitioned on
      column, it can eliminate partitions that don't hold those values. The
      IN-clause is converted into a ScalarArrayOpExpr, so that is really
      equivalent to "column = ANY <array>"
      
      However, ORCA doesn't know how to extract elements from an array-typed
      Const, so it can only do that if the array in the ScalarArrayOpExpr is
      an ArrayExpr. Normally, eval_const_expressions() simplifies any ArrayExprs
      into Consts, if all the elements are constants, but we had disabled that
      when ORCA was used, to keep the ArrayExprs visible to it.
      
      There are a couple of reasons why that was not a very good solution. First,
      while we refrain from converting an ArrayExpr to an array Const, it doesn't
      help if the argument was an array Const to begin with. The "x IN (1,2,3)"
      construct is converted to an ArrayExpr by the parser, but we would miss the
      opportunity if it's written as "x = ANY ('{1,2,3}'::int[])" instead.
      Secondly, by not simplifying the ArrayExpr, we miss the opportunity to
      simplify the expression further. For example, if you have a qual like
      "1 IN (1,2)", we can evaluate that completely at plan time to 'true', but
      we would not do that with ORCA because the ArrayExpr was not simplified.
      
      To be able to also optimize those cases, and to slightly reduce our diff
      vs upstream in clauses.c, always simplify ArrayExprs to Consts, when
      possible. To compensate, so that ORCA still sees ArrayExprs rather than
      array Consts (in those cases where it matters), when a ScalarArrayOpExpr
      is handed over to ORCA, we check if the argument array is a Const, and
      convert it (back) to an ArrayExpr if it is.
      Signed-off-by: Jemish Patel <jpatel@pivotal.io>
    • T
      Use sles11_x86_64 instead of rhel6_x86_64 for SLES · 29777c62
      Committed by Tom Meyer
      Signed-off-by: Tushar Dadlani <tdadlani@pivotal.io>
    • K
      Similar to the optimizer plan, make the executor code free the bitmap for the planner plan. · 08112e2f
      Committed by Karthikeyan Jambu Rajaraman
      As with the optimizer, with this change the planner's
      `BitmapIndexScanState` is responsible for freeing the bitmaps
      during `ExecEnd` or `ExecRescan`. The parent node no longer needs
      to worry about freeing the bitmaps.
      
      This also fixes the following error for a planner plan (bitmapscan_ao.sql):
      ```
      select count(1) from bmcrash b1, bmcrash b2 where b1.bitmap_col =
      b2.bitmap_col or b1.bitmap_col = '999' and b1.btree_col1 = 'abcdefg999';
      ERROR:  non hash bitmap (nbtree.c:572)  (seg1 slice2 127.0.0.1:25433
      pid=83873)
      ```
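      The single-owner rule these bitmap commits converge on can be sketched generically. Everything here is a hypothetical simplification of the `BitmapIndexScanState` / `HashStreamOpaque` relationship, not the actual executor code:

      ```c
      #include <stdio.h>
      #include <stdlib.h>

      /* Stand-in for a HashBitMap. */
      typedef struct Bitmap { int words[4]; } Bitmap;

      /* Stand-in for BitmapIndexScanState: the sole owner of the bitmap. */
      typedef struct ScanState
      {
          Bitmap *bitmap;     /* owned: freed here at ExecEnd/ExecRescan time */
      } ScanState;

      static void scan_exec_end(ScanState *s)
      {
          free(s->bitmap);    /* single owner frees exactly once */
          s->bitmap = NULL;
      }

      int main(void)
      {
          ScanState s = { malloc(sizeof(Bitmap)) };

          /* Consumers (e.g. a HashStreamOpaque-like struct) keep only a
           * borrowed reference; they read it but never free it, and they
           * must not touch it after the owner's ExecEnd has run. */
          Bitmap *borrowed = s.bitmap;
          (void) borrowed;

          scan_exec_end(&s);  /* no double free, no dangling free in consumers */
          printf("ok\n");
          return 0;
      }
      ```

      Centralizing the `free()` in one node is what eliminates both the double free and the "non hash bitmap" error described in these commits.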
    • K
      Fix HashStreamOpaque bad access / double free of HashBitMap. · 57b9aeb3
      Committed by Karthikeyan Jambu Rajaraman
      `HashBitMap` is owned by `BitmapIndexScanState` (`node->bitmap`).
      `BitmapIndexScanState` frees the `HashBitMap` when `ExecEnd` is
      called, so it is not the responsibility of `HashStreamOpaque`, which
      merely holds a reference, to free it. Also, `HashStreamOpaque` should
      never access the `HashBitMap` while being freed itself, since the
      `HashBitMap` may already have been freed by then.
    • K
      Revert(except test) "[#129402309] Fix double free when using stream bitmaps with hash bitmaps." · 3d69ae41
      Committed by Karthikeyan Jambu Rajaraman
      This reverts commit b5d7b30f.
      
      `HashStreamOpaque` never owns the `HashBitmap`, so there is no need
      for a `free` variable in `HashStreamOpaque` to denote whether it owns
      the `HashBitmap`, and hence no need for the `tbm_create_stream_node_ref`
      API at all. Given that `HashStreamOpaque` does not own the `HashBitMap`,
      it should never access the `HashBitMap` while `HashStreamOpaque` itself
      is being freed: there is no guarantee that the `HashBitmap` is still
      valid at that point, because `ExecEndBitmapIndexScan` may already have
      freed the `HashBitmap` referenced by `HashStreamOpaque`.
    • M
      GPDB DOCS - Fix issues found when validating files in gpdb-webhelp.di… (#2278) · eef6991e
      Committed by mkiyama
      * GPDB DOCS - Fix issues found when validating files in gpdb-webhelp.ditamap.
      
      * fix for GUC gp_connections_per_thread update to GPDB 5 description.