1. 17 May 2018 (7 commits)
    • COPY: expand the type of numcompleted to 64 bits · 8d40268b
      Committed by Adam Lee
      Without this change, an integer overflow occurs once more than 2^31 rows
      have been copied under the `COPY ON SEGMENT` mode.
      
      Errors then surface when the overflowed value is cast to uint64 (the type
      of `processed` in `CopyStateData`): a third-party Postgres driver that
      reads the value as an int64 fails with an out-of-range error.
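      
      As a rough standalone illustration of the failure mode (a sketch, not the
      GPDB code itself): once the 32-bit counter has wrapped negative, assigning
      it to the uint64 `processed` field yields a value far above INT64_MAX,
      which is exactly what a driver parsing the command tag as an int64 rejects.
      
      ```c
      #include <stdint.h>
      #include <inttypes.h>
      #include <stdio.h>
      
      int
      main(void)
      {
          /* A 32-bit row counter that has wrapped negative after 2^31 rows. */
          int32_t numcompleted = INT32_MIN;
      
          /* Assigning it to a uint64 field sign-extends it into a huge value,
           * far larger than INT64_MAX... */
          uint64_t processed = (uint64_t) (int64_t) numcompleted;
      
          /* ...so a driver that parses the COPY command tag as an int64 fails
           * with an out-of-range error.  Widening numcompleted to 64 bits in
           * the first place avoids the wraparound entirely. */
          printf("%" PRIu64 "\n", processed);   /* prints 18446744071562067968 */
          return 0;
      }
      ```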
    • docs - resgroup memory_auditor updates to cat/view/gptoolkit (#4871) · a1fcd70e
      Committed by Lisa Owen
      * docs - resgroup memory_auditor to cat/view/gptoolkit
      
      * cgroup mem usage output - used and limit_granted
      
      * updates for code refactor that was just merged
      
      * type to text
    • docs: update cast information for GPDB 5/6 (#4969) · 44d4c948
      Committed by Mel Kiyama
      * docs: update cast information for GPDB 5/6
      
      Update cast information. Add information about limited text casts.
      See section "Type Casts"
      
      * docs: review comments for GPDB 5/6 updated cast information.
      
      * docs: fix typos in updated CAST info
    • Remove mpp-package tinc tests (#4999) · c299405a
      Committed by Todd Sedano
      Authored-by: Todd Sedano <tsedano@pivotal.io>
    • Ensure failover is complete before bringing up the mirror (#4963) · 8f0a0fa5
      Committed by Jesse Zhang
      To "rebalance" a primary-mirror pair, gprecoverseg -r performs the following steps:
      
      1.  bring down the acting primary
      2.  issue a query that triggers the failover
      3.  bring up the mirror (gprecoverseg -F)
      
      Currently these 3 steps happen in close succession. However, there is a chance that the mirror promotion triggered in step 2 completes more slowly than we expect. The implicit assumption here is that the acting mirror has finished transitioning to the primary role before step 3 is performed.
      
      This patch adds a retry in "sort of step 2, definitely before step 3", to ensure a good state before we can bring up the mirror.
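      
      As a rough sketch of the retry idea (gprecoverseg itself is written in Python;
      this C fragment only illustrates the concept using libpq's PQping(), and the
      conninfo string, function name, and retry bound are placeholders):
      
      ```c
      #include <unistd.h>
      #include <libpq-fe.h>
      
      /* Poll the promoted mirror until it accepts connections, so that step 3
       * (bringing up the old primary as the new mirror) only runs once the
       * failover has actually completed. */
      static int
      wait_for_promotion(const char *conninfo, int max_tries)
      {
          for (int i = 0; i < max_tries; i++)
          {
              if (PQping(conninfo) == PQPING_OK)
                  return 0;          /* acting mirror is now serving as primary */
              sleep(1);              /* not ready yet: back off and retry */
          }
          return -1;                 /* give up; caller should report an error */
      }
      ```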
      
      Co-authored-by: Jesse Zhang <sbjesse@gmail.com>
      Co-authored-by: David Kimura <dkimura@pivotal.io>
    • Do not dump protocols in filtered pg_dump · ac74f825
      Committed by Chris Hajas
      Since protocols do not belong to a namespace, we do not want to dump
      them in table or schema filtered backups. They will only be dumped in a
      full backup.
      Co-authored-by: Karen Huddleston <khuddleston@pivotal.io>
      Co-authored-by: Chris Hajas <chajas@pivotal.io>
    • Create dummy stats for type mismatch · 88b2ab36
      Committed by Omer Arap
      If the column statistics in `pg_statistic` contain values whose type differs
      from the column type, the metadata accessor should not translate the stats;
      it should create dummy stats instead.
      
      This commit also reorders stats collection from `pg_statistic` to align with
      how ANALYZE generates stats: MCV and histogram translation is moved to the end,
      after NDV, null fraction, and column width extraction.
      Signed-off-by: Melanie Plageman <mplageman@pivotal.io>
  2. 16 May 2018 (9 commits)
    • Concourse: refactor reporting after debuild error · 282881c0
      Committed by David Sharp
      Co-authored-by: David Sharp <dsharp@pivotal.io>
      Co-authored-by: Larry Hamel <lhamel@pivotal.io>
    • Correctly set pg_exttable.logerrors (#4985) · a33b8fc6
      Committed by Jesse Zhang
      Consider the following SQL; we expect error logging to be turned off for
      table `ext_error_logging_off`:
      
      ```sql
      create external table ext_error_logging_off (a int, b int)
          location ('file:///tmp/test.txt') format 'text'
          segment reject limit 100;
      \d+ ext_error_logging_off
      ```
      And then in this next case we expect error logging to be turned on for
      table `ext_t2`:
      
      ```sql
      create external table ext_t2 (a int, b int)
          location ('file:///tmp/test.txt') format 'text'
          log errors segment reject limit 100;
      \d+ ext_t2
      ```
      
      Before this patch, we were making two mistakes in handling these external
      table DDLs:
      
      1. We intend to enable error logging *whenever* the user specifies a
      `SEGMENT REJECT` clause, completely ignoring whether he or she specifies
      `LOG ERRORS`.
      2. Even then, we make the mistake of implicitly coercing the OID (an
      unsigned 32-bit integer) to a bool (which is really just a C `char`):
      that means 255/256 of the time (99.6%) the result is `true`, and 0.4% of
      the time we get a `false` instead.
      
      The `OID` to `bool` implicit conversion could have been caught by a
      `-Wconversion` GCC/Clang flag. It's most likely a leftover from commit
      8f6fe2d6.
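      
      A small standalone sketch of the narrowing bug (illustrative only; `pg_bool`
      here stands in for Postgres's historical char-sized bool, and the OID value
      is arbitrary):
      
      ```c
      #include <stdio.h>
      
      typedef char pg_bool;              /* stand-in for the char-sized bool */
      typedef unsigned int Oid;
      
      int
      main(void)
      {
          Oid     errtableOid = 24832;   /* any OID whose low byte happens to be 0 */
          pg_bool logerrors = errtableOid;   /* implicit narrowing keeps one byte */
      
          /* 255 out of 256 OIDs have a nonzero low byte and read as "true";
           * the remaining ~0.4% silently become "false". */
          printf("logerrors = %d\n", logerrors);   /* prints 0 here */
          return 0;
      }
      ```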
      
      This bug manifests itself in the `dsp` regression test mysteriously
      failing about once every 200 runs -- with the only diff on a `\d+` of an
      external table that should have error logging turned on, but the
      returned definition has it turned off.
      
      While working on this we discovered that all of our existing external
      tables have both `LOG ERRORS` and `SEGMENT REJECT`, which is why this
      bug wasn't caught in the first place.
      
      This patch fixes the issue by properly setting the catalog column
      `pg_exttable.logerrors` according to the user input.
      
      While we were at it, we also cleaned up a few dead pieces of code and
      made the `dsp` test a bit friendlier to debug.
      Co-authored-by: Jesse Zhang <sbjesse@gmail.com>
      Co-authored-by: David Kimura <dkimura@pivotal.io>
    • Bump ORCA version to 2.58.0 · ae46aac9
      Committed by Abhijit Subramanya
    • Bump ORCA version to 2.57.0 · 1924a403
      Committed by Abhijit Subramanya
    • Replace INT_MAX, INT_MIN, ULONG_MAX, ULLONG_MAX with enums · a17922d0
      Committed by Jesse Zhang
      Fixes greenplum-db/gporca#358
    • Readers should check abort status of their subxids · b8d95865
      Committed by Asim R P
      QE readers incorrectly return true from TransactionIdIsCurrentTransactionId()
      when passed an xid that is an aborted subtransaction of the current
      transaction.  The end effect is wrong results, because tuples inserted by the
      aborted subtransaction are seen (treated as visible according to MVCC rules) by
      a reader.  This patch fixes the bug by looking up the abort status of an XID
      in pg_clog.  In a QE writer, just like in upstream PostgreSQL, subtransaction
      information is available in CurrentTransactionState (even when the subxip cache
      has overflowed).  This information is not maintained in shared memory, making it
      unavailable to a reader.  Readers must resort to a longer route to get the same
      information: pg_subtrans and pg_clog.
      
      The patch does not use TransactionIdDidAbort() to check abort status.  The
      interface is designed to work with all transaction IDs.  It walks up the
      transaction hierarchy to look for an aborted parent if the status of the given
      transaction is found to be SUB_COMMITTED.  This is a wasted effort when a QE
      reader wants to test if its own subtransaction has aborted or not.  A new
      interface is introduced to avoid this wasted effort for QE readers.  We choose
      to rely on AbortSubTransaction()'s behavior of marking the entire subtree under
      the aborted subtransaction as aborted in pg_clog.  A SUB_COMMITTED status in
      pg_clog, therefore, allows us to conclude that the subtransaction is not
      aborted without having to walk up the hierarchy, provided the subtransaction
      is a child of our own transaction.
      
      The test case also needed a fix because the SQL query (insert into select *)
      didn't result in a reader gang being created.  The SQL is changed to a join on
      a non-distribution column so as to achieve reader gang creation.
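      
      A heavily simplified sketch of that reader-side shortcut; the enum values
      and the clog_get_status() helper are hypothetical stand-ins, not the actual
      GPDB interfaces:
      
      ```c
      #include <stdbool.h>
      
      typedef unsigned int TransactionId;
      
      /* Illustrative pg_clog status codes (hypothetical names). */
      typedef enum XidStatus
      {
          XID_IN_PROGRESS,
          XID_COMMITTED,
          XID_ABORTED,
          XID_SUB_COMMITTED
      } XidStatus;
      
      /* Hypothetical helper standing in for a pg_clog lookup. */
      extern XidStatus clog_get_status(TransactionId xid);
      
      /*
       * For a subxid that belongs to the reader's *own* transaction, a single
       * clog lookup suffices: AbortSubTransaction() marks the whole aborted
       * subtree as aborted in pg_clog, so SUB_COMMITTED implies "not aborted"
       * and no walk up the parent chain is needed.
       */
      static bool
      reader_own_subxact_is_aborted(TransactionId subxid)
      {
          return clog_get_status(subxid) == XID_ABORTED;
      }
      ```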
    • Avoid generating XLOG_APPENDONLY_INSERT for temp AO/CO tables. · a17f8028
      Committed by Ashwin Agrawal
      Temp tables need to be neither replicated nor crash-safe, so we can avoid
      generating xlog records for them.  Heap already avoids this; this patch skips
      it for AO/CO tables as well.  A new field `isTempRel` is added to
      `BufferedAppend` to help perform the check for temp tables and skip generating
      xlog records.
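      
      A simplified sketch of the check; the struct and function below are
      illustrative stand-ins for the BufferedAppend code path, not the actual
      GPDB definitions:
      
      ```c
      #include <stdbool.h>
      #include <stdio.h>
      
      /* Illustrative stand-in for the appendonly buffered-append state. */
      typedef struct BufferedAppend
      {
          bool    isTempRel;      /* new flag: true for temp AO/CO relations */
      } BufferedAppend;
      
      static void
      finish_buffer(BufferedAppend *ba)
      {
          /* ... write the completed block to the segment file ... */
      
          /* Temp relations are neither replicated nor crash-safe, so skip the
           * XLOG_APPENDONLY_INSERT record for them. */
          if (!ba->isTempRel)
              printf("emit XLOG_APPENDONLY_INSERT\n");   /* stand-in for xlog write */
      }
      
      int
      main(void)
      {
          BufferedAppend ba = { .isTempRel = true };
          finish_buffer(&ba);     /* no WAL record is emitted for the temp table */
          return 0;
      }
      ```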
    • Do not translate printable filters of PartitionSelector (#4959) · c38c72ff
      Committed by Shreedhar Hardikar
      Printable filters are used to produce an expression for the Partition
      Selector node that is printed during EXPLAIN to hint at the general nature
      of the filter used by the node.  They are not used by the executor in any
      way; it actually uses levelEqExpressions, levelExpressions, and
      residualPredicate instead.  These usually contain a completely different
      set of expressions, such as PartBounExpr, which are not printed during
      EXPLAIN.
      
      Also, with dynamic partition elimination, the partition selector's
      printable filter may contain VARs that are not in its subtree and
      instead refer to a DynamicTableScan node on the other side of a Join.
      This means that it becomes tricky to extract the correct printable
      filter expression during DXL to PlStmt translation since that occurs
      bottom-up.
      
      Since it is misleading and sometimes incorrect, it's better to remove it
      altogether.
      Signed-off-by: Sambitesh Dash <sdash@pivotal.io>
    • Fix coverity CID 185522 and CID 185520. · 7a149323
      Committed by Ashwin Agrawal
      *** CID 185522:  Security best practices violations  (STRING_OVERFLOW)
      /tmp/build/0e1b53a0/gpdb_src/src/backend/cdb/cdbtm.c: 2486 in gatherRMInDoubtTransactions()
      
      and
      
      *** CID 185520:  Null pointer dereferences  (FORWARD_NULL)
      /tmp/build/0e1b53a0/gpdb_src/src/backend/storage/ipc/procarray.c: 2251 in GetSnapshotData()
      
      This condition cannot happen because `GetDistributedSnapshotMaxCount()` doesn't
      return 0 for DTX_CONTEXT_QD_DISTRIBUTED_CAPABLE, and hence `inProgressXidArray`
      will always be initialized.  The finding is therefore marked as ignored in
      Coverity, but it is still worth adding an Assert for it.
  3. 15 May 2018 (6 commits)
  4. 12 May 2018 (3 commits)
    • Remove Gp_segment, replace with GpIdentity.segindex. · 660d009a
      Committed by Ashwin Agrawal
      The code had these two variables (GUCs) serving the same purpose.
      GpIdentity.segindex is set to the content-id, based on a command line argument
      at start-up, and is inherited by all processes from the postmaster.  Gp_segment,
      on the other hand, was a session-level GUC set only for backends, by dispatching
      it from the QD.  So essentially Gp_segment was not available, and had an
      incorrect value, in auxiliary processes.
      
      Hence all usages are replaced with GpIdentity.segindex.  As a side effect, log
      files now report the correct segment number (content-id) on each and every line
      of the file, irrespective of which process generated the log message.
      
      Discussion:
      https://groups.google.com/a/greenplum.org/forum/#!msg/gpdb-dev/Yr8-LZIiNfA/ob4KLgmkAQAJ
    • Use IS_QUERY_DISPATCHER() wherever relevant. · fa511cab
      Committed by Ashwin Agrawal
    • Speedup gpinitsystem by few secs. · f7677518
      Committed by Ashwin Agrawal
      This patch cuts about 8 seconds out of 30 for a single-node gpinitsystem (as
      part of gpdemo), and typically more for multi-node setups.
      
      - Removes explicit 2-second and 1-second sleeps.
      - Removes an explicit call to CHECKPOINT, which is not required.
      - Avoids starting primaries right after initdb, since they were started and
        then shut down very soon after in order to restart the cluster with the
        required command-line arguments.
      - Adds -w to the mirror start.
      
      There is still more room for improvement and speedup; this just gets us started.
  5. 11 May 2018 (14 commits)
    • updates python pycrypto to version 2.6.1 · 54f53c47
      Committed by Todd Sedano
      The pycrypto version is now identical to the version in
      python-dependencies.txt.
      Authored-by: Todd Sedano <tsedano@pivotal.io>
    • Update parse version to 1.8.2 · ff1ea1c6
      Committed by Todd Sedano
      The parse version is now identical to the version in
      python-dependencies.txt.
      Authored-by: Todd Sedano <tsedano@pivotal.io>
    • ci: fix aggregate in gate_resource_groups_start. · c5cf6968
      Committed by Ning Yu
      We should pass binary_swap_gpdb_centos6 instead of gpdb_src_binary_swap.
    • resgroup: refactor memory auditor implementation. · 86b0c56b
      Committed by Ning Yu
      We used to implement the memory auditor feature differently on master
      and 5X: on master the attribute is stored in pg_resgroup, while on 5X
      it's stored in pg_resgroupcapability.  This increases the maintenance
      effort significantly, so we refactor this feature on master to minimize
      the difference between the two branches.
      
      - Revert "resgroup: fix an access to uninitialized address."
        This reverts commit 56c20709.
      - Revert "ic: Mark binary_swap_gpdb as optional input for resgroup jobs."
        This reverts commit 9b3d0cfc.
      - Revert "resgroup: fix a boot failure when cgroup is not mounted."
        This reverts commit 4c8f28b0.
      - Revert "resgroup: backward compatibility for memory auditor"
        This reverts commit f2f86174.
      - Revert "Show memory statistics for cgroup audited resource group."
        This reverts commit d5fb628f.
      - Revert "Fix resource group test failure."
        This reverts commit 78b885ec.
      - Revert "Support cgroup memory auditor for resource group."
        This reverts commit 6b3d0f66.
      - Apply "resgroup: backward compatibility for memory auditor"
        This cherry-picks commit 23cd8b1e.
      - Apply "ic: Mark binary_swap_gpdb as optional input for resgroup jobs."
        This cherry-picks commit c86652e6.
      - Apply "resgroup: fix an access to uninitialized address."
        This cherry-picks commit b257b344.
    • resgroup: add back guc gp_resgroup_memory_policy. · f0d268e7
      Committed by Ning Yu
      This GUC was removed by accident in
      5cc0ec50.
      
      To guard against accidentally removing GUCs again in the future, we added a
      test case that checks for the existence of the resgroup GUCs.
    • Test for multiple truncates to AO/CO table in same transaction. · 2a292dde
      Committed by Ashwin Agrawal
      A truncate within the same transaction performs an unsafe truncation, hence
      this adds a test to cover that case.
    • Teach heap_truncate_one_rel to handle AO tables as well · f490c566
      Committed by Ashwin Agrawal
      Upstream commit cab9a065 introduced an optimization to truncate tables
      in scenarios that permit "unsafe" operations where we don't have to
      churn on the relfilenode for the underlying tables. AO table got a free
      ride but for the wrong reason.
      
      This patch teaches heap_truncate_one_rel() to perform the unsafe /
      optimal truncation on AO tables. This allows us to converge the callers
      back to how they look in Postgres 9.0.
      
      Specifically, we're now able to inline TruncateRelfiles() back into
      ExecuteTruncate().
      
      One caveat introduced by this patch, though, is that the "optimal" / unsafe
      truncation of an AO table can potentially leak some disk space: we are not
      performing a real file-level truncate, merely seeking back to offset 0 on the
      next write (because the aoseg auxiliary table is truncated), so the space
      after the EOF mark is wasted in some sense.
      Co-authored-by: Jesse Zhang <sbjesse@gmail.com>
    • Bumps nokogiri to 1.8.2 · afaed85f
      Committed by Todd Sedano
      Missed one reference
      
      [ci skip]
      Authored-by: Todd Sedano <tsedano@pivotal.io>
    • Bump ORCA version to 2.56.3 · 194fde76
      Committed by Omer Arap
    • Bumps nokogiri to 1.8.2 · 3acd5f40
      Committed by Todd Sedano
      [ci skip]
      Authored-by: Todd Sedano <tsedano@pivotal.io>
    • Add guards to prevent negative and duplicate partition rule order values · fd9a83b1
      Committed by Jimmy Yih
      There were scenarios where adding a new partition to a partition table would
      cause a negative or duplicate partition rule order (parruleord) value to show
      up in the pg_partition_rule catalog table.
      
      1. Negative parruleord values could show up during parruleord gap closing when
         the new partition is inserted above a parruleord gap.
      2. Negative parruleord values could show up when the max number of partitions
         for that level has been reached (32767), and there is an attempt to add a
         new partition that would have been the highest ranked partition in that
         partition's partition range.
      3. Duplicate parruleord values could show up when the max number of partitions
         for that level has been reached (32767), and there is an attempt to add a
         new partition that would have been inserted between the partition table's
         sequence of parruleord values.
      Co-authored-by: David Kimura <dkimura@pivotal.io>
    • Avoid core file generation for faultinjector generated PANICs. · d3fe56a5
      Committed by Ashwin Agrawal
      To avoid filling up disk space while running tests, and to save some time, this
      patch avoids core file generation for the intentional PANICs caused by tests.
      
      Using `setrlimit()` to achieve this is the outcome of a discussion with Jacob
      Champion and Asim Praveen.  The alternative considered was calling
      `quickdie()` instead of PANIC, but that would not provide the PANIC message
      string that tests use to validate the reason for the PANIC.
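      
      A minimal sketch of the mechanism using standard POSIX setrlimit(); the helper
      name below is ours, not the patch's:
      
      ```c
      #include <sys/resource.h>
      
      /* Disable core dumps for the current process so an intentional,
       * fault-injected PANIC doesn't leave a large core file behind. */
      static void
      disable_core_dumps(void)
      {
          struct rlimit rl;
      
          rl.rlim_cur = 0;        /* soft limit 0: no core file is written */
          rl.rlim_max = 0;        /* hard limit can always be lowered unprivileged */
      
          (void) setrlimit(RLIMIT_CORE, &rl);   /* best effort; ignore failure */
      }
      ```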
    • Remove workaround for ancient incompatibility between readline and libedit. · bc2643b1
      Committed by Tom Lane
      GNU readline defines the return value of write_history() as "zero if OK,
      else an errno code".  libedit's version of that function used to have a
      different definition (to wit, "-1 if error, else the number of lines
      written to the file").  We tried to work around that by checking whether
      errno had become nonzero, but this method has never been kosher according
      to the published API of either library.  It's reportedly completely broken
      in recent Ubuntu releases: psql bleats about "No such file or directory"
      when saving ~/.psql_history, even though the write worked fine.
      
      However, libedit has been following the readline definition since somewhere
      around 2006, so it seems all right to finally break compatibility with
      ancient libedit releases and trust that the return value is what readline
      specifies.  (I'm not sure when the various Linux distributions incorporated
      this fix, but I did find that OS X has been shipping fixed versions since
      10.5/Leopard.)
      
      If anyone is still using such an ancient libedit, they will find that psql
      complains it can't write ~/.psql_history at exit, even when the file was
      written correctly.  This is no worse than the behavior we're fixing for
      current releases.
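      
      A short sketch of the contract psql can now rely on (readline's documented
      behavior, which modern libedit also follows): check the return value itself
      and never consult errno.
      
      ```c
      #include <stdio.h>
      #include <string.h>
      #include <readline/history.h>
      
      /* write_history() returns 0 on success, otherwise an errno code. */
      static void
      save_history(const char *fname)
      {
          int rc = write_history(fname);
      
          if (rc != 0)
              fprintf(stderr, "could not save history to %s: %s\n",
                      fname, strerror(rc));
      }
      ```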
      
      Back-patch to all supported branches.
      
      (cherry picked from commit df9ebf1e)
      Fixes #4437.
    • Fix crash_recovery_redundant_dtx test. · b36210b1
      Committed by Ashwin Agrawal
      The test can fail if the PANIC happens before the insert has reached the
      intended point of finishing its commit on the master.  In that case the select
      rightfully returns zero rows instead of the expected 1 row.
  6. 10 May 2018 (1 commit)