1. 26 Sep 2018 (12 commits)
  2. 25 Sep 2018 (10 commits)
    • Disable 'emergency mode' autovacuum worker. · 5ce7f06d
      Committed by Adam Berlin
      In GPDB, we only want an autovacuum worker to start once we know
      there is a database to vacuum.
      
      When we changed the default value of the `autovacuum_start_daemon` from
      `true` to `false` for GPDB, we made the behavior of the AutoVacuumLauncherMain()
      be to immediately start an autovacuum worker from the launcher and exit,
      which is called 'emergency mode'.  When the 'emergency mode' is running it is possible
      to continuously start an autovacuum worker. Within the worker, the
      PMSIGNAL_START_AUTOVAC_LAUNCHER signal is sent when a database is found that is old
      enough to be vacuumed, but because we only autovacuum non-connectable
      databases (template0) in GPDB and we do not have logic to filter out
      connectable databases in the autovacuum worker.
      
      This change allows the autovacuum launcher to do more up-front decision making
      about whether it should start an autovacuum worker, including GPDB specific rules.
      Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
    • Allow adding a motion to unique-ify the path in create_unique_path(). (#5589) · e9fe4224
      Committed by Paul Guo
      create_unique_path() can be used to convert a semi join to an inner join.
      Previously, during the semi-join refactoring in commit d4ce0921, creating
      a unique path was disabled for the case where duplicates might be on
      different QEs.
      
      In this patch we enable adding a motion to unique-ify the path, but only
      if the unique method is not UNIQUE_PATH_NOOP. We don't create a unique
      path in the NOOP case because later, during plan creation, a motion could
      be added above a unique path whose subpath is already a motion; the unique
      path node would then be ignored, leaving one motion plan node directly
      above another, which is bad. We could improve this further, but not in
      this patch.
      Co-authored-by: Alexandra Wang <lewang@pivotal.io>
      Co-authored-by: Paul Guo <paulguo@gmail.com>
    • Remove bkuprestore test · d6409042
      Committed by Daniel Gustafsson
      The bkuprestore test was imported along with the source code during the
      initial open sourcing, but has never been used and hasn't worked in a
      long time. Rather than trying to save this broken mess, let's remove it
      and start fresh with a pg_dump TAP test which is a much better way to
      test backup/restore.
      Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
      Reviewed-by: Jimmy Yih <jyih@pivotal.io>
    • Update ORCA output file for update_gp ICG test · 6d61b3fe
      Committed by Dhanashree Kashid
    • Move PXF server from apache/hawq to the new greenplum/pxf repo (#5798) · 54dee6ce
      Committed by Shivram Mani
      The PXF client in GPDB uses PXF libraries from the apache/hawq repo. These
      libraries will now be developed in a new repo, greenplum-db/pxf, which is
      in the process of being open sourced in the next few days. The PXF
      extension and the gpdb-pxf client code will remain in the gpdb repo.
      
      The following changes are included in this PR:
      
      • Transition from the old PXF namespace org.apache.hawq.pxf to
        org.greenplum.pxf (there is a separate PR in the PXF repo to address
        the package namespace refactor: greenplum-db/pxf#5)
      
      • Doc updates to reflect the new PXF repo and the new package namespace
    • Delete SIGUSR2 based fault injection logic in walreceiver. · fc008690
      Committed by Ashwin Agrawal
      Regular fault injection doesn't work for mirrors, so a fault injection
      mechanism based on the SIGUSR2 signal coupled with an on-disk file was
      coded just for testing. This is very hacky and intrusive, so the plan is
      to get rid of it. Most of the tests using this framework were found not
      to be useful, since the majority of the code involved is upstream; if
      anything still needs testing, a better alternative will be explored.
    • Remove remaining unused pieces of wal_consistency_checking. · c9dee15b
      Committed by Ashwin Agrawal
      Most of the backup-block-related modifications for providing
      wal_consistency_checking were removed as part of the 9.3 merge, mainly to
      avoid merge conflicts. The masking functions are still used by the
      gp_replica_check tool to perform checking between primaries and mirrors,
      but the online version of checking during each replay of a record was let
      go. This commit cleans up the remaining pieces which are not used. We
      will bring this back in properly working condition when we catch up to
      upstream.
    • Remove some unused and unimplemented fault types. · c2bbca41
      Committed by Ashwin Agrawal
      Remove the fault types which have no implementation, or which have one
      but don't seem usable, leaving only the working subset of faults. The
      data corruption fault, for example, seems pretty useless; even if it were
      needed, it could easily be coded for a specific use case using the skip
      fault instead of having a special fault type defined for it.
      
      The fault type "fault" is redundant with "error", so it is removed as
      well.
    • Add gpdb specific files to .gitignore · 36d33485
      Committed by Ashwin Agrawal
    • Fix volatile functions handling by ORCA · e17c6f9a
      Committed by Dhanashree Kashid
      The following commits have been cherry-picked again:
      
      • b1f543f3
      • b0359e69
      • a341621d
      
      The contrib/dblink tests were failing with ORCA after the above commits.
      The issue has now been fixed in ORCA v3.1.0, hence we are re-enabling
      these commits and bumping the ORCA version.
  3. 24 Sep 2018 (3 commits)
    • Remove FIXME, accept that we won't have this assertion anymore. · 1d254cf1
      Committed by Heikki Linnakangas
      I couldn't find an easy way to make this assertion work, with the
      "flattened" range table in 9.3. The information needed for this is zapped
      away in add_rte_to_flat_rtable(). I think we can live without this
      assertion.
    • Fix UPDATE RETURNING on distribution key columns. · 306b114b
      Committed by Heikki Linnakangas
      Updating a distribution key column is performed as a "split update", i.e.
      separate DELETE and INSERT operations, which may happen on different nodes.
      In case of RETURNING, the DELETE operation was also returning a row, and it
      was also incorrectly counted in the row count returned to the client, in
      the command tag (e.g. "UPDATE 2"). Fix, and add a regression test.
      
      Fixes https://github.com/greenplum-db/gpdb/issues/5839
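      The split-update mechanics described above can be sketched in a few
      lines. This is a purely illustrative model, not GPDB executor code: the
      function name, the modulo "hash" distribution, and the segment count are
      all invented for the example.

```python
# Illustrative model of a "split update": changing a distribution key moves
# the row, so the executor emits a DELETE on the old segment plus an INSERT on
# the new one. The bug was counting the DELETE half in the command tag too,
# reporting "UPDATE 2" for a single logical row.
def split_update(old_key, new_key, num_segments):
    old_seg = old_key % num_segments   # toy stand-in for the distribution hash
    new_seg = new_key % num_segments
    if old_seg == new_seg:
        ops = [("UPDATE", old_seg)]
    else:
        ops = [("DELETE", old_seg), ("INSERT", new_seg)]
    # Command tag must count each logical row once: skip the DELETE half.
    tag_count = sum(1 for op, _ in ops if op != "DELETE")
    return ops, tag_count

ops, count = split_update(old_key=1, new_key=2, num_segments=16)
print(ops, count)  # two physical operations, but the tag counts 1 row
```

      The fix amounts to the last step: the row count returned to the client is
      derived only from the INSERT/UPDATE half, never the DELETE half.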
    • Refactor code in ProcessRepliesIfAny() to match upstream. · 9e5b20e8
      Committed by Heikki Linnakangas
      The reason we needed the pq_getmessage() call marked with the FIXME
      comment was that we were missing the pq_getmessage() call from
      ProcessStandbyMessage(), that the corresponding upstream version, at the
      point that we're caught up in the merge, had. I believe the reason it was
      missing from ProcessStandbyMessage() was that we had earlier backported
      upstream commit cd19848bd55. That commit removed the pq_getmessage() call
      from ProcessStandbyMessage(), and added one in ProcessRepliesIfAny(),
      instead.
      
      Clarify this by changing the code to match upstream commit cd19848bd55.
      (Except that we don't have pq_startmsgread() yet, that will arrive when
      we merge the rest of commit cd19848bd55.)
  4. 23 Sep 2018 (3 commits)
  5. 22 Sep 2018 (6 commits)
    • Revert "Add DEBUG mode to the explain_memory_verbosity GUC" · 984cd3b9
      Committed by Jesse Zhang
      Commit 825ca1e3 didn't seem to work well when we hook up ORCA's memory
      system to memory accounting. We are tripping multiple asserts in
      regression tests. The reg test failures seem to suggest we are
      double-free'ing somewhere (or incorrectly accounting). Reverting for now
      to get master back to green.
      
      This reverts commit 825ca1e3.
    • Add DEBUG mode to the explain_memory_verbosity GUC · 825ca1e3
      Committed by Taylor Vesely
      The memory accounting system generates a new memory account for every
      execution node initialized in ExecInitNode. The address to these memory
      accounts is stored in the shortLivingMemoryAccountArray. If the memory
      allocated for shortLivingMemoryAccountArray is full, we will repalloc
      the array with double the number of available entries.
      
      After approximately 67000000 memory accounts have been created, growing
      the array requires an allocation of more than 1GB of memory, which throws
      an ERROR and cancels the running query.
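      The arithmetic behind that failure can be sketched as follows. This
      assumes the array stores 8-byte pointers and doubles its capacity when
      full; the entry size and the roughly-1GB single-allocation limit are
      assumptions based on standard PostgreSQL palloc behavior, not figures
      taken from the GPDB source.

```python
# Model the repalloc-doubling growth of a pointer array: capacity doubles each
# time it fills, and the first doubling whose request reaches 1GB fails.
ONE_GB = 1 << 30
ENTRY_SIZE = 8  # bytes per memory-account pointer (assumed)

capacity = 1
while capacity * ENTRY_SIZE < ONE_GB:
    capacity *= 2

# The failing doubling happens once capacity // 2 entries are already in use:
# about 67 million accounts, matching the figure in the commit message.
print(capacity // 2, capacity * ENTRY_SIZE)
```

      The sketch lands on 67108864 filled entries before the 1GB request, which
      is where the "approximately 67000000 memory accounts" figure comes from.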
      
      PL/pgSQL and SQL functions will create new executors/plan nodes that
      must be tracked by the memory accounting system. This level of detail is
      not necessary for tracking memory leaks, and creating a separate memory
      account for every executor would use a large amount of memory just to
      track the accounts themselves.
      
      Instead of tracking millions of individual memory accounts, we
      consolidate all child executor accounts into a special 'X_NestedExecutor'
      account. If explain_memory_verbosity is set to 'detailed' or below, all
      child executors are consolidated into this account.
      
      If more detail is needed for debugging, set explain_memory_verbosity to
      'debug', where, as was the previous behavior, every executor will be
      assigned its own MemoryAccountId.
      
      Originally we tried to remove nested executor accounts after they
      finished executing, but rolling those accounts over into an
      'X_NestedExecutor' account proved impracticable without the possibility
      of a future regression.
      
      If any accounts created between nested executors were not rolled over
      into an 'X_NestedExecutor' account, the record of which accounts had been
      rolled over could grow in the same way that the
      shortLivingMemoryAccountArray grows today, and would likewise become too
      large to reasonably fit in memory. Iterating through the SharedHeaders
      every time a nested executor finishes would not be very performant
      either.
      
      While we were at it, convert some of the convenience macros dealing with
      memory accounting for executor / planner node into functions, and move
      them out of memory accounting header files into the sole callers'
      compilation units.
      Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
      Co-authored-by: Ekta Khanna <ekhanna@pivotal.io>
      Co-authored-by: Adam Berlin <aberlin@pivotal.io>
      Co-authored-by: Joao Pereira <jdealmeidapereira@pivotal.io>
      Co-authored-by: Melanie Plageman <mplageman@pivotal.io>
    • Move memoryAccountId out of PlannedStmt/Plan Nodes · 7c9cc053
      Committed by Taylor Vesely
      Functions using SQL and PL/pgSQL will plan and execute arbitrary SQL
      inside a running query. The first time we initialize a plan for an SQL
      block, the memory accounting system creates a new memory account for
      each Executor/Node.  In the case that we are executing a cached plan,
      (i.e.  plancache.c) the memory accounts will have already been assigned
      in a previous execution of the plan.
      
      As a result, when explain_memory_verbosity is set to 'detail', it is not
      clear which memory account corresponds to which executor. Instead, move
      the memoryAccountId into PlanState/QueryDesc, which ensures that every
      time we initialize an executor, it is assigned a unique memoryAccountId.
      Co-authored-by: Melanie Plageman <mplageman@pivotal.io>
    • Remove FIXME in RemoveLocalLock, it's alright. · 9e57124b
      Committed by Heikki Linnakangas
      The FIXME was added to GPDB in commit f86622d9, which backported the
      local cache of resource owners attached to LOCALLOCK. I think the comment
      was added because in the upstream commit that added the cache, upstream
      didn't yet have the check guarding the pfree(). It was added
      later in upstream, too, in commit 7e6e3bdd3c, and that had already been
      backported to GPDB. So it's alright, the guard on the pfree is a good thing
      to have, and there's nothing further to do here.
    • Change pretty-printing of expressions in EXPLAIN to match upstream. · 4c54c894
      Committed by Heikki Linnakangas
      We had changed this in GPDB to print fewer parens. That's fine and dandy,
      but it hardly seems worth carrying a diff vs. upstream for this. Which
      format is better is a matter of taste: the extra parens make some
      expressions clearer, but they are unnecessarily verbose for simple
      expressions. Let's follow upstream on this.
      
      These changes were made to GPDB back in 2006, as part of backporting
      two EXPLAIN-related patches from PostgreSQL 8.2. But I didn't see any
      explanation for this particular change in output in that commit message.
      
      It's nice to match upstream, to make merging easier. However, this won't
      make much difference to that: almost all EXPLAIN plans in regression
      tests are different from upstream anyway, because GPDB needs Motion nodes
      for most queries. But every little helps.
    • Remove commented-out block of macOS makefile stuff. · c5d875b5
      Committed by Heikki Linnakangas
      I don't understand what all this was about, but people have compiled GPDB
      successfully after the merge commit, where this was commented out, so
      apparently it's not needed.
  6. 21 Sep 2018 (6 commits)
    • Remove duplicated code to handle SeqScan, AppendOnlyScan and AOCSScan. · ff8161a2
      Committed by Heikki Linnakangas
      They were all treated the same, with the SeqScan code being duplicated
      for AppendOnlyScans and AOCSScans. That is a merge hazard: if some code
      is changed for SeqScans, we would have to remember to manually update
      the other copies. Small differences in the code had already crept up,
      although given that everything worked, I guess it had no effect. Or
      only had a small effect on the computed costs.
      
      To avoid the duplication, use SeqScan for all of them. Also get rid of
      TableScan as a separate node type, and have ORCA translator also create
      SeqScans.
      
      The executor for SeqScan node can handle heap, AO and AOCS tables, because
      we're not actually using the upstream SeqScan code for it. We're using the
      GPDB code in nodeTableScan.c, and a TableScanState, rather than
      SeqScanState, as the executor node. That's how it worked before this patch
      already, what this patch changes is that we now use SeqScan *before* the
      executor phase, instead of SeqScan/AppendOnlyScan/AOCSScan/TableScan.
      
      To avoid having to change all the expected outputs for tests that use
      EXPLAIN, add code to still print the SeqScan as "Seq Scan", "Table Scan",
      "Append-only Scan" or "Append-only Columnar Scan", depending on whether
      the plan was generated by ORCA, and what kind of a table it is.
    • Move UnpackCheckPointRecord to xlogdesc.c, to avoid duplicating it. · 16343336
      Committed by Heikki Linnakangas
      As noted in the FIXME, having two copies of the function is bad. It's easy
      to avoid the duplication, if we just put it in xlogdesc.c, so that it's
      available to xlog_desc() in client programs, too.
    • Remove unused variable · 414531a6
      Committed by Daniel Gustafsson
      Fixes compiler warning on unused variable which was left over in the
      9.3 merge.
      Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
    • Avoid inconsistent type declaration · e52f9c2d
      Committed by Alvaro Herrera
      Clang 3.3 correctly complains that a variable of type enum
      MultiXactStatus cannot hold a value of -1, which makes sense.  Change
      the declared type of the variable to int instead, and apply casting as
      necessary to avoid the warning.
      
      Per notice from Andres Freund
    • Merge with PostgreSQL 9.3 (up to almost 9.3beta2) · c7649f18
      Committed by Heikki Linnakangas
      Merge with PostgreSQL, up to the point where the REL9_3_STABLE branch was
      created, and 9.4 development started on the PostgreSQL master branch. That
      is almost up to 9.3beta2.
      
      Notable upstream changes, from a GPDB point of view:
      
      * LATERAL support. Mostly works in GPDB now, although performance might not
        be very good. LATERAL subqueries, except for degenerate cases that can be
        made non-LATERAL during optimization, typically use nested loop joins.
        Unless the data distribution is the same on both sides of the join, GPDB
        needs to add Motion nodes, and cannot push down the outer query parameter
        to the inner side through the motion. That is the same problem we have
        with SubPlans and nested loop joins in general, but it happens frequently
        with LATERAL. Also, there are a couple of cases, covered by the upstream
        regression tests, where the planner currently throws an error. They have
        been disabled and marked with GPDB_93_MERGE_FIXME comments, and will need
        to be investigated later. Also, no ORCA support for LATERAL yet.
      
      * Materialized views. They have not been made to work in GPDB yet. CREATE
        MATERIALIZED VIEW works, but REFRESH MATERIALIZED VIEW does not. The
        'matviews' test has been temporarily disabled, until that's fixed. There
        is a GPDB_93_MERGE_FIXME comment about this too.
      
      * Support for background worker processes. Nothing special was done about
        them in the merge, but we could now make use of them for all the various
        GPDB-specific background processes, like the FTS prober and gpmon
        processes.
      
      * Support for writable foreign tables was introduced. I believe foreign
        tables now have all the same functionality, at a high level, as external
        tables, so we could start merging the two concepts. But this merge commit
        doesn't do anything about that yet, external tables and foreign tables
        are still two entirely different beasts.
      
      * A lot of expected output churn, thanks to a few upstream changes. We no
        longer print a NOTICE on implicitly created indexes and sequences (commit
        d7c73484), and the rules on when table aliases are printed were changed
        (commit 11e13185).
      
      * Caught up to a bunch of features that we had already backported from 9.3:
        data page checksums, numeric datatype speedups, COPY FROM/TO PROGRAM, and
        pg_upgrade as a whole.
      
      A couple of other noteworthy changes:
      
      * contrib/xlogdump utility is removed, in favor of the upstream
        contrib/pg_xlogdump utility.
      
      * Removed "idle session timeout" hook. The current implementation was badly
        broken by upstream refactoring of timeout handling (commit f34c68f0).
        We'll probably need to re-introduce it in some form, but it will look
        quite different, to make it fit more nicely with the new timeout APIs.
      Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
      Co-authored-by: Asim R P <apraveen@pivotal.io>
      Co-authored-by: David Kimura <dkimura@pivotal.io>
      Co-authored-by: Ekta Khanna <ekhanna@pivotal.io>
      Co-authored-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
      Co-authored-by: Jacob Champion <pchampion@pivotal.io>
      Co-authored-by: Jinbao Chen <jinchen@pivotal.io>
      Co-authored-by: Kalen Krempely <kkrempely@pivotal.io>
      Co-authored-by: Paul Guo <paulguo@gmail.com>
      Co-authored-by: Richard Guo <guofenglinux@gmail.com>
      Co-authored-by: Shaoqi Bai <sbai@pivotal.io>
    • Fix travis build error, old apr tarball doesn't exist anymore · 27adcf92
      Committed by Adam Lee
      ```
      $ wget http://ftp.jaist.ac.jp/pub/apache/apr/${APR}.tar.gz
      --2018-09-21 07:16:24--  http://ftp.jaist.ac.jp/pub/apache/apr/apr-1.6.3.tar.gz
      Resolving ftp.jaist.ac.jp (ftp.jaist.ac.jp)... 150.65.7.130, 2001:df0:2ed:feed::feed
      Connecting to ftp.jaist.ac.jp (ftp.jaist.ac.jp)|150.65.7.130|:80... connected.
      HTTP request sent, awaiting response... 404 Not Found
      2018-09-21 07:16:25 ERROR 404: Not Found.
      ```
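      One way to make such a download step resilient is to fall back to
      archive.apache.org, which keeps old releases after regular mirrors prune
      them. The sketch below only builds the ordered list of candidate URLs;
      the mirror base URL and version are taken from the log above, and no
      network access is attempted.

```python
# Build an ordered list of download candidates: the regular mirror first, then
# the Apache archive as a fallback for releases that mirrors have dropped.
APR = "apr-1.6.3"
BASES = [
    "http://ftp.jaist.ac.jp/pub/apache/apr",
    "https://archive.apache.org/dist/apr",
]

candidate_urls = [f"{base}/{APR}.tar.gz" for base in BASES]
for url in candidate_urls:
    print(url)  # a fetch loop (e.g. wget or urllib) would try these in order
```

      A CI script would then try each URL in order and stop at the first
      successful download, instead of failing hard when the mirror 404s.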