1. 18 Feb 2017 (1 commit)
    • Moving graphics local to individual books (#1789) · 32453eb5
      Committed by David Yozie

      * adding graphics directory for adminguide
      * updating source to use local /graphics dir
      * adding missing graphics
      * adding ditamap for client docs; removing some extraneous landing pages; adding local graphics
  2. 17 Feb 2017 (14 commits)
    • Move tests on 'demoprot' from TINC to contrib, where the source is. · f8734d9d
      Committed by Heikki Linnakangas

      I had to change some of the tests that used four test rows to use more (9),
      to make them pass on a 3-segment cluster. If you only insert 4 rows into an
      external table, some segments might not receive any rows, and reading from
      the same file later will fail because the file wasn't created. I also
      changed the queries that tested that rows are distributed evenly to work
      with a different number of segments.

      Hook up the new tests to installcheck-world.
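The distribution issue described above can be sketched in C; the modulo hash and all names here are hypothetical stand-ins for Greenplum's real distribution hash, not its actual code:

```c
#include <assert.h>

/* Hypothetical stand-in for the distribution hash: a row's key picks
 * its target segment by simple modulo. */
static int target_segment(int key, int nsegments)
{
    return key % nsegments;
}

/* Count how many of the n keys land on each of 3 segments. */
static void distribute(const int *keys, int n, int counts[3])
{
    for (int i = 0; i < 3; i++)
        counts[i] = 0;
    for (int i = 0; i < n; i++)
        counts[target_segment(keys[i], 3)]++;
}
```

With only 4 rows, nothing prevents every key from hashing to the same segment, leaving the other segments with no rows (and hence no file); more rows make an empty segment far less likely.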
    • Fix test case to pass with ORCA. · 6133a493
      Committed by Heikki Linnakangas

      The explain plan with ORCA spells Seq Scan as Table Scan. Add alternative
      expected output for that.
    • Remove silly TINC tests. · c47ad451
      Committed by Heikki Linnakangas

      The main ExplainAnalyzeTestCase tests in here were running standard
      TPC-H-like test queries and checking that the EXPLAIN ANALYZE output
      contains a "Memory: " line for every plan node. That seems a bit
      excessive: looking at how those lines are printed, there is no reason to
      believe that you might have a Memory line for some nodes but not others.
      I don't think we need to test for that.

      To still have minimal coverage for the explain_memory_verbosity GUC,
      add a small test case for it to the main regression suite.

      The mpp20785 test was testing for an old bug where this query produced an
      out-of-memory error. However, the test was broken: the test schema was
      nowhere to be found, so it simply resulted in a "schema not found" error.
      Perhaps we could put it back if we could find the original schema
      somewhere, but as it is, it's useless.
    • 6fb8aebb
    • Stop printing Ivy credentials during the Coverity scan · 451bea7f
      Committed by David Sharp

      Rely on the credentials being in the environment rather than passing them
      explicitly. Add a comment to document the requirement.

      See also: 0d840bb2
    • Handle dummy rel for Left Anti Semi Join [#134705163] · c74792fa
      Committed by Bhuvnesh Chaudhary

      When we create a Left Anti Semi Join relation and the left side is a
      dummy relation, the join relation is also a dummy relation.
      Signed-off-by: Omer Arap <oarap@pivotal.io>
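The reasoning behind this fix can be sketched as follows (types and names are hypothetical, not the planner's actual structures): a "dummy" relation is one the planner has proven produces no rows, and every row a Left Anti Semi Join emits comes from its left (outer) input, so a dummy outer input makes the join dummy too:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for a planner relation. */
typedef struct Rel {
    bool is_dummy;      /* proven to return no rows? */
} Rel;

/* A LASJ outputs only outer rows that have no match on the inner side;
 * if the outer side produces no rows, neither does the join. */
static bool lasj_join_is_dummy(const Rel *outer)
{
    return outer->is_dummy;
}
```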
    • Error if AssignTransactionId is called by Reader. · 6845e104
      Committed by Ashwin Agrawal

      QE_READER and QE_ENTRY_DB_SINGLETON processes are not expected to allocate
      transaction IDs. They are just helper sub-processes of the WRITER and
      should use its xid.
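A minimal sketch of the guard this commit adds (the enum values follow the commit message; the helper name is hypothetical): only the writer may allocate an xid, and the other contexts now error out instead of silently allocating their own:

```c
#include <assert.h>
#include <stdbool.h>

/* Process roles within a gang, per the commit message. */
typedef enum {
    QE_WRITER,
    QE_READER,
    QE_ENTRY_DB_SINGLETON
} QEContext;

/* The real AssignTransactionId now raises ERROR for the non-writer
 * contexts; here we just report whether assignment is permitted. */
static bool may_assign_transaction_id(QEContext ctx)
{
    return ctx == QE_WRITER;
}
```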
    • SET default_tablespace should not assign an xid; avoid acquiring the lock. · dda8f138
      Committed by Ashwin Agrawal

      SET default_tablespace should ideally be a read-only transaction and
      shouldn't allocate an xid. Currently get_tablespace_oid() acquires a lock
      for other purposes, to protect against concurrency, and ends up assigning
      a transaction ID in order to take the lock. Setting the GUC doesn't need
      to acquire the lock, and taking it creates a problem: the multiple
      readers and the writer in a gang all end up assigning independent xids
      when this GUC is dispatched. This happens because SET runs as
      QE_AUTO_COMMIT_IMPLICIT for the writer and hence defers allocation of the
      transaction ID. The filespace ICW tests detected this case, since without
      the fix it hits the assertion
      FailedAssertion("!(!((proc->xid) != ((TransactionId) 0)))", File: "procarray.c").

      The simplest scenario that demonstrates the problem:
      - DROP TABLE a;
      - CREATE TABLE a(a INT, b INT);
      - INSERT INTO a VALUES (generate_series(1,2), generate_series(3,4)); -- generates 1 reader and 1 writer gang
      - SET default_tablespace='pg_default';
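The mechanism can be sketched like this (heavily simplified; all names are hypothetical): xids are assigned lazily, acquiring a heavyweight lock is one of the operations that forces assignment, so a lock-free lookup keeps the transaction xid-less:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for per-backend transaction state. */
typedef struct Xact {
    bool has_xid;
} Xact;

static void acquire_lock(Xact *x)
{
    x->has_xid = true;      /* taking a lock forces xid assignment */
}

/* Old behaviour: the tablespace lookup took the lock as a side effect. */
static void tablespace_lookup_locked(Xact *x)
{
    acquire_lock(x);
}

/* Fixed behaviour for the GUC path: no lock taken, no xid assigned. */
static void tablespace_lookup_lockfree(Xact *x)
{
    (void) x;
}
```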
    • Avoid checkpoint between commit record and persistent post-commit. · 987113c9
      Committed by Ashwin Agrawal

      The commit xlog record carries the information for objects that serves us
      for crash recovery until the post-commit persistent-object work is done,
      hence we cannot allow a checkpoint in between. So, use the inCommit flag
      to achieve this. That also makes it consistent with
      FinishPreparedTransaction().

      Without the fix, the following sequence leaves dangling persistent table
      entries and files on disk:

      - start a transaction to drop a table
      - RecordTransactionCommit()
      - checkpoint before AtEOXact_smgr()
      - crash/recover
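A sketch of the inCommit window (simplified; names hypothetical): a backend sets inCommit before writing the commit record and clears it after the persistent post-commit work, and the checkpointer must not proceed while any backend is inside that window:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for a backend's PGPROC entry. */
typedef struct Proc {
    bool inCommit;
} Proc;

/* The checkpointer scans all backends and holds off while any of them
 * is between its commit record and its post-commit cleanup. */
static bool checkpoint_may_proceed(const Proc *procs, int nprocs)
{
    for (int i = 0; i < nprocs; i++)
        if (procs[i].inCommit)
            return false;
    return true;
}
```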
    • docs: add Enum datatype (#1782) · 43dd7315
      Committed by Chuck Litzell

      * docs: add Enum datatype
      * docs: add a release note for the enum datatype
      * remove optimizer support statement
    • Concourse pipeline: remove commit trigger on the cs-aoco-compression job. · a574b248
      Committed by Jimmy Yih

      The cs-aoco-compression job was added with the intention of running on a
      CRON schedule instead of on every commit batch. However, I didn't notice
      that the aggregate it points to contains a commit trigger... so the
      project needs its own aggregate. This aggregate can be turned into a
      reference later if any other test jobs become CRON-triggered.
    • Make ANALYZE behavior consistent with GUCs · f50a5555
      Committed by Venkatesh Raghavan

      ANALYZE should only produce leaf-partition stats when the GUCs
      optimizer_analyze_midlevel_partition and optimizer_analyze_root_partition
      are set to off.

      This fix makes the behaviour of ANALYZE consistent with the intent of
      the GUCs.
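The intended gating can be sketched as a small decision function (names hypothetical; the real logic lives in ANALYZE's partition handling): the two GUCs gate root and mid-level partition stats, while leaf partitions are always analyzed:

```c
#include <assert.h>
#include <stdbool.h>

typedef enum { PART_ROOT, PART_MIDLEVEL, PART_LEAF } PartLevel;

/* Which partition levels ANALYZE collects stats for, given the GUCs. */
static bool should_analyze(PartLevel level,
                           bool analyze_root_guc,
                           bool analyze_midlevel_guc)
{
    if (level == PART_ROOT)
        return analyze_root_guc;
    if (level == PART_MIDLEVEL)
        return analyze_midlevel_guc;
    return true;                /* leaf partitions: always */
}
```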
    • Enable Sort operator to print sort method on explain analyze. (#1753) · 438328c2
      Committed by Karthikeyan Jambu Rajaraman

      This commit also resolves #1646.
      Port only the function `tuplesort_get_stats` in tuplesort.c from the
      Postgres commit
      https://github.com/postgres/postgres/blob/9bd27b7c9e998087f390774bd0f43916813a2847/src/backend/utils/sort/tuplesort.c
      Link to the gpdb-dev discussion: https://groups.google.com/a/greenplum.org/forum/#!topic/gpdb-dev/V-zIshnNzyE
    • Add back resultRelations in PlannedStmt · e4217b8d
      Committed by Jesse Zhang

      This commit adds back `resultRelations` to the out func for
      `PlannedStmt`. The omission affects `debug_print_plan` and `explain
      verbose` (until 8.4+, I guess).

      This seems to be an unintended omission introduced during the
      back-and-forth between 4e01c159 and 35f25e89.
  3. 16 Feb 2017 (16 commits)
  4. 15 Feb 2017 (9 commits)
    • Tighten the checks for invalid "flavor" in url_* interface functions. · f71a061a
      Committed by Heikki Linnakangas

      There is no legitimate reason for these to be called with an invalid type.
      Turn the checks into ERRORs.
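The shape of the tightened check can be sketched like this (the flavor names and dispatch function are hypothetical, not the actual url.c code): rather than quietly doing nothing for an unknown flavor, the default branch now reports an error:

```c
#include <assert.h>

/* Hypothetical flavor enum for the url_* interface. */
typedef enum { URL_CURL, URL_FILE_KIND, URL_EXECUTE } UrlFlavor;

static int dispatch_flavor(UrlFlavor flavor)
{
    switch (flavor) {
        case URL_CURL:      return 1;
        case URL_FILE_KIND: return 2;
        case URL_EXECUTE:   return 3;
        default:            return -1;  /* the real code now ereport(ERROR)s */
    }
}
```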
    • No need to destroy tuplesort state on error. · 84f5771d
      Committed by Heikki Linnakangas

      Tuplesorts take care of cleaning up any memory and temporary files on
      abort; there's no need for PG_TRY()-PG_CATCH() blocks to do that. These
      blocks were introduced a long time ago, in a patch related to workfile
      caching, but we don't do workfile caching anymore anyway.
    • Remove unused field. · 39f58fea
      Committed by Heikki Linnakangas
    • Cosmetic changes to match upstream better. · 625e412e
      Committed by Heikki Linnakangas
    • Fix optimizer output for trigger test ported from Bugbuster · d793e7f4
      Committed by Daniel Gustafsson

      Commit 74a1efd7 missed including the _optimizer expected output for the
      test ported from Bugbuster to ICW. Fix this by including the same hunks
      remembered in the planner output file.
    • Port Bugbuster trigger test to ICW · 74a1efd7
      Committed by Daniel Gustafsson

      Since there is no GPDB-specific trigger suite in ICW, and this bug was
      originally part of the Rio sprint back in the day, I added it to
      qp_misc_jiras. The suite already contained a version of the test, so I
      extended it to include the segment redistribution check as well rather
      than duplicating essentially the same test. This test relies on the rows
      being allocated to segment id 0, which should hold true for test setups
      with 2/3 segments. Should this become a problem going forward, the test
      should be rewritten to check that all rows are co-located on a single
      segment, but for now the more readable version is kept.

      It's worth noting that the expected output in the Bugbuster suite has the
      incorrect behaviour remembered, but inside ignore blocks, so this bug has
      never actually been tested for.
    • Simplify error handling while setting up CURL handle. · cfa446cb
      Committed by Heikki Linnakangas

      Use a macro to encapsulate the pattern of calling curl_easy_setopt() and
      ereporting if it fails.
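The pattern can be sketched with a stub in place of libcurl (fake_setopt() and the macro name are hypothetical, and the stub counts errors where the real macro would ereport(ERROR, ...)): each option becomes one line, and the check-and-report boilerplate lives in a single place:

```c
#include <assert.h>

typedef int CURLcode;
enum { CURLE_OK = 0 };

static int setopt_errors = 0;

/* Stub standing in for curl_easy_setopt(): a negative option fails. */
static CURLcode fake_setopt(int option, long value)
{
    (void) value;
    return option < 0 ? 1 : CURLE_OK;
}

/* One line per option; failure handling is centralized here. */
#define SET_CURL_OPTION(opt, val) \
    do { \
        if (fake_setopt((opt), (val)) != CURLE_OK) \
            setopt_errors++; \
    } while (0)
```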
    • Use Resource Owners to close external tables on abort. · df050f9b
      Committed by Heikki Linnakangas

      There was a rather ugly mechanism in fileam.c to ensure that the
      url_fclose() function was called on abort for any still-open external
      table. It relied on a global variable and assumed that only one external
      table can be open at a time, which seems like a shaky assumption.

      With this patch, url_fclose() is no longer guaranteed to be called on
      abort. Instead, make the URL_FILE implementations responsible for
      cleaning up any low-level resources themselves, like file descriptors or
      handles to external libraries (like libcurl). Use the ResourceOwner
      mechanism for that.

      For curl- and EXECUTE-style external tables, register a new ResourceOwner
      hook to close the libcurl handles, and launched subprocesses, on abort.
      For file external tables, modify fstream.c to open the underlying file
      with OpenTransientFile(), so that it gets closed automatically on abort.

      This changes the url_fopen() API so that it never returns NULL. Instead
      of returning NULL and reporting the error in the *response_code/string
      arguments, make url_fopen() always report errors directly with
      ereport(). That makes the callers simpler and allows the implementations
      to flexibly report whatever details of the error they have available.
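The ResourceOwner idea can be sketched as a cleanup registry (heavily simplified; all names are hypothetical, not the actual ResourceOwner API): each open handle registers a callback, and abort runs every registered callback, so no single global "currently open external table" variable is needed:

```c
#include <assert.h>

#define MAX_CALLBACKS 8

typedef void (*cleanup_fn)(void *arg);

static cleanup_fn callbacks[MAX_CALLBACKS];
static void *callback_args[MAX_CALLBACKS];
static int ncallbacks = 0;

/* Called when a handle (e.g. a libcurl handle) is opened. */
static void register_cleanup(cleanup_fn fn, void *arg)
{
    callbacks[ncallbacks] = fn;
    callback_args[ncallbacks] = arg;
    ncallbacks++;
}

/* Called on abort: run every registered cleanup, most recent first. */
static void abort_cleanup(void)
{
    while (ncallbacks > 0) {
        ncallbacks--;
        callbacks[ncallbacks](callback_args[ncallbacks]);
    }
}

/* Example resource: a counter standing in for closing a handle. */
static void close_handle(void *arg)
{
    ++*(int *) arg;
}
```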
    • Backport OpenTransientFile from PostgreSQL 9.3. · 8eede15f
      Committed by Heikki Linnakangas

      OpenTransientFile() opens an arbitrary file, like open(), but tracks the
      file descriptor with the resource owner mechanism, so that it is
      automatically closed on abort. We already had a similar mechanism for
      temporary files, and for files opened with fopen(); this fills the gap
      for when you want a plain file descriptor rather than a FILE *.

      I need this for the next commit, which will use it to automatically
      close files opened by fstream.c on abort.

      This is a backport of upstream commit 1f67078e. I only backported the
      addition of the new function, not the changes to the various callers to
      use it, to minimize merge conflicts.
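The contract of the backported function can be sketched as follows (simplified; the real OpenTransientFile ties fds to the current resource owner rather than a flat global list): like open(), but every fd is remembered so that abort can close whatever is still open:

```c
#include <assert.h>
#include <fcntl.h>
#include <unistd.h>

#define MAX_TRANSIENT 16

static int transient_fds[MAX_TRANSIENT];
static int ntransient = 0;

/* Open a file and remember its fd for automatic cleanup. */
static int open_transient(const char *path, int flags)
{
    int fd = open(path, flags);
    if (fd >= 0)
        transient_fds[ntransient++] = fd;
    return fd;
}

/* Run on abort: close every still-open transient fd. */
static void close_all_transient(void)
{
    while (ntransient > 0)
        close(transient_fds[--ntransient]);
}
```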