1. 09 Nov 2017 (2 commits)
    • docs - qualify some resource-queue specific content (part 1) (#3695) · a8a8df29
      Committed by Lisa Owen
      * docs - qualify some resource-queue specific content (part 1)
      
      * explicitly state resource groups do not use gp_vmem_protect_limit
      
      * qualify some GUCs and system tables/views as RQ- or RG-specific
      
      * qualify gp_toolkit RQ content
      
      * add RG segment memory calculation
      
      * clarify that resource group per-segment memory is based on the active primary segments on the host (see the sketch below)
      
      * remove max_resource_groups guc again
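      The per-segment calculation mentioned above is not spelled out in this log. As a rough sketch of the relationship the bullets describe (hypothetical names and numbers, not the documented Greenplum formula): the memory granted to resource groups on a host is split evenly across the active primary segments on that host.

      ```c
      /* Hypothetical sketch of per-segment resource group memory.
       * host_mem_mb is host memory; rg_mem_fraction is a
       * gp_resource_group_memory_limit-style fraction of it granted to
       * resource groups; the result is divided across the active primary
       * segments on the host, as the bullet above describes. */
      #include <stdio.h>

      static long
      resgroup_perseg_mem_mb(long host_mem_mb, double rg_mem_fraction,
                             int active_primary_segs)
      {
          if (active_primary_segs <= 0)
              return 0;
          return (long) (host_mem_mb * rg_mem_fraction) / active_primary_segs;
      }

      int
      main(void)
      {
          /* 64 GB host, 70% granted to resource groups, 8 active primaries */
          printf("per-seg resource group memory: %ld MB\n",
                 resgroup_perseg_mem_mb(64 * 1024L, 0.7, 8));
          return 0;
      }
      ```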
    • Fix cases which are unpredictable (#3797) · 195aaf54
      Committed by Adam Lee
      * Several small fixes to the tests
      
      1. Ignore two generated test files.
      2. Remove the string containing unpredictable segment numbers.
      3. Drop tables in the external_table case, so it can be run multiple times.
      
      * Fix cases which are unpredictable
      
      > commit 3bbedbe9
      > Author: Heikki Linnakangas <hlinnakangas@pivotal.io>
      > Date:   Thu Nov 2 10:04:58 2017 +0200
      >
      >     Wake up faster, if a segment returns an error.
      >     Previously, if a segment reported an error after starting up the
      >     interconnect, it would take up to 250 ms for the main thread in the QD
      >     process to wake up and poll the dispatcher connections, and to see that
      >     there was an error. Shorten that time, by waking up immediately if the
      >     QD->QE libpq socket becomes readable while we're waiting for data to
      >     arrive in a Motion node.
      >     This isn't a complete solution, because this will only wake up if one
      >     arbitrarily chosen connection becomes readable, and we still rely on
      >     polling for the others. But this greatly speeds up many common scenarios.
      >     In particular, the "qp_functions_in_select" test now runs in under 5 s
      >     on my laptop, when it took about 60 seconds before.
      
      > Before this commit, the master would only check every 250 ms if one of the
      > segments had reported an error. Now it wakes up and cancels the whole query as
      > soon as it receives an error from the first segment. That makes it more likely
      > that the other segments have not yet reached the same number of errors as what
      > is memorized in the expected output.
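      To illustrate the quoted change (a minimal sketch under assumptions, not the actual dispatcher code; the descriptor names are hypothetical): rather than waking only on a fixed 250 ms timeout to check whether a segment reported an error, the QD->QE libpq socket is included in the descriptor set that the Motion receive wait blocks on, so the wait returns as soon as that socket becomes readable.

      ```c
      /* Minimal sketch: wait for Motion data, but also wake up immediately
       * if the (hypothetical) QD->QE libpq socket becomes readable, e.g.
       * because a segment reported an error. Plain poll(2), not the real
       * Greenplum dispatcher code. */
      #include <poll.h>
      #include <stdio.h>
      #include <unistd.h>

      /* Returns 1 if motion data arrived, 2 if the libpq socket is readable,
       * 0 on timeout, -1 on poll error. */
      static int
      wait_for_motion_or_error(int motion_fd, int libpq_fd, int timeout_ms)
      {
          struct pollfd fds[2];

          fds[0].fd = motion_fd;
          fds[0].events = POLLIN;
          fds[1].fd = libpq_fd;
          fds[1].events = POLLIN;

          int rc = poll(fds, 2, timeout_ms);
          if (rc < 0)
              return -1;
          if (rc == 0)
              return 0;               /* fall back to the periodic check */
          if (fds[1].revents & POLLIN)
              return 2;               /* wake up right away on QE activity */
          return 1;
      }

      int
      main(void)
      {
          /* Demo: a pipe stands in for the libpq socket; writing to it wakes
           * the wait immediately instead of after the 250 ms timeout. */
          int pipefd[2];
          if (pipe(pipefd) != 0)
              return 1;
          write(pipefd[1], "E", 1);   /* simulate an error arriving from a QE */
          printf("wait result: %d\n",
                 wait_for_motion_or_error(STDIN_FILENO, pipefd[0], 250));
          return 0;
      }
      ```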
      
      These two cases check:
      
      1. When selecting from a CTE fails because one of the CTE's external tables
      reached the error limit, how many errors were recorded for the other external
      table of the CTE, which had not reached the limit.
      
      2. When selecting from an external table with two locations, each mapped to a
      segment, and one segment reached the reject limit, whether the other segment
      reached the same count.
      
      We could not predict these two results without special test files, even
      before that commit. This commit removes the CTE case and, in case
      readable_query26, checks only that at least one segment failed.
  2. 08 Nov 2017 (38 commits)