1. 16 Dec 2017 (7 commits)
  2. 14 Dec 2017 (7 commits)
    • Support gptransfer only schema of databases or tables · 96c17141
      Tingfang Bao committed
      This makes gptransfer able to transfer only the schema of databases or
      tables, as in "--schema-only -d foo" or "--schema-only -t bar.public.t1".
      It could actually do that before, but the success flag was never set.
      Signed-off-by: Adam Lee <ali@pivotal.io>
      (cherry picked from commit d5852e91)
      96c17141
    • Fix use-after-free in PyGreSQL · e4aab1ce
      Peifeng Qiu committed
      The pg_query function is the underlying workhorse for db.query in
      Python. For INSERT queries, it returns a string containing the
      number of rows successfully inserted.
      
      PQcmdTuples() parses a PGresult returned by PQexec(); if it is an insert
      count result, it returns a pointer to the count. However, that pointer
      points into the internal buffer of the PGresult, so it should not be
      used after PQclear(), even though most of the time its content remains
      accessible and unchanged. PyString_FromString() makes a copy of the
      string, so moving PQclear() to after PyString_FromString() is safe (see
      the usage sketch after this entry).
      
      This fixes the problem of gpload sometimes getting an unprintable
      insert count.
      e4aab1ce
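      For context, a minimal PyGreSQL usage sketch of the code path this fix
      touches; the connection parameters and table are made up, and the return
      value follows the contract described in the commit message above:
      
      ```
      import pg  # PyGreSQL "classic" interface; db.query() is backed by pg_query in C
      
      # Connection parameters are illustrative only.
      db = pg.DB(dbname="postgres")
      db.query("CREATE TEMP TABLE pyg_demo (i int)")
      
      # A multi-row INSERT returns the affected-row count as a string. Before
      # this fix, that string was read out of a freed PGresult buffer and could
      # occasionally come back as unprintable garbage, which is what gpload hit.
      count = db.query("INSERT INTO pyg_demo SELECT * FROM generate_series(1, 5)")
      print(count)  # expected: '5'
      ```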
    • Set the max size of join order threshold to 12 · 7992be03
      Bhuvnesh Chaudhary committed
      Signed-off-by: Haisheng Yuan <hyuan@pivotal.io>
      7992be03
    • Correct typo · bf0c7c77
      Shreedhar Hardikar committed
      bf0c7c77
    • Fix storage test failures caused by 916f460f · bf182cee
      Shreedhar Hardikar committed
      The default value of Gp_role is GP_ROLE_DISPATCH, which means auxiliary
      processes inherit this value. FileRep does the same, but also executes
      queries using SPI on the segment, so Gp_role == GP_ROLE_DISPATCH is not
      a sufficient check for the master QD.
      
      So, bring back the check on GpIdentity.
      
      Author: Asim R P <apraveen@pivotal.io>
      Author: Shreedhar Hardikar <shardikar@pivotal.io>
      bf182cee
    • Ensure that ORCA is not called on any process other than the master QD · 8132c8df
      Shreedhar Hardikar committed
      We don't want to use the optimizer to plan queries in SQL, PL/pgSQL,
      etc. functions when that planning happens on the segments.
      
      ORCA excels in complex queries, most of which access distributed
      tables. We can't run such queries from the segment slices anyway,
      because they would require dispatching a query within another query,
      which is not allowed in GPDB. Note that this restriction also applies
      to non-QD master slices. Furthermore, ORCA doesn't currently support
      pl/* statements (relevant when they are planned on the segments).
      
      For these reasons, restrict ORCA to the master QD processes only.
      
      Also revert commit d79a2c7f ("Fix pipeline failures caused by 0dfd0ebc.")
      and separate out the gporca fault injector tests into the newly added
      gporca_faults.sql, so that the rest can run in a parallel group.
      Signed-off-by: Jesse Zhang <sbjesse@gmail.com>
      8132c8df
    • df7aa223
  3. 13 Dec 2017 (3 commits)
  4. 12 Dec 2017 (11 commits)
    • Bump Orca version to 2.51.3 · 07e13721
      Haisheng Yuan committed
      07e13721
    • Fix gpfdist regression bug. (#4106) · 18866480
      Jialun committed
      Update grep keywords to filter out the unrelated program.
      18866480
    • behave: allow tests to run at midnight without flaking · 62d52206
      C.J. Jameson committed
      These two tests (gpcheckcat and gptransfer) used a step that looked for
      a logfile with a date in the name. If that logfile existed at 11:59 PM
      one day and the test looked for it at 12:00 AM the next day, it
      "wouldn't be there":
      
      `Exception: Log "/home/gpadmin/gpAdminLogs/gpcheckcat_20171122.log" was not created`
      
      Refactor the tests so that assertions about the default gpAdminLogs
      directory are as simple as possible, and emphasize the gptransfer tests
      of the user option to specify a log directory (the fragile pattern is
      illustrated after this entry).
      
      Author: C.J. Jameson <cjameson@pivotal.io>
      Author: Shoaib Lari <slari@pivotal.io>
      (cherry picked from commit 1de55903)
      62d52206
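      A rough illustration of the race and the looser check (not the actual
      behave step code; the path and filename pattern mirror the log message
      above):
      
      ```
      import glob
      import os
      from datetime import date
      
      def strict_log_check(log_dir="/home/gpadmin/gpAdminLogs"):
          # Fragile: builds *today's* date-stamped filename. If the tool wrote
          # its log at 11:59 PM and this runs at 12:00 AM, the file "isn't there".
          name = "gpcheckcat_%s.log" % date.today().strftime("%Y%m%d")
          return os.path.exists(os.path.join(log_dir, name))
      
      def loose_log_check(log_dir="/home/gpadmin/gpAdminLogs"):
          # Looser: any gpcheckcat log in the directory satisfies the step.
          return bool(glob.glob(os.path.join(log_dir, "gpcheckcat_*.log")))
      ```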
    • gpstop: Document --host flag. · 0fc332c6
      C.J. Jameson committed
      Author: C.J. Jameson <cjameson@pivotal.io>
      Author: Shoaib Lari <slari@pivotal.io>
      (cherry picked from commit 5f6c036e)
      0fc332c6
    • gpstop: Recognize downed segments before exiting · 73a27f6b
      Shoaib Lari committed
      Run a distributed query across all segments to force FTS to detect and
      mark all downed segments (a sketch of such a probe follows this entry).
      
      Author: Nadeem Ghani <nghani@pivotal.io>
      Author: Marbin Tan <mtan@pivotal.io>
      Author: Shoaib Lari <slari@pivotal.io>
      Author: C.J. Jameson <cjameson@pivotal.io>
      (cherry picked from commit 8d2a56a4)
      73a27f6b
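      The commit doesn't say which query gpstop dispatches; one common way to
      touch every primary segment (a hedged sketch, reusing PyGreSQL from the
      entry above) is a scan through gp_dist_random():
      
      ```
      import pg  # PyGreSQL; connection parameters are illustrative
      
      db = pg.DB(dbname="postgres")
      # gp_dist_random('gp_id') fans a scan of the small gp_id catalog table out
      # to every primary segment, so a downed segment surfaces as a dispatch
      # error and prompts FTS to mark it down in the segment configuration.
      db.query("SELECT gp_segment_id FROM gp_dist_random('gp_id')")
      db.close()
      ```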
    • gpstop: prevent `gpstop --host` if system doesn't have mirrors · 4549e7c3
      C.J. Jameson committed
      If we did stop all primaries on that host, the cluster would be down
      anyway. Best to just do a full-cluster gpstop, then bring it all back up
      together.
      
      (cherry picked from commit 4f96c774)
      4549e7c3
    • gpstop: prevent `gpstop --host` if host has a master or standby on it · 7bf8a93a
      C.J. Jameson committed
      The underlying pylib code identifies the master and standby by content id.
      
      `gpstop --host localhost` will fail differently: it will simply not find
      the host in the set of hostnames (unless that's how you configured
      things at first)
      
      (cherry picked from commit 2f1d9d56)
      7bf8a93a
    • gpstop: Disallow flags incompatible with the --host flag. · dd3d46cb
      Shoaib Lari committed
      For interaction with `-r`: since we don't stop the master with --host, a
      restart would fail anyway, so we disallow the combination from the start.
      
      For interaction with `-m`: if someone is using `--host` and then decides
      they want to stop the master but not the segments on a particular host,
      they should just do a full gpstop and then bring everything back up. If
      someone is using `-m` and thinks they need to specify the host for the
      `-m` flag, they don't need to; the tool infers it from the system and
      shell state.
      
      Author: C.J. Jameson <cjameson@pivotal.io>
      Author: Shoaib Lari <slari@pivotal.io>
      Author: Marbin Tan <mtan@pivotal.io>
      (cherry picked from commit 70f15158)
      dd3d46cb
    • gpstop: prevent `gpstop --host` if both primary and mirror are there · d6dd8c3f
      C.J. Jameson committed
      Signed-off-by: Marbin Tan <mtan@pivotal.io>
      (cherry picked from commit 990b7518)
      d6dd8c3f
    • gpstop: add ability to stop mirrors and primaries on a specific host · 1e21c899
      Marbin Tan committed
      Add a flag `--host` which stops all segments on the specified host: an
      easy way to take down a set of segments without having to ssh in and
      kill processes.
      
      Refuse to stop a specific host if any primary isn't synced. (The checks
      added across these gpstop commits are condensed in the sketch after this
      entry.)
      Signed-off-by: Nadeem Ghani <nghani@pivotal.io>
      Signed-off-by: C.J. Jameson <cjameson@pivotal.io>
      (cherry picked from commit fa2ef5e7)
      1e21c899
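      A hypothetical condensation of the --host guard rails described across
      the gpstop commits above; the function and method names here are
      illustrative, not gpstop's actual code:
      
      ```
      def validate_stop_host(options, gparray, host_segs):
          """Refuse --host in the situations the commits above describe."""
          if options.restart or options.master_only:
              raise Exception("--host cannot be combined with -r or -m")
          if not gparray.hasMirrors:
              raise Exception("refusing --host: the cluster has no mirrors")
          if any(seg.isSegmentMaster() or seg.isSegmentStandby()
                 for seg in host_segs):
              raise Exception("refusing --host: master or standby is on this host")
          contents = [seg.getSegmentContentId() for seg in host_segs]
          if len(contents) != len(set(contents)):
              raise Exception("refusing --host: a primary and its mirror share this host")
          if any(seg.isSegmentPrimary() and not seg.isSegmentInSync()
                 for seg in host_segs):
              raise Exception("refusing --host: a primary on this host is not synced")
      ```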
    • gparray: fix constructor setup · 5c7550ce
      Nadeem Ghani committed
      ~~This class was handling the hasMirrors field incorrectly. Now we're
      correctly setting the hasMirrors flag.~~
      
      This change broke the gpaddmirrors test, so we fixed it.
      
      (cherry-picked from 5dfb9814)
      
      Edited this upon backporting to 5X_STABLE to use the fault strategy to
      determine whether the cluster has mirrors. This should now provide a
      fault strategy attribute on `gpArray`s that weren't initialized with
      initFromCatalog().
      5c7550ce
  5. 10 Dec 2017 (2 commits)
    • Fix expected output for the temp toast schema changes. · bbab9fd8
      Heikki Linnakangas committed
      gpcheckcat now knows how to drop orphaned temp toast schemas, in addition
      to temp schemas. Fix the expected output, since the test now reports
      dropping 2 orphaned schemas (the temp schema and the temp toast schema)
      instead of 1.
      bbab9fd8
    • Teach gpcheckcat to drop temp toast schemas along with temp schemas. · 8702b203
      Heikki Linnakangas committed
      And allow DROP SCHEMA on temp toast schemas, like we allow dropping temp
      schemas.
      
      These are fixups for commit 2619b329, in reaction to a test failure in
      the "gpcheckcat should drop leaked schemas" scenario in MU_gpcheckcat test
      suite. The test opens a session, creates a temp table in it, and then kills
      the server. That leaks the temp schema in the QD node, because the backend
      dies abruptly, but the QE backends exit cleanly when the QD-QE connection
      is lost, and they do remove the temp schema. That creates an inconsistency
      between the QD and QE nodes. gpcheckcat knows about that problem, and
      removes any orphaned temp schemas; that's what the test tests.
      
      However, gpcheckcat didn't know about temp toast schemas. Until commit
      2619b329, the temp toast schemas were always leaked, whether the backend
      exited cleanly or not, so there was no inconsistency between the QD and QE
      nodes. After that commit, the temp toast schema behaves the same as the
      temp schema, and the test started failing. The fix is straightforward:
      teach gpcheckcat to clean up temp toast schemas, just like it cleans up
      temp schemas (a rough way to list such leftovers is sketched after this
      entry).
      
      Backpatch to 5X_STABLE, like commit 2619b329.
      8702b203
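      As a rough illustration (not gpcheckcat's actual catalog check), a query
      along these lines lists the temp-related schemas whose orphans gpcheckcat
      can now drop; the broad LIKE pattern is an assumption:
      
      ```
      import pg  # PyGreSQL, as in the db.query entry above
      
      db = pg.DB(dbname="postgres")
      # Catches pg_temp_<n> schemas and their toast counterparts; anything here
      # that outlives its session is the kind of orphan gpcheckcat cleans up.
      leaked = db.query(
          "SELECT nspname FROM pg_namespace WHERE nspname LIKE 'pg%temp%'"
      ).getresult()
      print(leaked)
      db.close()
      ```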
  6. 09 Dec 2017 (6 commits)
    • Remove flaky negative walrep test. · 705ac7cc
      Ashwin Agrawal committed
      The scenario intended to be tested: if the primary (walsender) exits, the
      walreceiver must exit as well. The walreceiver should then come up, try
      to reconnect, and keep failing until the primary is brought back. Once
      the primary is up, it should be able to connect.
      
      The way this is coded today is extremely hacky: it simulates the scenario
      via fault injection by creating the file `wal_rcv_test` and providing an
      option for where to suspend. Then signal.SIGUSR2 is sent to notify the
      standby to inject the fault. There is no mechanism to check whether the
      fault was actually hit, so the test tends to be a little unreliable, and
      large sleeps are used to compensate.
      
      In addition, the file wal_rcv.pid is created in the code just for testing
      purposes, to validate some behaviors.
      
      Hence, remove this flaky, time-consuming test.
      
      (cherry picked from commit ff9c80cc)
      705ac7cc
    • Register SIGUSR1 handler for startup process. · a88112e3
      Ashwin Agrawal committed
      It seems this was missed while back-porting WAL replication, and because
      of it the latch was never signaled to the startup process. Instead,
      replay always happened on the 5-second latch timeout, never because the
      walreceiver signaled WAL arrival.
      
      Author: Xin Zhang <xzhang@pivotal.io>
      Author: Ashwin Agrawal <aagrawal@pivotal.io>
      (cherry picked from commit 42a7dbe7)
      a88112e3
    • Clean up temp toast schema on backend exit. · 2359e057
      Heikki Linnakangas committed
      When a backend exits normally, the "pg_temp_<sessionid>" schema is dropped.
      In GPDB 5, with the 8.3 merge, there is now a "pg_temp_toast_<sessionid>"
      schema in addition to the temp schema, but it was not dropped. As a result,
      you would end up with a lot of unused pg_temp_toast_* schemas. To fix,
      also drop the temp toast schema at backend exit.
      
      We will still leak temp schemas, and temp toast schemas, if a backend exits
      abnormally, or if the server crashes. That's not a new issue, but we should
      probably do something about that in the future, too.
      
      Fixes github issue #4061. Backport to 5X_STABLE, where the toast temp
      namespaces were introduced.
      2359e057
    • wrong err msg when file doesn't exist by restore · f8c1608a
      Faisal Ali committed
      Currently, when a backup doesn't exist, gpdbrestore throws this error:
      
      ```
      raise ExceptionNoStackTraceNeeded("No dump file on %s in %s" % (seg.getSegmentHostName(), seg.getSegmentDataDirectory()))
      ```
      
      There is an issue with the error message: when the user provides their
      own directory with the "-u" option and the dump file doesn't exist
      during the run, the directory shown is still the default directory,
      which is misleading. For example:
      
      ```
      [gpadmin@gpdb 20171201]$ gpdbrestore -t 20171201142523 -e -u /home/gpadmin/testbackup/
      [.....]
      Continue with Greenplum restore Yy|Nn (default=N):
      > y
      20171201:19:35:43:020121 gpdbrestore:gpdb:gpadmin-[ERROR]:-gpdbrestore error: No dump file on gpdb in /data/primary/gp_4.3.18.0_201712011453470
      ```
      
      This fix reports the correct location where the dump file could not be
      found (a sketch of the corrected message follows this entry).
      f8c1608a
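      A sketch of the corrected message construction, assuming the -u directory
      is available to the caller; the names here are illustrative, not the
      actual gpdbrestore code:
      
      ```
      def no_dump_file_message(seg, user_backup_dir=None):
          # Report the directory that was actually searched: the -u/--backup-dir
          # location when one was given, otherwise the segment data directory.
          search_dir = user_backup_dir or seg.getSegmentDataDirectory()
          return "No dump file on %s in %s" % (seg.getSegmentHostName(), search_dir)
      ```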
    • gpexpand: test parallelism more loosely to reduce flakes · 78267127
      Shoaib Lari committed
      Squash of 2 commits:
      gpexpand: Simplify test for parallelism.
      
          We only need to check for parallelism until the number of tables
          redistributed reaches the specified number of tables to be
          redistributed simultaneously.
      
          Author: Marbin Tan <mtan@pivotal.io>
          Author: Shoaib Lari <slari@pivotal.io>
      
      gpexpand: test parallelism more loosely to reduce flakes
      
          The parallelism tests were too stringent, wanting to observe maximum parallelism
          in order to go green. An example failure we saw was as follows, when 3 threads
          had been "In Progress" together, but not all four (because one thread finished
          really quickly):
      
          `Worker GpExpandTests.check_number_of_parallel_tables_expanded_case_1 failed
          execution: AssertionError: The specified value was never reached.`
      
          Now, we simply assert that some parallelism is observed at some point
          (two or more "In Progress" at a time); a sketch of such a check
          follows this entry. If a test run "flakes" such that there never were
          two "In Progress" at a time, that would be indistinguishable from
          serial execution, so the test would still fail.
      
          Author: C.J. Jameson <cjameson@pivotal.io>
          Author: Shoaib Lari <slari@pivotal.io>
      78267127
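      A minimal sketch of the looser assertion (illustrative only; the real
      check lives in the behave test utilities):
      
      ```
      def observed_parallelism(in_progress_samples, minimum=2):
          """True if at least `minimum` tables were ever 'In Progress' at once.
      
          in_progress_samples: counts of concurrently redistributing tables,
          sampled over time while gpexpand runs.
          """
          return any(count >= minimum for count in in_progress_samples)
      
      # One sample saw three tables redistributing at once, so this passes even
      # though full four-way parallelism was never observed.
      assert observed_parallelism([1, 3, 2, 0])
      ```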
    • Add tests for the effect of --extra-version-suffix · 649ceaab
      Goutam Tadi committed
      - the suffix should be appended both to psql's 'select version()' output
        and to 'gpssh --version'
      649ceaab
  7. 08 Dec 2017 (2 commits)
  8. 07 Dec 2017 (2 commits)