- 09 Jun 2017, 11 commits
-
-
Submitted by Pengzhou Tang
A test output in extprotocol is nondeterministic: every segment reports an error, and you cannot tell which segment will report it first. Normally segment 0 reports it first, but we cannot count on that.
-
Submitted by Haisheng Yuan
Place sliceIndex, rootIndex, parentIndex and children near each other so the slice debug output is easier to read.
-
Submitted by Pengzhou Tang
Formerly, GPDB performed dispatch/interconnect cleanup at the executor level, meaning that once an error occurred inside the executor it was caught and dispatch/interconnect were cleaned up. The problem is that if an error occurs after an executor has started but before it runs, dispatch/interconnect never gets cleaned up. One consequence is that outbound UDP interconnect packets still believe the interconnect is active and access memory that has already been freed. This commit adds a few cleanup points at the portal level, a call level above the executor, to cover such cases. mppExecutorCleanup() is reentrant, so it is safe to perform the cleanup check at both levels.
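The reentrancy this message relies on can be sketched with a guard flag: the real cleanup runs once, and a second call from another level is a harmless no-op. This is an illustrative sketch, not GPDB's actual mppExecutorCleanup() implementation; all names here are hypothetical.

```c
/* Hypothetical sketch of a reentrant cleanup routine, safe to call from
 * both the portal level and the executor level. Not GPDB's real code. */
#include <stdbool.h>

static bool cleanup_done = false;
static int  cleanup_calls = 0;   /* counts how often real cleanup actually ran */

void mpp_cleanup_sketch(void)
{
    if (cleanup_done)
        return;                  /* second caller is a no-op */
    cleanup_done = true;
    cleanup_calls++;             /* stand-in for tearing down dispatch/interconnect */
}

int cleanup_call_count(void)
{
    return cleanup_calls;
}
```

Because the second call returns immediately, adding cleanup points at a higher level cannot double-free anything already released at the lower level.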
-
Submitted by Pengzhou Tang
This statistics info is convenient for debugging and for test cases.
-
Submitted by Pengzhou Tang
This reverts commit 9bea5037.
-
Submitted by Pengzhou Tang
Formerly, GPDB performed dispatch/interconnect cleanup at the executor level, meaning that once an error occurred inside the executor it was caught and dispatch/interconnect were cleaned up. The problem is that if an error occurs after an executor has started but before it runs, dispatch/interconnect never gets cleaned up. One consequence is that outbound UDP interconnect packets still believe the interconnect is active and access memory that has already been freed. This commit adds a few cleanup points at the portal level, a call level above the executor, to cover such cases. mppExecutorCleanup() is reentrant, so it is safe to perform the cleanup check at both levels.
-
Submitted by Daniel Gustafsson
Since Command creates a short-lived SSH session, we observe the PID given to a throw-away remote process and assume that this PID is unused and available on the remote in the near future: it is no longer associated with a running process and will not be recycled before the tests have finished. Checking ahead of time would introduce a time-of-check-to-time-of-use race, since the PID might have been reallocated by the operating system by the time the test uses it.
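The race described here can be illustrated with the standard `kill(pid, 0)` liveness probe: it answers only at the moment of the call, so acting on the answer later is inherently racy. The helper name below is invented for illustration.

```c
/* Point-in-time PID liveness check. kill() with signal 0 performs only the
 * existence/permission check and delivers no signal. By the time the caller
 * acts on the result, the OS may have reused the PID: that is the
 * time-of-check-to-time-of-use race the commit message describes. */
#include <errno.h>
#include <signal.h>
#include <stdbool.h>
#include <sys/types.h>
#include <unistd.h>

bool pid_seems_free(pid_t pid)
{
    return kill(pid, 0) == -1 && errno == ESRCH;
}
```

A currently running process (such as the caller itself) is reported as not free, but a "free" answer is stale the instant it is returned.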
-
Submitted by Larry Hamel
Signed-off-by: Tushar Dadlani <tdadlani@pivotal.io>
-
Submitted by Tushar Dadlani
Signed-off-by: Larry Hamel <lhamel@pivotal.io>
-
Submitted by Larry Hamel
Signed-off-by: Tushar Dadlani <tdadlani@pivotal.io>
-
- 08 Jun 2017, 12 commits
-
-
Submitted by Chuck Litzell
* docs: correct the gpperfmon_install command syntax
* Correct gpperfmon_install command syntax in help text
* Remove references to SHA-256-FIPS
* gpperfmon_install docs: clarify options syntax
Signed-off-by: C.J. Jameson <cjameson@pivotal.io>
-
Submitted by Pengzhou Tang
In Orca, a motion node used to assign its child node FLOW_SINGLETON if the child has only one sender. This works fine on a multi-segment cluster, but on a single-segment cluster all gangs that are supposed to be GANGTYPE_PRIMARY_READER/WRITER become GANGTYPE_ENTRYDB_READER or GANGTYPE_SINGLETON_READER, so no primary reader or writer gangs are created. A typical resulting bug is that singleton/entry-db readers get stuck obtaining the shared snapshot that should have been created by a writer gang. This commit fixes #2121: it takes the one-segment cluster into consideration and assigns the flow type more accurately.
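The shape of the fix can be sketched as a decision that considers the cluster size, not just the sender count. The enum values and helper below are simplified stand-ins invented for illustration, not Orca's real types or logic.

```c
/* Hypothetical sketch: a single sender only implies a singleton flow when
 * the cluster actually has more than one segment. On a one-segment cluster
 * the flow stays per-segment, so a primary reader/writer gang is created. */
typedef enum { FLOW_SINGLETON, FLOW_PARTITIONED } FlowType;

FlowType choose_flow(int num_senders, int num_segments)
{
    if (num_senders == 1 && num_segments > 1)
        return FLOW_SINGLETON;
    return FLOW_PARTITIONED;
}
```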
-
Submitted by Pengzhou Tang
-
Submitted by Andreas Scherbaum
* Make the 10-Gb Ethernet requirement optional
-
Submitted by Asim R P
The original deadlock is caused by a reader waiting on a lock that is already held by the writer of the same MPP session, while another session waits for a conflicting mode on the same lock. The fix is to skip the waitMask conflict check for readers (i.e., when MyProc differs from lockHolderProcPtr). A detailed discussion of the deadlock is at: https://groups.google.com/a/greenplum.org/d/msg/gpdb-dev/OS1-ODIK0P4/ZIzayBbMBwAJ
Two isolation2 tests are added: one validates that the deadlock no longer occurs, and another ensures that granting locks to readers does not starve existing waiters.
Signed-off-by: Xin Zhang <xzhang@pivotal.io>
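The rule can be sketched as: a reader backend (whose MyProc differs from the session's lockHolderProcPtr) skips the waitMask conflict check, because the writer of its MPP session already holds the lock on the session's behalf. The struct and helper below are minimal hypothetical stand-ins, not the real lock-manager code.

```c
/* Hypothetical sketch of the reader exemption described above. */
#include <stdbool.h>

typedef struct Proc { int pid; } Proc;

/* Returns true when the backend must honor the waitMask conflict check:
 * only the writer (my_proc == lock_holder) checks; a reader of the same
 * MPP session skips it, since the lock is effectively already granted
 * to its session via the writer. */
bool must_check_wait_mask(const Proc *my_proc, const Proc *lock_holder)
{
    bool is_reader = (my_proc != lock_holder);
    return !is_reader;
}
```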
-
Submitted by Daniel Gustafsson
The xmloption support for preserving whitespace was enabled in a previous commit; bump the feature compliance table to reflect the status of the code.
-
Submitted by C.J. Jameson
- The only difference is the `#if 0` / `#endif` around the whole file, so that it is clear to developer utilities that these files aren't compiled in.
-
Submitted by Todd Sedano
Partially addresses https://github.com/greenplum-db/gpdb/issues/2422
Signed-off-by: Todd Sedano <professor@gmail.com>
Signed-off-by: Chris Hajas <chajas@pivotal.io>
-
Submitted by Andreas Scherbaum
* Make the note about millions of files more explicit
-
Submitted by Nadeem Ghani
gpmondb.c CID 160656: Resource leak in gpdb_check_partitions(). An early return meant that the connection was not always closed. Hence, calculate the return value, close the connection, then return.
gpmon_agg.c CID 160655: Resource leak in write_dbmetrics(). Files were not being closed if dbmetrics were too long. Hence, fix the case where an early return means the files don't get closed.
gpmondb.c CID 160654: Resource leak in gpdb_exec(). The pconn passed in was not always getting a PGconn assigned to it. But returning a failed connection and letting the caller of gpdb_exec clean it up is the usage pattern. Hence, always make the conn available before returning.
gpmmon.c CID 160653: removed as part of the remove_iterators work.
gpmon_agg.c CID 160652: duplicate of CID 160655.
Signed-off-by: Larry Hamel <lhamel@pivotal.io>
Signed-off-by: C.J. Jameson <cjameson@pivotal.io>
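These leak fixes share one shape: compute the result, release the resource, then return, instead of returning early past the cleanup. A generic sketch of the pattern; the function is illustrative, not gpperfmon code.

```c
/* Generic single-exit cleanup pattern: every path that opens the file
 * also closes it before returning, so no early return can leak the FILE. */
#include <stdio.h>

int count_lines(const char *path)
{
    FILE *fp = fopen(path, "r");
    if (fp == NULL)
        return -1;               /* nothing opened yet, nothing to release */

    int lines = 0, c;
    while ((c = fgetc(fp)) != EOF)
        if (c == '\n')
            lines++;

    fclose(fp);                  /* release before the single success exit */
    return lines;
}
```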
-
Submitted by Larry Hamel
Signed-off-by: Nadeem Ghani <nghani@pivotal.io>
-
Submitted by Nadeem Ghani
Signed-off-by: Larry Hamel <lhamel@pivotal.io>
-
- 07 Jun 2017, 17 commits
-
-
Submitted by Andreas Scherbaum
* Update HA documentation: mirrors must reside on different storage systems too; master fail-over must be triggered externally; 10-Gb Ethernet is optional.
-
Submitted by Pengzhou Tang
-
Submitted by Pengzhou Tang
Under the former TCP interconnect, after declaring a cursor for an invalid query such as "declare c1 cursor for select c1/0 from foo", the following FETCH command could still fetch an empty row instead of an error. This is incorrect and inconsistent with the UDP interconnect. The root cause is that senders in the TCP interconnect always send an EOF message to their peers regardless of errors on the segments, so the receivers cannot tell an EOF from an error. The solution in this commit is to not send the EOF to the peers when a sender encounters an error, and to let the QD check the status of all segments when it cannot read data from the interconnect for a long time.
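The sender-side rule can be sketched as: emit the EOF marker only on a clean end of stream, send nothing on error, and let the QD time out and probe segment status. The enum and helper below are invented for illustration, not the interconnect's real protocol types.

```c
/* Hypothetical sketch: what a sender emits when its stream ends.
 * A clean end yields an explicit EOF marker; an errored stream stays
 * silent, so the receiver never confuses a failure with an empty result. */
typedef enum { MSG_EOF, MSG_NONE } FinalMsg;

FinalMsg final_message(int had_error)
{
    /* On error, send nothing; the QD discovers the segment's state when
     * the interconnect stays silent for too long. */
    return had_error ? MSG_NONE : MSG_EOF;
}
```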
-
Submitted by Pengzhou Tang
This commit restores the TCP interconnect and fixes some hang issues:
* Restore the TCP interconnect code.
* Add a GUC, gp_interconnect_tcp_listener_backlog, to control the backlog parameter of the listen call.
* Use memmove instead of memcpy because the memory areas overlap.
* Call checkForCancelFromQD() for the TCP interconnect if there is no data for a while; this avoids the QD getting stuck.
* Revert the cancelUnfinished-related modification in 8d251945, otherwise some queries get stuck.
* Move and rename the fault injector "cursor_qe_reader_after_snapshot" to make test cases pass under the TCP interconnect.
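On the memmove point: memcpy has undefined behavior when the source and destination regions overlap, while memmove is specified to copy correctly in that case, as if through a temporary buffer. A small self-contained example:

```c
/* Overlapping copy: shift the first n bytes of buf left by one position.
 * Source (buf + 1) and destination (buf) overlap, so memmove is required;
 * memcpy here would be undefined behavior. */
#include <string.h>

void shift_left_one(char *buf, size_t n)
{
    memmove(buf, buf + 1, n - 1);
    buf[n - 1] = '\0';
}
```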
-
Submitted by Pengzhou Tang
* Change the default level of gp_log_gang to off.
* Log the query plan size at level TERSE; it is useful for debugging.
-
Submitted by Pengzhou Tang
It is insecure to run privileged containers on public pipelines, so disable the resource group jobs until a new privileged-VM mechanism can be provided.
-
Submitted by David Yozie
-
Submitted by Nadeem Ghani
Signed-off-by: Tushar Dadlani <tdadlani@pivotal.io>
-
Submitted by Nadeem Ghani
Add tests to show that the minimal functionality of gpcheck works. gpcheck checks for specific system requirements, so we had to apply system configuration changes to control files at test runtime, prior to behave. We explored running some tests as root to test more realistically how gpcheck is used, but don't have that at this time. Also add back gpcheck_hostdump, which is called by gpcheck.
Signed-off-by: Tushar Dadlani <tdadlani@pivotal.io>
-
Submitted by Nadeem Ghani
Signed-off-by: Melanie Plageman <mplageman@pivotal.io>
-
Submitted by Nadeem Ghani
- Had been failing for lack of a help string; we can come back and provide more detail if people are interested.
Signed-off-by: C.J. Jameson <cjameson@pivotal.io>
-
Submitted by Nadeem Ghani
Signed-off-by: C.J. Jameson <cjameson@pivotal.io>
-
Submitted by Larry Hamel
Signed-off-by: Nadeem Ghani <nghani@pivotal.io>
-
Submitted by Nadeem Ghani
This test class was named "GpTestCase", the same as the superclass we use for all tests. Since the packaging was different this was not a Python conflict, but it was odd and misleading. The class can be named anything, so it is easy to change. Also modernized the superclass, optimized imports, and added a main().
Signed-off-by: Larry Hamel <lhamel@pivotal.io>
-
Submitted by Jimmy Yih
The compilation job for the binary-swap test portion of the pipeline used the same remote as our normal compilation job. If someone forked the pipeline, they might encounter issues if the tag or branch is missing or out of date in their forked remote of Greenplum. Adding a separate remote variable fixes this.
-
Submitted by Nadeem Ghani
Signed-off-by: Melanie Plageman <mplageman@pivotal.io>
-
Submitted by Melanie Plageman
- Remove iteration-specific members of the qexec packet
- Remove the iterators_history table
- Remove the measures used to populate iterators_history
- Remove the iterator_aggregate flag
Signed-off-by: Nadeem Ghani <nghani@pivotal.io>
Signed-off-by: Melanie Plageman <mplageman@pivotal.io>
-