- 13 Jun 2017, 3 commits
-
Submitted by Marbin Tan
CID 170479: Null pointer dereference in agg_put_qexec. We removed too much in c0c1897f; Coverity detected that we were accessing a pointer that is NULL on this codepath, which happens when the key/value pair does not yet exist in the apr_hash. Ensure that we allocate memory before doing a memcpy.
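The fix boils down to a common defensive pattern: allocate the destination before memcpy-ing into a slot that a lookup may have returned as NULL. A minimal sketch of that pattern, using a plain array as a hypothetical stand-in for the real apr_hash (the names `table_get`/`table_put` are illustrative, not GPDB code):

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for the apr_hash lookup: returns NULL on a miss. */
static char *table_get(char **table, int key) {
    return table[key];
}

/* Copy `val` into the slot for `key`, allocating first if the slot is empty.
 * The bug class fixed by the commit was memcpy-ing into a NULL slot when the
 * key/value pair did not exist yet.  (Sketch assumes reused slots are never
 * given a longer value than the one first stored.) */
static char *table_put(char **table, int key, const char *val, size_t len) {
    char *slot = table_get(table, key);
    if (slot == NULL) {                /* key/value pair does not exist yet */
        slot = malloc(len + 1);        /* allocate before the memcpy */
        if (slot == NULL)
            return NULL;
        table[key] = slot;
    }
    memcpy(slot, val, len);
    slot[len] = '\0';
    return slot;
}
```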
-
Submitted by mkiyama
-
Submitted by Todd Sedano
Signed-off-by: Chris Hajas <chajas@pivotal.io>
-
- 12 Jun 2017, 1 commit
-
Submitted by Jesse Zhang
-
- 10 Jun 2017, 8 commits
-
Submitted by Jesse Zhang
-
Submitted by Jesse Zhang
-
Submitted by dyozie
-
Submitted by Jesse Zhang
-
Submitted by Jesse Zhang
We need to bump to CentOS 7 before we can undo this, because Red Hat ships very old compilers :(
-
Submitted by Chris Hajas
* Remove the DDBoost Full and Incremental backup/restore TINC jobs
* Add a behave backup/restore DDBoost job
The previous job took 3h 50m; the new job takes 1h 20m.
-
Submitted by Chris Hajas
These tests were used to test backup/restore on Data Domain. They have been replaced with behave tests. Signed-off-by: Jamie McAtamney <jmcatamney@pivotal.io>
-
Submitted by Karen Huddleston
This is part of the effort to unify our backup/restore tests into a single suite.
* Adds infrastructure to set up DDBoost on the client and clean up the server after completion
* Adds tests for DDBoost-specific options
* Adds test coverage from the TINC suite that was not included in behave
Signed-off-by: Chris Hajas <chajas@pivotal.io>
-
- 09 Jun 2017, 11 commits
-
Submitted by Pengzhou Tang
A test output in extprotocol is nondeterministic in the case where all segments report an error: you cannot tell which segment will report it first. Normally segment 0 reports it first, but we cannot count on that.
-
Submitted by Haisheng Yuan
Keep sliceIndex, rootIndex, parentIndex and children near each other so the slice debug output is easy to read.
-
Submitted by Pengzhou Tang
Formerly, GPDB did dispatch/interconnect cleanup at the executor level, meaning that once an error occurred within the executor it was caught and the dispatch/interconnect state was cleaned up. The problem is that if an error occurs after the executor has started but before it runs, the dispatch/interconnect never gets cleaned up; outbound UDP interconnect packets then still believe the interconnect is active and access memory that has already been freed. This commit adds a few cleanup points at the portal level, a higher call level than the executor, to cover such cases. mppExecutorCleanup() is reentrant, so it is safe to perform the check at both levels.
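Making the cleanup routine reentrant is what allows it to be registered at both the executor and the portal level without double-freeing anything. A minimal sketch of the idea, assuming a done-flag guard (the names here are hypothetical, not the actual mppExecutorCleanup() implementation):

```c
#include <stdbool.h>

/* Hypothetical sketch: a cleanup routine safe to invoke from more than one
 * cleanup point.  A flag makes it reentrant, so whichever level reaches it
 * first does the work and the second call is a no-op. */
static bool cleanup_done  = false;
static int  cleanup_calls = 0;   /* counts how often real work happened */

static void mpp_cleanup_sketch(void) {
    if (cleanup_done)
        return;                  /* already cleaned up at a lower level */
    cleanup_done = true;
    cleanup_calls++;             /* release dispatch/interconnect state here */
}
```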
-
Submitted by Pengzhou Tang
This statistics info is convenient for debugging and for test cases.
-
Submitted by Pengzhou Tang
This reverts commit 9bea5037.
-
Submitted by Pengzhou Tang
Formerly, GPDB did dispatch/interconnect cleanup at the executor level, meaning that once an error occurred within the executor it was caught and the dispatch/interconnect state was cleaned up. The problem is that if an error occurs after the executor has started but before it runs, the dispatch/interconnect never gets cleaned up; outbound UDP interconnect packets then still believe the interconnect is active and access memory that has already been freed. This commit adds a few cleanup points at the portal level, a higher call level than the executor, to cover such cases. mppExecutorCleanup() is reentrant, so it is safe to perform the check at both levels.
-
Submitted by Daniel Gustafsson
Since Command creates a short-lived SSH session, we observe the PID of a throw-away remote process and assume that this PID is unused and available on the remote in the near future: it is no longer associated with a running process and will not be recycled before the tests have finished. Looking the PID up again later instead would introduce a time-of-check-to-time-of-use race, since the PID might have been reallocated by the operating system by the time the test used the data.
-
Submitted by Larry Hamel
Signed-off-by: Tushar Dadlani <tdadlani@pivotal.io>
-
Submitted by Tushar Dadlani
Signed-off-by: Larry Hamel <lhamel@pivotal.io>
-
Submitted by Larry Hamel
Signed-off-by: Tushar Dadlani <tdadlani@pivotal.io>
-
- 08 Jun 2017, 12 commits
-
Submitted by Chuck Litzell
* docs: correct the gpperfmon_install command syntax
* Correct the gpperfmon_install command syntax in the help text
* Remove references to SHA-256-FIPS
* gpperfmon_install docs: clarify the options syntax
Signed-off-by: C.J. Jameson <cjameson@pivotal.io>
-
Submitted by Pengzhou Tang
In Orca, a Motion node used to assign its child node FLOW_SINGLETON if the child has only one sender. This works fine for a multi-segment cluster, but on a single-segment cluster all gangs that were supposed to be GANGTYPE_PRIMARY_READER/WRITER become GANGTYPE_ENTRYDB_READER or GANGTYPE_SINGLETON_READER, so no primary reader or writer gangs are created. A typical bug caused by this is that singleton/entry-db readers get stuck obtaining the shared snapshot that should have been created by the writer gang. This commit fixes #2121: it takes the one-segment cluster into consideration and assigns the flow type more accurately.
-
Submitted by Pengzhou Tang
-
Submitted by Andreas Scherbaum
* Make the 10-Gb Ethernet optional
-
Submitted by Asim R P
The original deadlock is caused by a reader waiting on a lock that is already held by the writer of the same MPP session, while another session is waiting for a conflicting mode on the same lock. The fix is to avoid checking waitMask conflicts for a reader (i.e. when MyProc is different from lockHolderProcPtr). A detailed discussion of the deadlock issue is at: https://groups.google.com/a/greenplum.org/d/msg/gpdb-dev/OS1-ODIK0P4/ZIzayBbMBwAJ Two isolation2 tests are added: one to validate that the deadlock does not occur, and another to ensure that granting locks to readers does not starve existing waiters. Signed-off-by: Xin Zhang <xzhang@pivotal.io>
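The heart of the fix is a condition on who must honour the wait queue. A minimal sketch of that decision, with hypothetical names standing in for the real lock-manager structures (this is not the actual GPDB lock code):

```c
#include <stdbool.h>

/* Hypothetical stand-in for a PGPROC entry. */
typedef struct Proc { int id; } Proc;

/* A reader in an MPP session (my_proc differs from lock_holder) skips the
 * waitMask conflict check, because the writer of the same session may
 * already hold the lock; forcing the reader to queue behind a conflicting
 * waiter is what produced the deadlock.  Writers still honour waiters. */
static bool must_check_waitmask(const Proc *my_proc, const Proc *lock_holder) {
    return my_proc == lock_holder;
}
```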
-
Submitted by Daniel Gustafsson
The xmloption support for preserving whitespace was enabled in a previous commit; bump the feature compliance table to reflect the status of the code.
-
Submitted by C.J. Jameson
- The only difference is an `#if 0` / `#endif` around the whole file, so that it is clear to developer utilities that these files are not compiled in
-
Submitted by Todd Sedano
Partially addresses https://github.com/greenplum-db/gpdb/issues/2422 Signed-off-by: Todd Sedano <professor@gmail.com> Signed-off-by: Chris Hajas <chajas@pivotal.io>
-
Submitted by Andreas Scherbaum
* Make the note about a million files more explicit
-
Submitted by Nadeem Ghani
gpmondb.c CID 160656: Resource leak in gpdb_check_partitions(). An early return meant that the connection was not always closed. Hence, calculate the return value, close the connection, then return.
gpmon_agg.c CID 160655: Resource leak in write_dbmetrics(). Files were not being closed if the dbmetrics were too long. Fix the early-return path so the files get closed.
gpmondb.c CID 160654: Resource leak in gpdb_exec(). The pconn passed in was not always getting a PGconn assigned to it, but returning a failed connection and letting the caller of gpdb_exec() clean it up is the usage pattern. Hence, always make the conn available before returning.
gpmmon.c CID 160653: removed as part of the remove_iterators work.
gpmon_agg.c CID 160652: duplicate of CID 160655.
Signed-off-by: Larry Hamel <lhamel@pivotal.io>
Signed-off-by: C.J. Jameson <cjameson@pivotal.io>
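All three leaks follow the same shape: an early return that skips releasing a resource. The "calculate the return value, close, then return" fix can be sketched with a plain FILE handle (the function below is illustrative, not the gpmon code):

```c
#include <stdio.h>

/* Sketch of the leak-fix pattern: keep a single success path where the
 * resource is released unconditionally before returning, instead of an
 * early return in the middle of the work that skips the fclose(). */
static int count_lines(const char *path) {
    FILE *fp = fopen(path, "r");
    if (fp == NULL)
        return -1;               /* nothing acquired yet, safe to bail */
    int lines = 0, ch;
    while ((ch = fgetc(fp)) != EOF)
        if (ch == '\n')
            lines++;             /* compute the return value first ... */
    fclose(fp);                  /* ... then release the resource ...  */
    return lines;                /* ... then return                    */
}
```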
-
Submitted by Larry Hamel
Signed-off-by: Nadeem Ghani <nghani@pivotal.io>
-
Submitted by Nadeem Ghani
Signed-off-by: Larry Hamel <lhamel@pivotal.io>
-
- 07 Jun 2017, 5 commits
-
Submitted by Andreas Scherbaum
* Update HA documentation
Mirrors must reside on different storage systems too. Master fail-over must be triggered externally. 10-Gb Ethernet is optional.
-
Submitted by Pengzhou Tang
-
Submitted by Pengzhou Tang
Under the former TCP interconnect, after declaring a cursor for an invalid query like "declare c1 cursor for select c1/0 from foo", the following FETCH command could still fetch an empty row instead of raising an error. This is incorrect and inconsistent with the UDP interconnect. The root cause is that senders in the TCP interconnect always send an EOF message to their peers regardless of errors on the segments, so the receivers cannot tell an EOF from an error. The solution in this commit is to not send the EOF to the peers if a sender encounters an error, and to let the QD check the status of all segments when it cannot read data from the interconnect for a long time.
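The sender-side change reduces to one decision: emit EOF only on a clean finish. A hypothetical sketch of that decision (names are illustrative; the real interconnect code is far more involved):

```c
#include <stdbool.h>

enum final_msg { MSG_NONE, MSG_EOF };

/* Only emit the EOF message to peers when the segment finished without
 * error, so receivers never mistake an error for a normal end-of-stream.
 * On error the sender stays silent and the QD, unable to read data for a
 * while, polls the segments' status and surfaces the real error. */
static enum final_msg sender_final_message(bool had_error) {
    if (had_error)
        return MSG_NONE;
    return MSG_EOF;
}
```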
-
Submitted by Pengzhou Tang
This commit restores the TCP interconnect and fixes some hang issues:
* Restore the TCP interconnect code
* Add a GUC named gp_interconnect_tcp_listener_backlog to control the backlog parameter of the listen() call for TCP
* Use memmove instead of memcpy because the memory areas do overlap
* Call checkForCancelFromQD() for the TCP interconnect if there is no data for a while; this avoids the QD getting stuck
* Revert the cancelUnfinished-related modification in 8d251945, otherwise some queries get stuck
* Move and rename the fault injector "cursor_qe_reader_after_snapshot" to make test cases pass under the TCP interconnect
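The memmove-vs-memcpy point above is worth a concrete illustration: when source and destination overlap, memcpy is undefined behaviour, while memmove is specified to copy as if through a temporary buffer. A self-contained example shifting a buffer left in place (`shift_left2` is an illustrative name, not GPDB code):

```c
#include <string.h>

/* Drop the first two bytes of a NUL-terminated region of `used` bytes by
 * shifting the rest left in place.  src (buf + 2) overlaps dst (buf), so
 * memmove is required; memcpy would be undefined behaviour here. */
static void shift_left2(char *buf, size_t used) {
    if (used < 2)
        return;
    memmove(buf, buf + 2, used - 2);
    buf[used - 2] = '\0';
}
```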
-
Submitted by Pengzhou Tang
* Change the default level of gp_log_gang to off.
* Log the query plan size at level TERSE; it is useful for debugging.
-