- 13 June 2017, 8 commits
-
-
By Chuck Litzell
* Update text for deprecated LOG ERRORS &lt;table&gt;
* Remove deprecation note for gpdb5.
-
By Adam Lee
gpfdist and gpcloud were moved to the top level by the commits below, so they should be moved out of gpAux/Makefile.

commit 6125ac85cae720f484d0d45042131f4b859779d2
Author: Marbin Tan <mtan@pivotal.io>
Date:   Fri Feb 5 11:23:06 2016 -0800

    Move gpfdist to gpdb core.

commit 4e34d8bb
Author: Adam Lee <ali@pivotal.io>
Date:   Wed May 24 16:53:15 2017 +0800

    Add a build flag for gpcloud
-
By Ashwin Agrawal
Change tracking files capture what changed while the mirror was down, to help bring it back into sync incrementally. In some instances, mostly due to disk issues or disk-full situations, a partially written or corrupted change tracking log resulted in a rolling PANIC of the segment, and thereby DB unavailability due to a double fault. The only way out was manual intervention: remove the change tracking files and run a full resync.

Instead, this commit adds checksum protection to automatically detect any problem with change tracking files during recovery / incremental resync. If a checksum mismatch is detected, it takes preventive action: it marks the segment into the ChangeTrackingDisabled state and keeps the DB available. It also explicitly enforces that only a full recovery is allowed to bring the mirror back into sync, since the change tracking info no longer exists; any attempt at incremental resync clearly communicates that a full resync has to be performed. This eliminates the need for manual intervention to get the DB back to an available state if change tracking files get corrupted.
-
By Marbin Tan
Removing unused function that got left over from cleanup.
-
By Marbin Tan
CID 170478: Control flow issues (DEADCODE). The `goto bail` path was dead code: we already close the currently open file right before opening a new one, so the FILE pointer was always NULL when `goto bail` was reached.
-
By Marbin Tan
CID 170479: Null pointer dereferenced in agg_put_qexec. We removed too much in c0c1897f; Coverity detected that we were dereferencing a pointer that is NULL on this code path. It comes up when the key/value pair does not yet exist in the apr_hash. Ensure that we allocate memory before doing a memcpy.
-
By mkiyama
-
By Todd Sedano
Signed-off-by: Chris Hajas <chajas@pivotal.io>
-
- 12 June 2017, 1 commit
-
-
By Jesse Zhang
-
- 10 June 2017, 8 commits
-
-
By Jesse Zhang
-
By Jesse Zhang
-
By dyozie
-
By Jesse Zhang
-
By Jesse Zhang
We need to bump to CentOS 7 before we can undo this, because Red Hat ships very old compilers :(
-
By Chris Hajas
* Remove DDBoost Full and Incremental backup/restore TINC jobs
* Add behave backup/restore DDBoost job

The previous job took 3h 50min; the new job takes 1h 20min.
-
By Chris Hajas
These tests were used to test backup/restore on Data Domain. They have been replaced with behave tests.

Signed-off-by: Jamie McAtamney <jmcatamney@pivotal.io>
-
By Karen Huddleston
This is part of the effort to unify our backup/restore tests into a single suite.

* Adds infrastructure to set up DDBoost on the client and clean up the server after completion
* Adds tests for DDBoost-specific options
* Adds test coverage from the TINC suite that was not included in behave

Signed-off-by: Chris Hajas <chajas@pivotal.io>
-
- 09 June 2017, 11 commits
-
-
By Pengzhou Tang
A test output in extprotocol is nondeterministic: all segments report an error, and you cannot tell which segment reports it first. Normally segment 0 reports it first, but we cannot count on that.
-
By Haisheng Yuan
Place sliceIndex, rootIndex, parentIndex and children near each other so that slice debug output is easier to read.
-
By Pengzhou Tang
Formerly, GPDB did dispatch/interconnect cleanup at the executor level, meaning that once an error occurred within the executor, it was caught and dispatch/interconnect were cleaned up. The problem is that if an error occurs after an executor has started but before it runs, dispatch/interconnect have no chance to be cleaned up; outbound UDP interconnect packets still think the interconnect is active and will access memory that has been freed. This commit adds a few cleanup points at the portal level, a call level higher than the executor, to cover cases like the above. mppExecutorCleanup() is reentrant, so it is fine to double-check at both levels.
-
By Pengzhou Tang
This statistics info is convenient for debugging and test cases.
-
By Pengzhou Tang
This reverts commit 9bea5037.
-
By Pengzhou Tang
Formerly, GPDB did dispatch/interconnect cleanup at the executor level, meaning that once an error occurred within the executor, it was caught and dispatch/interconnect were cleaned up. The problem is that if an error occurs after an executor has started but before it runs, dispatch/interconnect have no chance to be cleaned up; outbound UDP interconnect packets still think the interconnect is active and will access memory that has been freed. This commit adds a few cleanup points at the portal level, a call level higher than the executor, to cover cases like the above. mppExecutorCleanup() is reentrant, so it is fine to double-check at both levels.
-
By Daniel Gustafsson
Since Command creates a short-lived SSH session, we observe the PID of a throw-away remote process and assume that this PID is unused and available on the remote in the near future: it is no longer associated with a running process, and it won't be recycled before the tests have finished. Looking the PID up ahead of time would introduce a time-of-check-time-of-use race, since the PID might have been allocated by the operating system by the time the test used the data.
-
By Larry Hamel
Signed-off-by: Tushar Dadlani <tdadlani@pivotal.io>
-
By Tushar Dadlani
Signed-off-by: Larry Hamel <lhamel@pivotal.io>
-
By Larry Hamel
Signed-off-by: Tushar Dadlani <tdadlani@pivotal.io>
-
- 08 June 2017, 12 commits
-
-
By Chuck Litzell
* docs: correct the gpperfmon_install command syntax
* Correct gpperfmon_install command syntax in help text
* Remove references to SHA-256-FIPS
* gpperfmon_install docs: Clarify options syntax

Signed-off-by: C.J. Jameson <cjameson@pivotal.io>
-
By Pengzhou Tang
In Orca, a motion node used to assign its child node FLOW_SINGLETON if the child has only one sender. This works fine for multi-segment clusters, but on a single-segment cluster, all gangs that were supposed to be GANGTYPE_PRIMARY_READER/WRITER instead become GANGTYPE_ENTRYDB_READER or GANGTYPE_SINGLETON_READER, so no primary reader or writer gangs are created. A typical symptom of this bug is singleton/entry-db readers getting stuck obtaining the shared snapshot that should have been created by the writer gang. This commit fixes #2121: it takes the one-segment cluster into consideration and assigns the flow type more accurately.
-
By Pengzhou Tang
-
By Andreas Scherbaum
* Make the 10 Gb optional
-
By Asim R P
The original deadlock is caused by a reader waiting on a lock that is already held by the writer of the same MPP session, while another session is waiting for a conflicting mode on the same lock. The fix is to avoid checking waitMask conflicts for a reader (i.e. when MyProc is different from lockHolderProcPtr). Detailed discussion of the deadlock issue is at: https://groups.google.com/a/greenplum.org/d/msg/gpdb-dev/OS1-ODIK0P4/ZIzayBbMBwAJ

Two isolation2 tests are added: one to validate that the deadlock does not occur, and another to ensure that granting locks to readers does not starve existing waiters.

Signed-off-by: Xin Zhang <xzhang@pivotal.io>
-
By Daniel Gustafsson
The xmloption support for preserving whitespace was enabled in a previous commit; bump the feature compliance table to reflect the status of the code.
-
By C.J. Jameson
- The only difference is `#if 0` / `#endif` around the whole file, so that it's clear to developer utilities that these aren't compiled in
-
By Todd Sedano
Partially addresses https://github.com/greenplum-db/gpdb/issues/2422

Signed-off-by: Todd Sedano <professor@gmail.com>
Signed-off-by: Chris Hajas <chajas@pivotal.io>
-
By Andreas Scherbaum
* Make the note about millions of files more explicit
-
By Nadeem Ghani
gpmondb.c CID 160656: Resource leak in gpdb_check_partitions(). An early return meant that the connection was not always closed. Hence, calculate the return value, close the connection, then return.

gpmon_agg.c CID 160655: Resource leak in write_dbmetrics(). Files were not being closed if dbmetrics were too long. Hence, fix the case where an early return means files don't get closed.

gpmondb.c CID 160654: Resource leak in gpdb_exec(). The pconn passed in was not always getting a PGconn assigned to it. But returning a failed connection and letting the caller of gpdb_exec() clean it up is the usage pattern. Hence, always make the conn available before returning.

gpmmon.c CID 160653: removed as part of remove_iterators work.

gpmon_agg.c CID 160652: duplicate of CID 160655.

Signed-off-by: Larry Hamel <lhamel@pivotal.io>
Signed-off-by: C.J. Jameson <cjameson@pivotal.io>
-
By Larry Hamel
Signed-off-by: Nadeem Ghani <nghani@pivotal.io>
-
By Nadeem Ghani
Signed-off-by: Larry Hamel <lhamel@pivotal.io>
-