- 31 August 2017, 12 commits
-
-
Committed by David Yozie
-
Committed by Lav Jain
* Refactor GPHDFS regression to run for pxf
* Remove customized Hadoop home location
* PXF tarball creation inside GPDB pipeline
* Remove legacy directory
* Use enable_pxf instead of with_pxf
-
Committed by Daniel Gustafsson
-
Committed by Bhuvnesh
* Change the directory location and update readme

When conan is used to build the dependencies (orca & xerces) of gpdb, it copies the headers and libraries to the path specified in the imports section of the conanfile.txt. Change the target copy location to /usr/local/include and /usr/local/lib, as these are the defaults for gpdb. If the user prefers a different directory, they can change the location accordingly.
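For reference, a conanfile.txt imports section along these lines produces the copy behavior described; the source-folder names and file patterns here are illustrative, not necessarily the repository's actual conanfile:

```ini
[imports]
# copy headers from each dependency package's "include" folder
include, *.h -> /usr/local/include
include, *.hpp -> /usr/local/include
# copy shared and static libraries from each package's "lib" folder
lib, *.so* -> /usr/local/lib
lib, *.a -> /usr/local/lib
```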
-
Committed by Heikki Linnakangas
Rather than appending to a StringInfo, return a string. The caller can append that to a StringInfo if he wants to. And instead of passing a prefix as argument, the caller can prepend that too. Both callers passed the same format string, so just embed that in the function itself. Don't append a trailing "; ". It's easier for the caller to append it, if it's preferred, than to remove it afterwards. Also add a regression test for the 'gp_enable_fallback_plan' GUC. There were none before. The error message you get with that GUC disabled uses the gp_guc_list_show function.
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
Plus other minor cleanup.
-
Committed by Larry Hamel
Previously, during gpinitsystem, the standby was instantiated in the middle of setting up the master. This ordering caused problems because initializing the standby could cause an exit when an error occurred. As a result of this early exit, the gp_toolkit and DCA gucs were not set properly. Instead, initialize the standby after the master is finished.
------------------------------------------
Previously the exit return code for gpinitsystem was always non-zero. Now, it is non-zero only in an error or warning case. The issue was due to SCAN_LOG interpreting an empty string as a line count of one. Fixed by changing to word count.
------------------------------------------
Initializing a standby can no longer cause gpinitsystem to exit early. Added extra logging/output about standby master status. Tell the user at the end of gpinitsystem if gpinitstandby failed.
------------------------------------------
Signed-off-by: Marbin Tan <mtan@pivotal.io>
-
Committed by Nadeem Ghani
This change adds a check to gpaddmirrors: first check whether the heap_checksum setting is consistent across the cluster. If not, fail immediately; otherwise, continue with the normal workflow.
Signed-off-by: Shoaib Lari <slari@pivotal.io>
Signed-off-by: Marbin Tan <mtan@pivotal.io>
-
Committed by Shoaib Lari
Signed-off-by: Nadeem Ghani <nghani@pivotal.io>
-
Committed by Jacob Champion
The locking contract to access the LSN of a page is:
1. Content lock must be held in exclusive mode, OR
2. Content lock must be held in shared mode and the buffer header spinlock must be held.

PageGetLSN() and BufferGetLSNAtomic() now assert that this contract is maintained for shared buffers. To make the implementation of PageGetLSN() a little easier, move to a static inline function instead of a macro. Callers passing a PageHeader must now explicitly cast to Page.
Signed-off-by: Asim R P <apraveen@pivotal.io>
Signed-off-by: Ashwin Agrawal <aagrawal@pivotal.io>
-
Committed by Jacob Champion
Certain callers of PageGetLSN weren't correctly holding the buffer spinlock; it is needed whenever the buffer content lock is not held in exclusive mode. For heapam.c, also ensure that we don't access the LSN after releasing the lock.
Signed-off-by: Asim R P <apraveen@pivotal.io>
-
- 30 August 2017, 21 commits
-
-
Committed by Heikki Linnakangas
Most, if not all, of the queries in the qp_olap_windowerr test contained gpdiff "mvd" directives, to tell gpdiff what the expected order of output rows is. However, all of the queries in that test fail on purpose, because of various errors. That means that the "mvd" directives didn't do anything, because there were no result sets in the output. However, commit de548159 added a few tests that return a result set to the end of the test script. That caused the preceding "mvd" directives to be applied, incorrectly, to those new result sets. That produced a lot of messages like "specified MVD column out of range: 3 vs 1" in the console. While harmless (they didn't cause the test to fail), let's be tidy.
-
Committed by Heikki Linnakangas
'nuff said.
-
Committed by Adam Lee
It might cause conflicts; safe to remove once `PG_MODULE_MAGIC` is put into an `extern "C"` block.
-
Committed by Yuan Zhao
1. Add dependency packages to Ivy
2. Modify set_bld_arch.sh to correctly recognize aix7
3. Disable unsupported python libraries on aix7
4. Disable gpmapreduce for aix7
5. Set ADDON_DIR for aix7
Signed-off-by: Peifeng Qiu <pqiu@pivotal.io>
-
Committed by Hubert Zhang
This commit fixes the regression introduced by the commit "WorkerPool: Error out if numWorkers is 0 or less". Details in https://github.com/greenplum-db/gpdb/pull/3036
Signed-off-by: Xiang Sheng <stanly.sxiang@gmail.com>
-
Committed by Tom Meyer
Original commit message: This speeds up reassigning locks to the parent owner, when the transaction holds a lot of locks, but only a few of them belong to the current resource owner. This particularly helps pg_dump when dumping a large number of objects. The cache can hold up to 15 locks in each resource owner. After that, the cache is marked as overflowed, and we fall back to the old method of scanning the whole local lock table. The tradeoff here is that the cache has to be scanned whenever a lock is released, so if the cache is too large, lock release becomes more expensive. 15 seems enough to cover pg_dump, and doesn't have much impact on lock release. Jeff Janes, reviewed by Amit Kapila and Heikki Linnakangas.
-
Committed by Shivram Mani
-
Committed by dyozie
-
Committed by dyozie
-
Committed by Nadeem Ghani
Remove global variable table_expand_error by checking the pool of done ExpandCommand(s).
Signed-off-by: Marbin Tan <mtan@pivotal.io>
-
Committed by Shoaib Lari
This commit adds a check of cluster state: the heap_checksum setting on all primary segments must match the heap_checksum setting on the master before doing the expansion. If all primaries match the master, gpexpand continues with setting up the expansion segments; otherwise, it logs the inconsistent primaries and exits.
Signed-off-by: Marbin Tan <mtan@pivotal.io>
-
Committed by Nadeem Ghani
gpexpand had a lot of code in the __main__ module method, along with global vars used by other methods and classes in the module. This commit introduces a main() method, which can be called from unit tests, and converts global vars to params and fields.
Signed-off-by: Shoaib Lari <slari@pivotal.io>
-
Committed by Shoaib Lari
Signed-off-by: Nadeem Ghani <nghani@pivotal.io>
-
Committed by Mel Kiyama
* DOCS: gucs for interconnect debugging
* docs: port fixes from COPY ON SEGMENT review.
-
Committed by Jacob Champion
The recipe didn't properly chain the $(MAKE) invocations; any failures when building or installing gpcloud were ignored.
-
Committed by Heikki Linnakangas
It was getting in the way of backporting commit 9b1b9446f5 from PostgreSQL, which added an '#include "storage/lock.h"' to resowner.h, forming a cycle. The include was only needed for the declaration of the awaitedOwner global variable. Replace "ResourceOwner" with the equivalent "struct ResourceOwnerData *" to avoid it. This revealed a bunch of other files that were relying on resowner.h being indirectly included through lock.h. Include resowner.h directly in those files. The ResPortalIncrement.owner field was not used for anything, so instead of including resowner.h in that file, just remove the field that needed it.
-
Committed by Xin Zhang
An FTS probe to the primaries in a mirrorless cluster will never result in an update of gp_segment_configuration. If a primary goes down, we must keep the primary marked as up so that gpstart can start the primary back up. All transactions will abort and nothing should work except for read-only queries to master-only catalog tables.

Stopping the FTS probes for a mirrorless cluster introduced an infinite loop in FtsNotifyProber, in which the dispatcher waits in an infinite loop for fts_statusVersion to change. To break the infinite loop, we acknowledge the forced probe request as a no-op and update fts_statusVersion to break the loop for the dispatcher. The dispatcher should then act the same as before this commit.

We also add a #define for the character value of GpFaultStrategy.
Authors: Xin Zhang, Ashwin Agrawal, and Jimmy Yih
-
Committed by Lav Jain
* Refactor GPHDFS regression to run for pxf
* Remove customized Hadoop home location
-
Committed by Todd Sedano
We believe the pipeline is red due to this commit: https://github.com/greenplum-db/gpdb/commits/fdc9e0a2812dbb01f0883c570f90d82397e2c573
The PR pipeline was red when it was pushed: https://github.com/greenplum-db/gpdb/pull/3089
-
Committed by Mel Kiyama
* DOCS: New GUC gp_enable_segment_copy_checking. COPY ON SEGMENT changes
* docs: remove draft comment
* docs: Edited text based on review comments. Reorganized notes on COPY ... ON SEGMENT information.
* docs: clean up typos found in review.
-
Committed by David Yozie
* porting hstore contrib module docs from postgres 8.4
* adding link to hstore doc from data type reference
* adding placeholder install instructions
* removing info about GIN index support
-
- 29 August 2017, 7 commits
-
-
Committed by Weinan WANG
Some non-reentrant functions are invoked in the signal handler. To fix this bug, change the signal handler to an asynchronous model: use a global variable "sig_flag" to store the last signal state, and check "sig_flag" on a 1s polling interval or after a failure happens in a blocking I/O function (such as send/receive).

Fixes bug: gpload does not stop after Informatica sends an exit call.
-
Committed by Heikki Linnakangas
Introduced by commit 522c7c09, spotted by Coverity.
-
Committed by Heikki Linnakangas
In PostgreSQL, the DO block is after RemoveFuncStmt. It was slightly misplaced when it was backported from PostgreSQL 9.0.
-
Committed by Daniel Gustafsson
Any error that the user is expected to see should use the ereport() macro rather than its older cousin elog(). Also avoid closing resources just before erroring out, as the error cleanup will be handled automatically, and move a long message into an errhint instead.
-
Committed by Pengzhou Tang
In the binary swap test, the new binary is replaced by the old binary and pg_dump is then run. However, pg_dump will still try to load the new regress.so, so if regress.so contains new symbols that only exist in the new binary, it will report an error. For resGroupPalloc() itself, IsResGroupEnabled or IsResGroupActivated makes little difference, so to make binary swap pass, we still use IsResGroupEnabled.
-
Committed by Heikki Linnakangas
This means that hstore will be compiled and installed by a top level "make", and the regression tests are run as part of "make installcheck-world".
-
Committed by Heikki Linnakangas
-