- 28 Jun 2017, 10 commits
-
-
Committed by Lisa Owen
-
Committed by Asim R P
The pg_control change that brings in heap checksums from upstream breaks binary compatibility, so as soon as it is merged the binary swap test will start failing. Disable it now; it will be re-enabled once a new beta tag is generated. Thereafter, the binary swap test will verify binary compatibility between the new beta tag and HEAD.
-
Committed by Asim R P
This patch pulls in the addition of checksum version information to pg_control and a GUC to report the checksum version. The heap data checksum feature will be pulled in its entirety in subsequent patches. Upstream commits that this patch pulls from:

commit 96ef3b8f
Author: Simon Riggs <simon@2ndQuadrant.com>
Date: Fri Mar 22 13:54:07 2013 +0000
Allow I/O reliability checks using 16-bit checksums

commit 44395174
Author: Simon Riggs <simon@2ndQuadrant.com>
Date: Tue Apr 30 12:27:12 2013 +0100
Record data_checksum_version in control file.

commit 5a7e75849cb595943fc605c4532716e9dd69f8a0
Author: Heikki Linnakangas <heikki.linnakangas@iki.fi>
Date: Mon Sep 16 14:36:01 2013 +0300
Add a GUC to report whether data page checksums are enabled.
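The upstream feature computes a 16-bit checksum for each data page and records a data_checksum_version in pg_control. As a rough illustration of the idea only (this is not the FNV-based algorithm upstream actually uses; the seed and mixing constants here are made up), a page checksum could look like:

```python
def page_checksum16(page: bytes, blkno: int) -> int:
    """Toy 16-bit page checksum: mix every byte, then fold in the
    block number so identical pages at different block locations
    yield different checksums. Illustrative only."""
    h = 0xCBF2                          # arbitrary seed, not upstream's constant
    for b in page:
        h = ((h ^ b) * 0x0101) & 0xFFFF
    h ^= blkno & 0xFFFF
    return h or 1                       # reserve 0 to mean "no checksum stored"

# A corrupted page no longer matches the checksum stored for it:
page = bytes(8192)
stored = page_checksum16(page, 0)
corrupted = b"\x01" + page[1:]
```

Folding in the block number is what lets a checksum also catch a page written to the wrong location, not just bit flips within the page.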
-
Committed by Andreas Scherbaum
-
Committed by Andreas Scherbaum
* Update gpdemo documentation
* Remove Solaris documentation
* Update port numbers
* Add environment variables
-
Committed by Andreas Scherbaum
-
Committed by Andreas Scherbaum
-
Committed by Andreas Scherbaum
This was removed upstream in c970292a, and is one step toward making a number of counters 64-bit safe.
-
Committed by Andreas Scherbaum
-
Committed by David Yozie
* DOCS: Adding security guide source
* Proposed updates from review
-
- 27 Jun 2017, 9 commits
-
-
Committed by Andreas Scherbaum
-
Committed by Ning Yu
Support the ALTER RESOURCE GROUP SET CPU_RATE_LIMIT syntax. The new cpu rate limit takes effect immediately at the end of the transaction.

Example 1:
CREATE RESOURCE GROUP g1 WITH (cpu_rate_limit=0.1, memory_limit=0.1);
ALTER RESOURCE GROUP g1 SET CPU_RATE_LIMIT 0.2;
The new cpu rate limit takes effect immediately.

Example 2:
BEGIN;
ALTER RESOURCE GROUP g1 SET CPU_RATE_LIMIT 0.2;
The new cpu rate limit does not take effect unless the transaction is committed.

Signed-off-by: Richard Guo <riguo@pivotal.io>
Signed-off-by: Gang Xiong <gxiong@pivotal.io>
-
Committed by Andreas Scherbaum
-
Committed by Andreas Scherbaum
-
Committed by Andreas Scherbaum
-
Committed by dyozie
-
Committed by Jane Beckman
* Updated file for COPY ON SEGMENT
* Extra note about FROM and STDOUT
* Incorporate comments from David and Mel
* Revise COPY FROM note
* Copying note from line 65
-
Committed by Todd Sedano
- make distclean cannot be called before configure
-
Committed by Todd Sedano
-
- 24 Jun 2017, 15 commits
-
-
Committed by Todd Sedano
-
Committed by Chris Hajas
The gp_statistics prefix was not included in the list of files to restore from DDBoost, causing the restore to fail when gpdbrestore --restore-stats was used.
-
Committed by Chris Hajas
This functionality was included with pg_dump, but was missing from gpcrondump.
-
Committed by Jane Beckman
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
With --enable-segwalrep, the mirror leverages replay of xl_mm_fs_obj records to delete files. The code was not handling append-only tables correctly: it called `smgrdounlink()`, which is for heap tables and indexes. For AO/CO tables, we need to drop the specific single file mentioned in the xlog record, which is done by `MirroredAppendOnly_Drop()`. Without this, files like <relfilenode>.127, <relfilenode>.129, etc. get left behind on the mirror. The problem was not seen so far because the master never stores data for AO/CO tables, so these files are never created on the master. This only matters now that we start enabling WAL replication for segments.
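The fix amounts to dispatching the drop by storage type during replay. A hypothetical sketch of that decision (the function returns a description standing in for the real C calls `smgrdounlink()` and `MirroredAppendOnly_Drop()`; the path and segment number are made-up examples):

```python
def replay_drop(relpath: str, segno: int, is_appendonly: bool) -> str:
    """Choose the drop routine when replaying a file-drop record on the
    mirror. Returns a description of the action (stand-in for C code)."""
    if is_appendonly:
        # AO/CO: drop only the single segment file named in the xlog
        # record, e.g. <relfilenode>.127
        return f"MirroredAppendOnly_Drop({relpath}.{segno})"
    # Heap tables and indexes: unlink the relation via the storage manager
    return f"smgrdounlink({relpath})"
```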
-
Committed by Ashwin Agrawal
Setting the GUC Debug_print_qd_mirroring is helpful for debugging, but the messages were logged at DEBUG1. Enabling and disabling the GUC is enough to control the logging; a second level of control is not needed.
-
Committed by Ashwin Agrawal
In case of --enable-segwalrep, write-ahead logging should not be skipped for anything, as the mirror relies on that mechanism to reconstruct state. Write-ahead logging for these pieces was previously performed only on the master; with this commit it is enabled for segments as well.
-
Committed by Ashwin Agrawal
Currently, get_filespaces_to_send() only works on the QD. To enable pg_basebackup and WAL replication for QEs, this function must also work on QEs. It relies on the pg_filespace_entry table for its information, which is currently only available on the QD and therefore cannot be leveraged. Hence this only enables basic support on QEs for the default filespace. Support for user-defined filespaces and a non-default transaction filespace will be added incrementally later.
-
Committed by Jimmy Yih
In this behave test, we delete some entries in pg_depend and in some related catalog tables to simulate corruption around pg_depend. The gpcheckcat tool should then flag these.
-
Committed by Jimmy Yih
The current gpcheckcat dependency check only looked for extra pg_depend entries, where a pg_depend entry's objid or refobjid did not exist as the OID of any catalog table with hasoids set. We also need to check the reverse scenario, where a catalog entry is missing an entry in pg_depend. This scenario is difficult to flag because catalog entries can have multiple unique pg_depend references, or dependencies may only be added later by a query (e.g. granting ownership of a database to a certain user). Therefore, we add a very basic check only against catalog tables whose entries immediately create dependencies as part of their creating query.
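In essence, the check now runs in both directions. A minimal sketch of the idea using Python sets (gpcheckcat itself works against live catalog queries; the function and inputs below are hypothetical):

```python
def depend_issues(depend_oids: set, catalog_oids: set):
    """Return (extra, missing) dependency problems.
    extra:   pg_depend objid/refobjid values matching no catalog row
    missing: catalog OIDs that have no pg_depend entry at all"""
    extra = depend_oids - catalog_oids     # the original check
    missing = catalog_oids - depend_oids   # the newly added reverse check
    return extra, missing

# e.g. OID 999 dangles in pg_depend, OID 102 has no dependency row
extra, missing = depend_issues({100, 101, 999}, {100, 101, 102})
```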
-
Committed by Jimmy Yih
We did not check for missing or extra pg_depend entries across the cluster during gpcheckcat, so we would be unaware of scenarios where a pg_depend entry went missing and the object that used that dependency was dropped. Those scenarios can lead to leftover catalog entries and prevent some simple CREATE statements.
-
Committed by Jimmy Yih
As gpcheckcat builds its mapping of catalog issues, it can flag objects whose parents no longer exist (e.g. a toast table left over after dropping a table). When these get caught, gpcheckcat unfortunately errors out in the reporting step. To prevent that, we simply check for None in the RelationObject's vars during reporting. Another fix is for the repeated reporting of issues against the current database after a different database was tested: the catalog issues reported were invalid for the current database and actually came from the previously checked database. This was caused by improper resetting of the GPObjects and GPObjectGraph global dictionaries; we now properly call clear() so the global variables can be reused.
-
[#147538353]
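The second bug above is a classic Python pitfall: rebinding a name to a fresh dict does not empty the dict that other references still point at, while clear() empties it in place for every holder. A minimal illustration (the dict contents are made up):

```python
issues = {"prevdb": ["stale issue from the previously checked database"]}
alias = issues     # e.g. a second module holding a reference to the same dict

# Wrong reset:  issues = {}  would rebind only the local name; `alias`
# would keep the stale entries and re-report them for the next database.
# Right reset: clear in place, so every reference sees an empty dict.
issues.clear()
```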
-
Committed by Chris Hajas
When gptransfer is run with the gpfdist-verbose or gpfdist-very-verbose flags, the gpfdist logs will be kept.
Signed-off-by: Jamie McAtamney <jmcatamney@pivotal.io>
-
- 23 Jun 2017, 6 commits
-
-
Committed by Karen Huddleston
This reverts commit 6a76c5d0, which caused gp_dump_agent to hang during backup.
Signed-off-by: Tom Meyer <tmeyer@pivotal.io>
-
Committed by Jane Beckman
* Clarify gpdb implementation
* Minor updates
* Add column storage info
* Fix typo
* Edits from Mel
* Fix minor typo
* Clarifying table compression
* More on AO storage
* Make append-optimized lower case
-
Committed by Jesse Zhang
Context: unlike upstream Postgres, not every operator in Greenplum is rescannable, notably motions. Semantically, a motion is not rescannable except in one very limited case: when the motion has been initialized but has not yet streamed out any tuples. Rescanning is permitted there because it is as good as not rescanning at all. Historically, we checked for exactly this condition, until in 4.2 we added a check to also allow rescanning when parameters changed. Correlated subqueries were cited as the intent of commit ea867177 (private), which introduced the additional relaxing check. But come to think of it, motions are not rescannable regardless of parameter changes. In fact, if execution reaches this point, the optimizer must have generated a wrong plan. This commit reinstates the original stricter check.
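The reinstated condition boils down to a single predicate. A sketch of the logic (names are hypothetical; the real check lives in the motion node's C code):

```python
def motion_rescan_allowed(initialized: bool, tuples_streamed: int,
                          params_changed: bool) -> bool:
    """A motion may only be 'rescanned' if it has not yet streamed any
    tuples; rescanning is then as good as not rescanning at all.
    Parameter changes do NOT make a motion rescannable -- that is the
    4.2-era relaxation this commit removes, so params_changed is
    deliberately ignored here."""
    return initialized and tuples_streamed == 0
```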
-
Committed by David Yozie
* DOCS: removing DDBoost info from OSS build
* DOCS: Removing RHEL 5 reference
-
Committed by Heikki Linnakangas
Without this fix, the FILTER expression would be left out of the deparsed DDL of a view. Now it gets dumped as the CASE WHEN expression that we transform the FILTER clause into at parse analysis. Ideally, we would dump it using the original FILTER syntax, but that would be a much bigger patch; we'll get that when we merge the upstream FILTER implementation from PostgreSQL 9.4. Fixes github issue #1854, reported by @water32.
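The transformation rests on a standard equivalence: `count(*) FILTER (WHERE p)` can be written as `count(CASE WHEN p THEN 1 END)`, because count() ignores NULLs. A quick sketch of the CASE form (the table and data are made up) using Python's bundled SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(3,), (-1,), (5,), (0,)])

# count(*) FILTER (WHERE x > 0) deparsed in its equivalent CASE form:
# rows failing the predicate become NULL, which count() skips.
(n,) = conn.execute(
    "SELECT count(CASE WHEN x > 0 THEN 1 END) FROM t").fetchone()
```

Here n counts only the rows with x > 0, which is exactly what the FILTER clause expresses.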
-
Committed by foyzur
This PR adds an SQL test to verify that the memory consumption of alien nodes drops to zero after the GUC execute_pruned_plan=on is set.
Signed-off-by: Foyzur Rahman <foyzur@gmail.com>
-