- 29 Jun 2017: 24 commits
-
-
Committed by Heikki Linnakangas
-
Committed by Daniel Gustafsson
This adds a testrunner to pg_upgrade intended to run at the end of ICW. The running gpdemo cluster is converted from the current GPDB version to the same version, which shall result in an identical cluster. The script first dumps the ICW cluster, then upgrades into a new gpdemo cluster and diffs the dump from that against the original dump. In case the cluster needs to be tweaked before the test, a _pre.sql file can be supplied which will be executed against the old cluster before dumping its schema. This file currently drops the relations which hold constraints not yet supported by pg_upgrade. An optional quick test that Oid synchronization is maintained for new objects is supported in a smoketest mode. The new cluster is brought up with fsync turned off to speed up the test. This is inspired by the upstream test runner for pg_upgrade.
-
Committed by Heikki Linnakangas
Instead of meticulously recording the OIDs of each object in the pg_dump output, dump and load all OIDs as separate steps in pg_upgrade. We now only preserve OIDs of types, relations and schemas from the old cluster. Other objects are assigned new OIDs as part of the restore. To ensure the OIDs are consistent between the QD and QEs, we dump the (new) OIDs of all objects to a file after upgrading the QD node, and use those OIDs when restoring the QE nodes. We were already using a similar mechanism for new array types, but we now do that for all objects.
-
Committed by Daniel Gustafsson
If a partitioned append-only table had an index created on the parent table, and subsequently a table without any indexes at all was exchanged into the hierarchy, then pg_upgrade will fail on AO blockdir synchronization. The DDL from pg_dump will recreate the index over the partitioned table, including the partition which previously didn't have an index, and that will cause pg_upgrade to look for a preassigned Oid which doesn't exist. Check for this and abort the upgrade in case we find an offending relation.
-
Committed by Daniel Gustafsson
When querying the AO{CS} auxiliary relations, extract the actual relnames from the catalog rather than assuming the names. Since we need to query the catalog for the blkdir relation anyway, we might as well get all the aux tables in the same query.
-
Committed by Daniel Gustafsson
The relation matching logic during upgrades was very strict, which caused issues when, for example, a relation had a toast attribute which was subsequently dropped: in the new cluster there will be no toast table for this table. Also, don't treat the existence of new toast tables as a fatal error, since newer versions are free to create toasts where previous versions didn't. This is a partial backport of upstream commit 73b9952e.
-
Committed by Daniel Gustafsson
The constraints on children are handled in the dump and will be set on each individual child table manually, so allow skipping recursion in binary upgrade mode.
-
Committed by Daniel Gustafsson
Commit 13216bfd backported fixes for dumping dropped attributes, but missed blocking out the conislocal handling, which we won't get until we merge 8.4. Properly block it out for now with a MERGE marker, and implement dumping of inherited constraints in a way that works for 8.3-based Greenplum.
-
Committed by Daniel Gustafsson
Handling the override-flag question when running pg_resetxlog programmatically is cumbersome for no reason. Add an (undocumented) override argument to make the code less complicated. The question could already be circumvented by piping "y", so passing "-y" is equivalent in terms of manual intervention required.
-
Committed by Daniel Gustafsson
The heap page conversion is only applicable in upgrades from 4.3 to 5.0. Ensure that we aren't already on 5.0 when figuring out whether to convert. Also initialize the flag to false for extra safety: unless the queries find that we need to convert the underlying heap pages, we really shouldn't attempt it.
-
Committed by Daniel Gustafsson
Allow creating a gpdemo cluster in a specified directory by overriding the master data directory. This is needed for pg_upgrade testing, where we need two individual gpdemo clusters at the same time.
-
Committed by Heikki Linnakangas
Backport this commit from upstream, needed for binary upgrade:

    commit 1fd9883f
    Author: Bruce Momjian <bruce@momjian.us>
    Date:   Sat Dec 26 16:55:21 2009 +0000

        Zero-label enums: Allow enums to be created with zero labels, for use during binary upgrade.
-
Committed by Heikki Linnakangas
Extensions do not live in schemas. pg_extension.extnamespace is *not* the schema that the extension belongs to, unlike most "*namespace" fields in catalog tables.
-
Committed by Daniel Gustafsson
The backport of the data checksum catalog changes pulled in the relevant GUC from a version in which struct config_bool is defined differently than in GPDB. The reason an extra NULL in the config_bool array initialization wasn't causing a compilation failure is that there is an extra bool member at the end, reset_val, which is only set at runtime. The extra NULL was "overflowing" into this member and thus only raised a warning under -Wint-conversion:

    guc.c:1180:15: warning: incompatible pointer to integer conversion initializing 'bool' (aka 'char') with an expression of type 'void *'

Fix by removing the superfluous NULL. Since it was only setting reset_val to NULL (and for a GUC which has yet to "do something"), this should have no effect.
-
Committed by Ning Yu
Implement resgroup memory limit. In a resgroup we divide the memory into several slots; the number depends on the concurrency setting of the resgroup. Each slot has a reserved quota of memory, and all slots also share some shared memory which can be acquired preemptively. Some GUCs and resgroup options are defined to adjust the exact allocation policy:

resgroup options:
- memory_shared_quota
- memory_spill_ratio

GUCs:
- gp_resource_group_memory_limit

Signed-off-by: Ning Yu <nyu@pivotal.io>
-
Committed by Alex Diachenko
Added cdbvars.h to the installed headers so that extensions can include it.
-
Committed by Marbin Tan
This code is not correctly recording the data. There seems to be a bug in how we're modifying the values; most likely we're changing the address instead of the actual value it points to. We will need to fully test this part of the code again and create a new PR. This reverts commit 411d3c82.
-
Committed by Jimmy Yih
This gpsegwalrep tool is meant for assisting in segment WAL replication development which is why it is not placed in gpMgmt/bin/. The tool is used to initialize, start, stop, and destroy WAL replication mirror segments. It can also be said that this tool is a rough example of what a segment walrep tool would look like in Greenplum 6.x.
-
Committed by Jimmy Yih
There are times when a developer may want to bring up a Greenplum demo cluster without mirrors. This change makes that possible. Example: WITH_MIRRORS=false make create-demo-cluster
-
Committed by David Yozie
-
Committed by Marbin Tan
The compiler is complaining about a prototype not being there, even though it's already there. Adding void to the prototype's parameter list silences this compiler warning. Signed-off-by: Tushar Dadlani <tdadlani@pivotal.io>
-
Committed by Tushar Dadlani
Signed-off-by: Marbin Tan <mtan@pivotal.io>
-
Committed by Marbin Tan
Coefficient of Variation Calculation

Coefficient of variation is the standard deviation divided by the mean. We're using the term "skew" very loosely in our description, as we're actually calculating the coefficient of variation. With the coefficient of variation, we can tell how dispersed the data points are across the segments: the higher the coefficient of variation, the more non-uniform the distribution of the data in the cluster. The coefficient of variation is unitless, so it can be used to compare different clusters and how they perform relative to each other.

CPU skew calculation:
    mean(cpu)     = sum(per segment cpu cycles) / count(segments)
    variance(cpu) = sum((cpu(segment) - mean(cpu))^2) / count(segments)
    std_dev(cpu)  = sqrt(variance(cpu))
    skew(cpu)     = coefficient of variation = std_dev(cpu) / mean(cpu)

Rows-out skew calculation:
    mean(row)     = sum(per segment rows) / count(segments)
    variance(row) = sum((row(segment) - mean(row))^2) / count(segments)
    std_dev(row)  = sqrt(variance(row))
    skew(row)     = coefficient of variation = std_dev(row) / mean(row)

Signed-off-by: Tushar Dadlani <tdadlani@pivotal.io>
-
Committed by Marbin Tan
The behave test is intermittently failing because the substep tries to install gpperfmon even though gpperfmon is already installed. With the current setup it is hard to figure out which part is actually failing, so decouple the checks to determine which part fails. Signed-off-by: Tushar Dadlani <tdadlani@pivotal.io>
-
- 28 Jun 2017: 14 commits
-
-
Committed by Ming LI
Any HTTP request to gpfdist with a pipe will leave the original pipe reader process hung. To avoid random HTTP requests hitting a working gpfdist instance, gpfdist will now check the request header. Any manual request should be run as: wget --header='X-GP-PROTO:0' http://host:port/file
-
Committed by Kenan Yao
If the QD receives a SIGINT and calls CHECK_FOR_INTERRUPTS after finishing Gang creation, but before recording this Gang in global variables like primaryWriterGang, this Gang would not be destroyed; hence, the next time the QD wants to create a new writer Gang, it would find an existing writer Gang on the segments and report a snapshot collision error.
-
Committed by Bhuvnesh Chaudhary
Signed-off-by: Dhanashree Kashid <dkashid@pivotal.io>
-
Committed by Lisa Owen
-
Committed by Lisa Owen
-
Committed by Asim R P
The pg_control change to bring in heap checksums from upstream breaks binary compatibility. As soon as that is merged, the binary swap test will start failing, so disable it now. It will be re-enabled once a new beta tag is generated; thereafter, the binary swap test will verify binary compatibility between the new beta tag and HEAD.
-
Committed by Asim R P
This patch pulls in the addition of checksum version information to pg_control and a GUC to report the checksum version. The heap data checksum feature will be pulled in, in its entirety, by subsequent patches. Upstream commits that this patch pulls from:

    commit 96ef3b8f
    Author: Simon Riggs <simon@2ndQuadrant.com>
    Date:   Fri Mar 22 13:54:07 2013 +0000

        Allow I/O reliability checks using 16-bit checksums

    commit 44395174
    Author: Simon Riggs <simon@2ndQuadrant.com>
    Date:   Tue Apr 30 12:27:12 2013 +0100

        Record data_checksum_version in control file.

    commit 5a7e75849cb595943fc605c4532716e9dd69f8a0
    Author: Heikki Linnakangas <heikki.linnakangas@iki.fi>
    Date:   Mon Sep 16 14:36:01 2013 +0300

        Add a GUC to report whether data page checksums are enabled.
-
Committed by Andreas Scherbaum
-
Committed by Andreas Scherbaum
* Update gpdemo documentation
* Remove Solaris documentation
* Update port numbers
* Add environment variables
-
Committed by Andreas Scherbaum
-
Committed by Andreas Scherbaum
-
Committed by Andreas Scherbaum
This was removed upstream in c970292a, and is one step toward making a number of counters 64-bit safe.
-
Committed by Andreas Scherbaum
-
Committed by David Yozie
* DOCS: Adding security guide source
* Proposed updates from review
-
- 27 Jun 2017: 2 commits
-
-
Committed by Andreas Scherbaum
-
Committed by Ning Yu
Support ALTER RESOURCE GROUP SET CPU_RATE_LIMIT syntax. The new cpu rate limit takes effect at the end of the transaction.

Example 1:
    CREATE RESOURCE GROUP g1 WITH (cpu_rate_limit=0.1, memory_limit=0.1);
    ALTER RESOURCE GROUP g1 SET CPU_RATE_LIMIT 0.2;
The new cpu rate limit takes effect immediately.

Example 2:
    BEGIN;
    ALTER RESOURCE GROUP g1 SET CPU_RATE_LIMIT 0.2;
The new cpu rate limit doesn't take effect unless the transaction is committed.

Signed-off-by: Richard Guo <riguo@pivotal.io>
Signed-off-by: Gang Xiong <gxiong@pivotal.io>
-