- 24 June 2017, 14 commits
-
-
Committed by Chris Hajas
The gp_statistics prefix was not included in the list of files to restore from DDBoost, causing the restore to fail when gpdbrestore --restore-stats was used.
-
Committed by Chris Hajas
This functionality was included with pg_dump, but was missing from gpcrondump.
-
Committed by Jane Beckman
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
With --enable-segwalrep, the mirror leverages replay of xl_mm_fs_obj records to delete files. The code did not handle append-only tables correctly: it called `smgrdounlink()`, which is meant for heap tables and indexes. For AO/CO tables, we need to drop the specific single file named in the xlog record, which is what `MirroredAppendOnly_Drop()` does. Without this fix, files like <relfilenode>.127, <relfilenode>.129, etc. are left behind on the mirror. The problem was not seen before because the master never stores data for AO/CO tables, so these files are never created on the master; it only matters now that we are enabling WAL replication for segments.
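A rough Python sketch of the intended mirror-side behavior (illustrative only; the real fix is the C function `MirroredAppendOnly_Drop()`, and the `<relfilenode>.<segno>` path convention here is an assumption for the demo):

```python
import os

def mirrored_ao_drop(relfilenode_path, segno):
    """Drop exactly one AO/CO segment file named by the xlog record.

    Unlike smgrdounlink(), which removes a heap relation's files wholesale,
    an AO/CO drop must target the single segment file <relfilenode>.<segno>.
    Returns True if a file was removed, False if it did not exist.
    """
    path = relfilenode_path if segno == 0 else "%s.%d" % (relfilenode_path, segno)
    if os.path.exists(path):
        os.remove(path)
        return True
    return False
```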
-
Committed by Ashwin Agrawal
Setting the GUC Debug_print_qd_mirroring is helpful for debugging, but the messages were logged at DEBUG1. Enabling or disabling the GUC is enough to control the logging; a second level of control is not needed.
-
Committed by Ashwin Agrawal
In the case of --enable-segwalrep, write-ahead logging must not be skipped for anything, as the mirror relies on that mechanism to reconstruct state. Write-ahead logging for these pieces was previously performed only for the master; with this commit it is enabled for segments as well.
-
Committed by Ashwin Agrawal
Currently, get_filespaces_to_send() only works on the QD. To enable pg_basebackup and WAL replication for QEs, this function must also work on QEs. The function relies on the pg_filespace_entry table for its information, but that table is only available on the QD and therefore cannot be leveraged here. Hence this commit only enables basic support on QEs for the default filespace; support for user-defined filespaces and a non-default transaction filespace will be added incrementally later.
-
Committed by Jimmy Yih
In this behave test, we delete some entries in pg_depend and in some related catalog tables to simulate corruption around pg_depend. The gpcheckcat tool should then flag these.
-
Committed by Jimmy Yih
The current gpcheckcat dependency check only looked for extra pg_depend entries, where an entry's objid or refobjid did not exist as the OID of any catalog table with hasoids set. We also need to check the reverse scenario, where a catalog entry is missing its entry in pg_depend. This scenario is difficult to flag because catalog entries can have multiple unique pg_depend references, or their dependencies may be added later by a separate query (e.g. granting ownership of a database to a certain user). Therefore, we add a very basic check only against catalog tables whose creating query immediately creates dependencies.
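Both directions of the check can be sketched in Python (a minimal sketch; the function names and in-memory snapshots are hypothetical, whereas the real gpcheckcat queries the catalog directly):

```python
def extra_pg_depend_entries(pg_depend, catalog_oids):
    """Existing check: pg_depend entries whose objid or refobjid points
    at no existing catalog object (catalog_oids = all OIDs from catalog
    tables with hasoids set)."""
    return [(objid, refobjid) for objid, refobjid in pg_depend
            if objid not in catalog_oids or refobjid not in catalog_oids]

def missing_pg_depend_entries(expected_dependent_oids, pg_depend):
    """New reverse check: catalog objects that should have a pg_depend
    entry but do not. Only applied to objects whose creating query
    immediately creates dependencies."""
    referenced = {objid for objid, _ in pg_depend}
    return [oid for oid in expected_dependent_oids if oid not in referenced]
```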
-
Committed by Jimmy Yih
We did not check for missing or extra pg_depend entries across the cluster during gpcheckcat, so we would be unaware of scenarios where a pg_depend entry went missing and the object that used that dependency was dropped. Those scenarios can leave behind leftover catalog entries and prevent some simple CREATE statements.
-
Committed by Jimmy Yih
As gpcheckcat builds its mapping of catalog issues, it can flag objects whose parents no longer exist (e.g. a toast table left over after dropping a table). When these were caught, gpcheckcat would unfortunately error out during the reporting step; to prevent that, we now check for None in the RelationObject's vars during reporting. Another fixed issue is the repeated reporting, against the currently tested database, of issues that were actually found in the previously checked database. This was caused by improper resetting of the GPObjects and GPObjectGraph global dictionaries; to fix it, we properly use the clear() function to reuse the global variables.
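The clear() fix matters because rebinding a module-level dict to `{}` leaves any alias (e.g. a reference held by another module) pointing at the stale contents, which is exactly how issues leaked from one database's check into the next. A minimal demonstration (names are illustrative, not gpcheckcat's actual globals):

```python
# Simulated global state plus an alias, as if another module imported it.
GPObjects = {"prev_db_issue": "leftover"}
alias = GPObjects

# Wrong reset: rebinding the name. The alias still sees stale data.
GPObjects = {}
print(alias)   # still contains the previous database's issue

# Right reset: mutate in place so every reference sees the empty dict.
GPObjects = alias
GPObjects.clear()
print(alias)   # now empty for both holders of the reference
```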
-
[#147538353]
-
Committed by Chris Hajas
When gptransfer is run with the gpfdist-verbose or gpfdist-very-verbose flags, the gpfdist logs are now kept. Signed-off-by: Jamie McAtamney <jmcatamney@pivotal.io>
-
- 23 June 2017, 9 commits
-
-
Committed by Karen Huddleston
This reverts commit 6a76c5d0, which caused gp_dump_agent to hang during backup. Signed-off-by: Tom Meyer <tmeyer@pivotal.io>
-
Committed by Jane Beckman
* Clarify gpdb implementation
* Minor updates
* Add column storage info
* Fix typo
* Edits from Mel
* Fix minor typo
* Clarifying table compression
* More on AO storage
* Make append-optimized lower case
-
Committed by Jesse Zhang
Context: unlike upstream Postgres, not every operator in Greenplum is rescannable, notably Motions. Semantically, a Motion is not rescannable except in one very limited case: when the Motion has been initialized but has not yet streamed out any tuples. Rescanning is permitted there because it is as good as not rescanning. Historically, we checked for exactly this condition, until in 4.2 we added a check to also allow rescanning when parameters changed; correlated subqueries were cited as the intent of commit ea867177 (private), which introduced that additional relaxing check. But come to think of it, Motions are not rescannable regardless of parameter change. In fact, if execution reaches this point, the optimizer must have generated a wrong plan. This commit reinstates the original stricter check.
-
Committed by David Yozie
* DOCS: removing DDBoost info from OSS build
* DOCS: Removing RHEL 5 reference
-
Committed by Heikki Linnakangas
Without this fix, the FILTER expression would be left out of the deparsed DDL of a view. Now it gets dumped as the CASE WHEN expression that we transform the FILTER into at parse analysis. Ideally, we would dump it using the original FILTER syntax, but that would be a much bigger patch; we'll get that when we merge the upstream FILTER implementation from PostgreSQL 9.4. Fixes github issue #1854, reported by @water32.
-
Committed by foyzur
This PR adds a SQL test to verify that the memory consumption of alien nodes drops to zero after setting the GUC execute_pruned_plan=on. Signed-off-by: Foyzur Rahman <foyzur@gmail.com>
-
Committed by Heikki Linnakangas
Fixes github issue #2130
-
Committed by Heikki Linnakangas
This re-introduces a minor memory leak, per dumped operator. That is not significant in practice: no one has enough operators for it to matter, and we already store some information in memory for each dumped operator anyway. Moreover, this is a divergence from upstream, where this was fixed slightly differently in commit b1aebbb6, in a way that doesn't introduce new compiler warnings. If we must fix this, we should cherry-pick that commit and fix it the same way in both pg_dump.c and cdb_dump_agent.c. But I think this is not worth fixing, and we are better off just leaving the code as it is in PostgreSQL 8.3.
-
Committed by Heikki Linnakangas
Cherry-pick two upstream commits from PostgreSQL 9.2 to silence compiler warnings from src/bin/pg_dump. I don't normally advocate cherry-picking random things from upstream, but I'm getting pretty annoyed by the warnings. This will probably cause some minor merge conflicts between now and 9.2, but nothing major, and the compiler warnings are annoying too. Fixes github issue #447.

Upstream commits included in this:

commit d923125b
Author: Peter Eisentraut <peter_e@gmx.net>
Date: Fri Mar 2 22:30:01 2012 +0200

    Fix incorrect uses of gzFile

    gzFile is already a pointer, so code like

        gzFile *handle = gzopen(...)

    is wrong. This used to pass silently because gzFile used to be defined as void*, and you can assign a void* to a void**. But somewhere between zlib versions 1.2.3.4 and 1.2.6, the definition of gzFile was changed to struct gzFile_s *, and with that new definition this usage causes compiler warnings. So remove all those extra pointer decorations.

    There is a related issue in pg_backup_archiver.h, where

        FILE *FH; /* General purpose file handle */

    is used throughout pg_dump as sometimes a real FILE* and sometimes a gzFile handle, which also causes warnings now. This is not yet fixed here, because it might need more code restructuring.

commit 19f45565
Author: Peter Eisentraut <peter_e@gmx.net>
Date: Tue Mar 20 20:38:20 2012 +0200

    pg_dump: Remove undocumented "files" output format

    This was for demonstration only, and now it was creating compiler warnings from zlib without an obvious fix (see also d923125b), so let's just remove it. The "directory" format is presumably similar enough anyway.
-
- 22 June 2017, 17 commits
-
-
Committed by Heikki Linnakangas
Fixes github issue #2195, reported by @Toknowledge.
-
Committed by Daniel Gustafsson
Just removing the .o file on clean leaves the .a and shlib files around, which can cause problems when building. Add the clean-lib targets from Makefile.shlib.
-
Committed by Daniel Gustafsson
This is a partial backport (documentation part left out) of upstream commit aafbd1df96, which fixes a potential SSL downgrade in libpq.

commit aafbd1df969135c185947c596c46608fc9f4a67c
Author: Noah Misch <noah@leadboat.com>
Date: Mon May 8 07:24:24 2017 -0700

    Restore PGREQUIRESSL recognition in libpq.

    Commit 65c3bf19 moved handling of the, already then, deprecated requiressl parameter into conninfo_storeval(). The default PGREQUIRESSL environment variable was however lost in the change, resulting in a potentially silent accept of a non-SSL connection even when set. Its documentation remained. Restore its implementation. Also amend the documentation to mark PGREQUIRESSL as deprecated for those not following the link to requiressl. Back-patch to 9.3, where commit 65c3bf19 first appeared.

    Behavior has been more complex when the user provides both deprecated and non-deprecated settings. Before commit 65c3bf19, libpq operated according to the first of these found:

        requiressl=1
        PGREQUIRESSL=1
        sslmode=*
        PGSSLMODE=*

    (Note requiressl=0 didn't override sslmode=*; it would only suppress PGREQUIRESSL=1 or a previous requiressl=1. PGREQUIRESSL=0 had no effect whatsoever.) Starting with commit 65c3bf19, libpq ignored PGREQUIRESSL, and the order of precedence changed to:

        last of requiressl=* or sslmode=*
        PGSSLMODE=*

    Starting now, adopt the following order of precedence:

        last of requiressl=* or sslmode=*
        PGSSLMODE=*
        PGREQUIRESSL=1

    This retains the 65c3bf19 behavior for connection strings that contain both requiressl=* and sslmode=*. It retains the 65c3bf19 change that either connection string option overrides both environment variables. For the first time, PGSSLMODE has precedence over PGREQUIRESSL; this avoids reducing the security of "PGREQUIRESSL=1 PGSSLMODE=verify-full" configurations originating under v9.3 and later.

    Daniel Gustafsson
    Security: CVE-2017-7485
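The final precedence order can be illustrated with a small Python sketch (not libpq code; it simplifies requiressl=0 handling and ignores the other sslmode values, purely to show the ordering):

```python
def effective_sslmode(conninfo_opts, env):
    """Resolve sslmode using the order described above:
    last of requiressl=*/sslmode=* in the connection string,
    then PGSSLMODE, then PGREQUIRESSL=1.

    conninfo_opts: ordered list of (key, value) pairs from the
    connection string; later options win, as in libpq.
    """
    mode = None
    for key, value in conninfo_opts:
        if key == "sslmode":
            mode = value
        elif key == "requiressl":
            mode = "require" if value == "1" else "prefer"
    if mode is None and "PGSSLMODE" in env:
        mode = env["PGSSLMODE"]
    if mode is None and env.get("PGREQUIRESSL") == "1":
        mode = "require"
    return mode or "prefer"   # libpq's default sslmode
```

Note how PGSSLMODE beating PGREQUIRESSL keeps a "PGREQUIRESSL=1 PGSSLMODE=verify-full" configuration at verify-full instead of downgrading it to require.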
-
Committed by Richard Guo
A dedicated list is maintained for resource group related callbacks. At transaction end, the callback functions are processed in FIFO order on COMMIT and in LIFO order on ABORT. Signed-off-by: Pengzhou Tang <ptang@pivotal.io>
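The FIFO-on-COMMIT / LIFO-on-ABORT ordering can be sketched as follows (illustrative Python, not the actual C API; function names are hypothetical):

```python
# A single registration list; direction of traversal at transaction end
# depends on whether the transaction committed or aborted.
callbacks = []

def register_resgroup_callback(fn):
    """Append to the dedicated resource group callback list."""
    callbacks.append(fn)

def at_xact_end(committed):
    """Run callbacks FIFO on COMMIT, LIFO on ABORT, then reset the list."""
    order = callbacks if committed else reversed(callbacks)
    for fn in list(order):
        fn()
    callbacks.clear()
```

LIFO on abort mirrors the usual cleanup discipline: undo actions in the reverse order of the operations that registered them.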
-
Committed by David Yozie
DOCS: removing/conditionalizing Pivotal-specific download info; changing PivotalR references to open source page (#2664)
-
Committed by Chuck Litzell
-
Committed by Chris Hajas
Signed-off-by: Jamie McAtamney <jmcatamney@pivotal.io>
-
Committed by Karen Huddleston
Signed-off-by: Todd Sedano <professor@gmail.com>
-
Instead, we should maintain NDVRemain and NullFreq to do cardinality estimation. Adds a function to check whether we need to create a stats bucket in DXL: `FCreateStatsBucket` returns true if the column data type is not a text/varchar/char/bpchar type. Signed-off-by: Ekta Khanna <ekhanna@pivotal.io>
-
Committed by Chris Hajas
Signed-off-by: Chris Hajas <chajas@pivotal.io>
-
Committed by dyozie
-
Committed by Chris Hajas
Signed-off-by: Jamie McAtamney <jmcatamney@pivotal.io>
-
Committed by Daniel Gustafsson
[ci skip]
-
Committed by David Yozie
* adding psql example to kerberos linux client doc
* removing client/loader maps from main .ditamap
* removing client tool guides from repo
-
Committed by foyzur
In GPDB the dispatcher dispatches the entire plan tree to each query executor (QX). Each QX deserializes the entire plan tree and starts execution from its root: InitPlan is called on the QueryDesc, which blindly calls ExecInitNode on the root of the plan. Unfortunately, this is wasteful in terms of both memory and CPU. Each QX is in charge of a single slice, and there can be many slices; looking into plan nodes that belong to other slices and initializing them (e.g., creating PlanState for such nodes) is clearly wasteful. For large plans, particularly planner plans in the presence of partitions, this can add up to significant waste.

This PR proposes a fix: find the local root for each slice and start ExecInitNode there. There are a few special cases. SubPlans are special, as they appear as expressions, but the expression holds the root of the subplan tree. All the subplans are bundled in plannedstmt->subplans, but confusingly as Plan pointers (i.e., we save the root of the SubPlan expression's Plan tree). Therefore, to find the relevant subplans, we first find the relevant expressions and extract their roots, and then iterate over plannedstmt->subplans, calling ExecInitNode only on the ones reachable from some expression in the current slice. InitPlans are no better, as they can appear anywhere in the Plan tree, and walking from a local motion is not sufficient to find them; therefore, we walk from the root of the plan tree and identify all the SubPlans. Note: unlike a regular subplan, an initplan may not appear in an expression as a subplan; rather, it appears as a parameter generator in some other part of the tree. We find these InitPlans and obtain the SubPlan for each.

We can then use the SubPlan's setParam to copy precomputed parameter values from estate->es_param_list_info to estate->es_param_exec_vals. We also found that origSliceIdInPlan is highly unreliable and cannot be used as an indicator of a plan node's slice information. Therefore, we precompute each plan node's slice information to correctly determine whether a Plan node is alien, which makes alien node identification more accurate. In successive PRs, we plan to use the alien memory account balance as a test of whether we successfully eliminated all aliens, and to determine memory savings.
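The core "start ExecInitNode at the local root" idea can be sketched with a toy plan tree (illustrative Python; node and field names are hypothetical, and the real implementation must also handle SubPlans and InitPlans as described above):

```python
class Plan:
    """Toy plan node: slice_id marks which slice the node belongs to,
    so a change of slice_id between parent and child is a Motion boundary."""
    def __init__(self, name, slice_id, children=()):
        self.name, self.slice_id, self.children = name, slice_id, list(children)

def find_local_root(node, my_slice):
    """Topmost node belonging to my_slice; None if the slice has no nodes here."""
    if node.slice_id == my_slice:
        return node
    for child in node.children:
        found = find_local_root(child, my_slice)
        if found is not None:
            return found
    return None

def nodes_to_init(root, my_slice):
    """Names of the nodes this QX would actually initialize: the local
    root's subtree, stopping wherever another slice begins (alien nodes)."""
    local = find_local_root(root, my_slice)
    if local is None:
        return []
    out, stack = [], [local]
    while stack:
        n = stack.pop()
        if n.slice_id != my_slice:
            continue          # slice boundary: alien subtree, don't descend
        out.append(n.name)
        stack.extend(reversed(n.children))
    return out
```

For a tree like Gather(slice 0) over HashJoin(slice 1) over {ScanA(slice 1), Motion(slice 1) over ScanB(slice 2)}, the QX for slice 1 initializes only HashJoin, ScanA, and the Motion, never ScanB or Gather.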
-
-
Committed by foyzur
Detect dead parent accounts and replace them with the Rollover account during the memory-accounting array-to-tree conversion.
* Unit test to check that children of dead parents are serialized as children of the Rollover account.
-