- 22 May 2018, 6 commits
-
-
Committed by Jinbao Chen
-
Committed by Richard Guo
By design, all array types should be hashable in GPDB. Issue 3741 shows that GPDB raised a "text array type is not hashable" error. This commit fixes that.
-
Committed by Jacob Champion
AIX builds are currently failing with the following error message: ../../config/install-sh: utils/errcodes.h does not exist. There is currently a hardcoded list of headers that must be generated before installing src/include on AIX. errcodes.h came in as part of the 9.1 merge, and it was missed in this list. For good measure, add this to the Windows build as well (it's not currently failing there) so that some future unrelated change doesn't introduce this same build failure. Co-authored-by: NAsim Praveen <apraveen@pivotal.io>
-
Committed by Jacob Champion
There are no distributed transactions during binary upgrade, so we can ignore them.
-
Committed by Jacob Champion
VACUUM FREEZE must function correctly during binary upgrade so that the new cluster's catalogs don't contain bogus transaction IDs. Do a simple check on the QD in our test script, by querying the age of all the rows in gp_segment_configuration.
-
Committed by Jacob Champion
As part of the merge with PostgreSQL 9.1, two gpdb/master commits were incorrectly reverted. This change replays the following commits:

commit dd42f3d4
Author: Todd Sedano <tsedano@pivotal.io>
Date: Thu May 17 10:47:30 2018 -0700

    Syncing python-dependencies with pythonsrc-ext

    In comparing https://github.com/greenplum-db/pythonsrc-ext to python-dependencies.txt, there are several differences. pythonsrc-ext is the vendored python repo, so it should be in sync with pythonsrc-ext. argparse is in pythonsrc-ext, adding it to python-dependencies.txt. enum34, parse-type, ptyprocess, six are not in pythonsrc-ext, so moving them to python-developer-dependencies.txt.

    Authored-by: NTodd Sedano <tsedano@pivotal.io>

commit 7f9f11d3
Author: Daniel Gustafsson <dgustafsson@pivotal.io>
Date: Fri May 18 09:43:48 2018 +0200

    Fix typo in comment
-
- 21 May 2018, 1 commit
-
-
Committed by Hubert Zhang
Add a hook framework to the signal handlers (e.g. StatementCancelHandler and die) so that a slow function in a third-party library can be cancelled. Add a PLy_handle_cancel_interrupt hook to cancel PL/Python, which uses Py_AddPendingCall to cancel embedded Python asynchronously.
-
- 19 May 2018, 2 commits
-
-
Committed by Ashwin Agrawal
Many places in the code need to check whether a table uses row- or column-oriented storage, which basically means whether it is an append-optimized table or not. Currently this is done with a combination of two macros, RelationIsAoRows() and RelationIsAoCols(). Simplify this with a new macro, RelationIsAppendOptimized().
-
Committed by Asim R P
This is the final batch of commits from PostgreSQL 9.1 development, up to the point where the REL9_1_STABLE branch was created and 9.2 development started on the PostgreSQL master branch.

Notable upstream changes:

* Changes to "temporary" vs. "permanent" relation handling. Previously, relations were classified as either temporary or not-temporary. Now there is a concept of "relation persistence", which includes un-WAL-logged (but permanent) tables as a third option. Additionally, 9.1 adds the backend ID to the relpath of temporary tables. Because GPDB does not have a 1:1 relationship between backends and temp relations, this has been reverted for now and the concept of a special `TempRelBackendId` has been added. We may look into backporting pieces of the parallel session support from future versions of PostgreSQL to minimize this difference. There is currently a fatal flaw in our handling of relfilenodes for temporary relations: it is possible (though unlikely) for a temporary sequence and a permanent relation (or vice versa) to point to the same shared buffer in the buffer cache. This may lead to corruption of that buffer. Our current attempt at a fix for this issue isn't good enough yet, so the offending code is marked with FIXMEs. We plan to solve this problem with a total overhaul of the way temporary relations are stored in the shared buffer cache.
* Collation is now supported per-column. Many plan nodes now have their own concept of input and output collation IDs. Notably, since this change involves heavy changes to generated plans, ORCA does not yet support non-default collation.
* The SERIALIZABLE isolation level was added. This is not currently supported in GPDB, because we don't have a way to analyze serialization conflicts in a distributed manner; we will "fall back" to REPEATABLE READ.
* GROUP BY now allows selection of columns in the grouping query without mentioning those columns in the GROUP BY clause, as long as those columns are from a table that's being grouped by its primary key. This requires some modification to the parallel grouping planner in GPDB.
* Support for INSTEAD OF triggers on views. Notably, this would allow "modification" of views, but GPDB currently has no support for this concept in the planner. This functionality is temporarily disabled for now.
* CTEs (WITH ... AS) can now contain data modification statements (INSERT/UPDATE/DELETE). This is not yet supported in GPDB.
* `make -k` is now supported for installcheck[-world] and several other Makefile targets.
* Foreign tables. Currently, GPDB does not support planning for statements made against foreign tables, so CREATE FOREIGN TABLE has been disabled.
* SELinux support and integration has been added with the SECURITY LABEL command and the associated sepgsql utility. SECURITY LABEL support is currently tested in GPDB via the upstream regression tests, but sepgsql is not.
* GUC API changes: assignment hooks are now split into two phases (check/assign).

Notable GPDB changes:

* Several pieces of the COPY dispatching code have been heavily refactored/rewritten to match upstream more closely. For now, compatibility with 5.x external table catalog entries has been maintained, though this is likely to change.

Co-authored-by: NAsim Praveen <apraveen@pivotal.io>
Co-authored-by: NBhuvnesh Chaudhary <bchaudhary@pivotal.io>
Co-authored-by: NDhanashree Kashid <dkashid@pivotal.io>
Co-authored-by: NGoutam Tadi <gtadi@pivotal.io>
Co-authored-by: NHeikki Linnakangas <hlinnakangas@pivotal.io>
Co-authored-by: NJacob Champion <pchampion@pivotal.io>
Co-authored-by: NJesse Zhang <sbjesse@gmail.com>
Co-authored-by: NJinbao Chen <jinchen@pivotal.io>
Co-authored-by: NLarry Hamel <lhamel@pivotal.io>
Co-authored-by: NMax Yang <myang@pivotal.io>
Co-authored-by: NMelanie Plageman <mplageman@pivotal.io>
Co-authored-by: NOmer Arap <oarap@pivotal.io>
Co-authored-by: NPaul Guo <paulguo@gmail.com>
Co-authored-by: NRichard Guo <guofenglinux@gmail.com>
Co-authored-by: NSambitesh Dash <sdash@pivotal.io>
Co-authored-by: NShreedhar Hardikar <shardikar@pivotal.io>
Co-authored-by: NTaylor Vesely <tvesely@pivotal.io>
Co-authored-by: NVenkatesh Raghavan <vraghavan@pivotal.io>
-
- 18 May 2018, 6 commits
-
-
Committed by Daniel Gustafsson
-
Committed by Todd Sedano
In comparing https://github.com/greenplum-db/pythonsrc-ext to python-dependencies.txt, there are several differences. pythonsrc-ext is the vendored python repo, so python-dependencies.txt should be in sync with it. argparse is in pythonsrc-ext, so add it to python-dependencies.txt. enum34, parse-type, ptyprocess, and six are not in pythonsrc-ext, so move them to python-developer-dependencies.txt. Authored-by: NTodd Sedano <tsedano@pivotal.io>
-
Committed by Todd Sedano
Authored-by: NTodd Sedano <tsedano@pivotal.io>
-
Committed by Bhuvnesh Chaudhary
-
Committed by Nadeem Ghani
The TINC tests in the walrep_1 job have been migrated to Behave. This commit removes the walrep_1 job and updates the generated master pipeline. Co-authored-by: NNadeem Ghani <nghani@pivotal.io> Co-authored-by: NJamie McAtamney <jmcatamney@pivotal.io>
-
Committed by David Kimura
The issue is that during recovery a mirror can take longer than the default timeout to finish promotion. Because the timeout is sometimes too short, gprecoverseg would intermittently throw the error "can't start transaction in 'BEGIN'". This commit leverages the GUCs gp_gang_creation_retry_count and gp_gang_creation_retry_timer to increase the timeout allotted for retriable gang errors to approximately 30 seconds. Co-authored-by: NAshwin Agrawal <aagrawal@pivotal.io>
-
- 17 May 2018, 15 commits
-
-
Committed by Nadeem Ghani
The walrep_1 TINC tests have been migrated to the Behave framework under gpMgmt top-level directory. We remove the Makefile target but keep the TINC tests because the walrep_2 target has some test dependencies. Co-authored-by: NJimmy Yih <jyih@pivotal.io> Co-authored-by: NNadeem Ghani <nghani@pivotal.io>
-
Committed by Jimmy Yih
We added a job for gpactivatestandby and moved the gpinitstandby job to the multi-node CLI suite. Co-authored-by: NNadeem Ghani <nghani@pivotal.io> Co-authored-by: NJimmy Yih <jyih@pivotal.io>
-
Committed by Jimmy Yih
These scenarios come from the walrep_1 TINC Makefile target. They have been designed to run on both singlenode and multinode environments. The tests have not been removed from TINC yet because some walrep_2 TINC tests have dependencies. Co-authored-by: NJimmy Yih <jyih@pivotal.io> Co-authored-by: NNadeem Ghani <nghani@pivotal.io>
-
Committed by Nadeem Ghani
These scenarios come from the walrep_1 TINC Makefile target. They have been designed to run on both singlenode and multinode environments. The tests have not been removed from TINC yet because some walrep_2 TINC tests have dependencies. Co-authored-by: NJimmy Yih <jyih@pivotal.io> Co-authored-by: NNadeem Ghani <nghani@pivotal.io>
-
Committed by Nadeem Ghani
These are ported from the master/standby TINC tests for checking the following: 1. gpstart -a -y does nothing when there is no standby master configured 2. gpstop -a -y skips standby master when there is one configured Co-authored-by: NNadeem Ghani <nghani@pivotal.io> Co-authored-by: NJimmy Yih <jyih@pivotal.io>
-
Committed by Kevin Yeap
The gpactivatestandby utility always used the $MASTER_DATA_DIRECTORY environment variable, even when the -d flag was supplied by the user. Fix the issue by actually using the directory path supplied by the user. Co-authored-by: NKevin Yeap <kyeap@pivotal.io> Co-authored-by: NJimmy Yih <jyih@pivotal.io>
-
Committed by Tingfang Bao
Quick indent fix: the tables string was being appended incorrectly. Signed-off-by: NMing LI <mli@apache.org>
-
Committed by Tingfang Bao
We find all the sequences in the table and verify whether each sequence should be created. Then we get the complete dump SQL that includes both the sequence DDL and the table DDL, so the sequence can be created before the table. This implementation avoids locking the table while a table with a sequence is dropped in a transaction, while concurrently running `psql -f ...` to create the sequence. Signed-off-by: NMing LI <mli@apache.org>
-
Committed by Adam Lee
Without this change, an integer overflow occurs when more than 2^31 rows are copied in `COPY ON SEGMENT` mode. When the overflowed value is cast to uint64 (the type of `processed` in `CopyStateData`), third-party Postgres drivers that read it as an int64 fail with an out-of-range error.
-
Committed by Lisa Owen
* docs - resgroup memory_auditor to cat/view/gptoolkit
* cgroup mem usage output - used and limit_granted
* updates for code refactor that was just merged
* type to text
-
Committed by Mel Kiyama
* docs: update cast information for GPDB 5/6. Update cast information; add information about limited text casts. See section "Type Casts".
* docs: review comments for GPDB 5/6 updated cast information.
* docs: fix typos in updated CAST info
-
Committed by Todd Sedano
Authored-by: NTodd Sedano <tsedano@pivotal.io>
-
Committed by Jesse Zhang
To "rebalance" a primary-mirror pair, gprecoverseg -r performs the following steps:
1. bring down the acting primary
2. issue a query that triggers the failover
3. bring up the mirror (gprecoverseg -F)

Currently these three steps happen in close succession. However, there is a chance that between step 2 and step 3 the mirror promotion happens more slowly than we expect. The implicit assumption here is that the acting mirror has finished transitioning to the primary role before step 3 is performed. This patch adds a retry in "sort of step 2, definitely before step 3" to ensure a good state before we bring up the mirror.

Co-authored-by: Jesse Zhang <sbjesse@gmail.com>
Co-authored-by: David Kimura <dkimura@pivotal.io>
-
Committed by Chris Hajas
Since protocols do not belong to a namespace, we do not want to dump them in table or schema filtered backups. They will only be dumped in a full backup. Co-authored-by: NKaren Huddleston <khuddleston@pivotal.io> Co-authored-by: NChris Hajas <chajas@pivotal.io>
-
Committed by Omer Arap
If the column statistics in `pg_statistic` have values of a type different from the column type, the metadata accessor should not translate the stats, and should create dummy stats instead. This commit also reorders stats collection from `pg_statistic` to align with how analyze generates stats: MCV and histogram translation is moved to the end, after NDV, null fraction, and column width extraction. Signed-off-by: NMelanie Plageman <mplageman@pivotal.io>
-
- 16 May 2018, 9 commits
-
-
Committed by David Sharp
Co-authored-by: NDavid Sharp <dsharp@pivotal.io> Co-authored-by: NLarry Hamel <lhamel@pivotal.io>
-
Committed by Jesse Zhang
Consider the following SQL; we expect error logging to be turned off for table `ext_error_logging_off`:

```sql
create external table ext_error_logging_off (a int, b int)
location ('file:///tmp/test.txt') format 'text'
segment reject limit 100;
\d+ ext_error_logging_off
```

And in this next case we expect error logging to be turned on for table `ext_t2`:

```sql
create external table ext_t2 (a int, b int)
location ('file:///tmp/test.txt') format 'text'
log errors segment reject limit 100;
\d+ ext_t2
```

Before this patch, we were making two mistakes in handling this external table DDL:

1. We enabled error logging *whenever* the user specified the `SEGMENT REJECT` clause, completely ignoring whether `LOG ERRORS` was specified.
2. Even then, we made the mistake of implicitly coercing the OID (an unsigned 32-bit integer) to a bool (which is really just a C `char`): that means 255/256 of the time (99.6%) the result is `true`, and 0.4% of the time we get a `false` instead.

The `OID`-to-`bool` implicit conversion could have been caught by a `-Wconversion` GCC/Clang flag. It is most likely a leftover from commit 8f6fe2d6. This bug manifested itself in the `dsp` regression test mysteriously failing about once every 200 runs, with the only diff on a `\d+` of an external table that should have error logging turned on but whose returned definition has it turned off. While working on this we discovered that all of our existing external tables have both `LOG ERRORS` and `SEGMENT REJECT`, which is why this bug wasn't caught in the first place.

This patch fixes the issue by properly setting the catalog column `pg_exttable.logerrors` according to the user input. While we were at it, we also cleaned up a few dead pieces of code and made the `dsp` test a bit friendlier to debug.

Co-authored-by: NJesse Zhang <sbjesse@gmail.com>
Co-authored-by: NDavid Kimura <dkimura@pivotal.io>
-
Committed by Abhijit Subramanya
-
Committed by Abhijit Subramanya
-
Committed by Jesse Zhang
Fixes greenplum-db/gporca#358
-
Committed by Asim R P
QE readers incorrectly return true from TransactionIdIsCurrentTransactionId() when passed an xid that is an aborted subtransaction of the current transaction. The end effect is wrong results, because tuples inserted by the aborted subtransaction are seen (treated as visible according to MVCC rules) by a reader. This patch fixes the bug by looking up the abort status of an XID in pg_clog.

In a QE writer, just like in upstream PostgreSQL, subtransaction information is available in CurrentTransactionState (even when the subxip cache has overflowed). This information is not maintained in shared memory, making it unavailable to a reader. Readers must resort to a longer route to get the same information: pg_subtrans and pg_clog.

The patch does not use TransactionIdDidAbort() to check abort status. That interface is designed to work with all transaction IDs: it walks up the transaction hierarchy looking for an aborted parent if the status of the given transaction is found to be SUB_COMMITTED. This is wasted effort when a QE reader wants to test whether its own subtransaction has aborted, so a new interface is introduced to avoid it for QE readers. We rely on AbortSubTransaction()'s behavior of marking the entire subtree under an aborted subtransaction as aborted in pg_clog. A SUB_COMMITTED status in pg_clog therefore allows us to conclude that the subtransaction is not aborted without walking up the hierarchy, provided the subtransaction is a child of our own transaction.

The test case also needed a fix, because the SQL query (insert into select *) didn't result in a reader gang being created. The SQL is changed to a join on a non-distribution column so as to create a reader gang.
-
Committed by Ashwin Agrawal
Temp tables need to be neither replicated nor crash-safe, so avoid generating xlog records for them. Heap already avoids this; this patch skips xlog for AO/CO tables as well. A new field, `isTempRel`, is added to `BufferedAppend` to perform the check for temp tables and skip generating xlog records.
-
Committed by Shreedhar Hardikar
Printable filters are used to produce an expression for the Partition Selector node that is printed during EXPLAIN, to hint at the general nature of the filter used by the node. The printable filter is not used by the executor, which actually uses levelEqExpressions, levelExpressions & residualPredicate instead. These usually contain a completely different set of expressions, such as PartBounExpr etc., which are not printed during EXPLAIN. Also, with dynamic partition elimination, the partition selector's printable filter may contain VARs that are not in its subtree and instead refer to a DynamicTableScan node on the other side of a join. This means that it becomes tricky to extract the correct printable filter expression during DXL to PlStmt translation, since that occurs bottom-up. Since the printable filter is misleading and sometimes incorrect, it's better to remove it altogether. Signed-off-by: NSambitesh Dash <sdash@pivotal.io>
-
Committed by Ashwin Agrawal
*** CID 185522: Security best practices violations (STRING_OVERFLOW)
/tmp/build/0e1b53a0/gpdb_src/src/backend/cdb/cdbtm.c: 2486 in gatherRMInDoubtTransactions()

and

*** CID 185520: Null pointer dereferences (FORWARD_NULL)
/tmp/build/0e1b53a0/gpdb_src/src/backend/storage/ipc/procarray.c: 2251 in GetSnapshotData()

This condition cannot happen, as `GetDistributedSnapshotMaxCount()` doesn't return 0 for DTX_CONTEXT_QD_DISTRIBUTED_CAPABLE, and hence `inProgressXidArray` will always be initialized. It is therefore marked as ignored in Coverity, but still worth adding an Assert for.
-
- 15 May 2018, 1 commit
-
-
Committed by Tingfang Bao
Because pg_dump handles implicit sequences (serial column types) and explicit sequences differently in different GPDB versions, we need to detect the sequences that are not included in the table dump SQL and create them first. Also, for every dependent sequence, setval() it to the source value after the data is transferred. Signed-off-by: NMing Li <mli@pivotal.io>
-