- 17 May 2018, 7 commits
-
-
Committed by Adam Lee
Without this change, an integer overflow occurs when more than 2^31 rows are copied under `COPY ON SEGMENT` mode. Errors surface when the overflowed value is cast to uint64, the type of `processed` in `CopyStateData`: a third-party Postgres driver, which reads that value as an int64, fails with an out-of-range error.
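For illustration, a minimal standalone C sketch of this failure mode; the variable names are illustrative, not the actual COPY code, but `processed`, `CopyStateData`, and the 2^31 threshold come from the commit:

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* More than 2^31 rows copied under COPY ON SEGMENT. */
    uint64_t actual_rows = (uint64_t) 1 << 31;          /* 2147483648 */

    /* Accumulating the count in a signed 32-bit variable loses the value:
     * the out-of-range conversion is implementation-defined and typically
     * wraps to a negative number. */
    int32_t rows32 = (int32_t) actual_rows;             /* typically -2147483648 */

    /* Assigning that negative value to a uint64 (the type of `processed`
     * in CopyStateData) sign-extends to a huge number above INT64_MAX; a
     * driver parsing the reported value as int64 then fails out of range. */
    uint64_t reported = (uint64_t) (int64_t) rows32;

    printf("actual=%llu rows32=%d reported=%llu\n",
           (unsigned long long) actual_rows, (int) rows32,
           (unsigned long long) reported);
    return 0;
}
```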
-
Committed by Lisa Owen
* docs - resgroup memory_auditor to cat/view/gptoolkit
* cgroup mem usage output - used and limit_granted
* updates for code refactor that was just merged
* type to text
-
Committed by Mel Kiyama
* docs: update cast information for GPDB 5/6. Update cast information and add information about limited text casts. See section "Type Casts".
* docs: review comments for GPDB 5/6 updated cast information.
* docs: fix typos in updated CAST info.
-
Committed by Todd Sedano
Authored-by: Todd Sedano <tsedano@pivotal.io>
-
Committed by Jesse Zhang
To "rebalance" a primary-mirror pair, gprecoverseg -r performs the following steps:
1. bring down the acting primary
2. issue a query that triggers the failover
3. bring up the mirror (gprecoverseg -F)
Currently these three steps happen in close succession. However, there is a chance that between step 2 and step 3 the mirror promotion happens more slowly than we expect; the implicit assumption is that the acting mirror has finished transitioning to the primary role before step 3 is performed. This patch adds a retry in "sort of step 2, definitely before step 3" to ensure a good state before we bring up the mirror. Co-authored-by: Jesse Zhang <sbjesse@gmail.com> Co-authored-by: David Kimura <dkimura@pivotal.io>
-
Committed by Chris Hajas
Since protocols do not belong to a namespace, we do not want to dump them in table- or schema-filtered backups. They will only be dumped in a full backup. Co-authored-by: Karen Huddleston <khuddleston@pivotal.io> Co-authored-by: Chris Hajas <chajas@pivotal.io>
-
Committed by Omer Arap
If the column statistics in `pg_statistic` have values of a type different from the column type, the metadata accessor should not translate the stats and should create dummy stats instead. This commit also reorders stats collection from `pg_statistic` to align with how analyze generates stats: MCV and histogram translation is moved to the end, after NDV, null fraction, and column width extraction. Signed-off-by: Melanie Plageman <mplageman@pivotal.io>
-
- 16 May 2018, 9 commits
-
-
Committed by David Sharp
Co-authored-by: David Sharp <dsharp@pivotal.io> Co-authored-by: Larry Hamel <lhamel@pivotal.io>
-
Committed by Jesse Zhang
Consider the following SQL; we expect error logging to be turned off for table `ext_error_logging_off`:
```sql
create external table ext_error_logging_off (a int, b int) location ('file:///tmp/test.txt') format 'text' segment reject limit 100;
\d+ ext_error_logging_off
```
And in this next case we expect error logging to be turned on for table `ext_t2`:
```sql
create external table ext_t2 (a int, b int) location ('file:///tmp/test.txt') format 'text' log errors segment reject limit 100;
\d+ ext_t2
```
Before this patch, we were making two mistakes in handling this external table DDL:
1. We intended to enable error logging *whenever* the user specified the `SEGMENT REJECT` clause, completely ignoring whether he or she specified `LOG ERRORS`.
2. Even then, we made the mistake of implicitly coercing the OID (an unsigned 32-bit integer) to a bool (which is really just a C `char`): that means 255/256 of the time (99.6%) the result is `true`, and 0.4% of the time we get a `false` instead.
The `OID` to `bool` implicit conversion could have been caught by a `-Wconversion` GCC/Clang flag. It is most likely a leftover from commit 8f6fe2d6. This bug manifests itself in the `dsp` regression test mysteriously failing about once every 200 runs, with the only diff being on a `\d+` of an external table that should have error logging turned on but whose returned definition has it turned off. While working on this we discovered that all of our existing external tables have both `LOG ERRORS` and `SEGMENT REJECT`, which is why this bug wasn't caught in the first place. This patch fixes the issue by properly setting the catalog column `pg_exttable.logerrors` according to the user input. While we were at it, we also cleaned up a few dead pieces of code and made the `dsp` test a bit friendlier to debug. Co-authored-by: Jesse Zhang <sbjesse@gmail.com> Co-authored-by: David Kimura <dkimura@pivotal.io>
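As a standalone illustration of the 255/256 coercion described above, the following C sketch uses a char-typed stand-in for the historical Postgres `bool` typedef; the type names here are illustrative, not the actual catalog code:

```c
#include <stdio.h>

typedef unsigned int Oid;   /* unsigned 32-bit, as in Postgres */
typedef char pg_bool;       /* historical Postgres bool: really a C char */

int main(void)
{
    int truthy = 0;

    for (Oid oid = 1; oid <= 1024; oid++)
    {
        /* Implicit narrowing keeps only the low byte: any OID that is a
         * multiple of 256 silently becomes false. */
        pg_bool logerrors = (pg_bool) oid;

        if (logerrors)
            truthy++;
    }

    /* Prints 1020 of 1024, i.e. 255/256 of OIDs coerce to "true". */
    printf("%d of 1024 OIDs coerced to true\n", truthy);
    return 0;
}
```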
-
Committed by Abhijit Subramanya
-
Committed by Abhijit Subramanya
-
Committed by Jesse Zhang
Fixes greenplum-db/gporca#358
-
Committed by Asim R P
QE readers incorrectly return true from TransactionIdIsCurrentTransactionId() when passed an xid that is an aborted subtransaction of the current transaction. The end effect is wrong results, because tuples inserted by the aborted subtransaction are seen (treated as visible according to MVCC rules) by a reader. This patch fixes the bug by looking up the abort status of an XID in pg_clog. In a QE writer, just like in upstream PostgreSQL, subtransaction information is available in CurrentTransactionState (even when the subxip cache has overflowed). This information is not maintained in shared memory, making it unavailable to a reader. Readers must resort to a longer route to get the same information: pg_subtrans and pg_clog. The patch does not use TransactionIdDidAbort() to check abort status. That interface is designed to work with all transaction IDs: it walks up the transaction hierarchy looking for an aborted parent if the status of the given transaction is found to be SUB_COMMITTED. This is wasted effort when a QE reader wants to test whether its own subtransaction has aborted, so a new interface is introduced to avoid it for QE readers. We choose to rely on AbortSubTransaction()'s behavior of marking the entire subtree under an aborted subtransaction as aborted in pg_clog. A SUB_COMMITTED status in pg_clog therefore allows us to conclude that the subtransaction is not aborted without having to walk up the hierarchy, provided the subtransaction is a child of our own transaction. The test case also needed a fix, because the SQL query (insert into select *) did not result in a reader gang being created. The SQL is changed to a join on a non-distribution column so as to trigger reader gang creation.
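A rough, self-contained sketch of the reader-side shortcut described above; the data structures and helper names here are hypothetical stand-ins for pg_clog access, not the actual GPDB patch:

```c
#include <stdio.h>

typedef unsigned int TransactionId;

typedef enum { XID_IN_PROGRESS, XID_COMMITTED, XID_SUB_COMMITTED, XID_ABORTED } XidStatus;

/* Toy "clog": one status per xid, standing in for the real SLRU lookup. */
static XidStatus toy_clog[16];

static XidStatus clog_get_status(TransactionId xid) { return toy_clog[xid]; }

/*
 * QE-reader check: is subxid (known to be a child of our own top-level
 * transaction) aborted?  Because AbortSubTransaction() marks the whole
 * aborted subtree ABORTED in pg_clog, a SUB_COMMITTED status already means
 * "not aborted" and no walk up the parent chain is needed.
 */
static int reader_subxact_is_aborted(TransactionId subxid)
{
    return clog_get_status(subxid) == XID_ABORTED;
}

int main(void)
{
    toy_clog[5] = XID_SUB_COMMITTED;   /* live subtransaction of our xact */
    toy_clog[6] = XID_ABORTED;         /* rolled-back subtransaction */
    printf("xid 5 aborted? %d\n", reader_subxact_is_aborted(5));  /* 0 */
    printf("xid 6 aborted? %d\n", reader_subxact_is_aborted(6));  /* 1 */
    return 0;
}
```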
-
Committed by Ashwin Agrawal
Temp tables need to be neither replicated nor crash safe, so we can avoid generating xlog records for them. Heap already avoids this; this patch skips xlog for AO/CO temp tables as well. A new variable `isTempRel` is added to `BufferedAppend` to help perform the check for temp tables and skip generating xlog records.
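A hedged sketch of the guard this adds. The `isTempRel` flag is from the commit, but the struct, function names, and the stand-in for XLogInsert() below are illustrative assumptions, not the actual BufferedAppend code:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Illustrative stand-in for the append state; the real BufferedAppend
 * carries much more, but the commit adds an isTempRel flag like this. */
typedef struct BufferedAppendSketch
{
    bool isTempRel;     /* temp relations need neither replication nor crash safety */
} BufferedAppendSketch;

/* Hypothetical helper standing in for emitting an AO/CO xlog record. */
static void emit_xlog_record(const char *data, size_t len)
{
    (void) data;
    printf("xlog record written (%zu bytes)\n", len);
}

static void buffered_append_write(BufferedAppendSketch *ba, const char *data, size_t len)
{
    /* ... write the data to the segment file ... */

    /* Skip WAL for temp relations: their contents never need recovery. */
    if (!ba->isTempRel)
        emit_xlog_record(data, len);
}

int main(void)
{
    BufferedAppendSketch normal = { .isTempRel = false };
    BufferedAppendSketch temp   = { .isTempRel = true };

    buffered_append_write(&normal, "row", 3);   /* logs */
    buffered_append_write(&temp, "row", 3);     /* silent */
    return 0;
}
```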
-
Committed by Shreedhar Hardikar
Printable filters are used to produce an expression for the Partition Selector node that is printed during EXPLAIN, to give a hint of the general nature of the filter used by the node. They are not used by the executor in any way; it actually uses levelEqExpressions, levelExpressions, and residualPredicate instead, which usually contain a completely different set of expressions (such as PartBounExpr) that are not printed during EXPLAIN. Also, with dynamic partition elimination, the partition selector's printable filter may contain VARs that are not in its subtree and instead refer to a DynamicTableScan node on the other side of a Join. This makes it tricky to extract the correct printable filter expression during DXL to PlStmt translation, since that translation occurs bottom-up. Since it is misleading and sometimes incorrect, it is better to remove it altogether. Signed-off-by: Sambitesh Dash <sdash@pivotal.io>
-
Committed by Ashwin Agrawal
*** CID 185522: Security best practices violations (STRING_OVERFLOW)
/tmp/build/0e1b53a0/gpdb_src/src/backend/cdb/cdbtm.c: 2486 in gatherRMInDoubtTransactions()
and
*** CID 185520: Null pointer dereferences (FORWARD_NULL)
/tmp/build/0e1b53a0/gpdb_src/src/backend/storage/ipc/procarray.c: 2251 in GetSnapshotData()
This condition cannot happen, as `GetDistributedSnapshotMaxCount()` doesn't return 0 for DTX_CONTEXT_QD_DISTRIBUTED_CAPABLE and hence `inProgressXidArray` is always initialized. It was therefore marked as ignored in Coverity, but it is still worth adding an Assert for it.
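A minimal sketch of the kind of Assert being added; the exact placement inside GetSnapshotData() is not reproduced here, and plain `assert()` stands in for the Postgres `Assert()` macro:

```c
#include <assert.h>
#include <stdlib.h>

int main(void)
{
    /* In the real code the array is sized from GetDistributedSnapshotMaxCount(),
     * which is nonzero for DTX_CONTEXT_QD_DISTRIBUTED_CAPABLE, so the pointer
     * cannot be NULL at the point Coverity flagged. */
    int *inProgressXidArray = malloc(16 * sizeof(int));

    /* Document, and enforce in assert-enabled builds, the invariant that
     * the static analyzer could not see, instead of silently dereferencing. */
    assert(inProgressXidArray != NULL);

    inProgressXidArray[0] = 0;
    free(inProgressXidArray);
    return 0;
}
```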
-
- 15 May 2018, 6 commits
-
-
Committed by Tingfang Bao
Because pg_dump handles implicit sequences (serial column type) and explicit sequences differently in different gpdb versions, we need to detect the sequences that are not included in the table dump SQL and create them first. Also, for all dependent sequences, call setval() with the source value after the data is transferred. Signed-off-by: Ming Li <mli@pivotal.io>
-
Committed by Zhenghua Lyu
The persistent field of the LOCK struct may cause confusion. Its meaning is that the lock can only be released after the transaction ends. This commit renames the field to holdTillEndXact and renames some related functions. Some comments are also added.
-
Committed by Nadeem Ghani
-
Committed by Nadeem Ghani
Prior to this commit, gpaddmirrors was missing two bits of work previously done by gpinitsystem/gpcreateseg. When adding mirrors to a cluster, the pg_hba.conf on primary segments has to be modified to allow replication connections, e.g. for pg_basebackup. And after the mirrors are built, the catalog has to be updated to reflect the new cluster config. Co-authored-by: Nadeem Ghani <nghani@pivotal.io> Co-authored-by: Jim Doty <jdoty@pivotal.io>
-
Committed by Jason Vigil
Co-authored-by: Jason Vigil <jvigil@pivotal.io> Co-authored-by: Kris Macoskey <kmacoskey@pivotal.io>
-
Committed by Jason Vigil
Co-authored-by: Jason Vigil <jvigil@pivotal.io> Co-authored-by: Trevor Yacovone <tyacovone@pivotal.io>
-
- 12 May 2018, 3 commits
-
-
Committed by Ashwin Agrawal
The code had two variables (GUCs) serving the same purpose. GpIdentity.segindex is set to the content-id based on a command-line argument at start-up and is inherited by all processes from the postmaster. Gp_segment, on the other hand, was a session-level GUC set only for backends, by dispatching it from the QD. So essentially Gp_segment was not available, and had an incorrect value, in auxiliary processes. Hence all usages are replaced with GpIdentity.segindex. As a side effect, log files now report the correct value for the segment number (content-id) in every line of the file, irrespective of which process generated the log message. Discussion: https://groups.google.com/a/greenplum.org/forum/#!msg/gpdb-dev/Yr8-LZIiNfA/ob4KLgmkAQAJ
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
This patch helps cut 8 seconds out of 30 (for single-node gpinitsystem as part of gpdemo), and generally more for multi-node setups.
- Removes explicit 2-second and 1-second sleeps.
- Removes the explicit call to CHECKPOINT, which is not required.
- Avoids starting primaries after initdb, as they were started and then shut down soon after, only to start the cluster back up with the required command-line arguments.
- Adds -w to mirror start.
There is still room for more improvement and speedup; this just gets us started.
-
- 11 May 2018, 14 commits
-
-
Committed by Todd Sedano
The pycrypto version is now identical to the version in python-dependencies.txt. Authored-by: Todd Sedano <tsedano@pivotal.io>
-
Committed by Todd Sedano
The Parse version is now identical to the version in python-dependencies.txt. Authored-by: Todd Sedano <tsedano@pivotal.io>
-
Committed by Ning Yu
We should pass binary_swap_gpdb_centos6 instead of gpdb_src_binary_swap.
-
Committed by Ning Yu
We used to implement the memory auditor feature differently on master and 5X: on master the attribute is stored in pg_resgroup, while on 5X it is stored in pg_resgroupcapability. This increases the maintenance effort significantly, so we refactor this feature on master to minimize the difference between the two branches.
- Revert "resgroup: fix an access to uninitialized address." This reverts commit 56c20709.
- Revert "ic: Mark binary_swap_gpdb as optional input for resgroup jobs." This reverts commit 9b3d0cfc.
- Revert "resgroup: fix a boot failure when cgroup is not mounted." This reverts commit 4c8f28b0.
- Revert "resgroup: backward compatibility for memory auditor" This reverts commit f2f86174.
- Revert "Show memory statistics for cgroup audited resource group." This reverts commit d5fb628f.
- Revert "Fix resource group test failure." This reverts commit 78b885ec.
- Revert "Support cgroup memory auditor for resource group." This reverts commit 6b3d0f66.
- Apply "resgroup: backward compatibility for memory auditor" This cherry-picks commit 23cd8b1e.
- Apply "ic: Mark binary_swap_gpdb as optional input for resgroup jobs." This cherry-picks commit c86652e6.
- Apply "resgroup: fix an access to uninitialized address." This cherry-picks commit b257b344.
-
Committed by Ashwin Agrawal
Truncate within the same transaction performs an unsafe truncation, hence a test is added to cover that case.
-
Committed by Ashwin Agrawal
Upstream commit cab9a065 introduced an optimization to truncate tables in scenarios that permit "unsafe" operations, where we don't have to churn the relfilenode for the underlying tables. AO tables got a free ride, but for the wrong reason. This patch teaches heap_truncate_one_rel() to perform the unsafe / optimal truncation on AO tables. This allows us to converge the callers back to how they look in Postgres 9.0; specifically, we're now able to inline TruncateRelfiles() back into ExecuteTruncate(). One caveat introduced by this patch is that the "optimal" / unsafe truncation of an AO table can potentially leak some disk space: we are not performing a real file-level truncate, merely seeking back to offset 0 on the next write (because the aoseg auxiliary table is truncated), so the space after the EOF mark is wasted in some sense. Co-authored-by: Jesse Zhang <sbjesse@gmail.com>
-
Committed by Todd Sedano
Missed one reference. [ci skip] Authored-by: Todd Sedano <tsedano@pivotal.io>
-
Committed by Omer Arap
-
Committed by Todd Sedano
[ci skip] Authored-by: Todd Sedano <tsedano@pivotal.io>
-
Committed by Jimmy Yih
There were scenarios where adding a new partition to a partitioned table would produce a negative or duplicate partition rule order (parruleord) value in the pg_partition_rule catalog table.
1. Negative parruleord values could show up during parruleord gap closing when the new partition is inserted above a parruleord gap.
2. Negative parruleord values could show up when the maximum number of partitions for that level has been reached (32767) and there is an attempt to add a new partition that would have been the highest-ranked partition in that partition's range.
3. Duplicate parruleord values could show up when the maximum number of partitions for that level has been reached (32767) and there is an attempt to add a new partition that would have been inserted between the partition table's existing sequence of parruleord values.
Co-authored-by: David Kimura <dkimura@pivotal.io>
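The 32767 ceiling suggests parruleord is stored as a signed 16-bit value; under that assumption, a tiny C illustration of how pushing past the ceiling can produce the negative values described (the actual catalog insertion code is not shown):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Assumed: parruleord is a smallint (int16), so its maximum is 32767. */
    int16_t max_parruleord = INT16_MAX;

    /* Adding a partition above the current highest rank computes max + 1;
     * storing that back into an int16 is implementation-defined and on
     * typical two's-complement platforms wraps to -32768, i.e. the kind
     * of negative value seen in pg_partition_rule. */
    int16_t next = (int16_t) (max_parruleord + 1);

    printf("next parruleord: %d\n", (int) next);
    return 0;
}
```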
-
Committed by Ashwin Agrawal
To avoid filling the disk while running tests, and also to save some time, this patch avoids core file generation for intentional PANICs caused by tests. Using `setrlimit()` to achieve this is the outcome of a discussion with Jacob Champion and Asim Praveen. The alternative considered was calling `quickdie()` instead of PANIC, but that approach does not emit the PANIC string that the tests use to validate the reason for the PANIC.
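For reference, a self-contained sketch of the mechanism: disabling core dumps for the current process via `setrlimit()`. The GPDB hook point where the commit applies this is not shown here.

```c
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    /* Set both the soft and hard core-file size limits to zero so an
     * intentional PANIC/abort in a test does not leave core files behind. */
    struct rlimit rl = { 0, 0 };

    if (setrlimit(RLIMIT_CORE, &rl) != 0)
    {
        perror("setrlimit(RLIMIT_CORE)");
        return 1;
    }

    printf("core dumps disabled for this process\n");
    return 0;
}
```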
-
Committed by Tom Lane
GNU readline defines the return value of write_history() as "zero if OK, else an errno code". libedit's version of that function used to have a different definition (to wit, "-1 if error, else the number of lines written to the file"). We tried to work around that by checking whether errno had become nonzero, but this method has never been kosher according to the published API of either library. It's reportedly completely broken in recent Ubuntu releases: psql bleats about "No such file or directory" when saving ~/.psql_history, even though the write worked fine. However, libedit has been following the readline definition since somewhere around 2006, so it seems all right to finally break compatibility with ancient libedit releases and trust that the return value is what readline specifies. (I'm not sure when the various Linux distributions incorporated this fix, but I did find that OS X has been shipping fixed versions since 10.5/Leopard.) If anyone is still using such an ancient libedit, they will find that psql complains it can't write ~/.psql_history at exit, even when the file was written correctly. This is no worse than the behavior we're fixing for current releases. Back-patch to all supported branches. (cherry picked from commit df9ebf1e) Fixes #4437.
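A minimal sketch of checking write_history() per the GNU readline contract the fix relies on (zero on success, otherwise an errno code), rather than inspecting errno afterwards; the history file path here is illustrative:

```c
#include <stdio.h>
#include <string.h>
#include <readline/history.h>   /* link with -lreadline (or a fixed libedit) */

int main(void)
{
    using_history();
    add_history("select 1;");

    /* GNU readline (and libedit since roughly 2006) returns 0 on success
     * and an errno code on failure; do not consult errno directly. */
    int rc = write_history("/tmp/example_psql_history");

    if (rc != 0)
        fprintf(stderr, "could not save history: %s\n", strerror(rc));
    else
        printf("history saved\n");
    return 0;
}
```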
-
Committed by Ashwin Agrawal
The test can fail if the PANIC happens before the insert has even reached the intended point of finishing the commit on the master. In this case the select rightfully returns zero rows instead of the expected one row.
-
- 10 May 2018, 1 commit
-
-
Committed by Shivram Mani
-