- 11 Apr 2017, 13 commits
-
-
Committed by Daniel Gustafsson
Passing InvalidOid as a HeapTuple argument causes a NULL pointer constant warning; replace InvalidOid with NULL in the call to AlterResqueueCapabilityEntry() to avoid it. Also initialize the owner in GetResGroupIdForRole(), since otherwise we read an uninitialized value in case the CurrentResourceOwner was set.
-
Committed by Pengzhou Tang
The QD used to send a transient types table to the QEs, and each QE would remap tuples with this table before sending them to the QD. However, in complex queries the QD cannot discover all the transient types, so tuples cannot be correctly remapped on the QEs. One example:

    SELECT q FROM (SELECT MAX(f1) FROM int4_tbl GROUP BY f1 ORDER BY f1) q;
    ERROR:  record type has not been registered

To fix this issue we changed the underlying logic: instead of sending the possibly incomplete transient types table from the QD to the QEs, we now send the tables from motion senders to motion receivers and do the remap on the receivers. Receivers maintain a remap table for each motion, so tuples from different senders can be remapped accordingly. In this way, queries containing multiple slices can also handle transient record types correctly between two QEs. The remap logic is derived from executor/tqueue.c in upstream postgres. There is support for composite/record types and arrays as well as range types; however, as range types are not yet supported in GPDB, that logic is put under a conditional compilation macro and in theory will be enabled automatically when range types are supported in GPDB. One side effect of this approach is a performance penalty on receivers, as the remap requires recursive checks on each tuple for record types; an optimization keeps this side effect minimal for non-record types. The old logic of building the transient types table on the QD and sending it to the QEs is retired. Signed-off-by: Gang Xiong <gxiong@pivotal.io> Signed-off-by: Ning Yu <nyu@pivotal.io>
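As a rough illustration of the new scheme, here is a minimal Python model of a receiver-side remap table. All names here (RecordTypeRegistry, MotionReceiver, the typmod bookkeeping) are hypothetical simplifications for illustration, not GPDB's actual tqueue-derived code.

```python
# Minimal model of per-motion record-typmod remapping on the receiver side.
# Each sender describes its transient record types; the receiver maps each
# (sender, sender typmod) pair to a receiver-local typmod before use.

class RecordTypeRegistry:
    """A node's local table of transient record types."""
    def __init__(self):
        self._typmods = {}   # field-name tuple -> local typmod
        self._next = 0

    def register(self, fields):
        if fields not in self._typmods:
            self._typmods[fields] = self._next
            self._next += 1
        return self._typmods[fields]

class MotionReceiver:
    """Maintains one remap table per (motion, sender) pair."""
    def __init__(self, registry):
        self.registry = registry
        self.remap = {}      # (sender_id, sender_typmod) -> local typmod

    def on_type_message(self, sender_id, sender_typmod, fields):
        # Sender describes its transient type; receiver registers it locally.
        self.remap[(sender_id, sender_typmod)] = self.registry.register(fields)

    def on_tuple(self, sender_id, sender_typmod, values):
        # Rewrite the record's typmod to the receiver-local one.
        return (self.remap[(sender_id, sender_typmod)], values)
```

The point of the per-sender keying is that two senders may use different transient typmods for the same record shape, yet both land on one consistent local typmod at the receiver.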
-
Committed by Adam Lee
All test cases it used to run are covered by ICW now.
-
Committed by Ashwin Agrawal
ALTER TABLE ADD COLUMN for a CO table completely missed updating the block directory when the default value for the column is greater than blockSize. In this case one large content block is created, followed by small content blocks containing the actual column value. Failing to update the block directory produces wrong results during index scans after such an ALTER. This commit fixes the issue by updating the block directory for this case, accompanied by a test to validate it. Also, while fixing this, refactor the code:
- rename lastWriteBeginPosition to logicalBlockStartOffset for better clarity based on its usage
- centralize block-directory inserts in the datumstream block read-write routines
- remove the redundant buildBlockDirectory flag
-
Committed by Ashwin Agrawal
An incorrect block offset was recorded in the block directory when inserting a column value greater than the blocksize for a CO table. In such a case, the column value is divided into multiple small content blocks stitched together by a large content block at the start; hence the block directory should record the offset of the large content block, as that is the logical start of the block. Instead, the offset of the last small content block was recorded. Fix the issue by not touching `lastWriteBeginPosition` inside `AppendOnlyStorageWrite_FinishBuffer()`, as that is called for completing every physical block (that is, in the large content case it is called for the large content block plus every following small content block in its chain). Instead, move the responsibility of updating `lastWriteBeginPosition` to the callers of `AppendOnlyStorageWrite_FinishBuffer()`, the finish-block routines, which clearly know the logical block boundary; this also makes the handling consistent throughout the codebase.
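The fixed bookkeeping can be sketched as a toy model. BLOCK_SIZE, write_large_value, and the directory list below are illustrative inventions, assuming a large-content header block followed by the payload split into small-content blocks.

```python
# Toy model: one oversized column value is written as a large-content header
# block followed by small-content payload blocks. The block directory must
# record the offset of the header, the logical start, not the last payload
# block (which is what the bug recorded).

BLOCK_SIZE = 8

def write_large_value(value, file_offset, directory):
    """Append `value` starting at `file_offset`; record one directory entry
    pointing at the large-content header block."""
    logical_start = file_offset
    # Large-content header block describing the total length.
    file_offset += BLOCK_SIZE
    # Payload split into small-content blocks.
    for _ in range(0, len(value), BLOCK_SIZE):
        file_offset += BLOCK_SIZE
    # The fix: record logical_start, not the offset of the last small block.
    directory.append(logical_start)
    return file_offset
```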
-
Committed by Ashwin Agrawal
For some tests (mainly filerep tests) an inconsistency was reported between primary and mirror by gpcheckmirrorseg.pl, the script used to detect inconsistency between the two. The script performs a checkpoint at the start, but if the first phase finds diffs it performs some catalog lookups via the persistent tables. With heap page pruning it is possible for a page to get modified during this scan. Hence, checkpoint again afterwards to make sure the change makes it to the mirror, and only then move ahead with further comparisons, to avoid false positives.
-
Committed by Shreedhar Hardikar
This is useful information in EXPLAIN ANALYZE even when there is no spilling. Also refactor this code into a separate function.
-
Committed by Shreedhar Hardikar
This shift value is used to alter the hash function when reloading spilled entries. The hash table uses only one such value at a time, so it can be maintained in the hash table to simplify access methods.
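A minimal sketch of the idea, with hypothetical names rather than GPDB's actual HashAgg structures: the table carries a single current shift, and changing it on a reload pass changes which hash bits select the bucket, so entries that collided before spilling can spread out when reloaded.

```python
# Sketch: the spill-reload shift lives on the hash table itself, since only
# one value is in use at a time. Illustrative stand-in, not GPDB's nodeAgg.

class SpillableHashTable:
    def __init__(self, nbuckets):
        self.nbuckets = nbuckets
        self.shift = 0            # single current shift, stored on the table

    def bucket(self, hashvalue):
        # Use different hash bits on each reload pass.
        return (hashvalue >> self.shift) % self.nbuckets

    def begin_reload_pass(self):
        self.shift += 4
```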
-
Committed by Shreedhar Hardikar
This commit contains a number of minor refactors and fixes:
- Fix indentation and spelling
- Remove unused variable (pass)
- Reset bloom filter after spilling HashAgg
- Remove dead code in ExecAgg
- Move AggStatus to AggState as HashAggStatus, because the state change algorithm is implemented in nodeAgg
-
Committed by C.J. Jameson
Signed-off-by: Larry Hamel <lhamel@pivotal.io>
-
Committed by Marbin Tan
- add new configure option `--enable-gpperfmon`
- include gpperfmon libraries into configure
Signed-off-by: Larry Hamel <lhamel@pivotal.io> Signed-off-by: Chumki Roy <croy@pivotal.io> Signed-off-by: Melanie Plageman <mplageman@pivotal.io> Signed-off-by: C.J. Jameson <cjameson@pivotal.io>
-
Committed by Chumki Roy
-
Committed by Larry Hamel
Signed-off-by: Chumki Roy <croy@pivotal.io>
-
- 10 Apr 2017, 4 commits
-
-
Committed by Ning Wu
The old cdbfast test suite had these, good to move to ICW.
-
Committed by Daniel Gustafsson
Move the input tuple check to the main context to avoid a possible NULL ptr deref in case outerPlanState(node) returns NULL.
-
Committed by Adam Lee
Signed-off-by: Haozhou Wang <hawang@pivotal.io> Signed-off-by: Peifeng Qiu <pqiu@pivotal.io>
-
Committed by Haozhou Wang
In a COPY statement, while processing the NULL string at the last column, GPDB assumed the length of the EOL was 1, which is wrong for DOS-format files whose lines end with '\r\n'. For example:

    /tmp/file.txt:
    abc|\N\r\n
    cde|123\r\n

    test=# CREATE TABLE tbl (c1 text, c2 int);
    CREATE TABLE
    test=# COPY tbl FROM '/tmp/file.txt' WITH DELIMITER AS '|' NEWLINE AS 'CRLF';
    ERROR:  invalid input syntax for integer: "N"

This commit fixes it. Signed-off-by: Adam Lee <ali@pivotal.io> Signed-off-by: Peifeng Qiu <pqiu@pivotal.io>
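The essence of the fix can be sketched in a few lines. parse_last_column below is a hypothetical stand-in for GPDB's COPY parsing, assuming the configured NEWLINE is known; the point is that the EOL length must be derived from it rather than hardcoded to 1.

```python
# Sketch of the fix: compute the EOL length from the configured NEWLINE
# before comparing the last column to the NULL string. With the old
# hardcoded eol_len = 1, a CRLF line leaves a trailing '\r' on the field,
# so '\N\r' no longer matches the NULL string.

def parse_last_column(line, newline, null_string=r"\N"):
    eol_len = 2 if newline == "CRLF" else 1
    field = line[:-eol_len]
    return None if field == null_string else field
```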
-
- 08 Apr 2017, 3 commits
-
-
Committed by Divya Bhargov
-
Committed by Chumki Roy
gpperfmon.py was used for collecting statistics and for controlling the gpperfmon web interface. It has been deprecated and replaced by GPCC. Signed-off-by: C.J. Jameson <cjameson@pivotal.io>
-
Committed by Ashwin Agrawal
Introduced by commit 5ccdd6a2 just for info.
-
- 07 Apr 2017, 14 commits
-
-
Committed by Daniel Gustafsson
libpq is front-end code and shouldn't be used in backend processes. The requirement here is to correctly quote the relation name in partitioning such that pg_dump/gp_dump can create working DDL for the partition hierarchy. For this purpose, quote_literal_internal() does the same thing as PQescapeString(). The following relation definitions were hitting the bug, which is fixed by applying proper quoting:

    CREATE TABLE part_test (id int, id2 int)
        PARTITION BY LIST (id2) ( PARTITION "A1" VALUES (1) );

    CREATE TABLE sales (trans_id int, date date)
        DISTRIBUTED BY (trans_id)
        PARTITION BY RANGE (date)
        ( START (date '2008-01-01') INCLUSIVE
          END (date '2009-01-01') EXCLUSIVE
          EVERY (INTERVAL '1 month') );
    ALTER TABLE sales SPLIT PARTITION FOR ('2008-01-01') AT ('2008-01-16')
        INTO (PARTITION jan081to15, PARTITION jan0816to31);
    ALTER TABLE sales ADD DEFAULT PARTITION other;
    ALTER TABLE sales SPLIT DEFAULT PARTITION
        START ('2009-01-01') INCLUSIVE END ('2009-02-01') EXCLUSIVE
        INTO (PARTITION jan09, PARTITION other);

This commit was previously pushed and reverted due to test failures in the build pipeline. The errors seem to have been caused by another patch that went in at the same time, so the commit is re-applied.
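For illustration, the relevant quoting amounts to doubling embedded single quotes and wrapping the value. This Python sketch models only that part of the behavior; it deliberately ignores backslash handling, which the real quote_literal_internal()/PQescapeString() also deal with depending on server settings.

```python
# Simplified model of literal quoting for generated DDL: double any embedded
# single quotes and wrap in single quotes, so partition names containing
# quotes or mixed case survive round-tripping through dump output.

def quote_literal(s: str) -> str:
    return "'" + s.replace("'", "''") + "'"
```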
-
Committed by C.J. Jameson
-
Committed by Daniel Gustafsson
Set the correct wiki URL for the project; we have moved over to the GitHub wiki on the main repo rather than the wiki on the website repo. Remove references to Pivotal Greenplum documentation in the README and stick to greenplum.org/docs/ there, as those are the applicable docs. While there, update README.md to actually match reality: the docs are now in the main repo.
-
Committed by Kenan Yao
utils/resgroup/resgroup.c. Signed-off-by: Richard Guo <riguo@pivotal.io> Signed-off-by: Gang Xiong <gxiong@pivotal.io>
-
Committed by Kenan Yao
database. Signed-off-by: Richard Guo <riguo@pivotal.io> Signed-off-by: Gang Xiong <gxiong@pivotal.io>
-
Committed by Kenan Yao
Works include:
* define structures used by resource groups in shared memory;
* insert/remove the shared memory object on CREATE/DROP RESOURCE GROUP;
* clean up and restore when CREATE/DROP RESOURCE GROUP fails;
* implement concurrency slot acquire/release functionality;
* sleep when no concurrency slot is available, and wake up others when releasing a concurrency slot if necessary;
* handle signals in resource groups properly.
Signed-off-by: Richard Guo <riguo@pivotal.io> Signed-off-by: Gang Xiong <gxiong@pivotal.io>
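The slot acquire/release and sleep/wake protocol above can be modeled with a condition variable. This is an illustrative Python sketch under simplified assumptions, not the shared-memory implementation the commit describes.

```python
# Minimal model of concurrency slot acquire/release: a session sleeps while
# no slot is free and is woken when another session releases one.
import threading

class ResourceGroup:
    def __init__(self, concurrency):
        self.free_slots = concurrency
        self.cond = threading.Condition()

    def acquire_slot(self):
        with self.cond:
            while self.free_slots == 0:   # sleep until a slot is released
                self.cond.wait()
            self.free_slots -= 1

    def release_slot(self):
        with self.cond:
            self.free_slots += 1
            self.cond.notify()            # wake one waiter, if any
```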
-
Committed by Kenan Yao
and resource group when 'resource_scheduler' is on, we need to change the condition of the resource queue branches. Also, tidy up error messages related to the resource manager under these different GUC settings. Signed-off-by: Richard Guo <riguo@pivotal.io> Signed-off-by: Gang Xiong <gxiong@pivotal.io>
-
Committed by Adam Lee
To fix the credential leak, make these jobs able to be public after their job histories are scrubbed. Also remove the unnecessary -B, which benefits from 08ec642d. Signed-off-by: Yuan Zhao <yuzhao@pivotal.io>
-
Committed by David Sharp
Signed-off-by: Divya Bhargov <dbhargov@pivotal.io>
-
Committed by Roman Shaposhnik
-
Committed by Michael Roth
* Clean up from the removal of the client submodule (#2166)
* Clean up from the removal of the client submodule
* Cleanup:
  - update makefile to remove references to connectors
  - removed EULA header
  - added godb_packaging back into repo; deleted clients to match
* reapplied makefile changes after merge
-
Committed by Michael Roth
* Clean up from the removal of the client submodule
* Cleanup:
  - update makefile to remove references to connectors
  - removed EULA header
-
Committed by Todd Sedano
Signed-off-by: Chris Hajas <chajas@pivotal.io>
-
Committed by Todd Sedano
Before, when gprestore_filter failed, the restore command would claim success. Set pipefail for the gp_restore filter invocation. Signed-off-by: Chris Hajas <chajas@pivotal.io>
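The behavior pipefail changes can be demonstrated directly (assuming bash is available): without it, a pipeline's exit status is the last command's, so an earlier failing stage, such as a filter in a restore pipeline, looks like success.

```python
# Demonstrate why `set -o pipefail` matters: `false | cat` exits 0 without
# it, because only the last command's status is reported.
import subprocess

def run(cmd):
    return subprocess.run(["bash", "-c", cmd]).returncode

masked = run("false | cat")                     # failure of `false` is masked
caught = run("set -o pipefail; false | cat")    # failure is propagated
```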
-
- 06 Apr 2017, 6 commits
-
-
Committed by Daniel Gustafsson
* Rewrite the syslogger unit test to not mock fopen(). Mocking fopen() caused an error in the pipeline due to an OpenSSL init function segfaulting on the mock. Rewrite the test to only test the interesting bit of open_alert_log_file() once the mocks are taken away.
* Fix a typo
-
Committed by Karen Huddleston
Authors: Karen Huddleston and Vaibhav Tandon
-
Committed by David Yozie
-
Committed by David Yozie
[ci skip]
-
Committed by David Yozie
* Removing DCA v1 reference; small edit for clarity [ci skip]
* line edit
* more line edits
* more edits [ci skip]
-
Committed by mkiyama
* GPDB DOCS - add/update warnings [ci skip]
* GPDB DOCS - add/update warnings - conditionalized Pivotal information. [ci skip]
-