- 05 Oct 2018, 7 commits
-
-
Submitted by Adam Berlin
A cached planned statement contains information that is freed after the first execution of a function. The second execution uses the cached planned statement to populate the execution state from a freed pointer and throws a segmentation fault. To resolve this, we no longer free the dynamicTableScanInfo. Co-authored-by: David Kimura <dkimura@pivotal.io> Co-authored-by: Taylor Vesely <tvesely@pivotal.io> (cherry picked from commit 04e43e64)
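Below is a minimal, self-contained C sketch of this failure mode and of the fix; aside from dynamicTableScanInfo, which the commit names, the structs and functions are illustrative stand-ins rather than GPDB's actual plan-cache types.

```
#include <stdlib.h>

/*
 * Illustrative sketch only, not GPDB code: a cached plan keeps a pointer to
 * per-scan metadata.  If the first execution frees that metadata, the second
 * execution of the same cached plan dereferences a dangling pointer.
 */
typedef struct ScanInfo
{
	int			numSelectors;
} ScanInfo;

typedef struct CachedPlan
{
	ScanInfo   *dynamicTableScanInfo;	/* owned by the plan cache */
} CachedPlan;

static int
execute(CachedPlan *plan)
{
	/* populate the execution state from the cached plan */
	int			n = plan->dynamicTableScanInfo->numSelectors;

	/*
	 * BUG (pre-fix): freeing cache-owned state here leaves a dangling
	 * pointer behind.  The fix is simply to stop freeing it and let the
	 * cache (or its memory context) own dynamicTableScanInfo.
	 */
	/* free(plan->dynamicTableScanInfo);    <-- removed by the fix */

	return n;
}

int
main(void)
{
	ScanInfo	info = {4};
	CachedPlan	plan = {&info};

	execute(&plan);				/* first execution */
	execute(&plan);				/* second execution now reads valid memory */
	return 0;
}
```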
-
Submitted by Bhuvnesh Chaudhary
`hashDatum` expects the incoming oid for array types to be ANYARRAYOID, otherwise it flags them as unsupported. While performing the MCV calculation on array-type columns, we were passing the specific array oid (e.g., the character-array oid for a char array column), due to which `hashDatum` was marking them as unsupported. We should instead pass ANYARRAYOID whenever the column is an array type. This commit fixes the issue. Relevant test cases are also added.
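A hedged sketch of the oid-selection rule described above; ANYARRAYOID and get_element_type() are real PostgreSQL definitions, while the helper name and the surrounding MCV code are assumed, so this illustrates the idea rather than the actual patch.

```
#include "postgres.h"
#include "catalog/pg_type.h"
#include "utils/lsyscache.h"

/*
 * Sketch: when hashing sample values for MCV computation, array-typed
 * columns should be presented to the hashing routine as ANYARRAYOID rather
 * than their concrete array type (e.g. the char-array oid), otherwise the
 * routine treats them as unsupported.
 */
static Oid
oid_for_mcv_hashing(Oid column_type)
{
	/* get_element_type() returns InvalidOid for non-array types */
	if (OidIsValid(get_element_type(column_type)))
		return ANYARRAYOID;

	return column_type;
}
```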
-
Submitted by Sambitesh Dash
Via https://github.com/greenplum-db/gporca/pull/400, ORCA will optimize DML queries by enforcing a gather on a segment instead of the master, whenever possible. Prior to this commit, ORCA always picked the first segment to gather on while translating the DXL-GatherMotion node to a GPDB motion node. This commit uses GPDB's hash function to select the segment to gather on, in a round-robin fashion starting with a random segment index. This ensures that concurrent DML queries issued via the same session are gathered on different segments to distribute the workload. Signed-off-by: Dhanashree Kashid <dkashid@pivotal.io>
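The sketch below shows the round-robin-from-a-random-start idea in plain C; it uses rand() purely as a stand-in for GPDB's hash function and is not the ORCA translator code.

```
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/*
 * Illustrative only: choose which segment a segment-level Gather motion
 * lands on.  Starting at a random segment and advancing round-robin spreads
 * concurrent DML issued from the same session across segments instead of
 * always using segment 0.
 */
static int
next_gather_segment(int num_segments)
{
	static int	next = -1;

	if (next < 0)
	{
		srand((unsigned) time(NULL));
		next = rand() % num_segments;		/* random starting segment */
	}
	else
		next = (next + 1) % num_segments;	/* then round-robin */

	return next;
}

int
main(void)
{
	/* e.g. three consecutive DML statements on a 4-segment cluster */
	for (int i = 0; i < 3; i++)
		printf("gather on segment %d\n", next_gather_segment(4));
	return 0;
}
```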
-
Submitted by Sambitesh Dash
When ON, ORCA will optimize DML queries by enforcing a non-master gather whenever possible. When OFF, a gather on the master is enforced instead. The default value is ON. Also add new tests to ensure sane behavior when this optimization is turned on, and fix the existing tests. Signed-off-by: Sambitesh Dash <sdash@pivotal.io> Signed-off-by: Dhanashree Kashid <dkashid@pivotal.io>
-
Submitted by Kris Macoskey
The PXF tarball is an unversioned component pushing binaries to a versioned s3 bucket. Therefore, to use the same binaries as 5.11.1, it is necessary to reference the 5X-release pipeline to get the PXF s3 file version used in the release. This s3 file version can then be hardcoded in the locations where the artifact is consumed. Co-authored-by: Kris Macoskey <kmacoskey@pivotal.io> Co-authored-by: Trevor Yacovone <tyacovone@pivotal.io>
-
Submitted by Trevor Yacovone
Co-authored-by: Kris Macoskey <kmacoskey@pivotal.io> Co-authored-by: Trevor Yacovone <tyacovone@pivotal.io>
-
Submitted by Kris Macoskey
Co-authored-by: Kris Macoskey <kmacoskey@pivotal.io> Co-authored-by: Trevor Yacovone <tyacovone@pivotal.io>
-
- 19 Sep 2018, 3 commits
-
-
Submitted by Bhuvnesh Chaudhary
Update relevant test files
-
Submitted by Lisa Owen
-
Submitted by Mel Kiyama
* docs - update when pg_class.relfrozenxid is 0 (zero) * docs - update relfrozenxid info; XID values start at 3 (0, 1, 2 are special IDs).
-
- 18 Sep 2018, 1 commit
-
-
Submitted by Shoaib Lari
Usually after a persistent table (PT) rebuild, we need to run gpcheckcat to confirm that all issues are fixed. Hence, the tool should by default start the database in 'Restricted' mode. This ensures that only database superusers are allowed to connect. Authored-by: Shoaib Lari <slari@pivotal.io>
-
- 17 Sep 2018, 5 commits
-
-
Submitted by Daniel Gustafsson
Make sure we use the psql client from the new bindir (the one in PATH might well be an upstream postgres psql, etc.), and use a port different from the standard postgres port since that one is likely to be in use. Also fix an integer value comparison which caused a warning in my Bash, while it might work in other versions of Bash. (cherry picked from commit 743aa9ba)
-
Submitted by Jacob Champion
There are no distributed transactions during binary upgrade, so we can ignore them. This is a backport of commit fdab8817 from master.
-
Submitted by Jacob Champion
VACUUM FREEZE must function correctly during binary upgrade so that the new cluster's catalogs don't contain bogus transaction IDs. Do a simple check on the QD in our test script by querying the age of all the rows in gp_segment_configuration. (cherry picked from commit 5bf6d8b419a0363ca47c80325b06415905c608d5)
-
Submitted by mkiyama
-
Submitted by Pengzhou Tang
In the dispatch test cases, we need a way to put a segment into in-recovery status to test the gang-recreating logic of the dispatcher. We used to trigger a panic fault on a segment and suspend quickdie() to simulate in-recovery status. To avoid the segment staying in recovery mode for a long time, we then used a 'sleep' fault instead of 'suspend' in quickdie(), so the segment could accept new connections after 5 seconds. 5 seconds works fine most of the time, but is still not stable enough, so we decided to use a more straightforward means of simulating in-recovery mode: report a POSTMASTER_IN_RECOVERY_MSG directly in ProcessStartupPacket(). To avoid affecting other backends, we create a new database so the fault injectors only affect the dispatch test cases.
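A rough sketch of what reporting POSTMASTER_IN_RECOVERY_MSG from ProcessStartupPacket() could look like; the test-only flag and helper are hypothetical (not the real GPDB fault-injector wiring), while ereport, ERRCODE_CANNOT_CONNECT_NOW, and the message macro named by the commit are standard backend machinery.

```
#include "postgres.h"

/* hypothetical test-only switch that a fault injector would arm */
extern bool simulate_in_recovery_fault;

/*
 * Sketch: called early in ProcessStartupPacket().  When the fault is armed,
 * every new connection attempt is answered with the same "in recovery"
 * error the real startup path reports, so the segment looks like it is in
 * recovery without actually being crashed or suspended.
 */
static void
maybe_simulate_in_recovery(void)
{
	if (simulate_in_recovery_fault)
		ereport(FATAL,
				(errcode(ERRCODE_CANNOT_CONNECT_NOW),
				 /* message macro referenced by the commit */
				 errmsg(POSTMASTER_IN_RECOVERY_MSG)));
}
```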
-
- 15 Sep 2018, 4 commits
-
-
Submitted by Omer Arap
Currently `gpsd` and `minirepro` dump the HLL stats, which makes the output files larger. Since we use these tools to debug query plans, the HLL counter info is not used during query planning; it is only used to derive root-level stats for partitioned tables. For this reason, it is better to make the HLL stats dump an option instead of having `gpsd` and `minirepro` dump it by default. This commit addresses this issue.
-
Submitted by mkiyama
-
Submitted by David Yozie
-
Submitted by Lisa Owen
* docs - add content for PXF JDBC connector * edits requested by David * address review comments from Ivan
-
- 14 Sep 2018, 5 commits
-
-
Submitted by mkiyama
-
Submitted by mkiyama
-
Submitted by Bhuvnesh Chaudhary
Updating the expected test file to reflect the output changes.
-
Submitted by Jesse Zhang
Under GCC 8, I get a warning (and rumor has it that you get this under GCC 7 with `-Wrestrict`):

```
sharedsnapshot.c: In function ‘LogDistributedSnapshotInfo’:
sharedsnapshot.c:924:11: warning: passing argument 1 to restrict-qualified parameter aliases with argument 4 [-Wrestrict]
   snprintf(message, MESSAGE_LEN, "%s, In progress array: {",
            ^~~~~~~
            message);
            ~~~~~~~
sharedsnapshot.c:930:13: warning: passing argument 1 to restrict-qualified parameter aliases with argument 4 [-Wrestrict]
     snprintf(message, MESSAGE_LEN, "%s, (dx%d)",
              ^~~~~~~
              message, ds->inProgressXidArray[no]);
              ~~~~~~~
sharedsnapshot.c:933:13: warning: passing argument 1 to restrict-qualified parameter aliases with argument 4 [-Wrestrict]
     snprintf(message, MESSAGE_LEN, "%s (dx%d)",
              ^~~~~~~
              message, ds->inProgressXidArray[no]);
              ~~~~~~~
```

Upon further inspection, the compiler is right: according to C99, it is undefined behavior to pass aliased arguments as the "str" argument of `snprintf` (a `restrict`-qualified function parameter, to be pedantic). To make this safer and more readable, this patch switches to using the StringInfo API. This change might come with a teeny tiny bit of a performance cost because of: 1. stack vs heap allocation 2. the larger initial allocation size of StringInfo. But this area of the code is *never* a hot spot, and `appendStringInfo` and friends are arguably faster than our old call patterns of `snprintf`, so I won't sweat over that. (cherry picked from commit 89553ad2) (Back port of greenplum-db/gpdb#5753)
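For illustration, here is a hedged sketch of the replacement pattern using the real StringInfo API (initStringInfo/appendStringInfo); it is a simplified stand-in for the sharedsnapshot.c change, not the exact patch.

```
#include "postgres.h"
#include "lib/stringinfo.h"

/*
 * Instead of growing a buffer with snprintf(buf, len, "%s...", buf, ...),
 * which passes the destination as a source argument and violates snprintf's
 * restrict contract, append to a heap-allocated StringInfo.
 */
static void
log_in_progress_array(const int *xids, int count)
{
	StringInfoData buf;
	int			i;

	initStringInfo(&buf);
	appendStringInfoString(&buf, "In progress array: {");

	for (i = 0; i < count; i++)
		appendStringInfo(&buf, "%s(dx%d)", (i == 0) ? " " : ", ", xids[i]);

	appendStringInfoString(&buf, " }");

	elog(LOG, "%s", buf.data);
	pfree(buf.data);
}
```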
-
Submitted by David Yozie
-
- 13 Sep 2018, 9 commits
-
-
Submitted by Ning Yu
On the 5X branch the dir 'memory/gpdb' is optional, but 'memory' is mandatory because it provides 'memory.limit_in_bytes'; in that case we must always set a proper 'memory' component dir.
-
Submitted by Ning Yu
Take cpu as an example: by default we expect the gpdb dir to be located at cgroup/cpu/gpdb. But we'll also check the cgroup dirs of the init process (pid 1), e.g. cgroup/cpu/custom, and then look for the gpdb dir at cgroup/cpu/custom/gpdb; if it's found and has good permissions, it can be used instead of the default one. If any of the gpdb cgroup component dirs cannot be found under the init process's cgroup dirs, or has bad permissions, we fall back to the default dirs for all gpdb cgroup components. NOTE: This auto detection will look for the memory & cpuset gpdb dirs even on 5X. (cherry picked from commit f3dc101a)
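A small, self-contained C sketch of the probing logic described above; the mount point, helper name, and the 'custom' parent dir are assumptions for illustration, not the actual GPDB resource-group code.

```
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

/*
 * Sketch: check whether <mountpoint>/<component>/<parent>/gpdb exists with
 * read/write/execute permissions.  The detection first tries the default
 * location and then the parent dirs taken from the init process's cgroup
 * memberships (/proc/1/cgroup); if any component fails both, all components
 * fall back to the defaults.
 */
static bool
gpdb_component_dir_usable(const char *component, const char *parent)
{
	char		path[1024];

	snprintf(path, sizeof(path), "/sys/fs/cgroup/%s/%s/gpdb",
			 component, parent);
	return access(path, R_OK | W_OK | X_OK) == 0;
}

int
main(void)
{
	if (gpdb_component_dir_usable("cpu", "."))
		printf("using default dir cgroup/cpu/gpdb\n");
	else if (gpdb_component_dir_usable("cpu", "custom"))
		printf("using detected dir cgroup/cpu/custom/gpdb\n");
	else
		printf("falling back to the default dirs\n");
	return 0;
}
```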
-
Submitted by Mel Kiyama
* docs - support fqdn as client address in pg_hba.conf --update pg_hba.conf information --add gpinitsystem --hba_hostnames option and HBA_HOSTNAMES parameter. This will be backported to 5X_STABLE * docs - support fqdn as client address in pg_hba.conf - updates/corrections --removed --hba_hostnames option --clarified HBA_HOSTNAMES parameter value * docs - support fqdn in pg_hba.conf - fix typo * docs - support fqdn in pg_hba.conf - update based on review comment * docs - fqdn in pg_hba.conf correction - fqdn always available, HBA_HOSTNAMES for gpdb utilities * docs - fqdn in pg_hba.conf - fix typo.
-
Submitted by Tom Lane
gcc 8 has started emitting some warnings that are largely useless for our purposes, particularly since they complain about code following the project-standard coding convention that path names are assumed to be shorter than MAXPGPATH. Even if we make the effort to remove that assumption in some future release, the changes wouldn't get back-patched. Hence, just suppress these warnings, on compilers that have these switches. Backpatch to all supported branches. Discussion: https://postgr.es/m/1524563856.26306.9.camel@gunduz.org (cherry picked from commit e7165852) (cherry picked from commit 18f9c0b9)
-
Submitted by Tom Lane
Considering the number of cases in which "unused" command line arguments are silently ignored by compilers, it's fairly astonishing that anybody thought this warning was useful; it's certainly nothing but an annoyance when building Postgres. One such case is that neither gcc nor clang complain about unrecognized -Wno-foo switches, making it more difficult to figure out whether the switch does anything than one could wish. Back-patch to 9.3, which is as far back as the patch applies conveniently (we'd have to back-patch PGAC_PROG_CC_VAR_OPT to go further, and it doesn't seem worth that). (cherry picked from commit 73b416b2) (cherry picked from commit 28d6c289)
-
Submitted by Adam Berlin
-
Submitted by Adam Berlin
The only external manipulation of this field occurs in PortalStart, which we would also like to get rid of, but we're not sure how at the moment. Co-authored-by: Asim R P <apraveen@pivotal.io>
-
Submitted by Adam Berlin
We have been using Portal->releaseResLock to decide if a resource queue is locked for a given portal. Instead, we give the responsibility to the resource queue system to decide whether the portal is locked. Co-authored-by: Asim R P <apraveen@pivotal.io>
-
Submitted by Adam Berlin
Avoid acquiring a resource queue lock for the same portal more than once while calling ProcessQuery for the portal. An example where this situation occurs can be found in the provided test. Co-authored-by: Asim R P <apraveen@pivotal.io>
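The sketch below illustrates the idea behind these resource queue commits in plain C: the resource queue side remembers whether a portal already holds its lock, so a repeated request becomes a no-op. The types and field names are illustrative, not GPDB's Portal structure.

```
#include <stdbool.h>
#include <stdio.h>

typedef struct Portal
{
	const char *name;
	bool		holdsResQueueLock;	/* tracked by the resource queue module */
} Portal;

/*
 * Illustrative lock routine: acquiring the resource queue lock for a portal
 * that already holds it is a no-op, so a second call from ProcessQuery
 * cannot double-acquire the lock.
 */
static void
res_lock_portal(Portal *portal)
{
	if (portal->holdsResQueueLock)
		return;					/* already locked for this portal */

	/* ... take the resource queue lock here ... */
	portal->holdsResQueueLock = true;
	printf("resource queue locked for portal %s\n", portal->name);
}

int
main(void)
{
	Portal		p = {"cursor_1", false};

	res_lock_portal(&p);		/* acquires the lock */
	res_lock_portal(&p);		/* no-op: lock already held */
	return 0;
}
```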
-
- 12 Sep 2018, 4 commits
-
-
Submitted by Kris Macoskey
A race condition was occurring because the set of Concourse tasks that creates the second of two CCP clusters expected to find and use the `terraform` volume. The `terraform` volume is expected to be created and used only by the first set of tasks, for the first CCP cluster. If the first set of tasks did not complete before the second set, the `terraform` volume could not exist yet, causing the job to error in Concourse. The fix is to correct the mistake of the second set of tasks using the wrong volume: they should only use the `terraform2` volume. This completely removes the potential for the race condition. Authored-by: Kris Macoskey <kmacoskey@pivotal.io>
-
Submitted by Lisa Owen
-
Submitted by Chris Hajas
This reverts commit 899e933b. The network connectivity issue with the Data Domain has been resolved.
-
Submitted by Chris Hajas
The DDBoost tests require access to an instance that is currently experiencing network connectivity issues. We're removing these jobs from blocking the release until the networking issues are resolved. Authored-by: Chris Hajas <chajas@pivotal.io>
-
- 11 Sep 2018, 2 commits
-
-
Submitted by Goutam Tadi
-
Submitted by Joao Pereira
This reverts commit 67fb52e6. CI was failing, and the problem was in a new test created in that commit which expected ORCA to do a Table Scan, but on 5X with ORCA version 2.70.2 it does a Seq Scan. This needs to be reviewed before it is committed again.
-