- 07 Jun 2018, 7 commits
-
-
Committed by Pengzhou Tang
Previously, for an interconnect connection, if no data were available at the sender peer, the sender sent a customized EOS packet to the receiver and disabled further sends with shutdown(SHUT_WR), but then immediately closed the connection with close(), counting on the kernel TCP stack to guarantee that the data were delivered to the receiver. The problem is that on some platforms, once one side has fully closed the connection, the TCP behavior is undefined: packets may be lost and the receiver may report an unexpected error. The correct approach is for the sender to block on the connection until the receiver has consumed the EOS packet and closed its end; only then can the sender close the connection safely.
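The drain-then-close pattern described above can be sketched in C. This is a minimal illustration, not the GPDB interconnect code; the helper name `graceful_close` and the buffer size are invented for the example:

```c
#include <sys/socket.h>
#include <unistd.h>

/* After sending the EOS packet: stop writing, but keep reading until the
 * peer has seen our FIN and closed its side (recv() returns 0). Only then
 * is close() safe -- nothing still in flight can be discarded. */
int graceful_close(int fd)
{
    char buf[256];
    ssize_t n;

    if (shutdown(fd, SHUT_WR) < 0)      /* no further sends; FIN goes out */
        return -1;
    while ((n = recv(fd, buf, sizeof(buf), 0)) > 0)
        ;                               /* drain anything still arriving */
    close(fd);
    return (n == 0) ? 0 : -1;           /* 0 == peer closed cleanly */
}
```

Calling close() right after shutdown(), without the blocking recv() loop, is exactly the race the commit describes.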
-
Committed by Pengzhou Tang
For a Result node with a one-time filter, if its outer plan is non-empty and contains a Motion node, the Result node needs to squelch the outer node explicitly when the one-time filter check is false. This is especially necessary for a Motion node underneath it: ExecSquelchNode() forces a stop message so the interconnect sender doesn't get stuck resending or polling for ACKs.
-
Committed by Pengzhou Tang
This is a quick fix to make the dispatch test pass. In the long term, we need to redesign the dispatch test or turn it into a unit test.
-
Committed by Xiaoran Wang
* upgrade pgbouncer to 1.8.1
* support PAM/HBA auth type
* update submodule pgbouncer commit
* update pgbouncer's commit to support SSL connection
* change pgbouncer server_tls_ciphers default value
-
Committed by Bhuvnesh Chaudhary
-
Committed by Omer Arap
-
Committed by Mel Kiyama
* docs - add gpcopy utility
* docs - gpcopy - review comment updates
* docs - gpcopy reference - review comment updates and edits; also changed --dest-host to be a required option
* docs - gpcopy reference - command option changes: --schema-only changed to --metadata-only, --database changed to --dbname, --batch-size changed to --jobs
* docs - gpcopy ref. - fix typos
-
- 06 Jun 2018, 17 commits
-
-
Committed by Lisa Owen
* docs - discuss the partner connector (gppc) api
* address most of the edits requested by david
* add to requirements
* add the memory context functions
-
Committed by anki-code
-
Committed by Ashwin Agrawal
-
Committed by Pengzhou Tang
Dispatch tests don't expect backends created by other tests, or auxiliary processes such as FTS and GDD. This commit disables GDD too, to make the dispatch tests stable.
-
Committed by Jialun
- Change strncpy to StrNCpy to make sure the destination string is terminated.
- Initialize some variables before using them.
-
Committed by Jesse Zhang
Commit 1c1945fd9dbaf217062596062f73beac4934d7b6 broke compilation when we use the trivial / dummy implementation of resource group. The fix for that is trivial (this commit). But it begs the question: should we make the build system less magical (switching the implementation based on the platform) and instead always exercise the dummy implementation (or at least the building of it)?
-
Committed by Ashwin Agrawal
The previous algorithm scanned the entire directory to find the specific relfilenode extensions to be deleted, which is not optimal for large directories. This patch introduces extra logic based on the table extension pattern, which avoids the directory scan. The algorithm assumes that for CO tables, at a given concurrency level, either all columns have the file or none do, and that file extensions follow this pattern:

Heap tables: contiguous extensions, no upper bound
AO tables: non-contiguous extensions [.0 - .127]
CO tables: non-contiguous extensions, [.0 - .127] for the first column, [.128 - .255] for the second column, [.256 - .383] for the third column, etc.

The AO file format can be treated as a special case of a CO table with one column. High-level logic:
1) Find the concurrency levels at which the table has files. This is calculated from the first column, performing up to MAX_AOREL_CONCURRENCY unlink() calls.
2) Iterate over a single column at a time and delete the files for all discovered concurrency levels. For AO tables this exits fast.
This algorithm could be used for heap tables as well, but to prevent merge conflicts it is currently only used for CO/AO tables.
Co-authored-by: David Kimura <dkimura@pivotal.io>
-
Committed by Ashwin Agrawal
Without this patch the storage layout is not known in the md and smgr layers. Due to the lack of this information, sub-optimal operations have to be performed generically for all table types; for example, heap-specific functions like ForgetRelationFsyncRequests() and DropRelFileNodeBuffers() get called even for AO and CO tables. This adds a new RelFileNodeWithStorageType struct to pass the storage type down to the md and smgr layers. The XLOG_XACT_COMMIT and XLOG_XACT_ABORT WAL records use the new structure, which carries the RelFileNode plus the storage type.
Co-authored-by: David Kimura <dkimura@pivotal.io>
-
Committed by Mel Kiyama
* docs - pl/container new setting attribute roles
* docs - review comment updates for pl/container roles attribute
-
Committed by Jimmy Yih
A recent change to the fault injector framework made the simple form "gp_inject_fault(faultname, type, db_id)" not work with the wait_until_triggered fault type. To get around this, we should properly use "gp_wait_until_triggered_fault()" instead. Reference: https://github.com/greenplum-db/gpdb/commit/723e58481ad706d4c8f4f7af1325be2dcd36c985
-
Committed by Shoaib Lari
For long-running commands such as gpinitstandby with a large master data directory, the server takes a long time, during which there is no activity from the client to the server. If ClientAliveInterval is set, the server reports a timeout after ClientAliveInterval seconds. Setting a ServerAliveInterval value less than the ClientAliveInterval forces the client to send a null message to the server, thereby avoiding the timeout.
Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
Co-authored-by: Shoaib Lari <slari@pivotal.io>
(cherry picked from commit 675aa8e3bd1d5bb187dc93d7ba494819cadb120e)
-
Committed by Ashwin Agrawal
ALTER TABLESPACE needs to copy all the underlying files of a table from one tablespace to another. For AO/CO tables this was implemented, when persistent tables were removed, using a full directory scan to find the files to copy. This is very inefficient, and its performance varies with the number of files in the directory. Instead, use the same optimization used for mdunlink_ao(), leveraging the known file layout for AO/CO tables. The old logic also had a couple of bugs:
- It missed copying the base or .0 file, which means data loss if the table was altered in the past.
- It xlogged even for temp tables.
These are fixed as well with this patch, and additional tests are added to cover the missing scenarios. Also, the AO-specific code moves out of tablecmds.c into aomd.c to reduce conflicts with upstream.
-
Committed by Ashwin Agrawal
Commit 07ee8008 added a test section in query_finish_pending.sql to validate that a query can be canceled when the cancel signal arrives faster than the query is dispatched, using a sleep fault. But the test was incorrect due to its use of "begin": the begin slept for 50 seconds instead of the actual SELECT query. Also, since the fault always triggered, the reset fault slept for an additional 50 seconds. Instead, remove the begin and just set the end occurrence to 1. Verified that the modified test fails/hangs without the fix and passes/completes in a couple of seconds with the fix.
-
Committed by Ashwin Agrawal
The bfv_partition tests fail if ICW is run more than once after creating the cluster, because the role is not dropped. With this commit the test can now be run any number of times without re-creating the cluster. Along the way, also remove the suppression of warnings in role.sql.
-
Committed by Ashwin Agrawal
Unit tests generate mock versions of the .c files, and it's very annoying that they end up in the TAGS file, so the first visit always lands in the mocked implementation.
-
Committed by David Yozie
* adding best practice note for setting timezone
* edits to clarify timezone behavior
-
- 05 Jun 2018, 4 commits
-
-
Committed by Andreas Scherbaum
SPI 64-bit changes for PL/Python. Includes fault injection tests.
-
Committed by Jialun
* Implement CPUSET, a new way to manage CPU resources in resource groups, which reserves the specified cores exclusively for a specified resource group. This ensures that CPU resources are always available for a group that has CPUSET set. The most common scenario is allocating fixed cores for short queries.
- Use it by executing CREATE RESOURCE GROUP xxx WITH (cpuset='0-1', ...), where 0-1 are the CPU cores reserved for this group, or ALTER RESOURCE GROUP xxx SET CPUSET '0,1' to modify the value.
- The CPUSET syntax is a comma-separated list of tuples, where each tuple is a single core number or an interval of core numbers, e.g. 0,1,2-3. Every core in a CPUSET must be available in the system, and the core numbers of different groups cannot overlap.
- CPUSET and CPU_RATE_LIMIT are mutually exclusive: a resource group cannot be created with both. However, a group can be freely switched between them with ALTER; setting one disables the other.
- The CPU cores are returned to GPDB when the group is dropped, when the CPUSET value is changed, or when CPU_RATE_LIMIT is set.
- If some cores have been allocated to a resource group, the CPU_RATE_LIMIT in other groups indicates a percentage of only the remaining CPU cores.
- Even if GPDB is busy and all the other cores (those not allocated exclusively to any resource group through CPUSET) are exhausted, the cores in a CPUSET are still not handed out.
- The cores in a CPUSET are used exclusively only at the GPDB level; other, non-GPDB processes in the system may still use them.
- Add test cases for this new feature. The test environment must contain at least two CPU cores, so we upgrade the instance_type configuration in the resource_group jobs.
* Follow-up fixes:
- Be compatible with the case where the cgroup directory cpuset/gpdb does not exist.
- Implement pg_dump for cpuset & memory_auditor.
- Fix a typo.
- Change the default cpuset value from the empty string to -1, since the 5X code assumes every default value in resource group is an integer; a non-integer value would make the system fail to start.
-
Committed by Asim R P
Temp tables must be included in PREPARE and COMMIT records in GPDB because, unlike in upstream, they are not exempt from 2PC.
Co-authored-by: David Kimura <dkimura@pivotal.io>
-
Committed by Asim R P
We found the culprit causing relfilenode collisions to be VACUUM FULL on a mapped relation: the code was reusing the OID as the relfilenode for the temporary table created by VACUUM FULL, without bumping the relfilenode counter. The patch fixes this so that a relfilenode is always generated, even for mapped relations. With this, we believe a possibility of collision still exists in the way sequence OIDs are generated; that needs to be fixed in a separate patch, and the FIXME in GetNewRelFileNode() should be sufficient to note it.
Co-authored-by: David Kimura <dkimura@pivotal.io>
-
- 04 Jun 2018, 3 commits
-
-
Committed by Daniel Gustafsson
-
Committed by Daniel Gustafsson
This adds the basic scaffolding to allow COMMENT ON RESOURCE GROUP, but without any user-visible functions for retrieving the comment. Since we allow COMMENTs on resource queues, we should do the same for resource groups for completeness.
-
Committed by Daniel Gustafsson
In order to be able to set comments on resource queues, they must be object addressable, so fix this by implementing object addressing. Also add a small test for commenting on a resource queue.
-
- 02 Jun 2018, 3 commits
-
-
Committed by Ashwin Agrawal
* Remove the redundant copy of the toast table and its index in ATExecSetTableSpace(). Commit f70f49fe introduced this double copy; let's fix it.
* Fix mismerged lines in src/interfaces/libpq/Makefile.
Author: Ashwin Agrawal <aagrawal@pivotal.io>
-
Committed by Lisa Owen
* docs - add resgroup global shmem recommendation
* put conditions in a list
* explicitly call out vmtracker roles for global shmem
* changing formatting from varname to i
-
Committed by Mel Kiyama
* docs - add gprestore options --data-only, --metadata-only; also fix title link to gpbackup plugins
* docs - gprestore --data-only, --metadata-only: review comment updates
-
- 01 Jun 2018, 5 commits
-
-
Committed by Taylor Vesely
Unlike upstream, GPDB needs to keep collations in sync between multiple databases. Add tests for GPDB-specific collation behavior. These tests need to import a system locale, so add a @syslocale@ variable to gpstringstubs.pl in order to test the creation/deletion of collations from system locales.
Co-authored-by: Jim Doty <jdoty@pivotal.io>
-
Committed by Taylor Vesely
Make CREATE COLLATION and pg_import_system_collations() parallel aware by dispatching collation creation to the QEs. For collations to work correctly, we need to be sure that every collation created on the QD is also installed on the QEs, and that the OID matches in every database. We take advantage of two-phase commit to prevent a collation from being created if there is a problem adding it on any QE. In upstream, collations are created during initdb, but this won't work for GPDB, because while initdb is running there is no way to be sure that every segment has the same locales installed. We disable collation creation during initdb and make it the responsibility of the system administrator to initialize any needed collations, either by running a CREATE COLLATION command or by running the pg_import_system_collations() UDF.
Co-authored-by: Jim Doty <jdoty@pivotal.io>
-
Committed by Tom Lane
Pull in a more recent version of pg_import_system_collations() from upstream. We have not pulled in the ICU collations, so wholesale remove the sections of code that deal with them. This commit is primarily a cherry-pick of 0b13b2a7, but also pulls in prerequisite changes for CollationCreate().

Rethink behavior of pg_import_system_collations(). Marco Atzeri reported that initdb would fail if "locale -a" reported the same locale name more than once. All previous versions of Postgres implicitly de-duplicated the results of "locale -a", but the rewrite that moved the collation import logic into C had lost that property. It had also lost the property that locale names matching built-in collation names were silently ignored. The simplest way to fix this is to make initdb run the function in if-not-exists mode, which means that there's no real use-case for non-if-not-exists mode; we might as well just drop the boolean argument and simplify the function's definition to be "add any collations not already known". This change also gets rid of some odd corner cases caused by the fact that aliases were added in if-not-exists mode even if the function argument said otherwise.

While at it, adjust the behavior so that pg_import_system_collations() doesn't spew "collation foo already exists, skipping" messages during a re-run; that's completely unhelpful, especially since there are often hundreds of them. And make it return a count of the number of collations it did add, which seems like it might be helpful. Also, re-integrate the previous coding's property that it would make a deterministic selection of which alias to use if there were conflicting possibilities. This would only come into play if "locale -a" reports multiple equivalent locale names, say "de_DE.utf8" and "de_DE.UTF-8", but that hardly seems out of the question.

In passing, fix incorrect behavior in pg_import_system_collations()'s ICU code path: it neglected CommandCounterIncrement, which would result in failures if ICU returns duplicate names, and it would try to create comments even if a new collation hadn't been created. Also, reorder operations in initdb so that the 'ucs_basic' collation is created before calling pg_import_system_collations(), not after. This prevents a failure if "locale -a" were to report a locale named that. There's no reason to think that ever happens in the wild, but the old coding would have survived it, so let's be equally robust.

Discussion: https://postgr.es/m/20c74bc3-d6ca-243d-1bbc-12f17fa4fe9a@gmail.com
(cherry picked from commit 0b13b2a7)
-
Committed by Peter Eisentraut
Move this logic out of initdb into a user-callable function. This simplifies the code and makes it possible to update the standard collations later on if additional operating system collations appear.
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Euler Taveira <euler@timbira.com.br>
(cherry picked from commit aa17c06f)
-
Committed by Omer Arap
-
- 31 May 2018, 1 commit
-
-
Committed by Paul Guo
This seems to be related to an AIX system issue or an old compiler. Not long ago there was a similar complaint on the pg community list: http://www.postgresql-archive.org/pgsql-Improve-performance-of-SendRowDescriptionMessage-td5987721.html We do not want to waste too much time on this. Instead, just work around the issue following what we did in auth.c, i.e.
+#if defined(_AIX)
+int getpeereid(int, uid_t *__restrict__, gid_t *__restrict__);
+#endif
-