- June 6, 2018 (10 commits)
-
-
Committed by Ashwin Agrawal
Without this patch the storage layout is not known in the md and smgr layers. Due to the lack of this information, sub-optimal operations have to be performed generically for all table types; for example, heap-specific functions like ForgetRelationFsyncRequests() and DropRelFileNodeBuffers() get called even for AO and CO tables. Add a new RelFileNodeWithStorageType struct to carry the storage type down to the md and smgr layers. The XLOG_XACT_COMMIT and XLOG_XACT_ABORT WAL records use the new structure, which holds a RelFileNode and the storage type.
Co-authored-by: David Kimura <dkimura@pivotal.io>
-
Committed by Mel Kiyama
* docs - pl/container new setting attribute roles
* docs - review comment updates for pl/container roles attribute
-
Committed by Jimmy Yih
A recent change to the fault injector framework made the simple form "gp_inject_fault(faultname, type, db_id)" stop working with the wait_until_triggered fault type. To get around this, we should properly use "gp_wait_until_triggered_fault()" instead.
Reference: https://github.com/greenplum-db/gpdb/commit/723e58481ad706d4c8f4f7af1325be2dcd36c985
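A minimal usage sketch, assuming the usual three-argument form gp_wait_until_triggered_fault(faultname, num_times_triggered, dbid); the fault name and dbid lookup are illustrative:

  -- arm a suspend fault on the primary of content 0
  SELECT gp_inject_fault('some_fault', 'suspend', dbid)
    FROM gp_segment_configuration WHERE role = 'p' AND content = 0;
  -- ... run the statement expected to hit the fault in another session ...
  SELECT gp_wait_until_triggered_fault('some_fault', 1, dbid)
    FROM gp_segment_configuration WHERE role = 'p' AND content = 0;
  -- clean up
  SELECT gp_inject_fault('some_fault', 'reset', dbid)
    FROM gp_segment_configuration WHERE role = 'p' AND content = 0;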
-
Committed by Shoaib Lari
For long-running commands such as gpinitstandby with a large master data directory, the server takes a long time, so there is no activity from the client to the server. If ClientAliveInterval is set, the server reports a timeout after ClientAliveInterval seconds. Setting a ServerAliveInterval value less than ClientAliveInterval forces the client to send a null message to the server, thereby avoiding the timeout.
Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
Co-authored-by: Shoaib Lari <slari@pivotal.io>
(cherry picked from commit 675aa8e3bd1d5bb187dc93d7ba494819cadb120e)
-
Committed by Ashwin Agrawal
ALTER TABLESPACE needs to copy all of a table's underlying files from one tablespace to another. For AO/CO tables this was implemented, when persistent tables were removed, with a full directory scan to find the files to copy. This is very inefficient and its performance varies with the number of files in the directory. Instead, use the same optimization logic as `mdunlink_ao()`, leveraging the known file layout of AO/CO tables.
The old logic also had a couple of bugs:
- It missed copying the base (.0) file, which means data loss if the table had been altered in the past.
- It wrote xlog even for temp tables.
These are fixed as well with this patch, and additional tests are added to cover the missing scenarios. Also, the AO-specific code is moved out of tablecmds.c into aomd.c to reduce conflicts with upstream.
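A hedged sketch of the operation this optimizes; the tablespace path and table definition are illustrative only:

  CREATE TABLESPACE fastspace LOCATION '/data/fastspace';
  CREATE TABLE ao_t (a int, b int)
    WITH (appendonly=true, orientation=column) DISTRIBUTED BY (a);
  -- moves every segment file of the AO/CO relation, including the base .0 file
  ALTER TABLE ao_t SET TABLESPACE fastspace;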
-
Committed by Ashwin Agrawal
Commit 07ee8008 added a test section in query_finish_pending.sql to validate that a query can be canceled when the cancel signal arrives faster than the query is dispatched, using a sleep fault. But the test was incorrect due to the usage of "begin": the begin slept for 50 seconds instead of the actual SELECT query. Also, since the fault always triggered, the reset fault slept for an additional 50 seconds. Instead, remove the begin and just set end_occurrence to 1. Verified that the modified test fails/hangs without the fix and passes/completes in a couple of seconds with the fix.
-
Committed by Ashwin Agrawal
The bfv_partition tests fail if ICW is run n times after creating the cluster, because the role is not dropped. With this commit the test can be run n times successfully without re-creating the cluster. Along the way, also remove the suppression of warnings in role.sql.
-
Committed by Ashwin Agrawal
Unit tests generate mock versions of the .c files, and it is very annoying that they end up in the TAGS file, so the first visit always lands in the mocked implementation.
-
Committed by David Yozie
* adding best practice note for setting timezone
* edits to clarify timezone behavior
-
- June 5, 2018 (4 commits)
-
-
Committed by Andreas Scherbaum
SPI 64-bit changes for PL/Python. Includes fault injection tests.
-
Committed by Jialun
* Implement CPUSET, a new way of managing CPU resources in resource groups that reserves the specified cores exclusively for a given resource group. This ensures that CPU resources are always available to a group that has CPUSET set. The most common scenario is allocating fixed cores for short queries. See the hedged sketch after this list.
  - Use it by executing CREATE RESOURCE GROUP xxx WITH (cpuset='0-1', ...), where 0-1 are the CPU cores reserved for this group, or ALTER RESOURCE GROUP xxx SET CPUSET '0,1' to modify the value.
  - The CPUSET syntax is a comma-separated combination of tuples, each representing one core number or an interval of core numbers, e.g. 0,1,2-3. All cores in a CPUSET must be available on the system, and the core numbers of different groups cannot overlap.
  - CPUSET and CPU_RATE_LIMIT are mutually exclusive: a resource group cannot be created with both. They can, however, be freely switched within one group via ALTER; setting one disables the other.
  - The CPU cores are returned to GPDB when the group is dropped, when the CPUSET value is changed, or when CPU_RATE_LIMIT is set.
  - If some cores have been allocated to a resource group, then CPU_RATE_LIMIT in the other groups only indicates a percentage of the remaining cores.
  - Even if GPDB is busy and all cores not reserved through CPUSET are exhausted, the cores in a CPUSET are still not handed out to other groups.
  - The cores in a CPUSET are used exclusively only at the GPDB level; non-GPDB processes on the system may still use them.
  - Add test cases for this new feature; the test environment must contain at least two CPU cores, so the instance_type configuration of the resource_group jobs is upgraded.
* Follow-up fixes:
  - Be compatible with the case where the cgroup directory cpuset/gpdb does not exist.
  - Implement pg_dump support for cpuset and memory_auditor.
  - Fix a typo.
  - Change the default cpuset value from the empty string to -1, because the 5X code assumes that all resource group defaults are integers; a non-integer value would make the system fail to start.
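A minimal sketch of the commands described above; the group name and the non-CPUSET attributes are illustrative assumptions:

  CREATE RESOURCE GROUP rg_short_queries
    WITH (cpuset='0-1', memory_limit=10, concurrency=5);
  -- change the reserved cores
  ALTER RESOURCE GROUP rg_short_queries SET CPUSET '0,1';
  -- switching to CPU_RATE_LIMIT releases the reserved cores
  ALTER RESOURCE GROUP rg_short_queries SET CPU_RATE_LIMIT 20;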
-
Committed by Asim R P
Temp tables must be included in PREPARE and COMMIT records in GPDB because, unlike in upstream, they are not exempt from 2PC.
Co-authored-by: David Kimura <dkimura@pivotal.io>
-
Committed by Asim R P
We found the culprit causing relfilenode collisions to be VACUUM FULL on a mapped relation: the code reused the OID as the relfilenode of the temporary table created by VACUUM FULL, without bumping the relfilenode counter. The patch fixes this so that a relfilenode is always generated, even in the case of mapped relations. Even with this, we believe a possibility of collision still exists in the way sequence OIDs are generated; that needs to be fixed in a separate patch, and the FIXME in GetNewRelFileNode() should be sufficient to note it.
Co-authored-by: David Kimura <dkimura@pivotal.io>
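A hedged illustration of the scenario, assuming pg_class as the mapped relation (mapped relations show relfilenode = 0 in pg_class because their real file node lives in the relation map):

  SELECT relname, relfilenode FROM pg_class
   WHERE relname IN ('pg_class', 'pg_database');
  -- rewrites the mapped relation into a transient table; before this fix the
  -- transient table reused the OID as its relfilenode without bumping the counter
  VACUUM FULL pg_class;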
-
- June 4, 2018 (3 commits)
-
-
Committed by Daniel Gustafsson
-
Committed by Daniel Gustafsson
This adds the basic scaffolding for allowing COMMENT ON RESOURCE GROUP, but without any user-visible functions for retrieving the comment. Since we allow COMMENTs on resource queues, we should do the same for resource groups for completeness.
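A minimal sketch of the statement this enables; the group name and attributes are illustrative:

  CREATE RESOURCE GROUP rg_etl WITH (cpu_rate_limit=10, memory_limit=10);
  COMMENT ON RESOURCE GROUP rg_etl IS 'resource group for ETL sessions';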
-
Committed by Daniel Gustafsson
In order to be able to set comments on resource queues, they must be object addressable, so fix that by implementing object addressing. Also add a small test for commenting on a resource queue.
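A minimal sketch, with an illustrative queue name and limit:

  CREATE RESOURCE QUEUE q_adhoc WITH (active_statements=5);
  COMMENT ON RESOURCE QUEUE q_adhoc IS 'queue for ad-hoc reporting users';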
-
- June 2, 2018 (3 commits)
-
-
Committed by Ashwin Agrawal
* Remove the redundant copy of the toast table and its index in ATExecSetTableSpace(). Commit f70f49fe introduced this double copy; let's fix it.
* Fix mismerged lines in src/interfaces/libpq/Makefile.
Author: Ashwin Agrawal <aagrawal@pivotal.io>
-
Committed by Lisa Owen
* docs - add resgroup global shmem recommendation
* put conditions in a list
* explicitly call out vmtracker roles for global shmem
* change formatting from varname to i
-
Committed by Mel Kiyama
* docs - add gprestore options --data-only and --metadata-only; also fix the title link to gpbackup plugins.
* docs - gprestore --data-only, --metadata-only: review comment updates.
-
- June 1, 2018 (5 commits)
-
-
Committed by Taylor Vesely
Unlike upstream, GPDB needs to keep collations in sync between multiple databases. Add tests for GPDB-specific collation behavior. These tests need to import a system locale, so add a @syslocale@ variable to gpstringstubs.pl in order to test the creation/deletion of collations from system locales.
Co-authored-by: Jim Doty <jdoty@pivotal.io>
-
Committed by Taylor Vesely
Make CREATE COLLATION and pg_import_system_collations() parallel-aware by dispatching collation creation to the QEs. For collations to work correctly, we need to be sure that every collation created on the QD is also installed on the QEs, and that the OID matches in every database. We take advantage of two-phase commit to prevent a collation from being created if there is a problem adding it on any QE.
In upstream, collations are created during initdb, but this won't work for GPDB, because while initdb is running there is no way to be sure that every segment has the same locales installed. We disable collation creation during initdb and make it the responsibility of the system administrator to initialize any needed collations, either by running a CREATE COLLATION command or by running the pg_import_system_collations() UDF.
Co-authored-by: Jim Doty <jdoty@pivotal.io>
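A hedged sketch of the two administrator-run alternatives; the collation name and locale are illustrative, and the exact argument list of pg_import_system_collations() may differ between versions:

  CREATE COLLATION german (locale = 'de_DE.utf8');
  SELECT pg_import_system_collations('pg_catalog'::regnamespace);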
-
Committed by Tom Lane
Pull in a more recent version of pg_import_system_collations() from upstream. We have not pulled in the ICU collations, so wholesale remove the sections of code that deal with them. This commit is primarily a cherry-pick of 0b13b2a7, but also pulls in prerequisite changes for CollationCreate().

Rethink behavior of pg_import_system_collations().

Marco Atzeri reported that initdb would fail if "locale -a" reported the same locale name more than once. All previous versions of Postgres implicitly de-duplicated the results of "locale -a", but the rewrite to move the collation import logic into C had lost that property. It had also lost the property that locale names matching built-in collation names were silently ignored.

The simplest way to fix this is to make initdb run the function in if-not-exists mode, which means that there's no real use-case for non if-not-exists mode; we might as well just drop the boolean argument and simplify the function's definition to be "add any collations not already known". This change also gets rid of some odd corner cases caused by the fact that aliases were added in if-not-exists mode even if the function argument said otherwise.

While at it, adjust the behavior so that pg_import_system_collations() doesn't spew "collation foo already exists, skipping" messages during a re-run; that's completely unhelpful, especially since there are often hundreds of them. And make it return a count of the number of collations it did add, which seems like it might be helpful.

Also, re-integrate the previous coding's property that it would make a deterministic selection of which alias to use if there were conflicting possibilities. This would only come into play if "locale -a" reports multiple equivalent locale names, say "de_DE.utf8" and "de_DE.UTF-8", but that hardly seems out of the question.

In passing, fix incorrect behavior in pg_import_system_collations()'s ICU code path: it neglected CommandCounterIncrement, which would result in failures if ICU returns duplicate names, and it would try to create comments even if a new collation hadn't been created.

Also, reorder operations in initdb so that the 'ucs_basic' collation is created before calling pg_import_system_collations(), not after. This prevents a failure if "locale -a" were to report a locale named that. There's no reason to think that that ever happens in the wild, but the old coding would have survived it, so let's be equally robust.

Discussion: https://postgr.es/m/20c74bc3-d6ca-243d-1bbc-12f17fa4fe9a@gmail.com
(cherry picked from commit 0b13b2a7)
-
Committed by Peter Eisentraut
Move this logic out of initdb into a user-callable function. This simplifies the code and makes it possible to update the standard collations later on if additional operating system collations appear.
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Euler Taveira <euler@timbira.com.br>
(cherry picked from commit aa17c06f)
-
Committed by Omer Arap
-
- May 31, 2018 (1 commit)
-
-
Committed by Paul Guo
This seems to be related to an AIX system issue or an old compiler. Not long ago there was a similar complaint on the pg community list:
http://www.postgresql-archive.org/pgsql-Improve-performance-of-SendRowDescriptionMessage-td5987721.html
I do not want to waste too much time on this. Instead, just work around the issue following what we did in auth.c, i.e.
+#if defined(_AIX)
+int getpeereid(int, uid_t *__restrict__, gid_t *__restrict__);
+#endif
-
- May 30, 2018 (5 commits)
-
-
Committed by Tang Pengzhou
* Refine the fault injector framework:
  * Add a counting feature so a fault can be triggered N times.
  * Add a simpler version named gp_inject_fault_infinite.
  * Refine and clean up the code, including renaming sleepTimes to extraArg so it can be used by other fault types.
  Three functions are now provided (see the hedged sketch after this entry):
  1. gp_inject_fault(faultname, type, ddl, database, tablename, start_occurrence, end_occurrence, extra_arg, db_id)
     start_occurrence: the nth occurrence at which the fault starts triggering.
     end_occurrence: the nth occurrence at which the fault stops triggering; -1 means the fault is triggered until it is reset.
  2. gp_inject_fault(faultname, type, db_id): simpler version for a fault triggered only once.
  3. gp_inject_fault_infinite(faultname, type, db_id): simpler version for a fault triggered until it is reset.
* Fix the bgwriter_checkpoint case: use gp_inject_fault_infinite here instead of gp_inject_fault so the pg_proc cache entry containing gp_inject_fault_infinite is loaded before the checkpoint and the following gp_inject_fault_infinite does not dirty the buffer again. Add a matchsubs to ignore 5 or 6 hits of fsync_counter.
* Fix the flaky twophase_tolerance_with_mirror_promotion test: use a different session for Scenario 2 and Scenario 3 because the gang of session 2 is no longer valid, and wait for the wanted fault to be triggered so no unexpected error occurs.
* Add more segment status info to identify errors quickly: some cases run right behind the FTS test cases, and if the segments are not in the desired status those cases fail unexpectedly; this commit adds more debug info at the beginning of the test cases to help identify issues quickly.
* Enhance cases to skip the FTS probe for sure: issue the FTS probe request twice to guarantee the FTS error is triggered.
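A hedged usage sketch of the three forms listed above; the fault name, fault type, and dbid lookup are illustrative:

  -- full form: trigger 'some_fault' on occurrences 1 through 3 on the master
  SELECT gp_inject_fault('some_fault', 'skip', '', '', '', 1, 3, 0, dbid)
    FROM gp_segment_configuration WHERE role = 'p' AND content = -1;
  -- simple form: trigger only once
  SELECT gp_inject_fault('some_fault', 'skip', dbid)
    FROM gp_segment_configuration WHERE role = 'p' AND content = -1;
  -- infinite form: trigger until reset
  SELECT gp_inject_fault_infinite('some_fault', 'skip', dbid)
    FROM gp_segment_configuration WHERE role = 'p' AND content = -1;
  SELECT gp_inject_fault('some_fault', 'reset', dbid)
    FROM gp_segment_configuration WHERE role = 'p' AND content = -1;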
-
Committed by Paul Guo
This kind of error could lead to serious problems sometimes.
-
Committed by Mel Kiyama
* docs - gpbackup/gprestore plugin for DD Boost; also a minor update to the S3 plugin: change the yaml file parameter backupdir to folder.
* docs - review updates for the gpbackup plugin for DD Boost; also an update for S3: change keyword backupdir -> folder in the example.
* docs - more review updates for the gpbackup plugin for DD Boost: updated files on review comments; also updated HTML on the review site.
* docs - another set of review updates for the gpbackup plugin for DD Boost:
  -- edits/updates
  -- change toc: move plugin api up a level
  -- tweaked format of parameter definition
  -- also update S3 plugin to parallel the gpbackup doc
* docs - added pivotal-only attribute to topic.
-
Committed by Lisa Owen
* docs - misc updates to gpbackup/gprestore plugin api docs
* address review comments from david
-
Committed by Jacob Champion
The 9.1 merge brought upstream support for `make -k` when running installcheck-world. Use it in the master pipeline.
-
- May 29, 2018 (3 commits)
-
-
Committed by Ning Yu
Some error messages were updated during the 9.1 merge; update the answers for the RETURNING test cases of replicated tables.
-
Committed by Ning Yu
* rpt: reorganize data when ALTER from/to replicated.
  There was a bug that altering from/to a replicated table had no effect; the root cause is that we neither changed gp_distribution_policy nor reorganized the data. Now we perform the data reorganization by creating a temp table with the new dist policy and transferring all the data to it.
* rpt: support RETURNING for replicated tables.
  This supports the syntax below (suppose foo is a replicated table):
    INSERT INTO foo VALUES(1) RETURNING *;
    UPDATE foo SET c2=c2+1 RETURNING *;
    DELETE FROM foo RETURNING *;
  A new motion type, EXPLICIT GATHER MOTION, is introduced in EXPLAIN output; in this motion type data is received from one explicit sender.
* rpt: fix motion type under explicit gather motion.
  Consider the query below:
    INSERT INTO foo SELECT f1+10, f2, f3+99 FROM foo
      RETURNING *, f1+112 IN (SELECT q1 FROM int8_tbl) AS subplan;
  We used to generate a plan like this:
    Explicit Gather Motion 3:1 (slice2; segments: 3)
      -> Insert
         -> Seq Scan on foo
      SubPlan 1 (slice2; segments: 3)
        -> Gather Motion 3:1 (slice1; segments: 1)
           -> Seq Scan on int8_tbl
  A gather motion is used for the subplan, which is wrong and causes a runtime error. A correct plan looks like this:
    Explicit Gather Motion 3:1 (slice2; segments: 3)
      -> Insert
         -> Seq Scan on foo
      SubPlan 1 (slice2; segments: 3)
        -> Materialize
           -> Broadcast Motion 3:3 (slice1; segments: 3)
              -> Seq Scan on int8_tbl
* rpt: add test case for tables with both PRIMARY KEY and UNIQUE constraints.
  On a replicated table we can set both PRIMARY KEY and UNIQUE constraints; test cases are added to ensure this feature keeps working during future development.
(cherry picked from commit 72af4af8)
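A hedged end-to-end sketch of the two features above; the table definition is illustrative, and the ALTER ... SET DISTRIBUTED REPLICATED syntax is assumed from the surrounding description:

  CREATE TABLE foo (c1 int, c2 int) DISTRIBUTED BY (c1);
  ALTER TABLE foo SET DISTRIBUTED REPLICATED;  -- data is reorganized into the new policy
  INSERT INTO foo VALUES (1, 1) RETURNING *;
  UPDATE foo SET c2 = c2 + 1 RETURNING *;
  DELETE FROM foo RETURNING *;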
-
Committed by Ning Yu
When altering a table's distribution policy we might need to reorganize the data by creating a __temp__ table, copying the data to it, and then swapping the underlying relation files. However, we always created the __temp__ table as permanent, so when the original table is a temp table the underlying files cannot be found in later queries:
  CREATE TEMP TABLE t1 (c1 int, c2 int) DISTRIBUTED BY (c1);
  ALTER TABLE t1 SET DISTRIBUTED BY (c2);
  SELECT * FROM t1;
-
- May 28, 2018 (2 commits)
-
-
Committed by Ning Yu
* rpt: reorganize data when ALTER from/to replicated.
  There was a bug that altering from/to a replicated table had no effect; the root cause is that we neither changed gp_distribution_policy nor reorganized the data. Now we perform the data reorganization by creating a temp table with the new dist policy and transferring all the data to it.
* rpt: support RETURNING for replicated tables.
  This supports the syntax below (suppose foo is a replicated table):
    INSERT INTO foo VALUES(1) RETURNING *;
    UPDATE foo SET c2=c2+1 RETURNING *;
    DELETE FROM foo RETURNING *;
  A new motion type, EXPLICIT GATHER MOTION, is introduced in EXPLAIN output; in this motion type data is received from one explicit sender.
* rpt: fix motion type under explicit gather motion.
  Consider the query below:
    INSERT INTO foo SELECT f1+10, f2, f3+99 FROM foo
      RETURNING *, f1+112 IN (SELECT q1 FROM int8_tbl) AS subplan;
  We used to generate a plan like this:
    Explicit Gather Motion 3:1 (slice2; segments: 3)
      -> Insert
         -> Seq Scan on foo
      SubPlan 1 (slice2; segments: 3)
        -> Gather Motion 3:1 (slice1; segments: 1)
           -> Seq Scan on int8_tbl
  A gather motion is used for the subplan, which is wrong and causes a runtime error. A correct plan looks like this:
    Explicit Gather Motion 3:1 (slice2; segments: 3)
      -> Insert
         -> Seq Scan on foo
      SubPlan 1 (slice2; segments: 3)
        -> Materialize
           -> Broadcast Motion 3:3 (slice1; segments: 3)
              -> Seq Scan on int8_tbl
* rpt: add test case for tables with both PRIMARY KEY and UNIQUE constraints.
  On a replicated table we can set both PRIMARY KEY and UNIQUE constraints; test cases are added to ensure this feature keeps working during future development.
- May 26, 2018 (1 commit)
-
-
Committed by Ashwin Agrawal
-
- May 25, 2018 (3 commits)
-
-
Committed by Daniel Gustafsson
Running the full ICW under Travis was problematic, but the unit tests don't require a running cluster, so let's run them to boost our test coverage a little. This changes the invocation of mocker.py to accommodate how Travis needs it to be, without changing its effect.
-
Committed by Ning Yu
* gdd: alloc MyProcPort on stack.
  We used to allocate MyProcPort with malloc() and did not check the result; it was reported to trigger a crash in the gprecoverseg test:
    (gdb) bt
    0x00007f56b506794f in __strlen_sse42 () from /lib64/libc.so.6
    0x00000000009f7894 in write_message_to_server_log ()
    0x00000000009fc200 in send_message_to_server_log ()
    0x00000000009ffb85 in EmitErrorReport ()
    0x00000000009fccf5 in errfinish ()
    0x0000000000acd99f in FtsTestSegmentDBIsDown ()
  Now we allocate it directly on the stack. We could not find a way to construct a test case for this issue, but according to the report it should be covered by existing tests.
* gdd: store the name of the superuser on the stack.
  We used to store the superuser name in a buffer allocated by strdup() and did not check the result; it was reported to trigger a crash in the gprecoverseg test. Now we store it in a buffer on the stack, and a fatal error is raised if no superuser can be found. We could not figure out a way to construct a test case for this change, but according to the report it should be covered by existing tests.
-
Committed by Jimmy Yih
The Postgres 9.1 merge introduced a problem where issuing a COPY FROM to a partition table could result in an unexpected error, "ERROR: extra data after last expected column", even though the input file was correct. This would happen if the partition table had partitions whose relnatts were not all the same (e.g. after ALTER TABLE DROP COLUMN, ALTER TABLE ADD COLUMN, and then ALTER TABLE EXCHANGE PARTITION). The internal COPY logic would always use the COPY state's relation, the partition root, instead of the actual partition's relation to obtain the relnatts value. In fact, the only reason this is seen intermittently is that the COPY logic, when working on a leaf partition's relation with a different relnatts value, was reading beyond a boolean array's allocated memory and got a phony value that evaluated to TRUE.
Co-authored-by: Jimmy Yih <jyih@pivotal.io>
Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
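A hedged repro sketch of the scenario described above; the table names, data file path, and exact partition DDL are illustrative assumptions:

  CREATE TABLE sales (id int, region text, amt numeric)
    DISTRIBUTED BY (id)
    PARTITION BY LIST (region)
    (PARTITION usa VALUES ('usa'), PARTITION asia VALUES ('asia'));
  ALTER TABLE sales DROP COLUMN amt;
  ALTER TABLE sales ADD COLUMN amt numeric;
  -- the exchanged leaf ends up with a different relnatts than its siblings
  CREATE TABLE sales_asia_new (id int, region text, amt numeric) DISTRIBUTED BY (id);
  ALTER TABLE sales EXCHANGE PARTITION asia WITH TABLE sales_asia_new;
  -- before the fix this could fail with "extra data after last expected column"
  COPY sales FROM '/tmp/sales.csv' CSV;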
-