- 27 Jan 2017, 3 commits
-
-
Committed by Nikos Armenatzoglou
Closes #1606 Signed-off-by: Haisheng Yuan <hyuan@pivotal.io>
-
Committed by Abhijit Subramanya
The Postgres merge introduced init_sequence(), which can optionally lock the sequence relation. In GPDB, a single sequence server instance is used to generate sequence values for all requests coming from segments, hence it doesn't require a lock on the sequence relation. There are three concurrent scenarios when using a sequence:

Scenario A: concurrent requests from segments:
  create table t1 (c int, d serial) distributed by (c);
  insert into t1 select i from generate_series(1, 100) i;

Scenario B: concurrent requests from the master:
  tx1: select nextval('t1_c_seq'::regclass);
  tx2: select nextval('t1_c_seq'::regclass);

Scenario C: concurrent requests from both master and segments:
  tx1: select nextval('t1_c_seq'::regclass);
  tx2: insert into t1 values (200, default);

Scenario A is protected by the single instance of the sequence server. Scenarios B and C are protected by the BUFFER_LOCK_EXCLUSIVE on the shared buffer of the sequence relation. With that said, we don't need to hold an additional lock on the sequence relation. Signed-off-by: Xin Zhang <xzhang@pivotal.io>
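The serialization argument for Scenario A can be sketched with a toy model (the class, names, and the lock are illustrative stand-ins, not GPDB's actual C structures): a single server instance hands out values under one lock, so concurrent segment requests never see duplicates.

```python
import threading

class SequenceServer:
    """Hypothetical stand-in for GPDB's single sequence server:
    one instance serializes nextval() for all segment requests."""
    def __init__(self, start=1):
        self._value = start - 1
        self._lock = threading.Lock()  # plays the role of the exclusive buffer lock

    def nextval(self):
        with self._lock:
            self._value += 1
            return self._value

server = SequenceServer()
results = []
results_lock = threading.Lock()

def segment_request(n):
    # each "segment" asks the single server for n sequence values
    vals = [server.nextval() for _ in range(n)]
    with results_lock:
        results.extend(vals)

threads = [threading.Thread(target=segment_request, args=(25,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()

# 100 requests from 4 concurrent "segments" yield 100 distinct values
assert sorted(results) == list(range(1, 101))
```

Because every request funnels through the one instance, no relation-level lock is needed for segment traffic; only master-side access (Scenarios B and C) still relies on the shared-buffer lock.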
-
Committed by Venkatesh Raghavan
In PR #1585, Heikki suggested we replace OidListNth() in PdrgpmdidResolvePolymorphicTypes(). Also verified that in all other places inside the PQO-related translators we always use ForEach to iterate over the list.
-
- 26 Jan 2017, 8 commits
-
-
Committed by Jimmy Yih
We recently updated some persistent table fields to remove previous_free_tid from gp_persistent_* tables and add tablespace oid to gp_relation_node. However, the persistent table catalog functions were not updated alongside the changes. This commit updates the functions to use the current schema. Commit references: https://github.com/greenplum-db/gpdb/commit/a56c032b7bb4926081828cb00d99909aa871e9c9 https://github.com/greenplum-db/gpdb/commit/8fe321aff600d0b52d4d77fafc23d3292109d3ec Reported by Christopher Hajas.
-
Committed by David Yozie
* updating adminguide source with most recent 4.3.x work
* updating reference manual with most recent 4.3.x work
* updating utility guide with most recent 4.3.x changes
* updating client tools guide with most recent 4.3.x changes
* adding new file for client tools
* updating map files with most recent 4.3.x changes
* updating map files with most recent 4.3.x changes
* Revert "updating map files with most recent 4.3.x changes". This reverts commit d7570343c17a126b4d11eaee3870ad6daa36966f.
* Revert "updating map files with most recent 4.3.x changes". This reverts commit d7570343c17a126b4d11eaee3870ad6daa36966f.
* updating ditamaps with latest 4.3.x changes
* updating ditamaps with latest 4.3.x changes
-
Committed by Daniel Gustafsson
While this would've been a neat thing had it been kept up to date, it's now over 9 years since it was last touched and it doesn't even load in recent versions of Sysquake anymore.
-
-
Leveraged the bound for the limit with mk (multi-key) sort.
-
Committed by Jimmy Yih
There are segment recovery scenarios where the poll() revents would be POLLNVAL while events was POLLOUT. This would cause an infinite loop until the default 10-minute timeout was reached. Because of this, the FTS portion at the bottom of the createGang_async() function did not get correctly executed. This patch adds a check of the fd's poll revents for POLLERR, POLLHUP, and POLLNVAL, calling PQconnectPoll so that the polling status PGRES_POLLING_WRITING can correctly update to PGRES_POLLING_FAILED. The loop can then exit and the FTS logic can execute.
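The core of the check can be illustrated with Python's select.poll (a sketch of the idea only; createGang_async() is C code against libpq, and `connection_failed` is an invented helper name):

```python
import select, socket

# A connected socketpair gives us an fd that polls writable (POLLOUT).
a, b = socket.socketpair()
poller = select.poll()
poller.register(a, select.POLLOUT)
fd, revents = poller.poll(1000)[0]
assert revents & select.POLLOUT  # healthy: writable

# The patch's point: also inspect the error revents; otherwise a dead fd
# spins forever while the caller waits for a plain POLLOUT.
ERROR_MASK = select.POLLERR | select.POLLHUP | select.POLLNVAL

def connection_failed(revents):
    """Treat any error revent as a failed connection (illustrative)."""
    return bool(revents & ERROR_MASK)

assert not connection_failed(select.POLLOUT)
assert connection_failed(select.POLLNVAL)
assert connection_failed(select.POLLOUT | select.POLLHUP)
a.close(); b.close()
```

In the actual fix, detecting one of these error flags triggers a PQconnectPoll call so the libpq polling state machine can transition to PGRES_POLLING_FAILED instead of looping.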
-
Committed by Abhijit Subramanya
During lazy_truncate_heap, an exclusive lock is taken on the heap to be truncated. This lock is required to prevent other concurrent transactions from reading an invalid rd_targblock, which is propagated to other backends as part of cache invalidation at commit time. The lock needs to be held until the end of commit, hence we remove the UnlockRelation() added for GPDB, which was introduced to avoid a deadlock caused by concurrent vacuums. However, we cannot reproduce this deadlock on the latest GPDB; it was found in a very early version of GPDB (back in 3.3). Signed-off-by: Xin Zhang <xzhang@pivotal.io>
-
Committed by Abhijit Subramanya
There are two issues with the current logfile_getname():
- First, it doesn't honor the `gp_log_format` GUC. Originally, when the format is CSV, `.csv` is used, and when the format is TEXT, `.log` is used. Currently, however, `.csv` is always used regardless of the `gp_log_format` setting, making the content of the file and its suffix inconsistent.
- Second, it mistakenly generates logs with the wrong suffix `.csv.csv` during logfile_rotate(), due to the wrong assumption that the filename always contains `.log` when the suffix is NULL. Also, due to the calling sequence of logfile_rotate(), an extra empty file is generated; in this case, the file ending in `.csv.csv` is always empty.
The fix in this patch brings back the original GPDB behavior. After the fix, we generate the correct extension, but an extra empty log file is still generated during log rotation. A separate refactoring is required to clean up the API changes in all the callers of logfile_getname(), since the `suffix` parameter is no longer needed, and in the calling of logfile_rotate() to fix the extra empty log file issue. Signed-off-by: Xin Zhang <xzhang@pivotal.io>
-
- 25 Jan 2017, 7 commits
-
-
Committed by Heikki Linnakangas
If an invalid query is passed to gp_dump_query_oids(), the error position would incorrectly point at the location in the original query containing the gp_dump_query_oids() call, rather than at the query passed as its argument. For example:

regression=# select gp_dump_query_oids('select * from invalid');
ERROR: relation "invalid" does not exist
LINE 1: select gp_dump_query_oids('select * from invalid');
^

To fix, set up the error context information correctly before parsing the query.
-
Committed by Nikos Armenatzoglou
Signed-off-by: Haisheng Yuan <hyuan@pivotal.io>
-
Committed by Nikos Armenatzoglou
If GPDB evaluates a query that needs to access an index, e.g., a btree, then at least one Bitmap Index Scan will appear in the query's plan. The MultiExecBitmapIndexScan function will be invoked to construct the bitmap. If a query contains a WHERE clause similar to d in (1,2), where a bitmap index has been created on column d, then instead of creating a simple bitmap, MultiExecBitmapIndexScan will generate a composite bitmap that ORs all the bitmaps that satisfy the query condition. In our example, MultiExecBitmapIndexScan (in particular bmgetmulti) will OR the bitmaps of d = 1 and d = 2. To OR two bitmaps A and B, A should be a StreamBitmap (and not a HashBitmap). In this commit, we remove code that was executed when MultiExecBitmapIndexScan has to generate a composite bitmap of A and B, where A is a StreamBitmap and B is a HashBitmap generated when accessing a btree, gin, gist, or hash index. This does not seem to be a possible scenario because i) A can be a StreamBitmap only if a bitmap index has been constructed on a particular column, and ii) to the best of our knowledge we cannot have a single Bitmap Index Scan that accesses both a bitmap index and an index of another type, e.g., btree. Authors: Nikos Armenatzoglou, Shreedhar Hardikar <shardikar@pivotal.io>, Haisheng Yuan <hyuan@pivotal.io>
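The OR-composition that bmgetmulti performs for `d in (1,2)` can be shown with a toy model (sets of TIDs stand in for GPDB's bitmap structures; none of these names are the real API):

```python
# Toy model: a "stream bitmap" yields TIDs lazily in sorted order,
# which is why the left operand of an OR must be stream-shaped.
def stream_bitmap(tids):
    yield from sorted(tids)

def or_streams(*streams):
    """OR several stream bitmaps into one sorted, de-duplicated stream."""
    seen = set()
    for s in streams:
        seen.update(s)
    yield from sorted(seen)

# WHERE d IN (1, 2): OR the per-value bitmaps into one composite bitmap
d_eq_1 = stream_bitmap({10, 42, 99})
d_eq_2 = stream_bitmap({42, 77})
assert list(or_streams(d_eq_1, d_eq_2)) == [10, 42, 77, 99]
```

The removed code handled the case where one input was a hash-shaped bitmap from a non-bitmap index, a combination the commit argues cannot arise within a single Bitmap Index Scan.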
-
Committed by Ashwin Agrawal
As part of the 8.3 merge, via upstream commit 92c2ecc1, code to ignore lazy vacuum when calculating RecentXmin and RecentGlobalXmin was introduced. In GPDB, as part of lazy vacuum, a reindex is performed for bitmap indexes, which generates tuples in pg_class with the lazy vacuum's transaction ID. Ignoring lazy vacuum for RecentXmin and RecentGlobalXmin during GetSnapshotData caused hint bits to be incorrectly set to `HEAP_XMAX_INVALID` for tuples intended to be deleted by lazy vacuum, breaking the HOT chain. This transaction visibility issue was encountered in CI many times, with the parallel schedule `bitmap_index, analyze` failing with the error `could not find pg_class tuple for index` at commit time of lazy vacuum. Hence this commit stops tracking lazy vacuum in MyProc and no longer performs any specific action related to it.
-
Committed by Ashwin Agrawal
Commit 8fe321af added the tablespace OID to gp_relation_node to correctly reflect a unique relfilenode. As a result, the gpcheckcat query needs to be modified to add the tablespace OID when validating gp_relation_node's correctness against gp_persistent_relation_node.
-
Committed by Abhijit Subramanya
-
Committed by David Yozie
-
- 24 Jan 2017, 22 commits
-
-
Committed by Pengzhou Tang
Commit a2c3dd20 unexpectedly caused test_fts_transitions_02 to run for a longer time, so revert the related modification.
-
Committed by Heikki Linnakangas
This reduces the risk of accidentally masking out messages in a test that's not supposed to produce such messages in the first place, and is just nicer in general, IMHO. While we're at it, add a brief comment to init_file to explain what it's for. Also, remove a few more matchsubs from atmsort.pm that seem to be unused.
-
Committed by Daniel Gustafsson
Since overflow_tuple() cannot handle a negative value for victim_lp_len, ensure that we have found a victim before continuing. Even though this should be rare, being extra careful in datapage rewriting seems like a good idea.
-
Committed by Daniel Gustafsson
In trying to free up space via unused line pointers, the test for enough room wasn't recalculating the effect of any memmoves. Add a recalculation step before testing.
-
Committed by Daniel Gustafsson
The gpoptutils module was removed during the 5.0 development cycle and won't be available in the new cluster. Add exception for upgrades from 4.3.
-
Committed by Heikki Linnakangas
When a table with an attribute whose type has been dropped goes through the ALTER TABLE command queue, a "hidden" type will be created, and immediately dropped, during ALTER TABLE processing for table redistribution. This emits several NOTICEs which can be confusing to the user, as the name is autogenerated and the DROP TYPE can have happened at a previous time. Below is an example of the output:

create table <tablename> (a integer, b <typename>);
drop type <typename>;
...
alter table <tablename> set with(reorganize = true) distributed randomly;
NOTICE: return type pg_atsdb_<oid>_2_3 is only a shell
NOTICE: argument type pg_atsdb_<oid>_2_3 is only a shell
NOTICE: drop cascades to function pg_atsdb_<oid>_2_3_out(pg_atsdb_<oid>_2_3)
NOTICE: drop cascades to function pg_atsdb_<oid>_2_3_in(cstring)

The reason for adding the hidden types is that the redistribution is performed with a CTAS doing SELECT *. To fix, change the way the CTAS is done so that it does not create hidden types. The temp table that we create still needs to include dropped columns at the same positions as the old one; otherwise, when we swap the relation files, a tuple's on-disk representation won't match the catalogs. However, we cannot easily re-construct a dropped column with the same attlen, attalign, etc. as the original dropped column. Instead, create it as if it were an INT4 column, and just before swapping the relation files, update the attlen and attalign fields in the pg_attribute entries of the dropped columns to match those of INT4. That way, the original table's catalog entries match those of the temp table. Alternatively, we could build the temp table without the dropped columns and remove them from pg_attribute altogether; however, we'd then need to update the attnum field of all following columns and cascade that change to at least pg_attrdef and pg_depend, which seems more complicated. Also remove output from expected test files and perform minor cleanups.
Original patch by Daniel Gustafsson, with the int4-placeholder mechanism added by me.
-
Committed by Heikki Linnakangas
In previous versions of GPDB, we compiled PL/perl against the Perl version that shipped with RHEL5, and built a separate gppkg package of PL/perl compiled against the version that ships with RHEL6. If you wanted to use PL/perl on RHEL6, you had to install the package separately. That's now obsolete, as we don't support RHEL5 anymore: we can just always build against the later Perl version. Adjust the concourse script to cope when no gppkg packages were built, as is now the case on most platforms.
-
Committed by Daniel Gustafsson
Perforce was used at Pivotal before Greenplum was open sourced, but all uses of it have since been retired. Remove the Perforce leftovers that are dead code now.
-
Committed by Jimmy Yih
-
Committed by Jimmy Yih
Some walrep TINC tests have been broken for a while now due to all the changes going into 5.0_MASTER. This commit brings the tests back to a green state so that we can add them back to our validation pipeline. Most changes are simple, like fixing pg_ctl calls to use long options, updating xlogdump parsing, or just updating ans files.
-
Committed by Nikos Armenatzoglou
-
Committed by Christopher Hajas
The --batch-size=10 flag was mistakenly added to a scenario that requires ordering and thus a batch size of 1.
-
Committed by Nikos Armenatzoglou
In this commit, we change the ownership of a StreamNode, so that when a StreamNode A is added as an input to a StreamNode B, the bitmap that was pointing to A abandons ownership and its pointer is set to NULL. In the above example, when GPDB frees the BitmapOR, it will not attempt to free the OR's StreamNode. In addition, if a StreamNode is an OpStream, the opstream_free function is used to free the memory used by the StreamNode, which actually uses pfree. On the other hand, if a StreamNode is an IndexStream, the stream_free function is invoked, which does not pfree the StreamNode. For consistency, we add the function indexstream_free to always pfree a StreamNode of IndexStream type. Signed-off-by: Shreedhar Hardikar <shardikar@pivotal.io>
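The ownership transfer can be sketched with a toy model (class and method names are invented for illustration; GPDB's StreamNode is a C struct freed with pfree): once a node becomes an input of another node, its former owner drops the pointer so the node is freed exactly once, via its new parent.

```python
# Toy sketch of single-ownership to prevent a double free.
class StreamNode:
    def __init__(self, name):
        self.name = name
        self.inputs = []
        self.freed = False

    def add_input(self, former_owner, node):
        self.inputs.append(node)
        former_owner.stream = None  # former owner abandons ownership

    def free(self):
        assert not self.freed, "double free!"
        for child in self.inputs:
            child.free()
        self.freed = True

class Bitmap:
    def __init__(self, stream):
        self.stream = stream

    def free(self):
        if self.stream is not None:  # NULL pointer: nothing to free
            self.stream.free()

a = StreamNode("A")
bitmap_or = Bitmap(a)      # bitmap initially owns A
b = StreamNode("B")
b.add_input(bitmap_or, a)  # A becomes B's input; bitmap drops its pointer

bitmap_or.free()  # no-op: ownership was transferred to B
b.free()          # frees B and A, each exactly once
assert a.freed and b.freed
```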
-
Committed by Marbin Tan
-
Committed by Larry Hamel
-- add behave test for gp_default_storage_options
-- add PgHba class to represent pg_hba.conf for textual manipulation
Signed-off-by: Marbin Tan <mtan@pivotal.io>
-
Committed by Chris Hajas
* Update foreign key json file name to 5.0. This was breaking gpcheckcat, which assumes the json file corresponds to the current Greenplum version.
* Update docs for generating the json file.
Authors: Chris Hajas and Jamie McAtamney
-
Committed by Ashwin Agrawal
CI red with unit test failure:
```
[ RUN ] test__open_alert_log_file__NonGucOpen
[ OK ] test__open_alert_log_file__NonGucOpen
[ RUN ] test__open_alert_log_file__NonMaster
[ OK ] test__open_alert_log_file__NonMaster
[ RUN ] test__open_alert_log_file__OpenAlertLog
"gpperfmon/logs/alert_log.1.log" != "gpperfmon/logs/alert_log.12345"
ERROR: Check of parameter filename, function fopen failed
Expected parameter declared at syslogger_test.c:69
ERROR: syslogger_test.c:19 Failure!
[ FAILED ] test__open_alert_log_file__OpenAlertLog
[=============] 3 tests ran
[ PASSED ] 2 tests
[ FAILED ] 1 tests, listed below
[ FAILED ] test__open_alert_log_file__OpenAlertLog
```
This reverts commit ffb2cf08.
-
Committed by Ashwin Agrawal
There are two issues with the current logfile_getname():
- First, it doesn't honor the `gp_log_format` GUC. Originally, when the format is CSV, `.csv` is used, and when the format is TEXT, `.log` is used. Currently, however, `.csv` is always used regardless of the `gp_log_format` setting, making the content of the file and its suffix inconsistent.
- Second, it mistakenly generates logs with the wrong suffix `.csv.csv` during logfile_rotate(), due to the wrong assumption that the filename always contains `.log` when the suffix is NULL. Also, due to the calling sequence of logfile_rotate(), an extra empty file is generated; in this case, the file ending in `.csv.csv` is always empty.
The fix in this patch brings back the original GPDB behavior. After the fix, we generate the correct extension, but an extra empty log file is still generated during log rotation. A separate refactoring is required to clean up the API changes in all the callers of logfile_getname(), since the `suffix` parameter is no longer needed, and in the calling of logfile_rotate() to fix the extra empty log file issue. Signed-off-by: Xin Zhang <xzhang@pivotal.io>
-
Committed by Dhanashree Kashid
After the 8.3 merge, GPDB has new polymorphic types, ANYENUM and ANYNONARRAY. This fix adds support for ANYENUM and ANYNONARRAY in the translator. As per PostgreSQL, when a function has polymorphic arguments and results, they must have the same actual type in the function call. For example, a function declared as `f(ANYARRAY) returns ANYENUM` will only accept arrays of enum types. We already have this resolution logic implemented in `resolve_polymorphic_argtypes()`. Refactor the code in `PdrgpmdidResolvePolymorphicTypes()` to use `resolve_polymorphic_argtypes()` to deduce the correct data type for the function argument and return type, based on the function call. Signed-off-by: Bhuvnesh Chaudhary <bchaudhary@pivotal.io> Signed-off-by: Omer Arap <oarap@pivotal.io>
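The resolution rule can be sketched as follows (a loose model of the idea behind `resolve_polymorphic_argtypes()`; the signature, the type-name strings, and the `mood` enum are all illustrative, not the C function's actual interface):

```python
# Hedged sketch: substitute each polymorphic declared type with the
# actual type at the call site, requiring all polymorphic slots to
# agree on one element type.
POLYMORPHIC = {"anyelement", "anyarray", "anyenum", "anynonarray"}

def resolve_polymorphic_argtypes(declared, actual):
    """Return (concrete arg types, deduced element type or None)."""
    resolved, elem = [], None
    for d, a in zip(declared, actual):
        if d in POLYMORPHIC:
            base = a[:-2] if a.endswith("[]") else a  # strip array-ness
            if elem is None:
                elem = base
            elif elem != base:
                raise TypeError("polymorphic arguments disagree")
            resolved.append(a)
        else:
            resolved.append(d)  # non-polymorphic: keep declared type
    return resolved, elem

# f(ANYARRAY) returns ANYENUM, called with an array of enum values:
args, elem = resolve_polymorphic_argtypes(["anyarray"], ["mood[]"])
assert args == ["mood[]"] and elem == "mood"
```

The deduced element type (`mood` here) is what lets the translator resolve the ANYENUM return type to a concrete type for that call.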
-
Committed by Shreedhar Hardikar
-
Committed by Abhijit Subramanya
-
Committed by Tom Meyer
We fixed all the shell check issues for the scan_with_coverity script, and now the cov-build is run in gpAux. We added a step to tar it and upload the tarball to scan.coverity.com. In the pipeline file we added a twice-weekly trigger on Monday and Thursday mornings. Signed-off-by: Jingyi Mei <jmei@pivotal.io> Signed-off-by: Tom Meyer <tmeyer@pivotal.io>
-