- 01 Feb 2017, 3 commits
-
Committed by Marbin Tan
5x has external options while 43x does not. This causes a diff, so we ignore it.
-
Committed by Marbin Tan
Change inet and cidr from plain to main. Authors: Chumki Roy and Marbin Tan
-
Committed by Shreedhar Hardikar
Signed-off-by: Haisheng Yuan <hyuan@pivotal.io>
-
- 31 Jan 2017, 4 commits
-
Committed by Jamie McAtamney
-
Committed by Shreedhar Hardikar
-
Committed by Shreedhar Hardikar
This comes from a discussion on the GPDB mailing list about the need for the prefetch_inner parameter in joins and the deadlock case it is trying to prevent.
-
Committed by Shreedhar Hardikar
When the size of the inner tuple in a MergeJoin is large, the planner decides to place a Materialize on top of the inner Sort operator. Furthermore, if there is any Motion below the MergeJoin, mjnode->prefetch_inner is set to true. During execution of a plan that has a Motion underneath a Materialize underneath a MergeJoin, the MergeJoin retrieves one tuple from the Motion (via the Materialize) and then resets the stream (by calling ExecReScan). Since the Materialize underneath is initialized with incorrect eflags, it deletes its tuplestore instead of rewinding its cursor. This fix sets the right eflags value during Init (see comments in ExecMaterialReScan).
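The rewind-versus-recreate distinction this fix hinges on can be sketched in a few lines of Python. This is a simplified model, not GPDB code: the class, the constant name, and the tuple buffer are illustrative stand-ins for the executor's Materialize node, EXEC_FLAG_REWIND, and the tuplestore.

```python
EXEC_FLAG_REWIND = 0x02  # illustrative constant standing in for the executor flag

class Materialize:
    """Toy model of a Materialize node sitting above a non-replayable child."""
    def __init__(self, child_tuples, eflags):
        self.child = iter(child_tuples)          # a Motion cannot be re-read
        self.rewindable = bool(eflags & EXEC_FLAG_REWIND)
        self.store = []                          # tuplestore analogue
        self.pos = 0

    def next(self):
        if self.pos < len(self.store):
            t = self.store[self.pos]             # serve from the buffer
        else:
            t = next(self.child, None)           # pull from the child
            if t is None:
                return None
            self.store.append(t)
        self.pos += 1
        return t

    def rescan(self):
        if self.rewindable:
            self.pos = 0        # rewind the cursor: buffered tuples survive
        else:
            self.store = []     # delete the tuplestore, forcing a re-read
            self.pos = 0        # of a child that may not be replayable

mat = Materialize([1, 2, 3], eflags=EXEC_FLAG_REWIND)
first_pass = [mat.next() for _ in range(3)]
mat.rescan()
second_pass = [mat.next() for _ in range(3)]
assert second_pass == [1, 2, 3]  # rewound: no second pull from the child
```

With eflags set to 0 instead, the rescan throws away the buffer and the exhausted child yields nothing, which is the analogue of the bug the commit fixes.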
-
- 28 Jan 2017, 13 commits
-
Committed by Christopher Hajas
Increase the timeout to start transactions after killing a primary in the gprecoverseg test suite.
-
Committed by Christopher Hajas
Update the kerberos init file to ignore the gid value
-
Committed by Christopher Hajas
-
Committed by Christopher Hajas
* Fixes a missed file in 14c0a048.
* One of the steps was looking for a stale sql file.
* One of the steps was looking for a sql file in the wrong location.
-
Committed by Abhijit Subramanya
This patch ports the tests for exception blocks in UDFs from tinc.
-
Committed by Abhijit Subramanya
In GPDB, if there was a heap tuple being deleted, i.e. HEAPTUPLE_DELETE_IN_PROGRESS, index rebuild would error out, except in three scenarios:
- Scenario 1: it's deleted within its own transaction
- Scenario 2: it's a system catalog tuple
- Scenario 3: a bitmap index is being rebuilt
The third scenario was introduced long ago, in GPDB 3.3. The upstream Postgres behavior handles only the first two scenarios and never bothers with the third. We cannot reproduce Scenario 3 on the current code base using a concurrent update on a bitmap index with vacuum, and this scenario is already covered by the test at `src/test/tinc/tincrepo/mpp/gpdb/tests/storage/vacuum/reindex/concurrency/`. We are confident that Scenario 3 is also protected, hence we adopt the upstream behavior. [ci skip] Signed-off-by: Xin Zhang <xzhang@pivotal.io>
-
Committed by Chris Hajas
Adds gpcheckcat to the master pipeline. This test takes about 20 minutes. Authors: Karen Huddleston and Chris Hajas
-
Committed by dyozie
-
Committed by Abhijit Subramanya
In Postgres, `remove_dbtablespaces()` is used for both `createdb()` failure and `dropdb()` commit. In GPDB, persistent tables are used to track the state of objects like tablespaces and database files, and to properly clean them up at the end of a transaction (either commit or abort); that is handled by `AtEOXact_smgr()`. We can therefore safely delete the `remove_dbtablespaces()` function. We put the dead code under `#if 0` to make future merges easier. Signed-off-by: Xin Zhang <xzhang@pivotal.io>
-
Committed by Abhijit Subramanya
`locally_opened` was replaced by `local_estate` in Postgres by the following commit:

commit 817946bb
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date: Wed Aug 15 21:39:50 2007 +0000

    Arrange to cache a ResultRelInfo in the executor's EState for relations that are not one of the query's defined result relations, but nonetheless have triggers fired against them while the query is active. This was formerly impossible but can now occur because of my recent patch to fix the firing order for RI triggers. Caching a ResultRelInfo avoids duplicating work by repeatedly opening and closing the same relation, and also allows EXPLAIN ANALYZE to "see" and report on these extra triggers. Use the same mechanism to cache open relations when firing deferred triggers at transaction shutdown; this replaces the former one-element-cache strategy used in that case, and should improve performance a bit when there are deferred triggers on a number of relations.

The dead code was introduced by the merge with 8.3, but the code was already removed in the original commit. Signed-off-by: Xin Zhang <xzhang@pivotal.io>
-
Committed by Daniel Gustafsson
As we have merged with upstream 8.3, update the inline links to the upstream documentation from 8.2 to their 8.3 counterparts. Also switch to HTTPS, as postgresql.org is now fully HTTPS with redirects for all HTTP requests.
-
Committed by Daniel Gustafsson
JSON and uuid are new datatypes in the 5.0 cycle, and money has been expanded to 8-byte storage.
-
Committed by Heikki Linnakangas
The qp_derived_table test has the exact same queries as bugbuster/tiny. The qp_executor test is identical to the bugbuster/executor test. The qp_subquery test has the exact same test queries as bugbuster/subquery, and more.
-
- 27 Jan 2017, 13 commits
-
Committed by Heikki Linnakangas
There's nothing special about this table. I think it used to be used to log errors from a subsequent COPY, but we don't even have the "error tables" feature anymore.
-
Committed by Heikki Linnakangas
Commit 846b6e5d removed the function, but forgot to remove the prototype.
-
Committed by Heikki Linnakangas
Some of these differences were left over by the COPY BINARY patch, but many were introduced earlier already.
-
Committed by Heikki Linnakangas
These were missed when the support for COPY BINARY was resurrected.
-
Committed by Shoaib Lari
There is considerable overlap between the aoco_alter/aoco_alter_sql_test tests and the test/regress/*/uao_dll tests. This is the first commit to move several of the aoco_alter/aoco_alter_sql_test tests to ICG.

This commit moves alter_ao_table_oid_column.sql and alter_ao_table_oid_row.sql from the sql/uao_ddl directories to the sql/uao_col and sql/uao_row directories respectively. After this change, the sql/uao_ddl and expected/uao_ddl directories are only used for generation of intermediate .sql and .out files during the processing of the --ao-dir option of regress; therefore, I added .gitignore files in these directories.

The bulk_dense_content_rle_compress.sql and small_content_rle_compress_hdr.sql tests are moved to rle.sql. The table name used in alter_drop_allcol.source is renamed to avoid conflict with the same table name used in other tests.

Remove the migrated aoco_alter tests from tinc. Since these tests are now part of ICG, we don't need them in tinc anymore.
-
Committed by Chris Hajas
This is a follow-up to commit 80d1fc21, as the previous commit only works on a single-node cluster. Authors: Marbin Tan and Karen Huddleston
-
Committed by Ashwin Agrawal
During delta compression, the delta between two adjacent tuples has a fixed size of 29 bits in CO blocks, so any value larger than that should be stored natively without applying delta compression. In cases where adjacent values are too far apart, the delta calculation missed checking for overflow. Hence, add a check: if the delta is negative, don't apply delta compression, since the logic always subtracts the smaller number from the larger one and only goes negative in the overflow case. Also, add validation tests for the same. Signed-off-by: Shreedhar Hardikar <shardikar@pivotal.io>
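The overflow guard described above can be illustrated with a short Python sketch. This is not the GPDB source; the function names and the 64-bit width are illustrative assumptions, used only to show why a negative delta signals overflow when the subtraction happens in fixed-width arithmetic.

```python
DELTA_BITS = 29
MAX_DELTA = (1 << DELTA_BITS) - 1   # largest delta that fits in 29 bits
MASK64 = (1 << 64) - 1

def signed_sub64(a, b):
    """a - b in two's-complement 64-bit arithmetic (can wrap negative)."""
    d = (a - b) & MASK64
    return d - (1 << 64) if d >= (1 << 63) else d

def use_delta_compression(prev, curr):
    """Decide whether the pair can be delta-compressed.

    Always subtract the smaller value from the larger one, as the commit
    describes; a negative result can only mean the subtraction overflowed,
    so the value must be stored natively.
    """
    lo, hi = sorted((prev, curr))
    delta = signed_sub64(hi, lo)
    if delta < 0:               # overflow: adjacent values too far apart
        return False
    return delta <= MAX_DELTA   # must also fit in the 29-bit field

assert use_delta_compression(100, 120)              # small delta: compress
assert not use_delta_compression(0, 1 << 29)        # exceeds 29 bits
assert not use_delta_compression(0, (1 << 63) + 5)  # wraps negative: store natively
```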
-
Committed by Ashwin Agrawal
-
Committed by Tom Lane
Per discussion, change the log level of this message to be LOG not WARNING. The main point of this change is to avoid causing buildfarm run failures when the stats collector is exceptionally slow to respond, which it not infrequently is on some of the smaller/slower buildfarm members.

This change does lose notice to an interactive user when his stats query is looking at out-of-date stats, but the majority opinion (not necessarily that of yours truly) is that WARNING messages would probably not get noticed anyway on heavily loaded production systems. A LOG message at least ensures that the problem is recorded somewhere where bulk auditing for the issue is possible.

Also, instead of an untranslated "pgstat wait timeout" message, provide a translatable and hopefully more understandable message "using stale statistics instead of current ones because stats collector is not responding". The original text was written hastily under the assumption that it would never really happen in practice, which we now know to be unduly optimistic.

Back-patch to all active branches, since we've seen the buildfarm issue in all branches.
-
Committed by Nikos Armenatzoglou
Closes #1606 Signed-off-by: Haisheng Yuan <hyuan@pivotal.io>
-
Committed by Abhijit Subramanya
The Postgres merge introduced init_sequence(), which can optionally lock the sequence relation. In GPDB, a single sequence server instance is used to generate sequence values for all requests coming from segments, hence it doesn't require a lock on the sequence relation. There are three concurrent scenarios when using sequences:

Scenario A: concurrent requests from segments:
    create table t1 (c int, d serial) distributed by (c);
    insert into t1 select i from generate_series(1, 100) i;

Scenario B: concurrent requests from the master:
    tx1: select nextval('t1_c_seq'::regclass);
    tx2: select nextval('t1_c_seq'::regclass);

Scenario C: concurrent requests from both master and segments:
    tx1: select nextval('t1_c_seq'::regclass);
    tx2: insert into t1 values (200, default);

Scenario A is protected by the single instance of the sequence server. Scenarios B and C are protected by the BUFFER_LOCK_EXCLUSIVE on the shared buffer of the sequence relation. With that said, we don't need to hold an additional lock on the sequence relation. Signed-off-by: Xin Zhang <xzhang@pivotal.io>
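Why a single serializing server makes the extra relation lock redundant can be sketched in Python. This is a toy analogue, not GPDB code: the class name is invented, and the threading lock here plays the role of the BUFFER_LOCK_EXCLUSIVE protection mentioned above; the point is that one serialization mechanism is already enough to hand out unique values.

```python
import threading

class SequenceServer:
    """Toy analogue of GPDB's single sequence server instance."""
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()  # the one serialization point

    def nextval(self):
        with self._lock:               # all concurrent requests funnel here
            self._value += 1
            return self._value

seq = SequenceServer()
results = []

def worker():
    # each "segment" issues 1000 nextval requests concurrently
    for _ in range(1000):
        results.append(seq.nextval())

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# every value is unique and gapless despite concurrency
assert sorted(results) == list(range(1, 4001))
```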
-
Committed by Venkatesh Raghavan
In PR #1585, Heikki suggested we replace OidListNth() in PdrgpmdidResolvePolymorphicTypes(). Also verified that in all other places inside PQO-related translators we always use ForEach to iterate over the list.
-
- 26 Jan 2017, 7 commits
-
Committed by Jimmy Yih
We recently updated some persistent table fields to remove previous_free_tid from the gp_persistent_* tables and add the tablespace oid to gp_relation_node. However, the persistent table catalog functions were not updated alongside the changes. This commit updates the functions to use the current schema. Commit references:
https://github.com/greenplum-db/gpdb/commit/a56c032b7bb4926081828cb00d99909aa871e9c9
https://github.com/greenplum-db/gpdb/commit/8fe321aff600d0b52d4d77fafc23d3292109d3ec
Reported by Christopher Hajas.
-
Committed by David Yozie
* updating adminguide source with most recent 4.3.x work
* updating reference manual with most recent 4.3.x work
* updating utility guide with most recent 4.3.x changes
* updating client tools guide with most recent 4.3.x changes
* adding new file for client tools
* updating map files with most recent 4.3.x changes
* updating map files with most recent 4.3.x changes
* Revert "updating map files with most recent 4.3.x changes". This reverts commit d7570343c17a126b4d11eaee3870ad6daa36966f.
* Revert "updating map files with most recent 4.3.x changes". This reverts commit d7570343c17a126b4d11eaee3870ad6daa36966f.
* updating ditamaps with latest 4.3.x changes
* updating ditamaps with latest 4.3.x changes
-
Committed by Daniel Gustafsson
While this would've been a neat thing had it been kept up to date, it's now over 9 years since it was last touched and it doesn't even load in recent versions of Sysquake anymore.
-
-
Leveraged bound for the limit with mk sort.
-
Committed by Jimmy Yih
There are segment recovery scenarios where the fd's revents would be POLLNVAL while events was POLLOUT. This would cause an infinite loop until the default 10-minute timeout is reached. Because of this, the FTS portion at the bottom of the createGang_async() function does not get correctly executed. This patch checks the fd's poll revents for POLLERR, POLLHUP, and POLLNVAL and calls PQconnectPoll so that the polling status PGRES_POLLING_WRITING can correctly update to PGRES_POLLING_FAILED. It will then be able to exit the loop and execute the FTS portion.
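The key poll(2) behavior the patch relies on, that error conditions show up in revents even though only POLLOUT was requested in events, can be demonstrated in Python (a standalone illustration, not the C code in createGang_async; the comment markers for where PQconnectPoll would be driven are assumptions based on the description above):

```python
import os
import select

# Simulate a connection fd that has gone away during segment recovery.
r, w = os.pipe()
os.close(r)
os.close(w)

poller = select.poll()
poller.register(r, select.POLLOUT)   # we only asked for writability...
events = poller.poll(100)

# ...but poll reports the error conditions in revents regardless, so a
# loop that only waits for POLLOUT would spin forever. Checking the error
# bits explicitly is what lets the loop make progress.
ERROR_MASK = select.POLLERR | select.POLLHUP | select.POLLNVAL
for fd, revents in events:
    if revents & ERROR_MASK:
        # In the patch, this is the point where PQconnectPoll is driven so
        # the status moves from PGRES_POLLING_WRITING to PGRES_POLLING_FAILED.
        print("connection failed, leaving the poll loop")
```

Here the closed fd yields POLLNVAL in revents, which POLLOUT-only dispatch code would silently ignore.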
-
Committed by Abhijit Subramanya
During lazy_truncate_heap, an exclusive lock is taken on the heap to be truncated. This lock is required to prevent other concurrent transactions from reading an invalid rd_targblock, which is going to be propagated to other backends as part of cache invalidation at commit time. This lock needs to be held until the end of commit, hence we remove the UnlockRelation() added for GPDB, which was introduced to avoid a deadlock caused by concurrent vacuums. However, we cannot reproduce this deadlock on the latest GPDB; it was found in a very early version of GPDB (back in 3.3). Signed-off-by: Xin Zhang <xzhang@pivotal.io>
-