- 14 Feb 2017, 1 commit
By Heikki Linnakangas
It hasn't done anything since 2010. If I'm reading the commit log correctly, it was added and deprecated only a few months apart, and probably hasn't done anything in any released version.
- 13 Feb 2017, 2 commits
By Heikki Linnakangas
This hasn't been tested for a while. If someone wants to build GPDB on Solaris, they should use autoconf tests and the upstream "#ifdef _sparc" method to guard platform-dependent code, rather than the GPDB-specific "pg_on_solaris" flag.
By Pengzhou Tang
Add the GUC_GPDB_ADDOPT flag for IntervalStyle so it can be dispatched to QEs.
- 10 Feb 2017, 3 commits
Due to plan caching, we might use the same plan again. In that case, operatormem is not 0 to begin with; it holds values from the previous execution. So this assertion's assumption is wrong, hence it is removed. The investigation found that:
1. Operator memory is initialized even if statement_mem changes when we use the cached plan.
2. Even though we use the plan cache, we still call the executor init.
3. MemoryAccountId may be another candidate in Plan with a similar issue. But regarding the memory account id, we confirmed it is not serialized from QD to QE; the ids all get reinitialized anyway when we do ExecInit.
Note: We don't serialize the memory account id because it is an index into the account array specific to the current process only.
By Heikki Linnakangas
They were moved to a separate file because we don't want them to be mocked in the unit tests that mock shmem.c. Deal with that by having a special mock version of them that does the real thing.
By Heikki Linnakangas
None of the source files that #included gp_atomic.h actually needed the declarations from gp_atomic.h. They actually needed the definitions from port/atomics.h, which gp_atomic.h in turn #included.
- 09 Feb 2017, 4 commits
By Daniel Gustafsson
The accum, classes and total arrays must all be of equal dimensionality for classification. Fix a bogus dimension comparison likely stemming from a copy-pasto.
By Heikki Linnakangas
It's not clear to me why show_grouping_keys() needs to dig into the child like this, when e.g. show_sort_keys() does not. However, this code is going to be replaced with upstream code once we catch up with PostgreSQL 9.4, commit f2609905, so I'm not going to worry about that right now. Fixes github issue #1708.
By Daniel Gustafsson
By Shoaib Lari
Originally, we did not hold AccessExclusiveLock while renaming relations, to allow altering a relation's name while directly modifying system catalogs. However, this breaks the isolation semantics of ALTER TABLE. Hence, we add the lock to make it consistent with upstream behavior. The original behavior is still possible via the GUC. Signed-off-by: Xin Zhang <xzhang@pivotal.io>
- 07 Feb 2017, 2 commits
By Daniel Gustafsson
By Daniel Gustafsson
There are already preset GUCs for the PostgreSQL version number in machine-readable form (server_version_num) and human-readable form (server_version). This adds Greenplum counterparts for these: gp_server_version_num and gp_server_version.
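As a rough illustration (not GPDB source), the machine-readable form follows the pre-10 PostgreSQL convention of encoding the version as major*10000 + minor*100 + patch; a minimal sketch:

```python
def version_num(version: str) -> int:
    """Convert a pre-10 PostgreSQL-style version string like '8.3.23'
    into the machine-readable form used by server_version_num."""
    major, minor, patch = (int(part) for part in version.split("."))
    return major * 10000 + minor * 100 + patch

print(version_num("8.3.23"))  # -> 80323
```

The machine-readable form exists so that clients can compare versions numerically instead of parsing the human-readable string.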
- 03 Feb 2017, 2 commits
By Heikki Linnakangas
There's no need to allocate the ExecWorkFile struct in TopMemoryContext. We don't rely on a callback to clean them up anymore; they will be closed by the normal ResourceOwner and MemoryContext mechanisms. Otherwise, the memory allocated for the struct is leaked on abort. In passing, make the error handling more robust.

Also modify tuplestorenew.c to allocate everything in a MemoryContext, rather than using gp_malloc() directly. The ResourceOwner and MemoryContext mechanisms will now take care of cleanup on abort, instead of the custom end-of-xact callback.

Remove the execWorkFile mock test. The only thing it tested was that ExecWorkFile was allocated in TopMemoryContext. That didn't seem very interesting to begin with, and the assumption is now false anyway. We could've changed the test to check that it's allocated in CurrentMemoryContext instead, but I don't see the point.

Per report in private from Karthikeyan Rajaraman. Test cases also by him, with some cleanup by me.
This is fixed with commit d5463511.
- 27 Jan 2017, 2 commits
By Ashwin Agrawal
During delta compression, the delta between two adjacent tuples in CO blocks has a fixed size of 29 bits. Any value larger than that should be stored natively, without applying delta compression to it. In cases where adjacent values are too far apart, the delta calculation missed checking for overflow. Hence, add a check: if the delta is negative, don't apply delta compression. The logic always subtracts the smaller number from the larger one, so only in the overflow case will it go negative. Also, add validation tests for the same. Signed-off-by: Shreedhar Hardikar <shardikar@pivotal.io>
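The overflow guard described above can be sketched as follows (illustrative Python, not the CO-block C code; the 29-bit field width is taken from the commit message):

```python
DELTA_BITS = 29
MAX_DELTA = (1 << DELTA_BITS) - 1  # largest delta that fits in the field

def can_delta_compress(prev: int, curr: int) -> bool:
    # The C code subtracts the smaller value from the larger one, so a
    # subtraction that would "go negative" signals overflow; here we
    # simply compare the delta's magnitude against the 29-bit limit.
    delta = abs(curr - prev)
    return delta <= MAX_DELTA

print(can_delta_compress(1000, 1500))  # True: small delta, compressible
print(can_delta_compress(0, 1 << 40))  # False: too far apart, store natively
```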
By Ashwin Agrawal
- 26 Jan 2017, 1 commit
Leveraged the bound for LIMIT with mk sort.
- 25 Jan 2017, 1 commit
By Heikki Linnakangas
If an invalid query is passed to gp_dump_query_oids(), the error position would incorrectly point at the location in the original query containing the gp_dump_query_oids() call, rather than at the query passed as its argument. For example:

    regression=# select gp_dump_query_oids('select * from invalid');
    ERROR:  relation "invalid" does not exist
    LINE 1: select gp_dump_query_oids('select * from invalid');
                                      ^

To fix, set up the error context information correctly before parsing the query.
- 24 Jan 2017, 1 commit
By Shreedhar Hardikar
- 23 Jan 2017, 1 commit
By Sumedh Kadoo
- 18 Jan 2017, 1 commit
By Ashwin Agrawal
This addresses the GPDB_83_MERGE_FIXME comment in xact.c:1081. GPDB doesn't need the `haveNonTemp` check, since GPDB doesn't allow data loss. GPDB doesn't support the asynchronous commits from upstream, because they might cause data inconsistency across segments in a cluster. We disable support for async commit using the macro IMPLEMENT_ASYNC_COMMIT, and mark the user GUC `synchronous_commit` as DEFUNCT_OPTIONS, so that its setting is ignored and a WARNING is generated. The original check for temp tables in smgrGetPendingFileSysWork() is not valid in GPDB, since GPDB temp tables use shared buffers to support access across slices. Once GPDB decides to support async commit, this macro can be removed. Signed-off-by: Xin Zhang <xzhang@pivotal.io>
- 13 Jan 2017, 2 commits
By Heikki Linnakangas
DynamicTableScanInfo is an extension of EState, so always allocate it in the same memory context. The DynamicTableScanInfo.memoryContext field always pointed to es_query_cxt, so that is in fact what we always did anyway; this just removes the unnecessary abstraction, for simplicity.
By Heikki Linnakangas
Pass the EState that contains it to where it's needed, instead.
- 12 Jan 2017, 1 commit
By Dave Cramer
* Implement bytea HEX encode/decode. This is from upstream commit a2a8c7a6. The primary reason for backpatching is to allow pgadmin4 to work. Unfortunately there are a few hacks to make this work without Enum Config GUCs, mostly around setting the GUC, as the target is an integer but the GUC value is a string.
* Fixed regression tests; removed unused code.
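For reference, the hex bytea output format being backported can be sketched like this (illustrative Python, not the backend's C implementation):

```python
def bytea_hex_encode(data: bytes) -> str:
    # Hex bytea format: a "\x" prefix followed by two lowercase
    # hex digits per byte.
    return "\\x" + data.hex()

def bytea_hex_decode(text: str) -> bytes:
    if not text.startswith("\\x"):
        raise ValueError("not in hex bytea format")
    return bytes.fromhex(text[2:])

print(bytea_hex_encode(b"abc"))       # \x616263
print(bytea_hex_decode("\\x616263"))  # b'abc'
```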
- 10 Jan 2017, 1 commit
By Nikos Armenatzoglou
- 07 Jan 2017, 1 commit
By Foyzur Rahman
This reverts commit 48c495a1.
- 04 Jan 2017, 1 commit
By foyzur
Undo selected partitions before reselecting new partitions, to avoid unnecessary leftover partitions from previous selections.
* Add an ICG qp_dpe test to verify that the partitions are reset for each outer tuple.
- 30 Dec 2016, 1 commit
By Nikos Armenatzoglou
- 29 Dec 2016, 1 commit
By Ashwin Agrawal
Getting a mirror or primary into change tracking and recovering it is done in many other tests; there is no point testing the same thing this old way, using a GUC to inject the fault.
- 21 Dec 2016, 2 commits
By Ashwin Agrawal
Relfilenode is only unique within a tablespace; across tablespaces, the same relfilenode may be allocated within a database. Currently, gp_relation_node only stores relfilenode and segment_num, and has a unique index using only those fields, without the tablespace. So it breaks in situations where the same relfilenode gets allocated to a table within a database.
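The uniqueness problem can be shown with a toy sketch (the OIDs below are made up for illustration; the fix is to include the tablespace in the unique key):

```python
# Two tables in the same database, in different tablespaces, that happen
# to receive the same relfilenode.
rows = [
    {"tablespace": 1663, "relfilenode": 16384, "segment_num": 0},
    {"tablespace": 1664, "relfilenode": 16384, "segment_num": 0},
]

# Old unique index: (relfilenode, segment_num) only.
old_keys = {(r["relfilenode"], r["segment_num"]) for r in rows}
# Fixed unique index: tablespace included.
new_keys = {(r["tablespace"], r["relfilenode"], r["segment_num"]) for r in rows}

print(len(old_keys))  # 1 -- two distinct tables collide on the old key
print(len(new_keys))  # 2 -- the tablespace disambiguates them
```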
By Ashwin Agrawal
The QE reader leverages SharedLocalSnapshot to perform visibility checks; the QE writer is responsible for keeping the SharedLocalSnapshot up to date. Before this fix, SharedLocalSnapshot was only updated by the writer while acquiring the snapshot. If a transaction id was assigned to a subtransaction after the snapshot had been taken, that was not reflected. Due to this, when the QE reader called TransactionIdIsCurrentTransactionId, it could sometimes get false, depending on timing, for subtransaction ids used by the QE writer to insert/update tuples.

Hence, to fix the situation, SharedLocalSnapshot is now updated when assigning a transaction id, and deregistered if the subtransaction aborts. Also, add a fault injector to suspend the cursor QE reader, instead of the guc/sleep used in the past. Move cursor tests from bugbuster to ICG and add a deterministic test to exercise the behavior. Fixes #1276, reported by @pengzhout
- 20 Dec 2016, 1 commit
By Heikki Linnakangas
Backport another pg_upgrade-related change from upstream. Distinguishing between binary-upgrade and "normal" utility mode turns out to be rather handy.

CAUTION: This removes the GPDB-specific postmaster command-line options -b and -C, because they clashed with the newly introduced -b flag, used to choose binary-upgrade mode! In order to invoke the GPDB functionality previously reached via -b and -C, the long option forms need to be used. This commit switches pg_ctl over to using long options.

This commit adds a substantial amount of Greenplum-specific code to handle binary upgrade in the database backend. Below is a list of the changes:

- When restoring a schema dump on a master node in binary-upgrade mode, we want to populate gp_distribution_policy, while in "normal" utility mode, we don't. In passing, simplify the query used to fetch the entries from gp_distribution_policy in pg_dump.
- In binary-upgrade mode, pg_dump will preassign Oids for the Oid dispatcher, to ensure that object Oids are stable across dump/restore. This requires the Oid dispatcher to handle binary-upgrade mode. Dump files created in binary-upgrade mode will assign multiple Oids into the preassigned-Oids list, sometimes for objects not created within the current transaction. Allow the preassigned_oids list to preserve the Oids across end-of-xact when in binary-upgrade mode.
- The binary-upgrade functions from pg_upgrade_support are used to preassign Oids during pg_dump in binary mode, and are installed into a separate binary_upgrade schema. Due to a chicken-and-egg problem, installing these functions by themselves would fail Oid assignment, as they are required for Oid assignment. This reserves a block of Oids for use with the binary_upgrade schema, to ensure that the functions can be installed.
The original commit from upstream, which has been heavily amended:

    commit 76dd09bb
    Author: Bruce Momjian <bruce@momjian.us>
    Date:   Mon Apr 25 12:00:21 2011 -0400

    Add postmaster/postgres undocumented -b option for binary upgrades.

    This option turns off autovacuum, prevents non-super-user connections,
    and enables oid setting hooks in the backend. The code continues to use
    the old autovacuum disable settings for servers with earlier catalog
    versions. This includes a catalog version bump to identify servers that
    support the -b option.

Heikki Linnakangas, Daniel Gustafsson and Dave Cramer
- 19 Dec 2016, 1 commit
By Daniel Gustafsson
While it should be rare (and the referenced original ticket indicates that it is), it's perfectly legal for a UDP buffer to fill up. Set the message level to LOG rather than WARNING.
- 17 Dec 2016, 1 commit
By Daniel Gustafsson
- 16 Dec 2016, 6 commits
By Heikki Linnakangas
It turns out that PostgreSQL had effectively the same bug, so it was reported and fixed (commit 4f5182e1) there too. See: https://www.postgresql.org/message-id/20161216105001.13334.42819%40wrigleys.postgresql.org This will need to be refactored, at the latest when we merge with PostgreSQL 9.1, as there is a function called quote_literal_internal there with different arguments, and the equivalent of GPDB's quote_literal_internal() is called quote_literal_cstr() there. But for now, just fix the bug. Fixes github issue #1301, reported by Tang Pengzhou.
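For context, the core quoting rule that quote_literal_internal() applies can be sketched as follows (a simplified illustration, not the C code; the real function also deals with backslashes and the E'' prefix):

```python
def quote_literal(s: str) -> str:
    # Double any embedded single quotes, then wrap the whole string
    # in single quotes, producing a SQL string literal.
    return "'" + s.replace("'", "''") + "'"

print(quote_literal("O'Reilly"))  # 'O''Reilly'
```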
By Heikki Linnakangas
stacktracearray is an array member of the ErrorData struct, not a pointer, so it cannot be NULL. This silences a Coverity warning.
By Heikki Linnakangas
The if() block in pg_get_expr_worker() had apparently been backported from 8.4 a long time ago. But when that if-block was introduced in 8.4, this line was removed at the same time. As it stood, the context constructed in the if-block was ignored. Spotted by Coverity.
By Heikki Linnakangas
The 8.3 merge introduced the new LOCKTAG_VIRTUALTRANSACTION case, and a mistake was made in resolving the merge conflict with the GPDB-specific LOCKTAG_RELATION_APPENDONLY_SEGMENT_FILE case. As a result, pg_locks showed information for virtualxid locks incorrectly. Spotted by Coverity.
By Heikki Linnakangas
Per Coverity.
By Heikki Linnakangas
This greatly simplifies the error handling and resource cleanup of bfz files. Closing and unlinking bfz files on error is now handled by the resource owner mechanism. This brings the BufFile and BFZ codepaths of the so-called "workfiles", in execWorkFile.c, much closer to each other; both now rely on the same mechanism for resource cleanup. This fixes a crash in the test case that's captured in the included regression test.

Remove the silly test on bfz_scan_begin(). It was broken, because it used to set the 'fd' field to 1, which means stdout. That trick doesn't work with the File interface; it will throw an assertion failure. Rather than fix it, remove the test altogether. It only tested things that are very explicitly, directly and unconditionally set in bfz_scan_begin(). There's no real value in that.

Fix the compress_zlib mock test to reflect that the bfz is now allocated in CurrentMemoryContext rather than TopMemoryContext.

Analysis and test case by Karthikeyan Jambu Rajaraman.