- 28 December 2018, 17 commits
-
-
Committed by Heikki Linnakangas
Now that filerep is gone, it shouldn't be necessary anymore. Clarify the way 'lockHolderProcPtr' is cleared, while we're at it.
-
Committed by Heikki Linnakangas
These happen often when we have backported some upstream commits and later merge the same commits a second time, as part of the PostgreSQL merge.
-
Committed by Heikki Linnakangas
We don't support compiling the server on Windows. Even if we did, I think much of this wouldn't be needed.
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
It was always set to the same value as 'numConns'. I guess there was some idea of also connecting to mirrors with the interconnect, but that hasn't been used for a long time.
-
Committed by Heikki Linnakangas
Include the correct header instead.
-
Committed by Heikki Linnakangas
Not sure why it was put in cdbpublic.h originally. I don't see any reason for it now.
-
Committed by Paul Guo
See the plan example below (without this patch):

```
regression=# explain select a.idv, b.idv from tidv a, tidv b where a.idv = b.idv;
                                                    QUERY PLAN
-----------------------------------------------------------------------------------------------------------------
 Merge Join  (cost=10000000000.33..10000085645.93 rows=2787840 width=64)
   Merge Cond: (a.idv = b.idv)
   ->  Gather Motion 3:1  (slice1; segments: 3)  (cost=0.17..21848.17 rows=52800 width=32)
         Merge Key: a.idv
         ->  Index Only Scan using tidv_idv_idx on tidv a  (cost=0.17..20792.17 rows=17600 width=32)
   ->  Materialize  (cost=0.00..0.00 rows=0 width=0)
         ->  Materialize  (cost=0.00..0.00 rows=0 width=0)
               ->  Gather Motion 3:1  (slice2; segments: 3)  (cost=0.17..21848.17 rows=52800 width=32)
                     Merge Key: b.idv
                     ->  Index Only Scan using tidv_idv_idx on tidv b  (cost=0.17..20792.17 rows=17600 width=32)
 Optimizer: legacy query optimizer
(11 rows)
```

Reviewed by Richard Guo and Jinbao Chen
-
Committed by Kris Macoskey
It has been a long time since this spec file was used to build a Greenplum rpm installer. The ability to build a Greenplum rpm installer will eventually be replaced with a new spec file, but it will not live in this directory, nor does it need to be based on this older spec file. Authored-by: Kris Macoskey <kmacoskey@pivotal.io>
-
Committed by Kris Macoskey
I am unsure where this script was originally used and when. I cannot find anything currently using it for Greenplum 6.X, so I believe it is safe to delete. Authored-by: Kris Macoskey <kmacoskey@pivotal.io>
-
Committed by Kris Macoskey
This Greenplum logo png is used in the top-level README of the repository. Instead of having the image live a few layers deep in the gpAux/releng directory (which we will eventually be removing), place the image in close proximity to the README.md itself. Authored-by: Kris Macoskey <kmacoskey@pivotal.io>
-
Committed by Kris Macoskey
The self-extracting, binary installer for Greenplum (a bash script with a tarball appended) will no longer be used for Greenplum 6.X. Authored-by: Kris Macoskey <kmacoskey@pivotal.io>
-
Committed by Taylor Vesely
This build script appears to have been unused for quite some time.
-
Committed by Kris Macoskey
This script was once necessary when Pulse was being used for building and testing Greenplum. Pulse is no longer used, and it's not necessary to maintain any of the scripts associated with it. Authored-by: Kris Macoskey <kmacoskey@pivotal.io>
-
Committed by Heikki Linnakangas
The 'complex_decode_double' subroutine, used in the input function for the 'complex' datatype, was copy-pasted from float8in and modified. However, those modifications broke compilation with HAVE_BUGGY_SOLARIS_STRTOD: the code inside the #ifdef referenced a non-existent 'endptr' local variable (it was called 'end_ptr'). To make maintenance of this function easier, copy-paste it verbatim from float8in, and avoid making any unnecessary changes to it. In passing, fix a missing space in the error message. Fixes https://github.com/greenplum-db/gpdb/issues/5879. Reviewed-by: David Kimura <dkimura@pivotal.io>
-
Committed by Ashwin Agrawal
The FTS message handler process is a special system process which doesn't have access to the catalog. Hence, it shouldn't be performing any catalog access. Calling `superuser()` via `set_gp_replication_config()` or `pg_reload_conf()` results in an error for the same reason. Hence, if `am_ftshandler`, the process avoids calling `superuser()` and thereby avoids catalog access. Everything else performed by this process doesn't need catalog access. Fixes github issue #4764.
-
Committed by Ashwin Agrawal
When FTS promotes a mirror, it marks the mirror as not in sync ('n') in gp_segment_configuration. Hence, when a promotion request is received on the mirror, it is best to reset synchronous_standby_names. If this is not done, commits hang/wait after promotion until the next FTS probe cycle detects this condition and resets synchronous_standby_names.
-
- 27 December 2018, 6 commits
-
-
Committed by Pengzhou Tang
This was introduced unexpectedly by b120194a when resolving a conflict with master.
-
Committed by Tang Pengzhou
* Do not expose system columns to users for replicated tables

For replicated tables, all replicas on the segments should always be the same, which makes gp_segment_id ambiguous for a replicated table. Instead of making an effort to keep gp_segment_id the same on all segments, I would like to just hide gp_segment_id for replicated tables to make things simpler. This commit only makes gp_segment_id invisible to users at the stage of transforming the parse tree; in the underlying storage, each segment still stores a different value for the system column gp_segment_id. Operations like SPLITUPDATE might also use gp_segment_id to do an explicit redistribution motion; this is fine as long as the user-visible columns have the same data.

* Fix up reshuffle* test cases

The reshuffle.* cases used to use gp_segment_id to test that a replicated table is actually expanded to new segments; now that gp_segment_id is invisible to users, some errors are reported. Because a replicated table has triple data, we can get enough info from the total count even without a GROUP BY on gp_segment_id. It's not 100% accurate but should be enough already.

* Do a dependencies check when converting a table to a replicated table

System columns of a replicated table should not be exposed to users; we do the check at the parser stage. However, if users create views or rules involving the system columns of a hash-distributed table, and the hash-distributed table is then converted to a replicated table, users can still access the system columns of the replicated table through those views and rules, because they bypass the check in the parser stage. To resolve this, we add a dependencies check when altering a table to a replicated table; users need to drop the views or rules first. I tried to add a recheck in later stages like the planner or executor, but it seems that system columns like CTID are used internally for basic DMLs like UPDATE/DELETE, and those columns are added even before the rewrite of views and rules, so adding a recheck would block basic DMLs too.
-
Committed by Paul Guo
There is a case where a postmaster process is killed with SIGKILL and then has no chance to remove its temporary files /tmp/.s.PGSQL.${PORT}*, which are used by the demo create script to check port usability. In that case, re-creating the demo cluster will fail, but the error message below is misleading, since apparently the port is not in use:

    Check to see if the port is free by using : 'netstat -an | grep 16432'.

Fix this by providing more information to users. Reviewed by Ashwin Agrawal
-
Committed by Karen Huddleston
The repo name was changed after the github migration. The previous paths will not work correctly for new clones of the repository. Authored-by: Karen Huddleston <khuddleston@pivotal.io>
-
Committed by Ning Yu
Here are the return values of `LockAcquire()`:

* LOCKACQUIRE_NOT_AVAIL: lock not available, and dontWait=true
* LOCKACQUIRE_OK: lock successfully acquired
* LOCKACQUIRE_ALREADY_HELD: incremented count for lock already held

In our case `dontWait` is false, so `LOCKACQUIRE_NOT_AVAIL` is never returned; the other two both indicate that the lock was successfully acquired. So simply ignore the return value. This issue was detected by Coverity; here is the original report:

```
*** CID 190295: Error handling issues (CHECKED_RETURN)
/tmp/build/0e1b53a0/gpdb_src/src/backend/utils/misc/gpexpand.c: 98 in gp_expand_lock_catalog()
92      *
93      * This should only be called by gpexpand.
94      */
95     Datum
96     gp_expand_lock_catalog(PG_FUNCTION_ARGS)
97     {
>>>     CID 190295: Error handling issues (CHECKED_RETURN)
>>>     Calling "LockAcquire" without checking return value (as is done elsewhere 14 out of 16 times).
98         LockAcquire(&gp_expand_locktag, AccessExclusiveLock, false, false);
99
100        PG_RETURN_VOID();
101    }
102
103    /*
```
-
Committed by Abhijit Subramanya
-
- 26 December 2018, 2 commits
-
-
Committed by Paul Guo
-
Committed by Zhenghua Lyu
A detailed README for gpexpand. Co-authored-by: Yandong Yao <yyao@pivotal.io>
-
- 25 December 2018, 2 commits
-
-
Committed by Ning Yu
In the macro `FAST_PATH_SET_HOLD_TILL_END_XACT()` we expect an expression to return a uint64 value; however, the actual type is decided by the argument `bits`, and when `bits` is 32 bits wide it might not produce the correct result. Fixed by adding the necessary type conversion in the macro. This issue was detected by Coverity; here is the original report:

```
*** CID 190294: Integer handling issues (BAD_SHIFT)
/tmp/build/0e1b53a0/gpdb_src/src/backend/storage/lmgr/lock.c: 4557 in setFPHoldTillEndXact()
4551
4552        if (proc->fpRelId[f] != relid ||
4553            (lockbits = FAST_PATH_GET_BITS(proc, f)) == 0)
4554            continue;
4555
4556        /* one relid only occupies one slot. */
>>>     CID 190294: Integer handling issues (BAD_SHIFT)
>>>     In expression "(lockbits & 7U) << 3U * f", left shifting by more than 31 bits has undefined behavior. The shift amount, "3U * f", is as much as 45.
4557        FAST_PATH_SET_HOLD_TILL_END_XACT(proc, f, lockbits);
4558        result = true;
4559        break;
4560    }
4561
4562    LWLockRelease(proc->backendLock);
4563
4564    return result;
```
-
Committed by David Kimura
* Make SendTuple do de-toasting for heap tuples as well as memtuples

The Motion node didn't consistently handle where MemTuples and HeapTuples are de-toasted. This change delays de-toasting of MemTuples in the motion sender until SerializeTupleDirect(), which is where HeapTuples are de-toasted.

This change also corrects a bug in de-toasting of memtuples in the motion sender. De-toasting of memtuples marked the slot containing the de-toasted memtuple as virtual. When a TupleTableSlot is marked virtual, its values array should point to addresses within the memtuple. The bug was that the values array was populated *before* de-toasting, and the memory used by the toasted memtuple was freed. Subsequent usages of this slot suffered from dereferencing invalid memory through the pointers in the values array. This change corrects the issue by simplifying ExecFetchSlotMemTuple() so that it no longer manages the memory required to de-toast, which also aligns the code structure more closely with ExecFetchSlotMinimalTuple() in Postgres.

An alternative solution that we considered was to continue de-toasting in ExecFetchSlotMemTuple() and clear the virtual bit before freeing the tuple. A disadvantage of that approach is that it modifies an existing slot and complicates the contract of memtuples and virtual table slots. Also, it doesn't match where HeapTuples are de-toasted, so the contract becomes less clear.

The bug is hard to discover because the Motion node in the executor is the only place where a memtuple is de-toasted, before sending it over the interconnect. Only a few plan nodes make use of a tuple that has already been returned to the parent nodes, e.g. Unique (DISTINCT ON()) and SetOp (INTERSECT / EXCEPT). Example plan that manifests the bug:

```
Gather Motion 3:1
  Merge Key: a, b
  ->  Unique
        Group By: a, b
        ->  Sort
              Sort Key (Distinct): a, b
              ->  Seq Scan
```

The Unique node uses the previous tuple, de-toasted by the Gather Motion sender, to compare with the new tuple obtained from the Sort node for eliminating duplicates.
Co-authored-by: Asim R P <apraveen@pivotal.io>
-
- 24 December 2018, 2 commits
-
-
Committed by Adam Lee
```
>>> Using uninitialized value "useHeapMultiInsert".
4130     if (useHeapMultiInsert)
```

Also, refine the lines to make it clearer.
-
Committed by Daniel Gustafsson
Prototypes of functions removed in the large dispatch refactoring. Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>
-
- 23 December 2018, 1 commit
-
-
Committed by Daniel Gustafsson
The parquet relstorage support was, according to the closed ticket system, added a long time ago in order to reach code parity with HAWQ. There was never a way to create a relation with relstorage 'p'; indeed, PARQUET external tables in Greenplum use relstorage 'x' for external. Having dead code around as a compatibility shim with another database seems quite pointless, and if ORCA ever actually calls this, then that should be fixed rather than keeping a lookup which will always fail. Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io> Reviewed-by: Bhuvnesh Chaudhary <bchaudhary@pivotal.io>
-
- 22 December 2018, 4 commits
-
-
Committed by Zhenghua Lyu
If bmgetbitmap cannot acquire a bitmap, it returns an empty TIDBitmap. This will cause an error the next time through the loop in MultiExecBitmapIndexScan. An example is shown here:

```sql
SET enable_seqscan = OFF;
SET enable_indexscan = ON;
SET enable_bitmapscan = ON;
create table bm_test (i int, t text);
insert into bm_test select i % 10, (i % 10)::text from generate_series(1, 100) i;
create index bm_test_idx on bm_test using bitmap (i);
select count(*) from bm_test where i=1;
select count(*) from bm_test where i in(1, 8);
```

The tuple with `i=1` and the tuple with `i=8` are on different segments. When we execute the expression `i in (1, 8)`, we do the following work on each segment: loop over the array `(1, 8)` to get two bitmaps and merge them using OR (see function `MultiExecBitmapIndexScan`). Suppose on the segment holding `i=8`, it first tries to get a bitmap for `i=1`. Since there is no bitmap for `i=1` on this segment, an empty TIDBitmap is returned (see function `bmgetbitmap`). Then the next time, we do find the bitmap for `i=8`, but its type is `StreamBitmap`, and we cannot merge it with the previous TIDBitmap. That is the bug's root cause. In this commit, we fix this by returning an empty StreamBitmap from the function bmgetbitmap. Co-authored-by: Shujie Zhang <shzhang@pivotal.io>
-
Committed by Mel Kiyama
* docs - remove track_count GUC
* docs - remove GUC track_activities

Will be backported to 5X_STABLE.
-
Committed by Heikki Linnakangas
Fixes github issue #6345. Reviewed-by: Daniel Gustafsson <dgustafsson@pivotal.io>
-
Committed by Taylor Vesely
Previously PXF was compiled and tested as part of the gpdb_master pipeline, but it has since been spun out into its own pipeline. These references were left behind and appear to no longer be used. Co-authored-by: Jimmy Yih <jyih@pivotal.io>
-
- 21 December 2018, 6 commits
-
-
Committed by Lisa Owen
-
Committed by Heikki Linnakangas
Track ZSTD handles with resource owners, so that they can be closed on abort. Reviewed-by: Ivan Leskin <leskin.in@arenadata.io>
-
Committed by Heikki Linnakangas
With GCC 8:

```
fileam.c: In function ‘external_set_env_vars_ext’:
fileam.c:2315:41: warning: ‘%d’ directive writing between 1 and 10 bytes into a region of size 8 [-Wformat-overflow=]
  sprintf(extvar->GP_LINE_DELIM_LENGTH, "%d", line_delim_len);
                                         ^~
fileam.c:2315:40: note: directive argument in the range [-1, 2147483647]
  sprintf(extvar->GP_LINE_DELIM_LENGTH, "%d", line_delim_len);
                                        ^~~~
fileam.c:2315:2: note: ‘sprintf’ output between 2 and 11 bytes into a destination of size 8
  sprintf(extvar->GP_LINE_DELIM_LENGTH, "%d", line_delim_len);
  ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
```

For some reason, the compiler only warns about GP_LINE_DELIM_LENGTH, and not the other small buffers used similarly. Perhaps because it considers it OK to overflow into another char[] field in a struct, but not over a pointer? But for consistency, fix them all.
-
Committed by Heikki Linnakangas
```
heap.c: In function ‘heap_create_with_catalog’:
heap.c:1678:3: error: ‘stdRdOptions’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
   InsertAppendOnlyEntry(relid,
   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
         stdRdOptions->blocksize,
         ~~~~~~~~~~~~~~~~~~~~~~~~
         safefswritesize,
         ~~~~~~~~~~~~~~~~
         stdRdOptions->compresslevel,
         ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
         stdRdOptions->checksum,
         ~~~~~~~~~~~~~~~~~~~~~~~
         stdRdOptions->columnstore,
         ~~~~~~~~~~~~~~~~~~~~~~~~~~
         stdRdOptions->compresstype,
         ~~~~~~~~~~~~~~~~~~~~~~~~~~~
         InvalidOid,
         ~~~~~~~~~~~
         InvalidOid,
         ~~~~~~~~~~~
         InvalidOid,
         ~~~~~~~~~~~
         InvalidOid,
         ~~~~~~~~~~~
         InvalidOid);
         ~~~~~~~~~~~
```

This cannot actually happen: 'stdRdOptions' is always set when 'appendOnlyRel' is true. It's a bit surprising that GCC apparently sees that with -O2 and below, but not with -O3... Fixes github issue https://github.com/greenplum-db/gpdb/issues/6121.
-
Committed by Heikki Linnakangas
```
resgroup-ops-linux.c: In function ‘removeDir’:
resgroup-ops-linux.c:652:17: warning: ‘%ld’ directive writing between 1 and 20 bytes into a region of size 16 [-Wformat-overflow=]
   sprintf(str, "%ld", pid);
                 ^~~
resgroup-ops-linux.c:652:16: note: directive argument in the range [-9223372036854775808, 9223372036854775806]
   sprintf(str, "%ld", pid);
                ^~~~~
resgroup-ops-linux.c:652:3: note: ‘sprintf’ output between 2 and 21 bytes into a destination of size 16
   sprintf(str, "%ld", pid);
   ^~~~~~~~~~~~~~~~~~~~~~~~
```

That couldn't overflow in reality, because a pid cannot be that large, but let's avoid the warning.
-
Committed by Heikki Linnakangas
With GCC 8, -O1, I was getting this:

```
In file included from ../../../src/include/postgres.h:47,
                 from resgroupcmds.c:14:
resgroupcmds.c: In function ‘AlterResourceGroup’:
../../../src/include/c.h:852:4: error: ‘cpuset’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
    strncpy(_dst, (src), _len); \
    ^~~~~~~
resgroupcmds.c:374:14: note: ‘cpuset’ was declared here
  const char *cpuset;
              ^~~~~~
```

It's a false warning; all the code paths that lead here initialize 'cpuset', but apparently GCC doesn't see that with -O1.
-