- 10 July 2018, 10 commits
-
-
Committed by Daniel Gustafsson
OpenBSD requires that code linking with a library that uses sigwait(3) be compiled with -pthread. This adds a kludge to the relevant makefiles which is less than elegant, but it seemed the least intrusive change to make.
-
Committed by Daniel Gustafsson
In order to use backtrace() in error reporting on OpenBSD, we need to link with libexecinfo from ports, as backtrace() is a glibc-only addition.
-
Committed by Daniel Gustafsson
There is no need to build gpmapreduce separately, as it is automatically built on "make install", and with the recent changes to detect libyaml it is not even supported. Fix by removing the separate step and instead using the --enable-mapreduce switch to autoconf.
-
Committed by Daniel Gustafsson
Greenplum MapReduce requires libyaml but lacked a specific test for it in autoconf. This worked because gpfdist has the same check, but when building with --disable-gpfdist we need to ensure we have libyaml to avoid late compilation failures.
-
Committed by Daniel Gustafsson
Rather than hardcoding /bin/bash, move to a lookup via "/usr/bin/env bash" to allow for greater portability of the code. This also changes the Bash test to check whether the current shell actually is Bash, rather than looking for bash on the file system (since, with the above changes, we no longer need that).
-
Committed by Daniel Gustafsson
The NetCheckNIC() functionality was only used by FileRep, and does not even compile properly under all the preprocessor flags it once supported. Remove it.
-
Committed by Daniel Gustafsson
-
Committed by Daniel Gustafsson
The -not syntax is not portable across all platforms (most notably OpenBSD), so use the more portable ! operator instead.
-
Committed by Daniel Gustafsson
Make sure to include all required header files to silence compilers that are picky about this.
-
Committed by Tom Lane
According to recent tests, this case now works fine, so there's no reason to reject it anymore. (Even if there are still some OpenBSD platforms in the wild where it doesn't work, removing the check won't break any case that worked before.) We can actually remove the entire test that discovers whether libpython is threaded, since without the OpenBSD case there's no need to know that at all. Per report from Davin Potts. Back-patch to all active branches. Co-authored-by: Daniel Gustafsson <dgustafsson@pivotal.io>
-
- 09 July 2018, 4 commits
-
-
Committed by Daniel Gustafsson
Following the change in 8fcd3fdd to cost-based enable GUCs, failing to find a way to construct an N-way join should be an error rather than a debug message (as in upstream). Reported-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
-
Committed by 阿福Chris
-
Committed by 阿福Chris
Discussion: https://github.com/greenplum-db/gpdb/pull/5155 Co-authored-by: Daniel Gustafsson <dgustafsson@pivotal.io>
-
Committed by Heikki Linnakangas
Instead of completely disabling the generation of Paths with disabled plan types, add a high penalty to their cost estimates, as in upstream. This reduces our diff vs. upstream, making future merges more straightforward. Discussion: https://groups.google.com/a/greenplum.org/d/msg/gpdb-dev/Az2cDcqf73g/_tY6Yv1kBgAJ Co-authored-by: Daniel Gustafsson <dgustafsson@pivotal.io> Reviewed-by: Venkatesh Raghavan <vraghavan@pivotal.io> Reviewed-by: Richard Guo <riguo@pivotal.io>
-
- 07 July 2018, 2 commits
-
-
Committed by Jimmy Yih
As part of the Postgres 8.3 merge, all heap tables now automatically create an array type. The array type will usually be created with typname '_<heap_name>', since the automatically created composite type already takes the typname '<heap_name>'. If typname '_<heap_name>' is taken, the logic will continue to prepend underscores until there is no collision (truncating the end if the typname gets past NAMEDATALEN of 64). This might be an oversight in upstream Postgres, since certain scenarios involving creating a large number of heap tables with similar names can result in so many typname collisions that no more heap tables with similar names can be created. This is very noticeable with Greenplum heap partition tables, because Greenplum has logic to automatically name child partitions with similar names instead of having the user name each child partition. To prevent typname collision failures when creating a heap partition table with a large number of child partitions, we now stop automatically creating the array type for child partitions. References: https://www.postgresql.org/message-id/flat/20070302234016.GF3665%40fetter.org https://github.com/postgres/postgres/commit/bc8036fc666a8f846b1d4b2f935af7edd90eb5aa
-
Committed by Chris Hajas
The pg_get_partition_template_def and pg_get_partition_def functions take access share locks but do not release them until the end of the transaction. If a transaction is long-running, this can conflict with other user operations. It is not necessary to hold the lock indefinitely, as the lock is only needed for the duration of the function call. Co-authored-by: Chris Hajas <chajas@pivotal.io> Co-authored-by: Karen Huddleston <khuddleston@pivotal.io>
-
- 06 July 2018, 10 commits
-
-
Committed by Jialun
If a segment exists in gp_segment_configuration but its IP address cannot be resolved, we run into a runtime error on gang creation: ERROR: could not translate host name "segment-0a", port "40000" to address: Name or service not known (cdbutil.c:675) This happens even if segment-0a is a mirror and is marked as down. With this error queries cannot be executed, and gpstart and gpstop will also fail. One way to trigger the issue: create a multi-segment cluster; remove sdw1's DNS entry from /etc/hosts on mdw; kill the postgres primary process on sdw1. FTS can detect this error and automatically switch to the mirror, but queries still cannot be executed.
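The failure mode can be reproduced in isolation with a plain getaddrinfo(3) call against an unresolvable name (the host name below is illustrative; ".invalid" is a reserved TLD that never resolves, mimicking the removed /etc/hosts entry):

```c
#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int main(void)
{
    struct addrinfo hints, *res = NULL;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;

    /* Resolve host and port, the same call gang creation relies on. */
    int rc = getaddrinfo("segment-0a.invalid", "40000", &hints, &res);

    if (rc != 0)
        printf("could not translate host name: %s\n", gai_strerror(rc));
    else
        freeaddrinfo(res);
    return 0;
}
```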
-
Committed by Mel Kiyama
* docs - update system catalog maintenance information. Updated the Admin Guide and Best Practices for running REINDEX, VACUUM, and ANALYZE. Added a note to the REINDEX reference about running ANALYZE after REINDEX. * docs - edits for system catalog maintenance updates * docs - update the recommendation for running VACUUM and ANALYZE, based on dev input.
-
Committed by Lisa Owen
-
Committed by Lisa Owen
* docs - add foreign data wrapper-related reference pages * remove the CREATE SERVER example referencing the default FDW * edits from David, and his -> their
-
Committed by Jimmy Yih
We currently exit VACUUM early when there is a concurrent operation on an AO relation. Instead of exiting early, go through the rest of the AO segment files to see if they have crossed the threshold for compaction.
-
Committed by Jimmy Yih
TRUNCATE rewrites the relation by creating a temporary table and swapping it with the real relation. For AO, this includes the auxiliary tables, which is concerning for the AO relation's pg_aoseg table, which records whether an AO segment file is available for write or waiting to be compacted/dropped. Since we do not currently invalidate the AppendOnlyHash cache entry, the entry could have invisible leaks in its AOSegfileStatus array that will be stuck in state AOSEG_STATE_AWAITING_DROP. These leaks persist until the user evicts the cache entry, either by not using the table (allowing another AO table to cache itself in that slot) or by restarting the database. We fix this issue by invalidating the cache entry at the end of TRUNCATE on AO relations.
-
Committed by Jimmy Yih
ALTER TABLE commands that are tagged as AT_SetDistributedBy require a gather motion and do their own variation of creating a temporary table for CTAS (basically bypassing the usual ATRewriteTable, which does perform AppendOnlyHash cache entry invalidation). Without the AppendOnlyHash cache entry invalidation, the entry could have invisible leaks in its AOSegfileStatus array that will be stuck in state AOSEG_STATE_AWAITING_DROP. These leaks persist until the user evicts the cache entry, either by not using the table (allowing another AO table to cache itself in that slot) or by restarting the database. We fix this issue by invalidating the cache entry at the end of AT_SetDistributedBy ALTER TABLE cases.
-
Committed by Jimmy Yih
The schema is named differently from the one used in the search_path, so all the tables, views, functions, etc. were incorrectly being created in the public schema.
-
Committed by Omer Arap
We had significant duplication between the hyperloglog extension and the utility library we use in the analyze-related code. This commit removes the duplication as well as a significant amount of dead code. It also fixes some compiler warnings and some Coverity issues. This commit also puts the hyperloglog functions in a separate schema which is not modifiable by non-superusers. Signed-off-by: Bhuvnesh Chaudhary <bchaudhary@pivotal.io>
-
Committed by Lisa Owen
-
- 04 July 2018, 7 commits
-
-
Committed by Daniel Gustafsson
Fix some of the more obvious breaches of common style in code I just read for another patchset. There are no logical changes introduced here, only rearrangement for clarity. Discussion: https://github.com/greenplum-db/gpdb/pull/5216 Reviewed-by: Venkatesh Raghavan <vraghavan@pivotal.io>
-
Committed by Daniel Gustafsson
The backup list is either a leftover debugging artefact, or its use was removed during the merge work and never made it into the rewritten commit history. Either way, it serves no purpose, so remove it from this hot code path. Discussion: https://github.com/greenplum-db/gpdb/pull/5216 Reviewed-by: Venkatesh Raghavan <vraghavan@pivotal.io>
-
Committed by Daniel Gustafsson
Commit e0409357 moved to using default estimates rather than interrogating the QEs for relations which lack statistics. As an effect of this, the cdb_default_stats_used member was hardcoded to false and the warnings for missing statistics never fired. Rather than resurrecting the warnings, this removes the code that attempts to figure out whether the warnings apply at all, since it seems quite expensive to run that in the hot path of every join query being planned. Discussion: https://github.com/greenplum-db/gpdb/pull/5216 Reviewed-by: Venkatesh Raghavan <vraghavan@pivotal.io>
-
Committed by Adam Lee
The map was missed by mistake; all AO loading actions need it.
-
Committed by Daniel Gustafsson
Commit 4483b7d3 removed spclocation from the tablespace catalog, but the \db command in psql wasn't updated to match, as the corresponding Greenplum version was backported prior to when it was introduced in upstream. This will eventually go away as we merge with PostgreSQL, but that's not an excuse for not fixing what is broken. Discussion: https://github.com/greenplum-db/gpdb/pull/5238 Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>
-
Committed by Ashwin Agrawal
Not sure why the setting of the GUC gp_vmem_protect_limit has a specific value for Darwin.
-
Committed by Todd Sedano
Authored-by: Todd Sedano <tsedano@pivotal.io>
-
- 03 July 2018, 6 commits
-
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
Starting with 8.4 commit 1d577f5e, the backend checks for the existence of the directory and, if it is not present, creates it. So we can avoid creating it in the utilities.
-
Committed by Ashwin Agrawal
The AO implementation aligns with the 8.4-and-later heap implementation: write the data during recovery and do not fail. Also note that, given the way the seek is performed during replay for AO, it will not fail if the file doesn't yet have that much data: the seek moves to the requested offset irrespective of the length of the file, and the write then succeeds (leaving a hole in the file in this case) rather than resulting in a seek failure. We write the data, and if a truncation has happened, it will happen again during recovery.
-
Committed by Ashwin Agrawal
The lock level looks fine, hence resolving the FIXMEs.
-
Committed by Ashwin Agrawal
There are many tests which flow through this code path, particularly in alter_table.sql. Nothing exploded with this removal, and gpcheckcat flagged nothing, which means it is fine to delete.
-
Committed by Hubert Zhang
-
- 02 July 2018, 1 commit
-
-
Committed by Jialun
- Introduce a new GUC, gp_resource_group_bypass: when it is on, queries in this session are not limited by resource groups.
-