- 06 Nov 2018, 6 commits
-
-
Committed by Abhijit Subramanya

Add a test for the nullif expression to make sure that the ORCA translators are working as expected.

Co-authored-by: Chris Hajas <chajas@pivotal.io>
-
Committed by Lisa Owen
-
Committed by Heikki Linnakangas

Avoid looking through domains, array types, etc. on every call. That seems like a more sensible API, since the data types don't change during the lifetime of a CdbHash. Make cdbhash() more convenient for callers by handling NULLs within the function, so that callers don't need to do the NULL check and call either cdbhash() or cdbhashnull(). This also fixes the performance issue caused by the syscache lookups reported in https://github.com/greenplum-db/gpdb/issues/5961: the data type is now checked only once, when the CdbHash object is initialized, instead of for every row.

Reviewed-by: Melanie Plageman <mplageman@pivotal.io>
Reviewed-by: Zhenghua Lyu <zlv@pivotal.io>
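The pattern this commit describes — resolve type information once at initialization, and absorb NULL handling into the hash call itself — can be sketched as follows. This is a minimal Python illustration of the idea only; all names here are hypothetical, and the real implementation lives in GPDB's C code (cdbhash.c).

```python
# Illustrative sketch of the CdbHash pattern described above (hypothetical
# names, not GPDB's actual API): per-column type resolution happens exactly
# once at init, and NULLs are handled inside the hash function so callers
# never choose between cdbhash() and cdbhashnull().

NULL_HASH = 0x9E3779B9  # arbitrary fixed value standing in for the NULL hash


def resolve_base_type(type_name):
    # Stand-in for the syscache lookups (domains, array types, ...) that
    # previously happened on every call.
    aliases = {"varchar": "text", "int4": "int"}
    return aliases.get(type_name, type_name)


class CdbHash:
    def __init__(self, column_types):
        # The expensive type resolution happens once here, not once per row.
        self.base_types = [resolve_base_type(t) for t in column_types]

    def hash_value(self, value):
        # NULLs are handled inside the function; no caller-side special case.
        if value is None:
            return NULL_HASH
        return hash(value) & 0xFFFFFFFF


h = CdbHash(["varchar", "int4"])
```

The point of the design is that the per-row path does no catalog work at all; everything type-dependent is frozen into the object when it is created.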
-
Committed by Mel Kiyama

* docs - CREATE TABLESPACE command: removed filespace information; added per-segment location syntax; added the function gp_tablespace_location(). GPDB 6.0 ONLY. NOTE: Does not include topics in the Admin Guide. Assigning a tablespace for temp files currently does not work.
* docs - removed references to filespace.
* Various line edits and typo fixes.
* PostgreSQL -> Greenplum Database
-
Committed by David Yozie

* Update gp_create_table_random_default_distribution to describe the new 6.x rules
* Update from Daniel
* Use the same wording for the behavior in CREATE TABLE
-
Committed by Adam Berlin

When there are many concurrent operations on AO tables, it is possible that the aoentry fetched during RegisterSegnoForCompactionDrop is a brand new entry that does not contain information about the current vacuum of the relation. In this case, compactedSegmentFileList contains the accurate list of segment files that have been compacted. Remove the elogif that assumes the aoentry is accurate. Note: the aoentry will not be evicted again after RegisterSegnoForCompactionDrop, because the entry is marked as 'in use'.

Co-authored-by: Asim R P <apraveen@pivotal.io>
-
- 05 Nov 2018, 10 commits
-
-
Committed by Heikki Linnakangas

We had duplicated code in a few places to reconstruct a DistributedBy clause from the policy of an existing relation. Use the existing function to do that. Rename the function to make_distributedby_for_rel(), which is a more descriptive name.

Reviewed-by: Ning Yu <nyu@pivotal.io>
-
Committed by Heikki Linnakangas

loci_compatible() performs a more relaxed check than equal(), so doing the more stringent equal() check first is a waste of time.
-
Committed by Heikki Linnakangas

All callers of cdbpathlocus_compare were asking for a strict equality check.
-
Committed by Heikki Linnakangas

As far as I can tell, GPDB works the same as PostgreSQL with regard to path keys used for append rels, so I don't see why we'd need to do any transformation here. Regression tests pass without it. This code has been moved around as part of the 9.2 merge, and some other cleanup, but goes all the way back to 2007 in the old pre-open-sourcing repository. The commit that introduced it was a massive commit with the message "Merge of Release-3_1_0_0-alpha1-branch branch down to HEAD", so I lost the trace of its origin there. I guess it was needed back then, but it seems unnecessary now.
-
Committed by Heikki Linnakangas

Notes in the test case about backslash escaping:
- Need to add ESCAPE 'OFF' to COPY ... PROGRAM
- echo behaves differently on different platforms, so force bash with the -E option

Signed-off-by: Ming LI <liming01@gmail.com>
-
Committed by Ming LI

1) Fixes github issue https://github.com/greenplum-db/gpdb/issues/5925: if an environment variable value contains a single quote, it reports an error:

```
ERROR: external table env command ended with error. sh: -c: line 0: unexpected EOF while looking for matching `'' (seg0 slice1 172.31.81.199:6000 pid=7192)
DETAIL: sh: -c: line 1: syntax error: unexpected end of file
```

The external program executed with COPY PROGRAM or an EXECUTE-type external table is passed a bunch of environment variables. They are passed by adding them to the command line of the program being executed, as "<var>=<value> && export VAR && ...". However, the quoting in the code that builds that command line was broken. Fix it, and add a test.

2) Also fixed: a backslash should not be escaped by duplicating it. When single quotes are used as the shell quote, only ' needs escaping (as '\''); there is no need to escape backslashes. Most escaping problems occur when displaying the value.

Notes in the test case about backslash escaping:
- Need to add ESCAPE 'OFF' to the EXTERNAL WEB TABLE
- Need to add ESCAPE '&' for the LIKE predicate
- For the shell 'env' output, don't separate it into 2 columns, because the CI environment has odd characters in variable values, e.g. "LS_OPTIONS=-N --color=tty -T 0" and "LESSOPEN=||/usr/bin/lesspipe.sh %s"
- echo behaves differently on different platforms, so force bash with the -E option
-
Committed by Ming Li

Signed-off-by: Tingfang Bao <bbao@pivotal.io>
-
Committed by BaiShaoqi
-
Committed by Daniel Gustafsson

The duplication arose due to Greenplum backporting a commit which we've now gained via the merge. Remove the hunk which came via the backport to align us more closely with upstream.
-
Committed by Heikki Linnakangas

The check in the parser didn't recurse correctly, and therefore only checked whether the last DISTRIBUTED BY column was the same as any previous one. As long as the last column was unique, duplicates elsewhere in the list were ignored.

Reviewed-by: Shaoqi Bai <sbai@pivotal.io>
-
- 03 Nov 2018, 6 commits
-
-
Committed by Bhuvnesh Chaudhary
-
Committed by Bhuvnesh Chaudhary
-
Committed by Bhuvnesh Chaudhary

Co-authored-by: Sambitesh Dash <sdash@pivotal.io>
-
Committed by Daniel Gustafsson

Reviewed-by: Jacob Champion <pchampion@pivotal.io>
Reviewed-by: Mel Kiyama <mkiyama@pivotal.io>
-
Committed by Lisa Owen
-
Committed by Mel Kiyama

* docs - add information about nested cgroups
* docs - nested cgroup information: updated the note for resource groups; added a note to the gp_toolkit.gp_resgroup_config table description
* docs - nested cgroup information: updates based on review comments
-
- 02 Nov 2018, 1 commit
-
-
Committed by Zhenghua Lyu

The current reshuffle implementation is based on split-update. Previously, we marked a query as split-update if it was an UPDATE that updated some of the table's hash distribution columns. We should also mark the query as split-update when it is a reshuffle, even if the table is not hash-distributed.
-
- 01 Nov 2018, 17 commits
-
-
Committed by Jacob Champion

For now, just support gpdemo clusters.

Co-authored-by: David Krieger <dkrieger@pivotal.io>
-
Committed by Jacob Champion

They don't work yet. Modify get_segment_datadirs() so it only pulls in information from primary segments, and unset mirror-related variables in the new gpinitsystem configuration.

Co-authored-by: David Krieger <dkrieger@pivotal.io>
-
Committed by Jacob Champion

...for greater cross-platform portability. Enable extended regex syntax instead, which works on both BSD and GNU sed.

Co-authored-by: David Krieger <dkrieger@pivotal.io>
-
Committed by Jacob Champion

Concourse uses the default port, but for local builds we can't assume that.

Co-authored-by: David Krieger <dkrieger@pivotal.io>
-
Committed by Jacob Champion

Most of the global variables we factored up in previous steps are now set according to the mode of operation: for Concourse (-c), we assume several hardcoded locations on disk, whereas for local mode we pull from GPHOME, MASTER_DATA_DIRECTORY, et al. Several installation steps can be skipped or amended in local mode, and sqldump loads are only done if explicitly requested with the -s option.

Co-authored-by: David Krieger <dkrieger@pivotal.io>
-
Committed by Jacob Champion

Eventually we'll only want to perform cluster installation in Concourse mode, so factor it into its own helper (prep_new_cluster). Also reduce the duplication in the generation of a new directory name (get_new_datadir).

Co-authored-by: David Krieger <dkrieger@pivotal.io>
-
Committed by Jacob Champion

The following concepts are different between Concourse and a local deployment:
- master hostname
- data directory locations and prefix
- gpinitsystem configuration location

Give them their own variables in preparation for supporting local runs.

Co-authored-by: David Krieger <dkrieger@pivotal.io>
-
Committed by Jacob Champion
This way, even if something inside the loop reads from stdin, the loop's execution will not be affected.
-
Committed by Jacob Champion
This doesn't need to be checked in.
-
Committed by Daniel Gustafsson
Although I admittedly somewhat prefer boostrap..
-
Committed by Daniel Gustafsson

Rename readRecoveryCommandFile() back to the name used in upstream, and remove the emode parameter and associated log entry. Also tidy up the xlog.h header by removing stale entries and making functions only used in a single context static, with associated small cleanups.

Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>
Reviewed-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
Reviewed-by: Jimmy Yih <jyih@pivotal.io>
-
Committed by Daniel Gustafsson

Materialized views are not backed by append-only tables, so calling make_new_heap() requesting an AO blockdir will only cause an extra lookup on the relstorage for no purpose.

Reviewed-by: Ashwin Agrawal <aagrawal@pivotal.io>
Reviewed-by: Jimmy Yih <jyih@pivotal.io>
-
Committed by Daniel Gustafsson

On the QEs, make sure to keep the API property of CreateExtension() by returning the Oid of the newly created extension object. No callsites currently use the return value, so this has little to no effect, but in case we start using it we may as well get it right from the start.

Reviewed-by: BaiShaoqi <sbai@pivotal.io>
Reviewed-by: Melanie Plageman <mplageman@pivotal.io>
Reviewed-by: Jimmy Yih <jyih@pivotal.io>
-
Committed by Zhenghua Lyu

For a replicated table, we have to examine the data on each segment to make sure the reshuffle succeeded. This commit defines a UDF using plpythonu; in that UDF, the Python code connects to a specific segment in utility mode to access the replicated table.
-
Committed by Heikki Linnakangas

This makes it possible to use the functions without getting errors, if there is a chance that the file might be removed or renamed concurrently. pg_rewind needs to do just that, although this could be useful for other purposes too. (The changes to pg_rewind to use these functions will come in a separate commit.) The read_binary_file() function isn't very well suited for extension.c's purposes anymore, if it ever was. So bite the bullet and make a copy of it in extension.c, tailored for that use case. This seems better than the accidental code reuse, even if it means some more lines of code. Michael Paquier, with plenty of kibitzing by me.
-
Committed by Alexandra Wang

Otherwise the backup label file generated by pg_rewind will make pg_ctl start fail. This is also a step toward upstream code.

Co-authored-by: Alexandra Wang <lewang@pivotal.io>
Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
-
Committed by Alexandra Wang

We have the 9.5 pg_rewind code and tests, but 9.5's pg_rewind tests are written in TAP and use common functions added to shared TAP modules. To avoid unnecessary cherry-picking of upstream code for the TAP tests, we leverage the 9.4 pg_rewind tests instead. These use the normal pg_rewind framework; the essential coverage is the same, only the framework differs. Once we catch up to 9.5, these tests can be retired in favor of the TAP tests present there. The tests were modified to work in GPDB, mainly:
- Added --data-checksum for initdb
- Always use utility mode to connect
- Greenplum initdb logs extra messages to stderr, so we redirected them to the log file

Co-authored-by: Ashwin Agrawal <aagrawal@pivotal.io>
-