- 21 Mar 2017, 4 commits
-
-
Committed by Jane Beckman
* Deprecation notices for gpmapreduce
* Fix typos, phrasing.
* Unneeded sentence.
* Fix typo
* Edits from pull request
* Edits from David Y.
-
Committed by David Sharp
This helps in predictably managing load on Concourse. Signed-off-by: David Sharp <dsharp@pivotal.io>
-
Committed by Jamie McAtamney
Previously, any restore that filtered based on table or schema name did not restore user-created CASTs because they were not included in the filter. This commit adds support for restoring casts with table and schema filters and the --change-schema flag.
-
Committed by Chris Hajas
gptransfer hangs at the end of the program if threads still exist when the --validate=md5 argument is passed on Python 2.7. This adds explicit exit calls. Authors: Chris Hajas and Jamie McAtamney
-
- 20 Mar 2017, 5 commits
-
-
Committed by Daniel Gustafsson
Since we're already building pg_upgrade as part of the toplevel GNUmakefile, add it to the contrib Makefile as well to allow `make -C contrib/ <target>` to include pg_upgrade. This also adds the support functions required by pg_upgrade.
-
Committed by yanchaozhong
-
Committed by mkiyama
-
Committed by Peifeng Qiu
The last call is never guaranteed for an aborting transaction, so register a resource owner callback to ensure proper cleanup. Signed-off-by: Haozhou Wang <hawang@pivotal.io>
-
Committed by Daniel Gustafsson
Most of the builtin analytical functions added in Greenplum have been deprecated in favour of the corresponding functionality in MADLib. The deprecation notice was committed in a61bf8b7; the code as well as the tests are removed here. The sum(array[]) function requires the matrix_add() backend code, and thus it remains. This removes matrix_add(), matrix_multiply(), matrix_transpose(), pinv(), mregr_coef(), mregr_r2(), mregr_pvalues(), mregr_tstats(), nb_classify() and nb_probabilities().
-
- 18 Mar 2017, 4 commits
-
-
Coverity complains that the function `cmp_deformed_tuple` accesses the datum from expression eval as an array. But we don't have that use case, due to the following error: the parser will complain that it can't have more than one column for sorting when we use a range in a window function, as shown below.
```
select depname, empno, salary,
       LAST_VALUE(salary) over (partition by depname
                                order by empno, salary
                                range between 5000 preceding and 3900 preceding) as highest_salary
from empsalary;
ERROR:  only one ORDER BY column may be specified when RANGE is used in a window specification
```
-
Committed by Tushar Dadlani
In the current system, greenplum-db-appliance-(.*) collides with greenplum-db-(.*). Signed-off-by: Tom Meyer <tmeyer@pivotal.io>
-
Committed by Dave Cramer
* Multi-dimensional arrays can now be used as arguments to a PL/Python function (this used to throw an error), and they can be returned as nested Python lists. This makes a backwards-incompatible change to the handling of composite types in arrays. Previously, you could return an array of composite types as "[[col1, col2], [col1, col2]]", but now that is interpreted as a two-dimensional array. Composite types in arrays must now be returned as Python tuples, not lists, to resolve the ambiguity, i.e. "[(col1, col2), (col1, col2)]". To avoid breaking backwards compatibility when not necessary, () is still accepted for arrays at the top level, but it is always treated as a single-dimensional array. Likewise, [] is still accepted for composite types when they are not in an array. Update the documentation to recommend using [] for arrays and () for composite types, with a mention that those other forms are also accepted in some contexts. The upstream patch is 94aceed3
* Give a hint when [] is incorrectly used for a composite type in an array. That used to be accepted, so let's try to give users a hint on why their PL/Python functions no longer work. See upstream 510e1b8e
* Add tests for additional conversions, and move them into plpython_types. Move the create function closer to the actual select, which is more like upstream.
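The ambiguity this commit resolves can be seen in plain Python: a list of lists and a list of tuples are structurally distinguishable, which is what now separates a two-dimensional array from an array of composite values. A toy sketch of the convention (illustrative Python only, not the actual PL/Python conversion code):

```python
# Under the new rules (upstream 94aceed3), lists denote array dimensions
# and tuples denote composite-type values, resolving the old ambiguity.
two_dim_array = [[1, 2], [3, 4]]   # now read as a 2-D array
array_of_rows = [(1, 2), (3, 4)]   # now read as an array of composite values

def classify(value):
    """Toy classifier mirroring the list-vs-tuple convention."""
    if isinstance(value, list) and value and isinstance(value[0], list):
        return "multi-dimensional array"
    if isinstance(value, list) and value and isinstance(value[0], tuple):
        return "array of composite values"
    return "one-dimensional array"

print(classify(two_dim_array))   # multi-dimensional array
print(classify(array_of_rows))   # array of composite values
```

This is why the old "[[col1, col2], [col1, col2]]" return style for composites had to change: it is indistinguishable from a genuine two-dimensional array.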
-
Committed by Daniel Gustafsson
Since we already have a boolean yes/no switch for coverage-enabled builds in Makefile.global from autoconf, use that rather than inspecting the CFLAGS, for ease of reading the code. Also use the top_builddir variable when setting the libdir; while it is unlikely to move in the hierarchy, it's the right thing to do.
-
- 17 Mar 2017, 17 commits
-
-
Committed by Haozhou Wang
The old cdbfast test suite had tests for these; they look good, so move them to ICW. Signed-off-by: Haozhou Wang <hawang@pivotal.io> Signed-off-by: Adam Lee <ali@pivotal.io>
-
Committed by Haisheng Yuan
-
Committed by Daniel Gustafsson
This backports commit 79d39420d in its entirety and the required parts of 3cba8240 (original commit messages in full below). The difference from upstream is that GPDB uses SFRM_Materialize instead of _Randomize, as well as the required pg_proc entry changes. Includes the upstream tests and bumps the catalog.

commit 79d39420d6cd60cab141b1f13185a2415edfa4a3
Author: Andrew Dunstan <andrew@dunslane.net>
Date: Fri Mar 29 14:12:13 2013 -0400

Add new JSON processing functions and parser API. The JSON parser is converted into a recursive descent parser, and exposed for use by other modules such as extensions. The API provides hooks for all the significant parser events, such as the beginning and end of objects and arrays, and providing functions to handle these hooks allows for fairly simple construction of a wide variety of JSON processing functions. A set of new basic processing functions and operators is also added, which use this API, including operations to extract array elements, object fields, get the length of arrays and the set of keys of a field, deconstruct an object into a set of key/value pairs, and create records from JSON objects and arrays of objects. Catalog version bumped. Andrew Dunstan, with some documentation assistance from Merlin Moncure.

commit 3cba8240
Author: Itagaki Takahiro <itagaki.takahiro@gmail.com>
Date: Mon Feb 21 14:08:04 2011 +0900

Add ENCODING option to COPY TO/FROM and file_fdw. File encodings can be specified separately from the client encoding. If not specified, the client encoding is used for backward compatibility. Cases where the encoding doesn't match the client encoding are slower than matched cases, because we don't have conversion procs for other encodings. Performance improvement would be future work. Original patch by Hitoshi Harada, modified by me.
-
Committed by Ashwin Agrawal
With commit abb9dd8c, gpstop -m now behaves similarly to gpstop without -m, hence add -a to the tests so that it doesn't prompt.
-
Committed by Ashwin Agrawal
The PR GitHub page currently doesn't show that the PR pipeline is running, and incorrectly shows passed if it's a rebase. This fixes it so that the page correctly reflects that the job has started running.
-
Committed by Roman Shaposhnik
-
Committed by C.J. Jameson
Signed-off-by: C.J. Jameson <cjameson@pivotal.io>
-
Committed by Roman Shaposhnik
-
Committed by Roman Shaposhnik
-
Committed by Ashwin Agrawal
pg_amop and pg_amproc oids are not synchronized between master and segments, hence adding them to the known differences. There are more tables which need to be cleaned up here, based on the function RelationNeedsSynchronizedOIDs() in catalog.c; leaving that for a separate commit. Also, since there exists a simpler way to skip performing the check for a column, use the same for the `indcheckxmin` column of pg_index instead of what was added as part of commit 79caf1c0. Also, clean up some checks existing for versions prior to 4.1.
-
Committed by Ashwin Agrawal
Currently, SPLIT partition of a multi-level partition table's default partition causes catalog inconsistency, as the constraint is not created for the newly created tables. Hence, to enable gpcheckcat to run after ICW, drop the table as a temporary fix.
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
gpcheckcat is handy for detecting any catalog issues introduced, hence start running it at the end of ICW. Also, make sure different targets do not use the regression database but a database of their own.
-
Committed by yanchaozhong
-
Committed by Larry Hamel
For GUCs that are read-only when not accompanied by --skipvalidation, instead of saying a hidden GUC doesn't exist, say that it is not changeable and refer the user to the documentation. Signed-off-by: Chumki Roy <croy@pivotal.io>
-
Committed by Larry Hamel
-
Committed by Jingyi Mei
- BLD_PYTHON -> PYTHONHOME.
- New variable PYTHON is the python binary.
- Export both PYTHON and PYTHONHOME.
- Remove unused BLD_PYTHON_PERFMON variables.
Signed-off-by: David Sharp <dsharp@pivotal.io>
-
- 16 Mar 2017, 5 commits
-
-
Committed by Pengzhou Tang
Signed-off-by: Ning Yu <nyu@pivotal.io>
-
Committed by Ning Yu
Any resgroup created with the CREATE RESOURCE GROUP syntax can be dropped with the DROP RESOURCE GROUP syntax; the default resgroups, default_group and admin_group, can't be dropped; only a superuser can drop resgroups; resgroups with roles bound to them can't be dropped.
```
-- drop a resource group
DROP RESOURCE GROUP rg1;
```
*NOTE*: this commit only implements the DROP RESOURCE GROUP syntax; the actual resource management is not yet supported and will be provided later based on these syntax commits.
*NOTE*: test cases are provided for both CREATE and DROP syntax.
Signed-off-by: Pengzhou Tang <ptang@pivotal.io>
-
Committed by Pengzhou Tang
There are two default resource groups, 'default_group' and 'admin_group'; to create more, use the CREATE RESOURCE GROUP command. Group options can be specified with the WITH clause; at the moment 'cpu_rate_limit' and 'memory_limit' are mandatory, while the other options are all optional.
```
-- create a resource group
CREATE RESOURCE GROUP rg1 WITH (
  concurrency=1, cpu_rate_limit=.2, memory_limit=.2
);
-- query the resource group
SELECT oid FROM pg_resgroup WHERE rsgname='rg1';
SELECT * FROM gp_toolkit.gp_resgroup_config WHERE groupname='rg1';
-- create/alter a role and assign it to this group
CREATE ROLE r1 RESOURCE GROUP rg1;
ALTER ROLE r2 RESOURCE GROUP rg1;
```
*NOTE*: this commit only implements the SQL syntax; the actual resource limitation does not take effect at the moment, as resource groups are still under development.
*NOTE*: test cases are not included in this commit, because once a testing resgroup is created it can't be dropped due to the lack of DROP syntax, so the test case couldn't be re-run and would introduce side effects to the system. It's better to provide test cases after DROP RESOURCE GROUP is implemented.
Signed-off-by: Ning Yu <nyu@pivotal.io>
-
Committed by Adam Lee
gpfdist waits 5 seconds to close SSL sessions to work around a system-related issue on Solaris; this commit restricts the workaround to the cmdline only. Signed-off-by: Adam Lee <ali@pivotal.io> Signed-off-by: Yuan Zhao <yuzhao@pivotal.io>
-
Committed by Tushar Dadlani
If you want to get the entirety of the gpdb source code, including submodules, you currently cannot, as you would need your GitHub key to clone the submodules.
-
- 15 Mar 2017, 5 commits
-
-
Committed by Dave Cramer
Fix tests; add init_file to ignore GPDB segment output; use init_file from the main regression tests; fix expected results. Requires GPDB to be configured with --with-openssl to pass.
-
Committed by Daniel Gustafsson
-
Committed by Heikki Linnakangas
qp_olap_group2 did an elaborate dance with optimizer_log=on, client_min_messages, and gpdiff rules to detect whether queries fell back to the traditional planner. Replace all that with the new, simple optimizer_trace_fallback GUC. Also enable optimizer_trace_fallback in the 'gp_optimizer' test. Since this test is all about testing ORCA, it seems appropriate to memorize which queries currently fall back and which do not, so that we detect regressions where we start to fall back on queries that ORCA used to be able to plan. There was one existing test that explicitly set client_min_messages, like the tests in qp_olap_group2, to detect fallback. I kept those extra logging GUCs for that one case, so that we have some coverage for that too, although I'm not sure how worthwhile it is anymore. In passing, in the one remaining test in gp_optimizer that sets client_min_messages='log', don't assume that log_statement is set to 'all'. Setting optimizer_trace_fallback=on for 'gp_optimizer' caught the issue fixed in the previous commit, that one of the ANALYZE queries still used ORCA.
-
Committed by Heikki Linnakangas
For the same reasons, we disabled ORCA in all the other ANALYZE queries; this one was missed. I believe we don't want to use ORCA for any of ANALYZE's internal queries, current or future, so move the disabling of ORCA one level up, into a wrapper around analyze_rel().
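Moving the "disable ORCA" logic one level up is an instance of a save/toggle/restore wrapper around a single entry point, rather than toggling a flag at every internal call site. A hedged Python sketch of the pattern (all names here are illustrative stand-ins, not actual GPDB code):

```python
# Stand-in for the ORCA on/off GUC.
optimizer_enabled = True

def analyze_internal(flags_seen):
    # Stand-in for one of analyze_rel()'s internal queries;
    # records the optimizer flag it observed.
    flags_seen.append(optimizer_enabled)

def analyze_rel_wrapper(flags_seen):
    # Disable the optimizer for the whole call, restoring it afterwards
    # even if the inner work raises, so future internal queries are
    # covered without touching each call site.
    global optimizer_enabled
    saved = optimizer_enabled
    optimizer_enabled = False
    try:
        analyze_internal(flags_seen)
    finally:
        optimizer_enabled = saved
```

The try/finally restore is the key design choice: a failed ANALYZE must not leave the optimizer permanently disabled for the rest of the session.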
-
Committed by Eamon
[ci skip]
-