- 15 Sep 2017, 18 commits
-
-
Committed by Heikki Linnakangas
The bzip2 library is only used by the gfile/fstream code, used for external tables and gpfdist. The usage of bzip2 was in #ifndef WIN32 blocks, so it was only built on non-Windows systems. Instead of tying it to the platform, use a proper autoconf check and HAVE_LIBBZ2 flags. This makes it possible to build gpfdist with bzip2 support on Windows, as well as to build without bzip2 on non-Windows systems. That makes it easier to test the otherwise Windows-only codepaths on other platforms.

--with-libbz2 is still the default, but you can now use --without-libbz2 if you wish. I'm sure that some regression tests will fail if you actually build the server without libbz2, but I'm not going to address that right now. We have similar problems with other features that are in principle optional, but cause some regression tests to fail.

Also use "#ifdef HAVE_LIBZ" rather than "#ifndef WIN32" to enable/disable zlib support in gpfdist. Building the server still fails if you use --without-zlib, but at least you can build the client programs without zlib, also on non-Windows systems.

Remove the obsolete copy of bzlib.h from the repository while we're at it.
-
Committed by Heikki Linnakangas
If the sample of a column consists entirely of "too wide" values, which are left out of the sample when it's passed to the compute_stats function, we pass an empty sample to it. The default compute_stats gets confused by that, and computes the null fraction as 0 / 0 = NaN, so we end up storing NaN as stanullfrac.

If all the values in the sample are wide values, then they're surely not NULLs, so the right thing to do is to store stanullfrac = 0. That is somewhat inconsistent with the normal compute_stats function, which effectively treats too-wide values as not existing at all, which artificially inflates the null fraction. Another inconsistency is that we store stawidth=1024 in this special case, while the normal computation ignores the wide values in computing stawidth. If we wanted to do something about that, we should adjust the normal computation to take those wide values better into account, but that's a different story; at least we now won't store NaN in stanullfrac any longer.

Fixes github issue #3259.
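A minimal Python sketch of the guard described above (the function and variable names are illustrative, not GPDB's actual C code): when every sampled value is "too wide", the filtered sample is empty and a naive division would produce 0 / 0 = NaN, so the fix is to report 0 instead.

```python
def null_fraction(null_count, sample_count):
    """Compute stanullfrac, guarding against an empty sample.

    If the sample passed to compute_stats is empty because every value
    was "too wide", the values were certainly not NULL, so report 0
    rather than the NaN that 0 / 0 would produce.
    """
    if sample_count == 0:
        return 0.0
    return null_count / sample_count

print(null_fraction(0, 0))   # empty (all-too-wide) sample: 0.0, not NaN
print(null_fraction(5, 20))  # normal case: 0.25
```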
-
Committed by Heikki Linnakangas
Commit b4f125bd changed ALTER TYPE SET DEFAULT ENCODING to no longer accept SQL type aliases. A consequence of that is that "char" no longer meant "character varying", but the actual "char" datatype. Change the tests to use the PostgreSQL name for that datatype, "bpchar".
-
Committed by Zhenghua Lyu
-
Committed by Gang Xiong
CCP tag 1.0.0-beta.1 doesn't support CentOS 7; revert the 'mpp_resource_group_centos7' job from the pipeline.
-
Committed by Heikki Linnakangas
This is a bit unfortunate, in case someone is using them. But as it happens, we haven't even mentioned the ALTER TYPE SET DEFAULT ENCODING command in the documentation, so there probably aren't many people using them, and you can achieve the same thing by using the normal, non-alias names like "varchar" instead of "character varying".
-
Committed by Heikki Linnakangas
This way we don't need the weird half-transformation of WindowDefs. Makes things simpler.
-
Committed by Heikki Linnakangas
The 'location' field is just to give better error messages. It should not be considered when testing whether two nodes are equal. (Note that the COMPARE_LOCATION_FIELD() macro that we now consistently use on the 'location' field is a no-op.) I noticed this while working on a patch that would compare two ColumnRefs to see if they are equal, and could be collapsed to one.
-
Committed by Heikki Linnakangas
While working on the 8.4 merge, I had a bug that tripped an Insist inside a PG_TRY-CATCH. That was very difficult to track down, because of the way the error is logged here: using ereport() includes the filename and line number where the error is re-emitted, not the original place. So all I got was "Unexpected internal error" in the log, with a meaningless filename & lineno.

This rewrites the way the error is reported so that it preserves the original filename and line number. It also uses the original error level and preserves all the other fields.
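The same pitfall exists in other languages. As a hedged Python analogy of the fix (not the actual C code, which works on PostgreSQL's ErrorData): re-raising with a bare `raise` keeps the original file and line in the traceback, whereas wrapping the error in a new exception would point at the re-raise site instead.

```python
import sys
import traceback

def failing_operation():
    # The *original* error site; its file/line is what we want in the log.
    raise RuntimeError("Unexpected internal error")

def run_guarded():
    try:
        failing_operation()
    except RuntimeError:
        # A bare `raise` re-throws the caught exception with its original
        # traceback intact -- analogous to re-reporting the caught error
        # with its original filename/lineno instead of emitting a fresh
        # ereport() at this spot.
        raise

try:
    run_guarded()
except RuntimeError:
    frames = traceback.extract_tb(sys.exc_info()[2])
    # The innermost frame still points at the original raise site.
    print(frames[-1].name)  # -> failing_operation
```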
-
Committed by xiong-gang
Signed-off-by: Zhenghua Lyu <zlv@pivotal.io>
-
Committed by Ming LI
-
Committed by Zhenghua Lyu
A user can configure a resource group so that a query's query memory is zero; in such cases the query falls back to work memory. And since query_mem's type is uint64, simply remove the assert in the SPI execution code.
-
Committed by Xiaoran Wang
Signed-off-by: Xiaoran Wang <xiwang@pivotal.io>
-
Committed by Venkatesh Raghavan
-
Committed by Ashwin Agrawal
Using the gp_segment_configuration catalog table, one can easily find whether mirrors exist or not; we don't need a special table to communicate the same. Earlier, gp_fault_strategy used to convey 'n' for a mirrorless system, 'f' for replication, and 's' for SAN mirrors. Since support for 's' was removed in 5.0, the only purpose gp_fault_strategy served was to indicate whether the system was mirrored. Hence, delete the gp_fault_strategy table and, at the required places, use gp_segment_configuration to find the required info.
-
Committed by Omer Arap
-
Committed by Omer Arap
GPORCA should not spend time extracting column statistics that are not needed for cardinality estimation. This commit eliminates the overhead of unnecessarily requesting and generating statistics for columns that are not used in cardinality estimation.

E.g.: `CREATE TABLE foo (a int, b int, c int);` For table foo, the query below only needs stats for column `a`, which is the distribution column, and column `c`, which is the column used in the where clause. `select * from foo where c=2;` However, prior to this commit, the column statistics for column `b` were also calculated and passed for cardinality estimation. The only information the optimizer needs about column `b` is its `width`; for this tiny piece of information, we transferred all the statistics for that column.

This commit and its counterpart commit in GPORCA ensure that the column width information is passed and extracted in the `dxl:Relation` metadata information. Preliminary results for short-running queries show up to a 65x performance improvement.

Signed-off-by: Jemish Patel <jpatel@pivotal.io>
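A hedged Python sketch of the idea (not ORCA's actual interface; the function and parameter names are invented for illustration): only columns that matter to cardinality estimation get full statistics, and every other column contributes just its width.

```python
def plan_stats_requests(all_columns, dist_columns, predicate_columns):
    """Split columns into those needing full stats vs. width-only.

    Full statistics (histograms, distinct counts, ...) are requested
    only for columns relevant to cardinality estimation: the
    distribution key and columns referenced in predicates. The rest
    need only their width.
    """
    needed = set(dist_columns) | set(predicate_columns)
    full = [c for c in all_columns if c in needed]
    width_only = [c for c in all_columns if c not in needed]
    return full, width_only

# CREATE TABLE foo (a int, b int, c int);  -- distributed by a
# select * from foo where c=2;
full, width_only = plan_stats_requests(["a", "b", "c"], ["a"], ["c"])
print(full)        # ['a', 'c'] -- full stats requested
print(width_only)  # ['b']      -- only width is transferred
```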
-
Committed by Lisa Owen
-
- 14 Sep 2017, 13 commits
-
-
Committed by Weinan WANG
gpload hangs due to a non-reentrant function being invoked in a signal handler. Instead of using libapr, we register the TERM signal in libevent, so that the signal handler runs in the asynchronous model, avoiding the non-reentrancy problem.
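The safe pattern can be sketched in Python (gpload itself is a Python utility, though this is an illustration of the general technique, not its actual code, and assumes a Unix platform): the handler only sets a flag, and the non-reentrant work (logging, allocation, I/O) happens later in normal control flow.

```python
import os
import signal

term_requested = False

def on_term(signum, frame):
    # Only set a flag here: calling non-reentrant functions (malloc,
    # stdio, logging) inside a signal handler can deadlock or corrupt
    # state if the signal interrupted one of those same functions.
    global term_requested
    term_requested = True

signal.signal(signal.SIGTERM, on_term)

# Simulate receiving SIGTERM.
os.kill(os.getpid(), signal.SIGTERM)

# Main loop: the expensive, non-reentrant cleanup happens here,
# outside the handler.
if term_requested:
    print("terminating cleanly")
```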
-
Committed by Yuan Zhao
1. Copy test/regress/*.pm files to the install location to support regression test diffs. 2. Set the LIBPATH and GP_LIBPATH_FOR_PYTHON env vars for AIX. Signed-off-by: Peifeng Qiu <pqiu@pivotal.io>
-
Committed by Heikki Linnakangas
Although I'm not too familiar with SystemTap, I'm pretty sure that recent versions can do user space tracing better. I don't think anyone is using these hacks anymore, so remove them.
-
Committed by Ning Yu
* resgroup: move MyResGroupSharedInfo into MyResGroupProcInfo. MyResGroupSharedInfo is now replaced with MyResGroupProcInfo->group.
* resgroup: retire resGranted in PGPROC. When resGranted == false we must have resSlotId == InvalidSlotId; when resGranted != false we must have resSlotId != InvalidSlotId. So we can retire resGranted and keep only resSlotId.
* resgroup: rename sharedInfo to group. In resgroup.c there used to be both `group` and `sharedInfo` for the same thing; now only `group` is used.
* resgroup: rename MyResGroupProcInfo to self. We want to use this variable directly, so a short name is better.
-
Committed by Richard Guo
running or waiting in resource group.
-
Committed by Daniel Gustafsson
* Use built-in JSON parser for PXF fragments. Instead of relying on an external library, use the built-in JSON parser in the backend for PXF fragments parsing. Since this replaces the current implementation with an event-based callback parser, the code is more complicated, but dogfooding the parser that we want extension writers to use is a good thing. This removes the dependency on json-c from autoconf, and enables building PXF on Travis for extra coverage.
* Use elog for internal errors, and ereport for user errors. Internal errors where we are interested in the source filename should use elog(), which will decorate the error messages automatically with this information. The connection error is interesting for the user, however, so use ereport() there instead.
-
Committed by dyozie
-
Committed by Kris Macoskey
Migration efforts for Concourse 3 that are backwards compatible with Concourse 2.7.3. Includes using the image_resource in external task yamls for pipeline jobs that use a SLES-based docker image. It is currently unknown why the SLES image in particular causes image_resource issues. Signed-off-by: Divya Bhargov <dbhargov@pivotal.io> Signed-off-by: Jim Doty <jdoty@pivotal.io>
-
Committed by Heikki Linnakangas
This is the spec-compliant spelling, but GPDB has only allowed "agg OVER (window)" so far. With this commit, the parens are still allowed, for backwards-compatibility. Change deparsing code to also use the non-parens syntax in view definitions and EXPLAIN. Adjust expected output of regression tests accordingly.
-
Committed by Alexander Denissov
-
Committed by Jimmy Yih
We are now on Postgres 8.4, which apparently activated some conditional statements in describe.c. This one was missed in the first 8.4 merge chunk, which makes psql -l currently error out. Reported by Brian Lu on the Greenplum Developers mailing list: https://groups.google.com/a/greenplum.org/forum/#!topic/gpdb-dev/5l7J2j5yla8
-
Committed by Simon Riggs
Update README to explain prerequisites for correct access to LSN fields of a page. Independent chunk removed from checksums patch to reduce size of patch. (cherry picked from commit 1c563a2a)
-
Committed by C.J. Jameson
-
- 13 Sep 2017, 9 commits
-
-
Committed by Adam Lee
'x' is for eXtensions; 'E' is for External tables, which have the relstorage 'x'.
-
Committed by Xiaoran Wang
1) Give an error message about the proxy. 2) Change the server_side_encryption default value from "none" to an empty string. Signed-off-by: Xiaoran Wang <xiwang@pivotal.io>
-
Committed by Adam Lee
commit 3d009e45
Author: Heikki Linnakangas <heikki.linnakangas@iki.fi>
Date: Wed Feb 27 18:17:21 2013 +0200

Add support for piping COPY to/from an external program. This includes backend "COPY TO/FROM PROGRAM '...'" syntax, and corresponding psql \copy syntax. Like with reading/writing files, the backend version is superuser-only, and in the psql version, the program is run in the client.

In passing, the psql \copy STDIN/STDOUT syntax is subtly changed: if the stdin/stdout is quoted, it's now interpreted as a filename. For example, "\copy foo from 'stdin'" now reads from a file called 'stdin', not from standard input. Before this, there was no way to specify a filename called stdin, stdout, pstdin or pstdout.

This creates a new function in pgport, wait_result_to_str(), which can be used to convert the exit status of a process, as returned by wait(3), to a human-readable string.

Etsuro Fujita, reviewed by Amit Kapila.

Signed-off-by: Adam Lee <ali@pivotal.io>
Signed-off-by: Ming LI <mli@apache.org>
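A hedged Python sketch of what a `wait_result_to_str()`-style helper does (the exact message strings here are illustrative, not pgport's): decode a raw wait(3) status into a readable description, assuming the POSIX status encoding.

```python
import os

def wait_result_to_str(status):
    """Render a raw wait(3) status as a human-readable string,
    similar in spirit to the pgport helper added by this commit."""
    if os.WIFEXITED(status):
        code = os.WEXITSTATUS(status)
        if code == 0:
            return "child process exited successfully"
        return "child process exited with exit code %d" % code
    if os.WIFSIGNALED(status):
        return "child process was terminated by signal %d" % os.WTERMSIG(status)
    return "child process exited with unrecognized status %d" % status

# In the POSIX encoding, a normal exit code lives in the high byte.
print(wait_result_to_str(3 << 8))  # -> child process exited with exit code 3
print(wait_result_to_str(0))       # -> child process exited successfully
```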
-
Committed by Jimmy Yih
It seems that we can no longer use ${} in the env var fields so remove them to get our Travis builds green again. Reference: https://github.com/travis-ci/travis-build/commit/a6779473124442fa748795fa9fd47afc529fc9d4
-
Committed by Nadeem Ghani
This utility used to confirm the data transferred by computing an md5 digest. This commit changes the behavior to use sha256 instead. Signed-off-by: Shoaib Lari <slari@pivotal.io>
-
Committed by Shoaib Lari
Signed-off-by: Nadeem Ghani <nghani@pivotal.io>
-
Committed by Ashwin Agrawal
We need the flexibility to control at what frequency the pipeline gets triggered; one such use case is the same pipeline code running with asserts on every commit, and without asserts daily/weekly. So, add a time resource via which the triggering can be controlled. Plus, parameterize the trigger to control these things.
-
Committed by Karen Huddleston
Some backups with Data Domain contained an incorrect path in their report file. When checking whether the backup timestamp was in a pre or post content-id format, restore would fail since the path in the report file didn't match the expected pattern. Instead, we now check for the timestamp format in a more generalized way to account for this discrepancy. Signed-off-by: Chris Hajas <chajas@pivotal.io>
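A hedged Python sketch of the generalized check (the regex and helper name are illustrative, not the actual gpcrondump/restore code): rather than matching one full expected path layout, scan for a 14-digit YYYYMMDDHHMMSS timestamp anywhere in the report-file path.

```python
import re

# Backup timestamps are 14 digits: YYYYMMDDHHMMSS.
TIMESTAMP_RE = re.compile(r"\d{14}")

def extract_timestamp(report_path):
    """Find the backup timestamp in a report-file path without assuming
    a specific pre/post content-id directory layout."""
    match = TIMESTAMP_RE.search(report_path)
    return match.group(0) if match else None

# The 8-digit date directory is skipped; the full 14-digit timestamp
# embedded in the filename is found regardless of the layout.
print(extract_timestamp("/backups/db_dumps/20170914/gp_dump_20170914010203.rpt"))
# -> 20170914010203
```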
-
Committed by Mel Kiyama
-