- 25 March 2016, 3 commits
-
-
Committed by Atri Sharma
Change behaviour to use default estimates for relation size when no statistics are available. This avoids heavy statistics computation as part of query compilation. Customers can explicitly run ANALYZE to compute statistics before running queries, so that accurate statistics are used instead of the defaults. Currently, the default table sizes are:
- Internal table: 100 pages
- External table: 1000 pages
Signed-off-by: Xin Zhang <xzhang@pivotal.io>
-
Committed by Ashwin Agrawal
Currently, when opening a relation fails, the error message emitted is very developer-centric, and a stack trace is printed on top of that. The message and behavior are confusing, because they hint at some catastrophic problem, while the failure is usually a legitimate consequence of concurrency. It is best to communicate that, and to avoid printing the stack trace.
-
Committed by Jimmy Yih
This commit adds more filespace and tablespace test coverage to installcheck. Most of these test additions are adapted from, and inspired by, Pivotal's own internal tests.
-
- 24 March 2016, 5 commits
-
-
Committed by Pengzhou Tang
The ic_tcp and ic_udp code is no longer maintained; ic_udpifc is now the only interconnect type. We keep the gp_interconnect_type GUC for backward compatibility.
-
Committed by Adam Lee
A type name is a fixed-length string padded with '\0'; renaming it the normal (wrong) way leaves stale bytes after the terminator, so cdbhash() computes different values.
-
Committed by Ed Espino
-
Committed by Jacob Frank
Signed-off-by: Xin Zhang <xzhang@pivotal.io>
-
Committed by Ed Espino
-
- 23 March 2016, 5 commits
-
-
Committed by Daniel Gustafsson
It seems that the install target for gpAux/extensions, which in turn invokes the gpmapreduce installation, fell away in the rebase and didn't make it into the final commit. Re-add it so that `make install` also handles gpmapreduce. Thanks to Xin Zhang for reporting.
-
Committed by Omer Arap
- change the path to abs_srcdir
-
Committed by Nikos Armenatzoglou
-
Committed by Omer Arap
-
Committed by Nikos Armenatzoglou
-
- 22 March 2016, 17 commits
-
-
Committed by Haozhou Wang
-
Committed by Gang Xiong
When an ALTER TABLE SET DISTRIBUTED command specifies "reorganize=true/false", it sets the reloptions in pg_class on the segments but not on the master.
-
Committed by Gang Xiong
Missed due to incorrect iteration logic.
-
Committed by Daniel Gustafsson
This seems to be left over from when gpcheckdb lived in src/bin and was written in C++. Nothing references it and the directory is empty, so remove it.
-
Committed by Haozhou Wang
Backport the following commits from upstream:

commit adac22bf
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date: Fri Dec 19 05:04:35 2008 +0000
When we added the ability to have zero-element ARRAY[] constructs by adding an explicit cast to show the intended array type, we forgot to teach ruleutils.c to print out such constructs properly. Found by noting bogus output from recent changes in the polymorphism regression test.

commit 30137bde6db48a8b8c1ffc736eb239bd7381f04d
Author: Heikki Linnakangas <heikki.linnakangas@iki.fi>
Date: Fri Nov 13 19:48:26 2009 +0000
A better fix for the "ARRAY[...]::domain" problem. The previous patch worked, but the transformed ArrayExpr claimed to have a return type of "domain", even though the domain constraint was only checked by the enclosing CoerceToDomain node. With this fix, the ArrayExpr is correctly labeled with the base type of the domain. Per gripe by Tom Lane.

commit 6b0706ac
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date: Thu Mar 20 21:42:48 2008 +0000
Arrange for an explicit cast applied to an ARRAY[] constructor to be applied directly to all the member expressions, instead of the previous implementation where the ARRAY[] constructor would infer a common element type and then we'd coerce the finished array after the fact. This has a number of benefits, one being that we can allow an empty ARRAY[] construct so long as its element type is specified by such a cast.

In addition, this commit adds a 'location' field to the array-related structures, but it is not activated yet. Thanks to Heikki for the suggestion.
-
Committed by Andreas Scherbaum
This bug was introduced in fd2e045f. Thanks to Shoaib Lari for pointing this out.
-
Committed by Chumki Roy
Modify the output of gpcheckcat to include the name of the test on the same line, to make grepping easier.
-
Committed by Andreas Scherbaum
This patch slightly changes the way the demo cluster is built. A gpinitsystem return code of 1 (warning) is ignored, and the script exits with return code 0. Any error code greater than that is returned. The Makefile no longer ignores errors. This fixes #456.
-
Committed by Andreas Scherbaum
The data directory and the TCP ports can be changed when "make cluster" is executed to build the demo cluster:
DATADIRS=/tmp/gpdb-cluster MASTER_PORT=15432 PORT_BASE=25432 make cluster
This fixes #441 and #442. Also, make the TCP port for the regression tests configurable.
-
Committed by Kyle Dunn
Closes #255.
-
Committed by Heikki Linnakangas
These were added by merge commit 60f45387, but we had already backported a later version of this file, in which these static functions had been removed again.
-
Committed by Heikki Linnakangas
This saves a little bit of memory when parsing massively partitioned CREATE TABLE statements.
-
Committed by Heikki Linnakangas
All of the callers are in places where leaking a few bytes of memory into the current memory context does no harm: parsing, processing a DDL command, or planning. So let's simplify the callers by removing the argument. That makes the code match upstream again, which makes merging easier. These changes were originally made to reduce memory consumption during parse analysis of a heavily partitioned table, but the previous commit provided a more wholesale solution for that, so we no longer need to nickel-and-dime every allocation.
-
Committed by Heikki Linnakangas
With a heavily partitioned table, the parse analysis of each CreateStmt consumes a significant amount of memory. By using a temporary memory context, we can reclaim some of the garbage left behind by the parsing of each one. In very quick testing on my laptop, this reduces the memory consumption of parse analysis of a massively partitioned CREATE TABLE by about 10%, but YMMV. To make this work, transformAttributeEncoding() and its subroutines have to be more careful not to modify the input CreateStmt, because any new Nodes stored in it would be allocated in the temporary context and blown away at the end of transformCreateStmt. That's not acceptable when there are more partitions to process that rely on the same CreateStmt.
-
Committed by Ed Espino
-
Committed by Foyzur Rahman
Signed-off-by: George Caragea <caragea.work@gmail.com>
-
Committed by Foyzur Rahman
Signed-off-by: George Caragea <gcaragea@pivotal.io>
-
- 21 March 2016, 1 commit
-
-
Committed by Haozhou Wang
Bump the catalog number for the previous commit: 8954a537
-
- 17 March 2016, 1 commit
-
-
Committed by Shreedhar Hardikar
Authors: Nikos Armenatzoglou and Shreedhar Hardikar.
-
- 19 March 2016, 8 commits
-
-
Committed by GPDB-gunny
-
Committed by Xin Zhang
-
Committed by Jimmy Yih
Add more AO/CO test coverage, covering a bug fixed in v4.1.0.0. Also, add some cardinality and domain tests on bitmap indexes and scans. Most of these test additions are adapted from, and inspired by, Pivotal's internal testing.
-
Committed by Nikos Armenatzoglou
-
Committed by Xin Zhang
- Change the current Vagrant setup to clone GPDB inside the VM and run it there, to avoid confusion between the host and guest environments.
- Change the VM size to 8GB and 4 CPUs, for better compile and test performance.
- At the end of the Vagrant run, GPDB is up and running with gpdemo (3 segments and 3 mirrors).
- Use `vagrant up gpdb` to build GPDB with GPORCA (option --enable-orca).
- Use `vagrant up gpdb_without_gporca` to build GPDB without GPORCA.
Signed-off-by: Xin Zhang <xzhang@pivotal.io>
-
Committed by Marbin Tan
We do not need to link all of the gpfdist libraries into the backend. Make sure that gpfdist is the only component that uses libyaml, libevent, and libapr-1 for now.
-
Committed by Ashwin Agrawal
While fetching a pg_class or pg_type tuple via an index, perform a sanity check to make sure the tuple we intended to read is the one the index is actually pointing to. This is just a sanity check to verify that the index is not broken and returning an incorrect tuple, in order to contain the damage.
-
Committed by Ashwin Agrawal
This commit makes sure that whenever gp_relation_node is accessed through its index, a sanity check is performed to verify that the tuple being operated on is the intended one; if the index is broken for any reason and provides a bad tuple, the operation fails instead of causing damage. For some scenarios, such as deleting a gp_relation_node tuple, this adds an extra tuple-deform call that was not done before, but that does not seem heavy enough to matter for a DDL operation.
-