- 15 Jun 2016, 1 commit
-
-
Committed by Heikki Linnakangas
The printablePredicate of a static PartitionSelector node contains Var nodes with varno=INNER. That's bogus, because the PartitionSelector node doesn't actually have any child nodes, but it works at execution time because the printablePredicate is only used by EXPLAIN. In most cases it still worked, because most Var nodes carry a varnoold field, which is used by EXPLAIN for the lookup, but there was one case of a "bogus varno" error even memorialized in the expected output of the regression suite. (PostgreSQL 8.3 changed the way EXPLAIN resolves the printable name so that varnoold no longer saves the bacon, and you would get a lot more of those errors.)

To fix, teach the EXPLAIN of a Sequence node to also reach into the static PartitionSelector node, and print the printablePredicate as if that qual were part of the Sequence node directly. The user-visible effect of this is that the static Partition Selector expression now appears in EXPLAIN output as a direct attribute of the Sequence node, not as a separate child node. Also, if a static Partition Selector doesn't have a printablePredicate, i.e. it doesn't actually do any selection, it's not printed at all.
-
- 13 Jun 2016, 1 commit
-
-
Committed by Kenan Yao
Include a map from sliceIndex to gang_id in the dispatched string, and remove the localSlice field; the QE should now get its localSlice from the map. This way, we avoid duplicating and modifying the dispatch text string slice by slice, and each QE of a sliced dispatch now gets the same contents. The extra space cost is sizeof(int) * SliceNumber bytes, and the extra computing cost is iterating the SliceNumber-size array. Compared with the memcpy of the text string for each slice in the previous implementation, this is much cheaper, because SliceNumber is much smaller than the size of the dispatch text string. Also, since SliceNumber is so small, we just use an array for the map instead of a hash table.

Also, clean up some dead code in the dispatcher:
(1) Remove the primary_gang_id field of the Slice struct and the DispatchCommandDtxProtocolParms struct, since the dispatch agent is deprecated now;
(2) Remove redundant logic in cdbdisp_dispatchX;
(3) Clean up buildGpDtxProtocolCommand.
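The trade-off described above can be sketched in Python (all names here are illustrative stand-ins, not the actual Greenplum C symbols): one shared dispatch string plus a small slice-to-gang array, instead of a per-slice copy of the string with localSlice patched in.

```python
# Sketch: dispatch one shared text string plus a small sliceIndex -> gang_id
# array, instead of duplicating the (large) string once per slice.
# Hypothetical names; the real code is C in the Greenplum dispatcher.

def build_dispatch(plan_text, slice_gang_ids):
    # One shared payload for every QE; the extra cost is just the small array,
    # sizeof(int) * SliceNumber in the C version.
    return {"plan": plan_text, "slice_gang_map": list(slice_gang_ids)}

def qe_local_slice(dispatch, my_gang_id):
    # Each QE iterates the SliceNumber-sized array to find its own slice,
    # rather than receiving a per-slice copy with localSlice baked in.
    for slice_index, gang_id in enumerate(dispatch["slice_gang_map"]):
        if gang_id == my_gang_id:
            return slice_index
    raise LookupError("gang id not in slice map")

dispatch = build_dispatch("<serialized plan>", [3, 7, 9])
assert qe_local_slice(dispatch, 7) == 1
```

Because every QE receives the identical payload, the dispatch string is built once; the per-QE work is a scan of a tiny array rather than a memcpy of the whole plan text.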
-
- 06 Jun 2016, 1 commit
-
-
Committed by Heikki Linnakangas
This is a partial backport of a larger body of work, parts of which have already been backported. Remove the GPDB-specific "breadcrumbs" mechanism from the parser; it is made obsolete by the upstream mechanism. We lose context information from a few errors, which is unfortunate but seems acceptable; upstream doesn't have context information for those errors either. The backport was originally done by Daniel Gustafsson, on top of the PostgreSQL 8.3 merge. I tweaked it to apply to master, before the merge.

Upstream commit:

commit b153c092
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date: Mon Sep 1 20:42:46 2008 +0000

    Add a bunch of new error location reports to parse-analysis error messages. There are still some weak spots around JOIN USING and relation alias lists, but most errors reported within backend/parser/ now have locations.
-
- 03 Jun 2016, 3 commits
-
-
Committed by Heikki Linnakangas
The readRangeTblEntry() function was missing the line for the 'pseudocols' field. I'm surprised the function worked at all; I thought the read functions don't work if there are any extra fields. Maybe they're more forgiving if it's the last field that's missing. In any case, it seems like an oversight.

It doesn't matter in practice, as the pseudocols field is only used during planning, and we don't serialize nodes at that stage. Rule definitions are serialized before planning, and for the transfer between QD and QEs, we use the 'fast' versions of these functions. In the 'fast' versions, 'pseudocols' is missing from both the out function and the read function. That seems intentional, so add a comment about it.
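The guess above about why the omission went unnoticed can be illustrated with a toy field reader (Python, purely illustrative; the real functions are C in readfuncs.c): a reader that consumes named fields in order fails loudly on a mid-list omission, but never even looks at tokens belonging to a field missing from the end.

```python
# Toy version of a readfuncs-style reader: fields are consumed in a fixed
# order from a token stream. If the reader's field list omits the LAST
# field, the trailing tokens are simply never examined, so nothing fails.

def read_node(tokens, expected_fields):
    node, pos = {}, 0
    for name in expected_fields:
        if tokens[pos] != ":" + name:
            raise ValueError("unexpected field %s" % tokens[pos])
        node[name] = tokens[pos + 1]
        pos += 2
    return node  # tokens past pos (e.g. a forgotten last field) are ignored

serialized = [":relid", "1234", ":pseudocols", "<>"]
# A reader that forgot 'pseudocols' still works, because it is the last field:
assert read_node(serialized, ["relid"]) == {"relid": "1234"}
```

Had 'pseudocols' been anywhere but last in the serialized form, the field-name mismatch would have raised immediately, which matches the "more forgiving if it's the last field" hypothesis.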
-
Committed by Heikki Linnakangas
The copy/out/read functions for it were wrong: a Bitmapset is not a Node, so one should use e.g. COPY_BITMAPSET_FIELD() instead of COPY_NODE_FIELD() for them. But since the fields are currently unused, let's just remove them. These fields will be resurrected soon, by the PostgreSQL 8.3 merge, as they were introduced in PostgreSQL 8.3. Then they will actually be used, too.
-
Committed by Heikki Linnakangas
That includes the slice table, transientRecordTypes, and IntoClause's oidInfo. This is transient information, created in ExecutorStart, not something that should be cached along with the plan. transientRecordTypes and oidInfo in particular were stored in PlannedStmt only so that they could be conveniently dispatched to QEs along with the plan. That's not a problem at the moment, but with the upcoming PostgreSQL 8.3 merge, we'll start keeping the PlannedStmt struct around for many executions, so create a new struct to hold that kind of information, which is transmitted from QD to QEs along with the plan (the new struct is called QueryDispatchDesc).
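The separation described above can be sketched as two containers with different lifetimes (a Python sketch; the field names loosely follow the commit message, and the real structs are C):

```python
from dataclasses import dataclass, field

# Sketch of the split: the cached PlannedStmt holds only the plan, while
# per-execution state travels in a separate QueryDispatchDesc that is
# rebuilt in ExecutorStart and sent from QD to QEs alongside the plan.

@dataclass
class PlannedStmt:                 # may be cached across many executions
    plan_tree: str

@dataclass
class QueryDispatchDesc:           # fresh for every execution
    slice_table: list = field(default_factory=list)
    transient_record_types: list = field(default_factory=list)
    oid_info: dict = field(default_factory=dict)

def dispatch(stmt, ddesc):
    # Both pieces go to the QEs together, but only stmt is cached.
    return (stmt.plan_tree, ddesc.slice_table)

stmt = PlannedStmt("<plan>")
first = dispatch(stmt, QueryDispatchDesc(slice_table=["s0", "s1"]))
second = dispatch(stmt, QueryDispatchDesc(slice_table=["s0"]))
assert first[0] == second[0]   # same cached plan both times
assert first[1] != second[1]   # fresh per-execution dispatch info
```

The point of the split is exactly what the assertion pair shows: once plans are kept around for many executions, anything execution-specific must live outside the cached struct.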
-
- 25 Apr 2016, 3 commits
-
-
Committed by Heikki Linnakangas
When I merged the operator family patch, I missed dispatching the new DDL commands to segments. Because of that, the segments didn't have information about operator families. Some operator families would be created implicitly by CREATE OPERATOR CLASS, but you wouldn't necessarily get the same configuration of families and classes as in the master. Things worked pretty well despite that, because operator families and classes are used for planning, and planning happens in the master. Nevertheless, we really should have the operator family information in segments too, in case you run queries in maintenance mode directly on the segments, or execute functions in segments that need to evaluate expressions that depend on them. Also, there were no regression tests for the new DDL commands.
-
Committed by Heikki Linnakangas
If you do CREATE OPERATOR, with a commutator or negator operator that doesn't exist yet, the system creates a "shell" entry for the non-existent operator. But those shell operators didn't get the same OID in all segments, which could lead to strange errors later. I couldn't find a test case demonstrating actual bugs from that, but it sure seems sketchy. Given that we take care to synchronize the OID of the primary created operator, surely we should do the same for all operators.
-
Committed by Heikki Linnakangas
The out/readfuncs.c support for AlterTableStmt.comptypeArrayOid was missing. Because of that, the segments didn't get the OID of the composite type's array type from master, and allocated it on their own.
-
- 02 Apr 2016, 1 commit
-
-
Committed by Haozhou Wang
Backport below commits from upstream:

commit 31edbadf
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date: Tue Jun 5 21:31:09 2007 +0000

    Downgrade implicit casts to text to be assignment-only, except for the ones from the other string-category types; this eliminates a lot of surprising interpretations that the parser could formerly make when there was no directly applicable operator.

    Create a general mechanism that supports casts to and from the standard string types (text, varchar, bpchar) for *every* datatype, by invoking the datatype's I/O functions. These new casts are assignment-only in the to-string direction, explicit-only in the other, and therefore should create no surprising behavior. Remove a bunch of thereby-obsoleted datatype-specific casting functions.

    The "general mechanism" is a new expression node type CoerceViaIO that can actually convert between *any* two datatypes if their external text representations are compatible. This is more general than needed for the immediate feature, but might be useful in plpgsql or other places in future.

    This commit does nothing about the issue that applying the concatenation operator || to non-text types will now fail, often with strange error messages due to misinterpreting the operator as array concatenation. Since it often (not always) worked before, we should either make it succeed or at least give a more user-friendly error; but details are still under debate.

    Peter Eisentraut and Tom Lane

commit bf940763
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date: Tue Mar 27 23:21:12 2007 +0000

    Fix array coercion expressions to ensure that the correct volatility is seen by code inspecting the expression. The best way to do this seems to be to drop the original representation as a function invocation, and instead make a special expression node type that represents applying the element-type coercion function to each array element. In this way the element function is exposed and will be checked for volatility. Per report from Guillaume Smet.
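The CoerceViaIO idea in the first commit can be sketched very simply: route any-to-any coercion through the source type's output function and the target type's input function. This Python sketch uses a stand-in registry in place of the pg_type I/O function lookup.

```python
# Sketch of CoerceViaIO: convert between any two datatypes by composing the
# source type's output function (datum -> text) with the target type's input
# function (text -> datum). The registry below is a stand-in for the catalog
# lookup of each type's I/O functions.

io_funcs = {
    "int4":   {"out": lambda v: str(v),  "in": int},
    "text":   {"out": lambda v: v,       "in": str},
    "float8": {"out": lambda v: repr(v), "in": float},
}

def coerce_via_io(value, source_type, target_type):
    text_repr = io_funcs[source_type]["out"](value)   # datum -> external text
    return io_funcs[target_type]["in"](text_repr)     # external text -> datum

assert coerce_via_io(42, "int4", "text") == "42"
assert coerce_via_io("3.5", "text", "float8") == 3.5
```

This also makes the commit's caveat visible: the conversion succeeds only when the two external text representations are compatible; the target's input function raises otherwise.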
-
- 22 Mar 2016, 2 commits
-
-
Committed by Haozhou Wang
Backport below commits from upstream:

commit adac22bf
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date: Fri Dec 19 05:04:35 2008 +0000

    When we added the ability to have zero-element ARRAY[] constructs by adding an explicit cast to show the intended array type, we forgot to teach ruleutils.c to print out such constructs properly. Found by noting bogus output from recent changes in polymorphism regression test.

commit 30137bde6db48a8b8c1ffc736eb239bd7381f04d
Author: Heikki Linnakangas <heikki.linnakangas@iki.fi>
Date: Fri Nov 13 19:48:26 2009 +0000

    A better fix for the "ARRAY[...]::domain" problem. The previous patch worked, but the transformed ArrayExpr claimed to have a return type of "domain", even though the domain constraint was only checked by the enclosing CoerceToDomain node. With this fix, the ArrayExpr is correctly labeled with the base type of the domain. Per gripe by Tom Lane.

commit 6b0706ac
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date: Thu Mar 20 21:42:48 2008 +0000

    Arrange for an explicit cast applied to an ARRAY[] constructor to be applied directly to all the member expressions, instead of the previous implementation where the ARRAY[] constructor would infer a common element type and then we'd coerce the finished array after the fact. This has a number of benefits, one being that we can allow an empty ARRAY[] construct so long as its element type is specified by such a cast.

Besides, this commit also adds a 'location' field to the array-related structures, but it is not activated yet. Thanks to Heikki's suggestion.
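The mechanics of the third commit can be sketched in Python (illustrative only; the real transformation is C in the parser): with an explicit cast, each member expression is coerced up front, which also makes an empty ARRAY[] well-defined because the cast supplies the element type.

```python
# Sketch: coerce each member expression directly to the cast's element type,
# instead of inferring a common type and coercing the finished array later.
# Type inference here is a crude stand-in for the parser's real logic.

def transform_array_expr(members, cast_element_type=None):
    if cast_element_type is not None:
        # Explicit cast: push the coercion down to every member expression.
        # This is also what makes ARRAY[]::sometype[] legal when empty.
        return [cast_element_type(m) for m in members]
    if not members:
        raise ValueError("cannot determine type of empty array")
    common = type(members[0])          # infer a common element type
    return [common(m) for m in members]

assert transform_array_expr([], cast_element_type=int) == []       # ARRAY[]::int[]
assert transform_array_expr(["1", 2], cast_element_type=int) == [1, 2]
```

Without the cast, the empty constructor has nothing to infer a type from, which is exactly the case the commit makes legal.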
-
Committed by Heikki Linnakangas
This saves a little bit of memory when parsing massively partitioned CREATE TABLE statements.
-
- 16 Mar 2016, 1 commit
-
-
Committed by Haozhou Wang
Backport from upstream with this commit:

commit bc8036fc
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date: Fri May 11 17:57:14 2007 +0000

    Support arrays of composite types, including the rowtypes of regular tables and views (but not system catalogs, nor sequences or toast tables). Get rid of the hardwired convention that a type's array type is named exactly "_type", instead using a new column pg_type.typarray to provide the linkage. (It still will be named "_type", though, except in odd corner cases such as maximum-length type names.)

    Along the way, make tracking of owner and schema dependencies for types more uniform: a type directly created by the user has these dependencies, while a table rowtype or auto-generated array type does not have them, but depends on its parent object instead.

    David Fetter, Andrew Dunstan, Tom Lane
-
- 29 Feb 2016, 1 commit
-
-
Committed by Pengzhou Tang
When applying motion, a merge gather motion, rather than a normal gather motion, should be added on the top node if it has a sort list; this makes sure that tuples are still in order after being gathered to the QD. Checking only whether the top-level parse tree has sort clauses may miss the implicit order constraint in a view.
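Why a merge gather is needed can be shown with a small Python sketch: each QE emits its tuples already sorted, but a plain gather interleaves the streams arbitrarily, while a merge gather performs a k-way merge and preserves the global order at the QD.

```python
import heapq

# Three segment output streams, each already sorted by the QE's local Sort.
segment_streams = [[1, 4, 7], [2, 5, 8], [3, 6, 9]]

# A plain gather just concatenates streams in arrival order:
plain_gather = [t for stream in segment_streams for t in stream]

# A merge gather does a k-way merge, keeping the global sort order:
merge_gather = list(heapq.merge(*segment_streams))

assert merge_gather == sorted(merge_gather)   # order preserved at the QD
assert plain_gather != merge_gather           # plain gather loses the order
```

This is also why a sort hidden inside a view matters: if the planner misses it and emits a plain gather, the per-segment ordering is silently destroyed.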
-
- 19 Feb 2016, 1 commit
-
-
Committed by Abhijit Subramanya
Users should use error log functionality for error handling in external tables and during copy since it is more robust and reliable than the old error table functionality.
-
- 22 Dec 2015, 1 commit
-
-
Committed by Yu Yang
Users can use VARIADIC to specify the parameter list when defining a UDF that takes variadic parameters. It is easier for users to write one variadic function than several same-named functions with different parameter lists. An example of using VARIADIC:

    create function concat(text, variadic anyarray) returns text as $$
        select array_to_string($2, $1);
    $$ language sql immutable strict;

    select concat('%', 1, 2, 3, 4, 5);

NOTE: The variadic change set is ported from upstream PostgreSQL:

commit 517ae403
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date: Thu Dec 18 18:20:35 2008 +0000

    Code review for function default parameters patch. Fix numerous problems as per recent discussions. In passing this also fixes a couple of bugs in the previous variadic-parameters patch.

commit 6563e9e2
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date: Wed Jul 16 16:55:24 2008 +0000

    Add a "provariadic" column to pg_proc to eliminate the remarkably expensive need to deconstruct proargmodes for each pg_proc entry inspected by FuncnameGetCandidates(). Fixes function lookup performance regression caused by yesterday's variadic-functions patch. In passing, make pg_proc.probin be NULL, rather than a dummy value '-', in cases where it is not actually used for the particular type of function. This should buy back some of the space cost of the extra column.

commit d89737d3
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date: Wed Jul 16 01:30:23 2008 +0000

    Support "variadic" functions, which can accept a variable number of arguments so long as all the trailing arguments are of the same (non-array) type. The function receives them as a single array argument (which is why they have to all be the same type). It might be useful to extend this facility to aggregates, but this patch doesn't do that. This patch imposes a noticeable slowdown on function lookup --- a follow-on patch will fix that by adding a redundant column to pg_proc.

Conflicts:
    src/backend/gpopt/gpdbwrappers.cpp
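The VARIADIC semantics mirror variadic parameters in other languages: the trailing arguments, all of one type, are collected into a single array-valued parameter. A Python sketch of the SQL example above (Python's *args plays the role of the variadic anyarray parameter):

```python
# Sketch of VARIADIC semantics: trailing arguments are collected into one
# array-valued parameter, which the function body sees as a single value
# (the $2 in the SQL example's array_to_string($2, $1)).

def concat(sep, *variadic):
    # 'variadic' arrives as one sequence, like the anyarray parameter.
    return sep.join(str(v) for v in variadic)

assert concat('%', 1, 2, 3, 4, 5) == "1%2%3%4%5"
```

As in SQL, the caller writes a flat argument list and the collection into an array happens at the call boundary, not in the caller's code.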
-
- 02 Dec 2015, 1 commit
-
-
Committed by Gang Xiong
Altering a partitioned table with SET DISTRIBUTED BY dispatches information from the QD to the QEs; that information was not correctly encoded and decoded on both sides when: 1. reorganize = false; 2. the table has an index; 3. the partition table has a "with" option.
-
- 18 Nov 2015, 1 commit
-
-
Committed by Heikki Linnakangas
It was just syntax and catalogs; you couldn't actually do anything useful with it. Remove it, so that we have less code to maintain, until it's time to merge this stuff from upstream again when we merge with PostgreSQL 8.4. It's probably easier to merge it back at that point than to maintain this backported version in the meanwhile: less effort now, and once we reach the point in the 8.4 merge where this comes back in, we'll have all the preceding patches applied already, so it should merge quite smoothly.
-
- 13 Nov 2015, 2 commits
-
-
Committed by Heikki Linnakangas
Currently, readfast.c / outfast.c are mostly copy-pasted from readfuncs.c / outfuncs.c. That's a merge hazard: if a new field is added to a struct upstream, and it's added to readfuncs.c and outfuncs.c, we would need to manually do the same in readfast.c / outfast.c. If the patch applies cleanly, we will not notice, and we'll have a bug of omission.

Refactor the code so that for all node types where the text and binary functions are identical, the duplicate in [read/out]fast.c is removed, and the definition in [read/out]funcs.c is used to compile the binary version too. This involves some tricks with #ifdefs and #includes, but cuts a lot of duplicate code, and should avoid the merge hazard. We'll still need to maintain the read/out functions whenever we modify a struct in Greenplum, but that's no different from what needs to be done in PostgreSQL.
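The single-definition idea can be sketched in Python (the real mechanism is C, compiling the same *funcs.c source twice under different macro definitions): both the text and the binary serializer are derived from one field list, so a field added to that list can never be missed in one of the two formats.

```python
import struct

# Sketch: one field definition per node type drives BOTH serializers, so a
# new field cannot be added to the text format and forgotten in the binary
# one. Names and formats here are illustrative, not the real node layout.

NODE_FIELDS = {"Var": ["varno", "varattno"]}

def out_text(tag, node):
    # Text form, in the ":fieldname value" style of outfuncs.c.
    return " ".join(":%s %d" % (f, node[f]) for f in NODE_FIELDS[tag])

def out_binary(tag, node):
    # Binary form: the same field list, packed as big-endian 32-bit ints.
    return b"".join(struct.pack("!i", node[f]) for f in NODE_FIELDS[tag])

var = {"varno": 1, "varattno": 2}
assert out_text("Var", var) == ":varno 1 :varattno 2"
assert out_binary("Var", var) == struct.pack("!ii", 1, 2)
```

Extending NODE_FIELDS["Var"] updates both outputs at once, which is precisely the property the refactoring buys: the copy-paste divergence between the 'fast' and text function pairs becomes impossible for the shared node types.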
-
Committed by Heikki Linnakangas
This reduces the diff between readfast.c and readfuncs.c. There are of course parts of these files that need to be different, but the bulk of the functions should be identical, with all the differences hidden in the READ_* macros. This patch reduces the diff and thus makes any suspicious differences between them more obvious.
-
- 28 Oct 2015, 1 commit
-
-