- 11 Dec 2018, 1 commit
-
-
By Sambitesh Dash
-
- 10 Dec 2018, 15 commits
-
-
By Heikki Linnakangas
-
By Heikki Linnakangas
The 'subqfromlist' field was removed by commit f16deabd.
-
By Heikki Linnakangas
We had backported this earlier, and got a second copy as part of the upstream merge. It went unnoticed because no one has tried building GPDB on Itanium.
-
By Heikki Linnakangas
Fumbled this in the upstream merge. Harmless, but let's be tidy.
-
By Heikki Linnakangas
Looking at the historical pre-open-sourcing repository, the 'G' message was used by ancient prototype code from 2007-2009. The prototype code was removed in 2009, but this snippet was left over. I don't think it was used in any released version of GPDB. The 'W' type message was removed in commit daf6cdbc, but this was left over.
-
By Heikki Linnakangas
With an extern declaration directly in the other .c file, you might not notice if the actual signature of the function/variable changes. The correct way to use 'extern' is to have a single 'extern' declaration, in a header file.
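As a hypothetical illustration of the pattern this commit moves toward (the file and function names below are invented, not from the GPDB tree): declare the symbol once in a header, include that header from both the defining and the consuming .c files, and the compiler then flags any signature mismatch.

```c
/* util.h -- the single, authoritative declaration */
#ifndef UTIL_H
#define UTIL_H
extern int compute_checksum(const char *buf, int len);
#endif

/* util.c -- the definition; including util.h lets the compiler
 * verify that the definition matches the declaration */
#include "util.h"

int
compute_checksum(const char *buf, int len)
{
    int sum = 0;

    for (int i = 0; i < len; i++)
        sum += (unsigned char) buf[i];
    return sum;
}

/* caller.c -- no local 'extern' here; if the signature in util.h
 * changes, this file fails to compile instead of silently misbehaving */
#include "util.h"

int
checksum_of_abc(void)
{
    return compute_checksum("abc", 3);
}
```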
-
By Heikki Linnakangas
Commit b3c50e40 changed the argument from StringInfo to ErrorData *, but neglected to update the comment about it in the header file. Fix by copying the up-to-date comment from the .c file.
-
By Heikki Linnakangas
It was unused, and we don't support Windows as a server platform, anyway.
-
By Heikki Linnakangas
I think these functions were to implement ALTER TABLE MODIFY RANGE/LIST, but that command doesn't exist anymore. Not sure if it was ever fully implemented.
-
By Heikki Linnakangas
-
By Heikki Linnakangas
-
By Heikki Linnakangas
-
By Heikki Linnakangas
It was always passed as 'true'.
-
By Zhenghua Lyu
`CdbTryOpenRelation` might return a NULL pointer, so every caller should check the result for NULL before using it. `parserOpenTable` forgot to do that check; this commit adds it.
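A generic, runnable C sketch of the defensive pattern being added; the real parserOpenTable and CdbTryOpenRelation are GPDB internals with different signatures, so the types and names below are stand-ins.

```c
#include <stdio.h>
#include <stdlib.h>

typedef struct Relation
{
    const char *relname;
} Relation;

/* Stand-in for a "try open" style function that returns NULL on failure. */
static Relation *
try_open_relation(const char *relname)
{
    (void) relname;
    return NULL;                /* simulate the failure path */
}

static Relation *
open_table_checked(const char *relname)
{
    Relation   *rel = try_open_relation(relname);

    /* The fix: always check for NULL before using the result. */
    if (rel == NULL)
    {
        fprintf(stderr, "relation \"%s\" does not exist\n", relname);
        exit(EXIT_FAILURE);
    }
    return rel;
}

int
main(void)
{
    open_table_checked("foo");
    return 0;
}
```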
-
By Tang Pengzhou
Commits 8eed4217 & e0b06678 allow us to do a gpexpand of a GPDB cluster without a restart, a strategy we call "online expand". Those two commits focused mainly on avoiding a cluster restart while expanding; this commit does the remaining work to improve gpexpand, using a few features recently merged to master.

The first improvement: it is no longer necessary to change the policy of every non-partitioned table to random at the beginning of gpexpand. Previously we couldn't tell an expanded table from a non-expanded one, so if the distribution policies of such tables were the same, the planner treated their data as co-located and produced an incorrect plan. To avoid this, gpexpand used to change the policy of all tables to random, so the planner produced a correct but inefficient plan, because a random policy always means an annoying broadcast motion. Now that each table has a 'numsegments' attribute, introduced by 4eb65a53, GPDB can distinguish expanded from non-expanded tables and produce correct plans, so the first improvement is removing the table randomization step.

The second improvement: a brand new syntax, "ALTER TABLE foo EXPAND TABLE", dedicated to rebalancing table data. Previously, tables were converted to random distribution before rebalancing, and a table's numsegments was always the same as the GPDB cluster size, so a tricky "ALTER TABLE foo SET WITH (REORGANIZE = true) DISTRIBUTED BY (original_key)" command could rebalance data onto the newly added segments. Now the policy and numsegments are not changed before rebalancing, so expanding such tables no longer matches the concept of "SET DISTRIBUTED BY", and we need a proper syntax dedicated to table expansion. Internally, the new "ALTER TABLE foo EXPAND TABLE" syntax has two methods for the actual data movement, CTAS and RESHUFFLE; which one is better depends on how much data needs to be moved, whether the table has an index, and whether the table is an append-only table (analyzed by Heikki).

A drawback of this commit is that we always expand a partitioned table within one transaction, whereas the old behavior expanded the leaf partitions in parallel, which is faster for a partitioned table. We don't allow a root partition and its leaf partitions to have different numsegments for now, so this commit disables the old behavior temporarily. How to expand partitioned tables in parallel is under discussion, and we hope to bring the ability back properly in the future.

* Refine name quoting. E.g. if the schema name of a table is a.b and the table name is c'd, the current gpexpand can't handle it.
-
- 08 Dec 2018, 6 commits
-
-
By Lisa Owen
-
By Adam Berlin
-
By Lisa Owen
* docs - foreign data wrapper SQL and catalog reference page updates
* misc edits
* clarify mpp_execute 'any' value
-
By Lisa Owen
* docs - remove operator RECHECK; add note, update sys catalog
* add blank line
-
By Chuck Litzell
-
By Jimmy Yih
These are dead references. Just remove them as a simple cleanup. Co-authored-by: Jimmy Yih <jyih@pivotal.io> Co-authored-by: Bradford D. Boyle <bboyle@pivotal.io>
-
- 07 Dec 2018, 18 commits
-
-
By David Yozie
-
By Daniel Gustafsson
The referred-to function is called try_relation_open() and nothing else. This typo was introduced in 2011 when CdbTryOpenRelation() was added, so it seems about time to correct it.
-
By Daniel Gustafsson
get_str_from_chunk() uses normal memory management, bypassing the PostgreSQL memory manager. This means that allocations aren't automatically guarded by the backend, and the function was failing to check whether allocations succeeded, risking a NULL pointer dereference. Return NULL on out of memory, since there is little else we can do and the caller can handle it without dereferencing NULL.
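A minimal sketch of the fix described, using an invented function name rather than the real get_str_from_chunk(): outside the PostgreSQL memory manager every plain malloc() must be checked, and the function hands NULL back to the caller on out of memory.

```c
#include <stdlib.h>
#include <string.h>

/*
 * Copy 'len' bytes from 'chunk' into a freshly malloc'd, NUL-terminated
 * string.  Returns NULL on out of memory; the caller must check.
 */
char *
str_from_chunk(const char *chunk, size_t len)
{
    char       *result = malloc(len + 1);

    if (result == NULL)         /* plain malloc is not guarded by the backend */
        return NULL;

    memcpy(result, chunk, len);
    result[len] = '\0';
    return result;
}

int
main(void)
{
    char       *s = str_from_chunk("hello, world", 5);

    if (s != NULL)              /* caller handles NULL without dereferencing it */
        free(s);
    return 0;
}
```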
-
By Daniel Gustafsson
The support for sending alerts via Email or SNMP was quite a kludge, and there are much better external tools for managing alerts than what we can supply in core anyway, so this retires the capability. All references to alert sending in the docs are removed, but a section still needs to be written about how to migrate off this feature, in the release notes or a similar location. Discussion: https://github.com/greenplum-db/gpdb/pull/6384
-
By Heikki Linnakangas
Extract the first key column's Datum only once, to avoid the memtuple_getattr / heap_getattr overhead. This is the same optimization we have in tuplesort.c. Reviewed-by: Pengzhou Tang <ptang@pivotal.io> Reviewed-by: Gang Xiong <gxiong@pivotal.io>
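A simplified, self-contained C sketch of the caching idea (the real code works on MemTuples/HeapTuples with GPDB's sort comparators; the types here are invented): extract the first sort key once per tuple, store it beside the tuple pointer, and let the comparator use the cached copy instead of re-extracting the attribute on every comparison.

```c
#include <stdio.h>
#include <stdlib.h>

typedef struct Row
{
    int         key;            /* first sort key column */
    /* ... other columns ... */
} Row;

typedef struct SortEntry
{
    int         key1;           /* cached copy of the first key */
    Row        *row;
} SortEntry;

/* Comparator touches only the cached key; no per-comparison
 * attribute extraction from the underlying tuple. */
static int
compare_entries(const void *a, const void *b)
{
    const SortEntry *ea = a;
    const SortEntry *eb = b;

    if (ea->key1 != eb->key1)
        return (ea->key1 < eb->key1) ? -1 : 1;
    return 0;                   /* the real code falls back to later columns */
}

int
main(void)
{
    Row         rows[] = {{3}, {1}, {2}};
    SortEntry   entries[3];

    for (int i = 0; i < 3; i++)
    {
        entries[i].key1 = rows[i].key;  /* extracted only once per tuple */
        entries[i].row = &rows[i];
    }

    qsort(entries, 3, sizeof(SortEntry), compare_entries);

    for (int i = 0; i < 3; i++)
        printf("%d\n", entries[i].key1);
    return 0;
}
```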
-
By Heikki Linnakangas
Avoid palloc+memcpy for "TC_WHOLE" tuples. Reviewed-by: Pengzhou Tang <ptang@pivotal.io> Reviewed-by: Gang Xiong <gxiong@pivotal.io>
-
By Heikki Linnakangas
More modern, and faster too. Reviewed-by: Pengzhou Tang <ptang@pivotal.io> Reviewed-by: Gang Xiong <gxiong@pivotal.io>
-
By Heikki Linnakangas
-
By Heikki Linnakangas
Htupfifo is used when parsing incoming messages from the interconnect. Each UDP message consists of a number of tuple chunks, which form a number of tuples. When one incoming message is parsed, the htupfifo is used to hold the tuples formed from that single message. Typically, each message contains roughly the same number of tuples, so it is wasteful to palloc/pfree the list node for every tuple; that's why there is a free list in the FIFO. However, with narrow tuples, one message can contain hundreds of tuples, which is much more than the built-in maximum free list size (10), so in practice the free list was almost never enough to cover the need. The message size puts a natural limit on how large the FIFO can grow, so I don't think we need a limit on the free list size. Just let it grow as large as needed, and avoid the palloc/pfree overhead. Reviewed-by: Pengzhou Tang <ptang@pivotal.io> Reviewed-by: Gang Xiong <gxiong@pivotal.io>
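A minimal, generic C sketch of the free-list behavior described (htupfifo's real node type and palloc-based allocator are GPDB-specific; malloc stands in here): retired nodes always go back onto the free list with no cap, so a message containing hundreds of narrow tuples recycles its nodes instead of allocating and freeing each one.

```c
#include <stdlib.h>

typedef struct Node
{
    void       *tuple;
    struct Node *next;
} Node;

static Node *freelist = NULL;   /* unbounded: the message size already
                                 * limits how large the FIFO can grow */

/* Get a node, preferring the free list over a fresh allocation. */
static Node *
node_get(void)
{
    if (freelist != NULL)
    {
        Node       *n = freelist;

        freelist = n->next;
        return n;
    }
    return malloc(sizeof(Node));        /* NULL check omitted for brevity */
}

/* Retire a node: always push it onto the free list, never free() it
 * outright (the old code freed nodes once the list held 10 entries). */
static void
node_release(Node *n)
{
    n->next = freelist;
    freelist = n;
}

int
main(void)
{
    Node       *a = node_get();

    node_release(a);
    node_release(node_get());   /* the second get reuses 'a' from the list */
    return 0;
}
```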
-
By Heikki Linnakangas
It was dead code, and we have no plans to resurrect it. Reviewed-by: Pengzhou Tang <ptang@pivotal.io> Reviewed-by: Gang Xiong <gxiong@pivotal.io>
-
By Heikki Linnakangas
The non-MK code was abusing the child's result tuple slot. The corresponding MK code was changed back in 2010 to not do that, but that commit missed the non-MK version. This caused a segfault in the 'subselect' regression test. Apparently no one has run the regression tests with 'gp_enable_motion_mk_sort=off' recently. Reviewed-by: Pengzhou Tang <ptang@pivotal.io> Reviewed-by: Gang Xiong <gxiong@pivotal.io>
-
By Heikki Linnakangas
-
By Heikki Linnakangas
The 9.1 merge removed the code that incremented 'num_consec_csv_err'. Since then, it was only ever initialized to 0, so all the related code was dead. I'm not sure how we now behave with the kinds of errors that used to trigger this code, but I don't see a need to treat them specially; the generic error handling code should cope with them. The GUC that controlled the max CSV line length was already removed in commit 6ac29fdc.
-
By Heikki Linnakangas
-
By Heikki Linnakangas
They were only used to pass the information to cdbCopyStart(). Better to pass them directly as arguments.
-
By Heikki Linnakangas
Simpler to just pick a constant starting size. It's probably faster, too, to let the hash table expand as needed than to work hard upfront on making a good guess.
-
By Heikki Linnakangas
-