- 23 Sep 2017, 1 commit
-
-
Committed by Kavinder Dhaliwal
There are cases where a Memory Intensive (MI) operator does not use all of the memory allocated to it during execution. The extra memory (quota - allocated) can then be relinquished for other MI nodes to use during execution of the statement. For example:

    -> Hash Join
        -> HashAggregate
        -> Hash

In the above plan fragment, the HashJoin operator has an MI operator in both its inner and outer subtrees. If the Hash node uses much less memory than its quota, it now calls MemoryAccounting_DeclareDone(), and the difference between its quota and its allocated amount is added to the allocated amount of the RelinquishedPool. This enables HashAggregate to request memory from the RelinquishedPool if it exhausts its own quota, to prevent spilling.

This PR adds two new APIs to the MemoryAccounting framework:

- MemoryAccounting_DeclareDone(): add the difference between a memory account's quota and its allocated amount to the long-living RelinquishedPool
- MemoryAccounting_RequestQuotaIncrease(): retrieve all relinquished memory by incrementing an operator's operatorMemKb and setting the RelinquishedPool to 0

Note: this PR introduces the facility for Hash to relinquish memory to the RelinquishedPool memory account, and for the Agg operator (specifically HashAgg) to request an increase to its quota before it builds its hash table. It does not generally apply this paradigm to all MI operators.

Signed-off-by: Sambitesh Dash <sdash@pivotal.io>
Signed-off-by: Melanie Plageman <mplageman@pivotal.io>
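The relinquish/request bookkeeping described above can be sketched in a few lines. This is a minimal illustrative model, not the actual GPDB MemoryAccounting code: the struct, the `relinquished_pool_kb` global, and the shortened function names are assumptions standing in for the real framework.

```c
#include <assert.h>

/* Illustrative stand-in for a memory account: a quota and the amount
 * actually allocated so far (both in KB). */
typedef struct MemoryAccount
{
    long quota_kb;
    long allocated_kb;
} MemoryAccount;

/* Stand-in for the long-living RelinquishedPool account. */
static long relinquished_pool_kb = 0;

/* Sketch of MemoryAccounting_DeclareDone(): the operator is finished
 * allocating, so the unused part of its quota is handed to the pool. */
static void
DeclareDone(MemoryAccount *acct)
{
    if (acct->quota_kb > acct->allocated_kb)
    {
        relinquished_pool_kb += acct->quota_kb - acct->allocated_kb;
        acct->quota_kb = acct->allocated_kb;
    }
}

/* Sketch of MemoryAccounting_RequestQuotaIncrease(): drain the entire
 * pool into the requesting operator's quota and reset the pool to 0. */
static long
RequestQuotaIncrease(MemoryAccount *acct)
{
    long granted = relinquished_pool_kb;

    acct->quota_kb += granted;
    relinquished_pool_kb = 0;
    return granted;
}
```

In the plan fragment above, Hash would play the role of the account calling `DeclareDone`, and HashAgg the one calling `RequestQuotaIncrease` before building its hash table.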
-
- 16 Sep 2017, 1 commit
-
-
Committed by Kavinder Dhaliwal
Historically, this function was used to special-case a few operators that were not considered memory intensive. However, it now always returns true. This commit removes the function and also moves the T_FunctionScan case in IsMemoryIntensiveOperator into the group that always returns true, matching its current behavior.
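The move described above might look like the following. This is an illustrative sketch only: the enum values and the exact membership of the switch are assumptions, not the real GPDB `IsMemoryIntensiveOperator`.

```c
#include <assert.h>

/* Hypothetical subset of plan-node tags, for illustration. */
typedef enum
{
    T_HashJoin,
    T_Agg,
    T_Sort,
    T_Material,
    T_FunctionScan,
    T_SeqScan
} NodeTag;

/* Sketch: T_FunctionScan now sits in the group that unconditionally
 * returns true, since that was already its effective behavior. */
static int
IsMemoryIntensiveOperator(NodeTag tag)
{
    switch (tag)
    {
        case T_HashJoin:
        case T_Agg:
        case T_Sort:
        case T_Material:
        case T_FunctionScan:    /* moved into the always-true group */
            return 1;
        default:
            return 0;
    }
}
```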
-
- 01 Sep 2017, 1 commit
-
-
Committed by Daniel Gustafsson
This bumps the copyright years to the appropriate years after not having been updated for some time. Also reformats existing code headers to match the upstream style to ensure consistency.
-
- 29 Aug 2017, 1 commit
-
-
Committed by Pengzhou Tang
Resource groups are enabled but not initialized on auxiliary processes and special backends such as ftsprobe and filerep. Previously, resource group operations were performed regardless of whether the resource group was initialized, which led to unexpected errors.
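The fix amounts to guarding resource-group operations behind an initialization check. The sketch below uses illustrative names (`resGroupInitialized`, `ShouldAssignResGroup`), which are assumptions rather than the actual GPDB symbols.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical flag: set only when the resource group machinery has been
 * initialized for this backend. Auxiliary processes and special backends
 * (ftsprobe, filerep, ...) never set it. */
static bool resGroupInitialized = false;

/* Sketch of the guard: resource-group operations become no-ops on
 * backends where initialization never happened. */
static bool
ShouldAssignResGroup(void)
{
    return resGroupInitialized;
}
```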
-
- 25 Aug 2017, 1 commit
-
-
Committed by Heikki Linnakangas
In single-user mode, MyQueueId isn't set, but ResourceQueueGetQueryMemoryLimit contained an assertion that it was. To fix, don't apply memory limits in single-user mode.
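One plausible shape of the fix is sketched below. The constant, the placeholder limit, and the exact control flow are assumptions for illustration; only the idea of returning "no limit" before reaching the assertion comes from the commit message.

```c
#include <assert.h>
#include <stdbool.h>

#define INVALID_QUEUE_ID 0

/* In single-user mode this stays unset; normal backends set it at login. */
static unsigned int MyQueueId = INVALID_QUEUE_ID;

/* Sketch of ResourceQueueGetQueryMemoryLimit(): bail out before the
 * assertion when running in single-user mode, so no limit is applied. */
static unsigned long
ResourceQueueGetQueryMemoryLimit(bool single_user_mode)
{
    if (single_user_mode)
        return 0;               /* 0 == do not apply a memory limit */

    /* The old assertion, now reachable only in normal multi-user mode. */
    assert(MyQueueId != INVALID_QUEUE_ID);

    return 128 * 1024;          /* placeholder: real code derives the limit
                                 * from the queue's settings */
}
```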
-
- 02 Aug 2017, 1 commit
-
-
Committed by Richard Guo
Resource group memory spill is similar to 'statement_mem' in resource queues; the difference is that memory spill is calculated from the memory quota of the resource group. The related GUCs, variables, and functions shared by both resource queues and resource groups are moved into the resource manager namespace. The resource queue code relating to memory policy is also refactored in this commit.

Signed-off-by: Pengzhou Tang <ptang@pivotal.io>
Signed-off-by: Ning Yu <nyu@pivotal.io>
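To make "calculated from the memory quota of the resource group" concrete, here is one plausible shape of such a calculation. The function name, parameter names, and the exact formula are all assumptions for illustration, not the actual GPDB code.

```c
#include <assert.h>

/* Hypothetical sketch: derive a per-statement spill threshold from the
 * group's memory quota, analogous to statement_mem in resource queues.
 * A spill-ratio percentage of the group quota is split evenly among the
 * statements the group allows to run concurrently. */
static long
ResGroupGetMemSpillKB(long group_quota_kb,
                      int memory_spill_ratio_pct,
                      int concurrency)
{
    return group_quota_kb * memory_spill_ratio_pct / 100 / concurrency;
}
```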
-
- 10 Feb 2017, 1 commit
-
-
Due to plan caching, we might use the same plan again. In that case, operatormem is not 0 to begin with; it holds values from the previous execution. The assertion's assumption is therefore wrong, and it is removed. The investigation found that:

1. Operator memory is reinitialized even if statement_mem changed, when a cached plan is used.
2. Even when the plan cache is used, the executor init is still called.
3. MemoryAccountId in Plan may be another candidate for a similar issue. However, the memory account id is not serialized from QD to QE, and the ids are all reinitialized anyway during ExecInit.

Note: we don't serialize the memory account id because it is an index into an account array specific to the current process.
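The situation can be illustrated with a minimal sketch. The struct field name `operatorMemKB` follows the commit message; the helper function and the rest are illustrative assumptions.

```c
#include <assert.h>

/* Minimal stand-in for a cached Plan node carrying a memory quota. */
typedef struct Plan
{
    long operatorMemKB;
} Plan;

/* Sketch of the ExecInit-time assignment: with a cached plan the field
 * still holds the previous execution's value, so the old
 * Assert(plan->operatorMemKB == 0) would fire. The fix is to drop the
 * assertion and reinitialize unconditionally. */
static void
ExecInitAssignOperatorMem(Plan *plan, long statement_mem_kb)
{
    plan->operatorMemKB = statement_mem_kb;
}
```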
-
- 22 Oct 2016, 1 commit
-
-
Committed by Venkatesh Raghavan
-
- 12 Feb 2016, 1 commit
-
-
Committed by Heikki Linnakangas
The correct way is:

- Every .c file does '#include "postgres.h"' before anything else. After that, you can include any system header files (e.g. unistd.h), and after those, any other PostgreSQL/GPDB header files.
- Header files should not contain '#include "postgres.h"', as that's done in the .c files.
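A .c file following that convention looks like this (the specific system and GPDB headers chosen are illustrative):

```c
#include "postgres.h"           /* always first, in .c files only */

#include <unistd.h>             /* then system headers */
#include <sys/stat.h>

#include "utils/memutils.h"     /* then other PostgreSQL/GPDB headers */
```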
-
- 28 Oct 2015, 1 commit
-
-