- 11 Jul, 2017 (15 commits)
-
-
By Heikki Linnakangas
There was a mixture of spaces and tabs being used for indentation in aocsam.c, and I finally got fed up with that while doing other changes in that file. I ran pgindent, and did a bunch of manual fixups of the formatting. All the changes in this commit are purely cosmetic. I did the same for appendonlyam.c, although I'm not changing it at the moment, to keep aocsam.c and appendonlyam.c in sync.
-
By Heikki Linnakangas
In aocsam.c, there's a block of code that does:

    if (...)
    {
        AOTupleIdInit_rowNum(...);
    }
    else
    {
        AOTupleIdInit_rowNum(...);
    }

While hacking, I removed the seemingly unnecessary braces, turning that into just:

    if (...)
        AOTupleIdInit_rowNum(...);
    else
        AOTupleIdInit_rowNum(...);

But then I got a compiler error about 'else' without 'if'. I was baffled for a moment, until I looked at the definition of AOTupleIdInit_rowNum: the way it includes curly braces makes it not work in an if-else construct like the above. These macros also have double-evaluation hazards. To make this more robust, turn the macros into static inline functions. Inline functions generally behave more sanely, and are more readable, than macros.
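A minimal sketch of the hazard and the fix; the type, field layout, and names here are hypothetical stand-ins, not the actual AOTupleId definition:

    #include <stdint.h>

    typedef struct { uint16_t hi, lo; } TupleId;

    /* Brace-wrapped macro in the style the message describes: the braces make
     * it a compound statement, so the ';' a caller naturally writes after it
     * terminates the 'if' arm and leaves the following 'else' dangling.
     * 'rowNum' also appears twice, a double-evaluation hazard. */
    #define TUPLEID_INIT_ROWNUM(tid, rowNum)            \
        {                                               \
            (tid)->hi = (uint16_t) ((rowNum) >> 16);    \
            (tid)->lo = (uint16_t) ((rowNum) & 0xFFFF); \
        }

    /* The equivalent static inline function evaluates each argument exactly
     * once and behaves like any other statement in an if-else. */
    static inline void
    tupleid_init_rownum(TupleId *tid, uint64_t rowNum)
    {
        tid->hi = (uint16_t) (rowNum >> 16);
        tid->lo = (uint16_t) (rowNum & 0xFFFF);
    }

    void
    demo(TupleId *tid, int cond, uint64_t n, uint64_t m)
    {
        /* With the macro this would not compile: the expansion ends in '};',
         * which closes the 'if' before the 'else' is seen. */
        if (cond)
            tupleid_init_rownum(tid, n);
        else
            tupleid_init_rownum(tid, m);
    }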
-
By Heikki Linnakangas
This does mean that we don't free the array quite as quickly as we used to, but it's a drop in the sea. The array is very small, there are much bigger data structures involved in every AOCS scan that are not freed as quickly, and it's freed at the end of the query in any case.
-
By Heikki Linnakangas
Commit fa6c2d43 added two functions, but forgot to add prototypes for them.
-
By Adam Lee
This is important for debugging customers' issues. (The log level still matters.)
-
By Ming LI
1. Log the raw string if it can't be decoded as unicode.
2. If a similar exception occurs in log(), continue processing the remaining log output with a warning.
3. If another exception occurs in CatThread, log the thread exit without blocking the worker process, and report the warning "gpfdist log halt because Log Thread got an exception:".
-
By Marbin Tan
Create a more extensive workload for the SQL to make it last longer. The previous SQL completed too quickly, so by the time the actual pid read happened, the pid no longer existed, causing the result to be 0.
-
By Venkatesh Raghavan
-
Oops, we broke the tests, sorry :( This reverts commit 97db5bdd.
-
By Kavinder Dhaliwal
Signed-off-by: Melanie Plageman <mplageman@pivotal.io>
-
By Chuck Litzell
* Pivotal GSS name change to Pivotal Support
* Change Greenplum Customer Support reference to a Warning, as in the user doc
-
By John Gaskin
Signed-off-by: Shivram Mani <shivram.mani@gmail.com>
-
By Nadeem Ghani
Work around a problem discovered by a client that noticed intermittent errors from gpssh when some nodes became very cpu-bound. In particular, we override the way the ssh command prompt is validated on a remote machine within gpssh. The vendored module 'pexpect' tries to match 2 successive prompts from an interactive bash shell. However, if the target host is slow from CPU loading or network loading, these prompts may return late. In that case, the override retries several times, extending the timeout from the default 1 second up to 125 times that duration. Experimentally, these added retries seem to tolerate about 1 second of delay, testing with a 'tc' command that slows network traffic artificially. The number of retries can be configured; a sketch of the retry loop follows this message.

- Add unit tests to verify the happy path of ssh-ing to localhost
- Add a module for gpssh, for overriding pexpect (pxxssh)
- Add a readme to describe the testing technique of using 'tc' to delay the network

Signed-off-by: Larry Hamel <lhamel@pivotal.io>
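A sketch of the escalating-timeout retry loop described above, written in C for illustration (the actual override lives in the vendored Python pexpect code); the helper try_match_prompt() and the constants are hypothetical:

    #include <stdbool.h>

    #define PROMPT_BASE_TIMEOUT_SEC 1   /* the default 1-second timeout */
    #define PROMPT_TIMEOUT_FACTOR   5   /* grow the timeout each retry */
    #define PROMPT_MAX_ATTEMPTS     4   /* configurable; waits 1s, 5s, 25s, 125s */

    /* Hypothetical: attempt to match the shell prompt within timeout_sec. */
    extern bool try_match_prompt(int timeout_sec);

    bool
    match_prompt_with_retries(void)
    {
        int timeout_sec = PROMPT_BASE_TIMEOUT_SEC;

        for (int attempt = 0; attempt < PROMPT_MAX_ATTEMPTS; attempt++)
        {
            if (try_match_prompt(timeout_sec))
                return true;
            /* Give a cpu- or network-loaded host progressively more time,
             * ending at 125x the default before giving up. */
            timeout_sec *= PROMPT_TIMEOUT_FACTOR;
        }
        return false;
    }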
-
By Larry Hamel
Also, added a unit test.

Signed-off-by: Nadeem Ghani <nghani@pivotal.io>
-
By Nadeem Ghani
Signed-off-by: Larry Hamel <lhamel@pivotal.io>
-
- 10 Jul, 2017 (2 commits)
-
-
By xiong-gang
CREATE RESOURCE GROUP rg1 WITH (concurrency=1, cpu_rate_limit=10, memory_limit=10);
CREATE ROLE r1 RESOURCE GROUP rg1;

session 1: set role r1;
           BEGIN;
session 2: BEGIN;  <--- hangs; then cancel it
           BEGIN;  <--- assertion failure

Signed-off-by: Ning Yu <nyu@pivotal.io>
-
By Richard Guo
Memory usage statistics in resource groups are defined as unsigned integers. For a subtraction 'a - b' on memory usage, the atomic subtraction function 'pg_atomic_sub_fetch_*' will return the value of 'a' from before the subtraction; this value is then asserted to be no less than 'b'.
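A minimal sketch of that fetch-then-assert pattern, using C11 <stdatomic.h> (whose atomic_fetch_sub likewise returns the value held before the subtraction) rather than the pg_atomic_* wrappers; the counter name is illustrative:

    #include <assert.h>
    #include <stdatomic.h>

    static atomic_uint memory_usage;  /* unsigned: an underflow silently wraps */

    /* Subtract 'delta' from the usage counter.  atomic_fetch_sub returns the
     * value from *before* the subtraction, so the assertion compares that old
     * value against 'delta'.  Checking the post-subtraction value instead
     * would be useless: by then an underflow has already wrapped around. */
    static void
    sub_memory_usage(unsigned int delta)
    {
        unsigned int oldval = atomic_fetch_sub(&memory_usage, delta);
        assert(oldval >= delta);
    }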
-
- 07 Jul, 2017 (8 commits)
-
-
By Adam Lee
-
By Adam Lee
-
By Ning Yu
Change the initial contents of pg_resgroupcapability:
* Remove memory_redzone_limit;
* Add memory_shared_quota, memory_spill_ratio.

Change the resgroup concurrency range to [1, 'max_connections']:
* The original range was [0, 'max_connections'], with -1 meaning unlimited;
* Now the range is [1, 'max_connections'], and -1 is not supported.

Change resgroup limit types from float to int. The following resgroup resource limit types changed from float to int percentage values:
* cpu_rate_limit;
* memory_limit;
* memory_shared_quota;
* memory_spill_ratio.
-
By Ashwin Agrawal
Currently, the CommitPrepared xlog record, `xl_xact_commit_prepared`, doesn't store information for the distributed transaction, like the distributed transaction id or the distributed timestamp. It's extremely helpful to have this information recorded, and it's also needed to replay/redo the xlog record. Currently, redo of the CommitPrepared xlog record does not update the distributed commit log. As of now this seems not to be posing any issues, since recovery of a primary or failover to a mirror will disconnect all existing connections, but for consistency it's better to redo the distributed log commit as well during redo of CommitPrepared.
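A hypothetical sketch of the extra payload such a record could carry so that redo can update the distributed commit log; the field names and types are illustrative, not GPDB's actual xl_xact_commit_prepared layout:

    #include <stdint.h>

    typedef uint32_t TransactionId;        /* stand-in for the PG typedef */

    typedef struct xl_xact_commit_prepared_dtx
    {
        TransactionId xid;                 /* local xid of the prepared xact */
        uint64_t      distribTimeStamp;    /* distributed transaction timestamp */
        uint64_t      distribXid;          /* distributed transaction id */
        /* ... followed by the existing commit-prepared payload ... */
    } xl_xact_commit_prepared_dtx;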
-
By Ashwin Agrawal
The segmentCount variable is unused in the TMGXACT_CHECKPOINT structure, so remove it. Also, remove the unions in the fspc_agg_state, tspc_agg_state and dbdir_agg_state structures, as there is no apparent reason for having them.
-
By Daniel Gustafsson
The minipage attribute was using varbit without actually storing a varbit datum in it. Change over to bytea since it makes reading the value back easier, especially for pg_upgrade. This complements the change in commit dce769fe which performed the same change for the visimap attribute of the AO visimap relation. Since the bitmap hack function is created in the pg_temp schema, exempt it from Oid synchronization during binary upgrades to allow creation. This fix applies to the visimap handling as well.
-
By Omer Arap
-
By Lisa Owen
-
- 06 Jul, 2017 (7 commits)
-
-
By Abhijit Subramanya
The vpinfo for `aocsvpinfo_decode()` had an incorrect data type. Change it from `varbit` to `bytea` to match the column type in the pg_aocsseg table.
-
By Heikki Linnakangas
It's needed to compile with gpfdist. Per github issue #2739, reported by @flochman.
-
By Bhuvnesh Chaudhary
-
By Bhuvnesh Chaudhary
-
By Bhuvnesh Chaudhary
-
By Daniel Gustafsson
Commit a8f956c6 removed the old SAN failover code but left the catalogs in place due to catalog change freeze. This removes the no longer used catalogs and the relevant doc entries.
-
By Daniel Gustafsson
This adds the ability for the caller of pg_terminate_backend() or pg_cancel_backend() to include an optional message to the process which is being signalled. The message will be appended to the error message returned to the killed process. The new syntax is overloaded as:

    SELECT pg_terminate_backend(<pid> [, msg]);
    SELECT pg_cancel_backend(<pid> [, msg]);
-
- 04 Jul, 2017 (1 commit)
-
-
By Shoaib Lari
The creation of filespaces, databases, tablespaces, and relfilenodes is logged by an MMXLOG record in the xlog. Functionality is added to xlogdump to display these records.
-
- 03 Jul, 2017 (3 commits)
-
-
By Andreas Scherbaum
* Add libpq as a database driver
-
By Andreas Scherbaum
* Add information on how to add a sequence as a default value
-
By Andreas Scherbaum
* Update the Encoding information, add an example
-
- 01 Jul, 2017 (4 commits)
-
-
By Marbin Tan
There are times when the gpperfmon_log_alert_history scenario fails because there's no data in the log alert history table. This might be due to us copying an empty csv file: gpperfmon writes log alerts to a file on a cadence, so we might be copying a file that has not been written to yet and is possibly empty. Make sure that we have something to copy before proceeding to the next step.

Signed-off-by: Marbin Tan <mtan@pivotal.io>
-
By Marbin Tan
Coefficient of Variation Calculation

The coefficient of variation is the standard deviation divided by the mean. We're using the term skew very loosely in our description, as we're actually calculating the coefficient of variation. With the coefficient of variation, we can tell how dispersed the data points are across the segments: the higher the coefficient of variation, the more non-uniform the distribution of the data in the cluster. The coefficient of variation is unitless, so it can be used to compare different clusters and how they are performing relative to each other.

CPU skew calculation:
    mean(cpu)     = sum(per segment cpu cycles) / count(segments)
    variance(cpu) = sum((cpu(segment) - mean(cpu))^2) / count(segments)
    std_dev(cpu)  = sqrt(variance(cpu))
    skew(cpu)     = coefficient of variation = std_dev(cpu) / mean(cpu)

Row out skew calculation:
    mean(row)     = sum(per segment rows) / count(segments)
    variance(row) = sum((row(segment) - mean(row))^2) / count(segments)
    std_dev(row)  = sqrt(variance(row))
    skew(row)     = coefficient of variation = std_dev(row) / mean(row)

Signed-off-by: Tushar Dadlani <tdadlani@pivotal.io>
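In standard notation, the quantity computed above for each metric is the coefficient of variation:

    \[
    \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i, \qquad
    \sigma = \sqrt{\frac{1}{n}\sum_{i=1}^{n} (x_i - \bar{x})^2}, \qquad
    \mathrm{CV} = \frac{\sigma}{\bar{x}}
    \]

where \(x_i\) is the CPU time (or row count) on segment \(i\) and \(n\) is the number of segments.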
-
By Tushar Dadlani
Any kind of intensive work should show up in gpperfmon as both cpu skew and row skew, so the two tests are put together, as they can be tested at the same time.

Signed-off-by: Marbin Tan <mtan@pivotal.io>
-
By Marbin Tan
sigar_proc_cpu_get is the function that takes a pid and gets the cpu_elapsed value (the amount of cpu cycles spent on a given slice/process) for that pid at a given moment. get_pid_metrics saves that information into a hash table within the gpsmon process; a pid only gets wiped out of the hash table when gpmmon sends a dump command to gpsmon. The gpsmon process sends tcp packets with the cpu_elapsed information, and doing a 'dump' clears out the gpsmon hash tables.

This could lead to scenarios where a query has ended and the process has died, so sigar_proc_cpu_get can't find the pid and puts 0 into the struct. That 0 then updates the hash table, becomes the last entry in queries_tail, and so queries_history ends up with 0 for cpu_elapsed.

Fix: Ensure that we validate the functions that request pid metrics from libsigar, and log the issues if they occur (a sketch follows this message).

Signed-off-by: Tushar Dadlani <tdadlani@pivotal.io>
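A sketch of that validation, assuming libsigar's sigar_proc_cpu_get() returns SIGAR_OK on success; log_warning() and update_pid_metrics() are hypothetical stand-ins for gpsmon's internals:

    #include <sigar.h>   /* libsigar: sigar_proc_cpu_get(), SIGAR_OK */

    extern void log_warning(const char *fmt, ...);              /* hypothetical */
    extern void update_pid_metrics(sigar_pid_t pid,
                                   sigar_uint64_t cpu_elapsed); /* hypothetical */

    static void
    get_pid_metrics(sigar_t *sigar, sigar_pid_t pid)
    {
        sigar_proc_cpu_t proccpu;
        int status = sigar_proc_cpu_get(sigar, pid, &proccpu);

        if (status != SIGAR_OK)
        {
            /* The process may already be gone; don't store a bogus 0 that
             * would end up in queries_tail and then queries_history. */
            log_warning("sigar_proc_cpu_get failed for pid %lu: %s",
                        (unsigned long) pid, sigar_strerror(sigar, status));
            return;
        }

        update_pid_metrics(pid, proccpu.total);   /* cumulative cpu time */
    }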
-