- 21 May 2016, 3 commits
-
-
Committed by Shreedhar Hardikar
-
Committed by Shreedhar Hardikar
-
This closes #764
-
- 20 May 2016, 7 commits
-
-
-
Committed by Daniel Gustafsson
The -i option is a no-op in pg_dump and pg_dumpall; remove its documentation from the reference pages. Andreas Scherbaum
-
Committed by Daniel Gustafsson
Make the description text match the summary.
-
Committed by Daniel Gustafsson
This attempts to clean up the autoconf script a bit and follow the upstream division of generic code in config/ with the actual lookup configuration in configure.in. Also updates our installation to rely on a more modern version of autoconf by backporting parts of upstream commit 7cc514ac. This commit consists of:
* Decouple --enable-codegen and --with-codegen-prefix so that prefixes are not silently ignored if the enable flag isn't passed. Emit a warning if prefixes are configured without codegen. Also make --with-codegen-prefix require an argument, since --with-codegen-prefix without an argument is likely to hide either a scripting bug or a misunderstanding on the user's part.
* Move the program checks for cmake and apr-1-config to programs.m4, allow for path overrides, and ensure the resolved path is used when invoking cmake for --enable-codegen.
* Propagate the apr-1-config flags and objects to where they are used via Makefile.global rather than performing another lookup.
* Remove the check for unused arguments, since autoconf does that automatically since 2.63.
* Remove the backported fseeko handling, since it isn't relevant for modern autoconf versions.
* Minor help output tidying and spelling fixes.
-
Committed by Adam Lee
-
Committed by Pengcheng Tang
information. Authors: Christopher Hajas, Pengcheng Tang
-
Committed by Pengcheng Tang
If the peer of a failed segment is in the ChangeTrackingDisabled state, its change tracking log is corrupted. This commit makes gprecoverseg stop recovering such segments in incremental mode; instead, it warns the user to run a full recovery. Also enables the gprecoverseg unit and behave tests. Authors: Pengcheng Tang, Chumki Roy, Christopher Hajas
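The decision described above can be sketched as follows. This is a hypothetical model of the behavior, not the real gprecoverseg code; the function, dictionary keys, and state name are illustrative stand-ins.

```python
# Hypothetical sketch of the gprecoverseg decision described above;
# names are illustrative, not the actual gprecoverseg internals.

CHANGE_TRACKING_DISABLED = "ChangeTrackingDisabled"

def plan_recovery(failed_segment, full_mode):
    """Return the action to take for one failed segment."""
    if not full_mode and failed_segment["peer_state"] == CHANGE_TRACKING_DISABLED:
        # The peer's change tracking log is corrupted, so incremental
        # recovery would replay from a bad log: refuse and warn instead.
        return "warn: run full recovery"
    return "full" if full_mode else "incremental"
```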
-
- 19 May 2016, 11 commits
-
-
Committed by Andreas Scherbaum
This no longer spills logfiles from the demo cluster into the user's ~/gpAdminLogs directory. It also makes it easier to identify which logfile was created by the last regression test run. Closes #689 Closes #523
-
Committed by Adam Lee
To get fast feedback. Also deleted some dead code and improved the tests for s3conf.
-
Committed by Adam Lee
-
Committed by Adam Lee
-
Committed by Pengzhou Tang
Move code into the dispatcher/ directory. This commit has no logic changes; it only moves code across files, to make the dispatcher code clearer and easier to unit test. Signed-off-by: Kenan Yao
-
-
Committed by Adam Lee
Usage:
* `s3chkcfg -c "s3://endpoint/bucket/prefix config=path_to_config_file"` checks the configuration.
* `s3chkcfg -d "s3://endpoint/bucket/prefix config=path_to_config_file"` downloads and writes to stdout.
* `s3chkcfg -t` shows the config template.
* `s3chkcfg -h` shows this help.
Also refactored to reuse functions shared with the s3ext module.
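Parsing the documented location argument could be sketched as below. This only mirrors the syntax shown in the usage text; the function name and return shape are made up for illustration and do not correspond to the actual s3ext code.

```python
# Hypothetical sketch of splitting the s3chkcfg location argument
# ("s3://endpoint/bucket/prefix config=path_to_config_file") into
# its parts; names are illustrative, not the real implementation.

def parse_location(arg):
    url, _, option = arg.partition(" ")
    if not url.startswith("s3://"):
        raise ValueError("location must start with s3://")
    endpoint, _, path = url[len("s3://"):].partition("/")
    bucket, _, prefix = path.partition("/")
    config = option[len("config="):] if option.startswith("config=") else None
    return {"endpoint": endpoint, "bucket": bucket,
            "prefix": prefix, "config": config}
```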
-
Committed by Shreedhar Hardikar
-
Committed by Daniel Gustafsson
-
Committed by Daniel Gustafsson
The codegen-prefix path list is semicolon separated and must thus be quoted when passed to autoconf as it otherwise breaks up the commandline. Add missing quote character in the README documentation.
-
-
- 18 May 2016, 5 commits
-
-
Committed by Heikki Linnakangas
There were a bunch of changes vs. upstream in the way the PGPROC free list was managed and in the way backend exit was handled. They seemed largely unnecessary, and somewhat buggy, so I reverted them. Avoiding unnecessary differences also makes merging with upstream easier.
* The freelist was protected by atomic operations instead of a spinlock. The implementation had an ABA problem, however: in Prepend(), if another backend grabbed the PGPROC we were just about to grab for ourselves, and returned it to the freelist before we iterate and notice, we might set the head of the free list to a PGPROC that's actually already in use. It's a tight window, and backend startup is quite heavy, so that's unlikely to happen in practice. Still, it's a bug. Because backend startup is such a heavy operation, this codepath is not so performance-critical that you would gain anything from using atomic operations instead of a spinlock, so just switch back to using a spinlock like in the upstream.
* When a backend exited, the responsibility to recycle the PGPROC entry to the free list was moved from the backend itself to the postmaster. That's not broken per se, AFAICS, but it violates the general principle of avoiding shared memory access in the postmaster.
* There was a dead man's switch, in the form of the postmasterResetRequired flag in the PGPROC entry. If a backend died unexpectedly and the flag was set, postmaster would restart the whole server. If the flag was not set, it would clean up only the PGPROC entry that was left behind and let the system run normally. However, the flag was in fact always set, except after ProcKill had already run, i.e. when the process had exited normally. So I don't see the point of that; we might as well rely on the exit status to signal normal/abnormal exit, like we do in the upstream. That has worked fine for PostgreSQL.
* There was one more case where the dead man's switch was activated even though the backend exited normally: in AuxiliaryProcKill(), if a filerep subprocess died and it no longer had a parent process. That means the master filerep process had already died unexpectedly (filerep subprocesses are children of the master filerep process, not direct children of postmaster). That seems unnecessary, however: if the filerep process died unexpectedly, the postmaster should wake up to that and would restart the server. To play it safe, though, make the subprocess exit with a non-zero exit status in that case, so that the postmaster will wake up to it if it didn't notice the master filerep process dying for some reason.
* HaveNFreeProcs() was rewritten to maintain the number of entries in the free list in a variable, instead of walking the list to count them, presumably to make backend startup cheaper when max_connections is high. I kept that, but it's slightly simpler now that we use a spinlock to protect the free list again: no need to use atomic ops for the variable anymore.
* The autovacFreeProcs list was not used; autovacuum workers got their PGPROC entry from the regular free list. Fix that, and also add the missing InitSharedLatch() call to the initialization of the autovacuum workers list.
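The resulting design (a lock-protected free list with a maintained count, so HaveNFreeProcs() need not walk the list) can be sketched as a simplified Python model. This is only an illustration of the C logic described above, not the actual PGPROC code; the class and method names are invented.

```python
import threading

class ProcFreeList:
    """Toy model of a PGPROC free list protected by a spinlock
    (here a threading.Lock), with the number of free entries kept
    in a counter so have_n_free() is O(1) instead of O(n)."""

    def __init__(self, procs):
        self._lock = threading.Lock()   # stands in for the spinlock
        self._free = list(procs)
        self._n_free = len(procs)

    def get_proc(self):
        with self._lock:                # no ABA: mutation is serialized
            if not self._free:
                return None
            self._n_free -= 1
            return self._free.pop()

    def return_proc(self, proc):
        with self._lock:
            self._free.append(proc)
            self._n_free += 1

    def have_n_free(self, n):
        with self._lock:
            return self._n_free >= n
```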
-
-
Committed by Shreedhar Hardikar
This is especially useful for getting a pointer to a member of a structure when that member is an embedded array.
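The effect of such a macro, taking the address of one element of an array embedded in a struct via an offset computation, can be illustrated with ctypes. The `Proc` layout here is made up purely for the example.

```python
import ctypes

class Proc(ctypes.Structure):
    # Made-up layout, just to show an embedded array member.
    _fields_ = [("pid", ctypes.c_int),
                ("slots", ctypes.c_int * 4)]

def member_element_addr(obj, field, index, elem_type=ctypes.c_int):
    """Address of obj.field[index], i.e. the C idiom
    (char *)&obj + offsetof(Type, field) + index * sizeof(elem)."""
    offset = getattr(type(obj), field).offset
    return ctypes.addressof(obj) + offset + index * ctypes.sizeof(elem_type)
```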
-
Committed by Shreedhar Hardikar
-
Committed by Chumki Roy
-
- 17 May 2016, 3 commits
-
-
Committed by Foyzur Rahman
Signed-off-by: Karthikeyan Jambu Rajaraman <karthi.jrk@gmail.com>
-
Committed by Heikki Linnakangas
You get warnings about relcache reference leaks if you run something like "select readindex('pg_class_oid_index'::regclass) limit 1;". To fix, don't hold the relcache entries or the buffer pin across calls. Looking up the relcache entry on every call adds some overhead, of course, but a full index scan isn't exactly cheap anyway. And this is just a debugging function, not performance critical. Spotted by Ashwin Agrawal.
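The pattern behind the fix, reacquiring and releasing the resource inside each call rather than holding it across calls, can be sketched as follows. `Relcache` is a toy stand-in with a reference count; it is not the real relcache API.

```python
# Toy model of "don't hold the reference across calls": the handle
# is opened and closed within each call, so stopping early (e.g. a
# LIMIT cutting off the scan) cannot leak a reference.

class Relcache:
    def __init__(self):
        self.refcount = 0
    def open(self):
        self.refcount += 1
        return self
    def close(self):
        self.refcount -= 1

def read_one_row(cache, fetch):
    rel = cache.open()
    try:
        return fetch(rel)
    finally:
        cache.close()       # released even if fetch raises
```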
-
Committed by Jimmy Yih
Previously, the previous free TID validation was done under the GUC persistent_integrity_checks. This commit extracts the previous free TID validation into its own GUC, validate_previous_free_tid, which is enabled by default. If the validation detects corruption in the free TID list, we now switch to a new free TID list and leave the corrupted one detached, for cleanup during persistent table rebuild or during crash recovery. Authors: Jimmy Yih and Abhijit Subramanya
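The switch-on-corruption behavior can be sketched as below. This is a hypothetical model of the policy described above; the function and list representation do not correspond to the real persistent-table code.

```python
# Hypothetical sketch: validate the free TID list and, on detecting
# corruption, park it for later cleanup and start a fresh list.

def switch_if_corrupt(free_tid_list, is_valid, detached):
    """Return the list to keep using; corrupted lists are appended
    to `detached` for cleanup during persistent table rebuild or
    crash recovery."""
    if is_valid(free_tid_list):
        return free_tid_list
    detached.append(free_tid_list)
    return []          # fresh, empty free list
```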
-
- 16 May 2016, 1 commit
-
-
Committed by water32
The old code queried each instance's database data directory separately, one query per instance, which generated many entries in the log file (for example, when running gpstate). The new code fetches all database data directory information in a single query, maps the data into a key -> array structure, and then sets it on the segment objects. Using one query instead of many avoids repeated round trips to the database.
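The batched approach amounts to grouping the rows of one query by segment id. A minimal sketch, with an assumed (segment_id, datadir) row shape that is illustrative only:

```python
# Sketch of the one-query approach: a single query returns
# (segment_id, datadir) rows for every instance, which are grouped
# into a key -> list mapping and then attached to segment objects.
from collections import defaultdict

def group_datadirs(rows):
    """rows: iterable of (segment_id, datadir) from a single query."""
    by_segment = defaultdict(list)
    for seg_id, datadir in rows:
        by_segment[seg_id].append(datadir)
    return dict(by_segment)
```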
-
- 13 May 2016, 10 commits
-
-
Committed by Heikki Linnakangas
These tests use the existing fault injection mechanism built into the server to cause errors at strategic places, and check the results. This is almost just a placeholder; there are very few actual tests for now, but it's a start. The suite uses plain old pg_regress to run the tests and check the results. That's enough for the tests included here, but in the future we'll probably want to do server restarts, crashes, etc. as part of the suite, and will have to refactor this into something that can do those things more easily. But let's cross that bridge when we get there. Also, the test actually leaves the connections to the segments in a funny state, which shouldn't really happen; the test currently fails because of that. Let's fix it together with the state issue. But even in this state, this has been useful to me right now, to reproduce an issue on the merge_8_3_22 branch that I'm working on at the same time (this test currently causes a PANIC there). This also isn't hooked up to any top-level targets yet; you have to run the suite manually from the src/test/dtm directory.
-
Committed by Heikki Linnakangas
While hacking, I ran into the "Expected a CREATE for file-system object name" error. But instead of printing the error, it got into an infinite loop. smgrSortDeletesList() elog(ERROR)'d out of the function, while it was in the middle of putting the linked list back together, leaving pendingDeletes list corrupt, with a loop. AbortTransaction() processing called smgrIsPendingFileSysWork(), which traversed the list, and it got stuck in the loop. To avoid that, don't leave the list in an invalid state on error. I don't know why I ran into the error in the first place, but that's a different story.
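The principle behind the fix, never raising an error while a shared linked list is half re-linked, can be sketched as a toy model. Here validation runs before any next pointers are touched, so an error leaves the original list intact; `Node` and the `validate` hook are invented for the example and do not mirror the smgr code.

```python
# Toy model: sort a singly linked list by name, but validate BEFORE
# modifying any next pointers, so an error cannot leave the list in
# an invalid state (e.g. containing a cycle).

class Node:
    def __init__(self, name):
        self.name = name
        self.next = None

def sort_pending_list(head, validate):
    nodes = []
    n = head
    while n is not None:
        nodes.append(n)
        n = n.next
    if not nodes:
        return None
    nodes.sort(key=lambda x: x.name)
    validate(nodes)                 # may raise; list untouched so far
    for a, b in zip(nodes, nodes[1:]):
        a.next = b
    nodes[-1].next = None
    return nodes[0]
```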
-
Committed by Heikki Linnakangas
I saw the "nresults < nslots" assertion fail while hacking on something else. It happened when a Distributed Prepare command failed and there were several error result sets from a segment. I'm not sure how normal it is to receive multiple ERROR responses to a single query, but the protocol certainly allows it, and I don't see any explanation for why the code used to assume that there can be at most 2 result sets from each segment. Remove that assumption, and make the code cope with more than two result sets from a segment, by calculating the required size of the array accurately. In passing, remove the NULL terminator from the array, and change the callers that depended on it to use the returned size variable instead. Makes the loops in the callers look less funky.
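The shape of the fix, sizing the output exactly and returning an explicit count instead of a sentinel, can be sketched as follows (an illustrative model, not the cdbdisp code):

```python
# Sketch of the pattern: collect however many result sets each
# segment returns (not at most two), and hand callers an explicit
# count rather than a NULL terminator to scan for.

def collect_results(per_segment_results):
    """per_segment_results: list of per-segment lists of result sets."""
    out = []
    for results in per_segment_results:
        out.extend(results)       # a segment may send many ERRORs
    return out, len(out)          # explicit size, no sentinel
```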
-
Committed by Heikki Linnakangas
The code incorrectly called free() on the last+1 element of the array. The array returned by cdbdisp_dispatchRMCommand() always has a NULL element as terminator, and free(NULL) is a no-op, which is why this didn't outright crash. But clearly the intention here was to free() the array itself; otherwise it's leaked.
-
Committed by Daniel Gustafsson
This is a follow-up to commit b7365f58 which replaced the PostgreSQL bug report email with the Greenplum one.
-
Committed by Daniel Gustafsson
Removes unused CVS $Header$ tags, moves comments closer to where they make sense, and updates a few comments to match reality.
-
Committed by Daniel Gustafsson
Rather than storing the full 100kb string in the outfile (which adds 200kb for the header) and passing it to diff, compare the tuple with the expected value and store the boolean result in the outfile instead. This shaves 1.25 seconds off the testsuite on my laptop, but the primary win is shrinking the size of the outfiles. Tests on Pulse show consistently lower diff times.
-
Committed by Daniel Gustafsson
Invoking gpdiff -version was broken, since it relied on an old CVS $Revision$ tag in the sourcecode being replaced with an actual value. Since this clearly isn't the most important part, for now I copied in the contents of VERSION, which seems like enough attention to spend on this.
-
Committed by Daniel Gustafsson
These variables are not used since the split into a program and a module.
-
Committed by Daniel Gustafsson
Rather than our own bespoke code, use the Getopt::Long core module for parsing the command line options. By specifying pass_through in the Getopt configuration we can preserve the options to pass down to the diff command while extracting the gpdiff specific options. The variants that were previously allowed are added as aliases to the primary option names.
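Getopt::Long's pass_through behaves much like `argparse.parse_known_args` in Python: recognized options are extracted and everything else is preserved in order. A hedged analogy (gpdiff itself is Perl, and the option name below is illustrative):

```python
# Python analogy for Getopt::Long's pass_through configuration:
# extract the options we understand, keep the rest untouched so
# they can be handed down to the diff command.
import argparse

parser = argparse.ArgumentParser(add_help=False)
# Illustrative gpdiff-style option, not the real option list.
parser.add_argument("--ignore-headers", action="store_true")

def split_options(argv):
    known, rest = parser.parse_known_args(argv)
    return known, rest            # `rest` goes to diff unchanged
```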
-