- May 19, 2016: 7 commits
-
Committed by Pengzhou Tang
Move dispatcher code into the dispatcher/ directory. This commit has no logic change; it only moves code across files, to make the dispatcher code clearer and easier to unit test. Signed-off-by: Kenan Yao
-
Committed by Adam Lee
Usage:
  s3chkcfg -c "s3://endpoint/bucket/prefix config=path_to_config_file": check the configuration.
  s3chkcfg -d "s3://endpoint/bucket/prefix config=path_to_config_file": download and write to stdout.
  s3chkcfg -t: show the config template.
  s3chkcfg -h: show this help.
Also refactored to reuse functions shared with the s3ext module.
-
Committed by Shreedhar Hardikar
-
Committed by Daniel Gustafsson
-
Committed by Daniel Gustafsson
The codegen-prefix path list is semicolon-separated and must thus be quoted when passed to autoconf, as it otherwise breaks up the command line. Add the missing quote character in the README documentation.
-
- May 18, 2016: 5 commits
-
Committed by Heikki Linnakangas
There were a bunch of changes vs. upstream in the way the PGPROC free list was managed, and the way backend exit was handled. They seemed largely unnecessary, and somewhat buggy, so I reverted them. Avoiding unnecessary differences makes merging with upstream easier too.

* The freelist was protected by atomic operations instead of a spinlock. There was an ABA problem in the implementation, however: in Prepend(), if another backend grabbed the PGPROC we were just about to grab for ourselves, and returned it to the freelist before we iterate and notice, we might set the head of the free list to a PGPROC that's actually already in use. It's a tight window, and backend startup is quite heavy, so that's unlikely to happen in practice. Still, it's a bug. Because backend startup is such a heavy operation, this codepath is not so performance-critical that you would gain anything from using atomic operations instead of a spinlock, so just switch back to using a spinlock like in the upstream.

* When a backend exited, the responsibility to recycle the PGPROC entry to the free list was moved from the backend itself to the postmaster. That's not broken per se, AFAICS, but it violates the general principle of avoiding shared memory access in the postmaster.

* There was a dead-man's switch, in the form of the postmasterResetRequired flag in the PGPROC entry. If a backend died unexpectedly and the flag was set, the postmaster would restart the whole server. If the flag was not set, it would clean up only the PGPROC entry that was left behind and let the system run normally. However, the flag was in fact always set, except after ProcKill had already run, i.e. when the process had exited normally. So I don't see the point of that; we might as well rely on the exit status to signal normal/abnormal exit, like we do in the upstream. That has worked fine for PostgreSQL.

* There was one more case where the dead-man's switch was activated even though the backend exited normally: in AuxiliaryProcKill(), if a filerep subprocess died and it didn't have a parent process anymore. That means the master filerep process had already died unexpectedly (filerep subprocesses are children of the master filerep process, not direct children of the postmaster). That seems unnecessary, however: if the filerep process had died unexpectedly, the postmaster should wake up to that and would restart the server. To play it safe, though, make the subprocess exit with a non-zero exit status in that case, so that the postmaster will wake up to it if it didn't notice the master filerep process dying for some reason.

* HaveNFreeProcs() was rewritten to maintain the number of entries in the free list in a variable, instead of walking the list to count them, presumably to make backend startup cheaper when max_connections is high. I kept that, but it's slightly simpler now that we use a spinlock to protect the free list again: no need to use atomic ops for the variable anymore.

* The autovacFreeProcs list was not used; autovacuum workers got their PGPROC entry from the regular free list. Fix that, and also add a missing InitSharedLatch() call to the initialization of the autovacuum workers list.
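For reference, a minimal sketch of the spinlock-protected free list the commit returns to, modeled on upstream proc.c; the struct layout and function name below are simplified illustrations, not the actual GPDB code:

```c
#include "postgres.h"
#include "storage/spin.h"

/* Simplified stand-in for the real PGPROC. */
typedef struct MyProc
{
	struct MyProc *links;		/* next entry, while on the free list */
} MyProc;

static MyProc *freeProcs;		/* head of the free list (shared memory) */
static int	numFreeProcs;		/* kept for a cheap HaveNFreeProcs() */
static slock_t ProcStructLock;	/* protects both fields above */

static MyProc *
get_free_proc(void)
{
	MyProc	   *proc;

	SpinLockAcquire(&ProcStructLock);
	proc = freeProcs;
	if (proc != NULL)
	{
		/*
		 * Unlinking while holding the spinlock closes the ABA window: no
		 * other backend can pop and re-push this entry between our read of
		 * the head and our write of the new head.
		 */
		freeProcs = proc->links;
		numFreeProcs--;			/* plain int is fine under the spinlock */
	}
	SpinLockRelease(&ProcStructLock);
	return proc;
}
```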
-
Committed by Shreedhar Hardikar
This is especially useful for getting a pointer to a member of a structure when that member is an embedded array.
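The commit message doesn't show the macro itself, so the following is a purely hypothetical illustration of the pattern it describes; every name in it is invented:

```c
#include <stddef.h>

typedef struct Slot
{
	int			id;
} Slot;

typedef struct SlotTable
{
	int			nslots;
	Slot		slots[16];		/* embedded array member */
} SlotTable;

/* Hypothetical macro: pointer to a member, given only the struct pointer. */
#define MEMBER_PTR(ptr, type, member) \
	((void *) (((char *) (ptr)) + offsetof(type, member)))

/* Usage: Slot *s = (Slot *) MEMBER_PTR(tbl, SlotTable, slots); */
```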
-
Committed by Shreedhar Hardikar
-
Committed by Chumki Roy
-
- May 17, 2016: 3 commits
-
Committed by Foyzur Rahman
Signed-off-by: Karthikeyan Jambu Rajaraman <karthi.jrk@gmail.com>
-
Committed by Heikki Linnakangas
You get warnings about relcache reference leaks if you run something like "select readindex('pg_class_oid_index'::regclass) limit 1;". To fix, don't hold the relcache entries or the buffer pin across calls. Looking up the relcache entry on every call adds some overhead, of course, but a full index scan isn't exactly cheap anyway. And this is just a debugging function, not performance critical. Spotted by Ashwin Agrawal.
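A sketch of the pattern the fix implies, not the actual readindex() code: open the relcache entry and pin the buffer within each call, and release both before returning, so nothing is left to leak:

```c
#include "postgres.h"
#include "access/genam.h"
#include "storage/bufmgr.h"
#include "storage/lock.h"
#include "utils/rel.h"

static void
read_one_index_page(Oid indexoid, BlockNumber blkno)
{
	Relation	irel = index_open(indexoid, AccessShareLock);
	Buffer		buf = ReadBuffer(irel, blkno);

	LockBuffer(buf, BUFFER_LOCK_SHARE);
	/* ... decode the index tuples on this page ... */
	UnlockReleaseBuffer(buf);			/* no buffer pin survives the call */

	index_close(irel, AccessShareLock);	/* no relcache reference either */
}
```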
-
Committed by Jimmy Yih
Previously, validation of the previous free TID was done under the GUC persistent_integrity_checks. This commit extracts the previous free TID validation into its own GUC, validate_previous_free_tid, which is enabled by default. If the validation detects corruption in the free TID list, we now switch to a new free TID list and leave the corrupted one detached, for cleanup during persistent table rebuild or during crash recovery. Authors: Jimmy Yih and Abhijit Subramanya
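A hedged sketch of how such a boolean GUC is declared in guc.c; the GUC name comes from the commit message, but the C variable name and option group are assumptions:

```c
#include "postgres.h"
#include "utils/guc_tables.h"

/* Variable name assumed; only the GUC name appears in the commit message. */
bool		validate_previous_free_tid = true;

static struct config_bool ConfigureNamesBool[] =
{
	{
		{"validate_previous_free_tid", PGC_SUSET, DEVELOPER_OPTIONS,
			gettext_noop("Detect corruption in persistent table free TID lists."),
			NULL
		},
		&validate_previous_free_tid,
		true, NULL, NULL		/* enabled by default */
	},
	/* ... other entries ... */
};
```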
-
- May 16, 2016: 1 commit
-
Committed by water32
The old code queried each instance's database data directory, one query per instance. This generated many entries in the log file, for example when running gpstate. The new code queries the data directory information for all databases at once, maps the data into a key -> array structure, and then sets it on the segments object. One query thus replaces many, avoiding repeated queries against the database.
-
- May 13, 2016: 16 commits
-
Committed by Heikki Linnakangas
These tests use the existing fault injection mechanism built into the server to cause errors at strategic places, and check the results. This is almost just a placeholder; there are very few actual tests for now, but it's a start. The suite uses plain old pg_regress to run the tests and check the results. That's enough for the tests included here, but in the future we'll probably want to do server restarts, crashes, etc. as part of the suite, and will have to refactor this into something that can do those things more easily. But let's cross that bridge when we get there. Also, the test actually leaves the connections to the segments in a funny state, which shouldn't really happen. The test currently fails because of that; let's fix it together with the state issue. But even in this state, this has been useful to me right now, to reproduce an issue on the merge_8_3_22 branch that I'm working on at the same time (this test currently causes a PANIC there). This also isn't hooked up to any top-level targets yet; you have to run the suite manually from the src/test/dtm directory.
-
Committed by Heikki Linnakangas
While hacking, I ran into the "Expected a CREATE for file-system object name" error. But instead of printing the error, it got into an infinite loop. smgrSortDeletesList() elog(ERROR)'d out of the function while it was in the middle of putting the linked list back together, leaving the pendingDeletes list corrupt, with a loop. AbortTransaction() processing called smgrIsPendingFileSysWork(), which traversed the list, and it got stuck in the loop. To avoid that, don't leave the list in an invalid state on error. I don't know why I ran into the error in the first place, but that's a different story.
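A simplified sketch of the defensive pattern (types and names reduced for illustration): validate everything before rewiring any next pointers, so an elog(ERROR) can never leave the global list with a cycle for the abort path to trip over:

```c
#include "postgres.h"

typedef struct PendingDelete
{
	struct PendingDelete *next;
	bool		isCreate;
} PendingDelete;

static PendingDelete *pendingDeletes;	/* traversed during abort, too */

static void
sort_pending_deletes(PendingDelete **items, int n)
{
	PendingDelete *head = NULL;
	int			i;

	/*
	 * Validate first: any elog(ERROR) raised here leaves the global chain
	 * untouched and therefore still loop-free.
	 */
	for (i = 0; i < n; i++)
	{
		if (!items[i]->isCreate)
			elog(ERROR, "Expected a CREATE for file-system object name");
	}

	/* Only now rewire the next pointers and publish the result. */
	for (i = n - 1; i >= 0; i--)
	{
		items[i]->next = head;
		head = items[i];
	}
	pendingDeletes = head;		/* a complete, loop-free chain */
}
```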
-
Committed by Heikki Linnakangas
I saw the "nresults < nslots" assertion fail while hacking on something else. It happened when a Distributed Prepare command failed and there were several error result sets from a segment. I'm not sure how normal it is to receive multiple ERROR responses to a single query, but the protocol certainly allows it, and I don't see any explanation for why the code used to assume that there can be at most 2 result sets from each segment. Remove that assumption and make the code cope with more than two result sets from a segment, by calculating the required size of the array accurately. In passing, remove the NULL terminator from the array, and change the callers that depended on it to use the returned size variable instead. Makes the loops in the callers look less funky.
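A reduced sketch of the new approach (simplified types; the real code deals in PGresult arrays): count every result from every segment up front, allocate exactly that many slots, and return the count instead of a NULL sentinel:

```c
#include "postgres.h"

static void **
collect_all_results(void ***perSegment, int *counts, int nsegments,
					int *nresults_out)
{
	void	  **all;
	int			total = 0;
	int			i, j, k = 0;

	for (i = 0; i < nsegments; i++)
		total += counts[i];		/* no "at most 2 per segment" assumption */

	all = palloc(total * sizeof(void *));
	for (i = 0; i < nsegments; i++)
		for (j = 0; j < counts[i]; j++)
			all[k++] = perSegment[i][j];

	*nresults_out = total;		/* callers loop to this, no NULL sentinel */
	return all;
}
```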
-
Committed by Heikki Linnakangas
The code incorrectly called free() on the last+1 element of the array. The array returned by cdbdisp_dispatchRMCommand() always has a NULL element as terminator, and free(NULL) is a no-op, which is why this didn't outright crash. But clearly the intention here was to free() the array itself; otherwise it's leaked.
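The shape of the bug, reduced to a standalone illustration; results stands in for the NULL-terminated array that cdbdisp_dispatchRMCommand() returns:

```c
#include <stdlib.h>

static void
free_result_array(void **results, int nresults)
{
	int			i;

	for (i = 0; i < nresults; i++)
		free(results[i]);

	/*
	 * free(results[nresults]);  -- the old code: frees the NULL terminator,
	 *                              i.e. free(NULL), a silent no-op
	 */
	free(results);				/* the fix: free the array itself */
}
```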
-
Committed by Daniel Gustafsson
This is a follow-up to commit b7365f58 which replaced the PostgreSQL bug report email with the Greenplum one.
-
Committed by Daniel Gustafsson
Removes unused CVS $Header$ tags, moves comments closer to where they make sense, and updates a few comments to match reality.
-
Committed by Daniel Gustafsson
Rather than storing the full 100 kB string in the outfile (which adds 200 kB for the header) and passing it to diff, compare the tuple with the expected value and store the boolean result in the outfile instead. This shaves 1.25 seconds off the test suite on my laptop, but the primary win is shrinking the size of the outfiles. Tests on Pulse show consistently lower diff times.
-
Committed by Daniel Gustafsson
Invoking gpdiff -version was broken since it relied on an old CVS $Revision$ tag in the source code being replaced with an actual value. Since this clearly isn't the most important part, for now I copied in the contents of VERSION, which seems like enough attention to spend on this.
-
Committed by Daniel Gustafsson
These variables are not used since the split into a program and a module.
-
Committed by Daniel Gustafsson
Rather than our own bespoke code, use the Getopt::Long core module for parsing the command line options. By specifying pass_through in the Getopt configuration we can preserve the options to pass down to the diff command while extracting the gpdiff-specific options. The variants that were previously allowed are added as aliases to the primary option names.
-
Committed by Daniel Gustafsson
The tempfile() interface in File::Temp is race-free and has been available as a core module since Perl 5.6.1 (released in April 2001), so switch to it to simplify the code and avoid excessive looping around a solved problem.
-
Committed by Daniel Gustafsson
Before freeing the CdbComponentDatabase struct we need to copy the hostname member, since we are passing that to the QEs. Should the memory be reclaimed before the command is serialized to be passed down, the hostname part will contain rubbish and either not work or crash the backend.
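A sketch of the fix with assumed type and function names (the real code uses GPDB's CdbComponentDatabases structures): copy the string out before the struct it lives in can be reclaimed:

```c
#include "postgres.h"

typedef struct SegmentInfo
{
	char	   *hostname;		/* points into the component-db cache */
} SegmentInfo;

extern void free_component_databases(void *dbs);	/* assumed name */

static char *
capture_hostname(SegmentInfo *segdb, void *dbs)
{
	/*
	 * pstrdup() into the current memory context: the copy outlives the
	 * cache, so serializing the command for the QEs never reads freed
	 * memory.
	 */
	char	   *hostname = pstrdup(segdb->hostname);

	free_component_databases(dbs);		/* the original string dies here */
	return hostname;					/* still valid for dispatch */
}
```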
-
Committed by Daniel Gustafsson
-
Committed by Daniel Gustafsson
The PartitionNode tree returned from RelationBuildPartitionDescByOid will be NULL in case the OID passed isn't present in pg_partition, so we must abort with an error to avoid segfaulting on a NULL pointer dereference. Also add a test for this in the partition suite. Reported by Github user @liruto.
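A sketch of the guard; the second argument and the error wording are assumptions:

```c
#include "postgres.h"

struct PartitionNode;			/* opaque here */

/* Returns NULL when relid has no row in pg_partition (signature assumed). */
extern struct PartitionNode *RelationBuildPartitionDescByOid(Oid relid,
															 bool inctemplate);

static struct PartitionNode *
get_partition_tree(Oid relid)
{
	struct PartitionNode *pn = RelationBuildPartitionDescByOid(relid, false);

	if (pn == NULL)
		ereport(ERROR,
				(errcode(ERRCODE_UNDEFINED_OBJECT),
				 errmsg("relation with OID %u is not partitioned", relid)));

	return pn;
}
```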
-
Also add a comment to the lc_numeric GUC noting not to remove GUC_GPDB_ADDOPT.
-
Committed by Daniel Gustafsson
We resolve the path for gpstringsubs.pl with find_other_exec(), so use the outcome of that rather than hardcoding it at invocation.
-
- May 12, 2016: 2 commits
-
Committed by Asim R P
Validation 1: look up the new tuple's heap TID in the unique index before insert. The tuple being inserted has already been inserted into the heap. Before its entry is added to a unique index, we check whether the index already has an entry with this heap TID. This should catch duplicate entries created in the index but not in the heap relation. The validation is enabled by the GUC "gp_indexcheck_insert".
Validation 2: the index should point to all visible tuples after vacuum. For each entry in the index after it was vacuumed, fetch the heap tuple and validate that it is visible. For specific tables, validate that the key is the same. The validation is controlled by the GUC "gp_indexcheck_vacuum". Closes #673.
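A hedged sketch of validation 1; index_contains_heap_tid() is an assumed helper, and the GUC is treated as a plain boolean here for brevity:

```c
#include "postgres.h"
#include "storage/itemptr.h"
#include "utils/rel.h"

extern bool gp_indexcheck_insert;	/* GUC named in the commit message */
extern bool index_contains_heap_tid(Relation irel, ItemPointer tid); /* assumed */

static void
check_unique_index_before_insert(Relation irel, ItemPointer heap_tid)
{
	if (!gp_indexcheck_insert)
		return;

	/*
	 * The heap tuple is already inserted; a pre-existing index entry with
	 * the same TID means the index and heap have diverged.
	 */
	if (index_contains_heap_tid(irel, heap_tid))
		elog(ERROR, "unique index \"%s\" already contains heap TID (%u,%u)",
			 RelationGetRelationName(irel),
			 ItemPointerGetBlockNumber(heap_tid),
			 ItemPointerGetOffsetNumber(heap_tid));
}
```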
-
Committed by Shreedhar Hardikar
These were initialized by constructors earlier. To pass any parameters for initializing GPOPT or any of its dependencies, we need to do that explicitly.
-
- May 11, 2016: 6 commits
-
Committed by Heikki Linnakangas
There were a lot of unused tables and functions, and chaff like comments that are not needed for the actual tests in the file. At first glance, some of the things seemed marginally useful to test in their own right, like loading data with non-ASCII characters in it, but all the setup stuff was in a large ignore-block, so any failures there would go unnoticed anyway. Removing unnecessary stuff is a virtue of its own, but this also speeds up the test nicely.
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
The stuff that's inherited from upstream stays in create_table, while the stuff that we've added in GPDB is split off to gp_create_table. Separating them makes merging and diffing with upstream easier.
-
Committed by Heikki Linnakangas
The test with stress_test() function (and accompanying tables) was created and executed once. Then it was dropped, and recreated, and then executed two times. Executing the same function twice might reveal bugs in plan caching, so I kept that (although TBH we have better coverage for that elsewhere). But I don't see the point of dropping and recreating it in between: surely it's good enough to just create the function once, and execute it twice. This reduces the runtime of qp_functions test by about 1/3 (from 3 minutes to 2 minutes on my laptop).
-
Committed by Daniel Gustafsson
The exponent in the pow calculation is an integer and can thus not be infinity; remove it from the logical OR in the check. Andreas Scherbaum and Atri Sharma
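An illustrative reduction of the change (the real call site is in the backend's float code): with an integer exponent, the isinf() half of the check can never fire, so it is dropped:

```c
#include <math.h>
#include <stdbool.h>

static bool
pow_base_is_infinite(double base, int exponent)
{
	(void) exponent;			/* an int can never hold infinity */

	/* was: return isinf(base) || isinf((double) exponent); */
	return isinf(base) != 0;
}
```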
-
Committed by Heikki Linnakangas
An earlier attempt at this checked AmIInSIGUSR1Handler() to see if we are currently processing a catchup event. But that's not good enough: we also process catchup interrupts outside the signal handler, in EnableCatchupInterrupt(). I saw lockups during "make installcheck-good" with a stack trace showing a backend waiting for a lock on a temporary relation, while trying to truncate it when committing the transaction opened for processing a catchup event. For reference, the commit message for the commit that introduced the AmIInSIGUSR1Handler check said: "Recent parallel installcheck-good revealed we have a chance to process a catchup interrupt while waiting for commit-prepare, and if the prepared transaction has created a temporary table with an on-commit option, the newly opened transaction for the sake of AcceptInvalidationMessages() cannot see it and fails before the commit-prepare. It's not even clear if we are safe to open and commit another transaction between prepare and commit-prepare, but for now just skip the on-commit operation as it doesn't have any effect anyway."
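A hedged sketch of the idea with assumed names: cover the whole span of catchup-event processing with an explicit flag, so the check works whether the event is handled inside the SIGUSR1 handler or from EnableCatchupInterrupt():

```c
#include "postgres.h"

static volatile bool in_process_catchup_event = false;

static void
process_catchup_event(void)
{
	in_process_catchup_event = true;
	/* ... start a transaction, AcceptInvalidationMessages(), commit ... */
	in_process_catchup_event = false;
}

/*
 * At the decision point (e.g. whether to run ON COMMIT actions on a temp
 * table), test the flag instead of AmIInSIGUSR1Handler().
 */
static bool
skip_on_commit_action(void)
{
	return in_process_catchup_event;
}
```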
-