- 25 January 2015 (1 commit)

Committed by Tom Lane
strncpy() has a well-deserved reputation for being unsafe, so make an effort to get rid of nearly all occurrences in HEAD.

A large fraction of the remaining uses were passing a length less than or equal to the known strlen() of the source, in which case no null-padding can occur and the behavior is equivalent to memcpy(), though doubtless slower and certainly harder to reason about. So just use memcpy() in these cases. In other cases, use either StrNCpy() or strlcpy() as appropriate (depending on whether padding to the full length of the destination buffer seems useful).

I left a few strncpy() calls alone in the src/timezone/ code, to keep it in sync with upstream (the IANA tzcode distribution). There are also a few such calls in ecpg that could possibly do with more analysis.

AFAICT, none of these changes are more than cosmetic, except for the four occurrences in fe-secure-openssl.c, which are in fact buggy: an overlength source leads to a non-null-terminated destination buffer and ensuing misbehavior. These don't seem like security issues, first because no stack clobber is possible and second because if your values of sslcert etc. are coming from untrusted sources then you've got problems way worse than this. Still, it's undesirable to have unpredictable behavior for overlength inputs, so back-patch those four changes to all active branches.
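To make the distinction concrete, here is a small standalone sketch of the hazard and the two replacement idioms (illustrative only, not PostgreSQL code; my_strlcpy stands in for strlcpy on platforms that lack it):

```c
#include <stdio.h>
#include <string.h>

/* Minimal stand-in for strlcpy, which not every libc provides
 * (PostgreSQL carries its own fallback in src/port). */
static size_t my_strlcpy(char *dst, const char *src, size_t siz)
{
    size_t len = strlen(src);

    if (siz > 0)
    {
        size_t n = (len >= siz) ? siz - 1 : len;

        memcpy(dst, src, n);
        dst[n] = '\0';          /* always NUL-terminated */
    }
    return len;                 /* >= siz means the copy was truncated */
}

int main(void)
{
    const char *src = "an overlength source string";
    char        buf[8];

    /* The hazard: when strlen(src) >= sizeof(buf), strncpy() writes no
     * terminator, so buf is not a valid C string afterwards. */
    strncpy(buf, src, sizeof(buf));

    /* Replacement 1: if the copy length is known to be <= strlen(src),
     * no NUL-padding could have occurred anyway, so memcpy() plus an
     * explicit terminator is equivalent and clearer. */
    memcpy(buf, src, 7);
    buf[7] = '\0';

    /* Replacement 2: strlcpy() truncates safely. */
    my_strlcpy(buf, src, sizeof(buf));
    printf("%s\n", buf);        /* prints "an over" */
    return 0;
}
```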
- 24 January 2015 (1 commit)

Committed by Robert Haas
Peter Geoghegan
- 23 January 2015 (2 commits)

Committed by Robert Haas
The split between which things need to happen in the C-locale case and which need to happen in the locale-aware case was a few bricks short of a load. Try to fix that.

Committed by Robert Haas
First, when LC_COLLATE = C, bttext_abbrev_convert should use memcpy() rather than strxfrm() to construct the abbreviated key, because the authoritative comparator uses memcmp(). If we do anything else here, we might get inconsistent answers, and the buildfarm says this risk is not theoretical. It should be faster this way, too.

Second, while I'm looking at bttext_abbrev_convert, convert a needless use of goto implementing a loop into an actual loop.

Both of the above problems date to the original commit of abbreviated keys, commit 4ea51cdf.

Third, fix a bogus assignment to tss->locale before tss is set up. That's a new goof in commit b529b65d.
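A hedged sketch of the invariant behind the first fix (toy code with invented names, not bttext_abbrev_convert itself): the transform that builds the abbreviated key must order strings exactly like the authoritative comparator, so a memcmp-based comparator demands a memcpy-built key.

```c
#include <stdio.h>
#include <string.h>

#define ABBREV_LEN 8            /* assumed key width for this sketch */

/* Build an abbreviated key consistent with the authoritative comparator:
 * memcpy for a memcmp-based "C locale" comparator, strxfrm when the
 * authoritative comparison is locale-aware. */
static void make_abbrev(char *dst, const char *src, int locale_is_c)
{
    memset(dst, 0, ABBREV_LEN);
    if (locale_is_c)
        memcpy(dst, src, strnlen(src, ABBREV_LEN));
    else
        strxfrm(dst, src, ABBREV_LEN);  /* locale-aware transform */
}

int main(void)
{
    char a[ABBREV_LEN], b[ABBREV_LEN];

    make_abbrev(a, "apple", 1);
    make_abbrev(b, "apricot", 1);
    /* Consistent: memcmp on the keys agrees with strcmp on the strings. */
    printf("%d %d\n", memcmp(a, b, ABBREV_LEN) < 0,
           strcmp("apple", "apricot") < 0);
    return 0;
}
```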
- 22 January 2015 (1 commit)

Committed by Robert Haas
Prior to commit 4ea51cdf, this function only had one job, which was to decide whether we could avoid trampolining through the fmgr layer when performing sort comparisons. As of that commit, it has a second job, which is to decide whether we can use abbreviated keys. Unfortunately, those two tasks are somewhat intertwined in the existing coding, which is likely why neither Peter Geoghegan nor I noticed prior to commit that this calls pg_newlocale_from_collation() in cases where it didn't previously. The buildfarm noticed, though.

To fix, rewrite the logic so that the decision as to which comparator to use is more cleanly separated from the decision about abbreviation.
- 21 January 2015 (1 commit)

Committed by Robert Haas
Most of the Windows buildfarm members (bowerbird, hamerkop, currawong, jacana, brolga) are unhappy with yesterday's abbreviated keys patch, although there are some (narwhal, frogmouth) that seem OK with it. Since there's no obvious pattern to explain why some are working and others are failing, just disable this across-the-board on Windows for now. This is a bit unfortunate since the optimization will be a big win in some cases, but we can't leave the buildfarm broken.
- 20 January 2015 (1 commit)

Committed by Robert Haas
This commit extends the SortSupport infrastructure to allow operator classes the option to provide abbreviated representations of Datums; in the case of text, we abbreviate by taking the first few characters of the strxfrm() blob. If the abbreviated comparison is insufficient to resolve the comparison, we fall back on the normal comparator. This can be much faster than the old way of doing sorting if the first few bytes of the string are usually sufficient to resolve the comparison.

There is the potential for a performance regression if all of the strings to be sorted are identical for the first 8+ characters and differ only in later positions; therefore, the SortSupport machinery now provides an infrastructure to abort the use of abbreviation if it appears that abbreviation is producing comparatively few distinct keys. HyperLogLog, a streaming cardinality estimator, is included in this commit and used to make that determination for text.

Peter Geoghegan, reviewed by me.
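The comparison strategy can be sketched in a few lines of standalone C (a toy model under assumed names, not the SortSupport code): abbreviated keys resolve most comparisons cheaply, and ties fall through to the authoritative comparator.

```c
#include <stdio.h>
#include <string.h>

#define ABBREV_LEN 8

/* Compare fixed-size prefix keys first; only a tie costs a full,
 * authoritative comparison. */
static int abbrev_compare(const char *a, const char *b)
{
    char ka[ABBREV_LEN] = {0};
    char kb[ABBREV_LEN] = {0};
    int  c;

    memcpy(ka, a, strnlen(a, ABBREV_LEN));
    memcpy(kb, b, strnlen(b, ABBREV_LEN));
    c = memcmp(ka, kb, ABBREV_LEN);
    if (c != 0)
        return c;               /* resolved by the abbreviation alone */
    return strcmp(a, b);        /* authoritative tie-breaker */
}

int main(void)
{
    /* Identical 8-byte prefixes force the fallback path. */
    printf("%d\n", abbrev_compare("abbreviated", "abbreviation") < 0);
    return 0;
}
```

This also shows where the regression risk comes from: inputs that tie on the prefix pay for both comparisons, which is what the HyperLogLog-based abort logic guards against.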
- 16 January 2015 (1 commit)

Committed by Tom Lane
As of 9.3, ruleutils.c goes to some lengths to ensure that table and column aliases used in its output are unique. Of course this takes more time than was required before, which in itself isn't fatal. However, EXPLAIN was set up so that recalculation of the unique aliases was repeated for each subexpression printed in a plan. That results in O(N^2) time and memory consumption for large plan trees, which did not happen in older branches.

Fortunately, the expensive work is the same across a whole plan tree, so there is no need to repeat it; we can do most of the initialization just once per query and re-use it for each subexpression. This buys back most (not all) of the performance loss since 9.2.

We need an extra ExplainState field to hold the precalculated deparse context. That's no problem in HEAD, but in the back branches, expanding sizeof(ExplainState) seems risky because third-party extensions might have local variables of that struct type. So, in 9.4 and 9.3, introduce an auxiliary struct to keep sizeof(ExplainState) the same. We should refactor the APIs to avoid such local variables in future, but that's material for a separate HEAD-only commit.

Per gripe from Alexey Bashtanov. Back-patch to 9.3 where the issue was introduced.
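The shape of the fix is ordinary hoisting; a minimal standalone sketch with hypothetical names (nothing like the EXPLAIN code itself):

```c
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for building the query-invariant deparse context. */
static int *build_context(int nrels)
{
    int *ctx = malloc(nrels * sizeof(int));

    for (int i = 0; i < nrels; i++)
        ctx[i] = i;             /* pretend this uniquifies aliases */
    return ctx;
}

/* Per-node output: reads the cached context instead of rebuilding it,
 * which is what turns O(N^2) work into roughly O(N). */
static void show_expression(const int *deparse_ctx, int node)
{
    printf("node %d uses ctx[0]=%d\n", node, deparse_ctx[0]);
}

int main(void)
{
    int *ctx = build_context(16);   /* once per query */

    for (int node = 0; node < 3; node++)
        show_expression(ctx, node); /* reused per subexpression */
    free(ctx);
    return 0;
}
```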
- 15 January 2015 (1 commit)

Committed by Andres Freund
To do so, move InitializeLatchSupport() into the new common process initialization functions, and add a new global variable MyLatch. MyLatch is usable as soon as InitPostmasterChild() has been called (i.e. very early during startup). Initially it points to a process-local latch that exists in all processes. InitProcess/InitAuxiliaryProcess then replaces that local latch with PGPROC->procLatch. During shutdown the reverse happens.

This is primarily advantageous for two reasons: For one it simplifies dealing with the shared process latch, especially in signal handlers, because instead of having to check for MyProc, MyLatch can be used unconditionally. For another, a later patch that makes FE/BE communication use latches can now rely on the existence of a latch, even before having gone through InitProcess.

Discussion: 20140927191243.GD5423@alap3.anarazel.de
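A hedged sketch of the signal-handler simplification this enables (PostgreSQL-internals style; header placement assumed from this era's tree):

```c
#include "postgres.h"
#include "miscadmin.h"          /* declares MyLatch */
#include "storage/latch.h"

/* Before this commit, handler code had to test MyProc before touching
 * its procLatch; MyLatch is valid from InitPostmasterChild() onward,
 * so it can be set unconditionally. */
static void
my_sigusr1_handler(SIGNAL_ARGS)
{
    int         save_errno = errno;

    SetLatch(MyLatch);
    errno = save_errno;
}
```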
- 7 January 2015 (2 commits)

Committed by Peter Eisentraut
Previously, the xml value resulting from an xpath query would not have namespace declarations if the namespace declarations were attached to an ancestor element in the input xml value. That means the output value was not correct XML. Fix that by running the result value through xmlCopyNode(), which produces the correct namespace declarations.

Author: Ali Akbar <the.apaan@gmail.com>
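A small standalone libxml2 program illustrating the idea (my own example document and namespace, not the PostgreSQL test case): the copied node carries the namespace declaration it inherited from its ancestor, so it serializes as well-formed XML on its own.

```c
#include <stdio.h>
#include <string.h>
#include <libxml/parser.h>
#include <libxml/xpath.h>
#include <libxml/xpathInternals.h>

int main(void)
{
    const char *xml =
        "<root xmlns:n=\"http://example.com/n\"><n:item>x</n:item></root>";
    xmlDocPtr   doc = xmlParseMemory(xml, (int) strlen(xml));
    xmlXPathContextPtr ctx = xmlXPathNewContext(doc);

    xmlXPathRegisterNs(ctx, BAD_CAST "n", BAD_CAST "http://example.com/n");
    xmlXPathObjectPtr res = xmlXPathEvalExpression(BAD_CAST "//n:item", ctx);
    xmlNodePtr  node = res->nodesetval->nodeTab[0];

    /* Copy recursively (second argument = 1): libxml2 reconciles the
     * namespace, adding the xmlns:n declaration to the copy itself. */
    xmlNodePtr  copy = xmlCopyNode(node, 1);

    xmlBufferPtr buf = xmlBufferCreate();
    xmlNodeDump(buf, doc, copy, 0, 0);
    printf("%s\n", (const char *) xmlBufferContent(buf));

    xmlBufferFree(buf);
    xmlFreeNode(copy);
    xmlXPathFreeObject(res);
    xmlXPathFreeContext(ctx);
    xmlFreeDoc(doc);
    return 0;
}
```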

Committed by Bruce Momjian
Backpatch certain files through 9.0
- 31 December 2014 (1 commit)

Committed by Alvaro Herrera
This function returns object type and objname/objargs arrays, which can be passed to pg_get_object_address. This is especially useful because the textual representation can be copied to a remote server in order to obtain the corresponding OID-based address. In essence, this function is the inverse of the recently added pg_get_object_address().

Catalog version bumped due to the addition of the new function.

Also add docs to pg_get_object_address.
- 26 December 2014 (1 commit)

- 25 December 2014 (1 commit)

Committed by Fujii Masao
Exposing the compression and decompression APIs of pglz makes it possible for extensions and contrib modules to use them. pglz_decompress contained a call to elog to emit an error message in case of corrupted data; this function is changed to return a status code so that its callers can report the error instead.

This commit is required for the upcoming WAL compression feature, so that the WAL reader facility can decompress WAL data using pglz_decompress.

Michael Paquier
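A hedged usage sketch of the exposed API (based on common/pg_lzcompress.h from this era; exact signatures may differ by branch):

```c
#include "postgres.h"
#include "common/pg_lzcompress.h"

/* Round-trip a buffer through the exposed pglz API. */
static bool
pglz_roundtrip(const char *raw, int32 rawlen)
{
    char       *comp = palloc(PGLZ_MAX_OUTPUT(rawlen));
    char       *back = palloc(rawlen);
    int32       clen;

    clen = pglz_compress(raw, rawlen, comp, PGLZ_strategy_default);
    if (clen < 0)
        return false;           /* not compressible enough; store raw */

    /* The commit's point: corruption now surfaces as a status code, so
     * the caller (e.g. the WAL reader) decides how to report it. */
    if (pglz_decompress(comp, clen, back, rawlen) < 0)
        ereport(ERROR, (errmsg("compressed pglz data is corrupt")));

    return memcmp(raw, back, rawlen) == 0;
}
```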
- 24 December 2014 (1 commit)

Committed by Alvaro Herrera
This reverts commit 1826987a. The overall design was deemed unacceptable, in discussion following the previous commit message; we might find some parts of it still salvageable, but I don't want to be on the hook for fixing it, so let's wait until we have a new patch.
- 23 December 2014 (1 commit)

Committed by Alvaro Herrera
The previous representation, using a boolean column for each attribute, would not scale well as further attributes are added. Extra auxiliary functions are added to go along with this change, to make up for the lost convenience of access of the old representation.

Catalog version bumped due to change in catalogs and the new functions.

Author: Adam Brightwell, minor tweaks by Álvaro
Reviewed by: Stephen Frost, Andres Freund, Álvaro Herrera
- 19 December 2014 (1 commit)

Committed by Tom Lane
Previously, if you wanted anything besides C-string hash keys, you had to specify a custom hashing function to hash_create(). Nearly all such callers were specifying tag_hash or oid_hash, which is tedious, and rather error-prone, since a caller could easily miss the opportunity to optimize by using hash_uint32 when appropriate. Replace this with a design whereby callers using simple binary-data keys just specify HASH_BLOBS and don't need to mess with specific support functions. hash_create() itself will take care of optimizing when the key size is four bytes.

This nets out saving a few hundred bytes of code space, and offers a measurable performance improvement in tidbitmap.c (which was not exploiting the opportunity to use hash_uint32 for its 4-byte keys). There might be some wins elsewhere too, I didn't analyze closely.

In future we could look into offering a similar optimized hashing function for 8-byte keys. Under this design that could be done in a centralized and machine-independent fashion, whereas getting it right for keys of platform-dependent sizes would've been notationally painful before.

For the moment, the old way still works fine, so as not to break source code compatibility for loadable modules. Eventually we might want to remove tag_hash and friends from the exported API altogether, since there's no real need for them to be explicitly referenced from outside dynahash.c.

Teodor Sigaev and Tom Lane
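A hedged sketch of the new calling convention (real dynahash API; the table and entry type are hypothetical):

```c
#include "postgres.h"
#include "utils/hsearch.h"

typedef struct MyEntry
{
    Oid         key;            /* hash key must come first */
    int         count;
} MyEntry;

static HTAB *
make_oid_table(void)
{
    HASHCTL     ctl;

    MemSet(&ctl, 0, sizeof(ctl));
    ctl.keysize = sizeof(Oid);
    ctl.entrysize = sizeof(MyEntry);

    /*
     * Before this commit: ctl.hash = oid_hash, plus HASH_FUNCTION in the
     * flags. Now HASH_BLOBS says "binary key", and hash_create() picks
     * the optimized hash_uint32 itself for 4-byte keys.
     */
    return hash_create("my oid table", 256, &ctl,
                       HASH_ELEM | HASH_BLOBS);
}
```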
- 18 December 2014 (1 commit)

Committed by Fujii Masao
In generate_series_step_numeric(), the variables "start_num" and "stop_num" may be freed before the next call, so they must be kept in a location that survives across calls. Previously they were not, which could cause incorrect behavior of generate_series(numeric, numeric).

This commit fixes the problem by copying them into multi_call_memory_ctx.

Andrew Gierth
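The fix follows the standard set-returning-function pattern; a hedged sketch with a hypothetical integer series (the real code copies its numeric state the same way):

```c
#include "postgres.h"
#include "funcapi.h"

PG_FUNCTION_INFO_V1(my_series);

typedef struct SeriesState
{
    int64       current;
    int64       stop;
} SeriesState;

Datum
my_series(PG_FUNCTION_ARGS)
{
    FuncCallContext *funcctx;
    SeriesState *state;

    if (SRF_IS_FIRSTCALL())
    {
        MemoryContext oldctx;

        funcctx = SRF_FIRSTCALL_INIT();
        /* Anything needed on later calls must live in
         * multi_call_memory_ctx; per-call memory is reset in between. */
        oldctx = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
        state = palloc(sizeof(SeriesState));
        state->current = PG_GETARG_INT64(0);
        state->stop = PG_GETARG_INT64(1);
        funcctx->user_fctx = state;
        MemoryContextSwitchTo(oldctx);
    }

    funcctx = SRF_PERCALL_SETUP();
    state = (SeriesState *) funcctx->user_fctx;

    if (state->current <= state->stop)
        SRF_RETURN_NEXT(funcctx, Int64GetDatum(state->current++));
    SRF_RETURN_DONE(funcctx);
}
```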
- 16 December 2014 (2 commits)

Committed by Andrew Dunstan
Mostly these issues concern the non-use of function results. These have been changed to use (void) pushJsonbValue(...) instead of assigning the result to a variable that gets overwritten before it is used.

There is a larger issue: we should possibly examine the API for pushJsonbValue(), so that instead of returning a value it modifies a state argument. The current idiom is rather clumsy. However, changing that requires quite a bit more work, so this change should do for the moment.

Committed by Tom Lane
"PG_RETURN_FLOAT8(x)" is not "return x", except perhaps by accident on some platforms.
- 15 December 2014 (1 commit)

Committed by Heikki Linnakangas
Alexander Korotkov, reviewed by Emre Hasegeli.
- 14 December 2014 (1 commit)

Committed by Tom Lane
The code for advancing through the input rows overlooked the case that we might already be past the first row of the row pair now being considered, in case the previous percentile also fell between the same two input rows. Report and patch by Andrew Gierth; logic rewritten a bit for clarity by me.
- 13 December 2014 (1 commit)

Committed by Andrew Dunstan
The functions are:

to_jsonb()
jsonb_object()
jsonb_build_object()
jsonb_build_array()
jsonb_agg()
jsonb_object_agg()

Also along the way some better logic is implemented in json_categorize_type() to match that in the newly implemented jsonb_categorize_type().

Andrew Dunstan, reviewed by Pavel Stehule and Alvaro Herrera.
- 12 December 2014 (1 commit)

Committed by Andrew Dunstan
The functions remove object fields, including in nested objects, that have null as a value. In certain cases this can lead to considerably smaller datums, with no loss of semantic information.

Andrew Dunstan, reviewed by Pavel Stehule.
- 11 December 2014 (1 commit)

Committed by Tom Lane
The amount of space to reserve for the value's varlena header is VARHDRSZ, not sizeof(VARHDRSZ). The latter coding accidentally failed to fail because of the way the VARHDRSZ macro is currently defined; but if we ever change it to return size_t (as one might reasonably expect it to do), convertToJsonb() would have failed. Spotted by Mark Dilger.
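The one-token bug class, in a hedged miniature (the allocation helper is hypothetical):

```c
#include "postgres.h"

/* VARHDRSZ is already a byte count, defined as ((int32) sizeof(int32)),
 * so sizeof(VARHDRSZ) means "sizeof an int32 expression": it equals 4
 * only by accident, and would silently change if the macro were ever
 * redefined to have type size_t. */
static Size
jsonb_alloc_size(Size datalen)
{
    return VARHDRSZ + datalen;  /* right: reserve header plus data */
    /* return sizeof(VARHDRSZ) + datalen;   fragile: correct only while
     * the macro's type happens to be 4 bytes wide */
}
```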
- 3 December 2014 (2 commits)

Committed by Tom Lane
Make the error messages issued by array_in() uniformly follow the style

    ERROR: malformed array literal: "actual input string"
    DETAIL: specific complaint here

and rewrite many of the specific complaints to be clearer.

The immediate motivation for doing this is a complaint from Josh Berkus that json_to_record() produced an unintelligible error message when dealing with an array item, because it tries to feed the JSON-format array value to array_in(). Really it ought to be smart enough to perform JSON-to-Postgres array conversion, but that's a future feature not a bug fix. In the meantime, this change is something we agreed we could back-patch into 9.4, and it should help de-confuse things a bit.

Committed by Tom Lane
Davide S. reported that json_agg() sometimes produced multiple trailing right brackets. This turns out to be because json_agg_finalfn() attaches the final right bracket, and was doing so by modifying the aggregate state in-place. That's verboten, though unfortunately it seems there's no way for nodeAgg.c to check for such mistakes.

Fix that, back-patching to 9.3 where the broken code was introduced. In 9.4 and HEAD, likewise fix json_object_agg(), which had copied the erroneous logic. Make some cosmetic cleanups as well.
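The rule the fix enforces can be sketched as follows (a hypothetical aggregate, not the repaired json_agg code): finish the value in a fresh buffer and leave the transition state untouched.

```c
#include "postgres.h"
#include "fmgr.h"
#include "lib/stringinfo.h"
#include "utils/builtins.h"

PG_FUNCTION_INFO_V1(my_agg_finalfn);

/* A final function may read the transition state but must not modify
 * it, because the state can be reused for further aggregation; mutating
 * it in place is what produced the duplicated "]" described above. */
Datum
my_agg_finalfn(PG_FUNCTION_ARGS)
{
    StringInfo  state;
    StringInfoData buf;

    if (PG_ARGISNULL(0))
        PG_RETURN_NULL();
    state = (StringInfo) PG_GETARG_POINTER(0);

    /* Build "<state>]" without touching the state itself. */
    initStringInfo(&buf);
    appendBinaryStringInfo(&buf, state->data, state->len);
    appendStringInfoChar(&buf, ']');
    PG_RETURN_TEXT_P(cstring_to_text_with_len(buf.data, buf.len));
}
```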
- 2 December 2014 (2 commits)

Committed by Tom Lane
We were not checking to see if the supplied dscale was valid for the given digit array when receiving binary-format numeric values. While dscale can validly be more than the number of nonzero fractional digits, it shouldn't be less; that case causes fractional digits to be hidden on display even though they're there and participate in arithmetic.

Bug #12053 from Tommaso Sala indicates that there's at least one broken client library out there that sometimes supplies an incorrect dscale value, leading to strange behavior. This suggests that simply throwing an error might not be the best response; it would lead to failures in applications that might seem to be working fine today. What seems the least risky fix is to truncate away any digits that would be hidden by dscale. This preserves the existing behavior in terms of what will be printed for the transmitted value, while preventing subsequent arithmetic from producing results inconsistent with that.

In passing, throw a specific error for the case of dscale being outside the range that will fit into a numeric's header. Before you got "value overflows numeric format", which is a bit misleading.

Back-patch to all supported branches.

Committed by Andrew Dunstan
We expose a function IsValidJsonNumber that internally calls the lexer for json numbers. That allows us to use the same test everywhere, instead of inventing a broken test for hstore conversions. The new function is also used in datum_to_json, replacing the code that was moved into the new function.

Backpatch to 9.3 where hstore_to_json_loose was introduced.
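A hedged usage sketch (the helper is invented; IsValidJsonNumber and escape_json are the real functions from this era's utils/json.h):

```c
#include "postgres.h"
#include "lib/stringinfo.h"
#include "utils/json.h"

/* Decide whether a raw string can be emitted as a bare JSON number, the
 * way hstore_to_json_loose now does, instead of hand-rolled checks. */
static void
append_value(StringInfo out, const char *s)
{
    if (IsValidJsonNumber(s, strlen(s)))
        appendStringInfoString(out, s);     /* valid number: no quotes */
    else
        escape_json(out, s);                /* quote and escape */
}
```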
- 26 November 2014 (1 commit)

Committed by Tom Lane
These cases formerly failed with errors about "could not find array type for data type". Now they yield arrays of the same element type and one higher dimension.

The implementation involves creating functions with an API similar to the existing accumArrayResult() family. I (tgl) also extended the base family by adding an initArrayResult() function, which allows callers to avoid special-casing the zero-inputs case if they just want an empty array as result. (Not all do, so the previous calling convention remains valid.) This allowed simplifying some existing code in xml.c and plperl.c.

Ali Akbar, reviewed by Pavel Stehule, significantly modified by me
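A hedged sketch of the extended family (real function names; the wrapper is hypothetical):

```c
#include "postgres.h"
#include "catalog/pg_type.h"
#include "utils/array.h"

/* Accumulate n int4 datums into an array datum. */
static Datum
collect_int4s(Datum *values, bool *nulls, int n)
{
    ArrayBuildState *astate;
    int         i;

    /* initArrayResult() (added here) sets up an empty build state... */
    astate = initArrayResult(INT4OID, CurrentMemoryContext, false);
    for (i = 0; i < n; i++)
        astate = accumArrayResult(astate, values[i], nulls[i],
                                  INT4OID, CurrentMemoryContext);
    /* ...so n == 0 now yields an empty integer array without any
     * special case in the caller. */
    return makeArrayResult(astate, CurrentMemoryContext);
}
```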
- 25 November 2014 (1 commit)

Committed by Robert Haas
This is further infrastructure for parallelism.

Amit Khandekar, Noah Misch, Robert Haas
- 22 November 2014 (1 commit)

Committed by Tom Lane
Make it work more like FDW plans do: instead of assuming that there are expressions in a CustomScan plan node that the core code doesn't know about, insist that all subexpressions that need planner attention be in a "custom_exprs" list in the Plan representation. (Of course, the custom plugin can break the list apart again at executor initialization.) This lets us revert the parts of the patch that exposed setrefs.c and subselect.c processing to the outside world.

Also revert the GetSpecialCustomVar stuff in ruleutils.c; that concept may work in future, but it's far from fully baked right now.
- 14 November 2014 (1 commit)

Committed by Robert Haas
The hope is that we can use this to produce better diagnostics in some cases.

Peter Geoghegan, reviewed by Michael Paquier, with some further changes by me.
- 11 November 2014 (1 commit)

Committed by Fujii Masao
Платон Малюгин

Reviewed by Michael Paquier, Ali Akbar and Marti Raudsepp
- 8 November 2014 (2 commits)

Committed by Robert Haas
This allows extension modules to define their own methods for scanning a relation, and get the core code to use them. It's unclear as yet how much use this capability will find, but we won't find out if we never commit it.

KaiGai Kohei, reviewed at various times and in various levels of detail by Shigeru Hanada, Tom Lane, Andres Freund, Álvaro Herrera, and myself.

Committed by Alvaro Herrera
BRIN is a new index access method intended to accelerate scans of very large tables, without the maintenance overhead of btrees or other traditional indexes. They work by maintaining "summary" data about block ranges. Bitmap index scans work by reading each summary tuple and comparing them with the query quals; all pages in the range are returned in a lossy TID bitmap if the quals are consistent with the values in the summary tuple, otherwise not. Normal index scans are not supported because these indexes do not store TIDs.

As new tuples are added into the index, the summary information is updated (if the block range in which the tuple is added is already summarized) or not; in the latter case, a subsequent pass of VACUUM or the brin_summarize_new_values() function will create the summary information.

For data types with natural 1-D sort orders, the summary info consists of the maximum and the minimum values of each indexed column within each page range. This type of operator class we call "Minmax", and we supply a bunch of them for most data types with B-tree opclasses. Since the BRIN code is generalized, other approaches are possible for things such as arrays, geometric types, ranges, etc; even for things such as enum types we could do something different than minmax with better results. In this commit I only include minmax.

Catalog version bumped due to new builtin catalog entries.

There's more that could be done here, but this is a good step forwards.

Loosely based on ideas from Simon Riggs; code mostly by Álvaro Herrera, with contribution by Heikki Linnakangas.

Patch reviewed by: Amit Kapila, Heikki Linnakangas, Robert Haas.
Testing help from Jeff Janes, Erik Rijkers, Emanuel Calvo.

PS: The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 318633.
- 7 November 2014 (1 commit)

Committed by Tom Lane
The default JSONB GIN opclass (jsonb_ops) converts numeric data values to strings for storage in the index. It must ensure that numeric values that would compare equal (such as 12 and 12.00) produce identical strings, else index searches would have behavior different from regular JSONB comparisons. Unfortunately the function charged with doing this was completely wrong: it could reduce distinct numeric values to the same string, or reduce equivalent numeric values to different strings. The former type of error would only lead to search inefficiency, but the latter type of error would cause index entries that should be found by a search to not be found.

Repairing this bug therefore means that it will be necessary for 9.4 beta testers to reindex GIN jsonb_ops indexes, if they care about getting correct results from index searches involving numeric data values within the comparison JSONB object.

Per report from Thomas Fanghaenel.
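The invariant is easy to state in toy form (standalone C over decimal strings, nothing like the real numeric code): values that compare equal must map to byte-identical keys, and distinct values must not collide.

```c
#include <stdio.h>
#include <string.h>

/* Toy canonicalization: strip trailing fractional zeros so that equal
 * decimal values ("12", "12.00") yield identical key strings. */
static void canonicalize(char *s)
{
    char *end;

    if (strchr(s, '.') == NULL)
        return;
    end = s + strlen(s) - 1;
    while (*end == '0')
        *end-- = '\0';
    if (*end == '.')
        *end = '\0';
    /* The real requirement is two-sided: a transform that also mapped
     * distinct values (say 1.2 and 12) to one key would make index
     * searches miss entries, the worse failure described above. */
}

int main(void)
{
    char a[16] = "12";
    char b[16] = "12.00";

    canonicalize(a);
    canonicalize(b);
    printf("%s %s -> %s\n", a, b,
           strcmp(a, b) == 0 ? "identical keys" : "BROKEN");
    return 0;
}
```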
- 6 November 2014 (1 commit)

Committed by Heikki Linnakangas
xlog.c is huge; this makes it a little bit smaller, which is nice. Functions related to putting together the WAL record are in xloginsert.c, and the lower-level stuff for managing WAL buffers and such is in xlog.c. Also move the definition of XLogRecord to a separate header file. This causes churn in the #includes of all the files that write WAL records and of the redo routines, but it avoids pulling xlog.h into most places.

Reviewed by Michael Paquier, Alvaro Herrera, Andres Freund and Amit Kapila.
- 4 November 2014 (1 commit)

Committed by Heikki Linnakangas
The old algorithm was found to not be the usual CRC-32 algorithm, used by Ethernet et al. We were using a non-reflected lookup table with code meant for a reflected lookup table. That's a strange combination that AFAICS does not correspond to any bit-wise CRC calculation, which makes it difficult to reason about its properties. Although it has worked well in practice, it seems safer to use a well-known algorithm.

Since we're changing the algorithm anyway, we might as well choose a different polynomial. The Castagnoli polynomial has better error-correcting properties than the traditional CRC-32 polynomial, even if we had implemented it correctly. Another reason for picking that is that some new CPUs have hardware support for calculating CRC-32C, but not CRC-32, let alone our strange variant of it. This patch doesn't add any support for such hardware, but a future patch could now do that.

The old algorithm is kept around for tsquery and pg_trgm, which use the values in indexes that need to remain compatible so that pg_upgrade works. While we're at it, share the old lookup table for CRC-32 calculation between hstore, ltree and core. They all use the same table, so might as well.
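For reference, here is a minimal bit-at-a-time CRC-32C (reflected Castagnoli polynomial); PostgreSQL's real implementation is table-driven, but this reference form shows what "a well-known algorithm" buys: the result can be checked against published test vectors.

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Reference CRC-32C: reflected algorithm, Castagnoli polynomial
 * (0x1EDC6F41, reversed form 0x82F63B78). */
static uint32_t crc32c(const void *data, size_t len)
{
    const uint8_t *p = data;
    uint32_t    crc = 0xFFFFFFFF;

    while (len--)
    {
        crc ^= *p++;
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0x82F63B78 & -(crc & 1));
    }
    return ~crc;
}

int main(void)
{
    /* Standard check value: CRC-32C("123456789") == 0xe3069283. */
    printf("%08x\n", crc32c("123456789", strlen("123456789")));
    return 0;
}
```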
- 1 November 2014 (1 commit)

Committed by Robert Haas
A background worker can use pq_redirect_to_shm_mq() to direct protocol messages that would normally be sent to the frontend into a shm_mq, so that another process may read them. The receiving process may use pq_parse_errornotice() to parse an ErrorResponse or NoticeResponse from the background worker and, if it wishes, ThrowErrorData() to propagate the error (with or without further modification).

Patch by me. Review by Andres Freund.
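A hedged sketch of the flow (fragment form; the exact signatures of these entry points changed across branches, so treat this as pseudocode over the real function names):

```c
/* In the background worker, after attaching to the queue: */
pq_redirect_to_shm_mq(mq, mqh);         /* ErrorResponse/NoticeResponse
                                         * messages now go to the shm_mq */
ereport(ERROR, (errmsg("oops")));       /* travels through the queue */

/* In the process reading the queue, for a received 'E' or 'N' message: */
ErrorData   edata;

pq_parse_errornotice(&msg, &edata);     /* decode the protocol message */
ThrowErrorData(&edata);                 /* re-raise locally, possibly
                                         * after editing edata first */
```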