- 05 Sep 2017, 1 commit
-
-
Committed by Ning Yu
* Simplify tuple serialization in Motion nodes.

There is a fast path for tuples that contain no toasted attributes, which writes the raw tuple almost as is. The slow path, however, is significantly more complicated, calling each attribute's binary send/receive functions (although there is a fast path for a few built-in datatypes). I don't see any need for calling I/O functions here: we can just write the raw Datum on the wire. If that works for tuples with no toasted attributes, it should work for all tuples, provided we detoast any toasted attributes first.

This makes the code a lot simpler, and also fixes a bug with data types that don't have binary send/receive routines. We used to call the regular (text) I/O functions in that case, but didn't handle the resulting cstring correctly.

Diagnosis and test case by Foyzur Rahman.

Signed-off-by: Haisheng Yuan <hyuan@pivotal.io>
Signed-off-by: Ning Yu <nyu@pivotal.io>
-
- 03 Sep 2017, 1 commit
-
-
Committed by Heikki Linnakangas
When extensions were introduced in PostgreSQL, a call to recordDependencyOnCurrentExtension() was added in the creation codepath of all the object types. However, external protocols are a GPDB-specific object type, and we missed making that change there. Fixes github issue #2942.
-
- 02 Sep 2017, 10 commits
-
-
Committed by Heikki Linnakangas
Backport to 5X_STABLE, in order to make backporting future fixes easier.
-
Committed by Bhuvnesh Chaudhary
For queries with an IN clause on top of a correlated subquery containing a LIMIT/OFFSET clause, the planner tries to determine in convert_IN_to_join() whether the IN clause can be converted to a join, and creates an RTE for the join if possible, without considering the LIMIT/OFFSET clause in that decision. Later, however, the is_simple_subquery check enforced in pull_up_subqueries() prevents subqueries containing LIMIT/OFFSET clauses from being pulled up. This inconsistency caused a plan to be generated with a Param but no corresponding subplan. The patch fixes the issue by adding the relevant checks in convert_IN_to_join() to detect when the subquery is correlated and contains a LIMIT/OFFSET clause; in such cases the sublink is not converted to a join, and a plan with a subplan is created instead.
-
Committed by Jemish Patel
Signed-off-by: Omer Arap <oarap@pivotal.io>
Signed-off-by: Dhanashree Kashid <dkashid@pivotal.io>
(cherry picked from commit 21b7f934)
-
Committed by Dhanashree Kashid
While gathering metadata about a partitioned relation, the relcache translator in GPDB constructs and sends partition constraints in a `CMDPartConstraintGPDB` object, which contains the following:

1. m_pdrgpulDefaultParts - list of partition levels that have default partitions
2. m_fUnbounded - indicates whether the constraint is unbounded
3. m_pdxln - the part constraint expression

With more partitioning levels, the part constraint expression can grow large, and we end up spending a significant amount of time translating this expression to DXL format. For instance, for the following query, the relcache translator spends `18895.444000 ms` fetching the metadata information on a debug build:

```sql
DROP TABLE IF EXISTS date_parts;
CREATE TABLE DATE_PARTS (id int, year int, month int, day int, region text)
DISTRIBUTED BY (id)
PARTITION BY RANGE (year)
    SUBPARTITION BY LIST (month)
        SUBPARTITION TEMPLATE (
            SUBPARTITION Q1 VALUES (1, 2, 3),
            SUBPARTITION Q2 VALUES (4, 5, 6),
            SUBPARTITION Q3 VALUES (7, 8, 9),
            SUBPARTITION Q4 VALUES (10, 11, 12),
            DEFAULT SUBPARTITION other_months )
    SUBPARTITION BY RANGE(day)
        SUBPARTITION TEMPLATE (
            START (1) END (31) EVERY (10),
            DEFAULT SUBPARTITION other_days)
( START (2002) END (2012) EVERY (1),
  DEFAULT PARTITION outlying_years );

INSERT INTO date_parts
SELECT i, EXTRACT(year FROM dt), EXTRACT(month FROM dt), EXTRACT(day FROM dt), NULL
FROM (SELECT i, '2002-01-01'::DATE + i * INTERVAL '1 day' day AS dt
      FROM GENERATE_SERIES(1, 3650) AS i) AS t;

EXPLAIN SELECT * FROM date_parts WHERE month BETWEEN 1 AND 4;
```

At the ORCA end, however, this part constraint expression is not used unless the relation has indexes built on it. This is evident from `CTranslatorDXLToExpr::PexprLogicalGet` (CTranslatorDXLToExpr.cpp, L565). Hence the relcache translator does not need to send the part constraint expression from GPDB, since:

A. It will not be consumed if there are no indexes.
B. The DXL translation is expensive.
This commit fixes the Relcache Translator to send the partition constraint expression to ORCA only in the cases listed below:

```
IsPartTable | Index | DefaultParts | ShouldSendPartConstraint
NO          | -     | -            | -
YES         | YES   | YES          | YES
YES         | NO    | NO           | NO
YES         | NO    | YES          | YES (but only default levels info)
```

After this fix, the metadata fetch and translation time is reduced to `87.828000 ms`. Changes are also needed in the ORCA parse handler, since the part constraint expression is now sent only in some cases.

[Ref #149769559]

Signed-off-by: Omer Arap <oarap@pivotal.io>
(cherry picked from commit 752e06f6)
-
Committed by Shivram Mani
-
Committed by Shivram Mani
* README updates for PXF setup instructions
* PXF README updates for PXF setup instructions
* test update
* README updates for PXF setup instructions
* PXF README updates for PXF setup instructions
* note on curl dependency
* minor update
-
Committed by Lav Jain
-
Committed by David Yozie
* updating security guide landing page and main sections to use consistent short descriptions
* making security title consistent with others
* updating best practices guide landing page and main sections to use consistent short descriptions
* updating utility guide main sections to use consistent short descriptions
-
Committed by Lisa Owen
-
Committed by Lisa Owen
* docs - resource mgmt restructure, resgroup initial content
* incorporate edits from david; some reorg
* add memory allotment diagram
* minor edit
* make resource group experimental note a warning
* incorporate edits from simon
* update base color of resource group graphic
* add missing ;
-
- 01 Sep 2017, 28 commits
-
-
Committed by Mel Kiyama
-
Committed by Daniel Gustafsson
This reverts commit 1596d323 since it broke cluster startup; it clearly needs more attention before being ready.
-
Committed by Daniel Gustafsson
This puts us on par with upstream PostgreSQL 9.1.20, released on 2016-02-11, for tzdata and the commit referenced below. The reason for not updating further is that there are tzdata format changes in 9.1.21; 2016a is the last release we can import without merging code changes as well.

commit 6887d72d06a2f36508d5be9cca316088d4c60b26
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date: Fri Feb 5 10:59:09 2016 -0500

    Update time zone data files to tzdata release 2016a.

    DST law changes in Cayman Islands, Metlakatla, Trans-Baikal Territory (Zabaykalsky Krai). Historical corrections for Pakistan.
-
Committed by Daniel Gustafsson
This bumps the copyright years to the appropriate years, after they had not been updated for some time. Also reformats existing code headers to match the upstream style to ensure consistency.
-
Committed by Daniel Gustafsson
The line numbers are prone to change (and ideally shouldn't be there at all, since we should use ereport instead), so scrub them from the test output with a matchsubs rule. The recent copyright notice changes broke this, since they added a line to the file.
-
Committed by Daniel Gustafsson
The missing errcode makes the ereport call include the line number of the invocation from the .c file, which not only isn't very useful but causes the tests to fail when code is added to or removed from the file.
-
Committed by Peifeng Qiu
The original gpload test suite is intended to be run locally, so the GPDB cluster and the tests must be on the same machine. For platforms that don't have the GPDB server available but do have the loader package, we need to run the tests remotely.
-
Committed by Heikki Linnakangas
* Use ereport() with a proper error code, rather than elog(), so that you don't get the source file name and line number in the message, and the serious-looking backtrace in the log.
* Remove the hint that advised "SET gp_enable_segment_copy_checking=off" when a row failed the check that it is being loaded to the correct segment. Ignoring the mismatch seems like a very bad idea: if your rows are in incorrect segments, all bets are off, and you will likely get incorrect query results when you query the table.
-
Committed by Pengzhou Tang
-
Committed by Pengzhou Tang
When an index is defined concurrently, DefineIndex() commits the current transaction and dispatches a command to the segments without resource group info. The QEs therefore get capability info with all zeroes configured, and groupReleaseMemQuota() used this empty capability info incorrectly, causing a floating-point exception. According to the code, it should use prevSharedInfo->caps instead.
-
Committed by Lav Jain
* Make PXF part of the gpdb binary
* Fix popd error
-
Committed by Mel Kiyama
* DOCS: SELECT - Add RECURSIVE keyword to WITH clause
* docs: recursive CTE - add experimental designation. Add GUC gp_recursive_cte_prototype
* docs: updated recursive CTE docs based on review comments
-
Committed by Marbin Tan
Signed-off-by: Nadeem Ghani <nghani@pivotal.io>
-
Committed by John Gaskin
-
Committed by Daniel Gustafsson
-1 is a valid value for errordata_stack_depth until errstart() has been called, which initializes the stack. In correct usage it is thus fine to use it as an array subscript without inspecting it, since demote and dismiss are useless outside an error context. Incorrect usage would, however, cause a memory error, so guard against that since it's the right thing to do.
-
Committed by Daniel Gustafsson
The memory allocated is immediately orphaned since the pointer is overwritten. Remove it to avoid the leak.
-
Committed by Daniel Gustafsson
strncpy() can render the buffer non-NUL-terminated when the input is MAXPGPATH characters long. This is not only incorrect code, it's a mismerge, since upstream has never used strncpy here but strlcpy. Fix by replacing with strlcpy calls, which also aligns us better with upstream.
-
Committed by Daniel Gustafsson
The num_cwords member was left blank, which caused issues for the freeing function, which operates based on its value.
-
Committed by Daniel Gustafsson
{f}stat() can fail, and reading the stat buffer without checking the return status is bad hygiene. Always test the return value and take the appropriate error path on stat error.
-
Committed by Daniel Gustafsson
If the value isn't a varlena, then dataLen and dataStart won't be set, but we may still come to inspect them later on. Avoid reading uninitialized variables by zeroing them in that case, and check for that before reading.
-
Committed by Daniel Gustafsson
The strncpy() will make path2 non-NUL-terminated in the (rare?) cases where filespaceLocation2 is MAXPGPATH characters long. Use strlcpy instead, like other places in the codepath do, and ensure we always have a NUL-terminated string.
-
Committed by Larry Hamel
As part of a previous commit, the WorkerPool of threads raises if the number of workers is set to 0. This prevents a coding error from resulting in no work where work was expected. This commit reaches through the code to protect against the numWorkers parameter being 0. In several cases, the number of workers is set to the number of segments or the number of databases. We do not protect those, with the expectation that something more significant is wrong if those counts are 0, and that an exception would be fitting in those cases.

Signed-off-by: Marbin Tan <mtan@pivotal.io>
-
Committed by Marbin Tan
Signed-off-by: Nadeem Ghani <nghani@pivotal.io>
-
Committed by mkiyama
-
Committed by Larry Hamel
Rework how environment variables are transmitted in local bash and remote ssh commands. If a bash command starts with a conditional, you get a syntax error. To fix this, we join the setting of the environment variables and the command itself with ampersands.

NOTE: We removed an ExecutionContext static variable that recorded environment variables. This feature was not used anywhere.

Signed-off-by: Marbin Tan <mtan@pivotal.io>
-
Committed by Shoaib Lari
Previously, RemoveDirectories and RemoveFiles used the unix command "rm -rf", which is inefficient for huge numbers of files. Also, these functions accepted any globbed path. Instead, use "rsync" to optimize deletion of files in a directory. On a DCA with 1 million files, this increased speed by about 3x.

This commit also breaks up the different use cases of deletion into separate methods, adding RemoveDirectoryContents(), RemoveFile(), and RemoveGlob() to help isolate the assumptions of each case and optimize for them.

Signed-off-by: Larry Hamel <lhamel@pivotal.io>
Signed-off-by: Shoaib Lari <slari@pivotal.io>
-
Committed by Larry Hamel
This utility is slated for removal in v6 and is generally unused. Fix obvious compile errors and basic logic for logging.

Signed-off-by: Shoaib Lari <slari@pivotal.io>
-
Committed by Shoaib Lari
Signed-off-by: Larry Hamel <lhamel@pivotal.io>
-