- 11 Aug 2016, 13 commits
-
-
Committed by Adam Lee
Signed-off-by: Adam Lee <ali@pivotal.io>
Signed-off-by: Haozhou Wang <hawang@pivotal.io>
-
Committed by Adam Lee
genUniqueKeyName() makes sure the file every segment uploads has a unique name.
Signed-off-by: Adam Lee <ali@pivotal.io>
Signed-off-by: Haozhou Wang <hawang@pivotal.io>
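The per-segment uniqueness rule can be sketched in Python. The real implementation lives in the gpcloud C++ extension; the function name comes from the commit, but the prefix/segment parameters and the random-suffix scheme below are assumptions for illustration:

```python
import os

def gen_unique_key_name(prefix, segment_id):
    """Build an upload key name that is unique per segment and per call.

    Embedding the segment id keeps concurrent uploads from different
    segments from colliding; the random hex suffix keeps repeated
    uploads from the same segment from colliding.
    """
    suffix = os.urandom(8).hex()
    return "%s.seg%d.%s" % (prefix, segment_id, suffix)
```

Two calls with the same prefix and segment id still yield distinct names, which is the property the commit message describes.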
-
Committed by Kuien Liu
Signed-off-by: Kuien Liu <kliu@pivotal.io>
-
Committed by Adam Lee
Signed-off-by: Adam Lee <ali@pivotal.io>
Signed-off-by: Haozhou Wang <hawang@pivotal.io>
-
Committed by Kuien Liu
1. Add PUT functions in the S3RESTfulService class.
2. Add dummyHTTPServer.py to handle PUT requests.
3. Add unit tests (those depending on dummyHTTPServer are disabled).
Signed-off-by: Peifeng Qiu <pqiu@pivotal.io>
-
Committed by Adam Lee
Only creates it; not functional yet.
Signed-off-by: Adam Lee <ali@pivotal.io>
Signed-off-by: Haozhou Wang <hawang@pivotal.io>
-
Committed by Haozhou Wang
1. Add URL checking messages for gpcheckcloud.
2. List all bucket contents with the '-c' option.
3. Add clear config checking messages for gpcheckcloud.
Signed-off-by: Haozhou Wang <hawang@pivotal.io>
Signed-off-by: Adam Lee <ali@pivotal.io>
-
Committed by Adam Lee
-
Committed by Adam Lee
-
Committed by Adam Lee
Use `openssl md5` instead of `md5sum` for compatibility. Define the array in a different way that makes it easier to update.
-
Committed by Marbin Tan
-
Committed by Marbin Tan
-
Committed by Jimmy Yih
TINC is an internal Pivotal test framework used for testing Greenplum. These regression tests are run regularly to validate internal and external commits. With this commit, nearly all Greenplum test code will be available for public use.
-
- 10 Aug 2016, 2 commits
-
-
Committed by zhaoanan
-
Committed by Marc Spehlmann
This fixes the naming in c85f858e. A new ORCA feature handles array constraints more efficiently. It includes a new preprocessing stage and a new internal representation of array constraints. The feature can be enabled via this GUC.
-
- 09 Aug 2016, 6 commits
-
-
Committed by Haisheng Yuan
Orca couldn't pick a plan that uses an index scan for the following cases:
select * from btree_tbl where a in (1,2); --> Orca generated a table scan instead of an index scan
select * from bitmap_tbl where a in (1,2); --> Orca generated a table scan instead of a bitmap scan
Orca failed to consider the ArrayComp case when trying to pick an index. This patch fixes the issue. Closes #993
-
Committed by Ashwin Agrawal
-
Committed by Ashwin Agrawal
High-level theory of the issue: if a checkpoint happens after the COMMIT is recorded to the clog, xactHashTable won't know the status of the xact during recovery, since no REDO records corresponding to it would be looked at. But it is incorrect to ABORT the xact based on its CREATE_PENDING entry in the persistent tables without consulting the CLOG; we should check the CLOG to verify whether it was COMMITTED. Without this check, recovery would try to mark a COMMITTED xact aborted upon seeing the Create-Pending entry associated with it, and double fault. We have no repro, as we couldn't find a scenario in which this can happen, but it was seen in the field, so it is better to add protection to avoid the double fault.
-
We just need to pass the query tree; we don't need the source SQL text and the other arguments, so switch to QueryRewrite here. Previously an implicit cast was causing a compiler warning. Also, pg_analyze_and_rewrite is overkill here and in fact calls QueryRewrite.
-
Committed by Marc Spehlmann
A new ORCA feature handles array constraints more efficiently. It includes a new preprocessing stage and a new internal representation of array constraints. The feature can be enabled via this GUC.
-
Committed by Chumki Roy
-
- 08 Aug 2016, 2 commits
-
-
Committed by Chumki Roy
During persistent-table rebuild, when the mirror is down and the mirror data directory has some missing files, the rebuild fails with the error "missing files from source". To mitigate this, skip the persistent rebuild for the mirror if it is down; there is no need to back up the files if the segment is already down anyway. Add unit tests and a behave test. Authors: Chumki Roy & Marbin Tan
-
Committed by Chumki Roy
-
- 06 Aug 2016, 2 commits
-
-
Committed by Gang Xiong
-
Committed by Haisheng Yuan
This patch changes gp_dump_query_oids to traverse the parsed query tree instead of the query tree struct, which has too many node types and corner cases to consider. Even though it is a little risky to traverse the parsed query tree string, we haven't seen any sign that PostgreSQL upstream is going to change the format. In addition, fix a minirepro Python script bug: when column stats had text-type most-common values containing a single quote, minirepro failed to escape the text, which caused an SQL grammar error and prevented those statistics from being inserted. Also update minirepro to handle error messages correctly, set PGUSER as the default user, and let the output file accept a relative path. The minirepro behave test is updated and passing. Closes #1024
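The quoting bug described above boils down to escaping single quotes in a string before interpolating it into SQL. A minimal Python sketch of the rule (the helper names and the array-literal format are assumptions for illustration, not minirepro's actual code):

```python
def quote_literal(value):
    """Escape a value for use as a SQL string literal.

    Single quotes are doubled per the SQL standard, so a most-common
    value such as O'Brien renders as 'O''Brien' instead of producing
    a grammar error mid-statement.
    """
    return "'" + value.replace("'", "''") + "'"

def mcv_array_literal(values):
    """Render a list of most-common values as a SQL ARRAY literal."""
    return "ARRAY[" + ", ".join(quote_literal(v) for v in values) + "]"
```

Without the `replace`, a value containing `'` would terminate the literal early, which matches the grammar-error symptom in the commit message.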
-
- 05 Aug 2016, 6 commits
-
-
Committed by Heikki Linnakangas
Unused leftover stuff.
-
Committed by Heikki Linnakangas
Now that tidycat.pl doesn't exist anymore, we don't need its test cases anymore either.
-
Committed by Heikki Linnakangas
A jump table like this to speed up a switch-case statement is a good idea. In fact, it's so good that a modern compiler will do the transformation for you :-). I checked the assembly generated by gcc 5.4.0 and clang 3.6.2, which I had readily available on my laptop, and they both produced a jump table for this with -O2. To reduce the diff vs. upstream, and to make this more readable, revert this to a switch-case statement, like it is in the upstream. Let's trust the compiler to optimize.
-
Committed by Heikki Linnakangas
System catalogs are now defined like in the upstream, without any special tidycat headers. Note: This doesn't change the way pg_proc_gp.h is generated from pg_proc.sql, by catullus.pl. This comes with a replacement for generating 4.3.json: src/backend/catalog/process_foreign_keys.pl. The foreign keys were the only thing that the json file was used for. AFAIK gpcheckcat is the only tool that reads that file, and it only paid attention to the foreign key information. While working on this, I noticed that a few tables were missing foreign key declarations. I added FIXME comments on them. Also, a few tables used to have "vector" references, e.g. pg_index.indclass was an oidvector, where each element of the array points to pg_class.oid. AFAICS, those declarations were actually not used for anything. I left those in place as comments, in case we want to add support in gpcheckcat and the new process_foreign_keys.pl tool for them, but for now they're just documentation. This also removes pablopcatso.pl. It read the json file (Ok, so there was one more user besides gpcheckcat for it), which is now gone. We could create a tool that reads the same information straight from the header files, or from a live database, but it was just a developer aid, and I don't think anyone's used it for quite a while, so I don't think we need a replacement.
-
Committed by foyzur
Extract column names from ShareInputScan in ORCA plans to support proper column name resolution using RTE_CTE (#992).
* Extract column names from ShareInputScan in ORCA plans to support proper column name resolution using RTE_CTE.
* Address code review on PR 992.
* PlannerGlobal has out-function support in the upstream, so removing it altogether doesn't seem like a good idea. I'm not sure if it gets printed out with suitable verbose or debug flags, but I remember seeing it printed during debugging, and it can be useful.
* Refactor the functions in cdbmutate.c so that there is a separate function for the DAG-to-tree conversion and a separate function for just collecting the producer nodes. It seems like a bad idea that a function called "apply_dag_to_tree" actually does something different depending on a flag in a struct.
* Now that we have a separate array of producers, there is no need to hold the colnames etc. lists in the ShareInputScan node itself. Since we can look up the producer node at will, we might as well look at the producer node's sub-tree directly every time we construct the CTE RTE.
* One complication from the previous change is that we can't call get_tle_name() in replace_shareinput_targetlists(), because that runs after the post-processing in setrefs.c, so all Vars have already been changed to use INNER/OUTER, and get_tle_name() doesn't work with those. On closer inspection, this was a bit fiddly in the ORCA case before too: in ORCA-generated plans, Vars always use the INNER/OUTER notation, so calling get_tle_name() on an ORCA-generated plan was always questionable. It happened to work because ORCA also seems to always fill in TargetEntry.resname, so get_tle_name() just picked that rather than looking up the range table entry. The new structure of the code avoids relying on that assumption.
* Refactor the code that creates the fake CTE RTE into a separate function; replace_shareinput_targetlists_walker() had grown quite complex.
* Use the producers array in setrefs.c.
* Get rid of the separate sharedNodes list. Now that we have an array of producers, conveniently indexed by share_id, just use that. Mostly for the sake of readability, although you might see a performance gain in corner cases involving a huge number of share input scans.
-
Committed by Omer Arap
-
- 04 Aug 2016, 5 commits
-
-
Committed by Alexey Grishchenko
The error was introduced by the PG 8.3 merge. In Postgres there is only one log, and its name pattern is set by the global variable Log_filename. But in GPDB there is a separate alert log used by gpperfmon; its file name is also generated by the logfile_getname() function, which has to use the passed-in pattern instead of the global variable, since the alert filename pattern is "gpdb-alert-..."
-
Committed by Shreedhar Hardikar
-
Committed by foyzur
* Fix a DXL translator bug where we lose canSetTag during Query object mutation and the translator ends up using the wrong canSetTag.
* Add an ICG test verifying that the ORCA translator uses the correct canSetTag.
-
Committed by Shreedhar Hardikar
We can avoid generating multiple versions of slot_getattr. Once we deform any of the attributes in a tuple, we make it a virtual tuple. At code generation time, we know exactly how many attributes need to be deformed and can in fact deform all the way. This way we don't need to worry about slot_getattr being called on a virtual tuple with attnum > nvalid, i.e. with deformation only partially complete. To enable this, we need to collect information from all the code generators that depend on SlotGetAttrCodegen before it is generated. We maintain a static map (keyed on the manager and the slot) of SlotGetAttrCodegen instances, and introduce an InitDependencies phase, before the GenerateCode phase, during which dependants of SlotGetAttrCodegen can retrieve instances from the static map.
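The two-phase scheme above (collect dependencies first, generate once) can be sketched as follows. The class and method names mirror the commit's description (SlotGetAttrCodegen, InitDependencies, GenerateCode), but the structure is an assumed simplification, not the actual C++ codegen code:

```python
class SlotGetAttrCodegen:
    # One shared instance per (manager, slot), so every dependant
    # contributes its required attribute count before code is emitted.
    _instances = {}

    def __init__(self):
        self.max_attnum = 0

    @classmethod
    def get_codegen_instance(cls, manager, slot):
        key = (id(manager), id(slot))
        if key not in cls._instances:
            cls._instances[key] = cls()
        return cls._instances[key]

    def request_attribute(self, attnum):
        # InitDependencies phase: each dependant registers how far it
        # needs the tuple deformed.
        self.max_attnum = max(self.max_attnum, attnum)

    def generate_code(self):
        # GenerateCode phase: deform all the way to the largest
        # requested attnum, so a partially deformed virtual tuple can
        # never be observed afterwards.
        return "deform_tuple(slot, nattrs=%d)" % self.max_attnum
```

Because all dependants share one instance per (manager, slot), only a single version of the deform routine is emitted, covering the maximum attribute any of them asked for.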
-
Committed by Shreedhar Hardikar
-
- 03 Aug 2016, 4 commits
-
-
Committed by Alexey Grishchenko
This reverts commit 41857ff3.
-
Committed by Gang Xiong
A regression test found a deadlock issue; the test is as follows:
BEGIN;
CREATE TABLE dtm_plpg_foo (C_CUSTKEY INTEGER, C_NAME VARCHAR(25), C_ADDRESS VARCHAR(40)) partition by range (c_custkey) (partition p1 start(0) end(100000) every(1000));
INSERT INTO dtm_plpg_foo SELECT * FROM dtm_plpg_foo LIMIT 10000;
COMMIT;
The CREATE statement leaked a ROW EXCLUSIVE lock on pg_class. If another session requests and waits on an ACCESS EXCLUSIVE lock before the INSERT statement runs, the INSERT cannot get its ACCESS SHARE lock: the entryDB reader gang waits on the lock held by the QD process, while the QD process waits for results from the primary reader gangs.
-
Committed by Alexey Grishchenko
This patch adds support for multi-dimensional arrays as both input and output parameters of PL/Python functions. The number of dimensions is limited by the Postgres MAXDIM macro, equal to 6 by default. Both input and output multi-dimensional arrays must have fixed dimension sizes, i.e. a 2-d array must represent an MxN matrix, a 3-d array an MxNxK cube, etc. The patch includes regression tests for both correct multi-dimensional array use cases and erroneous ones.
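The fixed-dimension requirement above can be illustrated with a small Python check. This is a sketch of the rule only, not PL/Python's actual C implementation; MAXDIM defaults to 6 as the commit notes:

```python
MAXDIM = 6  # Postgres limit on array dimensionality

def array_dims(value):
    """Return the dimension sizes of a nested list, or raise if the
    nesting is ragged or too deep (mirroring the rule that a 2-d
    array must be a full MxN matrix, a 3-d array MxNxK, etc.)."""
    dims = []
    while isinstance(value, list):
        dims.append(len(value))
        if len(dims) > MAXDIM:
            raise ValueError("number of array dimensions exceeds MAXDIM")
        # Every sibling must be the same kind and size as the first,
        # otherwise the dimension sizes are not fixed.
        if value and any(isinstance(v, list) != isinstance(value[0], list)
                         or (isinstance(v, list) and len(v) != len(value[0]))
                         for v in value):
            raise ValueError("multidimensional arrays must have fixed dimension sizes")
        value = value[0] if value else None
    return dims
```

A full 2x3 matrix yields dims [2, 3], while a ragged list such as [[1, 2], [3]] is rejected, matching the erroneous cases the regression tests cover.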
-
Committed by Heikki Linnakangas
There was some code left over from the 8.3 merge that prematurely reset ActiveSnapshot. Remove the extraneous code. Fixes GitHub issue #1001. Thank you @clmyyclm for the report!
-