- 30 Oct 2017, 4 commits
-
-
Committed by Adam Lee
Signed-off-by: Adam Lee <ali@pivotal.io>
-
Committed by Adam Lee
commit b0328d5631088cca5f80acc8dd85b859f062ebb0
Author: mcdevc <a@b>
Date: Fri Mar 6 16:28:45 2009 -0800
    Separate our internal libpq front end from the client libpq library. Upgrade libpq to the latest to pick up bug fixes and support for more client authentication types (GSSAPI, KRB5, etc). Upgrade all files dependent on libpq to handle the new version.
The above is the initial commit of gp_libpq_fe; there seems to be no good reason to keep it. Key things this PR does:
1. Remove the gp_libpq_fe directory.
2. Build the libpq sources into two versions, frontend and backend, selected by checking the FRONTEND macro.
3. libpq for the backend still bypasses local authentication, SSL, and some environment variables; these are the only remaining differences.
Signed-off-by: Adam Lee <ali@pivotal.io>
-
Committed by Pengzhou Tang
-
Committed by Heikki Linnakangas
In commit 226e8867, I changed the shape of the result set passed to the processMissingDuplicateEntryResult() function, removing the "exists" column. But I failed to update the line that extracts the primary key columns from the result set for that change. Fixed. This should fix the failures in the gpcheckcat behave tests.
-
- 29 Oct 2017, 3 commits
-
-
Committed by Heikki Linnakangas
The test had become useless somewhere along the years. The bug was that if ORCA fell back to the planner, then the check that you cannot update a distribution key column with the planner would not be made, and you could end up with incorrectly distributed rows. The test used a multi-level partitioned table as the target, because when the test was originally written, multi-level partitioning was not supported by ORCA. But at some point, support for that was added, so the test no longer tested the original bug it was written for. Rewrite the test using a different feature that ORCA falls back on, add comments to make it more clear what this is supposed to test so that it won't be broken so easily again. And finally, move the test out of TINC, into the main regression suite, which is what I was doing when I realized that it was broken altogether.
-
Committed by Taylor Vesely
The ephemeral port range is given by the net.ipv4.ip_local_port_range kernel parameter, which defaults to 32768-60999. If GPDB uses port numbers in this range, an FTS probe request may not get a response, resulting in FTS incorrectly marking a primary down. Change the example configuration files to lower the port numbers to a proper range below the ephemeral range. Signed-off-by: Asim R P <apraveen@pivotal.io>
-
Committed by Heikki Linnakangas
Since commit 4a95afc1, a serializable transaction no longer establishes its snapshot at the SET TRANSACTION ISOLATION LEVEL SERIALIZABLE command. Now it establishes a snapshot at the first "real" query that requires one. The new behavior matches PostgreSQL, and is a good thing. So silence the test failures by adding dummy queries to establish snapshots at the same spots as before. I can't make all of these tests pass on my laptop, even before that commit, so I'm not sure if this fixes them all correctly. But I think so, and a few of these I could even verify locally.
-
- 28 Oct 2017, 15 commits
-
-
Committed by Heikki Linnakangas
If the caller specifies DF_WITH_SNAPSHOT, so that the command is dispatched to the segments with a snapshot, but it currently has no active snapshot in the QD itself, that seems like a mistake. In qdSerializeDtxContextInfo(), the comment talked about which snapshot to use when the transaction has already been aborted. I didn't quite understand that: I don't think the function is used to dispatch the "ABORT" statement itself, and we shouldn't be dispatching anything else in an already-aborted transaction. This makes it clearer which snapshot is dispatched along with the command. In theory, the latest or serializable snapshot can be different from the one being used when the command is dispatched, although I'm not sure if there are any such cases in practice. In the upcoming 8.4 merge, there are more changes coming to snapshot management which make it more difficult to get hold of the latest acquired snapshot in the transaction, so changing this now will ease the pain of merging that.
I don't know why, but after making the change in qdSerializeDtxContextInfo(), I started to get a lot of "Too many distributed transactions for snapshot (maxCount %d, count %d)" errors. Looking at the code, I don't understand how it ever worked. I don't see any guarantee that the array in TempQDDtxContextInfo or TempDtxContextInfo was pre-allocated correctly. Or maybe it got allocated big enough to hold max_prepared_xacts, which was always large enough, but it seemed rather haphazard to me. So in the spirit of "if you don't understand it, rewrite it until you do", I changed the way the allocation of the inProgressXidArray array works. In statically allocated snapshots, i.e. SerializableSnapshot and LatestSnapshot, the array is malloc'd. In a snapshot copied with CopySnapshot(), it points to a part of the palloc'd space for the snapshot. Nothing new so far, but I changed CopySnapshot() to set "maxCount" to -1 to indicate that it's not malloc'd.
Then I modified DistributedSnapshot_Copy and DistributedSnapshot_Deserialize to not give up if the target array is not large enough, but to enlarge it as needed. Finally, I made a little optimization in GetSnapshotData() when running in a QE: move the copying of the distributed snapshot data outside the section guarded by ProcArrayLock. ProcArrayLock can be heavily contended, so that's a nice little optimization anyway, but especially now that DistributedSnapshot_Copy() might need to realloc the array.
-
Committed by Heikki Linnakangas
The new query is simpler. There was a comment about using the temp table to avoid gathering all the data to the master, but I don't think that is a good tradeoff. Creating a temp table is pretty expensive, and even with the temp table, the master needs to broadcast all the master's entries to the segments. For comparison, with the Gather node, all the segments need to send their entries to the master. Isn't that roughly the same amount of traffic? A long time ago, the query was made to use the temp table, after a report from a huge cluster with over 1000 segments, where the total size of pg_attribute, across all the nodes, was over 200 GB. So the catalogs can be large. But even then, I don't think this query can get much better than this. The new query moves some of the logic from SQL to the Python code. Seems simpler that way.
The real reason to do this right now is that in the next commit, I'm going to change the way snapshots are dispatched with a query, and that change will alter the visibility of a temp table created in the same command. In a nutshell: currently, if you do "CREATE TABLE mytemp AS SELECT oid FROM pg_class WHERE relname='mytemp'", the oid of the table being created is included. On PostgreSQL, and after the snapshot changes I'm working on, it will not be. And that would confuse this gpcheckcat query.
-
Committed by Heikki Linnakangas
Commit ce6aafb0 removed these tests.
-
Committed by Dhanashree Kashid
The original issue for which the FIXME was added is fixed in ORCA v2.46.2 and commit 8978e73c updated the test answer files with correct plan and results. Hence the FIXME is no longer valid.
-
Committed by Karen Huddleston
In the backup_43_restore_5 test, which uses different Python versions, we were not properly removing the previous Python packages. When running behave on the restore side, packages for Python 2.7 were not installed since the directory was already present. Signed-off-by: Chris Hajas <chajas@pivotal.io>
-
Committed by Lisa Owen
-
Committed by Dhanashree Kashid
Fix a failure caused by an improper has_oids value when executing nodeSplitUpdate with optimizer=on. has_oids differed between nodes across a Motion, so the receiving Motion could not retrieve the correct value from the sending Motion. Signed-off-by: Yu Yang <macroyuyang@pivotal.io>
-
Committed by Melanie Plageman
Partition exchange for a partition table with sub-partitions should throw an error as soon as an incompatibility between the schemas of the partition table and the candidate table to exchange is found. Signed-off-by: Jesse Zhang <sbjesse@gmail.com>
-
Committed by Melanie Plageman
- Non-recursive CTE nested in non-recursive CTE
- Non-recursive CTE nested in recursive CTE
- Recursive CTE nested in non-recursive CTE
- Recursive CTE nested in recursive CTE
-
Committed by Dhanashree Kashid
Move CTE tests out of TINC and add those not already present to the main test suite. optimizer_functional_part1 runs 58 CTE tests, which can be categorized as:
- cte_queries_<1-24>: Almost all of these tests (with one exception, described next) are already in the main test suite in `qp_with_clause.sql`, which means we had duplicate tests. Hence this commit removes them from TINC. Two queries in `cte_queries24.sql`, query#6 and query#11, were missing in `qp_with_clause.sql`; this commit adds them.
- cte_functest_<58-60>: In the old commit 629f7e76, these tests were not moved since they produced different results on different invocations and were marked as skipped tests. They indeed behave differently with the planner and ORCA: ORCA does not support updating rows when there are multiple matches in the join condition and errors out, while the planner allows this but produces non-deterministic results. There is already coverage for this error in qp_dml_joins.sql and DML_over_joins.sql, so this commit drops these tests rather than moving them to the main test suite.
- enable_cte_plan_space: A pure ORCA test which tests the number of plan alternatives produced by ORCA when optimizer_cte_inlining is ON and OFF. This will be moved to the ORCA test suite.
- icg_cte_with_values: Already present in `with.sql`.
- cte_functest_<22-23>_inlining_<enabled/disabled>: Moved to `qp_with_functional.sql`, which is tested with both inlining and no inlining.
This commit thus removes the `cte` test folder from TINC.
-
Committed by Jimmy Yih
Signed-off-by: Xin Zhang <xzhang@pivotal.io>
-
Committed by Xin Zhang
Signed-off-by: Jimmy Yih <jyih@pivotal.io>
-
Committed by Jimmy Yih
Signed-off-by: Xin Zhang <xzhang@pivotal.io>
-
Committed by Jimmy Yih
This will help remove the aoco_compression TINC tests while keeping the same code coverage. Signed-off-by: Xin Zhang <xzhang@pivotal.io>
-
Committed by Xin Zhang
The aoco_compression TINC tests had this coverage. Migrate it here to help remove the TINC tests later. Signed-off-by: Jimmy Yih <jyih@pivotal.io>
-
- 27 Oct 2017, 14 commits
-
-
Committed by Adam Lee
cdbpartition.c: In function ‘rel_has_external_partition’:
cdbpartition.c:489:32: warning: passing argument 1 of ‘has_external_partition’ from incompatible pointer type [-Wincompatible-pointer-types]
  return has_external_partition(n->rules);
cdbpartition.c:106:13: note: expected ‘PartitionRule * {aka struct PartitionRule *}’ but argument is of type ‘List * {aka struct List *}’
 static bool has_external_partition(PartitionRule *rules);
-
Committed by Marbin Tan
We were having issues downloading packages with setuptools version 0.6c11 because it defaults to an unsecured HTTP link. The website now enforces HTTPS, and plain HTTP no longer works. Update setuptools to enable us to download securely. Signed-off-by: Shoaib Lari <slari@pivotal.io>
-
Committed by Melanie Plageman
This reverts commit 73619008.
-
Committed by Melanie Plageman
- Non-recursive CTE nested in non-recursive CTE
- Non-recursive CTE nested in recursive CTE
- Recursive CTE nested in non-recursive CTE
- Recursive CTE nested in recursive CTE
-
Committed by Alexander Denissov
* removed pxf init options
* fixed substitution regex
* updated regex
* install hadoop clients via RPMs
* exit on error
* reorder operations
* copy cluster config files
* refactored to use HADOOP_ROOT
* added mapred-site.xml to test setup
* added 2 new pipeline jobs
* updated PXF job names in pipeline
-
Committed by Amil Khanzada
This guide is based on the centos-gpdb-dev Docker images that Concourse uses, so there are fewer things we need to add on top. Signed-off-by: Amil Khanzada <akhanzada@pivotal.io>
-
Committed by Asim R P
[ci-skip]
-
Committed by Asim R P
The content is copied from an internal document, barring formatting changes. Thank you Xin for digging out that document. Let's bring the content close to source code so that it's alive and useful. [ci-skip]
-
Committed by Melanie
* Ban split of list-partitioned sub-partitions
Currently, GPDB does not support splitting a multi-level partition when the leaf partition is partitioned by list. However, in many cases, we did not correctly error out and instead crashed. This commit detects a query attempting to split a list-partitioned sub-partition and correctly errors out. Signed-off-by: Sam Dash <sdash@pivotal.io>
-
Committed by Abhijit Subramanya
CID 178114: Performance inefficiencies (PASS_BY_VALUE). The `ProbeConnectionInfo` parameter was being passed to the `probeSegmentHelper()` function by value, which is very inefficient. Fixed by passing the parameter by reference using a pointer.
CID 178115: Calling "pqGetInt" without checking return value (as is done elsewhere 52 out of 57 times). The return value of `pqGetInt()` was not being checked in the `processProbeResponse()` function. Fixed by checking the return value: if EOF is returned, a message is logged and false is returned to the caller. Signed-off-by: Taylor Vesely <tvesely@pivotal.io>
-
Committed by Karen Huddleston
Signed-off-by: Chris Hajas <chajas@pivotal.io> Signed-off-by: Jamie McAtamney <jmcatamney@pivotal.io>
-
Committed by Karen Huddleston
-
Committed by Kavinder Dhaliwal
-
Committed by sambitesh
* Fix crash for query with CUBE and HAVING clause: we were not handling the case when a NULL outerslot was produced in agg_retrieve_direct.
* Add a test case for CUBE with a HAVING clause.
Signed-off-by: Ekta Khanna <ekhanna@pivotal.io>
-
- 26 Oct 2017, 4 commits
-
-
Committed by Todd Sedano
-
Committed by Adam Lee
```
$ wget http://ftp.jaist.ac.jp/pub/apache/apr/${APR}.tar.gz
--2017-10-26 09:33:48--  http://ftp.jaist.ac.jp/pub/apache/apr/apr-1.6.2.tar.gz
Resolving ftp.jaist.ac.jp (ftp.jaist.ac.jp)... 150.65.7.130, 2001:df0:2ed:feed::feed
Connecting to ftp.jaist.ac.jp (ftp.jaist.ac.jp)|150.65.7.130|:80... connected.
HTTP request sent, awaiting response... 404 Not Found
2017-10-26 09:33:49 ERROR 404: Not Found.
```
-
Committed by Heikki Linnakangas
I'm not sure we need all of these, but let's at least move them into the main regression suite. I did leave out a few queries that were identical to existing tests that I spotted, and merged a few that were almost identical.
-
Committed by Richard Guo
When resource group is enabled, initialization of SPIMemReserved should not depend on statement_mem.
-