- 31 October 2017, 10 commits
-
-
Committed by Zhenghua Lyu
Non-superusers should not be able to execute pg_resgroup_get_status_kv.
-
Committed by Dhanashree Kashid
Tests "qp_query_execution" and "qp_correlated_query" are run in parallel in ICG. Excerpt from the greenplum_schedule file:

```
test: qp_functions qp_misc_rio_join_small qp_misc_rio qp_correlated_query qp_targeted_dispatch qp_gist_indexes2 qp_gist_indexes3 qp_gist_indexes4 qp_query_execution
```

If the timing is wrong, they conflict with each other and cause plan differences in qp_correlated_query. Both tests create a relation named "B" in their own namespaces; however, qp_query_execution later updates the reltuples for "B" in pg_class. That update uses only relname to locate the entry for "B" in pg_class and sets its tuple count to a large value, so reltuples ends up updated for both relations named "B" (in the qp_query_execution and qp_correlated_query namespaces). This causes intermittent EXPLAIN failures in qp_correlated_query, making it flaky. This commit fixes the problem by also using relnamespace in the pg_class update so that "B" is uniquely identified. Signed-off-by: Sambitesh Dash <sdash@pivotal.io>
-
Committed by Zhenghua Lyu
The default value is really just a recommendation. To be safe for clusters and to keep segment memory usage close to the previously experienced value, we set this GUC's default to 0.7.
-
Committed by Dhanashree Kashid
Signed-off-by: Sambitesh Dash <sdash@pivotal.io>
-
Committed by Chuck Litzell
* Consolidate XML transform examples with gpfdist docs in the admin guide
* Minor edit
* Review comments and relocate to load section
* Fix links to the relocated file
-
Committed by Chris Hajas
These tags were never used. This will be backported to 5X_STABLE.
-
Committed by Heikki Linnakangas
Despite the name, "dml_boundary_intarray", this test doesn't go anywhere near any limits. And we have tests for UPDATEs on arrays in the 'arrays' test.
-
Committed by Lisa Owen
-
Committed by Todd Sedano
Prior to this change, `make create-demo-cluster` would fail because the command could not find pip.
-
Committed by Marbin Tan
gpcheckcat is already on terraform. This change was missed when moving gpcheckcat from pulse to terraform.
-
- 30 October 2017, 14 commits
-
-
Committed by Chuck Litzell
* Consolidate XML transform examples with gpfdist docs in the admin guide
* Minor edit
* Review comments and relocate to load section
-
Committed by Adam Lee
-
Committed by Heikki Linnakangas
In commit 226e8867, I changed the CatMissingIssue object to hold the content IDs of segments where an entry is missing in a Python list, instead of the string representation of a PostgreSQL array (e.g. "{1,2,-1}") that was used before. That was a nice simplification, but it turns out that there was more code that accessed the CatMissingIssue.segids field that I missed. It would make sense to change the rest of the code, IMHO, but to make the CI pipeline happy quickly, this commit just changes the code back to using a string representation of a PostgreSQL array again. This hopefully fixes the MM_gpcheckcat behave test failures.
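A hedged sketch of the two representations of CatMissingIssue.segids mentioned above; the class and field names come from the message, while the helper names are hypothetical:

```python
def segids_to_pg_array(segids):
    """Render a Python list of content IDs, e.g. [1, 2, -1], as the PostgreSQL
    array literal "{1,2,-1}" that the older gpcheckcat code expects."""
    return "{" + ",".join(str(segid) for segid in segids) + "}"

def pg_array_to_segids(text):
    """Parse a PostgreSQL array literal such as "{1,2,-1}" back into [1, 2, -1]."""
    inner = text.strip().lstrip("{").rstrip("}")
    return [int(part) for part in inner.split(",")] if inner else []

assert segids_to_pg_array([1, 2, -1]) == "{1,2,-1}"
assert pg_array_to_segids("{1,2,-1}") == [1, 2, -1]
```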
-
Committed by Adam Lee
Use the WIN32 macro to bypass some code, such as poll.
-
Committed by Jialun
The gpload error count is incorrect when more than one segment has format errors, because the cmdtime differs between segments and only errors with the newest cmdtime are counted. So we add startTime, which is used to count all the errors that occurred during the same gpload operation.
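A hedged sketch of counting every formatting error since the operation began rather than only those with the newest cmdtime; the query shape follows gp_read_error_log(), but the helper and its parameters are illustrative:

```python
COUNT_ERRORS_SINCE_START = """
    SELECT count(*)
    FROM gp_read_error_log(%(target)s)
    WHERE cmdtime >= %(start_time)s
"""

def count_gpload_errors(cursor, target_table, start_time):
    # start_time is captured once, when the gpload operation starts, so errors
    # reported by different segments with different cmdtime values all count.
    cursor.execute(COUNT_ERRORS_SINCE_START,
                   {"target": target_table, "start_time": start_time})
    return cursor.fetchone()[0]
```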
-
Committed by Adam Lee
-
Committed by Adam Lee
SUSE needs header files for off_t and Windows has no poll. (cherry picked from commit 222d9c6dc63421c6aa2006ee02f4a18848cfc2f8)
-
Committed by Ning Yu
On low-end systems with 1-2 CPU cores, new queries in a cold resgroup can suffer high latency when the overall load is very high. The root cause is that we used to set a very high CPU priority for the gpdb cgroups, so non-gpdb processes were scheduled with very low priority and high latency. GPDB processes are also affected by this, because postmaster and the other auxiliary processes are not put into gpdb cgroups; even QD and QEs are not put into a gpdb cgroup until their transaction has begun. To fix this we made the changes below:
* put postmaster and all its child processes into the top-level gpdb cgroup;
* provide a GUC to control the cgroup CPU priority for gpdb processes when resgroup is enabled;
* set a lower CPU priority by default.
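An illustrative Python sketch of how a priority-style GUC could translate into cgroup CPU shares for the top-level gpdb cgroup; the GUC handling, the 1024-shares-per-core baseline, and the cgroup path are assumptions (the actual implementation lives in the resgroup C code):

```python
import os

def gpdb_cpu_shares(cpu_priority, ncores=None):
    # Ordinary processes default to 1024 shares; scaling by core count and the
    # priority GUC decides how strongly gpdb outweighs non-gpdb processes.
    ncores = ncores or os.cpu_count()
    return 1024 * ncores * cpu_priority

def apply_cpu_shares(shares, cgroup_dir="/sys/fs/cgroup/cpu/gpdb"):
    # Writing cpu.shares on the top-level gpdb cgroup covers postmaster and all
    # of its children, since they are now placed in that cgroup.
    with open(os.path.join(cgroup_dir, "cpu.shares"), "w") as f:
        f.write(str(shares))

apply_cpu_shares(gpdb_cpu_shares(cpu_priority=10))  # example priority value
```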
-
Committed by Adam Lee
1. The user for a QD-to-QD connection comes from the PGUSER environment variable; we need to set it to the session user in dblink.
2. A QD-to-QD Unix domain socket connection doesn't require any authentication, so require non-superusers to provide a host so that TCP/UDP connections are used.
Signed-off-by: Adam Lee <ali@pivotal.io> Signed-off-by: Xiaoran Wang <xiwang@pivotal.io>
-
Committed by Adam Lee
Before this change, mock.mk had trouble filtering mocked objects out of src/backend/objfiles.txt, because the filenames in it carry a redundant "src/backend/../../" fragment and a "_for_backend" suffix. This commit removes them before mocking to make it work.
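A hedged sketch of the name cleanup described above; the helper names are hypothetical, and the real change lives in mock.mk rather than Python:

```python
def normalize_objfile(name):
    # Drop the redundant path fragment and the "_for_backend" suffix so names
    # from objfiles.txt can be matched against the mocked object names.
    return name.replace("src/backend/../../", "").replace("_for_backend", "")

def unmocked_objects(objfiles, mocked):
    mocked = set(mocked)
    return [obj for obj in (normalize_objfile(n) for n in objfiles)
            if obj not in mocked]
```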
-
Committed by Adam Lee
Signed-off-by: Adam Lee <ali@pivotal.io>
-
Committed by Adam Lee
commit b0328d5631088cca5f80acc8dd85b859f062ebb0
Author: mcdevc <a@b>
Date: Fri Mar 6 16:28:45 2009 -0800
Separate our internal libpq front end from the client libpq library; upgrade libpq to the latest to pick up bug fixes and support for more client authentication types (GSSAPI, KRB5, etc). Upgrade all files dependent on libpq to handle the new version.
The above is the initial commit of gp_libpq_fe; there seems to be no good reason to still have it. The key things this PR does:
1. Remove the gp_libpq_fe directory.
2. Build the libpq source code into two versions, for frontend and backend, distinguished by the FRONTEND macro.
3. libpq for the backend still bypasses local authentication, SSL, and some environment variables; these are the only differences.
Signed-off-by: Adam Lee <ali@pivotal.io>
-
Committed by Pengzhou Tang
-
Committed by Heikki Linnakangas
In commit 226e8867, I changed the shape of the result set passed to the processMissingDuplicateEntryResult() function, removing the "exists" column. But I failed to update the line that extracts the primary key columns from the result set for that change. Fix. This should fix the failures in the gpcheckcat behave tests.
-
- 29 October 2017, 3 commits
-
-
Committed by Heikki Linnakangas
The test had become useless somewhere along the years. The bug was that if ORCA fell back to the planner, then the check that you cannot update a distribution key column with the planner would not be made, and you could end up with incorrectly distributed rows. The test used a multi-level partitioned table as the target, because when the test was originally written, multi-level partitioning was not supported by ORCA. But at some point, support for that was added, so the test no longer tested the original bug it was written for. Rewrite the test using a different feature that ORCA falls back on, add comments to make it more clear what this is supposed to test so that it won't be broken so easily again. And finally, move the test out of TINC, into the main regression suite, which is what I was doing when I realized that it was broken altogether.
-
Committed by Taylor Vesely
The ephemeral port range is given by the net.ipv4.ip_local_port_range kernel parameter, which is set to 32768-60999. If GPDB uses port numbers in this range, an FTS probe request may not get a response, resulting in FTS incorrectly marking a primary down. We change the example configuration files to use lower port numbers outside this range. Signed-off-by: Asim R P <apraveen@pivotal.io>
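A small illustrative check, assuming a Linux host: it reads the kernel parameter mentioned above and warns when a configured port falls inside the ephemeral range (the port in the usage line is made up):

```python
def ephemeral_port_range(path="/proc/sys/net/ipv4/ip_local_port_range"):
    with open(path) as f:
        low, high = map(int, f.read().split())
    return low, high

def check_port(port):
    low, high = ephemeral_port_range()
    if low <= port <= high:
        print("WARNING: port %d lies inside the ephemeral range %d-%d; "
              "FTS probes to it may be unreliable" % (port, low, high))

check_port(40000)  # example: flag a port drawn from the 32768-60999 range
```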
-
Committed by Heikki Linnakangas
Since commit 4a95afc1, a serializable transaction no longer establishes its snapshot at the SET TRANSACTION ISOLATION LEVEL SERIALIZABLE command. Now it establishes a snapshot at the first "real" query that requires one. The new behavior matches PostgreSQL and is a good thing. So silence the test failures by adding dummy queries to establish snapshots at the same spots as before. I can't make all of these tests pass on my laptop, even before that commit, so I'm not sure if this fixes them all correctly. But I think so, and a few of these I could even verify locally.
-
- 28 October 2017, 13 commits
-
-
Committed by Heikki Linnakangas
If the caller specifies DF_WITH_SNAPSHOT, so that the command is dispatched to the segments with a snapshot, but there is currently no active snapshot in the QD itself, that seems like a mistake.

In qdSerializeDtxContextInfo(), the comment talked about which snapshot to use when the transaction has already been aborted. I didn't quite understand that. I don't think the function is used to dispatch the "ABORT" statement itself, and we shouldn't be dispatching anything else in an already-aborted transaction.

This makes it more clear which snapshot is dispatched along with the command. In theory, the latest or serializable snapshot can be different from the one being used when the command is dispatched, although I'm not sure if there are any such cases in practice. In the upcoming 8.4 merge, there are more changes coming up to snapshot management, which make it more difficult to get hold of the latest acquired snapshot in the transaction, so changing this now will ease the pain of merging that.

I don't know why, but after making the change in qdSerializeDtxContextInfo, I started to get a lot of "Too many distributed transactions for snapshot (maxCount %d, count %d)" errors. Looking at the code, I don't understand how it ever worked. I don't see any guarantee that the array in TempQDDtxContextInfo or TempDtxContextInfo was pre-allocated correctly. Or maybe it got allocated big enough to hold max_prepared_xacts, which was always large enough, but it seemed rather haphazard to me. So in the spirit of "if you don't understand it, rewrite it until you do", I changed the way the allocation of the inProgressXidArray array works. In statically allocated snapshots, i.e. SerializableSnapshot and LatestSnapshot, the array is malloc'd. In a snapshot copied with CopySnapshot(), it points to part of the palloc'd space for the snapshot. Nothing new so far, but I changed CopySnapshot() to set "maxCount" to -1 to indicate that the array is not malloc'd. Then I modified DistributedSnapshot_Copy and DistributedSnapshot_Deserialize to not give up if the target array is not large enough, but to enlarge it as needed.

Finally, I made a little optimization in GetSnapshotData() when running in a QE, to move the copying of the distributed snapshot data outside the section guarded by ProcArrayLock. ProcArrayLock can be heavily contended, so that's a nice little optimization anyway, but especially now that DistributedSnapshot_Copy() might need to realloc the array.
-
Committed by Heikki Linnakangas
The new query is simpler. There was a comment about using the temp table to avoid gathering all the data to the master, but I don't think that is a good tradeoff. Creating a temp table is pretty expensive, and even with the temp table, the master needs to broadcast all of its entries to the segments. For comparison, with the Gather node, all the segments need to send their entries to the master. Isn't that roughly the same amount of traffic?

A long time ago, the query was made to use the temp table, after a report from a huge cluster with over 1000 segments, where the total size of pg_attribute across all the nodes was over 200 GB. So the catalogs can be large. But even then, I don't think this query can get much better than this.

The new query moves some of the logic from SQL to the Python code. Seems simpler that way.

The real reason to do this right now is that in the next commit, I'm going to change the way snapshots are dispatched with a query, and that change will change the visibility of a temp table created in the same command. In a nutshell, currently, if you do "CREATE TABLE mytemp AS SELECT oid FROM pg_class WHERE relname='mytemp'", the oid of the table being created is included. On PostgreSQL, and after the snapshot changes I'm working on, it will not be. That would confuse this gpcheckcat query.
-
Committed by Heikki Linnakangas
Commit ce6aafb0 removed these tests.
-
Committed by Dhanashree Kashid
The original issue for which the FIXME was added is fixed in ORCA v2.46.2 and commit 8978e73c updated the test answer files with correct plan and results. Hence the FIXME is no longer valid.
-
Committed by Karen Huddleston
In the backup_43_restore_5 test, which uses different Python versions, we were not properly removing the previous Python packages. When running behave on the restore side, packages for Python 2.7 were not installed since the directory was already present. Signed-off-by: Chris Hajas <chajas@pivotal.io>
-
Committed by Lisa Owen
-
Committed by Dhanashree Kashid
Fix a failure caused by an incorrect has_oids value when executing nodeSplitUpdate with optimizer=on. has_oids differed between the nodes on either side of a Motion, so the receiving motion could not retrieve the correct value from the sending motion. Signed-off-by: Yu Yang <macroyuyang@pivotal.io>
-
Committed by Melanie Plageman
Partition exchange for a partitioned table with sub-partitions should throw an error as soon as an incompatibility is found between the schemas of the partitioned table and the candidate table to exchange. Signed-off-by: Jesse Zhang <sbjesse@gmail.com>
-
Committed by Melanie Plageman
- Non-recursive CTE nested in non-recursive CTE
- Non-recursive CTE nested in recursive CTE
- Recursive CTE nested in non-recursive CTE
- Recursive CTE nested in recursive CTE
-
Committed by Dhanashree Kashid
Move CTE tests out of TINC and add the ones that are not already present to the main test suite. optimizer_functional_part1 runs 58 CTE tests, which can be categorized as:

- cte_queries_<1-24>: Almost all of these tests (with one exception, described next) are already in the main test suite in `qp_with_clause.sql`, which means we had duplicate tests; hence this commit removes them from TINC. Two queries in `cte_queries24.sql`, query #6 and query #11, were missing from `qp_with_clause.sql`; this commit adds them.
- cte_functest_<58-60>: In the old commit 629f7e76, these tests were not moved since they produced different results on different invocations and were marked as skipped tests. They indeed behave differently with the planner and ORCA: ORCA does not support updating rows when there are multiple matches in the join condition and errors out, while the planner allows this but the results are non-deterministic. There is already coverage for this error in qp_dml_joins.sql and DML_over_joins.sql, so this commit drops these tests instead of moving them to the main test suite.
- enable_cte_plan_space: A pure ORCA test which checks the number of plan alternatives produced by ORCA when optimizer_cte_inlining is ON and OFF. This will be moved to the ORCA test suite.
- icg_cte_with_values: The test is already present in `with.sql`.
- cte_functest_<22-23>_inlining_<enabled/disabled>: Moved to `qp_with_functional.sql`, which is tested with both inlining and no inlining.

This commit thus removes the `cte` test folder from TINC.
-
Committed by Jimmy Yih
Signed-off-by: Xin Zhang <xzhang@pivotal.io>
-
Committed by Xin Zhang
Signed-off-by: Jimmy Yih <jyih@pivotal.io>
-
Committed by Jimmy Yih
Signed-off-by: Xin Zhang <xzhang@pivotal.io>
-