- 07 Dec 2017, 20 commits
-
-
Committed by Pengzhou Tang
In GPDB, a web external scan is divided into 3 steps: 1. url_execute_fopen() forks a child process and creates a pipe so that the child can execute a command and send its output back through the pipe. 2. url_execute_fread() reads data from the pipe until the child closes its end of the pipe. 3. url_execute_fclose() closes the pipe first, then waits for the child process to exit if failOnError is true. However, for queries with a LIMIT clause, a QE may receive a query-finish signal after url_execute_fopen(), and url_execute_fread() may be skipped, which means the parent may close the read end of the pipe before the child has written any data, so the child exits with a SIGPIPE error. To fix this, we set failOnError to false if QueryFinishPending is true, so that any errors when closing the external file are ignored. QueryFinishPending means the QD has already received enough tuples and the query can return correctly, so it is fine to ignore the error in that case. This fixes issue #4064
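The race described above can be reproduced with a plain OS pipe: if the reading end is closed before the writer writes, the write fails with a broken pipe (delivered as SIGPIPE to a real child process). A minimal Python sketch; the variable names are illustrative stand-ins, not GPDB's actual url_execute_* code:

```python
import os

# Create a pipe: a "child" would write command output to wr,
# while the "parent" reads from rd (as url_execute_fread would).
rd, wr = os.pipe()

# Simulate a LIMIT query finishing early: the parent closes its
# read end before the writer has produced anything.
os.close(rd)

# Writing to a pipe with no reader now fails. In a real child
# process this arrives as SIGPIPE and the child dies abnormally,
# which is why the fix ignores close errors once
# QueryFinishPending is set.
try:
    os.write(wr, b"row data\n")
    write_failed = False
except BrokenPipeError:
    write_failed = True
finally:
    os.close(wr)

print(write_failed)  # True: no reader is left on the pipe
```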
-
Committed by Pengzhou Tang
The command "alter table xxxx alter partition xxx split default partition into xxx" splits an existing partition into two partitions. Previously, split_rows() only checked the first constraint of the new partitions to decide the target partition of a row; this proved to be incorrect, and rows were split into unexpected partitions. This fixes issue #4051, which is hard to catch. Per an analysis by Jesse Zhang: in CheckConstraintFetch, where the tuple descriptor was constructed and later passed into split_rows, we were enumerating the constraints using the index pg_constraint_conrelid_index, which is only indexed on conrelid. This means the traversal order is not guaranteed to be the insertion order: a B-tree node split can render an index insertion out-of-order. It also means that, depending on when the test was run, we may or may not get in-order traversal of the constraints, where the "first" constraint often happens to be the leaf-level constraint (the one that determines the split direction of the old partition). Signed-off-by: Jesse Zhang <jzhang@pivotal.io>
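The bug pattern can be sketched abstractly: a partition produced by the split carries several check constraints (an inherited parent-level one plus its own split constraint), and an index scan may return them in any order, so routing by the first constraint alone mis-places rows. The constraint predicates below are hypothetical stand-ins for the real pg_constraint entries:

```python
# Each target partition of the split has multiple check constraints.
# Fetch order via pg_constraint_conrelid_index is NOT insertion
# order, so the broad inherited constraint may come back "first".
part_a_constraints = [
    lambda r: 0 <= r["k"] < 200,   # inherited range, matches everything
    lambda r: r["k"] < 100,        # the actual split constraint
]
part_b_constraints = [
    lambda r: 0 <= r["k"] < 200,
    lambda r: r["k"] >= 100,
]

def belongs_buggy(constraints, row):
    # Old behavior: consult only the first fetched constraint.
    return constraints[0](row)

def belongs_fixed(constraints, row):
    # Fixed behavior: the row must satisfy every constraint.
    return all(c(row) for c in constraints)

row = {"k": 150}
# The buggy check accepts the row into partition A (the inherited
# constraint matches), even though it belongs in partition B.
print(belongs_buggy(part_a_constraints, row))  # True -> wrong target
print(belongs_fixed(part_a_constraints, row))  # False
print(belongs_fixed(part_b_constraints, row))  # True
```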
-
Committed by Pengzhou Tang
This fixes issue #4022, so any OOM errors in the current query will not muddle subsequent queries.
-
Committed by Pengzhou Tang
Previously, the dispatcher only sent the cancel/finish signal to QEs once, so if the signal arrived faster than the query, or was missed by secure_read(), the QE might have no chance to quit if it was assigned to execute a MOTION node whose peer had been canceled. This fixes issue #3950
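The fix pattern amounts to re-signalling until the QE actually exits instead of assuming one signal is enough. A hedged sketch; the QE object and method names here are toy stand-ins, not the real dispatcher API:

```python
class FakeQE:
    """Stand-in for a QE that loses the first cancel, e.g. the
    signal raced ahead of the query or was swallowed in secure_read()."""
    def __init__(self, signals_lost=1):
        self.signals_lost = signals_lost
        self.finished = False

    def signal_cancel(self):
        if self.signals_lost > 0:
            self.signals_lost -= 1   # signal arrived too early / dropped
        else:
            self.finished = True

def cancel_until_done(qe, max_tries=10):
    # New behavior: keep retrying the cancel instead of sending it once.
    for _ in range(max_tries):
        if qe.finished:
            return True
        qe.signal_cancel()
    return qe.finished

qe = FakeQE()
print(cancel_until_done(qe))  # True: a later signal lands
```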
-
Committed by Shreedhar Hardikar
[#153460065] Signed-off-by: Jesse Zhang <sbjesse@gmail.com>
-
Committed by Jesse Zhang
Those should also switch to the new image. [#153460065] [ci skip] Signed-off-by: Shreedhar Hardikar <shardikar@pivotal.io>
-
Committed by Shreedhar Hardikar
[#153460065] [ci skip] Signed-off-by: Jesse Zhang <sbjesse@gmail.com>
-
Committed by Chuck Litzell
* Add documentation for citext module
* Move citext before dblink in the nav
* Edits, formatting, mention on data types reference topic
* Correct xref file name
-
Committed by mkiyama
-
Committed by mkiyama
-
Committed by Shoaib Lari
The parallelism tests were too stringent, wanting to observe maximum parallelism in order to go green. An example failure we saw was as follows, when 3 threads had been "In Progress" together, but not all four (because one thread finished really quickly): `Worker GpExpandTests.check_number_of_parallel_tables_expanded_case_1 failed execution: AssertionError: The specified value was never reached.` Now, we simply assert that some parallelism is observed at some point (2 or more "In Progress" at a time). If a test run "flakes" such that there never were two "In Progress" at a time, that would be indistinguishable from serial execution, so the test would still fail. Author: C.J. Jameson <cjameson@pivotal.io> Author: Shoaib Lari <slari@pivotal.io>
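The relaxed assertion can be expressed as: record how many tables are "In Progress" at each poll and require only that peak concurrency reaches 2, not the maximum. A sketch with synthetic progress samples (the helper name and data are illustrative, not the actual test code):

```python
def max_concurrent(samples):
    """samples: per-poll counts of tables 'In Progress'."""
    return max(samples) if samples else 0

# One worker finished quickly, so all 4 were never in flight at once.
observed = [1, 2, 3, 3, 2, 1, 0]

# Old, flaky assertion: demand full parallelism.
fully_parallel = max_concurrent(observed) == 4      # False -> red build

# New assertion: some parallelism (2+ at once) is enough, while a
# purely serial run (never more than 1 in progress) still fails.
some_parallelism = max_concurrent(observed) >= 2    # True

print(fully_parallel, some_parallelism)
```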
-
Committed by dyozie
-
Committed by C.J. Jameson
Author: C.J. Jameson <cjameson@pivotal.io> Author: Shoaib Lari <slari@pivotal.io>
-
Committed by Shoaib Lari
Run a distributed query across all segments to force FTS to detect and mark all downed segments. Author: Nadeem Ghani <nghani@pivotal.io> Author: Marbin Tan <mtan@pivotal.io> Author: Shoaib Lari <slari@pivotal.io> Author: C.J. Jameson <cjameson@pivotal.io>
-
Committed by C.J. Jameson
If we did stop all primaries on that host, the cluster would be down anyway. Best to just do a full-cluster gpstop, then bring it all back up together.
-
Committed by C.J. Jameson
The underlying pylib code identifies the master and standby by content id. `gpstop --host localhost` will fail differently: it will simply not find the host in the set of hostnames (unless that's how you configured things at first).
-
Committed by Shoaib Lari
For interaction with `-r`: since we don't stop the master with --host, a restart would fail anyway, so we disallow the combination from the get-go. For interaction with `-m`: if someone is using `--host` thinking they want to stop the master but not the segments on a particular host, they should just do a full gpstop and then bring everything back up. If someone is using `-m` thinking they need to specify the host for the `-m` flag, they don't need to: the tool infers it from the system and shell state. Author: C.J. Jameson <cjameson@pivotal.io> Author: Shoaib Lari <slari@pivotal.io> Author: Marbin Tan <mtan@pivotal.io>
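The flag interactions above boil down to two validation rules: reject `--host` combined with `-r` (restart) and with `-m` (master-only). A sketch of that option check, with hypothetical parsed-option names rather than gpstop's real internals:

```python
class OptionError(Exception):
    pass

def validate_gpstop_options(opts):
    """opts: dict of parsed flags; key names here are illustrative."""
    if opts.get("host"):
        if opts.get("restart"):        # -r
            raise OptionError(
                "--host cannot be combined with -r: the master is not "
                "stopped, so a restart would fail anyway")
        if opts.get("master_only"):    # -m
            raise OptionError(
                "--host cannot be combined with -m: the tool infers "
                "the master host itself")

# Combining --host with -r is rejected up front.
try:
    validate_gpstop_options({"host": "sdw1", "restart": True})
    rejected = False
except OptionError:
    rejected = True
print(rejected)  # True
```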
-
Committed by C.J. Jameson
Signed-off-by: Marbin Tan <mtan@pivotal.io>
-
Committed by Marbin Tan
Add a flag `--host` that stops all segments on the specified host: an easy way to take down a set of segments without having to ssh in and kill processes. Refuse to stop a specific host if any primary isn't synced. Signed-off-by: Nadeem Ghani <nghani@pivotal.io> Signed-off-by: C.J. Jameson <cjameson@pivotal.io>
-
Committed by Nadeem Ghani
This class was handling the hasMirrors field incorrectly. Now we set the hasMirrors flag correctly. This change broke the gpaddmirrors test, so we fixed that too.
-
- 06 Dec 2017, 10 commits
-
-
Committed by Daniel Gustafsson
The check for whether the remote server is a Greenplum instance was using the current version as an optimization, but with the current strategy of aggressive merging that will become a problem. Remove the version check and go only by the server's version() output instead.
-
Committed by yanchaozhong
-
Committed by yanchaozhong
-
Committed by xiong-gang
-
Committed by David Yozie
* Add --compression-level option
* Add --single-data-file option
* Add gprestore --include-schema and --include-table-file options
* Remove statement about using --compression-level and --no-compression together
* Remove restriction about using both --include-schema and --redirect together
* Remove restriction about using both --include-table-file and --globals together
* Add note re: default compression level
-
Committed by David Sharp
e.g.:

    if (big_condition_a &&
        big_condition_b)

not:

    if (big_condition_a
        && big_condition_b)
-
Committed by Ben Christel
- This is not intended to replace pgindent, but it can help get close to Postgres style during development.
- It does not completely match the Postgres style, so please update it as you use it.
Signed-off-by: Amil Khanzada <akhanzada@pivotal.io>
-
Committed by Chuck Litzell
* docs: optimizer_join_order guc
* Implement suggested edits
* Note interactions with other params; correct inaccurate statement
* Clarify this is a GPORCA guc
-
Committed by Mel Kiyama
* docs: postGIS - add GDAL raster driver information
* docs: postgis GDAL - fix typos
-
Committed by Daniel Gustafsson
-
- 05 Dec 2017, 10 commits
-
-
Committed by Heikki Linnakangas
Instead of throwing a NOTICE on every object/role combination, track whether anything at all was revoked, and only issue one NOTICE for the whole command, if nothing was revoked. This reduces the noise if the REVOKE lists multiple objects and/or roles. This refactoring makes it easier to carry this diff vs. upstream, as we merge the column-level permissions feature from upstream. Discussion: https://groups.google.com/a/greenplum.org/d/msg/gpdb-dev/Ttn_UJb4Otg/LS1cFrDiAwAJ
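The refactoring pattern is to accumulate a single "anything revoked" flag across all object/role combinations and emit one NOTICE at the end only when nothing was revoked. A sketch; the grant model below is a toy stand-in for the real ACL machinery:

```python
def revoke(grants, objects, roles):
    """grants: set of (object, role) pairs currently granted.
    Returns the NOTICE messages emitted (at most one now)."""
    anything_revoked = False
    for obj in objects:
        for role in roles:
            if (obj, role) in grants:
                grants.remove((obj, role))
                anything_revoked = True
            # Old behavior emitted a NOTICE right here for every
            # object/role combination that had nothing to revoke.
    notices = []
    if not anything_revoked:
        # New behavior: one NOTICE for the whole command.
        notices.append("no privileges could be revoked")
    return notices

grants = {("t1", "alice")}
print(revoke(grants, ["t1", "t2"], ["alice", "bob"]))  # [] -- quiet
print(revoke(grants, ["t1", "t2"], ["alice", "bob"]))  # one NOTICE
```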
-
Committed by Daniel Gustafsson
Commit 3fe43b8a introduced a lock upgrade in the plan revalidation for UDFs. This makes the lock acquire in RevalidateCachedPlanWithParams() match CdbTryOpenRelation() closer in order to avoid distributed deadlock for UPDATE/DELETE DMLs. It does however also upgrade the lock for INSERT which is overly aggressive. Fix by only upgrading the lock for the two specified DML commands. Also includes an isolationtest test that cause distributed deadlock without this patch. This solves reported cases of deadlock introduced around INSERTs in UDFs.
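The fix narrows the lock upgrade to the two DML types that can cause a distributed deadlock. Sketched as a lock-mode chooser; the string constants mirror PostgreSQL lock-mode names, but this is an illustration, not the actual RevalidateCachedPlanWithParams() code:

```python
ROW_EXCLUSIVE = "RowExclusiveLock"
EXCLUSIVE = "ExclusiveLock"

def lockmode_for(command):
    """Choose the table lock when revalidating a cached plan.
    Only UPDATE and DELETE get the upgraded lock; upgrading INSERT
    as well was the over-aggressive behavior this commit removes."""
    if command in ("UPDATE", "DELETE"):
        return EXCLUSIVE
    return ROW_EXCLUSIVE

print(lockmode_for("UPDATE"))  # ExclusiveLock
print(lockmode_for("INSERT"))  # RowExclusiveLock
```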
-
Committed by Daniel Gustafsson
The simplejson library was partially imported but no longer used, so remove it. Suds seems to have been vendored more intact, but it also appears unused, so remove it as well.
-
Committed by Karen Huddleston
This was accidentally removed in the commit that changed debug_sleep.
-
Committed by Jesse Zhang
-
Committed by Venkatesh Raghavan
While porting the test from tinc, we added a schema for each test. During refactoring we forgot to add the schema name and the correct table name in the test query.
-
Committed by David Yozie
-
Committed by PA Toolsmiths
-
Committed by Divya Bhargov
The failed cluster will now remain running for some time and can be accessed. Signed-off-by: Ed Espino <eespino@pivotal.io>
-
Committed by Mel Kiyama
* docs: PL/Container - add information about disk quotas. The information is added to the Notes section. Also:
  - edited some existing information
  - fixed description of plcontainer_refresh_config and plcontainer_show_config to be views, not functions
* docs: plcontainer - fix typo
* docs: pl/container - clarified when base device size is displayed by docker info
-