- 10 May 2019 (10 commits)
-
-
Committed by Daniel Gustafsson

This moves the error path for EXTERNAL TABLE in LIKE INCLUDING under the context-aware error handling. This makes the error when creating an external table the same as when creating a foreign table. As a bonus it avoids a few checks in the error case. Reviewed-by: Jimmy Yih Reviewed-by: Asim R P
-
Committed by Daniel Gustafsson

INCLUDING STORAGE was introduced in PostgreSQL 9.0, but support for setting the AO/AOCS/ENCODING storage options on the created table was never added when we merged from upstream. This extends support for copying over the table storage type as well as attribute encodings when creating a table with INCLUDING STORAGE or ALL. Reported-by: Cyrille Lintz Reviewed-by: Jimmy Yih Reviewed-by: Asim R P
-
Committed by Daniel Gustafsson
-
Committed by Daniel Gustafsson

The pageinspect code only knows how to handle heap relations, so make sure to error out gracefully on AO/AOCS and EXTERNAL relations. Reported-by: Lisa Owen <lowen@pivotal.io> Reviewed-by: Asim R P <apraveen@pivotal.io>
-
Committed by Daniel Gustafsson
s/specifed/specified/
-
Committed by Weinan WANG

At present, `gpstop` checks the `pg_stat_activity` table before stopping the cluster in smart mode. However, we need to ignore those background workers that are part of the database implementation itself. For this purpose, we maintain a whitelist based on the `application_name` field.
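The whitelist check described above can be sketched as follows. This is a hypothetical illustration, not the actual gpstop code: the function name and the whitelist entries are placeholders.

```python
# Hypothetical sketch of filtering pg_stat_activity rows by application_name.
# The whitelist entries below are placeholders, not the real gpstop list.
BACKGROUND_WORKER_WHITELIST = {"internal_worker_a", "internal_worker_b"}

def blocking_sessions(rows):
    """Return the pg_stat_activity rows that should block a smart-mode stop.

    Each row is a dict with at least an 'application_name' key; rows whose
    application_name is on the whitelist are internal workers and ignored.
    """
    return [
        r for r in rows
        if r.get("application_name") not in BACKGROUND_WORKER_WHITELIST
    ]

sessions = [
    {"application_name": "psql"},
    {"application_name": "internal_worker_a"},
]
print(blocking_sessions(sessions))  # only the psql session remains
```

With this shape, adding a new internal worker only requires extending the whitelist set.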
-
Committed by Lisa Owen
-
Committed by Jesse Zhang

The "combine" function for the int4 sum/avg aggregate functions was backported to Greenplum 6.0 in commit 313cef6e (from postgres/postgres@11c8669c0cc), but we inadvertently left the "strictness" of int4_avg_combine set to false. This by itself is harmless, as the actual function body *does* guard against NULL input, but it prohibits a whole host of optimizations where the executor and planner can detect NULL input early on and short-circuit the execution. Oops. This patch flips the `proisstrict` flag back to true for int4_avg_combine. Backpatch to 6X_STABLE.
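The effect of the strictness flag can be modeled in a few lines. This is an illustrative sketch of strict-combine semantics, not Greenplum code; the function names and the (count, sum) transition-state shape are assumptions for the example.

```python
# Model of "strict" semantics for an aggregate combine function: when the
# function is marked strict, the executor never invokes it on NULL input and
# can short-circuit; a non-strict function must be called and guard itself.
def call_aggregate_combine(fn, is_strict, a, b):
    if is_strict and (a is None or b is None):
        # Executor-side short-circuit: keep whichever state is non-NULL.
        return a if b is None else b
    return fn(a, b)

def int4_avg_combine(a, b):
    # The body guards against NULL itself, mirroring the commit's point that
    # the accidental non-strict marking was functionally harmless.
    if a is None:
        return b
    if b is None:
        return a
    return (a[0] + b[0], a[1] + b[1])  # hypothetical (count, sum) state

print(call_aggregate_combine(int4_avg_combine, True, (2, 10), None))  # (2, 10)
```

Marking the function strict lets the planner and executor skip the call entirely on NULL input, which is exactly the class of optimization the commit restores.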
-
Committed by Sambitesh Dash
optimizer_enable_dml is set to true by default. When set to false, ORCA will fall back to planner for all DML queries.
-
Committed by Lisa Owen
* docs - add to pxf upgrade procedure (v5.0.1 to v5.3.2) * remove extraneous to
-
- 09 May 2019 (11 commits)
-
-
Committed by Daniel Gustafsson
When referring to the product name and not a database instance running Greenplum, the capitalization should be "Greenplum Database".
-
Committed by Asim R P
-
Committed by Asim R P

The --load-extension option was misspelt as load_extension. Resource group tests are difficult to run locally, and they are not run in the PR pipeline. That makes it difficult to catch such errors.
-
Committed by Asim R P

This test has failed at least once due to the terminate query being executed before the 'create table' statement it was meant to terminate. This was evident from master logs. The commit makes the test more reliable by injecting a fault and waiting for the fault to be triggered before executing pg_terminate_backend(). As a side benefit, we no longer need to create any additional table.
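The synchronization pattern described above (wait until the injected fault has actually been hit before terminating) can be sketched as a polling loop. This is a hypothetical illustration of the test-hardening pattern, not the actual fault-injection harness; `check_triggered` stands in for whatever query the harness uses to inspect fault state.

```python
import time

# Poll until an injected fault reports as triggered, so that the subsequent
# pg_terminate_backend() can no longer race ahead of the statement it targets.
def wait_for_fault(check_triggered, timeout=30.0, interval=0.1):
    """Poll check_triggered() until it returns True or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check_triggered():
            return True
        time.sleep(interval)
    return False

hits = iter([False, False, True])
print(wait_for_fault(lambda: next(hits), timeout=5, interval=0.01))  # True
```

Only after this returns True would the test issue the terminate, making the ordering deterministic.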
-
Committed by Asim R P

The extension is created by the test harness, after creating the regression / isolation2test databases. Tests should directly start using the extension and not attempt to create it. In addition to simplifying tests a little bit, this change avoids an error (duplicate key value violates unique constraint on pg_extension) when two or more tests execute the create extension command concurrently. The error is not a problem in practice; it is expected because of the way the create extension DDL works. It is, however, unacceptable for regress-style tests that expect a deterministic response to each SQL command.
-
Committed by Sambitesh Dash
-
Committed by Pengzhou Tang

In pg_get_expr(), after getting the relname, if the table identified by the relid has been dropped, an error is raised later when opening the relation to get column names. pg_get_expr() is used by the GPDB add-on view 'pg_partitions', which is widely used by regression tests for partition tables. Lots of parallel test cases query pg_partitions and drop partition tables concurrently, so those cases are very flaky. Serializing the test cases would cost more testing time and be fragile, so GPDB holds an AccessShareLock here to make the tests stable.
-
Committed by Pengzhou Tang

Assume user1 has privileges on database db1 and user2 does not. When user1 tries to create a schema in db1 and authorize it to user2, a permission denied error is reported on the QE. The root cause is that the QD sets the current user to user2 before dispatching the query to the QEs, so each QE also sets the current user to user2; however, user2 has no privilege to create a schema in database db1. To fix this, we delay setting the current user to user2 until the query has been dispatched to the QEs.
-
Committed by Pengzhou Tang

GPDB used to allow a command like "START_REPLICATION %X/%X [SYNC]" to start replication; users could specify the SYNC option to skip waiting for synchronous replication. Now the start replication command is made similar to upstream: the SYNC option is not supported, however the internal flag "synchronous" is still used and is always false, which makes master and standby never synchronized.
-
Committed by Karen Huddleston

Some of the task files were used by jobs in the orca pipeline, but those jobs have been removed so the files are not being used anymore. Co-authored-by: Karen Huddleston <khuddleston@pivotal.io> Co-authored-by: David Sharp <dsharp@pivotal.io>
-
Committed by David Yozie
-
- 08 May 2019 (13 commits)
-
-
Committed by Daniel Gustafsson

Ensure that error messages in src/backend/parser follow the upstream guidelines for formatting, styling and content:
* Start hints and details with uppercase and end with a period
* Start messages with lowercase and no ending period
* Avoid breaking messages across lines in code, to make grepping easier
This also cleans up the worst whitespace offences around error messages. Reviewed-by: Tang Pengzhou <ptang@pivotal.io>
-
Committed by Daniel Gustafsson
-
Committed by Daniel Gustafsson

Initializing a struct with zeroes using {0} is legal in C99, but causes a warning in Clang when the first member of the struct isn't a scalar variable: gpfdist.c:1299:44: warning: suggest braces around initialization of subobject [-Wmissing-braces] struct fstream_filename_and_offset fos = {0}; This is a bug in Clang, but we also don't want to turn off this class of warnings due to this trivial false positive, as that might hide actual warnings somewhere. Since the struct in question is guaranteed to be set before being read, we can avoid the explicit initialization.
-
Committed by Daniel Gustafsson

The value set in default_pos was never used, so remove the variable entirely. The consumers were most likely removed in a refactoring at some point. Reviewed-by: Jimmy Yih <jyih@pivotal.io>
-
Committed by Daniel Gustafsson

Allocating memory manually with malloc() requires checking that the allocation was granted before dereferencing the pointer. To fix, use pg_malloc(), which is guaranteed to return a valid pointer or error out. Also update the corresponding cleanup to use a matching pg_free() call. Reviewed-by: Jimmy Yih <jyih@pivotal.io>
-
Committed by Peifeng Qiu
- Copy and package dependency DLLs - Add gpload.bat to make gpload runnable in cmd - Fix pygresql build script - Replace gpload version variable
-
Committed by Zhenghua Lyu

The lockmode of an UPDATE, DELETE, or SELECT FOR UPDATE statement is controlled by whether the table is AO or heap, and by the GUC gp_enable_global_deadlock_detector. The logic for the lockmode is:
1. SELECT FOR UPDATE always holds ExclusiveLock
2. UPDATE|DELETE on AO tables always holds ExclusiveLock
3. UPDATE|DELETE on heap tables holds ExclusiveLock when gp_enable_global_deadlock_detector is off, otherwise RowExclusiveLock
We take locks both in the parser stage and in InitPlan before executing, and the lockmode must be the same at both stages. This commit fixes lockmode issues to make things correct. Co-authored-by: Shujie Zhang <shzhang@pivotal.io>
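The three rules above can be written out as a small decision function. This is a sketch of the stated rules only (the real logic lives in the C backend); the statement-kind strings are placeholders.

```python
# Sketch of the lockmode rules from the commit message; illustrative only.
EXCLUSIVE = "ExclusiveLock"
ROW_EXCLUSIVE = "RowExclusiveLock"

def lockmode(stmt, is_ao_table, gdd_enabled):
    """Pick a lockmode per the three rules in the commit message."""
    if stmt == "select-for-update":
        return EXCLUSIVE                                     # rule 1
    if stmt in ("update", "delete"):
        if is_ao_table:
            return EXCLUSIVE                                 # rule 2
        return ROW_EXCLUSIVE if gdd_enabled else EXCLUSIVE   # rule 3
    raise ValueError("no lockmode rule for %r" % stmt)

print(lockmode("update", False, True))   # RowExclusiveLock
print(lockmode("update", False, False))  # ExclusiveLock
```

Because both the parser stage and InitPlan would call the same function, the two stages cannot disagree on the lockmode, which is the invariant the commit restores.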
-
Committed by David Yozie
* Docs: fixing several broken links * Docs: fixing several broken links * Adding info about GPORCA support notice * add missing comma
-
Committed by Taylor Vesely

Force the mirror to create a restartpoint, and as a side effect replay the DROP TABLESPACE DDL, before removing the tablespace directory. Co-authored-by: Soumyadeep Chakraborty <sochakraborty@pivotal.io>
-
Committed by Soumyadeep Chakraborty

The test was intended to confirm that pg_basebackup wouldn't overwrite an existing tablespace directory (without --force-overwrite). Before: the command was failing because -1 is an invalid dbid. Now: the command fails as originally intended. Co-authored-by: Taylor Vesely <tvesely@pivotal.io>
-
Committed by Alexandra Wang

The server now removes the source dbid from the end of any tablespace symlink target found in its response to BASE_BACKUP: this means that it removes it from the tablespace header as well as from any tar directory entries for a given tablespace. The pg_basebackup client now adds the target dbid at the end of the symlink target returned from the server response in order to create the correct symlink: <target_datadir>/pg_tblspc/<tablespace_oid> -> <tablespace_location>/<target_db_id> Note: all of the above is applicable to user-defined tablespaces. Also, we renamed some of the DDL objects in the isolation2 tests for tablespaces as they were too long; reducing the length of the LOCATION clause of the tablespace object also helped us avoid the tar header limit of 100 characters for symlinks. Co-authored-by: Soumyadeep Chakraborty <sochakraborty@pivotal.io> Co-authored-by: Taylor Vesely <tvesely@pivotal.io> Co-authored-by: Adam Berlin <aberlin@pivotal.io>
-
Committed by Taylor Vesely

This commit includes changes to the server to ensure that the utilities pg_rewind and pg_basebackup can be changed to support recovery in a multi-segment, single-host setting. We link pg_tblspc to a <dbid> subdirectory of the tablespace, rather than to the path of the tablespace directly, and we remove the <dbid> from the tablespace version directory. At the same time, we have preserved the response of pg_tablespace_location(<tablespace_oid>) such that it does not return the dbid suffix; by design, it is the responsibility of the utilities to append the dbid as and when required.

Before this commit:
* the symlink to the tablespace directory looks like: pg_tblspc/spcoid/ -> /<tablespace_location>/
* under the symlink target, we would have: GPDB_MAJORVER_CATVER_db<dbid>/dboid/relfilenode
* pg_tablespace_location(tsoid) returns: <tablespace_location>
e.g.
* pg_tblspc/20981/ -> /data1/tsp1
* under /data1/tsp1: GPDB_6_201902061_db1/19849/192814
* pg_tablespace_location(20981) returns: /data1/tsp1

After this commit:
* the symlink to the tablespace directory looks like: pg_tblspc/spcoid/ -> /<tablespace_location>/<dbid>
* under the symlink target, we would have: GPDB_MAJORVER_CATVER/dboid/relfilenode
* pg_tablespace_location(tsoid) returns: <tablespace_location>
e.g.
* pg_tblspc/20981/ -> /data1/tsp1/1
* under /data1/tsp1/1: GPDB_6_201902061/19849/192814
* pg_tablespace_location(20981) returns: /data1/tsp1

Motivation: when tablespaces were aligned with upstream postgres, while removing filespaces, we added the `tablespace_version_directory()` function to supply each segment with a unique tablespace directory name. This was accomplished by appending the 'magic' `GpIdentity.dbid` global variable to `GP_TABLESPACE_VERSION_DIRECTORY` in `tablespace_version_directory()`. This is problematic for several reasons, but perhaps most severely because in order to use any code in libpgcommon.so that references this value, you need to first set the `GpIdentity.dbid` global; otherwise any functions that deal with tablespaces will be broken in unpredictable ways. An example is pg_rewind, where `GetRelationPath()` will not return a valid relation unless you repeatedly toggle `GpIdentity.dbid` between the values of the source and target segments depending on the context of which relfiles are being examined. This commit bumps the catalog version, since we have made breaking changes in the tablespace filesystem layout. Co-authored-by: Adam Berlin <aberlin@pivotal.io> Co-authored-by: Taylor Vesely <tvesely@pivotal.io> Co-authored-by: Soumyadeep Chakraborty <sochakraborty@pivotal.io>
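The before/after layout from the commit message can be captured as a pair of path builders. This is a sketch of the described layout only; the version string and paths are the examples from the commit message, and the function names are hypothetical.

```python
import os.path

# Layout before the commit: the symlink points at the raw tablespace
# location, and the dbid is baked into the version directory name.
def symlink_target_before(location, dbid):
    return location  # pg_tblspc/<oid> -> /data1/tsp1 (dbid unused here)

def version_dir_before(dbid):
    return "GPDB_6_201902061_db%d" % dbid

# Layout after the commit: the dbid moves into the symlink target, and the
# version directory no longer encodes it.
def symlink_target_after(location, dbid):
    return os.path.join(location, str(dbid))  # pg_tblspc/<oid> -> /data1/tsp1/1

def version_dir_after(dbid):
    return "GPDB_6_201902061"

print(symlink_target_after("/data1/tsp1", 1))  # /data1/tsp1/1
print(version_dir_after(1))                    # GPDB_6_201902061
```

In both layouts pg_tablespace_location() keeps returning /data1/tsp1, so it stays the utilities' job to append the dbid when they need the on-disk path.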
-
Committed by David Sharp

Co-authored-by: David Sharp <dsharp@pivotal.io> Co-authored-by: Goutam Tadi <gtadi@pivotal.io>
-
- 07 May 2019 (6 commits)
-
-
Committed by Lisa Owen

* docs - address lock exhaustion shared mem error msg * capitalize Out in title
-
Committed by Lisa Owen
* docs - enhance pxf jdbc partitioning content * add missing comma * simplify some content
-
Committed by Adam Berlin
There was a concern that an exception during GetNonHistoricCatalogSnapshot would be problematic after setting the global variable and not resetting it back to its original value. This patch threads the desired distributed transaction context into GetNonHistoricCatalogSnapshot without modifying global state.
-
Committed by Adam Berlin
No longer rely on a global variable to determine the distributed snapshot context.
-
Committed by Ning Yu

A motion hazard is a deadlock between motions. A classic motion hazard in a join executor is formed by its inner and outer motions; it can be prevented by prefetching the inner plan, refer to motion_sanity_check() for details. A similar motion hazard can be formed by the outer motion and the join qual motion. A join executor fetches an outer tuple, filters it with the join qual, then repeats the process on all the outer tuples. When there are motions in both the outer plan and the join qual, the following state is possible:
0. processes A and B belong to the join slice, process C belongs to the outer slice, process D belongs to the JoinQual slice;
1. A has read the first outer tuple and is fetching tuples from D;
2. D is waiting for an ACK from B;
3. B is fetching the first outer tuple from C;
4. C is waiting for an ACK from A.
So a deadlock cycle is formed: A->D->B->C->A. We can prevent it by also prefetching the join qual. Reviewed-by: Jesse Zhang <jzhang@pivotal.io> Reviewed-by: Gang Xiong <gxiong@pivotal.io> Reviewed-by: Zhenghua Lyu <zlv@pivotal.io>
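The wait-for cycle described above (A waits on D, D on B, B on C, C on A) can be demonstrated with a simple cycle check over the wait-for graph. This is an illustration of why one edge must be broken by prefetching, not anything from the Greenplum executor; each process here is assumed to wait on at most one other.

```python
# Detect a cycle in a wait-for graph where each node waits on at most one
# other node (maps node -> node it is blocked on, or is absent if unblocked).
def has_cycle(waits_for):
    seen = set()
    stack = set()

    def visit(node):
        if node in stack:
            return True          # reached a node already on the current path
        if node in seen:
            return False
        seen.add(node)
        stack.add(node)
        nxt = waits_for.get(node)
        if nxt is not None and visit(nxt):
            return True
        stack.discard(node)
        return False

    return any(visit(n) for n in waits_for)

deadlock = {"A": "D", "D": "B", "B": "C", "C": "A"}
print(has_cycle(deadlock))                           # True: A->D->B->C->A
print(has_cycle({"A": "D", "D": "B", "B": "C"}))     # False: C->A edge broken
```

Prefetching the join qual removes the C->A wait edge (A no longer blocks C's ACK while holding only one outer tuple), which is exactly the edge whose removal makes the second call return False.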
-
Committed by David Sharp

It had been set to 1 for a test pipeline and we forgot to put it back. Co-authored-by: David Sharp <dsharp@pivotal.io> Co-authored-by: Jason Vigil <jvigil@pivotal.io>
-