- 29 November 2018, 15 commits
-
-
Committed by Asim R P
This reverts commit dc906b6c. Discussion: https://groups.google.com/a/greenplum.org/d/msg/gpdb-dev/S2WL1FtJEJ0/6ngoCuQNCwAJ
-
Committed by Asim R P
This reverts commit b948676c. Discussion: https://groups.google.com/a/greenplum.org/d/msg/gpdb-dev/S2WL1FtJEJ0/6ngoCuQNCwAJ
-
Committed by Ning Yu
By loading this hook, CREATE TABLE will create tables with random numsegments, using the gp_debug_numsegments extension. It can be enabled via make like this:

    make installcheck EXTRA_REGRESS_OPTS=--prehook=randomize_create_table_default_numsegments

However, as the plans can differ with random numsegments, it is recommended to also ignore the plan diffs, so the make command becomes:

    make installcheck EXTRA_REGRESS_OPTS="--prehook=randomize_create_table_default_numsegments --ignore-plans"
-
Committed by Ning Yu
This is preparation for an upcoming random ICG pipeline job. In that pipeline job we will have tables created on different segments; so far the only way to hack this behavior is with the gp_debug_numsegments extension. We want to load and execute that extension for all the tests without modifying them directly, so a hook mechanism is needed.

Added a pg_regress argument --prehook to set a hook. A hook script should be put under the src/test/regress/{sql,input}/hooks/ directory, depending on whether it needs pg_regress to substitute the @@ tokens. At most one hook can be specified. It can be set via make like this:

    # suppose there is src/test/regress/sql/hooks/hookname.sql
    make installcheck EXTRA_REGRESS_OPTS=--prehook=hookname
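pg_regress is a C program, so a sketch of how such a long option could be wired up with getopt_long may help; it is a self-contained illustration, and the option value, messages, and exit code are assumptions rather than the actual patch:

    #include <getopt.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static char *prehook = NULL;

    int
    main(int argc, char *argv[])
    {
        static struct option long_options[] = {
            {"prehook", required_argument, NULL, 1},
            {NULL, 0, NULL, 0}
        };
        int c;

        while ((c = getopt_long(argc, argv, "", long_options, NULL)) != -1)
        {
            if (c == 1)
            {
                if (prehook != NULL)    /* enforce "at most one hook" */
                {
                    fprintf(stderr, "only one --prehook may be specified\n");
                    exit(2);
                }
                prehook = strdup(optarg);
            }
        }
        printf("prehook = %s\n", prehook ? prehook : "(none)");
        return 0;
    }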
-
Committed by Ning Yu
This is preparation for an upcoming random ICG pipeline job. In that pipeline job we will have tables created on different segments, so the plans might differ from the expected output. In atmsort.pm there is already an argument -gpd_ignore_plans to ignore the plan diffs, but there was no equivalent in pg_regress.

Added a pg_regress argument --ignore-plans to ignore plan diffs. It can be enabled via make like this:

    make installcheck EXTRA_REGRESS_OPTS=--ignore-plans
-
Committed by Rahul Iyer
-
Committed by Chuck Litzell
-
Committed by Mel Kiyama
-
Committed by Rahul Iyer
-
Committed by Heikki Linnakangas
PostgreSQL 9.4 brought us the 'pg_lsn' datatype, which is functionally equivalent. And we weren't actually using 'gpxlogloc' for anything, anyway.
-
Committed by Ashwin Agrawal
Given the online gpexpand work, the gp_num_contents_in_cluster GUC is unused. So, delete it from the code to avoid confusion and to eliminate this long argument required to start a postgres instance in gpdb.
-
Committed by Mel Kiyama
* docs - gpcopy: add the --parallelize-leaf-partitions option. Will be backported to 5X_STABLE.
* docs - gpcopy supports copying leaf partitions of a partitioned table.
* docs - gpcopy: fix typos.
-
Committed by Ekta Khanna
Co-authored-by: Shoaib Lari <slari@pivotal.io>
-
Committed by Ekta Khanna
Prior to this commit, when creating an index on a partitioned table without specifying a name, the index would only be created on the root. This commit ensures that the indexes get generated for the leaf partitions as well, just as they are when index names are specified for partitioned tables.

Co-authored-by: Shoaib Lari <slari@pivotal.io>
-
Committed by Ekta Khanna
Prior to this commit, we always did a lookup for the relationOid on the QE. The original intent of the code was to avoid the relationOid lookup on the QE if it had already been stashed in the IndexStmt. This commit updates the code accordingly, along with the `partition_locking` test expected file.

Co-authored-by: Shoaib Lari <slari@pivotal.io>
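A minimal sketch of the stash-then-skip pattern described here, assuming a relationOid field on the gpdb IndexStmt and a standard catalog lookup; it is illustrative, not the literal patch:

    #include "postgres.h"
    #include "catalog/namespace.h"
    #include "nodes/parsenodes.h"
    #include "storage/lock.h"

    /* The QD resolves the relation once and ships the OID in the
     * IndexStmt; the QE only falls back to a catalog lookup when
     * nothing was stashed. */
    static Oid
    index_stmt_relation_oid(IndexStmt *stmt)
    {
        if (OidIsValid(stmt->relationOid))
            return stmt->relationOid;       /* stashed by the dispatcher */

        /* not dispatched: resolve locally and stash for later callers */
        stmt->relationOid = RangeVarGetRelid(stmt->relation, NoLock, false);
        return stmt->relationOid;
    }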
-
- 28 November 2018, 8 commits
-
-
Committed by Daniel Gustafsson
Add a missing raise for an Exception instantiation which otherwise is a no-op, and properly wrap in an Exception a bare string that was being passed to raise, to avoid a TypeError.

Reviewed-by: Asim R P
-
Committed by Bhuvnesh Chaudhary
-
Committed by Pengzhou Tang
Recently, the ICW case 'select_into' failed intermittently with an error like:

    CREATE TABLE selinto_schema.tmp3 (a,b,c)
        AS SELECT oid,relname,relacl FROM pg_class WHERE relname like '%c%'; -- OK
    ERROR:  role "30973" does not exist

The RCA is that the 'lock' case must not run concurrently with 'select_into'. A mini-repro:

1. In the 'lock' test case:

    CREATE TABLE lock_tbl1 (a BIGINT);
    CREATE ROLE regress_rol_lock1;
    GRANT UPDATE ON TABLE lock_tbl1 TO regress_rol_lock1;
    select oid, relname, relacl from pg_class where relname like '%c%';
      oid   |  relname  |                        relacl
    --------+-----------+-------------------------------------------------------
     180367 | lock_tbl1 | {gpadmin=arwdDxt/gpadmin,regress_rol_lock1=w/gpadmin}

2. In the 'select_into' test case:

    CREATE TABLE selinto_schema.tmp3 (a,b,c)
        AS SELECT oid,relname,relacl FROM pg_class WHERE relname like '%c%';

3. In the 'lock' test case:

    DROP TABLE lock_tbl1;
    DROP ROLE regress_rol_lock1;

4. In the 'select_into' test case:

    select * from tmp3;
       a    |     b     |                      c
    --------+-----------+--------------------------------------------
     180367 | lock_tbl1 | {gpadmin=arwdDxt/gpadmin,180370=w/gpadmin}

    create table tmp4 as select * from tmp3;
    NOTICE:  Table doesn't have 'DISTRIBUTED BY' clause. Creating a NULL policy entry.
    ERROR:  role "180370" does not exist

In step 4, role 'regress_rol_lock1' has already been dropped, so when the ACL is parsed, role "180370" no longer exists; the error itself is reasonable. In upstream, test 'lock' doesn't run concurrently with 'select_into'. I also noticed that we had two 'lock' entries within parallel_schedule, so remove the first one to match upstream and avoid the intermittent failure of 'select_into'.
-
Committed by Ashwin Agrawal
This reverts commits 647cf58c and f8d9b525. Commit 647cf58c introduced a wrapper around PG_MODULE_MAGIC, which is erroneous, as described in https://groups.google.com/a/greenplum.org/forum/?utm_source=digest&utm_medium=email#!topic/gpdb-dev/YVuYL1BK-QQ. Also, the test segfaults and does not seem worthwhile.
-
Committed by Lisa Owen
-
Committed by Ashwin Agrawal
The code fails to compile if WAL_DEBUG is enabled, as rm_desc is called with 3 arguments (as in upstream) instead of the 2 arguments it takes in gpdb. Fix this by copying the header, then the rmgr-specific data, and passing the result to `rm_desc()`. This logging is a very helpful debugging tool for xlog-related issues.

I wished to make `rm_desc()` take 3 arguments as in upstream, but functions like the btree and checkpoint desc routines currently depend on data from the record header, so we can't get away with passing just the rmgr data to the desc routines. This can be revisited with the 9.5 merge anyway.

Also, modified the xlog INSERT message to match the REDO message style. It's very confusing to read "INSERT @" when it actually displays the end location rather than the start.
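A self-contained toy model of the fix, with hypothetical types: the point is simply that the header and the rmgr payload end up contiguous, so a desc routine that expects both behind one pointer reads them as they appear in the WAL:

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical layout: fixed header followed by variable payload. */
    typedef struct RecordHeader
    {
        unsigned int  tot_len;   /* header + payload, in bytes */
        unsigned char info;
    } RecordHeader;

    /* Build one contiguous buffer holding header + payload. */
    static char *
    assemble_record(const RecordHeader *hdr, const char *payload,
                    size_t payload_len)
    {
        char *buf = malloc(sizeof(RecordHeader) + payload_len);

        if (buf == NULL)
            return NULL;
        memcpy(buf, hdr, sizeof(RecordHeader));                   /* header first */
        memcpy(buf + sizeof(RecordHeader), payload, payload_len); /* then data */
        return buf;
    }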
-
Committed by Ivan Leskin
Commit 647cf58c introduced a wrapper around PG_MODULE_MAGIC, which is erroneous, as described in https://groups.google.com/a/greenplum.org/forum/?utm_source=digest&utm_medium=email#!topic/gpdb-dev/YVuYL1BK-QQ. The correct wrapper around that definition is 'UNIT_TESTING'. This makes it possible to include zstd_compression.c in test/zstd_compression_test.c, which is necessary to build the unit test.

Co-authored-by: Heikki Linnakangas <hlinnakangas@pivotal.io>
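A minimal sketch of the guard this commit describes:

    #include "postgres.h"
    #include "fmgr.h"

    /* When the unit test #includes zstd_compression.c directly, it
     * defines UNIT_TESTING so that a second PG_MODULE_MAGIC block is
     * not emitted into the test binary. */
    #ifndef UNIT_TESTING
    PG_MODULE_MAGIC;
    #endif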
-
Committed by Lav Jain
Also sync centos6 and centos7 dockerfiles.
-
- 27 November 2018, 10 commits
-
-
Committed by Daniel Gustafsson
This moves the HyperLogLog license from THIRDPARTY into the correct NOTICE file and also fixes a few related typos and minor differences with the upstream license files.

Reviewed-by: Heikki Linnakangas
-
Committed by Heikki Linnakangas
In PostgreSQL, a PathKey represents sort ordering, but we have been using it in GPDB to also represent the distribution keys of hash-distributed data in the planner (i.e. the keys in DISTRIBUTED BY of a table, but also when data is redistributed by some other key on the fly). That's been convenient, and there's some precedent for it, since PostgreSQL also uses PathKey to represent GROUP BY columns, which is quite similar to DISTRIBUTED BY.

However, there are some differences. The opfamily, strategy and nulls_first fields in PathKey are not applicable to distribution keys. Using the same struct to represent ordering and hash distribution is sometimes convenient, for example when we need to test whether the sort order or grouping is "compatible" with the distribution. But at other times, it's confusing. To clarify that, introduce a new DistributionKey struct to represent a hashed distribution.

While we're at it, simplify the representation of HashedOJ locus types by including a List of EquivalenceClasses in DistributionKey, rather than just one EC like a PathKey has. CdbPathLocus now has only one 'distkey' list that is used for both Hashed and HashedOJ loci, and it's a list of DistributionKeys. Each DistributionKey in turn can contain multiple EquivalenceClasses.

Looking ahead, I'm working on a patch to generalize the "cdbhash" mechanism, so that we'd use the normal Postgres hash opclasses for distribution keys, instead of hard-coding support for specific datatypes. With that, the hash operator class or family will be an important part of the distribution key, in addition to the datatype. The plan is to store that in DistributionKey as well.

Reviewed-by: Melanie Plageman <mplageman@pivotal.io>
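A sketch of the new node as described above; the field name is illustrative, not copied from the tree:

    #include "postgres.h"
    #include "nodes/pg_list.h"

    /* Unlike PathKey, which points at a single EquivalenceClass and
     * carries opfamily/strategy/nulls_first, a DistributionKey holds a
     * whole list of ECs; that is what lets one 'distkey' list serve
     * both the Hashed and HashedOJ loci. */
    typedef struct DistributionKey
    {
        NodeTag     type;
        List       *dk_eclasses;    /* list of EquivalenceClass pointers */
    } DistributionKey;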
-
Committed by xiong-gang
EvalPlanQual materializes the slot into a heap tuple, after which PRIVATE_tts_values points to freed memory. We need to reset PRIVATE_tts_nvalid in ExecMaterializeSlot to prevent the following ExecFilterJunk from referencing the dangling pointer.
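A self-contained toy model of the hazard and the fix, with simplified names (this is not executor code): a slot caches pointers into a backing buffer that materialization may free, and zeroing the nvalid counter invalidates that cache so the next reader re-deforms from the slot-owned copy instead of chasing dangling pointers.

    #include <stdlib.h>
    #include <string.h>

    typedef struct ToySlot
    {
        char   *values[8];      /* cached pointers into external storage */
        int     nvalid;         /* number of trustworthy cache entries */
        char   *materialized;   /* slot-owned copy of the data */
    } ToySlot;

    static void
    toy_slot_materialize(ToySlot *slot, const char *src, size_t len)
    {
        slot->materialized = malloc(len);
        if (slot->materialized != NULL)
            memcpy(slot->materialized, src, len);
        slot->nvalid = 0;       /* the fix: drop the per-attribute cache */
    }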
-
Committed by Zhenghua Lyu
Previously the reshuffle node's numsegments was always set to the cluster size. Now that we have the flexible gang & dispatch API, we should correct the numsegments field of the reshuffle node by setting it to its lefttree's flow->numsegments.

Co-authored-by: Shujie Zhang <shzhang@pivotal.io>
Co-authored-by: Zhenghua Lyu <zlv@pivotal.io>
-
Committed by Zhenghua Lyu
When we expand a partially replicated table via `alter table t expand table`, internally we use the split-update framework to implement the expansion. That framework was originally designed for hash-distributed tables. For a replicated table we do not need the reshuffle_expr (filter condition) at all, because all data in a replicated table has to be transferred.
-
Committed by Kalen Krempely
This allows the data to be visible on the segments. The segments should not interpret any transaction id from the master during or after upgrade.

Co-authored-by: Asim R P <apraveen@pivotal.io>
Co-authored-by: Kalen Krempely <kkrempely@pivotal.io>
-
Committed by Kalen Krempely
Without this commit, auxiliary tables such as toast and aoseg are skipped during vacuum when run in utility mode (such as during pg_upgrade).

Co-authored-by: Asim R P <apraveen@pivotal.io>
Co-authored-by: Kalen Krempely <kkrempely@pivotal.io>
-
Committed by Ashwin Agrawal
-
Committed by Ivan Leskin
Add a unit test (and its infrastructure) for 'zstd_compress()'. The test checks whether 'zstd_compress()' returns correct output when compression fails (the compressed data is larger than the uncompressed input). To do that, 'ZSTD_compressCCtx()' is mocked to always return 'ZSTD_error_dstSize_tooSmall'. Also, an 'ifndef' is added around 'PG_MODULE_MAGIC' in zstd_compression.c.
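A hypothetical sketch of such a mock: the file under test is compiled against this definition instead of being linked with libzstd, so every compression call reports "destination buffer too small". zstd encodes errors as (size_t)-code, which ZSTD_isError()/ZSTD_getErrorCode() then decode.

    #include <stddef.h>
    #include <zstd.h>
    #include <zstd_errors.h>

    size_t
    ZSTD_compressCCtx(ZSTD_CCtx *cctx, void *dst, size_t dstCapacity,
                      const void *src, size_t srcSize, int compressionLevel)
    {
        (void) cctx; (void) dst; (void) dstCapacity;
        (void) src; (void) srcSize; (void) compressionLevel;
        return (size_t) -ZSTD_error_dstSize_tooSmall;   /* forced failure */
    }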
-
Committed by Ivan Leskin
When ZSTD compression is used for AO/CO tables, inserting data may cause the error "Destination buffer is too small". This happens when the compressed data is larger than the uncompressed input data. This commit adds handling for this situation: do not change the output buffer, and return a used size equal to the source size. The caller (e.g., 'cdbappendonlystoragewrite.c') is able to handle such output; in this case, it copies the data from input to output itself.
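A sketch of the described fallback, assuming a simplified signature for the compressor wrapper; the zstd calls themselves are the real API:

    #include <zstd.h>
    #include <zstd_errors.h>

    /* If zstd reports the destination as too small (the input did not
     * compress), report a used size equal to the source size and leave
     * dst untouched; the caller then stores the block uncompressed. */
    static size_t
    compress_or_passthrough(ZSTD_CCtx *cctx, void *dst, size_t dst_capacity,
                            const void *src, size_t src_size, int level)
    {
        size_t ret = ZSTD_compressCCtx(cctx, dst, dst_capacity,
                                       src, src_size, level);

        if (ZSTD_isError(ret) &&
            ZSTD_getErrorCode(ret) == ZSTD_error_dstSize_tooSmall)
            return src_size;    /* "used == source size" marks it uncompressed */

        return ret;             /* compressed size, or some other zstd error */
    }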
-
- 26 November 2018, 4 commits
-
-
Committed by Ning Yu
In CREATE TABLE we used to derive numsegments from the LIKE, INHERITS and DISTRIBUTED BY clauses. However, we do not want end users to create partially distributed tables, so change the logic to always create tables with DEFAULT as numsegments. We still allow developers to hack the DEFAULT numsegments with the gp_debug_numsegments extension.
-
Committed by Daniel Gustafsson
Commit 226e8867 removed oidcasted_pk and max_content from the SQL query, but didn't remove the arguments. While they don't cause an issue, as they are simply unused, remove them to avoid confusing readers.

Reviewed-by: Heikki Linnakangas
-
- 25 November 2018, 3 commits
-
-
Committed by Daniel Gustafsson
Commit 17f9b7f070dbe17b2844a8b4dd428 in the pgweb repository removed the /static/ portion from all doc URLs, leaving a redirect in place. To avoid incurring a needless redirect, remove the /static/ part from the links to the PostgreSQL documentation. The /static/ URLs stem from a time when there were interactive docs with comment functionality; those docs were removed a very long time ago, but the static differentiator was left in place until now.

Reviewed-by: Mel Kiyama
-
Committed by Heikki Linnakangas
With OpenSSL 1.1.0 and above, CRYPTO_set_id_callback and CRYPTO_set_locking_callback are no-op macros, which rendered id_function() and locking_function() unused and produced compiler warnings.

Reviewed-by: Paul Guo <pguo@pivotal.io>
Reviewed-by: Daniel Gustafsson <dgustafsson@pivotal.io>
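A sketch of the conventional version guard for this; the details here are assumptions, not the literal patch:

    #include <pthread.h>
    #include <openssl/crypto.h>

    /* Only define and register the callbacks on OpenSSL < 1.1.0, where
     * the CRYPTO_set_*_callback macros still do something; on 1.1.0+
     * the whole block compiles away, and with it the unused-function
     * warnings. */
    #if OPENSSL_VERSION_NUMBER < 0x10100000L
    static pthread_mutex_t *ssl_locks;

    static unsigned long
    id_function(void)
    {
        return (unsigned long) pthread_self();
    }

    static void
    locking_function(int mode, int n, const char *file, int line)
    {
        (void) file; (void) line;
        if (mode & CRYPTO_LOCK)
            pthread_mutex_lock(&ssl_locks[n]);
        else
            pthread_mutex_unlock(&ssl_locks[n]);
    }

    void
    init_ssl_callbacks(void)
    {
        CRYPTO_set_id_callback(id_function);
        CRYPTO_set_locking_callback(locking_function);
    }
    #endif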
-
Committed by Heikki Linnakangas
I was getting these compiler warnings:

    src/s3log.cpp: In function 'void _LogMessage(const char*, __va_list_tag*)':
    src/s3log.cpp:17:42: warning: function might be possible candidate for 'gnu_printf' format attribute [-Wsuggest-attribute=format]
         vsnprintf(buf, sizeof(buf), fmt, args);
    src/s3log.cpp: In function 'void _send_to_remote(const char*, __va_list_tag*)':
    src/s3log.cpp:27:55: warning: function might be possible candidate for 'gnu_printf' format attribute [-Wsuggest-attribute=format]
         size_t len = vsnprintf(buf, sizeof(buf), fmt, args);
    src/s3log.cpp: In function 'void LogMessage(LOGLEVEL, const char*, ...)':
    src/s3log.cpp:41:39: warning: function might be possible candidate for 'gnu_printf' format attribute [-Wsuggest-attribute=format]
         vfprintf(stderr, fmt, args);

Those are good suggestions. I couldn't figure out the correct way to mark the _LogMessage() and _send_to_remote() local functions, so I decided to inline them into the caller, LogMessage(), instead. They were almost one-liners, and LogMessage() is still very small too, so I don't think there's any significant loss of readability.

A few format strings in debugging messages were treating pthread_self() as a pointer, while others were treating it as the wrong kind of integer. Harmonize by casting it to "uint64_t" and using PRIX64 as the format string. This isn't totally portable: pthread_t can be an arithmetic type or a struct, and casting a struct to an unsigned int won't work. In principle, that was a problem before this patch already, but now you should get a compiler error if you try to compile on a platform where pthread_t is not an arithmetic type. I think that's better than silent type confusion.

Reviewed-by: Paul Guo <pguo@pivotal.io>
Reviewed-by: Daniel Gustafsson <dgustafsson@pivotal.io>
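A compact sketch of both fixes, with a simplified signature (the real LogMessage also takes a LOGLEVEL argument); as the commit message notes, the PRIX64 cast assumes pthread_t is an arithmetic type:

    #include <inttypes.h>
    #include <pthread.h>
    #include <stdarg.h>
    #include <stdio.h>

    /* The GCC format attribute lets -Wformat check callers' format
     * strings against their arguments. */
    static void LogMessage(const char *fmt, ...)
        __attribute__((format(printf, 1, 2)));

    static void
    LogMessage(const char *fmt, ...)
    {
        va_list args;

        va_start(args, fmt);
        vfprintf(stderr, fmt, args);    /* the former helpers, inlined */
        va_end(args);
    }

    int
    main(void)
    {
        LogMessage("thread %" PRIX64 "\n", (uint64_t) pthread_self());
        return 0;
    }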
-