- Jan 21, 2017 (2 commits)
-
-
Committed by Ashwin Agrawal
Originally, the return code of unlink() was ignored. Now, if unlink() fails for any reason, the error message is logged and CheckpointStats is adjusted accordingly. MirroredFlatFile_Drop() now returns the same return code as unlink(), and errno is preserved, so the correct error message can be logged by the caller. Signed-off-by: Xin Zhang <xzhang@pivotal.io>
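A minimal stand-alone sketch of the pattern the commit describes (the function name is illustrative, not the actual GPDB code): propagate unlink()'s return code, and save and restore errno around the logging call so the caller still sees the real cause of the failure.

```c
#include <assert.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Sketch: return unlink()'s return code and preserve errno so the
 * caller can log the correct error message. */
static int
drop_flat_file(const char *path)
{
    int rc = unlink(path);

    if (rc != 0)
    {
        int saved_errno = errno;    /* fprintf() may clobber errno */

        fprintf(stderr, "could not unlink \"%s\": %s\n",
                path, strerror(saved_errno));
        errno = saved_errno;        /* caller sees the original cause */
    }
    return rc;                      /* same return code as unlink() */
}
```

Saving errno before the log call matters because any intervening library call is allowed to overwrite it.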
-
Committed by Omer Arap
The legacy query planner could generate a plan that returns incorrect results for window queries that have a subquery containing a table valued function, where the function contains a non-correlated subquery. This commit fixes the issue. Root cause: before the fix, `params_in_rtable` was searching for `paramids` in the unchanged `root->parse->rtable` instead of the flattened `root->glob->finalrtable`, so the `paramids` were not found and the initPlans were marked unused. Signed-off-by: Bhuvnesh Chaudhary <bchaudhary@pivotal.io>
-
- Jan 20, 2017 (16 commits)
-
-
Committed by Daniel Gustafsson
Support for BINARY mode in COPY was removed some time ago, but most of the documentation on the reference page was kept; re-add the keyword to the syntax description to reflect that support has been enabled again.
-
Committed by alldefector
Binary COPY was previously disabled in Greenplum; this commit re-enables binary mode by incorporating the upstream code from PostgreSQL. Patch by GitHub user alldefector, with additional hacking by Daniel Gustafsson.
-
Committed by Daniel Gustafsson
For some reason the function_extensions test was removed from the regress schedule in the past, but it seems quite handy since it exercises extensions to upstream. Add a test for the recently caught PlannedStmt issue, fix the existing tests, and add the suite to the greenplum schedule.
-
Committed by Daniel Gustafsson
When hitting the assertion, it's highly useful to see the WARNING printed by elog() as context for debugging. Move the assertion to after the elog() call to ensure the warning is visible before erroring out.
-
Committed by Daniel Gustafsson
Commit 9cbd0c15, which was part of 8.3, removed the Query structure from the executor API and replaced it with PlannedStmt. This hunk seems to have gone missing in the merge, causing updates in non-volatile functions to hit an assertion failure. Below is a sample query which triggered the error:

```sql
CREATE TABLE bar (c int, d int);
CREATE FUNCTION func1_mod_int_stb(x int) RETURNS int AS $$
BEGIN
  UPDATE bar SET d = d + 1 WHERE c = $1;
  RETURN $1 + 1;
END
$$ LANGUAGE plpgsql STABLE MODIFIES SQL DATA;
SELECT * FROM func1_mod_int_stb(5) ORDER BY 1;
```
-
Committed by Heikki Linnakangas
Using a StringInfo just to copy a string is quite pointless. Simplify by changing OptVersion() to return a plain palloc'd string instead. This also fixes a memory management bug: OptVersion() is called like a normal Postgres C function, not as a subroutine of PplStmtOptimize. As a result, if OptVersion() throws a C++ exception, there is nothing to catch it, and it will cause the process to exit, bringing down the server. The gpdb::SiMakeStringInfo() wrapper used in OptVersion() would translate any ereport() (e.g. an out-of-memory error) into a C++ exception, but that's not what we want in this context. A plain makeStringInfo() would be correct here, and LibraryVersion() got that right, but for OptVersion() it's simpler to just return a plain string anyway.
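A stand-alone sketch of the simplification, assuming a placeholder version string and using malloc() in place of palloc() so it runs outside a backend: return a plain allocated string rather than routing a fixed string through a StringInfo.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Sketch (malloc stands in for palloc; the version string is a
 * placeholder): return a plain allocated copy of the version. */
static char *
opt_version(void)
{
    const char *version = "ORCA x.y.z";         /* placeholder value */
    char *result = malloc(strlen(version) + 1);

    if (result != NULL)
        strcpy(result, version);
    return result;                              /* caller frees it */
}
```

Because no StringInfo (and hence no ereport()-throwing wrapper) is involved, there is no C++ exception path to worry about in this function.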
-
Committed by Heikki Linnakangas
To avoid having to duplicate all the flags passed to CC for CXX as well, apply CFLAGS to the C++ compiler too. Not all of CFLAGS may be applicable to C++ code, however, so construct CXXFLAGS from CFLAGS by testing each flag to see if it also works with CXX. By default, this adds -Wall and a number of other flags to the C++ command line.
-
Committed by Heikki Linnakangas
Starting with ORCA version 2.2, there is a gpos/config.h file containing flags that describe the compile-time options used to build the ORCA library. Those flags affect binary compatibility, so it's important that, e.g., if ORCA was built with GPOS_DEBUG, the code that uses it (src/backend/gpopt in this case) is also built with GPOS_DEBUG. Use the new gpos/config.h for that, instead of deriving the flags ourselves and hoping that we reach the same conclusions as whoever built ORCA. This requires ORCA v2.2, so update releng.mk to download that version.
-
Committed by Heikki Linnakangas
This was copy-pasted from PGAC_PROG_CC_CFLAGS_OPT, but for the g++ compiler we need to set ac_cxx_werror rather than ac_c_werror. The point of this dance with Werror is to detect a flag that the compiler accepts but warns about, like:

```
cc1plus: warning: command line option ‘-Wmissing-prototypes’ is valid for C/ObjC but not for C++
```

We don't want to use such a flag.
-
Committed by Heikki Linnakangas
It gives compiler warnings:

```
/home/heikki/gpdb/orca-install/include/gpos/common/CDynamicPtrArray.inl:382:3: warning: nonnull argument ‘this’ compared to NULL [-Wnonnull-compare]
  if (NULL == this)
  ^~
```
-
Committed by Heikki Linnakangas
These warnings are not enabled by default, but you'll see them with -Wall.
-
Committed by Heikki Linnakangas
I don't know what this was used for, but it's dead now.
-
Committed by Adam Lee
Refactor the S3Url class. Add new config file parameters `version` and `verifycert`, and the location parameter `region`. Support endpoints other than AWS. For example:

```
's3://s3-us-west-2.amazonaws.com/bucket/prefix config=/path/to/config/file region=us-west-2'
's3://s3.amazonaws.com/bucket/prefix config=/path/to/config/file region=us-east-1'
's3://HOST_OF_ECS[:PORT]/bucket/prefix config=/path/to/config/file section=ECS'
```

Signed-off-by: Haozhou Wang <hawang@pivotal.io>
Signed-off-by: Peifeng Qiu <pqiu@pivotal.io>
Signed-off-by: Yuan Zhao <yuzhao@pivotal.io>
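To illustrate the kind of parsing the new location parameters imply, here is a hypothetical helper (not the actual S3Url code) that extracts the value of a `key=value` token, such as `region`, from a location string like the examples above:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch: find "key=value" in a location line and copy
 * the value into out. Returns 0 on success, -1 if the key is absent
 * or the value does not fit. */
static int
get_s3_option(const char *loc, const char *key, char *out, size_t outlen)
{
    size_t keylen = strlen(key);
    const char *p = loc;

    while ((p = strstr(p, key)) != NULL)
    {
        /* match whole tokens only: start of string or preceded by a
         * space, and immediately followed by '=' */
        if ((p == loc || p[-1] == ' ') && p[keylen] == '=')
        {
            const char *val = p + keylen + 1;
            size_t n = strcspn(val, " ");   /* value ends at space/EOL */

            if (n >= outlen)
                return -1;
            memcpy(out, val, n);
            out[n] = '\0';
            return 0;
        }
        p += keylen;
    }
    return -1;
}
```

The whole-token check keeps a key like `region` from matching inside a longer word elsewhere in the URL.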
-
Committed by David Sharp
Running ./configure and make in gpdb_src only builds Postgres, so build in gpAux instead. This requires sync_tools and Java environment settings. For sync_tools, don't tar up the output, since we don't need it. Signed-off-by: Jingyi Mei <jmei@pivotal.io>
-
Committed by Tom Meyer
-
- Jan 19, 2017 (11 commits)
-
-
Committed by Heikki Linnakangas
I'm seeing these warnings:

```
execProcnode.c: In function ‘ExecInitNode’:
../../../src/include/codegen/codegen_wrapper.h:56:62: warning: statement with no effect [-Wunused-value]
 #define CodeGeneratorManagerAccumulateExplainString(manager) 1
                                                              ^
execProcnode.c:803:5: note: in expansion of macro ‘CodeGeneratorManagerAccumulateExplainString’
     CodeGeneratorManagerAccumulateExplainString(CodegenManager);
     ^
../../../src/include/codegen/codegen_wrapper.h:54:64: warning: statement with no effect [-Wunused-value]
 #define CodeGeneratorManagerPrepareGeneratedFunctions(manager) 1
                                                                ^
execProcnode.c:807:5: note: in expansion of macro ‘CodeGeneratorManagerPrepareGeneratedFunctions’
     CodeGeneratorManagerPrepareGeneratedFunctions(CodegenManager);
     ^
```

To fix, define each dummy macro to return a value that's consistent with the actual function. For functions returning void, use "((void) 1)" to hint the compiler that it's OK to ignore the result. The void functions are what the warnings actually complained about, but fix them all for tidiness.
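A compact sketch of the fix (macro names shortened, and the bool-returning stub is a hypothetical addition for illustration): each no-op stub expands to a value consistent with the real function's return type, and void results are cast to void so -Wunused-value stays quiet when the stub is used as a statement.

```c
#include <assert.h>
#include <stdbool.h>

/* Stubs used when codegen is compiled out. The (void) cast marks the
 * "result" as intentionally unused, silencing -Wunused-value. */
#define CodegenAccumulateExplainString(mgr)   ((void) 1) /* real fn: void */
#define CodegenPrepareGeneratedFunctions(mgr) ((void) 1) /* real fn: void */
#define CodegenIsEnabled(mgr)                 (false)    /* hypothetical bool fn */

static int
use_stubs(void)
{
    /* These expand to statements with no effect, but warn-free now. */
    CodegenAccumulateExplainString(0);
    CodegenPrepareGeneratedFunctions(0);

    /* Value-returning stubs yield a type-consistent value. */
    return CodegenIsEnabled(0) ? 1 : 0;
}
```

A bare `1;` statement triggers -Wunused-value under -Wall; `((void) 1);` does not, which is exactly the distinction the commit exploits.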
-
Committed by David Sharp
Signed-off-by: Tom Meyer <tmeyer@pivotal.io>
-
Committed by Abhijit Subramanya
- Update the answer files for vacuum related tests.
- Update the SQL used to increase the age of the table in the mpp24168 test.
- Remove tests that use gp_filedump.
-
Committed by Haisheng Yuan
If ANALYZE ROOTPARTITION ALL is requested and there are no partitioned tables in the database, inform the user. Signed-off-by: Bhuvnesh Chaudhary <bchaudhary@pivotal.io>
-
Committed by Chris Hajas
-
Committed by Heikki Linnakangas
The .so files in the ORCA release tarballs are directly in a "lib" directory, so the OBJDIR_DEFAULT stuff, which searched in a directory like ".obj.linux-x86_64.opt", is obsolete. The XERCES, OPTIMIZER et al. environment variables were not set by anything; we rely on --with-libs and --with-includes being set correctly on the configure command line, so remove the environment variable handling.
-
Committed by Heikki Linnakangas
The output from the Concourse pipeline actually threw warnings saying that these directories don't exist. We extract the tarball directly into the "ext" directory, which is already in the include path, so these are unnecessary.
-
Committed by Heikki Linnakangas
I'm not sure what the purpose of this was, but we don't need it to install ORCA nowadays.
-
Committed by Abhijit Subramanya
In GPDB, VACUUM FULL on heap relations is divided into two steps. The first step moves the tuples of the relation to free up the last pages, and the second step truncates the relation. The two steps are performed in separate transactions, so we don't need to commit the transaction in `repair_frag()`.
-
Committed by Heikki Linnakangas
The issue was introduced in 5b0d517b. After that commit was backported, gpstop -a -M smart would result in a 2 minute delay before a SIGQUIT was sent to the walsender. This backport completes the patch originally intended by Heikki. Original Postgres commit 9c0e2b91: Fix walsender handling of postmaster shutdown, to not go into an endless loop. This bug was introduced by my patch to use the regular die/quickdie signal handlers in walsender processes. I tried to make walsender exit at the next CHECK_FOR_INTERRUPTS() by setting ProcDiePending, but that's not enough; you need to set InterruptPending too. On second thought, it was not a very good way to make walsender exit anyway, so use proc_exit(0) instead. Also, send a CommandComplete message before exiting; that's what we did before, and you get a nicer error message in the standby that way. Reported by Thom Brown.
-
- Jan 18, 2017 (11 commits)
-
-
Committed by Daniel Gustafsson
There was previously no support for CREATE OPERATOR in the binary upgrade code; add Oid pre-assignment support for operators.
-
Committed by Daniel Gustafsson
-
Committed by Daniel Gustafsson
-
Committed by Daniel Gustafsson
The binary-upgrade support was added with the pg_upgrade patch and allows the Oids in the old cluster to be pre-assigned to their respective objects during pg_upgrade. The main motivation for this refactoring is that we need to pre-assign all Oids before object creation during restore into the new cluster, since a restore can otherwise allocate an Oid which is later preassigned. This brings all pre-assignments into the head of the dumpfile so that they happen before most object creations (pg_dumpall objects are still written to the dumpfile before pg_dump has a chance). The main contributions of this patch include:

* All binary-upgrade methods are moved to binary_upgrade.{c|h} for pg_dump and binary_upgradeall.{c|h} for pg_dumpall. This greatly reduces the diff wrt upstream PostgreSQL
* Oid preassign calls are now loaded into the backup archive as TOC entries, which allows them to be sorted so that they are output before object creation
* Avoid usage of PQExpBuffers for trivial string construction where a fixed buffer on the stack is sufficient, or simply output the string directly
* Binary upgrade dumping in pg_dump is moved to a separate function which loops over the dumpable objects instead of being mixed in with the main code
* Various simplifications of the code and cleanups where possible
-
Committed by Daniel Gustafsson
GitHub auto-discovers the project license via the licensee ruby gem, which in turn bases its matching on the choosealicense.com database of license texts. The matching is failing for our license, so copy the exact text from choosealicense in an attempt to make the matching work. The diff pulls in the Appendix and changes whitespace on the license name line; no changes whatsoever are made to the main license text.
-
Committed by Peifeng Qiu
Signed-off-by: Haozhou Wang <hawang@pivotal.io>
-
Committed by Ashwin Agrawal
Entries in pg_extprotocol exist only for custom protocols, so add a pg_depend dependency only for external tables with custom protocols. Fixes #1547.
-
Committed by Tom Meyer
Signed-off-by: Jingyi Mei <jmei@pivotal.io>
Signed-off-by: Tom Meyer <tmeyer@pivotal.io>
-
Committed by Marbin Tan
* The pipelines were missing the new input from the pulse trigger gpdb_src_behave_tarball; this update fixes that issue.
* Restructured/refactored the pipeline for better visibility into each job.
-
- The `optimizer_analyze_root_partition` GUC is introduced. When optimizer_analyze_root_partition is enabled, a plain ANALYZE on a root partition table also collects stats on the root partition.
- The `optimizer_analyze_midlevel_partition` GUC is introduced. When optimizer_analyze_midlevel_partition is enabled, stats are also collected on midlevel partitions.

Expectation:

Case 1:
```sql
set optimizer_analyze_root_partition=off;
analyze tablename; -- Stats should only be collected for the leaf tables
analyze rootpartition tablename; -- Stats should only be collected for the root table
```

Case 2:
```sql
set optimizer_analyze_root_partition=on;
analyze tablename; -- Stats should be collected for the root and the leaf tables
analyze rootpartition tablename; -- Stats should only be collected for the root table
```

Signed-off-by: Omer Arap <oarap@pivotal.io>
-
An uninitialized variable caused the release and debug builds to behave differently for ANALYZE. This commit fixes the issue.
-