- 20 Jan 2017, 6 commits
-
-
Committed by Heikki Linnakangas
These warnings are not enabled by default, but you'll see them with -Wall.
-
Committed by Heikki Linnakangas
I don't know what this was used for, but it's dead now.
-
Committed by Adam Lee
Refactor the S3Url class. Add new config file parameters `version` and `verifycert`, and a location parameter `region`. Support endpoints other than AWS. For example:
's3://s3-us-west-2.amazonaws.com/bucket/prefix config=/path/to/config/file region=us-west-2'
's3://s3.amazonaws.com/bucket/prefix config=/path/to/config/file region=us-east-1'
's3://HOST_OF_ECS[:PORT]/bucket/prefix config=/path/to/config/file section=ECS'
Signed-off-by: Haozhou Wang <hawang@pivotal.io>
Signed-off-by: Peifeng Qiu <pqiu@pivotal.io>
Signed-off-by: Yuan Zhao <yuzhao@pivotal.io>
-
Committed by David Sharp
Running ./configure and make in gpdb_src only builds Postgres. Build in gpAux instead. This requires sync_tools and Java environment settings. For sync_tools, don't tar up the output, since we don't need it.
Signed-off-by: Jingyi Mei <jmei@pivotal.io>
-
Committed by Tom Meyer
-
- 19 Jan 2017, 11 commits
-
-
Committed by Heikki Linnakangas
I'm seeing these warnings:

    execProcnode.c: In function 'ExecInitNode':
    ../../../src/include/codegen/codegen_wrapper.h:56:62: warning: statement with no effect [-Wunused-value]
     #define CodeGeneratorManagerAccumulateExplainString(manager) 1
    execProcnode.c:803:5: note: in expansion of macro 'CodeGeneratorManagerAccumulateExplainString'
     CodeGeneratorManagerAccumulateExplainString(CodegenManager);
    ../../../src/include/codegen/codegen_wrapper.h:54:64: warning: statement with no effect [-Wunused-value]
     #define CodeGeneratorManagerPrepareGeneratedFunctions(manager) 1
    execProcnode.c:807:5: note: in expansion of macro 'CodeGeneratorManagerPrepareGeneratedFunctions'
     CodeGeneratorManagerPrepareGeneratedFunctions(CodegenManager);

To fix, define each dummy macro to return a value that's consistent with the actual function. For functions returning void, use "((void) 1)" to hint to the compiler that it's OK to ignore the result. The void functions are what the warnings actually complained about, but fix them all for tidiness.
-
Committed by David Sharp
Signed-off-by: Tom Meyer <tmeyer@pivotal.io>
-
Committed by Abhijit Subramanya
- Update the answer files for vacuum related tests.
- Update the SQL used to increase the age of the table in the mpp24168 test.
- Remove tests that use gp_filedump.
-
Committed by Haisheng Yuan
If ANALYZE ROOTPARTITION ALL is requested and there are no partitioned tables in the database, inform the user.
Signed-off-by: Bhunvesh Chaudhary <bchaudhary@pivotal.io>
-
Committed by Chris Hajas
-
Committed by Heikki Linnakangas
The .so files in the ORCA release tarballs are directly in a "lib" directory, so the OBJDIR_DEFAULT stuff, which searched in a directory like ".obj.linux-x86_64.opt", is obsolete. The XERCES, OPTIMIZER et al. environment variables were not set by anything. We rely on --with-libs and --with-includes being set correctly on the configure command line, so remove the env variable stuff.
-
Committed by Heikki Linnakangas
The output from the Concourse pipeline actually threw warnings, saying that these directories don't exist. We extract the tarball directly into the "ext" directory, which is in the include path already, so these are unnecessary.
-
Committed by Heikki Linnakangas
I'm not sure what the purpose of this was, but we don't need it to install ORCA nowadays.
-
Committed by Abhijit Subramanya
In GPDB, vacuum full on heap relations is divided into two steps. The first step moves the tuples of the relation to free the last pages. The second step truncates the relation. The two steps are performed in separate transactions, so we don't need to commit the transaction in `repair_frag()`.
-
Committed by Heikki Linnakangas
The issue was introduced in 5b0d517b. After that commit was backported, gpstop -a -M smart would result in a 2 minute delay before a SIGQUIT was sent to the walsender. This backport completes the patch originally intended by Heikki.

Original Postgres commit 9c0e2b91: Fix walsender handling of postmaster shutdown, to not go into an endless loop. This bug was introduced by my patch to use the regular die/quickdie signal handlers in walsender processes. I tried to make walsender exit at the next CHECK_FOR_INTERRUPTS() by setting ProcDiePending, but that's not enough; you need to set InterruptPending too. On second thought, it was not a very good way to make walsender exit anyway, so use proc_exit(0) instead. Also, send a CommandComplete message before exiting; that's what we did before, and you get a nicer error message in the standby that way. Reported by Thom Brown.
-
- 18 Jan 2017, 18 commits
-
-
Committed by Daniel Gustafsson
There was previously no support for CREATE OPERATOR in the binary upgrade code; add Oid pre-assignment support for operators.
-
Committed by Daniel Gustafsson
-
Committed by Daniel Gustafsson
-
Committed by Daniel Gustafsson
The binary-upgrade support was added with the pg_upgrade patch and allows the Oids in the old cluster to be pre-assigned to their respective objects during pg_upgrade. The main motivation for this refactoring is that we need to pre-assign all Oids before object creation during restore into the new cluster, since a restore can otherwise allocate an Oid which is later preassigned. This brings all pre-assignments into the head of the dumpfile such that they happen before most object creations (pg_dumpall objects are still written to the dumpfile before pg_dump has a chance). The main contributions of this patch include:
* All binary-upgrade methods are moved to binary_upgrade.{c|h} for pg_dump and binary_upgradeall.{c|h} for pg_dumpall. This greatly reduces the diff wrt upstream PostgreSQL
* Oid preassign calls are now loaded into the backup archive as TOC entries, which allows them to be sorted to be output before object creation
* Avoid usage of PQExpBuffers for trivial string construction where a fixed buffer on the stack is sufficient, or simply output the string directly
* Binary upgrade dumping in pg_dump is moved to a separate function which loops over the dumpable objects instead of being mixed in with the main code
* Various simplifications of the code and cleanups where possible
-
Committed by Daniel Gustafsson
GitHub auto-discovers the project license via the licensee ruby gem, which in turn bases its matching on the choosealicense.com database of license texts. The matching is failing for our license, so copy the exact text from choosealicense in an attempt to make the matching work. The diff pulls in the Appendix and changes whitespace on the license name line; no changes whatsoever are made to the main license text.
-
Committed by Peifeng Qiu
Signed-off-by: Haozhou Wang <hawang@pivotal.io>
-
Committed by Ashwin Agrawal
An entry exists in pg_extprotocol only for custom protocols. So, add a dependency in pg_depend only for external tables with custom protocols. Fixes #1547.
-
Committed by Tom Meyer
Signed-off-by: Jingyi Mei <jmei@pivotal.io>
Signed-off-by: Tom Meyer <tmeyer@pivotal.io>
-
Committed by Marbin Tan
* Pipelines were missing the new input from the pulse trigger gpdb_src_behave_tarball. This is an update to fix that issue.
* Restructured/refactored the pipeline for better visibility into each job.
-
- `optimizer_analyze_root_partition` GUC functionality is introduced. When optimizer_analyze_root_partition is enabled, stats collection on root partitions is enabled when a plain analyze is run on a root partition table.
- `optimizer_analyze_midlevel_partition` GUC functionality is introduced. When optimizer_analyze_midlevel_partition is enabled, stats collection on midlevel partitions is enabled.

Expectation:

Case 1:
```sql
set optimizer_analyze_root_partition=off;
analyze tablename;               -- Stats should only be collected for the leaf tables
analyze rootpartition tablename; -- Stats should only be collected for the root table
```

Case 2:
```sql
set optimizer_analyze_root_partition=on;
analyze tablename;               -- Stats should be collected for the root and the leaf tables
analyze rootpartition tablename; -- Stats should only be collected for the root table
```
Signed-off-by: Omer Arap <oarap@pivotal.io>
-
An uninitialized variable caused the release and debug builds to behave differently for analyze. This commit fixes the issue.
-
Committed by Marbin Tan
gptransfer used a SQL command with STRING_AGG that combined all the attribute names and generated the distribution key. However, this resulted in an issue when there was more than one distribution key: the combined distribution key was escaped incorrectly. Instead, we now escape each attribute name separately via Python.
Signed-off-by: Larry Hamel <lhamel@pivotal.io>
-
Committed by Dhanashree Kashid
With PostgreSQL 8.3, there's a new concept called "operator families". An operator class is now part of an operator family, which can contain cross-datatype operators that are "compatible" with each other. ORCA doesn't know anything about that. This commit updates the Translator files to refer to OpFamily instead of OpClasses. ORCA still doesn't take advantage of this, but at least we are using operator families in operator classes' stead to make indexes work.
Signed-off-by: Haisheng Yuan <hyuan@pivotal.io>
-
Committed by Ashwin Agrawal
This addresses the GPDB_83_MERGE_FIXME comment in xact.c:1081. GPDB doesn't need the `haveNonTemp` check, since GPDB doesn't allow data loss. GPDB doesn't support the asynchronous commits from upstream, because they might cause data inconsistency across segments in a cluster. We disable support for async commit using the macro IMPLEMENT_ASYNC_COMMIT, and make the user GUC `synchronous_commit` DEFUNCT_OPTIONS, so that its setting is ignored and a WARNING is generated. The original check for temp tables in smgrGetPendingFileSysWork() is not valid in GPDB, since GPDB temp tables use shared buffers to support access across slices. Once GPDB decides to support async commit, this macro can be removed.
Signed-off-by: Xin Zhang <xzhang@pivotal.io>
-
Committed by Asim R P
-
Committed by Asim R P
This allows writing only one .source file for a UAO test. Create the .source file in an "input/uao*/" directory. Place the answer file, also named as .source, into the corresponding "output/uao*/" directory. The .source files must contain the following header:

    create schema <filename_prefix>@orientation@;
    set search_path="$user",<filename_prefix>@orientation@,public;
    SET gp_default_storage_options='orientation=@orientation@';

Replace "<filename_prefix>" with the filename excluding the ".source" extension. Generated files are named <filename_prefix>_row.sql and <filename_prefix>_column.sql. Add the generated filenames to schedule files and run pg_regress as usual. A new option "--ao-dir" is added to pg_regress. To enable row/column test generation, set it to the directory name containing generic UAO .source tests. The directory should be created under src/test/regress/input.
-
Committed by Olaf Flebbe
-
Committed by laixiong
-
- 17 Jan 2017, 5 commits
-
-
Committed by Marbin Tan
-
Committed by Marbin Tan
-
Committed by Marbin Tan
-
Committed by Marbin Tan
* Enable running behave tests with the installed binary instead of the source code.
-
Committed by Marbin Tan
* This is an effort to enable us to run behave tests without relying on the source code, and instead use the GPDB installed on the system, since the behave tests are integration tests.
-