- 19 Jun 2018, 6 commits
-
-
Committed by Omer Arap
In the previous generation of analyze, GPDB provided features to merge statistics such as MCVs (most common values) and histograms for the root or mid-level partitions from the leaf partitions' statistics. This commit imports the utility functions for merging MCVs and histograms and modifies them based on the needs of the current version.
Signed-off-by: Bhunvesh Chaudhary <bchaudhary@pivotal.io>
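Merging leaf-partition MCV lists into a root-level list amounts to a weighted frequency merge: scale each leaf's fractional frequencies by its row count, sum per value, and keep the top entries. The helper below is a hypothetical sketch of that idea only; the function name and data shapes are assumptions, not the imported GPDB utilities.

```python
from collections import defaultdict

def merge_mcvs(leaf_stats, max_mcvs=25):
    """Merge per-leaf MCV lists into a root-level MCV list.

    leaf_stats: list of (row_count, {value: frequency}) pairs, one per leaf,
    where each frequency is a fraction of that leaf's rows. Returns up to
    max_mcvs (value, frequency) pairs relative to the total row count.
    """
    total_rows = sum(rows for rows, _ in leaf_stats)
    counts = defaultdict(float)
    for rows, mcvs in leaf_stats:
        for value, freq in mcvs.items():
            counts[value] += freq * rows  # convert fractions to absolute counts
    # Most frequent values first, re-expressed as fractions of the whole table.
    merged = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
    return [(v, c / total_rows) for v, c in merged[:max_mcvs]]
```

A value common in one large leaf can outrank a value common in several small ones, which is why the merge must weight by row counts rather than averaging frequencies.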
-
Committed by Omer Arap
-
Committed by Abhijit Subramanya
- Port the hyperloglog extension into the contrib directory and make corresponding makefile changes to get it to compile.
- Also modify initdb to install the HLL extension as part of gpinitsystem.
Signed-off-by: Omer Arap <oarap@pivotal.io>
Signed-off-by: Ekta Khanna <ekhanna@pivotal.io>
-
Committed by Adam Lee
The processed variable should not be reset while looping over all partitions.
-
Committed by Adam Lee
BeginCopy() returns a brand new CopyState but ignores the value of skip_ext_partition set after it. It's a simple boolean on struct CopyStmt; there is no need to wrap it in options.
-
Committed by Adam Lee
To have a clean `git status` output.
-
- 18 Jun 2018, 1 commit
-
-
Committed by Mel Kiyama
* docs - gpbackup/gprestore new functionality: new gpbackup option --jobs to back up tables in parallel; gprestore --include-table* options support restoring views and sequences.
* docs - gpbackup/gprestore: fixed typos; updated backup/restore of sequences and views.
* docs - gpbackup/gprestore: clarified information on dependent objects.
* docs - gpbackup/gprestore: updated information on locking/quiescent state.
* docs - gpbackup/gprestore: clarified connection information in the --jobs option.
-
- 16 Jun 2018, 1 commit
-
-
Committed by Ashwin Agrawal
For CO tables, storageAttributes.compress only conveys whether block compression should be applied. RLE is performed as stream compression within the block, so whether storageAttributes.compress is true or false does not relate to RLE at all. With rle_type compression, storageAttributes.compress is true for compression levels > 1, where block compression is performed along with stream compression; for compression level 1 it is always false, as no block compression is applied. Since RLE does not relate to storageAttributes.compress, there is no reason to touch it based on rle_type compression.

The problem also manifests because the datumstream layer uses the AppendOnlyStorageAttributes in DatumStreamWrite (`acc->ao_attr.compress`) to decide the block type, whereas the cdb storage layer functions use the AppendOnlyStorageAttributes from AppendOnlyStorageWrite (`idesc->ds[i]->ao_write->storageAttributes.compress`). Because of this difference, changing just one of them, and unnecessarily at that, is bound to cause issues during insert. So this commit removes the unnecessary and incorrect update to AppendOnlyStorageAttributes. The test case showcases the failing scenario without the patch.
-
- 15 Jun 2018, 2 commits
-
-
Committed by Divya Bhargov
* Rewrite the circular buffer as a Python list. Since we end up returning a List object, we may as well keep it as a List object from the start.
Co-authored-by: Daniel Gustafsson <dgustafsson@pivotal.io>
Co-authored-by: Divya Bhargov <dbhargov@pivotal.io>
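The list-backed approach described above can be sketched as follows; the class name and API here are hypothetical illustrations, not the actual project code. The point is that a plain list that drops its oldest element once full already yields the ordered List the callers want, with no index arithmetic or final conversion step.

```python
class TailBuffer:
    """Keep the last `capacity` items appended, in insertion order."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []          # plain list instead of a ring with head/tail indices

    def append(self, item):
        self.items.append(item)
        if len(self.items) > self.capacity:
            self.items.pop(0)    # drop the oldest entry

    def get(self):
        return self.items        # already a list; nothing to unwrap or rotate
```

For example, appending 0 through 4 to a capacity-3 buffer leaves `[2, 3, 4]`. For large capacities `collections.deque(maxlen=...)` would avoid the O(n) `pop(0)`, but a list keeps the return type exactly what callers expect.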
-
Committed by Lisa Owen
* docs - resource group cpuset feature
* alter and create resource group SGML ref page updates
* gp_resource_group_cpu_limit applies to both CPU allocation modes
* add cpuset usage considerations
* restore ... fail, not backup
* misc edits, move note
-
- 14 Jun 2018, 4 commits
-
-
Committed by Ming LI
The hard-coded flag is not correct for all cases.
-
Committed by Nadeem Ghani
- Add mirrors with and without a standby, and ensure that the host assignment is identical between the two.
- Add mirrors, then kill one, and ensure that gprecoverseg operates correctly on the newly added mirror.
Co-authored-by: Nadeem Ghani <nghani@pivotal.io>
Co-authored-by: Jacob Champion <pchampion@pivotal.io>
-
Committed by Mel Kiyama
-
Committed by Mel Kiyama
* docs - update GUC optimizer_analyze_root_partition: change default to on; update description
* docs - optimizer_analyze_root_partition: fix typo
-
- 13 Jun 2018, 1 commit
-
-
Committed by Omer Arap
No hash was created for the new numeric format when it is a `NumericShort`. This commit resolves the issue.
-
- 12 Jun 2018, 8 commits
-
-
Committed by Jim Doty
For a while there were several jobs behind the nightly trigger, which necessitated some logic to include the nightly-trigger resource if any of a number of conditions were met. At the time of this commit, the only job using the resource is an AIX job, so the inclusion of the nightly-trigger resource can simply match the condition that includes that job. This eliminates the "resource not used" error that can be seen when setting up a development version of the pipeline that does not include the AIX job.
Authored-by: Jim Doty <jdoty@pivotal.io>
-
Committed by David Yozie
-
Committed by Jim Doty
When cloning a fresh copy of GPDB, running through the documented make process, and then running the make target for the demo cluster, three files get generated. This commit adds those files to the .gitignore files in their respective directories.
Authored-by: Jim Doty <jdoty@pivotal.io>
-
Committed by Mel Kiyama
* docs - update GUC gp_ignore_error_table: change set classification from system to session; clarify that the INTO ERROR TABLE clause is not used.
* docs - update GUC gp_ignore_error_table: minor edits
-
Committed by Shoaib Lari
For long-running commands such as gpinitstandby with a large master data directory, the server takes a long time, so there is no activity from the client to the server. If ClientAliveInterval is set, the server reports a timeout after ClientAliveInterval seconds. Setting a ServerAliveInterval value less than the ClientAliveInterval forces the client to send a null message to the server, thus avoiding the timeout.
Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
Co-authored-by: Shoaib Lari <slari@pivotal.io>
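The keepalive relationship above can be expressed in standard OpenSSH configuration. This is an illustrative sketch, not the project's actual settings; the 60- and 30-second values are assumptions chosen only to show that the client interval must be shorter than the server's.

```
# Server side (sshd_config): the server probes the client after 60 seconds
# of inactivity and may disconnect if it gets no response.
ClientAliveInterval 60

# Client side (~/.ssh/config, or `ssh -o ServerAliveInterval=30 ...`):
# send a keepalive every 30 seconds, safely below the server's 60-second limit,
# so a long-silent gpinitstandby session is never timed out.
Host *
    ServerAliveInterval 30
```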
-
Committed by Mel Kiyama
* docs - gpbackup ddboost plugin: add replication feature
* docs - gpbackup ddboost plugin: fix typos
-
Committed by Alexandra Wang
A gate job is added for the Release Candidate to make sure that all release candidate jobs passed for gpdb_src and bin_gpdb on the centos6, centos7, and sles11 platforms. The Release_Candidate job verifies that the commit SHA of gpdb_src and all the bin_gpdb resources are the same; if the versions don't match, the job fails. The bin_gpdb_[platform]_rc resources are put in a stable-builds bucket so that they can be consumed by the integration and components pipelines.
Co-authored-by: Alexandra Wang <lewang@pivotal.io>
Co-authored-by: Kris Macoskey <kmacoskey@pivotal.io>
Co-authored-by: Trevor Yacovone <tyacovone@pivotal.io>
-
Committed by Jamie McAtamney
We have added a test case to verify that the mirror configuration generated by gpaddmirrors with the `-s` option is indeed spread over different hosts for each of the primaries.
Co-authored-by: Jim Doty <jdoty@pivotal.io>
Co-authored-by: Jamie McAtamney <jmcatamney@pivotal.io>
Co-authored-by: Nadeem Ghani <nghani@pivotal.io>
Co-authored-by: Kevin Yeap <kyeap@pivotal.io>
Co-authored-by: Shoaib Lari <slari@pivotal.io>
-
- 11 Jun 2018, 5 commits
-
-
Committed by Hubert Zhang
Follow src/pl/plpython/README.md to see how to build and use plpython3u on GPDB.
Co-authored-by: Yandong Yao <yyao@pivotal.io>
-
Committed by Jialun
-
Committed by Violet Cheng
The gpperfmon queries_history table shows zero values in the "rows_out" column even though the queries returned several rows as output. This fix decreases the likelihood of this bug occurring, but it is still possible due to the gpperfmon harvest mode.
-
Committed by Adam Lee
1. Pass the external table encoding to COPY's options, then set cstate->file_encoding to it, for both reading and writing.
2. After the merge, the copy state no longer has a client-encoding member, which used to be set to the target encoding to get the converted data as a client; now the file encoding (from the COPY options) is passed to convert directly.
-
Committed by Adam Lee
Fix the following compilation error:

    gppc.c: In function ‘TFGetFuncExpr’:
    gppc.c:1255:3: error: implicit declaration of function ‘exprType’ [-Werror=implicit-function-declaration]
       exprType(list_nth(fexpr->args, argno)) != typid)
       ^~~~~~~~
-
- 09 Jun 2018, 3 commits
-
-
Committed by Andreas Scherbaum
* Add start_ignore and end_ignore around all gp_inject_fault loads
-
Committed by Ashwin Agrawal
-
Committed by Lisa Owen
-
- 08 Jun 2018, 9 commits
-
-
Committed by Tom Lane
This commit pulls in the latest tzdata from Postgres 11. We intentionally left out comment changes to `src/backend/utils/adt/datetime.c` because they are not applicable (yet).

> DST law changes in North Korea. Redefinition of "daylight savings" in
> Ireland, as well as for some past years in Namibia and Czechoslovakia.
> Additional historical corrections for Czechoslovakia.
>
> With this change, the IANA database models Irish timekeeping as following
> "standard time" in summer, and "daylight savings" in winter, so that the
> daylight savings offset is one hour behind standard time not one hour
> ahead. This does not change their UTC offset (+1:00 in summer, 0:00 in
> winter) nor their timezone abbreviations (IST in summer, GMT in winter),
> though now "IST" is more correctly read as "Irish Standard Time" not "Irish
> Summer Time". However, the "is_dst" column in the pg_timezone_names view
> will now be true in winter and false in summer for the Europe/Dublin zone.
>
> Similar changes were made for Namibia between 1994 and 2017, and for
> Czechoslovakia between 1946 and 1947.
>
> So far as I can find, no Postgres internal logic cares about which way
> tm_isdst is reported; in particular, since commit b2cbced9 we do not
> rely on it to decide how to interpret ambiguous timestamps during DST
> transitions. So I don't think this change will affect any Postgres
> behavior other than the timezone-view outputs.
>
> Discussion: https://postgr.es/m/30996.1525445902@sss.pgh.pa.us

(cherry picked from commit 234bb985)
Co-authored-by: Jesse Zhang <sbjesse@gmail.com>
Co-authored-by: Taylor Vesely <tvesely@pivotal.io>
-
Committed by Tom Lane
The non-cosmetic changes involve teaching the "zic" tzdata compiler about negative DST. While I'm not currently intending that we start using negative-DST data right away, it seems possible that somebody would try to use our copy of zic with bleeding-edge IANA data. So we'd better be out in front of this change code-wise, even though it doesn't matter for the data file we're shipping. Discussion: https://postgr.es/m/30996.1525445902@sss.pgh.pa.us (cherry picked from commit b45f6613)
-
Committed by Jesse Zhang
This should have been part of commit f590dc94 but we forgot. Now remove them for good.
Co-authored-by: Taylor Vesely <tvesely@pivotal.io>
-
Committed by Scott Kahler
-
Committed by David Yozie
-
Committed by Ning Yu
`SHOW memory_spill_ratio` always displays 20 when it is the first query in a connection (if you run this query in psql and pressed TAB while entering the command, the implicit queries run by the tab-completion function will be the first). The root cause is that the SHOW command is bypassed in resource groups, so the bound resource group is not assigned and the resource group's settings are not loaded. To display the proper value in this case, we now load the resource group settings even for bypassed queries.
-
Committed by Ashwin Agrawal
Before:

    qp_functions     ... ok (76.24 sec)  (diff: 0.06 sec)
    qp_gist_indexes4 ... ok (88.46 sec)  (diff: 0.07 sec)
    qp_with_clause   ... ok (130.70 sec) (diff: 0.32 sec)

After:

    qp_functions     ... ok (4.49 sec)   (diff: 0.06 sec)
    qp_gist_indexes4 ... ok (16.18 sec)  (diff: 0.06 sec)
    qp_with_clause   ... ok (54.41 sec)  (diff: 0.30 sec)
-
Committed by Lisa Owen
* docs - misc updates to gptransfer: conref from best practices to admin guide; qualify use for migration to a different number of segments; misc edits
* conditionalize
-