- 26 September 2018 (1 commit)

Committed by Asim R P
This commit promotes a few assertions into elog(ERROR) calls, so as to avoid new data being appended to a segment file that is not in the available state. Scans on an AO table do not read segment files that are awaiting drop; new data inserted into such a segment file would be lost forever. The accompanying isolation2 test demonstrates a bug that hits these errors. The test uses a newly added UDF to evict an entry from the appendonly hash table. In production, an entry is evicted when the appendonly hash table is full (default capacity of 1000 entries). Note: the bug will be fixed in a separate patch.

Co-authored-by: Adam Berlin <aberlin@pivotal.io>
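The pattern behind the commit can be sketched in plain C. This is a minimal illustration only; the state names and function are hypothetical stand-ins for Greenplum's appendonly segment-file handling, not the actual symbols:

```c
#include <assert.h>

/* Hypothetical stand-ins for the segment-file state machine. The point is
 * the pattern: an Assert() vanishes in production builds (no
 * --enable-cassert), while an explicit error check fires in every build. */
typedef enum { SEGFILE_AVAILABLE, SEGFILE_AWAITING_DROP } SegFileState;

/* Returns 0 when the segment file may receive new data, -1 standing in for
 * elog(ERROR). Before the change, only an assertion guarded this state, so
 * production inserts could silently land in a doomed segment file. */
int check_segfile_for_insert(SegFileState state)
{
    if (state != SEGFILE_AVAILABLE)
        return -1;  /* elog(ERROR, "segment file not in available state") */
    return 0;
}
```

An `elog(ERROR, ...)` aborts the transaction in all builds, which is exactly what protects the data here.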

- 15 August 2018 (1 commit)

Committed by xiong-gang
* Remove ERRCODE_GP_FEATURE_NOT_SUPPORTED and use ERRCODE_FEATURE_NOT_SUPPORTED instead
* Remove ERROR_INVALID_WINDOW_FRAME_PARAMETER and use ERRCODE_WINDOWING_ERROR instead

Co-authored-by: Alexandra Wang <lewang@pivotal.io>
Co-authored-by: Gang Xiong <gxiong@pivotal.io>

- 16 June 2018 (1 commit)

Committed by Ashwin Agrawal
For a CO table, storageAttributes.compress only conveys whether block compression should be applied. RLE is performed as stream compression within the block, so storageAttributes.compress being true or false does not relate to RLE at all. With rle_type compression, storageAttributes.compress is true for compression levels > 1, where block compression is performed along with stream compression. For compression level 1, storageAttributes.compress is always false, as no block compression is applied.

Since RLE does not relate to storageAttributes.compress, there is no reason to modify it based on rle_type compression. The problem manifests further due to the fact that the datumstream layer uses the AppendOnlyStorageAttributes in DatumStreamWrite (`acc->ao_attr.compress`) to decide the block type, whereas the cdb storage layer functions use the AppendOnlyStorageAttributes from AppendOnlyStorageWrite (`idesc->ds[i]->ao_write->storageAttributes.compress`). Given this difference, changing just one of them, unnecessarily at that, is bound to cause issues during insert. So, remove the unnecessary and incorrect update to AppendOnlyStorageAttributes. The test case exercises the failing scenario without the patch.
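The level-dependent rule described above can be sketched as a small predicate. This is an illustrative helper under assumed names, not the actual Greenplum code path:

```c
#include <stdbool.h>
#include <string.h>

/* Sketch of the rule: with rle_type, RLE is stream compression inside the
 * block, so block compression (what storageAttributes.compress tracks) is
 * only enabled when compresslevel > 1 layers block compression on top. */
bool block_compression_enabled(const char *compresstype, int compresslevel)
{
    if (compresstype == NULL || compresstype[0] == '\0'
        || strcmp(compresstype, "none") == 0)
        return false;               /* no compression configured at all */
    if (strcmp(compresstype, "rle_type") == 0)
        return compresslevel > 1;   /* level 1: RLE stream only */
    return true;                    /* e.g. zlib: whole-block compression */
}
```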

- 16 May 2018 (1 commit)

Committed by Ashwin Agrawal
Temp tables need to be neither replicated nor crash-safe, so we can avoid generating xlog records for them. Heap already avoids this; this patch skips AO/CO tables as well. A new field `isTempRel` is added to `BufferedAppend` to perform the check for temp tables and skip generating xlog records.
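The shape of the check can be sketched as follows. The struct and function below are illustrative stand-ins, not the real `BufferedAppend` definition:

```c
#include <stdbool.h>

/* Illustrative sketch of the temp-relation check; the real struct and WAL
 * call live in Greenplum's BufferedAppend code with different fields. */
typedef struct {
    bool isTempRel;             /* the new flag carried by BufferedAppend */
    int  xlog_records_written;  /* stand-in counter for actual WAL insertion */
} BufferedAppendSketch;

void buffered_append_finish_buffer(BufferedAppendSketch *ba)
{
    /* Temp relations are neither replicated nor crash-safe, so WAL for
     * them is pure overhead: skip it, as heap already does. */
    if (!ba->isTempRel)
        ba->xlog_records_written++;
}
```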

- 12 April 2018 (1 commit)

Committed by Jacob Champion
The 8.2 -> 8.3 upgrade of NUMERIC types was implemented for row-oriented AO tables, but not column-oriented. Correct that here. Store upgraded Datum data in a per-DatumStream buffer, to avoid "upgrading" the same data multiple times (multiple tuples may be pointing at the same data buffer, for example with RLE compression). Cache the column's base type in the DatumStreamRead struct.

Co-authored-by: Taylor Vesely <tvesely@pivotal.io>
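The "upgrade each shared buffer at most once" idea can be sketched like this. The struct and names are hypothetical; they only illustrate the caching, not the real per-DatumStream mechanism:

```c
#include <stddef.h>

/* Illustrative sketch: with RLE, several tuples can point at the same
 * underlying data buffer, so remember which buffer was upgraded last and
 * skip repeat work. Names are hypothetical. */
typedef struct {
    const void *last_upgraded;  /* stand-in for the per-DatumStream buffer */
    int         upgrades_done;
} NumericUpgradeCache;

void upgrade_numeric_buffer(NumericUpgradeCache *cache, const void *buf)
{
    if (cache->last_upgraded == buf)
        return;  /* this shared buffer was already rewritten */
    /* ... the real code would rewrite 8.2-format NUMERIC datums here ... */
    cache->upgrades_done++;
    cache->last_upgraded = buf;
}
```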

- 13 January 2018 (1 commit)

Committed by Heikki Linnakangas
* Revert almost all the changes in smgr.c / md.c, to not go through the Mirrored* APIs.
* Remove mmxlog stuff. Use upstream "pending relation deletion" code instead.
* Get rid of multiple startup passes. Now it's just a single pass like in the upstream.
* Revert the way database drop/create are handled to the way it is in upstream. Doesn't use PT anymore, but accesses the file system directly, and WAL-logs a single CREATE/DROP DATABASE WAL record.
* Get rid of MirroredLock.
* Remove a few tests that were specific to persistent tables.
* Plus a lot of little removals and reverts to upstream code.

- 02 January 2018 (1 commit)

Committed by Heikki Linnakangas
AFAICS, this code isn't used for anything. It's a debugging utility, though, so maybe that's intentional. I think to use this, you're supposed to modify the source code at some place of interest, and add a debug_break() call there. However, I'm not aware of anyone using that. I just insert a sleep() or use a gdb breakpoint for that, when I'm debugging.

- 28 December 2017 (1 commit)

Committed by Xin Zhang
If the first insert into an AOCS table is aborted, the first visible block in the block directory starts at a row number greater than 1. By default, we initialize the `DatumStreamWriter` with `blockFirstRowNumber=1` for newly added columns, so the first row numbers are not consistent with the visible blocks. This caused an inconsistency between a base table scan and a scan using indexes through the block directory. This wrong-result issue only happens when the first blocks are invisible; the current code (`aocs_addcol_endblock()` called in `ATAocsWriteNewColumns()`) already handles other gaps after the first visible blocks. The fix updates `blockFirstRowNumber` with `expectedFRN`, which fixes the misalignment of visible blocks.

Author: Xin Zhang <xzhang@pivotal.io>
Author: Ashwin Agrawal <aagrawal@pivotal.io>

- 21 October 2017 (1 commit)

Committed by Heikki Linnakangas
I have no idea what it was a placeholder for. But it's surely useless in its current form.

- 29 September 2017 (1 commit)

Committed by Ashwin Agrawal

- 01 September 2017 (1 commit)

Committed by Daniel Gustafsson
This bumps the copyright years to the appropriate years after not having been updated for some time. Also reformats existing code headers to match the upstream style to ensure consistency.

- 12 July 2017 (1 commit)

Committed by Jesse Zhang
`enable-cassert` is your friend, yo
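There is a concrete reason behind the quip: PostgreSQL's (and thus Greenplum's) `Assert()` is compiled out unless the server was configured with `--enable-cassert`, which defines `USE_ASSERT_CHECKING`. A minimal C sketch of the pattern, with an illustrative helper function:

```c
#include <assert.h>

/* Sketch of the PostgreSQL convention: Assert() expands to nothing unless
 * USE_ASSERT_CHECKING is defined (configure --enable-cassert does that).
 * Mapping it to the standard assert() here is for illustration only. */
#ifdef USE_ASSERT_CHECKING
#define Assert(condition) assert(condition)
#else
#define Assert(condition) ((void) 0)
#endif

/* In a non-assert build the bad input sails straight past Assert(), which
 * is exactly why many bugs only surface with --enable-cassert. */
int clamp_positive(int v)
{
    Assert(v >= 0);
    return v < 0 ? 0 : v;
}
```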

- 11 July 2017 (3 commits)

Committed by Heikki Linnakangas
If you have a query like "SELECT COUNT(col1) FROM wide_table", where the table has dozens of columns, the overhead in aocs_getnext() just to figure out which columns need to be fetched becomes noticeable. Optimize it.
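One plausible shape for such an optimization is to precompute the projected column indexes once per scan, so the per-tuple loop no longer re-scans the full boolean projection array. The names below are illustrative, not the actual `aocs_getnext()` symbols:

```c
#include <stdbool.h>

/* Sketch: compute the list of projected column indexes once at scan start.
 * The per-tuple code then loops over nproj entries instead of all natts,
 * which matters for wide tables where only one column is projected. */
int precompute_projected_columns(const bool *proj, int natts, int *proj_atts)
{
    int nproj = 0;

    for (int i = 0; i < natts; i++)
        if (proj[i])
            proj_atts[nproj++] = i;
    return nproj;
}
```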

Committed by Heikki Linnakangas
There was a mixture of spaces and tabs being used for indentation in aocsam.c, and I finally got fed up with that while doing other changes in that file. I ran pgindent, and did a bunch of manual fixups of the formatting. All the changes in this commit are purely cosmetic. I did the same for appendonlyam.c, although I'm not changing it at the moment, to keep aocsam.c and appendonlyam.c in sync.

Committed by Heikki Linnakangas
This does mean that we don't free the array quite as quickly as we used to, but it's a drop in the sea. The array is very small, there are much bigger data structures involved in every AOCS scan that are not freed as quickly, and it's freed at the end of the query in any case.

- 11 April 2017 (1 commit)

Committed by Ashwin Agrawal
Alter table add column for a CO table completely missed updating the block directory if the default value for the column is greater than blockSize. In this case one large content block is created, followed by small content blocks containing the actual column value. Missing the block-directory update generates wrong results during index scans after such an alter. This commit fixes the issue by updating the block directory for this case, accompanied by a test to validate it. Also, while fixing this, refactor the code:
- rename lastWriteBeginPosition to logicalBlockStartOffset for better clarity based on its usage
- centralize block-directory inserts in the datumstream block read-write routines
- remove the redundant buildBlockDirectory flag

- 21 December 2016 (1 commit)

Committed by Ashwin Agrawal
Relfilenode is only unique within a tablespace; across tablespaces, the same relfilenode may be allocated within a database. Currently, gp_relation_node only stores relfilenode and segment_num, and has a unique index using only those fields, without the tablespace. So it breaks when the same relfilenode is allocated to more than one table within a database.
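The uniqueness problem reduces to what goes into the lookup key. An illustrative sketch, not the actual catalog schema:

```c
#include <stdbool.h>

/* Sketch: a lookup key for gp_relation_node must include the tablespace,
 * because a relfilenode is only unique within a tablespace. */
typedef struct {
    unsigned int tablespace_oid;
    unsigned int relfilenode;
    int          segment_num;
} RelfilenodeKey;

/* With the tablespace in the key, two tables that happen to share a
 * relfilenode in different tablespaces no longer collide. */
bool relfilenode_key_equal(RelfilenodeKey a, RelfilenodeKey b)
{
    return a.tablespace_oid == b.tablespace_oid
        && a.relfilenode == b.relfilenode
        && a.segment_num == b.segment_num;
}
```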

- 02 November 2016 (1 commit)

Committed by Heikki Linnakangas
This meant moving the version field from pg_appendonly to the pg_aoseg_<oid> table (or pg_aocsseg_<oid>, for AOCS). We can still read and write both formats, but new segments will always be created in the new format (except if you set the test_appendonly_version_default GUC).

- 26 September 2016 (1 commit)

Committed by Daniel Gustafsson

- 25 August 2016 (1 commit)

Committed by Heikki Linnakangas

- 02 August 2016 (3 commits)

Committed by Heikki Linnakangas
It's pretty much identical to Form_pg_appendonly, which is now available in the relcache. No need to pass this additional struct around.

Committed by Heikki Linnakangas
This way, you don't need to always fetch it from the system catalogs, which makes things simpler, and is marginally faster too.

To make all the fields in pg_appendonly accessible by direct access to the Form_pg_appendonly struct, change the 'compresstype' field from text to name. "No compression" is now represented by an empty string, rather than NULL. I hope there are no applications out there that will get confused by this.

The GetAppendOnlyEntry() function used to take a Snapshot as argument, but that seems unnecessary. The data in pg_appendonly doesn't change for a table after it's been created. Except when it's ALTERed, or rewritten by TRUNCATE or CLUSTER, but those operations invalidate the relcache, and we're never interested in the old version.

There's not much need for the AppendOnlyEntry struct and the GetAppendOnlyEntry() function anymore; you can just as easily access the Form_pg_appendonly struct directly. I'll remove that as a separate commit, though, to keep this one more readable.
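The empty-string-means-no-compression convention described above can be captured in a tiny helper. This is an illustrative sketch under that convention, not the real catalog code:

```c
#include <stdbool.h>
#include <stddef.h>

/* Sketch: after the change, "no compression" is an empty compresstype
 * string rather than a NULL, so callers test for emptiness. The NULL
 * check is kept defensively for pre-change callers. */
bool has_compression(const char *compresstype)
{
    return compresstype != NULL && compresstype[0] != '\0';
}
```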

Committed by Heikki Linnakangas
Many headers in src/include/catalog have gotten this treatment in the upstream. This makes life easier for the next commit, which adds a reference to Form_pg_appendonly to RelationData. With this separation, we can avoid pulling a lot of other headers into rel.h, and avoid a circular dependency.

- 18 February 2016 (1 commit)

Committed by Asim Praveen
This change applies only to column-oriented (CO) tables. It reads block directory entries only for those columns that appear in the projection list (e.g. the select clause). Closes #369

- 28 October 2015 (1 commit)