- 08 Nov 2016, 4 commits

- By Heikki Linnakangas
We don't do EXEC_BACKEND builds at the moment, but if we did, this would likely cause compiler or Coverity warnings, or a crash at runtime if a locale had a really long name.
- By Ashwin Agrawal
This separate pipeline helps run the catalog and storage TINC test projects; eventually it will be merged to run alongside ICG on the main and PR pipelines.
- By Ashwin Agrawal
Transactional stats for heap_delete must be updated only when inside a transaction, which should normally always be the case. The issue surfaced when we started calling heap_delete to free tuples in persistent tables (PT) instead of the older mechanism: during recovery, depending on the object state, if the transaction was aborted, the tuple in the PT needs to be deleted, and this function failed because TopTransactionContext is not allocated. Hence, add a guard: collect stats only if the transaction nesting level is greater than 0, which is the case when we are inside a transaction, and otherwise skip them. This fixes the problem.
- By Jimmy Yih
Removed timetz columns from certain tables in the filespace regression test. They were not needed, and caused a false failure due to the United States timezone change from PDT to PST. Reported by Heikki Linnakangas.
- 07 Nov 2016, 5 commits

- By Heikki Linnakangas
I changed the expected output of partition_locking regression test in commit f9016da2, but forgot to update the ORCA-specific output.
- By Daniel Gustafsson
Commit f9016da2 removed the definition of the relid variable; remove the assertion on it as well.
- By Heikki Linnakangas
Instead of carrying a "new OID" field in all the structs that represent CREATE statements, introduce a generic mechanism for capturing the OIDs of all created objects, dispatching them to the QEs, and using those same OIDs when the corresponding objects are created in the QEs. This allows removing a lot of scattered changes in DDL command handling, that was previously needed to ensure that objects are assigned the same OIDs in all the nodes. This also provides the groundwork for pg_upgrade to dictate the OIDs to use for upgraded objects. The upstream has mechanisms for pg_upgrade to dictate the OIDs for a few objects (relations and types, at least), but in GPDB, we need to preserve the OIDs of almost all object types.
- By Daniel Gustafsson
Asserting that an assignment isn't zero is a valid use of Assert(), but these instances look more like accidental assignments due to a missing '='. getgpsegmentCount() already asserts internally that the count is > 0, so we would never reach this point if it were zero.
- By Andreas Scherbaum
- 05 Nov 2016, 3 commits

- By Corbin Halliwill
- By Nikos Armenatzoglou
The code we generate for slot_getattr is not correct. In particular, to check whether a tuple is virtual, we have to implement the following check: if (TupHasVirtualTuple(slot) && slot->PRIVATE_tts_nvalid >= attnum). The codegened slot_getattr did not implement the second condition, slot->PRIVATE_tts_nvalid >= attnum. This commit generates code for that condition as well.
- By Nikos Armenatzoglou
- 04 Nov 2016, 6 commits

- By Adam Lee
- By xiong-gang
Signed-off-by: Kenan Yao <kyao@pivotal.io>
- By Ryan Tang
Signed-off-by: Corbin Halliwill <challiwill@pivotal.io>
- By Corbin Halliwill
Signed-off-by: Ryan Tang <rtang@pivotal.io>
- By Larry Hamel
Tighten the criteria for partition validation: add an additional case for when the source and destination attributes are different. Authors: Larry Hamel, Marbin Tan, Chris Hajas
- By Daniel Gustafsson
- 03 Nov 2016, 10 commits

- By Adam Lee
Ignore the cursor case to get CI passing for now. Two issues have been submitted to track it. Signed-off-by: Pengzhou Tang <ptang@pivotal.io>
- By Corbin Halliwill
- By Corbin Halliwill
- By Corbin Halliwill
This is hopefully a temporary change to increase stability of the pipeline.
- By Corbin Halliwill
- By Nikos Armenatzoglou
So far we were assuming that the content of the `llvm_isNull_ptr` variable, which is passed as input to expression evaluation, is always `false`. Consequently, when the result of the expression is not null, we avoided setting `llvm_isNull_ptr` to `false`. However, this assumption is not correct: in codegen we do not use a temporary `fcinfo` struct (for performance reasons), which would initialize `fcinfo->isnull` to `false`. Instead, we pass a pointer to the caller's isnull variable directly (which might not have been initialized). For example, in `GenerateAdvanceAggregates` we pass a pointer to `transValueIsNull`. In this commit, we explicitly set `llvm_isNull_ptr` to `false` when the result is not null. This covers all cases where the input is not initialized to `false`. Signed-off-by: Karthikeyan Jambu Rajaraman <karthi.jrk@gmail.com>
- By Nikos Armenatzoglou
The codegened advance_aggregates did not support null attributes. With this patch, we enhance it with checks for strict functions and create the proper nullity checks for the arguments accordingly. Authors: Nikos Armenatzoglou and Jimmy Yih
- By Ashwin Agrawal
This commit adds infrastructure scripts to enable running catalog and storage tests natively in Concourse containers. Using this infrastructure, we will iteratively migrate the CS test suites to Concourse.
- By Heikki Linnakangas
This avoids a lot of overhead for short read-only queries. We still don't do lazy assignment for most transactions, like PostgreSQL 8.3 does, but this is a step in the right direction.
- By Ashwin Agrawal
Replace sleeps with a deterministic check that the database is operational after injecting PANIC faults.
- 02 Nov 2016, 9 commits

- By Daniel Gustafsson
In moving the AO table format version to be a segment version, bump the catversion to ensure that existing clusters are rebuilt to handle the new format.
- By Heikki Linnakangas
Segments that are still in an old format are treated as read-only. All new data go to new segments, in new format. This allows us to eventually get rid of the old format completely. This is hypothetical until we have pg_upgrade working for GPDB 4.3 -> 5.0 upgrade, as you can't have old-format tables or segments at all in a cluster that's initialized with 5.0. Stay tuned for pg_upgrade, but this is preparatory work for that.
- By Heikki Linnakangas
This meant moving the version field from pg_appendonly to the pg_aoseg_<oid> table (or pg_aocsseg_<oid>, for AOCS). We can still read and write both formats, but new segments will always be created in the new format (unless you set the test_appendonly_version_default GUC).
- By Daniel Gustafsson
Ensure that the header guards match the actual name of the file.
- By Daniel Gustafsson
Remove unused fields from past version control systems and ensure that all filenames in the comments match the actual name of the file. Also fix some spelling and references.
- By Heikki Linnakangas
- By Daniel Gustafsson
pgstat_write_statsfile() failed to write the resource queue statistics to the stats file, which in turn made the pg_stat_resqueues view empty. Patch by GitHub user LJoNe, with a testcase added by me.
- By Haisheng Yuan
gporca has a set of banned API calls which need to be allowed with the ALLOW_xxx macro in order for gpopt to compile. But it should be the library caller's (GPDB/ORCA) responsibility to take care of the function call. See the discussions on greenplum-db/gpdb#1136 and https://groups.google.com/a/greenplum.org/forum/#!topic/gpdb-dev/Mcw6JPav6h4
- By foyzur
* Add support for interrupt processing before reserving more vmem.
* Process pending interrupts before reserving vmem.
* Add a GUC to control whether the vmem tracker checks for interrupts before reserving more vmem.
- 01 Nov 2016, 3 commits

- By brendanstephens
.psqlrc can create unexpected output and formatting changes that don't play nice with parse_oids().

```
psql database --pset footer -Atq -h localhost -p 5432 -U gpadmin -f /tmp/20161012232709/toolkit.sql
{"relids": "573615536", "funcids": ""}
Time: 2.973 ms
```

This generates an exception:

```
Traceback (most recent call last):
  File "/usr/local/greenplum-db/./bin/minirepro", line 386, in <module>
    main()
  File "/usr/local/greenplum-db/./bin/minirepro", line 320, in main
    mr_query = parse_oids(cursor, json_str)
  File "/usr/local/greenplum-db/./bin/minirepro", line 151, in parse_oids
    result.relids = json.loads(json_oids)['relids']
  File "/usr/local/greenplum-db/ext/python/lib/python2.6/json/__init__.py", line 307, in loads
    return _default_decoder.decode(s)
  File "/usr/local/greenplum-db/ext/python/lib/python2.6/json/decoder.py", line 322, in decode
    raise ValueError(errmsg("Extra data", s, end, len(s)))
ValueError: Extra data: line 2 column 1 - line 3 column 1 (char 39 - 54)
```
- By Heikki Linnakangas
In many places where we had used a mixture of spaces and tabs for indentation, new versions of gcc complained about misleading indentation, because gcc doesn't know we're using a tab width of 4. To fix, make the indentation consistent in all the places where gcc gave a warning. It would be nice to fix it all around, but that's a lot of work, so let's do it in a piecemeal fashion whenever we run into issues or need to modify a piece of code anyway.

For some files, especially the GPDB-specific ones, I ran pgindent over the whole file. I used the pgindent from PostgreSQL master, which is slightly different from what was used back in the 8.3 days, but that's what I had easily available, and that's what we're heading to in the future anyway. In some cases, I didn't commit the pgindented result if there was funnily formatted code or comments that would need special treatment. For other places, I fixed the indentation locally, just enough to make the warnings go away.

I also did a tiny bit of other trivial cleanup that I happened to spot while working on this, although I tried to refrain from anything more extensive.
- By Adam Lee
Update the storage types of cidr and inet in the AOCO_Compression2, schema_topology, and schema_topology_optimizer expected files. Related commit: 3e23b68d. Signed-off-by: Pengzhou Tang <ptang@pivotal.io>