- 06 Nov 2017, 16 commits
-
-
Committed by Heikki Linnakangas
-
Committed by Max Yang
We don't need to assert that list_length(resultNode->hashList) <= resultSlot->tts_tupleDescriptor->natts, because the optimizer may legitimately reuse columns, as in the following queries:

    create table tbl(a int, b int, p text, c int) distributed by(a, b);
    create function immutable_generate_series(integer, integer) returns setof integer
        as 'generate_series_int4' language internal immutable;
    set optimizer=on;
    insert into tbl select i, i, i || 'SOME NUMBER SOME NUMBER', i % 10
        from immutable_generate_series(1, 1000) i;

The hashList specified by the planner is (1, 1), which references immutable_generate_series for (a, b), and resultSlot->tts_tupleDescriptor only contains immutable_generate_series. That is fine, so we don't need to check again; slot_getattr(resultSlot, attnum, &isnull) already checks attnum <= resultSlot->tts_tupleDescriptor->natts for us.

Signed-off-by: Xiaoran Wang <xiwang@pivotal.io>
-
Committed by Adam Lee
Backport of upstream:

    commit 55fb759a
    Author: Peter Eisentraut <peter_e@gmx.net>
    Date:   Tue Jun 3 22:36:35 2014 -0400

    Silence Bison deprecation warnings

    Bison >= 3.0 issues warnings about

        %name-prefix="base_yy"

    instead of the now preferred

        %name-prefix "base_yy"

    but the latter doesn't work with Bison 2.3 or less. So for now we
    silence the deprecation warnings.
-
Committed by Adam Lee
Heikki at 6a76c5d0 back-ported two commits from upstream to fix this, but that hung gp_dump_agent and got reverted. This time I backport just one of them, which is enough as far as I tested:

    commit d923125b
    Author: Peter Eisentraut <peter_e@gmx.net>
    Date:   Fri Mar 2 22:30:01 2012 +0200

    Fix incorrect uses of gzFile

    gzFile is already a pointer, so code like

        gzFile *handle = gzopen(...)

    is wrong. This used to pass silently because gzFile used to be defined
    as void*, and you can assign a void* to a void**. But somewhere between
    zlib versions 1.2.3.4 and 1.2.6, the definition of gzFile was changed
    to struct gzFile_s *, and with that new definition this usage causes
    compiler warnings. So remove all those extra pointer decorations.

    There is a related issue in pg_backup_archiver.h, where

        FILE *FH; /* General purpose file handle */

    is used throughout pg_dump as sometimes a real FILE* and sometimes a
    gzFile handle, which also causes warnings now. This is not yet fixed
    here, because it might need more code restructuring.

GitHub issue #447
-
Committed by Adam Lee
C files need to include postgres.h (backend) or postgres_fe.h (frontend) to adopt our C types:

    fe-protocol3.c: In function ‘pqParseInput3’:
    fe-protocol3.c:256:22: warning: passing argument 1 of ‘pqGetInt64’ from incompatible pointer type [-Wincompatible-pointer-types]
       if (pqGetInt64(&(ao->tupcount), conn))
                      ^
    In file included from fe-protocol3.c:21:0:
    libpq-int.h:629:14: note: expected ‘int64 * {aka long int *}’ but argument is of type ‘long long int *’
     extern int64 pqGetInt64(int64 *result, PGconn *conn); /* GPDB only */
                  ^~~~~~~~~~
-
Committed by Adam Lee
Fix pointer type mismatches in the DDBoost open macros:

    cdb_ddboost_util.c: In function ‘createFakeRestoreFile’:
    cdb_ddboost_util.c:52:82: warning: pointer type mismatch in conditional expression
     #define DDDOPEN(path, mode, compress) (((compress) == 1) ? (GZDOPEN(path, mode)) : (fdopen(path, mode)))
                                                                                      ^
    cdb_ddboost_util.c:1856:10: note: in expansion of macro ‘DDDOPEN’
      ddfp = DDDOPEN(fd[0], "r", isCompress);
             ^~~~~~~
    cdb_ddboost_util.c:51:80: warning: pointer type mismatch in conditional expression
     #define DDOPEN(path, mode, compress) (((compress) == 1) ? (GZOPEN(path, mode)) : (fopen(path, mode)))
                                                                                    ^
    cdb_ddboost_util.c:1864:14: note: in expansion of macro ‘DDOPEN’
      ddfpTemp = DDOPEN(dd_options->to_file, "w", isCompress);
                 ^~~~~~
-
Committed by Heikki Linnakangas
Despite the comment, I see no reason to hide it. Fixes github issue #2685.
-
Committed by Ning Yu
The resgroup cpu test is still flaky on concourse, but the failure is hard to trigger manually, so we have to put more debug info in the test logs.
-
Committed by Richard Guo
* Set resource group id to InvalidOid in pg_stat_activity when the transaction ends.
* Change the mode of ResGroupLock to LW_SHARED in decideResGroup().
* Add necessary comments for resource group functions.
* Change some function and variable names.
-
Committed by Richard Guo
Test case resgroup_verify_guc verifies the setting of the GUC statement_mem. Since this GUC does not take effect under resource groups, remove the test case.
-
Committed by Adam Lee
src/backend's makefiles have their own rules; this commit symlinks the libpq files so the backend can leverage them, which is canonical and much simpler. What are the rules?

1. src/backend compiles each SUBDIR, lists the OBJS in the sub-directories' objfiles.txt, then links them all into postgres.
2. mock.mk links all OBJS, but filters out the objects mocked by test cases.
-
Committed by Max Yang
For instance, with the simple query:

    create table spilltest2 (a integer);
    insert into spilltest2 select a from generate_series(1,40000000) a;

the current optimizer generates a FunctionTableScan for generate_series, which stores the result of generate_series in a tuple store. A memory leak occurred because the memory tuple binding was constructed on every call to tuplestore_putvalues. It is not only a memory leak but also a performance problem: we don't need to construct the memory tuple binding for every row, just once. This fix changes the interface of tuplestore_putvalues to receive the memory tuple binding as input, constructed once by the caller.

Signed-off-by: Xiaoran Wang <xiwang@pivotal.io>
-
Committed by Max Yang
This splits indentation-only changes into their own commit, before we make code changes to these files, to make review easier:

    pg_exttable.c
    prepare.c
    execQual.c
    plpgsql.h

Signed-off-by: Xiaoran Wang <xiwang@pivotal.io>
-
Committed by Heikki Linnakangas
I frequently saw failures when running the test on my laptop with ORCA. The reason is that the OIDs on a user table are not guaranteed to be unique across segments in GPDB. Depending on concurrent activity and timing, the OID counters on different segments are not always in sync, and can produce duplicates when looked at across all nodes. In fact, I think the only reason this is currently passing on the pipeline so reliably as it is, is because the test is run in parallel with other tests that also create objects, which creates enough noise in the OID allocations. To fix, modify the test data in the test so that all the initial test rows reside on the same segment. Within a segment, the OIDs are unique.
-
Committed by Heikki Linnakangas
This is mostly in preparation for changes soon to be merged from PostgreSQL 8.4, commit a77eaa6a to be more precise. Currently GPDB's ExecInsert uses ExecSlotFetch*() functions to get the tuple from the slot, while upstream makes a modifiable copy with ExecMaterializeSlot(). That's OK as the code stands, because there's always a "junk filter" that ensures the slot doesn't point directly to an on-disk tuple. But commit a77eaa6a will change that, so we have to start being more careful.

This does fix an existing bug, namely that if you UPDATE an AO table with OIDs, the OIDs currently change (github issue #3732). Add a test case for that.

More detailed breakdown of the changes:

* In ExecInsert, create a writeable copy of the tuple when we're about to modify it, by calling ExecMaterializeSlot(), so that we don't accidentally modify an existing on-disk tuple.
* In ExecInsert, track the OID of the tuple we're about to insert in a local variable when we call the BEFORE ROW triggers, because we don't have a "tuple" yet.
* Add an ExecMaterializeSlot() function, like in the upstream, because we now need it in ExecInsert. Refactor ExecFetchSlotHeapTuple to use ExecMaterializeSlot(), like in upstream.
* Cherry-pick bug fix commit 3d02cae3 from upstream. We would get that soon anyway as part of the merge, but we'll soon have test failures if we don't fix it immediately.
* Change the API of appendonly_insert() so that it takes the new OID as an argument, instead of extracting it from the passed-in MemTuple. With this change, appendonly_insert() is guaranteed not to modify the passed-in MemTuple, so we don't need the equivalent of ExecMaterializeSlot() for MemTuples.
* Also change the API of appendonly_insert() so that it returns the new OID of the inserted tuple, like heap_insert() does. Most callers ignore the return value, so this way they don't need to pass a dummy pointer argument.
* Add a test case for a BEFORE ROW trigger that sets the OID of a tuple we're about to insert.

This is based on earlier patches against the 8.4 merge iteration3 branch by Jacob and Max.
-
- 04 Nov 2017, 8 commits
-
-
Committed by Heikki Linnakangas
Also move initialization of the gpmon packet to a single choke point at ExecInitNode(), and send the packet at ExecReScan() and ExecRestrPos(). A few CheckSendPlanStateGpmonPkt() calls remain here and there, which I didn't dare to remove, although I'm pretty sure we could remove them as well and no one would notice the difference.
-
Committed by Heikki Linnakangas
Except at the very top, one node's output is always another node's input, so it seems silly to have separate counters for rows in. The only place where "rows in" was used was in gpperfmon's calculation of "rows skew". Change that calculation to use "rows out" instead. That's not exactly the same thing, but seems just as good for the purpose of measuring skew.
-
Committed by Heikki Linnakangas
I believe this should've been removed by commit c0c1897f, which removed the Gpmon_M_Incr() call just before it.
-
Committed by Heikki Linnakangas
setMotionStatsForGpmon() didn't actually do anything: it just set a bunch of local variables. And the structs were simply unused.
-
Committed by Asim R P
Previously max_wal_senders was set to 1 on both the master and the segments. This commit sets it to 0 if filerep is used; in the case of walrep it is set to 1.

Signed-off-by: Abhijit Subramanya <asubramanya@pivotal.io>
-
Committed by Abhijit Subramanya
The macro is taken from upstream commit 40f908bd. This commit fixes issues with the CLUSTER and COPY commands, which would not generate the necessary XLOG records when streaming replication is enabled. With the correct use of XLogIsNeeded() this is now fixed. It also cleans up the XLog_CanBypassWal() and XLog_UnconvertedCanBypassWal() functions by replacing their usage with XLogIsNeeded().

Signed-off-by: Taylor Vesely <tvesely@pivotal.io>
Signed-off-by: Asim R P <apraveen@pivotal.io>
-
Committed by Karen Huddleston
-
Committed by Karen Huddleston
-
- 03 Nov 2017, 7 commits
-
-
Committed by Heikki Linnakangas
Commit eb1740c6 backported a bunch of code from upstream, but the backported code was structured slightly differently. It added a pstrdup() call with no corresponding pfree(). That led to a memory leak in the executor memory context, which adds up if e.g. the conversion of the PL/Python function's return value to a Postgres type is very complicated. This was revealed by a test case that returns a huge array: converting the Python array to a PostgreSQL array leaked the string representation of every element.

    create or replace function gen_array(x int) returns float8[] as $$
        from random import random
        return [random() for _ in range(x)]
    $$ language plpythonu;

    EXPLAIN ANALYZE select gen_array(120000000);

I did not add that test case to the regression suite, as there is no convenient place for it. A memory leak just means it consumes a lot of memory, which would be difficult to test reliably.

Fixes github issue #3654.
-
Committed by David Sharp
If set, resgroup_assign_hook is called during transaction setup, and the transaction is assigned to the resource group corresponding to the returned Oid. This allows an extension to change how transactions are assigned to resource groups.

Also adds the Makefile and .c file necessary for running CMockery tests for resgroup.c, as well as unit tests over the added code, which can be run with:

    cd test && make -C .. clean all && make && ./resgroup.t

We set the CurrentResourceOwner in GetResGroupIdForName so it can be called from decideResGroupId, which is called outside a transaction when CurrentResourceOwner is not set.

Signed-off-by: Amil Khanzada <akhanzada@pivotal.io>
Signed-off-by: David Sharp <dsharp@pivotal.io>
-
Committed by Haisheng Yuan
This reverts commit a59d8338.
-
Committed by Haisheng Yuan
-
Committed by Kris Macoskey
Signed-off-by: Divya Bhargov <dbhargov@pivotal.io>
-
Committed by Heikki Linnakangas
With ORCA, an UPDATE is actually implemented as a DELETE + INSERT. Don't fire INSERT or DELETE triggers in that case. This was broken by commit 740f304e. I wonder if we should somehow fire UPDATE triggers in that case, but I don't see any existing code to do that either.
-
Committed by Lisa Owen
-
- 02 Nov 2017, 9 commits
-
-
Committed by Nadeem Ghani
Before this commit, gpmmon expected to see QLOG packets before QUERYSEG packets; out-of-order packets were quietly dropped. This behavior was causing intermittent test failures with the message "No segments for CPU skew calculation". This commit changes the order of packet sends in gpsmon to fix these failures.

Signed-off-by: Jacob Champion <pchampion@pivotal.io>
-
Committed by Larry Hamel
- Remove Solaris special cases; we don't support Solaris anymore.
- When PYTHONHOME is not set, don't use it. PYTHONHOME should remain unset by default and not be used as a variable, unless a bundled python is available and preferred.
- Use LD_LIBRARY_PATH only: since macOS 10.5, LD_LIBRARY_PATH has been supported, so remove the conditionals for Darwin, discarding DYLD_LIBRARY_PATH in favor of the standard LD_LIBRARY_PATH.
-
Committed by Heikki Linnakangas
Previously, if a segment reported an error after starting up the interconnect, it would take up to 250 ms for the main thread in the QD process to wake up, poll the dispatcher connections, and see that there was an error. Shorten that time by waking up immediately if the QD->QE libpq socket becomes readable while we're waiting for data to arrive in a Motion node. This isn't a complete solution, because this will only wake up if one arbitrarily chosen connection becomes readable, and we still rely on polling for the others. But it greatly speeds up many common scenarios. In particular, the "qp_functions_in_select" test now runs in under 5 s on my laptop, when it took about 60 seconds before.
-
Committed by Heikki Linnakangas
The problem with pthread wait conditions is that there is no way to wait for the wakeup from another thread and for other events, like a socket becoming readable, at the same time. We currently rely on polling for the other events, which leads to unnecessary delays. In particular, if a QE throws an ERROR, we wait up to 250 milliseconds, until the timeout is reached, before waking up the QD main thread to process the error. This commit doesn't actually address that problem yet; it just changes the signaling mechanism between the RX thread and the main thread. I'll make the changes to avoid that delay in a separate commit, for easier review.
-
Committed by Richard Guo
Previously the QD dispatched a resource group slot id to the QEs, and each QE got a slot according to that slot id. The problem with this approach is that if the QD exits before a QE and then dispatches the same slot id in a new session, two different sessions on the QE may share the same slot. With this commit, the QD no longer dispatches slot ids; each QE allocates/frees resource group slots from its own slot pool.

Signed-off-by: xiong-gang <gxiong@pivotal.io>
-
Committed by dyozie
-
Committed by Kris Macoskey
Signed-off-by: Divya Bhargov <dbhargov@pivotal.io>
-
Committed by Kris Macoskey
Signed-off-by: Divya Bhargov <dbhargov@pivotal.io>
-
Committed by Divya Bhargov
The current user of the container is 'root'. This does not work for ssh'ing into CCP AWS clusters, because 'root' is explicitly disabled for ssh; this is a standard pattern across AWS AMIs. This is a quick fix to unblock the pipeline. Some refactors may follow that change the run_tinc PRETEST_SCRIPT pattern.

Signed-off-by: Kris Macoskey <kmacoskey@pivotal.io>
-