- 17 Aug 2016, 3 commits
-
-
Committed by Shreedhar Hardikar
-
Committed by Dave Cramer
gp_dbid, gp_num_contents_in_cluster, gp_contentid: -b is being used upstream for binary upgrade mode; the rest were changed because of future conflict concerns. The long options will always work.
-
Committed by Haisheng Yuan
Also updated the gp_optimizer expected output, and ignored line-number differences for functions.c.
-
- 16 Aug 2016, 18 commits
-
-
Committed by Heikki Linnakangas
We don't use CaQL here anymore. Fix to match upstream.
-
Committed by Heikki Linnakangas
We don't use CaQL here anymore.
-
Committed by Heikki Linnakangas
We just got rid of the last users of them.
-
Committed by Heikki Linnakangas
To reduce our diff vs. upstream, and to make diffing and merging easier.
-
Committed by Heikki Linnakangas
It's a plain heap tuple in the upstream. This reduces the diff vs. upstream.
-
Committed by Heikki Linnakangas
Use a regular syscache lookup like in the upstream, and get rid of the now-unused get_func_namespace function.
-
Committed by Heikki Linnakangas
While it might make sense to extract this duplicated code into a function, let's rather avoid the difference from upstream.
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
FTS doesn't actually seem to use the superuser's username for anything. That allowed removing probeUser and retrieveUserAndDb(). All the remaining calls to FtsFindSuperuser() were from cdbresynchronizechangetracking.c, but all of those callers just discarded the return value. FtsFindSuperuser() doesn't throw any errors either, so it served no error-checking purpose. Remove it all.
-
Committed by Heikki Linnakangas
Not that it matters from a performance point of view, but surely it was not intended.
-
Committed by Heikki Linnakangas
We just looked up the relation with RangeVarGetRelidExtended(). There's no reason to believe it might've been removed since then, especially when we just locked it.
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
I'm not sure why it ever existed, but it's been dead for a long time.
-
Committed by Pengzhou Tang
-
Committed by Pengzhou Tang
-
Committed by yezhiweicmss
* Fix a gpexpand Worker-thread hang. With Python 2.7, gpexpand hangs after the command "gpexpand -D gpadmin" finishes: sys.exit calls thread.join(), which waits for each thread to exit, but the sub-thread Worker is still blocked getting from the WorkerPool queue. With Python 2.6.2 it works fine; perhaps the Python threading module changed.
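The hang described above is a classic shutdown-ordering problem: a non-daemon worker thread stays blocked in Queue.get() while interpreter shutdown waits in Thread.join() for it to finish. A minimal Python 3 sketch of the pattern that avoids it (a hypothetical worker, not gpexpand's actual WorkerPool code) uses a sentinel so the worker can exit on its own:

```python
import queue
import threading

def worker(tasks, results):
    # Loop pulling tasks; a None sentinel tells the worker to stop,
    # so it never sits blocked forever in tasks.get() while
    # interpreter shutdown waits in Thread.join().
    while True:
        task = tasks.get()
        if task is None:
            break
        results.append(task * 2)

tasks = queue.Queue()
results = []
t = threading.Thread(target=worker, args=(tasks, results))
t.start()
for n in (1, 2, 3):
    tasks.put(n)
tasks.put(None)   # sentinel: lets the worker exit cleanly
t.join()          # returns promptly instead of hanging
```

Without the sentinel (or a daemon thread, or a timeout on get()), join() at exit would wait forever on a worker that is itself waiting forever on the queue.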
-
- 15 Aug 2016, 1 commit
-
-
Committed by Heikki Linnakangas
-
- 13 Aug 2016, 2 commits
-
-
Committed by Heikki Linnakangas
The functions in memtuple.c were only called when building a column binding struct. That's not a hot path: it only gets done once, when starting the executor, not for every row. handleAckedPacket() is probably in a somewhat hot path, as it's called for every received Ack interconnect packet. However, all the calls to it are straight unconditional jumps, so branch prediction should make those very cheap. Inlining them would increase the code size quite a bit, so the compiler is probably making a good choice by refusing to inline them. Let's not try to force it.

I'm seeing a few more warnings like this from tqual.c and tuplesort.c, but those are in upstream code, so I'm reluctant to just remove the inline attributes there. Should do something about those too, to reduce the noise, but let's at least get these straightforward gpdb-specific cases fixed.

For reference, these are the warnings I was getting (repeated instances of each warning condensed):

    memtuple.c: In function ‘create_col_bind’:
    memtuple.c:122:20: warning: inlining failed in call to ‘add_null_save_aligned’: call is unlikely and code size would grow [-Winline]
    memtuple.c:241:10: warning: called from here [-Winline]
    memtuple.c:270:10: warning: called from here [-Winline]
    memtuple.c:308:10: warning: called from here [-Winline]
    memtuple.c:354:10: warning: called from here [-Winline]
    ic_udpifc.c: In function ‘handleAcks.isra.25’:
    ic_udpifc.c:4370:1: warning: inlining failed in call to ‘handleAckedPacket’: call is unlikely and code size would grow [-Winline]
    ic_udpifc.c:5177:3: warning: called from here [-Winline]
    ic_udpifc.c:5189:4: warning: called from here [-Winline]
    ic_udpifc.c:5073:4: warning: called from here [-Winline]
    ic_udpifc.c:5112:4: warning: called from here [-Winline]
-
-
- 12 Aug 2016, 7 commits
-
-
Committed by Heikki Linnakangas
Also rename the argument to adjust_inherited_tlist() to what it is in the upstream.
-
Committed by Heikki Linnakangas
Now that it's purely GPDB code, let's be tidy. (Running pgindent on files with mixed gpdb-specific and upstream code would easily mess up the formatting of upstream code, if a different version of pgindent is used, or if the gpdb additions have changed indentation.)
-
Committed by Heikki Linnakangas
tablecmds.c is huge, and it's inherited from upstream. This makes diffing and merging with upstream a little bit easier.
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
-
Committed by Heikki Linnakangas
To reduce the diff vs. upstream in pg_proc.h a little bit.
-
Committed by volkovandr
* Changes in gpexpand:
  - Explicitly close the connection to the database before trying to stop it. Otherwise the database won't stop, due to the active connection, and the script will fail.
  - Close the connection to the database before leaving the script. Otherwise it will not return to the command line.
-
- 11 Aug 2016, 9 commits
-
-
Committed by Heikki Linnakangas
What we store in that field is not actually a valid "varbit" datum. If you try to read it out as a varbit, it looks like a single "1" bit. That's clearly bogus, but it's also problematic for pg_upgrade, because there's no easy way to read the actual contents. There are functions in gp_toolkit for reading and decompressing it, but it's much better to keep it in compressed form in memory, as an uncompressed bitmap can be quite large. Also, there's no easy way to restore the uncompressed bitmap back into a valid visimap entry. We'll have to deal with it for the GPDB 4.3 to 5.0 upgrade, but let's fix the datatype in 5.0 so that the next upgrade will be easier. Bytea is a bit rough for this too; a custom "compressed bitmap" datatype would probably be best. But this'll do for now.
-
Committed by Adam Lee
Optimize s3key_writer to reuse the chunkbuffer and upload it to S3 via multipart uploading. Also remove curl's CURLOPT_INFILESIZE_LARGE because it's useless for s3ext.
Signed-off-by: Adam Lee <ali@pivotal.io>
Signed-off-by: Kuien Liu <kliu@pivotal.io>
-
Committed by Kuien Liu
In this test we may insert millions of rows into S3; e.g., 1M rows occupy 55MB in plain CSV format. To make sure the regression always succeeds, we need to clean up files in the remote S3 bucket first, e.g., in the pipeline: `s3cmd --recursive del s3://s3test.pivotal.io/regress/s3write`
-
Committed by Adam Lee
Signed-off-by: Peifeng Qiu <pqiu@pivotal.io>
-
Committed by Kuien Liu
When post/put/head requests of RESTfulService fail, retry them up to a configurable number of times (default 3).
Signed-off-by: Kuien Liu <kliu@pivotal.io>
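The retry policy described above can be sketched generically (Python here with a hypothetical with_retries helper; the real implementation is C++ inside s3ext's RESTfulService):

```python
def with_retries(fn, max_attempts=3):
    # Re-execute fn() up to max_attempts times (default 3, matching the
    # commit's default), re-raising the last failure if all attempts fail.
    last_exc = None
    for _ in range(max_attempts):
        try:
            return fn()
        except Exception as exc:
            last_exc = exc
    raise last_exc

calls = {"n": 0}

def flaky_request():
    # Hypothetical request that fails twice, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise IOError("transient failure")
    return 200

status = with_retries(flaky_request)
```

The point of the design is that only whole-request re-execution is attempted; a request that fails all attempts surfaces its last error to the caller.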
-
Committed by Adam Lee
getUploadId(), uploadPartOfData() and completeMultiPart().
Signed-off-by: Haozhou Wang <hawang@pivotal.io>
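The three functions named above map onto S3's three-phase multipart upload protocol: obtain an upload id, upload numbered parts (each acknowledged with an ETag), then complete the upload so S3 stitches the parts together in part-number order. A toy in-memory stand-in (hypothetical names, not the real C++ API or the S3 wire protocol):

```python
class FakeMultipartUpload:
    """In-memory stand-in illustrating S3's multipart upload phases."""

    def __init__(self):
        self.parts = {}

    def get_upload_id(self):
        # Phase 1: initiate the upload and get an id for later calls.
        return "upload-0001"

    def upload_part(self, upload_id, part_number, data):
        # Phase 2: upload one numbered part; S3 answers with an ETag.
        self.parts[part_number] = data
        return "etag-%d" % part_number

    def complete(self, upload_id, etags):
        # Phase 3: complete; parts are joined in part-number order,
        # regardless of the order in which they were uploaded.
        return b"".join(self.parts[n] for n in sorted(self.parts))

up = FakeMultipartUpload()
uid = up.get_upload_id()
etags = [up.upload_part(uid, 2, b"world"), up.upload_part(uid, 1, b"hello ")]
obj = up.complete(uid, etags)
```

Because ordering comes from part numbers rather than upload order, parts can be uploaded concurrently, which is what makes multipart uploading attractive for segment-parallel writers.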
-
Committed by Kuien Liu
Add s3key_writer, which introduces a chunkbuffer to hold intermediate rows and uploads the chunkbuffer to S3 when it is full or on the last call.
Signed-off-by: Peifeng Qiu <pqiu@pivotal.io>
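The buffering logic described above can be sketched as follows (Python rather than the extension's C++; ChunkedWriter and its parameters are illustrative names): rows accumulate in a chunk buffer, a full buffer is flushed as one part, and close() flushes whatever remains as the last, possibly short, part.

```python
class ChunkedWriter:
    def __init__(self, chunk_size, upload_part):
        self.chunk_size = chunk_size
        self.upload_part = upload_part   # callable taking one chunk (bytes)
        self.buf = bytearray()           # reused across writes

    def write(self, data):
        # Accumulate incoming rows; flush one full chunk at a time.
        self.buf.extend(data)
        while len(self.buf) >= self.chunk_size:
            self.upload_part(bytes(self.buf[:self.chunk_size]))
            del self.buf[:self.chunk_size]

    def close(self):
        # The last call uploads whatever is left, even a partial chunk.
        if self.buf:
            self.upload_part(bytes(self.buf))
            self.buf.clear()

chunks = []
w = ChunkedWriter(4, chunks.append)
w.write(b"abcdefghij")
w.close()
```

Reusing one buffer keeps memory bounded by the chunk size no matter how many rows flow through, which is the optimization the later "reuse chunkbuffer" commit builds on.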
-
Committed by Adam Lee
Signed-off-by: Adam Lee <ali@pivotal.io>
Signed-off-by: Haozhou Wang <hawang@pivotal.io>
-
Committed by Adam Lee
genUniqueKeyName() makes sure the file each segment uploads has a unique name.
Signed-off-by: Adam Lee <ali@pivotal.io>
Signed-off-by: Haozhou Wang <hawang@pivotal.io>
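The idea can be sketched like this (a hypothetical reimplementation; the real genUniqueKeyName() lives in the C++ extension and its exact naming scheme may differ): combine the common prefix, the segment id, and a random suffix so no two segments write to the same key.

```python
import uuid

def gen_unique_key_name(prefix, segment_id):
    # Hypothetical sketch: the per-segment id guarantees uniqueness across
    # segments, and the random suffix keeps repeated runs from colliding.
    return "%s.seg%d.%s" % (prefix, segment_id, uuid.uuid4().hex[:8])

names = {gen_unique_key_name("regress/s3write/data", seg) for seg in range(4)}
```

Since the segment id is part of the name, concurrent segments can never collide even if their random suffixes happened to match.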
-