- 16 Aug 2016, 2 commits
-
-
Submitted by Pengzhou Tang
-
Submitted by yezhiweicmss
* Fix gpexpand worker-thread hang. When using Python 2.7, gpexpand hangs after the command "gpexpand -D gpadmin" is done: sys.exit calls thread.join(), which waits for each thread to exit, but the sub-thread Worker is still blocked trying to get from the WorkerPool queue. With Python 2.6.2 it's OK; maybe the Python threading module changed somehow.
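The hang happens because sys.exit() joins non-daemon threads while Workers still block in Queue.get(). A minimal, hypothetical Python sketch of the general fix idea (poison-pill sentinels so every blocked get() returns); the names are illustrative, not the actual gpexpand WorkerPool API:

```python
import queue
import threading

def worker(q, results):
    # Block on the pool queue; a None sentinel tells the thread to exit.
    # Without the sentinel, q.get() blocks forever and interpreter
    # shutdown (sys.exit joining each non-daemon thread) hangs.
    while True:
        item = q.get()
        if item is None:
            break
        results.append(item * 2)

q = queue.Queue()
results = []
threads = [threading.Thread(target=worker, args=(q, results)) for _ in range(4)]
for t in threads:
    t.start()
for i in range(10):
    q.put(i)
for _ in threads:
    q.put(None)  # one sentinel per worker so every blocked get() returns
for t in threads:
    t.join()     # completes because each worker has exited
```

Marking the Worker threads as daemon threads would be the other common way out, at the cost of skipping any cleanup they do.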
-
- 15 Aug 2016, 1 commit
-
-
Submitted by Heikki Linnakangas
-
- 13 Aug 2016, 2 commits
-
-
Submitted by Heikki Linnakangas
The functions in memtuple.c were only called when building a column binding struct. That's not a hot path, as that only gets done once, when starting the executor, not at every row. handleAckedPacket() is probably in a somewhat hot path, as it's called for every received Ack interconnect packet. However, all the calls to it are straight unconditional jumps, so branch prediction should make those very cheap. Inlining them would increase the code size quite a bit, so the compiler is probably making a good choice by refusing to inline them. Let's not try to force it. I'm seeing a few more warnings like this from tqual.c and tuplesort.c, but those are in upstream code, so I'm reluctant to just remove the inline attributes there. Should do something about those too, to reduce the noise, but let's at least get these straightforward gpdb-specific cases fixed. For reference, these are the warnings I was getting:

memtuple.c: In function ‘create_col_bind’:
memtuple.c:122:20: warning: inlining failed in call to ‘add_null_save_aligned’: call is unlikely and code size would grow [-Winline]
 static inline bool add_null_save_aligned(MemTupleAttrBinding *bind, short *null_save_aligned, int i, char next_attr_align)
 ^
memtuple.c:241:10: warning: called from here [-Winline]
 if (add_null_save_aligned(previous_bind, colbind->null_saves_aligned, physical_col - 1, 'd'))
 ^
memtuple.c:122:20: warning: inlining failed in call to ‘add_null_save_aligned’: call is unlikely and code size would grow [-Winline]
 static inline bool add_null_save_aligned(MemTupleAttrBinding *bind, short *null_save_aligned, int i, char next_attr_align)
 ^
memtuple.c:270:10: warning: called from here [-Winline]
 if (add_null_save_aligned(previous_bind, colbind->null_saves_aligned, physical_col - 1, 'i'))
 ^
memtuple.c:122:20: warning: inlining failed in call to ‘add_null_save_aligned’: call is unlikely and code size would grow [-Winline]
 static inline bool add_null_save_aligned(MemTupleAttrBinding *bind, short *null_save_aligned, int i, char next_attr_align)
 ^
memtuple.c:308:10: warning: called from here [-Winline]
 if (add_null_save_aligned(previous_bind, colbind->null_saves_aligned, physical_col - 1, 's'))
 ^
memtuple.c:122:20: warning: inlining failed in call to ‘add_null_save_aligned’: call is unlikely and code size would grow [-Winline]
 static inline bool add_null_save_aligned(MemTupleAttrBinding *bind, short *null_save_aligned, int i, char next_attr_align)
 ^
memtuple.c:354:10: warning: called from here [-Winline]
 if (add_null_save_aligned(previous_bind, colbind->null_saves_aligned, physical_col - 1, 'c'))
 ^
ic_udpifc.c: In function ‘handleAcks.isra.25’:
ic_udpifc.c:4370:1: warning: inlining failed in call to ‘handleAckedPacket’: call is unlikely and code size would grow [-Winline]
 handleAckedPacket(MotionConn *ackConn, ICBuffer *buf, uint64 now)
 ^
ic_udpifc.c:5177:3: warning: called from here [-Winline]
 handleAckedPacket(conn, buf, now);
 ^
ic_udpifc.c:4370:1: warning: inlining failed in call to ‘handleAckedPacket’: call is unlikely and code size would grow [-Winline]
 handleAckedPacket(MotionConn *ackConn, ICBuffer *buf, uint64 now)
 ^
ic_udpifc.c:5189:4: warning: called from here [-Winline]
 handleAckedPacket(conn, buf, now);
 ^
ic_udpifc.c:4370:1: warning: inlining failed in call to ‘handleAckedPacket’: call is unlikely and code size would grow [-Winline]
 handleAckedPacket(MotionConn *ackConn, ICBuffer *buf, uint64 now)
 ^
ic_udpifc.c:5073:4: warning: called from here [-Winline]
 handleAckedPacket(conn, buf, now);
 ^
ic_udpifc.c:4370:1: warning: inlining failed in call to ‘handleAckedPacket’: call is unlikely and code size would grow [-Winline]
 handleAckedPacket(MotionConn *ackConn, ICBuffer *buf, uint64 now)
 ^
ic_udpifc.c:5112:4: warning: called from here [-Winline]
 handleAckedPacket(conn, buf, now);
 ^
-
-
- 12 Aug 2016, 7 commits
-
-
Submitted by Heikki Linnakangas
Also rename the argument to adjust_inherited_tlist() to what it is in the upstream.
-
Submitted by Heikki Linnakangas
Now that it's purely GPDB code, let's be tidy. (Running pgindent on files with mixed gpdb-specific and upstream code would easily mess up the formatting of upstream code, if a different version of pgindent is used, or if the gpdb-additions have changed indentation.)
-
Submitted by Heikki Linnakangas
tablecmds.c is huge, and it's inherited from upstream. This makes diffing and merging with upstream a little bit easier.
-
Submitted by Heikki Linnakangas
-
Submitted by Heikki Linnakangas
-
Submitted by Heikki Linnakangas
To reduce the diff vs. upstream in pg_proc.h a little bit.
-
Submitted by volkovandr
* Changes in gpexpand:
- Explicitly close the connection to the database before trying to stop it. Otherwise the database won't stop because of the active connection, and the script will fail.
- Close the connection to the database before leaving the script. Otherwise it will not return to the command line.
-
- 11 Aug 2016, 20 commits
-
-
Submitted by Heikki Linnakangas
What we store in that field is not actually a valid "varbit" datum. If you try to read it out as a varbit, it looks like a single "1" bit. That's clearly bogus, but it's also problematic for pg_upgrade, because there's no easy way to read the actual contents. There are functions in gp_toolkit for reading it and decompressing it out, but it'd be much better to keep it in compressed form in memory, as an uncompressed bitmap can be quite large. Also, there's no easy way to restore the uncompressed bitmap back into a valid visimap entry. We'll have to deal with it for the GPDB 4.3 to 5.0 upgrade, but let's fix the datatype in 5.0 so that the next upgrade will be easier. Bytea is a bit rough for this too. A custom "compressed bitmap" datatype would probably be best. But this'll do for now.
-
Submitted by Adam Lee
Optimize s3key_writer to reuse the chunk buffer and upload it to S3 via multipart uploading. Also remove CURLOPT_INFILESIZE_LARGE from the curl options because it's useless for s3ext.
Signed-off-by: Adam Lee <ali@pivotal.io>
Signed-off-by: Kuien Liu <kliu@pivotal.io>
-
Submitted by Kuien Liu
In this test we may insert millions of rows into S3; e.g., 1M rows occupy 55MB in plain CSV format. To make sure the regression always succeeds, we need to clean the files in the remote S3 bucket first, e.g., in the pipeline: `s3cmd --recursive del s3://s3test.pivotal.io/regress/s3write`
-
Submitted by Adam Lee
Signed-off-by: Peifeng Qiu <pqiu@pivotal.io>
-
Submitted by Kuien Liu
When post/put/head requests of RESTfulService fail, re-execute them up to a configurable number of times (default 3).
Signed-off-by: Kuien Liu <kliu@pivotal.io>
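The retry behavior above can be sketched as follows; `with_retries` and `request_fn` are illustrative names only, since the real retry logic lives in the s3ext C++ RESTfulService code:

```python
def with_retries(request_fn, max_attempts=3):
    """Re-execute a failing request up to max_attempts (default 3) times.

    Illustrative sketch only; not the actual s3ext API. Returns the
    first successful result, or re-raises the last failure once all
    attempts are exhausted.
    """
    last_exc = None
    for _ in range(max_attempts):
        try:
            return request_fn()
        except IOError as exc:  # stand-in for a transport-level failure
            last_exc = exc
    raise last_exc
```

A production version would typically also back off between attempts and only retry errors that are plausibly transient.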
-
Submitted by Adam Lee
getUploadId(), uploadPartOfData() and completeMultiPart()
Signed-off-by: Haozhou Wang <hawang@pivotal.io>
-
Submitted by Kuien Liu
Add s3key_writer, which introduces a chunk buffer to hold intermediate rows and uploads the buffer to S3 when it is full or on the last call.
Signed-off-by: Peifeng Qiu <pqiu@pivotal.io>
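The chunk-buffer idea described above can be sketched like this; `ChunkedWriter` and `upload_part` are illustrative stand-ins for the s3key_writer internals and the S3 multipart-upload call, not the real interface:

```python
class ChunkedWriter:
    """Buffer incoming rows; upload a part whenever the buffer fills,
    and flush whatever remains on the last call."""

    def __init__(self, chunk_size, upload_part):
        self.chunk_size = chunk_size
        self.upload_part = upload_part  # callable taking one bytes part
        self.buf = bytearray()

    def write(self, data, last_call=False):
        self.buf.extend(data)
        # Upload full chunks as soon as they are available, reusing
        # the same buffer object instead of allocating per write.
        while len(self.buf) >= self.chunk_size:
            self.upload_part(bytes(self.buf[:self.chunk_size]))
            del self.buf[:self.chunk_size]
        # The final call flushes any remaining partial chunk.
        if last_call and self.buf:
            self.upload_part(bytes(self.buf))
            self.buf.clear()
```

Note that S3 multipart uploads require every part except the last to meet a minimum size, which is why the partial chunk is only flushed on the last call.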
-
Submitted by Adam Lee
Signed-off-by: Adam Lee <ali@pivotal.io>
Signed-off-by: Haozhou Wang <hawang@pivotal.io>
-
Submitted by Adam Lee
genUniqueKeyName() makes sure the file each segment uploads has a unique name.
Signed-off-by: Adam Lee <ali@pivotal.io>
Signed-off-by: Haozhou Wang <hawang@pivotal.io>
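The uniqueness requirement can be sketched like this; the name format below is a guess for illustration only, not what genUniqueKeyName() actually produces:

```python
import hashlib
import os

def gen_unique_key_name(prefix, segment_id):
    # Combine the configured key prefix, the segment id, and a random
    # token so concurrently uploading segments never collide on S3.
    token = hashlib.md5(os.urandom(16)).hexdigest()[:8]
    return "%s.%d.%s" % (prefix, segment_id, token)
```

The segment id alone would suffice for one load, but the random token also keeps repeated loads from overwriting each other's objects.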
-
Submitted by Kuien Liu
Signed-off-by: Kuien Liu <kliu@pivotal.io>
-
Submitted by Adam Lee
Signed-off-by: Adam Lee <ali@pivotal.io>
Signed-off-by: Haozhou Wang <hawang@pivotal.io>
-
Submitted by Kuien Liu
1. Add PUT functions in the S3RESTfulService class.
2. Add dummyHTTPServer.py to handle PUT requests.
3. Add unit tests (those that depend on dummyHTTPServer are disabled).
Signed-off-by: Peifeng Qiu <pqiu@pivotal.io>
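A minimal sketch, in the spirit of dummyHTTPServer.py, of a test server that accepts PUT requests (Python 3 stdlib here; the actual script's interface may differ):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class PutHandler(BaseHTTPRequestHandler):
    """Store PUT bodies in memory so unit tests can verify uploads."""

    store = {}  # shared across requests: path -> body bytes

    def do_PUT(self):
        length = int(self.headers.get('Content-Length', 0))
        self.store[self.path] = self.rfile.read(length)
        self.send_response(200)
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep test output quiet
```

Binding `HTTPServer(('127.0.0.1', 0), PutHandler)` to port 0 lets the OS pick a free port, which avoids port clashes when tests run in parallel.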
-
Submitted by Adam Lee
Only creates the files; not functional yet.
Signed-off-by: Adam Lee <ali@pivotal.io>
Signed-off-by: Haozhou Wang <hawang@pivotal.io>
-
Submitted by Haozhou Wang
1. Add URL checking messages for gpcheckcloud.
2. List all bucket content with the '-c' option.
3. Add clear config-checking messages for gpcheckcloud.
Signed-off-by: Haozhou Wang <hawang@pivotal.io>
Signed-off-by: Adam Lee <ali@pivotal.io>
-
Submitted by Adam Lee
-
Submitted by Adam Lee
-
Submitted by Adam Lee
Use `openssl md5` instead of `md5sum` for compatibility. Also define the array in a way that makes it easier to update.
-
Submitted by Marbin Tan
-
Submitted by Marbin Tan
-
Submitted by Jimmy Yih
TINC is an internal Pivotal test framework which is used for testing Greenplum. These regression tests are used regularly to validate internal and external commits. With this commit, nearly all Greenplum test code will be available for public usage.
-
- 10 Aug 2016, 2 commits
-
-
Submitted by zhaoanan
-
Submitted by Marc Spehlmann
This fixes the naming in c85f858e. A new feature of ORCA is to handle array constraints more efficiently. It includes a new preprocessing stage and a new way of internally representing array constraints. This feature can be enabled with this GUC.
-
- 09 Aug 2016, 6 commits
-
-
Submitted by Haisheng Yuan
Orca couldn't pick a plan that uses an index scan for the following cases:
select * from btree_tbl where a in (1,2); --> Orca generated a table scan instead of an index scan
select * from bitmap_tbl where a in (1,2); --> Orca generated a table scan instead of a bitmap scan
Orca failed to consider ArrayComp expressions when trying to pick an index. This patch fixes the issue. Closes #993
-
Submitted by Ashwin Agrawal
-
Submitted by Ashwin Agrawal
High-level theory of the issue: if a checkpoint happens after the COMMIT is recorded in the clog, xactHashTable won't know the status of the xact during recovery, since no REDO records corresponding to it would be looked at. But it is incorrect to ABORT the xact based on its CREATE_PENDING entry in the persistent tables without consulting the CLOG; we should check the CLOG to verify whether it was COMMITTED. If we don't perform this check, recovery would try to mark a COMMITTED xact as aborted on seeing the Create-Pending entry associated with it, and double fault. There is no repro, as no scenario could be found in which this happens, but it was seen in the field, so it's better to add protection to avoid the double fault.
-
We just need to pass the query tree; we don't need the source SQL text and other arguments, so change this to QueryRewrite. Previously an implicit cast was causing a compiler warning. Also, pg_analyze_and_rewrite is overkill here, and in fact calls QueryRewrite.
-
Submitted by Marc Spehlmann
A new feature of ORCA is to handle array constraints more efficiently. It includes a new preprocessing stage and a new way of internally representing array constraints. This feature can be enabled with this GUC.
-
Submitted by Chumki Roy
-