- 29 June 2016, 7 commits
-
-
Submitted by Nikos Armenatzoglou
Signed-off-by: Karthikeyan Jambu Rajaraman <karthi.jrk@gmail.com>
-
-
Submitted by Nikos Armenatzoglou
* Verify that the input expression is of type VAR <= CONST
* Codegen expression evaluation
* Fix wrong return value type
* Support constants and variables of Date type
* Add comments and fix code style
-
Submitted by Nikos Armenatzoglou
This reverts commit aae0ad3d.
-
Submitted by Shreedhar Hardikar
Replace RegisterExternalFunction with GetOrRegisterExternalFunction using an unordered_map in codegen_utils
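The get-or-register pattern described above can be sketched with a simplified registry keyed by function name. This is a hedged illustration only: the real codegen_utils API deals in LLVM function declarations, and the type and member names below are hypothetical.

```cpp
#include <string>
#include <unordered_map>

// Hypothetical stand-in for the registered function handle; in the real
// code this would be an llvm::Function*.
struct ExternalFunctionInfo {
  std::string name;
  void* address;
};

class FunctionRegistry {
 public:
  // Returns the existing registration for `name` if present; otherwise
  // registers `address` under `name` and returns the new entry. This makes
  // repeated registration of the same external function idempotent.
  const ExternalFunctionInfo& GetOrRegisterExternalFunction(
      const std::string& name, void* address) {
    auto it = registry_.find(name);
    if (it == registry_.end()) {
      it = registry_.emplace(name, ExternalFunctionInfo{name, address}).first;
    }
    return it->second;
  }

 private:
  // unordered_map gives O(1) average lookup and stable references to
  // entries, so returned references stay valid across later insertions.
  std::unordered_map<std::string, ExternalFunctionInfo> registry_;
};
```

Because std::unordered_map never invalidates references to existing elements on insertion, callers can safely hold the returned reference while registering further functions.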
-
Submitted by Shreedhar Hardikar
* Derive GpCodegenUtils from CodegenUtils for GPDB-specific utilities.
* Move ElogWrapper functions to GpCodegenUtils.
* Implement GetOrRegisterExternalFunction.
-
Submitted by Marbin Tan
* The user can now update the DDBOOST_CONFIG storage unit; this needs to be reflected in the help file.
-
- 28 June 2016, 12 commits
-
-
Submitted by Adam Lee
Root cause: DecompressReader requests data from the upstream reader, which might return a very small amount of data that zlib's inflate() cannot decompress.
Fix: fill zlib's chunk as much as possible.
Test case: DecompressReaderTest.AbleToDecompressFragmentalCompressedData
Also refactor DecompressReader::read().
Signed-off-by: Peifeng Qiu <pqiu@pivotal.io>
Signed-off-by: Adam Lee <ali@pivotal.io>
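The "fill the chunk as much as possible" fix can be illustrated with a sketch: a loop that keeps pulling from an upstream source (which may return only a few bytes per call) until the buffer is full or the upstream is exhausted. The class and function names here are illustrative, not the actual DecompressReader API, and the fragment size of the mock upstream is arbitrary.

```cpp
#include <algorithm>
#include <cstring>
#include <vector>

// Illustrative upstream: returns only a few bytes per call, simulating a
// reader that hands back tiny fragments of a compressed stream.
struct UpstreamReader {
  std::vector<char> data;
  size_t offset = 0;
  size_t read(char* buf, size_t len) {
    size_t n = std::min(len, data.size() - offset);
    n = std::min(n, static_cast<size_t>(3));  // tiny fragments
    std::memcpy(buf, data.data() + offset, n);
    offset += n;
    return n;
  }
};

// Fill `buf` with up to `len` bytes, looping until full or upstream EOF.
// Handing a decompressor (e.g. zlib's inflate) a full chunk lets it make
// progress instead of failing on a fragment too small to decompress.
size_t fillChunk(UpstreamReader& upstream, char* buf, size_t len) {
  size_t total = 0;
  while (total < len) {
    size_t n = upstream.read(buf + total, len - total);
    if (n == 0) break;  // upstream exhausted
    total += n;
  }
  return total;
}
```

A single upstream read here would return at most 3 bytes; the loop accumulates the whole stream before the caller hands it to the decompressor.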
-
Submitted by Adam Lee
-
Submitted by Daniel Gustafsson
The functions GetPartitionTargetlist() and UpdateGenericExprState() are not called from anywhere, so remove them.
-
Submitted by Kenan Yao
of meaningless code.
-
Submitted by Kenan Yao
-
Submitted by Kenan Yao
-
Submitted by Daniel Gustafsson
The correct name is gp_set_proc_affinity, not gp_proc_affinity. Spotted by Alexey Grishchenko.
-
Submitted by Kenan Yao
-
Submitted by Kenan Yao
-
-
-
Submitted by Andreas Scherbaum
Close #883
-
- 27 June 2016, 2 commits
-
-
Submitted by Kenan Yao
in newly added code.
-
- 25 June 2016, 13 commits
-
-
Submitted by Tristan Su
-
Submitted by Shreedhar Hardikar
-
Submitted by Shreedhar Hardikar
-
Submitted by Shreedhar Hardikar
-
Submitted by Shreedhar Hardikar
-
Submitted by Shreedhar Hardikar
-
Submitted by Shreedhar Hardikar
-
Submitted by Shreedhar Hardikar
-
Submitted by Shreedhar Hardikar
Create GpcodegenBuild, GporcaBuild and GporcacodegenBuild builds to support building GPDB with codegen, orca, and orca_and_codegen.
-
Submitted by Shreedhar Hardikar
-
Submitted by Nikos Armenatzoglou
-
Submitted by George Caragea
When we execute an ALTER TABLE statement that splits the default partition into a "New" partition and a new default partition, we perform the following five steps:
* copy the tuples located in the default partition into a temporary table,
* drop the default partition,
* create the "New" partition,
* create the new DEFAULT partition, and
* copy all tuples from the temporary table into the newly created partitions.
The last step is executed in split_rows, where for each tuple in the temporary table:
* we determine the target partition/table,
* construct the slot of the target table (if it is NULL), and
* store the tuple in the target table.
Currently, we allocate the tuple table slot of the target table in a per-tuple memory context. If the size of that memory context exceeds 50KB, we reset it. This causes issues, since it frees the target table's slot, and at the next iteration (copying the next tuple) we try to access the attributes of a freed (and non-NULL) slot. In addition, the target table slot's lifespan is much longer than an individual tuple's, so it is not correct to allocate the slot in a per-tuple memory context.
To solve this issue, we allocate the target's tuple table slot in PortalHeapMemory. This is the CurrentMemoryContext used in split_rows before we start copying tuples. PortalHeapMemory is already used for storing the tuple table slot of the temporary table and the ResultRelInfo of the target tables, and it is not freed while we copy tuples. After copying all tuples to the new partitions, we drop the target tables' slots.
Signed-off-by: Nikos Armenatzoglou <nikos.armenatzoglou@gmail.com>
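The lifetime mismatch above can be sketched with a toy arena allocator standing in loosely for PostgreSQL memory contexts; the real MemoryContext API is not reproduced here, and all names in this sketch are illustrative. Allocating the long-lived slot from the long-lived arena lets it survive per-tuple resets:

```cpp
#include <cstring>
#include <string>
#include <vector>

// Toy arena: allocations live until reset() frees them all at once,
// loosely mimicking a PostgreSQL memory context.
class Arena {
 public:
  char* alloc(size_t n) {
    blocks_.push_back(std::vector<char>(n));
    return blocks_.back().data();
  }
  void reset() { blocks_.clear(); }  // frees everything allocated so far
 private:
  std::vector<std::vector<char>> blocks_;
};

// Copy loop: the slot for the target table must outlive each tuple, so it
// is allocated from the long-lived arena, not the per-tuple one, which is
// reset after every iteration.
size_t copyTuples(const std::vector<std::string>& tuples,
                  Arena& longLived, Arena& perTuple) {
  char* slot = longLived.alloc(64);  // survives per-tuple resets
  size_t copied = 0;
  for (size_t i = 0; i < tuples.size(); ++i) {
    char* scratch = perTuple.alloc(tuples[i].size());  // per-tuple work
    std::memcpy(scratch, tuples[i].data(), tuples[i].size());
    std::memcpy(slot, scratch, tuples[i].size());  // slot is still valid
    ++copied;
    perTuple.reset();  // analogous to resetting the per-tuple context
  }
  return copied;
}
```

Had the slot been taken from perTuple instead, the first reset() would free it while the loop still holds the pointer, which is the bug the commit fixes.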
-
Submitted by Zak Auerbach
* Add a section to the README with info on how to consume the Docker image.
* The Docker image automatically starts a demo cluster and sources the relevant env files.
-
- 24 June 2016, 6 commits
-
-
Submitted by Dave Cramer
-
Submitted by Peifeng Qiu
1. Add a global error message variable to report to the GPDB console.
2. Add a try/catch block around the fetchData() call to catch error messages.
3. Refactor the shared error mechanism between ChunkBuffers.
4. Fix unit tests.
Also fixed a memleak, thanks to Kuien.
Signed-off-by: Haozhou Wang <hawang@pivotal.io>
Signed-off-by: Kuien Liu <kliu@pivotal.io>
-
Submitted by Kuien Liu
Use the new GPReader and clean it up.
Signed-off-by: Kuien Liu <kliu@pivotal.io>
-
Submitted by Peifeng Qiu
1. Add s3common_reader to detect the file format: if the file is compressed, call decompressReader; otherwise call S3KeyReader directly.
2. Add checkCompressionType() to S3Interface.
3. Fix a bug when the file size is less than S3_MAGIC_BYTES_NUM, and add a unit test.
4. Fix a bug when chunkSize is zero.
5. Add unit tests.
6. Rename uncompress* to decompress*.
Signed-off-by: Haozhou Wang <hawang@pivotal.io>
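Format detection by magic bytes can be sketched as follows. The gzip magic number (0x1f 0x8b, per RFC 1952) is standard, but the function signature and enum here are an illustrative sketch, not the actual S3Interface API:

```cpp
#include <cstddef>

enum CompressionType { COMPRESSION_NONE, COMPRESSION_GZIP };

// Inspect the first bytes of a file to decide which reader to build:
// a decompressing reader for gzip input, a plain key reader otherwise.
// Files shorter than the magic sequence are treated as uncompressed,
// which covers the "size less than S3_MAGIC_BYTES_NUM" edge case.
CompressionType checkCompressionType(const unsigned char* head, size_t len) {
  if (len >= 2 && head[0] == 0x1f && head[1] == 0x8b) {
    return COMPRESSION_GZIP;
  }
  return COMPRESSION_NONE;
}
```

Guarding on `len` before reading the bytes is what prevents an out-of-bounds read on tiny files.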
-
Submitted by Adam Lee
Two major problems this commit fixes:
* the credential should be copied to keep the data, and
* keyReader must be reset before every new file download.
Signed-off-by: Kuien Liu <kliu@pivotal.io>
-
Submitted by Peifeng Qiu
clang on macOS does not check the -std=c++98 flag strictly, so code that compiles there may fail on other platforms such as Red Hat 7 (using gcc). The compile errors have been fixed and the tests pass on Red Hat 7 with C++98.
1. std::vector<T>::shrink_to_fit was added in C++11.
2. <stdexcept> must be included to use std::runtime_error.
3. In C++98, a class with a reference member cannot be copy-assigned by default; operator= must be implemented explicitly.
4. stoi was added in C++11.
Signed-off-by: Haozhou Wang <hawang@pivotal.io>
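Two of the C++98 workarounds listed above can be sketched directly (the function names are illustrative): the copy-and-swap trick replaces C++11's shrink_to_fit, and strtol replaces stoi.

```cpp
#include <cstdlib>
#include <string>
#include <vector>

// C++98 replacement for std::vector::shrink_to_fit (added in C++11):
// swapping with a copy sized to the current contents drops excess capacity.
void shrinkToFit(std::vector<int>& v) {
  std::vector<int>(v).swap(v);
}

// C++98 replacement for std::stoi (added in C++11), using strtol.
// Note: unlike stoi, this does not throw on malformed input.
int toInt(const std::string& s) {
  return static_cast<int>(std::strtol(s.c_str(), NULL, 10));
}
```

The standard does not guarantee the resulting capacity after the swap trick, only that the elements are preserved; in practice implementations shrink the buffer to fit.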
-