Fix spilling of aggstate with compression
Previously, when creating the workfile for hash aggregates, the data for each agg state was written to the file, and after all of the agg states' data had been written, the total size was updated in the file. Updating the total size required seeking backwards to the offset where the size had been written earlier. This worked for workfiles without compression. However, with gp_workfile_compression=on the workfiles are compressed, and an attempt to seek back to an earlier offset errors out, since a compressed file is not expected to be rewound. This commit fixes the issue by first writing all the data to a buffer, so that the total size is known up front, and only then writing it to the file.

Co-Authored-By: Ashwin Agrawal <aagrawal@pivotal.io>
Signed-off-by: Ashwin Agrawal <aagrawal@pivotal.io>
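The pattern behind the fix can be sketched as follows. This is a minimal illustration, not the actual Greenplum code: the `Sink`, `sink_write`, and `spill_agg_states` names are hypothetical, and the append-only sink stands in for a compressed workfile that supports only sequential writes. The idea is to accumulate the serialized agg-state payloads in a memory buffer, so the total size can be emitted as a header before the data in a single forward pass, with no backward seek.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical append-only sink standing in for a compressed workfile:
 * it supports sequential writes only, never seeking backwards. */
typedef struct {
    uint8_t *data;
    size_t   len;
} Sink;

static void sink_write(Sink *s, const void *buf, size_t n)
{
    s->data = realloc(s->data, s->len + n);
    memcpy(s->data + s->len, buf, n);
    s->len += n;
}

/* Buffer all agg-state payloads first, so the total size is known
 * up front and can be written ahead of the data in one forward pass. */
static void spill_agg_states(Sink *file, const char **states, int nstates)
{
    uint8_t *buf = NULL;
    size_t   total = 0;

    /* Stage every agg state's data in a memory buffer. */
    for (int i = 0; i < nstates; i++) {
        size_t n = strlen(states[i]);
        buf = realloc(buf, total + n);
        memcpy(buf + total, states[i], n);
        total += n;
    }

    /* Now the total size is known: write header, then payload,
     * strictly forward -- safe for a compressed file. */
    uint64_t size64 = (uint64_t) total;
    sink_write(file, &size64, sizeof(size64));
    sink_write(file, buf, total);
    free(buf);
}
```

Compare this with the old scheme, which wrote a placeholder size, streamed the data, and then seeked back to patch the size; that last step is exactly what a compressed workfile cannot support.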