    [SPARK-19112][CORE] Support for ZStandard codec · 444bce1c
    Committed by Sital Kedia
    ## What changes were proposed in this pull request?
    
    Using zstd compression for Spark jobs spilling 100s of TBs of data, we could reduce the amount of data written to disk by as much as 50%. This translates to a significant latency gain because of reduced disk I/O operations. There is a 2-5% degradation in CPU time because of zstd compression overhead, but for jobs that are bottlenecked by disk I/O, this hit is acceptable.
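    Once this change is in place, the codec can be selected through Spark's I/O compression setting. A minimal sketch of the relevant `spark-defaults.conf` entries (the tuning keys are assumptions that may vary by Spark version):

    ```properties
    # Use zstd instead of the default lz4 for shuffle/spill compression
    spark.io.compression.codec            zstd
    # Optional tuning knobs (assumed names; check your Spark version's docs)
    spark.io.compression.zstd.level       1
    spark.io.compression.zstd.bufferSize  32k
    ```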
    
    ## Benchmark
    Please note that this benchmark uses a real-world, compute-heavy production workload spilling TBs of data to disk.
    
    |         | zstd performance as compared to LZ4   |
    | ------------- | -----:|
    | spill/shuffle bytes    | -48% |
    | cpu time    |    + 3% |
    | cpu reservation time       |    -40%|
    | latency     |     -40% |
    
    ## How was this patch tested?
    
    Tested by running a few jobs spilling large amounts of data on the cluster; the amount of intermediate data written to disk was reduced by as much as 50%.
    
    Author: Sital Kedia <skedia@fb.com>
    
    Closes #18805 from sitalkedia/skedia/upstream_zstd.