1. 21 Jul 2023: 6 commits
    • lib/uzlib: Add a source_read_data var to pass to source_read_cb. · e6c290c3
      Jim Mussared authored
      This provides better abstraction for users of this API; a usage sketch follows this entry.
      Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
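      Below is a minimal sketch of the callback-with-context pattern this
      commit introduces. The field names source_read_cb and source_read_data
      come from the commit title; the callback signature and the
      file-reading callback itself are assumptions, not necessarily uzlib's
      verbatim API.

          /* Hypothetical usage sketch; field names assumed from the commit title. */
          #include <stdio.h>
          #include "uzlib.h"

          /* Assumed callback shape: pull one byte from the FILE* that was
           * registered via source_read_data; return -1 at end of input. */
          static int file_read_cb(void *data) {
              int c = fgetc((FILE *)data);
              return c == EOF ? -1 : c;
          }

          void setup_stream(struct uzlib_uncomp *d, FILE *f) {
              d->source = NULL;          /* no in-memory source buffer */
              d->source_limit = NULL;
              d->source_read_data = f;   /* opaque context for the callback */
              d->source_read_cb = file_read_cb;
          }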
    • lib/uzlib/defl_static: Optimize zlib_start/finish_block. · 7f16bfca
      Jim Mussared authored
      Collapsing the two adjacent calls to outbits saves 32 bytes (see the sketch after this entry).
      
      Bringing defl_static.c into lz77.c allows better inlining, which saves a further 24 bytes.
      
      Merge the Outbuf/uzlib_lz77_state_t structs, a minor simplification that
      doesn't change code size.
      
      This work was funded through GitHub Sponsors.
      Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
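      Why collapsing adjacent outbits calls is safe: DEFLATE emits bits
      least-significant first, so the second value can be shifted up by the
      first call's bit count and both written in one pass. Below is a
      self-contained demonstration with a stub outbits; the BFINAL/BTYPE
      values are the usual start-of-block bits, but the exact call sites
      collapsed by this commit are an assumption.

          #include <stdio.h>

          /* Stub with the same shape as defl_static.c's outbits(): write
           * the low nbits bits of bits, least-significant first. */
          static void outbits(const char *tag, unsigned bits, int nbits) {
              printf("%s:", tag);
              for (int i = 0; i < nbits; i++) {
                  printf(" %u", (bits >> i) & 1);
              }
              printf("\n");
          }

          int main(void) {
              /* Two calls: BFINAL=0 (1 bit), then BTYPE=01 (2 bits). */
              outbits("two calls", 0, 1);
              outbits("two calls", 1, 2);
              /* One collapsed call emitting the same three bits. */
              outbits("collapsed", 0 | (1 << 1), 1 + 2);
              return 0;
          }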
    • lib/uzlib/tinflate: Implement more compact lookup tables. · ef5061fe
      Jim Mussared authored
      Saves 68 bytes on PYBV11; one possible table-packing scheme is sketched after this entry.
      
      This work was funded through GitHub Sponsors.
      Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
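      The commit message doesn't spell out the new encoding, but a common
      way to shrink DEFLATE decode tables is to store only the per-symbol
      extra-bit counts and rebuild the base values at run time from the
      recurrence base[i+1] = base[i] + (1 << extra[i]). Whether this matches
      ef5061fe's exact scheme is an assumption; the sketch below just
      demonstrates the technique on the distance codes of RFC 1951.

          #include <stdint.h>
          #include <stdio.h>

          /* Extra bits for the 30 DEFLATE distance codes (RFC 1951). */
          static const uint8_t dist_extra[30] = {
              0, 0, 0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6,
              7, 7, 8, 8, 9, 9, 10, 10, 11, 11, 12, 12, 13, 13,
          };

          int main(void) {
              uint32_t base = 1; /* distance code 0 starts at distance 1 */
              for (int i = 0; i < 30; i++) {
                  printf("code %2d: base %5u, %2u extra bits\n",
                         i, (unsigned)base, dist_extra[i]);
                  base += 1u << dist_extra[i]; /* no base table needed */
              }
              return 0;
          }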
    • lib/uzlib: Combine zlib/gzip header parsing to allow auto-detect. · d75a3cd8
      Jim Mussared authored
      This supports `wbits` values between +40 and +47, which select automatic detection of the header type; a header-sniffing sketch follows this entry.
      
      This work was funded through GitHub Sponsors.
      Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
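      Distinguishing the two header types needs only the first two bytes:
      gzip streams start with the magic 0x1f 0x8b (RFC 1952), while a zlib
      header has CM = 8 (deflate) and an FCHECK field that makes the 16-bit
      CMF/FLG value divisible by 31 (RFC 1950). A standalone sketch of that
      check, not uzlib's actual parser:

          #include <stdint.h>

          enum stream_kind { KIND_UNKNOWN, KIND_ZLIB, KIND_GZIP };

          /* Classify a stream from its first two header bytes. */
          static enum stream_kind detect_header(uint8_t b0, uint8_t b1) {
              if (b0 == 0x1f && b1 == 0x8b) {
                  return KIND_GZIP; /* gzip magic bytes */
              }
              if ((b0 & 0x0f) == 8 /* CM = 8 means deflate */
                  && (((unsigned)b0 << 8) + b1) % 31 == 0) {
                  return KIND_ZLIB; /* FCHECK: CMF*256+FLG % 31 == 0 */
              }
              return KIND_UNKNOWN;
          }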
    • lib/uzlib: Clean up tinf -> uzlib rename. · c2b8e6e5
      Jim Mussared authored
      This library used a mix of "tinf" and "uzlib" to refer to itself.  Remove
      all use of "tinf" in the public API.
      
      This work was funded through GitHub Sponsors.
      Signed-off-by: Jim Mussared <jim.mussared@gmail.com>
    • lib/uzlib: Add memory-efficient, streaming LZ77 compression support. · c4feb806
      Damien George authored
      The compression algorithm implemented in this commit uses much less
      memory than the standard implementation, which uses a hash table and
      a large look-back window. In particular, the algorithm here doesn't
      allocate a hash table to store indices into the history of previously
      seen text. Instead it does a brute-force search of the history text
      to find a match for the compressor. This is slower (linear search vs
      hash-table lookup), but with a small enough history (e.g. 512 bytes)
      it's not that slow, and a small history does not hurt the compression
      ratio too much.
      
      To give some more concrete numbers comparing memory use between the
      approaches:
      
      - Standard approach: in-place compression; all text to compress must
        be in RAM (or at least memory-addressable), plus an additional 16k
        bytes of RAM for hash-table pointers, which point into the text.
      
      - The approach in this commit: streaming compression; only a limited
        amount of previous text must be in RAM (user-selectable, defaults
        to 512 bytes).
      
      To compress, say, 1k of data, the standard approach requires all that data
      to be in RAM, plus an additional 16k of RAM for the hash table pointers.
      With this commit, you only need the 1k of data in RAM.  Or if it's
      streaming from a file (or elsewhere), you could get away with only 256
      bytes of RAM for the sliding history and still get very decent compression.
      
      In summary: because the standard algorithm needs so much RAM that
      it's not really suitable for microcontrollers, the approach taken in
      this commit is to minimise RAM usage as much as possible while still
      keeping acceptable performance (speed and compression ratio). A
      sketch of the brute-force search loop follows this entry.
      Signed-off-by: Damien George <damien@micropython.org>
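      For concreteness, here is a minimal, self-contained sketch of the
      kind of brute-force match search described above: scan a small
      sliding history linearly for the longest match instead of maintaining
      a hash table. All names and limits are illustrative assumptions, not
      uzlib's actual implementation.

          #include <stddef.h>

          #define HIST_SIZE 512 /* small, user-selectable history window */
          #define MIN_MATCH 3   /* shortest match worth encoding */
          #define MAX_MATCH 258 /* DEFLATE's maximum match length */

          /* Find the longest match for src[pos..] within the preceding
           * history. Returns the match length and writes its distance to
           * *dist; a return of 0 means "emit a literal byte instead". */
          static size_t find_match(const unsigned char *src, size_t pos,
                                   size_t len, size_t *dist) {
              size_t best_len = 0;
              size_t start = pos > HIST_SIZE ? pos - HIST_SIZE : 0;
              for (size_t i = start; i < pos; i++) { /* linear search */
                  size_t l = 0;
                  while (pos + l < len && l < MAX_MATCH
                         && src[i + l] == src[pos + l]) {
                      l++;
                  }
                  if (l >= MIN_MATCH && l > best_len) {
                      best_len = l;
                      *dist = pos - i;
                  }
              }
              return best_len;
          }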
  2. 12 Jul 2021: 1 commit
  3. 27 Jan 2019: 1 commit