1. 14 Mar 2021 (4 commits)
  2. 12 Mar 2021 (6 commits)
  3. 11 Mar 2021 (2 commits)
  4. 10 Mar 2021 (3 commits)
  5. 08 Mar 2021 (12 commits)
  6. 07 Mar 2021 (3 commits)
  7. 06 Mar 2021 (3 commits)
  8. 05 Mar 2021 (5 commits)
  9. 04 Mar 2021 (2 commits)
    • Distributed: Add ability to delay/throttle INSERT until pending data is reduced · 6965ac26
      Azat Khuzhin authored
      Add two new settings for the Distributed engine:
      - bytes_to_delay_insert
      - max_delay_to_insert
      
      If at the beginning of an INSERT there is too much pending data (more
      than bytes_to_delay_insert), the INSERT will wait until the backlog
      shrinks, but for no more than max_delay_to_insert seconds.
      
      If after this there is still too much pending data, an exception is
      thrown.
      
      New profile events were also added (by analogy with MergeTree):
      - DistributedDelayedInserts (system.errors could be used instead, but a
        dedicated event is still convenient)
      - DistributedRejectedInserts
      - DistributedDelayedInsertsMilliseconds
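      A minimal sketch of how these delay settings might be applied; the
      cluster, database, table names and threshold values below are
      hypothetical, and the SETTINGS clause assumes a ClickHouse version in
      which the Distributed engine accepts per-table settings:

          CREATE TABLE dist_events AS events
          ENGINE = Distributed(my_cluster, default, events, rand())
          SETTINGS
              -- start delaying INSERTs once pending (not yet sent) data on
              -- disk exceeds roughly 100 MiB
              bytes_to_delay_insert = 104857600,
              -- but wait no longer than 60 seconds for the backlog to shrink
              max_delay_to_insert = 60;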
    • Distributed: Add ability to limit the amount of pending bytes for async INSERT · b5a57785
      Azat Khuzhin authored
      Right now, with distributed_directory_monitor_batch_inserts=1 and
      insert_distributed_sync=0, an INSERT into a Distributed table stores the
      blocks that should be sent to the remote shards (and, with
      prefer_localhost_replica=0, to the localhost too) on the local
      filesystem and sends them in the background.
      
      However, there is no limit on this storage, and if the remote is
      unavailable (or some other error occurs), these pending blocks may take
      up significant space, which is not always the desired behaviour.
      
      Add a new Distributed setting, bytes_to_throw_insert, that sets a limit
      on how many pending bytes are allowed; if the limit is reached, an
      exception is thrown.
      
      By default it is set to 0, to avoid surprises.
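      A minimal sketch of how this cap might be configured, under the same
      assumptions as above (hypothetical names and threshold, per-table
      Distributed SETTINGS supported):

          CREATE TABLE dist_events AS events
          ENGINE = Distributed(my_cluster, default, events, rand())
          SETTINGS
              -- reject INSERTs with an exception once pending data on the
              -- local filesystem exceeds roughly 1 GiB (0 disables the check)
              bytes_to_throw_insert = 1073741824;

      Presumably bytes_to_throw_insert would be set higher than
      bytes_to_delay_insert, so that INSERTs are first delayed and only
      rejected once the backlog keeps growing.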