1. 28 Feb 2020, 2 commits
  2. 07 Feb 2020, 1 commit
    • fs: New zonefs file system · 8dcc1a9d
      Authored by Damien Le Moal
      zonefs is a very simple file system exposing each zone of a zoned block
      device as a file. Unlike a regular file system with zoned block device
      support (e.g. f2fs), zonefs does not hide the sequential write
      constraint of zoned block devices from the user. Files representing
      sequential write zones of the device must be written sequentially,
      starting from the end of the file (append-only writes).
      
      As such, zonefs is in essence closer to a raw block device access
      interface than to a full featured POSIX file system. The goal of zonefs
      is to simplify the implementation of zoned block device support in
      applications by replacing raw block device file accesses with a richer
      file API, avoiding relying on direct block device file ioctls which may
      be more obscure to developers. One example of this approach is the
      implementation of LSM (log-structured merge) tree structures (such as
      used in RocksDB and LevelDB) on zoned block devices by allowing SSTables
      to be stored in a zone file similarly to a regular file system rather
      than as a range of sectors of a zoned device. The introduction of the
      higher level construct "one file is one zone" can help reduce the
      number of changes needed in the application, as well as ease adding
      support for different application programming languages.
      
      Zonefs on-disk metadata is reduced to an immutable super block to
      persistently store a magic number and optional feature flags and
      values. On mount, zonefs uses blkdev_report_zones() to obtain the device
      zone configuration and populates the mount point with a static file tree
      solely based on this information. E.g. file sizes come from the device
      zone type and write pointer offset managed by the device itself.
      
      The zone files created on mount have the following characteristics.
      1) Files representing zones of the same type are grouped together
         under a common sub-directory:
           * For conventional zones, the sub-directory "cnv" is used.
           * For sequential write zones, the sub-directory "seq" is used.
        These two directories are the only directories that exist in zonefs.
        Users cannot create other directories and cannot rename nor delete
        the "cnv" and "seq" sub-directories.
      2) Each zone file is named after its number within the zone type
         sub-directory, ordered by increasing zone start sector.
      3) The size of conventional zone files is fixed to the device zone size.
         Conventional zone files cannot be truncated.
      4) The size of sequential zone files represents the file's zone write
         pointer position relative to the zone start sector. Truncating these
         files is allowed only down to 0, in which case, the zone is reset to
         rewind the zone write pointer position to the start of the zone, or
         up to the zone size, in which case the file's zone is transitioned
         to the FULL state (finish zone operation).
      5) Read and write operations to files are not allowed beyond the
         file zone size. Any access exceeding the zone size fails with
         the -EFBIG error.
      6) Creating, deleting, renaming or modifying any attribute of files and
         sub-directories is not allowed.
      7) There are no restrictions on the type of read and write operations
         that can be issued to conventional zone files. Buffered, direct and
         mmap read & write operations are accepted. For sequential zone files,
         there are no restrictions on read operations, but all write
         operations must be direct IO append writes. mmap write of sequential
         files is not allowed (see the sketch after this list).
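
      For illustration, a minimal C sketch of such an append write (not part
      of this patch); the path /mnt/seq/0 and the 4096-byte alignment are
      assumptions matching the example mount shown further below:

      #define _GNU_SOURCE
      #include <fcntl.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>
      #include <sys/stat.h>
      #include <unistd.h>

      int main(void)
      {
              const size_t bufsz = 4096; /* assumed logical block size */
              struct stat st;
              void *buf;
              int fd;

              /* Writes to sequential zone files must use direct IO. */
              fd = open("/mnt/seq/0", O_WRONLY | O_DIRECT);
              if (fd < 0) {
                      perror("open");
                      return 1;
              }

              /* Direct IO buffers must be aligned to the block size. */
              if (posix_memalign(&buf, bufsz, bufsz)) {
                      close(fd);
                      return 1;
              }
              memset(buf, 0xab, bufsz);

              /* The file size tracks the zone write pointer, so the only
               * legal write offset is the current end of the file. */
              if (fstat(fd, &st) < 0 ||
                  pwrite(fd, buf, bufsz, st.st_size) != (ssize_t)bufsz)
                      perror("append write");

              free(buf);
              close(fd);
              return 0;
      }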
      
      Several optional features of zonefs can be enabled at format time.
      * Conventional zone aggregation: ranges of contiguous conventional
        zones can be aggregated into a single larger file instead of the
        default one file per zone.
      * File ownership: the owner UID and GID of zone files are by default 0
        (root) but can be changed to any valid UID/GID.
      * File access permissions: the default 640 access permissions can be
        changed.
      
      The mkzonefs tool is used to format zoned block devices for use with
      zonefs. This tool is available on GitHub at:
      
      git@github.com:damien-lemoal/zonefs-tools.git.
      
      zonefs-tools also includes a test suite which can be run against any
      zoned block device, including null_blk block devices created in zoned
      mode.
      
      Example: the following formats a 15TB host-managed SMR HDD with 256 MB
      zones, with the conventional zone aggregation feature enabled.
      
      $ sudo mkzonefs -o aggr_cnv /dev/sdX
      $ sudo mount -t zonefs /dev/sdX /mnt
      $ ls -l /mnt/
      total 0
      dr-xr-xr-x 2 root root     1 Nov 25 13:23 cnv
      dr-xr-xr-x 2 root root 55356 Nov 25 13:23 seq
      
      The sizes of the zone file sub-directories indicate the number of files
      that exist for each zone type. In this example, there is only one
      conventional zone file (all conventional zones are aggregated under a
      single file).
      
      $ ls -l /mnt/cnv
      total 137101312
      -rw-r----- 1 root root 140391743488 Nov 25 13:23 0
      
      This aggregated conventional zone file can be used as a regular file.
      
      $ sudo mkfs.ext4 /mnt/cnv/0
      $ sudo mount -o loop /mnt/cnv/0 /data
      
      The "seq" sub-directory grouping files for sequential write zones has
      in this example 55356 zones.
      
      $ ls -lv /mnt/seq
      total 14511243264
      -rw-r----- 1 root root 0 Nov 25 13:23 0
      -rw-r----- 1 root root 0 Nov 25 13:23 1
      -rw-r----- 1 root root 0 Nov 25 13:23 2
      ...
      -rw-r----- 1 root root 0 Nov 25 13:23 55354
      -rw-r----- 1 root root 0 Nov 25 13:23 55355
      
      For sequential write zone files, the file size changes as data is
      appended at the end of the file, similarly to any regular file system.
      
      $ dd if=/dev/zero of=/mnt/seq/0 bs=4K count=1 conv=notrunc oflag=direct
      1+0 records in
      1+0 records out
      4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452219 s, 9.1 MB/s
      
      $ ls -l /mnt/seq/0
      -rw-r----- 1 root root 4096 Nov 25 13:23 /mnt/seq/0
      
      The written file can be truncated to the zone size, preventing any
      further write operation.
      
      $ truncate -s 268435456 /mnt/seq/0
      $ ls -l /mnt/seq/0
      -rw-r----- 1 root root 268435456 Nov 25 13:49 /mnt/seq/0
      
      Truncating a file to size 0 frees the file's zone storage space and
      allows append writes to the file to restart.
      
      $ truncate -s 0 /mnt/seq/0
      $ ls -l /mnt/seq/0
      -rw-r----- 1 root root 0 Nov 25 13:49 /mnt/seq/0
      
      Since files are statically mapped to zones on the disk, the number of
      blocks of a file as reported by stat() and fstat() indicates the size
      of the file zone.
      
      $ stat /mnt/seq/0
        File: /mnt/seq/0
        Size: 0       Blocks: 524288     IO Block: 4096   regular empty file
      Device: 870h/2160d      Inode: 50431       Links: 1
      Access: (0640/-rw-r-----)  Uid: (    0/    root)   Gid: (    0/  root)
      Access: 2019-11-25 13:23:57.048971997 +0900
      Modify: 2019-11-25 13:52:25.553805765 +0900
      Change: 2019-11-25 13:52:25.553805765 +0900
       Birth: -
      
      The number of blocks of the file ("Blocks") in units of 512B blocks
      gives the maximum file size of 524288 * 512 B = 256 MB, corresponding
      to the device zone size in this example. Note that the "IO Block"
      field always indicates the minimum IO size for writes and corresponds
      to the device physical sector size (see the small sketch below).
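
      For illustration, a small C sketch (not part of this patch) deriving a
      zone file's maximum size from stat(); the path is the example mount's:

      #include <stdio.h>
      #include <sys/stat.h>

      int main(void)
      {
              struct stat st;

              if (stat("/mnt/seq/0", &st) < 0)
                      return 1;
              /* st_blocks is in 512-byte units, so this yields the
               * zone size (256 MB in the example above). */
              printf("max file size: %lld bytes\n",
                     (long long)st.st_blocks * 512);
              return 0;
      }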
      
      This code contains contributions from:
      * Johannes Thumshirn <jthumshirn@suse.de>,
      * Darrick J. Wong <darrick.wong@oracle.com>,
      * Christoph Hellwig <hch@lst.de>,
      * Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com> and
      * Ting Yao <tingyao@hust.edu.cn>.
      Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
  3. 01 Feb 2020, 2 commits
  4. 31 Jan 2020, 1 commit
  5. 30 Jan 2020, 2 commits
  6. 29 Jan 2020, 4 commits
  7. 28 Jan 2020, 1 commit
    • prctl: PR_{G,S}ET_IO_FLUSHER to support controlling memory reclaim · 8d19f1c8
      Authored by Mike Christie
      There are several storage drivers like dm-multipath, iscsi, tcmu-runner,
      and nbd that have userspace components that can run in the IO path. For
      example, iscsi and nbd's userspace daemons may need to recreate a socket
      and/or send IO on it, and dm-multipath's daemon multipathd may need to
      send SG IO or read/write IO to figure out the state of paths and re-set
      them up.
      
      In the kernel these drivers have access to GFP_NOIO/GFP_NOFS and the
      memalloc_*_save/restore functions to control the allocation behavior,
      but for userspace we would end up hitting an allocation that ended up
      writing data back to the same device we are trying to allocate for.
      The device is then in a state of deadlock, because to execute IO the
      device needs to allocate memory, but to allocate memory the memory
      layers want to execute IO to the device.
      
      Here is an example with nbd using a local userspace daemon that performs
      network IO to a remote server. We are using XFS on top of the nbd device,
      but it can happen with any FS or other modules layered on top of the nbd
      device that can write out data to free memory. Here an nbd daemon helper
      thread, msgr-worker-1, is performing a write/sendmsg on a socket to execute
      a request. This kicks off a reclaim operation which results in a WRITE to
      the nbd device and the nbd thread calling back into the mm layer.
      
      [ 1626.609191] msgr-worker-1   D    0  1026      1 0x00004000
      [ 1626.609193] Call Trace:
      [ 1626.609195]  ? __schedule+0x29b/0x630
      [ 1626.609197]  ? wait_for_completion+0xe0/0x170
      [ 1626.609198]  schedule+0x30/0xb0
      [ 1626.609200]  schedule_timeout+0x1f6/0x2f0
      [ 1626.609202]  ? blk_finish_plug+0x21/0x2e
      [ 1626.609204]  ? _xfs_buf_ioapply+0x2e6/0x410
      [ 1626.609206]  ? wait_for_completion+0xe0/0x170
      [ 1626.609208]  wait_for_completion+0x108/0x170
      [ 1626.609210]  ? wake_up_q+0x70/0x70
      [ 1626.609212]  ? __xfs_buf_submit+0x12e/0x250
      [ 1626.609214]  ? xfs_bwrite+0x25/0x60
      [ 1626.609215]  xfs_buf_iowait+0x22/0xf0
      [ 1626.609218]  __xfs_buf_submit+0x12e/0x250
      [ 1626.609220]  xfs_bwrite+0x25/0x60
      [ 1626.609222]  xfs_reclaim_inode+0x2e8/0x310
      [ 1626.609224]  xfs_reclaim_inodes_ag+0x1b6/0x300
      [ 1626.609227]  xfs_reclaim_inodes_nr+0x31/0x40
      [ 1626.609228]  super_cache_scan+0x152/0x1a0
      [ 1626.609231]  do_shrink_slab+0x12c/0x2d0
      [ 1626.609233]  shrink_slab+0x9c/0x2a0
      [ 1626.609235]  shrink_node+0xd7/0x470
      [ 1626.609237]  do_try_to_free_pages+0xbf/0x380
      [ 1626.609240]  try_to_free_pages+0xd9/0x1f0
      [ 1626.609245]  __alloc_pages_slowpath+0x3a4/0xd30
      [ 1626.609251]  ? ___slab_alloc+0x238/0x560
      [ 1626.609254]  __alloc_pages_nodemask+0x30c/0x350
      [ 1626.609259]  skb_page_frag_refill+0x97/0xd0
      [ 1626.609274]  sk_page_frag_refill+0x1d/0x80
      [ 1626.609279]  tcp_sendmsg_locked+0x2bb/0xdd0
      [ 1626.609304]  tcp_sendmsg+0x27/0x40
      [ 1626.609307]  sock_sendmsg+0x54/0x60
      [ 1626.609308]  ___sys_sendmsg+0x29f/0x320
      [ 1626.609313]  ? sock_poll+0x66/0xb0
      [ 1626.609318]  ? ep_item_poll.isra.15+0x40/0xc0
      [ 1626.609320]  ? ep_send_events_proc+0xe6/0x230
      [ 1626.609322]  ? hrtimer_try_to_cancel+0x54/0xf0
      [ 1626.609324]  ? ep_read_events_proc+0xc0/0xc0
      [ 1626.609326]  ? _raw_write_unlock_irq+0xa/0x20
      [ 1626.609327]  ? ep_scan_ready_list.constprop.19+0x218/0x230
      [ 1626.609329]  ? __hrtimer_init+0xb0/0xb0
      [ 1626.609331]  ? _raw_spin_unlock_irq+0xa/0x20
      [ 1626.609334]  ? ep_poll+0x26c/0x4a0
      [ 1626.609337]  ? tcp_tsq_write.part.54+0xa0/0xa0
      [ 1626.609339]  ? release_sock+0x43/0x90
      [ 1626.609341]  ? _raw_spin_unlock_bh+0xa/0x20
      [ 1626.609342]  __sys_sendmsg+0x47/0x80
      [ 1626.609347]  do_syscall_64+0x5f/0x1c0
      [ 1626.609349]  ? prepare_exit_to_usermode+0x75/0xa0
      [ 1626.609351]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
      
      This patch adds a new prctl command that daemons can use after they have
      done their initial setup, and before they start to do allocations that
      are in the IO path. It sets the PF_MEMALLOC_NOIO and PF_LESS_THROTTLE
      flags so both userspace block and FS threads can use it to avoid
      allocation recursion and to avoid being throttled while writing out data
      to free up memory. (A usage sketch follows below.)
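
      For illustration, a minimal C sketch (not part of the patch) of a
      daemon marking itself with the new prctl after setup; the fallback
      values 57/58 match the definitions this patch adds:

      #include <stdio.h>
      #include <sys/prctl.h>

      #ifndef PR_SET_IO_FLUSHER
      #define PR_SET_IO_FLUSHER 57
      #define PR_GET_IO_FLUSHER 58
      #endif

      int main(void)
      {
              /* Unused arguments must be zero. */
              if (prctl(PR_SET_IO_FLUSHER, 1, 0, 0, 0) < 0) {
                      perror("PR_SET_IO_FLUSHER");
                      return 1;
              }
              printf("IO_FLUSHER: %d\n",
                     prctl(PR_GET_IO_FLUSHER, 0, 0, 0, 0));
              /* ... now safe to allocate memory in the IO path ... */
              return 0;
      }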
      Signed-off-by: Mike Christie <mchristi@redhat.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Tested-by: Masato Suzuki <masato.suzuki@wdc.com>
      Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com>
      Reviewed-by: Bart Van Assche <bvanassche@acm.org>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
      Link: https://lore.kernel.org/r/20191112001900.9206-1-mchristi@redhat.com
      Signed-off-by: Christian Brauner <christian.brauner@ubuntu.com>
  8. 27 Jan 2020, 8 commits
  9. 26 Jan 2020, 1 commit
  10. 24 Jan 2020, 5 commits
  11. 23 Jan 2020, 3 commits
    • net: sched: add Flow Queue PIE packet scheduler · ec97ecf1
      Authored by Mohit P. Tahiliani
      Principles:
        - Packets are classified on flows.
        - This is a stochastic model (since a hash is used, several flows
          might be hashed to the same slot).
        - Each flow has a PIE managed queue.
        - Flows are linked onto two (Round Robin) lists,
          so that new flows have priority over old ones.
        - For a given flow, packets are not reordered.
        - Drops during enqueue only.
        - ECN capability is off by default.
        - ECN threshold (if ECN is enabled) is at 10% by default.
        - Uses timestamps to calculate queue delay by default.
      
      Usage:
      tc qdisc ... fq_pie [ limit PACKETS ] [ flows NUMBER ]
                          [ target TIME ] [ tupdate TIME ]
                          [ alpha NUMBER ] [ beta NUMBER ]
                          [ quantum BYTES ] [ memory_limit BYTES ]
                          [ ecnprob PERCENTAGE ] [ [no]ecn ]
                          [ [no]bytemode ] [ [no_]dq_rate_estimator ]
      
      defaults:
        limit: 10240 packets, flows: 1024
        target: 15 ms, tupdate: 15 ms (in jiffies)
        alpha: 1/8, beta: 5/4
        quantum: device MTU, memory_limit: 32 MB
        ecnprob: 10%, ecn: off
        bytemode: off, dq_rate_estimator: off
      Signed-off-by: Mohit P. Tahiliani <tahiliani@nitk.edu.in>
      Signed-off-by: Sachin D. Patil <sdp.sachin@gmail.com>
      Signed-off-by: V. Saicharan <vsaicharan1998@gmail.com>
      Signed-off-by: Mohit Bhasi <mohitbhasi1998@gmail.com>
      Signed-off-by: Leslie Monis <lesliemonis@gmail.com>
      Signed-off-by: Gautam Ramakrishnan <gautamramk@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bpf: Add BPF_FUNC_jiffies64 · 5576b991
      Authored by Martin KaFai Lau
      This patch adds a helper to read the 64-bit jiffies. It will be used
      in a later patch to implement bpf_cubic.c.
      
      The helper is inlined when jit_requested is set and BITS_PER_LONG is
      64, in the same way as map_gen_lookup(). Other cases could be
      considered together with map_gen_lookup() if needed. (A usage sketch
      follows below.)
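
      For illustration, a hedged sketch (not part of the patch) of a BPF
      program calling the new helper; the tracepoint and program name are
      illustrative:

      #include <linux/bpf.h>
      #include <bpf/bpf_helpers.h>

      SEC("tracepoint/syscalls/sys_enter_getpid")
      int log_jiffies(void *ctx)
      {
              /* bpf_jiffies64() returns the kernel's 64-bit jiffies. */
              __u64 now = bpf_jiffies64();

              bpf_printk("jiffies64=%llu", now);
              return 0;
      }

      char LICENSE[] SEC("license") = "GPL";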
      Signed-off-by: Martin KaFai Lau <kafai@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20200122233646.903260-1-kafai@fb.com
    • bpf: Introduce dynamic program extensions · be8704ff
      Authored by Alexei Starovoitov
      Introduce dynamic program extensions. The users can load additional BPF
      functions and replace global functions in previously loaded BPF programs while
      these programs are executing.
      
      Global functions are verified individually by the verifier based on their
      types only. Hence a global function in the new program whose type matches
      an older function can safely replace that corresponding function.
      
      This new function/program is called 'an extension' of the old program. At
      load time the verifier uses the (attach_prog_fd, attach_btf_id) pair to
      identify the function to be replaced. The extension program's BPF program
      type is derived from the target program; technically, bpf_verifier_ops is
      copied from the target program. The BPF_PROG_TYPE_EXT program type is a
      placeholder with empty verifier_ops. The extension program can call the
      same bpf helper functions as the target program. A single
      BPF_PROG_TYPE_EXT type is used to extend XDP, SKB and all other program
      types. The verifier allows only one level of replacement, meaning that an
      extension program cannot recursively extend an extension. This also means
      that the maximum stack size increases from 512 to 1024 bytes and the
      maximum function nesting level from 8 to 16. Programs don't always consume that
      much. The stack usage is determined by the number of on-stack variables used by
      the program. The verifier could have enforced 512 limit for combined original
      plus extension program, but it makes for difficult user experience. The main
      use case for extensions is to provide generic mechanism to plug external
      programs into policy program or function call chaining.
      
      BPF trampoline is used to track both fentry/fexit and program extensions
      because both are using the same nop slot at the beginning of every BPF
      function. Attaching fentry/fexit to a function that was replaced is not
      allowed. The opposite is true as well: replacing a function that is
      currently being traced with fentry/fexit is not allowed. The executable
      page allocated
      by BPF trampoline is not used by program extensions. This inefficiency will be
      optimized in future patches.
      
      Function-by-function verification of global functions supports only
      scalars and pointers to context. Hence program extensions are supported
      only for that class of global functions. In the future the verifier will
      be extended with support for pointers to structures, arrays with sizes,
      etc. (A hedged sketch of an extension follows below.)
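
      For illustration, a hedged sketch (not part of the patch) of an
      extension program; the libbpf "freplace" section convention and the
      function names are illustrative:

      #include <linux/bpf.h>
      #include <bpf/bpf_helpers.h>

      /* Replaces a global function named handle_packet in an already
       * loaded XDP program, selected at load time via the
       * (attach_prog_fd, attach_btf_id) pair. */
      SEC("freplace/handle_packet")
      int new_handle_packet(struct xdp_md *ctx)
      {
              /* Must have the same type as the original global function. */
              return XDP_PASS;
      }

      char LICENSE[] SEC("license") = "GPL";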
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Acked-by: Andrii Nakryiko <andriin@fb.com>
      Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
      Link: https://lore.kernel.org/bpf/20200121005348.2769920-2-ast@kernel.org
  12. 21 Jan 2020, 10 commits