1. 26 August 2021, 1 commit
  2. 26 July 2021, 1 commit
    •
      binfmt: remove support for em86 (alpha only) · 6208721f
      Committed by David Hildenbrand
      We have a fairly specific alpha binary loader in Linux: running x86
      (i386, i486) binaries via the em86 [1] emulator. As noted in the Kconfig
      option, the same behavior can be achieved via binfmt_misc, for example,
      more nowadays used for running qemu-user.
      
      An example of how to get binfmt_misc running with em86 can be found in
      Documentation/admin-guide/binfmt-misc.rst
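      
      For illustration, such a binfmt_misc registration can also be done from C
      by writing a rule string to /proc/sys/fs/binfmt_misc/register. The sketch
      below is hedged: the interpreter path /usr/bin/em86 is an assumption, and
      the magic/mask only match the 4-byte ELF signature, so it is illustrative
      only -- a real em86 rule should match the full i386 ELF header as shown
      in binfmt-misc.rst.
      
      	/* Sketch: register an em86-style rule with binfmt_misc.
      	 * Rule format: ":name:type:offset:magic:mask:interpreter:flags".
      	 */
      	#include <fcntl.h>
      	#include <stdio.h>
      	#include <string.h>
      	#include <unistd.h>
      
      	int main(void)
      	{
      		const char *rule = ":i386:M:0:\\x7fELF::/usr/bin/em86:";
      		int fd = open("/proc/sys/fs/binfmt_misc/register", O_WRONLY);
      
      		if (fd < 0) {
      			perror("open binfmt_misc register");
      			return 1;
      		}
      		if (write(fd, rule, strlen(rule)) < 0)
      			perror("register rule");
      		close(fd);
      		return 0;
      	}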
      
      The defconfig does not have CONFIG_BINFMT_EM86=y set. And doing a
      	make defconfig && make olddefconfig
      results in
      	# CONFIG_BINFMT_EM86 is not set
      
      ... as we don't seem to have any supported Linux distribution for alpha
      anymore, there isn't really any "default" user of that feature.
      
      Searching for "CONFIG_BINFMT_EM86=y" reveals mostly discussions from
      around 20 years ago, like [2] describing how to get netscape running
      via em86, or [3] discussing that running wine or installing
      Win 3.11 through em86 would be a nice feature.
      
      The latest binaries available for em86 are from 2000, version 2.2.1 [4] --
      which translates to "unsupported"; further, em86 doesn't even work with
      glibc-2.x but only with glibc-2.0 [4, 5]. These are clear signs that
      there might not be too many em86 users out there, especially users
      relying on modern Linux kernels.
      
      Even though the code footprint is relatively small, let's just get rid
      of this blast from the past that's effectively unused.
      
      [1] http://ftp.dreamtime.org/pub/linux/Linux-Alpha/em86/v0.4/docs/em86.html
      [2] https://static.lwn.net/1998/1119/a/alpha-netscape.html
      [3] https://groups.google.com/g/linux.debian.alpha/c/AkGuQHeCe0Y
      [4] http://zeniv.linux.org.uk/pub/linux/alpha/em86/v2.2-1/relnotes.2.2.1.html
      [5] https://forum.teamspeak.com/archive/index.php/t-1477.html
      
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: linux-fsdevel@vger.kernel.org
      Cc: linux-api@vger.kernel.org
      Cc: linux-alpha@vger.kernel.org
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Signed-off-by: Matt Turner <mattst88@gmail.com>
      6208721f
  3. 28 June 2021, 1 commit
  4. 11 May 2021, 1 commit
  5. 23 April 2021, 1 commit
  6. 29 January 2021, 1 commit
  7. 15 October 2020, 1 commit
    •
      vfs: move generic_remap_checks out of mm · 02e83f46
      Committed by Darrick J. Wong
      I would like to move all the generic helpers for the vfs remap range
      functionality (aka clonerange and dedupe) into a separate file so that
      they won't be scattered across the vfs and the mm subsystems.  The
      eventual goal is to be able to deselect remap_range.c if none of the
      filesystems need that code, but the tricky part here is picking a
      stable(ish) part of the merge window to rearrange code.
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      02e83f46
  8. 05 October 2020, 1 commit
  9. 23 September 2020, 1 commit
  10. 31 July 2020, 1 commit
  11. 06 March 2020, 1 commit
  12. 09 February 2020, 1 commit
  13. 07 February 2020, 1 commit
    •
      fs: New zonefs file system · 8dcc1a9d
      Committed by Damien Le Moal
      zonefs is a very simple file system exposing each zone of a zoned block
      device as a file. Unlike a regular file system with zoned block device
      support (e.g. f2fs), zonefs does not hide the sequential write
      constraint of zoned block devices from the user. Files representing
      sequential write zones of the device must be written sequentially
      starting from the end of the file (append only writes).
      
      As such, zonefs is in essence closer to a raw block device access
      interface than to a full featured POSIX file system. The goal of zonefs
      is to simplify the implementation of zoned block device support in
      applications by replacing raw block device file accesses with a richer
      file API, avoiding relying on direct block device file ioctls which may
      be more obscure to developers. One example of this approach is the
      implementation of LSM (log-structured merge) tree structures (such as
      used in RocksDB and LevelDB) on zoned block devices by allowing SSTables
      to be stored in a zone file similarly to a regular file system rather
      than as a range of sectors of a zoned device. The introduction of the
      higher level construct "one file is one zone" can help reduce the
      amount of changes needed in the application, as well as introduce
      support for different application programming languages.
      
      Zonefs on-disk metadata is reduced to an immutable super block to
      persistently store a magic number and optional feature flags and
      values. On mount, zonefs uses blkdev_report_zones() to obtain the device
      zone configuration and populates the mount point with a static file tree
      solely based on this information. E.g. file sizes come from the device
      zone type and write pointer offset managed by the device itself.
      
      The zone files created on mount have the following characteristics.
      1) Files representing zones of the same type are grouped together
         under a common sub-directory:
           * For conventional zones, the sub-directory "cnv" is used.
           * For sequential write zones, the sub-directory "seq" is used.
        These two directories are the only directories that exist in zonefs.
        Users cannot create other directories and cannot rename nor delete
        the "cnv" and "seq" sub-directories.
      2) The name of zone files is the number of the file within the zone
         type sub-directory, in order of increasing zone start sector.
      3) The size of conventional zone files is fixed to the device zone size.
         Conventional zone files cannot be truncated.
       4) The size of sequential zone files represents the file's zone write
         pointer position relative to the zone start sector. Truncating these
         files is allowed only down to 0, in which case, the zone is reset to
         rewind the zone write pointer position to the start of the zone, or
         up to the zone size, in which case the file's zone is transitioned
         to the FULL state (finish zone operation).
       5) Read and write operations to files are not allowed beyond the
          file zone size. Any access exceeding the zone size fails with
          the -EFBIG error.
      6) Creating, deleting, renaming or modifying any attribute of files and
         sub-directories is not allowed.
      7) There are no restrictions on the type of read and write operations
         that can be issued to conventional zone files. Buffered, direct and
         mmap read & write operations are accepted. For sequential zone files,
         there are no restrictions on read operations, but all write
          operations must be direct IO append writes; mmap writes of sequential
          files are not allowed (see the sketch after this list).
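      
      As a rough illustration of rule 7, an append to a sequential zone file
      from user space might look like the sketch below. This is hedged: the
      path /mnt/seq/0 and the 4096-byte, 4096-aligned write are assumptions
      (the size must suit the device's logical block size), and the write
      must start exactly at the current end of file.
      
      	/* Sketch: direct-IO append of 4 KiB to a zonefs sequential file. */
      	#define _GNU_SOURCE
      	#include <fcntl.h>
      	#include <stdio.h>
      	#include <stdlib.h>
      	#include <string.h>
      	#include <sys/stat.h>
      	#include <unistd.h>
      
      	int main(void)
      	{
      		const size_t len = 4096;
      		struct stat st;
      		void *buf;
      		int fd;
      
      		fd = open("/mnt/seq/0", O_WRONLY | O_DIRECT);
      		if (fd < 0 || fstat(fd, &st) < 0) {
      			perror("open/fstat");
      			return 1;
      		}
      		if (posix_memalign(&buf, 4096, len))
      			return 1;
      		memset(buf, 0, len);
      
      		/* st.st_size is the current zone write pointer position. */
      		if (pwrite(fd, buf, len, st.st_size) != (ssize_t)len)
      			perror("append write");
      
      		free(buf);
      		close(fd);
      		return 0;
      	}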
      
      Several optional features of zonefs can be enabled at format time.
      * Conventional zone aggregation: ranges of contiguous conventional
        zones can be aggregated into a single larger file instead of the
        default one file per zone.
       * File ownership: The owner UID and GID of zone files are by default 0
         (root) but can be changed to any valid UID/GID.
      * File access permissions: the default 640 access permissions can be
        changed.
      
      The mkzonefs tool is used to format zoned block devices for use with
      zonefs. This tool is available on Github at:
      
      git@github.com:damien-lemoal/zonefs-tools.git.
      
      zonefs-tools also includes a test suite which can be run against any
      zoned block device, including null_blk block device created with zoned
      mode.
      
      Example: the following formats a 15TB host-managed SMR HDD with 256 MB
      zones with the conventional zones aggregation feature enabled.
      
      $ sudo mkzonefs -o aggr_cnv /dev/sdX
      $ sudo mount -t zonefs /dev/sdX /mnt
      $ ls -l /mnt/
      total 0
      dr-xr-xr-x 2 root root     1 Nov 25 13:23 cnv
      dr-xr-xr-x 2 root root 55356 Nov 25 13:23 seq
      
      The sizes of the zone file sub-directories indicate the number of files
      existing for each zone type. In this example, there is only one
      conventional zone file (all conventional zones are aggregated under a
      single file).
      
      $ ls -l /mnt/cnv
      total 137101312
      -rw-r----- 1 root root 140391743488 Nov 25 13:23 0
      
      This aggregated conventional zone file can be used as a regular file.
      
      $ sudo mkfs.ext4 /mnt/cnv/0
      $ sudo mount -o loop /mnt/cnv/0 /data
      
      The "seq" sub-directory grouping files for sequential write zones has
      in this example 55356 zones.
      
      $ ls -lv /mnt/seq
      total 14511243264
      -rw-r----- 1 root root 0 Nov 25 13:23 0
      -rw-r----- 1 root root 0 Nov 25 13:23 1
      -rw-r----- 1 root root 0 Nov 25 13:23 2
      ...
      -rw-r----- 1 root root 0 Nov 25 13:23 55354
      -rw-r----- 1 root root 0 Nov 25 13:23 55355
      
      For sequential write zone files, the file size changes as data is
      appended at the end of the file, similarly to any regular file system.
      
      $ dd if=/dev/zero of=/mnt/seq/0 bs=4K count=1 conv=notrunc oflag=direct
      1+0 records in
      1+0 records out
      4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452219 s, 9.1 MB/s
      
      $ ls -l /mnt/seq/0
      -rw-r----- 1 root root 4096 Nov 25 13:23 /mnt/seq/0
      
      The written file can be truncated to the zone size, preventing any
      further write operation.
      
      $ truncate -s 268435456 /mnt/seq/0
      $ ls -l /mnt/seq/0
      -rw-r----- 1 root root 268435456 Nov 25 13:49 /mnt/seq/0
      
      Truncating a file to size 0 frees the file's zone storage space and
      allows append-writes to the file to restart.
      
      $ truncate -s 0 /mnt/seq/0
      $ ls -l /mnt/seq/0
      -rw-r----- 1 root root 0 Nov 25 13:49 /mnt/seq/0
      
      Since files are statically mapped to zones on the disk, the number of
      blocks of a file as reported by stat() and fstat() indicates the size
      of the file zone.
      
      $ stat /mnt/seq/0
        File: /mnt/seq/0
        Size: 0       Blocks: 524288     IO Block: 4096   regular empty file
      Device: 870h/2160d      Inode: 50431       Links: 1
      Access: (0640/-rw-r-----)  Uid: (    0/    root)   Gid: (    0/  root)
      Access: 2019-11-25 13:23:57.048971997 +0900
      Modify: 2019-11-25 13:52:25.553805765 +0900
      Change: 2019-11-25 13:52:25.553805765 +0900
       Birth: -
      
      The number of blocks of the file ("Blocks") in units of 512B blocks
      gives the maximum file size of 524288 * 512 B = 256 MB, corresponding
      to the device zone size in this example. Of note is that the "IO block"
      field always indicates the minimum IO size for writes and corresponds
      to the device physical sector size.
      
      This code contains contributions from:
      * Johannes Thumshirn <jthumshirn@suse.de>,
      * Darrick J. Wong <darrick.wong@oracle.com>,
      * Christoph Hellwig <hch@lst.de>,
      * Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com> and
      * Ting Yao <tingyao@hust.edu.cn>.
      Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      8dcc1a9d
  14. 03 January 2020, 1 commit
  15. 30 October 2019, 1 commit
    •
      io-wq: small threadpool implementation for io_uring · 771b53d0
      Committed by Jens Axboe
      This adds support for io-wq, a smaller and specialized thread pool
      implementation. This is meant to replace workqueues for io_uring. Among
      the reasons for this addition are:
      
      - We can assign memory context smarter and more persistently if we
        manage the lifetime of threads.
      
      - We can drop various work-arounds we have in io_uring, like the
        async_list.
      
      - We can implement hashed work insertion, to manage concurrency of
        buffered writes without needing a) an extra workqueue, or b)
        needlessly making the concurrency of said workqueue very low
        which hurts performance of multiple buffered file writers.
      
      - We can implement cancel through signals, for cancelling
        interruptible work like read/write (or send/recv) to/from sockets.
      
      - We need the above cancel for being able to assign and use file tables
        from a process.
      
      - We can implement a more thorough cancel operation in general.
      
      - We need it to move towards a syslet/threadlet model for even faster
        async execution. For that we need to take ownership of the used
        threads.
      
      This list is just off the top of my head. Performance should be the
      same, or better, at least that's what I've seen in my testing. io-wq
      supports basic NUMA functionality, setting up a pool per node.
      
      io-wq hooks into the scheduler's schedule in/out events just like
      workqueue does, and uses that to drive the need for more/less workers.
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      771b53d0
  16. 24 August 2019, 1 commit
    •
      erofs: move erofs out of staging · 47e4937a
      Committed by Gao Xiang
      The EROFS filesystem has been in linux-staging for a year.
      
      EROFS is designed to be a better solution for saving extra storage
      space on read-only files with guaranteed end-to-end performance, with
      the help of reduced metadata, fixed-sized output compression and
      in-place decompression technologies.
      
      In the past year, EROFS was greatly improved by many people as
      a staging driver, self-tested, beta-tested by a large number of our
      internal users, successfully applied to almost all in-service
      HUAWEI smartphones as part of EMUI 9.1, and proven to be stable
      enough to be moved out of staging.
      
      EROFS is a self-contained filesystem driver. Although there are
      still some TODOs to make it more generic, we have a dedicated team
      actively working on EROFS in order to keep making it better with the
      evolution of the Linux kernel, like the other in-kernel filesystems.
      
      As Pavel suggested, it's better to do this as one commit, since git
      can track moves and all history will be preserved this way.
      
      Let's promote it from staging and enhance it more actively as
      a "real" part of the kernel for wider scenarios!
      
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Theodore Ts'o <tytso@mit.edu>
      Cc: Pavel Machek <pavel@denx.de>
      Cc: David Sterba <dsterba@suse.cz>
      Cc: Amir Goldstein <amir73il@gmail.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Darrick J . Wong <darrick.wong@oracle.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Jaegeuk Kim <jaegeuk@kernel.org>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Chao Yu <yuchao0@huawei.com>
      Cc: Miao Xie <miaoxie@huawei.com>
      Cc: Li Guifu <bluce.liguifu@huawei.com>
      Cc: Fang Wei <fangwei1@huawei.com>
      Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
      Link: https://lore.kernel.org/r/20190822213659.5501-1-hsiangkao@aol.com
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      47e4937a
  17. 29 July 2019, 1 commit
  18. 17 July 2019, 1 commit
  19. 15 July 2019, 1 commit
  20. 26 April 2019, 1 commit
    •
      unicode: introduce UTF-8 character database · 955405d1
      Committed by Gabriel Krisman Bertazi
      The decomposition and casefolding of UTF-8 characters are described in a
      prefix tree in utf8data.h, which is generated from the Unicode
      Character Database (UCD), published by the Unicode Consortium, and
      should not be edited by hand.  The structures in utf8data.h are meant to
      be used for lookup operations by the unicode subsystem, when decoding a
      utf-8 string.
      
      mkutf8data.c is the source for a program that generates utf8data.h. It
      was written by Olaf Weber from SGI and originally proposed to be merged
      into Linux in 2014.  The original proposal performed the compatibility
      decomposition, NFKD, but the current version was modified by me to do
      canonical decomposition, NFD, as suggested by the community.  The
      changes from the original submission are:
      
        * Rebase to mainline.
        * Fix out-of-tree-build.
        * Update makefile to build 11.0.0 ucd files.
        * drop references to xfs.
        * Convert NFKD to NFD.
        * Merge back robustness fixes from original patch. Requested by
          Dave Chinner.
      
      The original submission is archived at:
      
      <https://linux-xfs.oss.sgi.narkive.com/Xx10wjVY/rfc-unicode-utf-8-support-for-xfs>
      
      The utf8data.h file can be regenerated using the instructions in
      fs/unicode/README.utf8data.
      
      - Notes on the update from 8.0.0 to 11.0:
      
      The structure of the ucd files and special cases have not experienced
      any changes between versions 8.0.0 and 11.0.0.  8.0.0 saw the addition
      of Cherokee LC characters, which is an interesting case for
      case-folding.  The update is accompanied by new tests on the test_ucd
      module to catch specific cases.  No changes to mkutf8data script were
      required for the updates.
      Signed-off-by: Gabriel Krisman Bertazi <krisman@collabora.co.uk>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      955405d1
  21. 19 April 2019, 1 commit
  22. 21 March 2019, 2 commits
    •
      vfs: syscall: Add fsopen() to prepare for superblock creation · 24dcb3d9
      Committed by David Howells
      Provide an fsopen() system call that starts the process of preparing to
      create a superblock that will then be mountable, using an fd as a context
      handle.  fsopen() is given the name of the filesystem that will be used:
      
      	int mfd = fsopen(const char *fsname, unsigned int flags);
      
      where flags can be 0 or FSOPEN_CLOEXEC.
      
      For example:
      
      	sfd = fsopen("ext4", FSOPEN_CLOEXEC);
      	fsconfig(sfd, FSCONFIG_SET_PATH, "source", "/dev/sda1", AT_FDCWD);
      	fsconfig(sfd, FSCONFIG_SET_FLAG, "noatime", NULL, 0);
      	fsconfig(sfd, FSCONFIG_SET_FLAG, "acl", NULL, 0);
      	fsconfig(sfd, FSCONFIG_SET_FLAG, "user_xattr", NULL, 0);
      	fsconfig(sfd, FSCONFIG_SET_STRING, "sb", "1", 0);
      	fsconfig(sfd, FSCONFIG_CMD_CREATE, NULL, NULL, 0);
      	fsinfo(sfd, NULL, ...); // query new superblock attributes
      	mfd = fsmount(sfd, FSMOUNT_CLOEXEC, MS_RELATIME);
      	move_mount(mfd, "", sfd, AT_FDCWD, "/mnt", MOVE_MOUNT_F_EMPTY_PATH);
      
      	sfd = fsopen("afs", -1);
      	fsconfig(fd, FSCONFIG_SET_STRING, "source",
      		 "#grand.central.org:root.cell", 0);
      	fsconfig(fd, FSCONFIG_CMD_CREATE, NULL, NULL, 0);
      	mfd = fsmount(sfd, 0, MS_NODEV);
      	move_mount(mfd, "", sfd, AT_FDCWD, "/mnt", MOVE_MOUNT_F_EMPTY_PATH);
      
      If an error is reported at any step, an error message may be available to be
      read() back (ENODATA will be reported if there isn't an error available) in
      the form:
      
      	"e <subsys>:<problem>"
      	"e SELinux:Mount on mountpoint not permitted"
      
      Once fsmount() has been called, further fsconfig() calls will incur EBUSY,
      even if the fsmount() fails.  read() can still be used to retrieve
      error information.
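      
      A minimal sketch of that error read-back path follows. It is hedged:
      raw syscall numbers are used because libc wrappers may not exist, the
      __NR_fsopen/__NR_fsconfig and FSCONFIG_* definitions require recent
      kernel headers, and the bogus option name is only there to provoke an
      error.
      
      	/* Sketch: create an fs context and read back any error string. */
      	#include <linux/mount.h>	/* FSCONFIG_* commands */
      	#include <stdio.h>
      	#include <sys/syscall.h>
      	#include <unistd.h>
      
      	int main(void)
      	{
      		char msg[256];
      		ssize_t n;
      		int sfd = syscall(__NR_fsopen, "ext4", 0);
      
      		if (sfd < 0) {
      			perror("fsopen");
      			return 1;
      		}
      		/* Deliberately bogus option, to provoke an error message. */
      		if (syscall(__NR_fsconfig, sfd, FSCONFIG_SET_STRING,
      			    "bogus_option", "1", 0) < 0) {
      			n = read(sfd, msg, sizeof(msg) - 1);
      			if (n > 0) {
      				msg[n] = '\0';
      				fprintf(stderr, "fs error: %s\n", msg);
      			}
      		}
      		close(sfd);
      		return 0;
      	}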
      
      The fsopen() syscall creates a mount context and hangs it off the fd that it
      returns.
      
      Netlink is not used because it is optional and would make the core VFS
      dependent on the networking layer and also potentially add network
      namespace issues.
      
      Note that, for the moment, the caller must have CAP_SYS_ADMIN to use
      fsopen().
      Signed-off-by: David Howells <dhowells@redhat.com>
      cc: linux-api@vger.kernel.org
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      24dcb3d9
    •
      Make anon_inodes unconditional · dadd2299
      Committed by David Howells
      Make the anon_inodes facility unconditional so that it can be used by core
      VFS code.
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      dadd2299
  23. 28 February 2019, 2 commits
    •
      Add io_uring IO interface · 2b188cc1
      Committed by Jens Axboe
      The submission queue (SQ) and completion queue (CQ) rings are shared
      between the application and the kernel. This eliminates the need to
      copy data back and forth to submit and complete IO.
      
      IO submissions use the io_uring_sqe data structure, and completions
      are generated in the form of io_uring_cqe data structures. The SQ
      ring is an index into the io_uring_sqe array, which makes it possible
      to submit a batch of IOs without them being contiguous in the ring.
      The CQ ring is always contiguous, as completion events are inherently
      unordered, and hence any io_uring_cqe entry can point back to an
      arbitrary submission.
      
      Two new system calls are added for this:
      
      io_uring_setup(entries, params)
      	Sets up an io_uring instance for doing async IO. On success,
      	returns a file descriptor that the application can mmap to
      	gain access to the SQ ring, CQ ring, and io_uring_sqes.
      
      io_uring_enter(fd, to_submit, min_complete, flags, sigset, sigsetsize)
      	Initiates IO against the rings mapped to this fd, or waits for
      	them to complete, or both. The behavior is controlled by the
      	parameters passed in. If 'to_submit' is non-zero, then we'll
      	try and submit new IO. If IORING_ENTER_GETEVENTS is set, the
      	kernel will wait for 'min_complete' events, if they aren't
      	already available. It's valid to set IORING_ENTER_GETEVENTS
      	and 'min_complete' == 0 at the same time, this allows the
      	kernel to return already completed events without waiting
      	for them. This is useful only for polling, as for IRQ
      	driven IO, the application can just check the CQ ring
      	without entering the kernel.
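      
      As a hedged illustration of the first call, a bare-bones setup might
      look like the sketch below; raw syscall numbers are used (liburing did
      not exist yet), <linux/io_uring.h> must come from a kernel that ships
      this interface, and the 4-entry ring size is arbitrary.
      
      	/* Sketch: set up an io_uring instance and map its SQ ring. */
      	#include <linux/io_uring.h>
      	#include <stdio.h>
      	#include <string.h>
      	#include <sys/mman.h>
      	#include <sys/syscall.h>
      	#include <unistd.h>
      
      	int main(void)
      	{
      		struct io_uring_params p;
      		size_t sq_sz;
      		void *sq_ring;
      		int fd;
      
      		memset(&p, 0, sizeof(p));
      		fd = syscall(__NR_io_uring_setup, 4, &p);
      		if (fd < 0) {
      			perror("io_uring_setup");
      			return 1;
      		}
      
      		/* p.sq_off says where the head/tail/array live in the mapping. */
      		sq_sz = p.sq_off.array + p.sq_entries * sizeof(unsigned int);
      		sq_ring = mmap(NULL, sq_sz, PROT_READ | PROT_WRITE,
      			       MAP_SHARED | MAP_POPULATE, fd, IORING_OFF_SQ_RING);
      		if (sq_ring == MAP_FAILED) {
      			perror("mmap sq ring");
      			return 1;
      		}
      
      		printf("sq entries %u, cq entries %u\n", p.sq_entries, p.cq_entries);
      		munmap(sq_ring, sq_sz);
      		close(fd);
      		return 0;
      	}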
      
      With this setup, it's possible to do async IO with a single system
      call. Future developments will enable polled IO with this interface,
      and polled submission as well. The latter will enable an application
      to do IO without doing ANY system calls at all.
      
      For IRQ driven IO, an application only needs to enter the kernel for
      completions if it wants to wait for them to occur.
      
      Each io_uring is backed by a workqueue, to support buffered async IO
      as well. We will only punt to an async context if the command would
      need to wait for IO on the device side. Any data that can be accessed
      directly in the page cache is done inline. This avoids the slowness
      issue of usual threadpools, since cached data is accessed as quickly
      as a sync interface.
      
      Sample application: http://git.kernel.dk/cgit/fio/plain/t/io_uring.c
      Reviewed-by: Hannes Reinecke <hare@suse.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      2b188cc1
    •
      vfs: Add configuration parser helpers · 31d921c7
      Committed by David Howells
      Because the new API passes in key,value parameters, match_token() cannot be
      used with it.  Instead, provide three new helpers to aid with parsing:
      
       (1) fs_parse().  This takes a parameter and a simple static description of
           all the parameters and maps the key name to an ID.  It returns 1 on a
           match, 0 on no match if unknowns should be ignored and some other
           negative error code on a parse error.
      
           The parameter description includes a list of key names to IDs, desired
           parameter types and a list of enumeration name -> ID mappings.
      
           [!] Note that for the moment I've required that the key->ID mapping
           array is expected to be sorted and unterminated.  The size of the
           array is noted in the fsconfig_parser struct.  This allows me to use
           bsearch(), but I'm not sure any performance gain is worth the hassle
           of requiring people to keep the array sorted.
      
           The parameter type array is sized according to the number of parameter
           IDs and is indexed directly.  The optional enum mapping array is an
           unterminated, unsorted list and the size goes into the fsconfig_parser
           struct.
      
           The function can do some additional things:
      
      	(a) If it's not ambiguous and no value is given, the prefix "no" on
      	    a key name is permitted to indicate that the parameter should
      	    be considered negatory.
      
      	(b) If the desired type is a single simple integer, it will perform
      	    an appropriate conversion and store the result in a union in
      	    the parse result.
      
      	(c) If the desired type is an enumeration, {key ID, name} will be
      	    looked up in the enumeration list and the matching value will
      	    be stored in the parse result union.
      
      	(d) Optionally generate an error if the key is unrecognised.
      
           This is called something like:
      
      	enum rdt_param {
      		Opt_cdp,
      		Opt_cdpl2,
      		Opt_mba_mpbs,
      		nr__rdt_params
      	};
      
      	const struct fs_parameter_spec rdt_param_specs[nr__rdt_params] = {
      		[Opt_cdp]	= { fs_param_is_bool },
      		[Opt_cdpl2]	= { fs_param_is_bool },
      		[Opt_mba_mpbs]	= { fs_param_is_bool },
      	};
      
      	const char *const rdt_param_keys[nr__rdt_params] = {
      		[Opt_cdp]	= "cdp",
      		[Opt_cdpl2]	= "cdpl2",
      		[Opt_mba_mpbs]	= "mba_mbps",
      	};
      
      	const struct fs_parameter_description rdt_parser = {
      		.name		= "rdt",
      		.nr_params	= nr__rdt_params,
      		.keys		= rdt_param_keys,
      		.specs		= rdt_param_specs,
      		.no_source	= true,
      	};
      
      	int rdt_parse_param(struct fs_context *fc,
      			    struct fs_parameter *param)
      	{
      		struct fs_parse_result parse;
      		struct rdt_fs_context *ctx = rdt_fc2context(fc);
      		int ret;
      
      		ret = fs_parse(fc, &rdt_parser, param, &parse);
      		if (ret < 0)
      			return ret;
      
      		switch (parse.key) {
      		case Opt_cdp:
      			ctx->enable_cdpl3 = true;
      			return 0;
      		case Opt_cdpl2:
      			ctx->enable_cdpl2 = true;
      			return 0;
      		case Opt_mba_mpbs:
      			ctx->enable_mba_mbps = true;
      			return 0;
      		}
      
      		return -EINVAL;
      	}
      
       (2) fs_lookup_param().  This takes a { dirfd, path, LOOKUP_EMPTY? } or
           string value and performs an appropriate path lookup to convert it
           into a path object, which it will then return.
      
           If the desired type was a blockdev, the type of the looked up inode
           will be checked to make sure it is one.
      
           This can be used like:
      
      	enum foo_param {
      		Opt_source,
      		nr__foo_params
      	};
      
      	const struct fs_parameter_spec foo_param_specs[nr__foo_params] = {
      		[Opt_source]	= { fs_param_is_blockdev },
      	};
      
      	const char *const foo_param_keys[nr__foo_params] = {
      		[Opt_source]	= "source",
      	};
      
      	const struct constant_table foo_param_alt_keys[] = {
      		{ "device",	Opt_source },
      	};
      
      	const struct fs_parameter_description foo_parser = {
      		.name		= "foo",
      		.nr_params	= nr__foo_params,
      		.nr_alt_keys	= ARRAY_SIZE(foo_param_alt_keys),
      		.keys		= foo_param_keys,
      		.alt_keys	= foo_param_alt_keys,
      		.specs		= foo_param_specs,
      	};
      
      	int foo_parse_param(struct fs_context *fc,
      			    struct fs_parameter *param)
      	{
      		struct fs_parse_result parse;
      		struct foo_fs_context *ctx = foo_fc2context(fc);
      		int ret;
      
      		ret = fs_parse(fc, &foo_parser, param, &parse);
      		if (ret < 0)
      			return ret;
      
      		switch (parse.key) {
      		case Opt_source:
      			return fs_lookup_param(fc, &foo_parser, param,
      					       &parse, &ctx->source);
      		default:
      			return -EINVAL;
      		}
      	}
      
       (3) lookup_constant().  This takes a table of named constants and looks up
           the given name within it.  The table is expected to be sorted such
           that bsearch() can be used upon it.
      
           Possibly I should require the table be terminated and just use a
           for-loop to scan it instead of using bsearch() to reduce hassle.
      
           Tables look something like:
      
      	static const struct constant_table bool_names[] = {
      		{ "0",		false },
      		{ "1",		true },
      		{ "false",	false },
      		{ "no",		false },
      		{ "true",	true },
      		{ "yes",	true },
      	};
      
           and a lookup is done with something like:
      
      	b = lookup_constant(bool_names, param->string, -1);
      
      Additionally, optional validation routines for the parameter description
      are provided that can be enabled at compile time.  A later patch will
      invoke these when a filesystem is registered.
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      31d921c7
  24. 06 February 2019, 1 commit
  25. 31 January 2019, 1 commit
    •
      vfs: Introduce fs_context, switch vfs_kern_mount() to it. · 9bc61ab1
      Committed by David Howells
      Introduce a filesystem context concept to be used during superblock
      creation for mount and superblock reconfiguration for remount.  This is
      allocated at the beginning of the mount procedure and into it is placed:
      
       (1) Filesystem type.
      
       (2) Namespaces.
      
       (3) Source/Device names (there may be multiple).
      
       (4) Superblock flags (SB_*).
      
       (5) Security details.
      
       (6) Filesystem-specific data, as set by the mount options.
      
      Accessor functions are then provided to set up a context, parameterise it
      from monolithic mount data (the data page passed to mount(2)) and tear it
      down again.
      
      A legacy wrapper is provided that implements what will be the basic
      operations, wrapping access to filesystems that aren't yet aware of the
      fs_context.
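      
      For filesystems that are aware of it, the hookup looks roughly like the
      sketch below. This is an approximation: the foofs_* names are made up,
      and helpers such as get_tree_nodev() and the init_fs_context hook in
      struct file_system_type come from the wider fs_context series rather
      than this single commit.
      
      	#include <linux/fs.h>
      	#include <linux/fs_context.h>
      	#include <linux/module.h>
      
      	static int foofs_fill_super(struct super_block *sb, struct fs_context *fc)
      	{
      		/* Set up s_op, the root inode/dentry, etc. */
      		return 0;
      	}
      
      	static int foofs_get_tree(struct fs_context *fc)
      	{
      		return get_tree_nodev(fc, foofs_fill_super);
      	}
      
      	static const struct fs_context_operations foofs_context_ops = {
      		.get_tree	= foofs_get_tree,
      	};
      
      	static int foofs_init_fs_context(struct fs_context *fc)
      	{
      		fc->ops = &foofs_context_ops;
      		return 0;
      	}
      
      	static struct file_system_type foofs_type = {
      		.owner			= THIS_MODULE,
      		.name			= "foofs",
      		.init_fs_context	= foofs_init_fs_context,
      		.kill_sb		= kill_anon_super,
      	};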
      
      Finally, vfs_kern_mount() is changed to make use of the fs_context and
      mount_fs() is replaced by vfs_get_tree(), called from vfs_kern_mount().
      [AV -- add missing kstrdup()]
      [AV -- put_cred() can be unconditional - fc->cred can't be NULL]
      [AV -- take legacy_validate() contents into legacy_parse_monolithic()]
      [AV -- merge KERNEL_MOUNT and USER_MOUNT]
      [AV -- don't unlock superblock on success return from vfs_get_tree()]
      [AV -- kill 'reference' argument of init_fs_context()]
      Signed-off-by: David Howells <dhowells@redhat.com>
      Co-developed-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      9bc61ab1
  26. 22 January 2019, 1 commit
  27. 11 June 2018, 1 commit
    •
      autofs: remove left-over autofs4 stubs · a2225d93
      Committed by Linus Torvalds
      There's no need to retain the fs/autofs4 directory for backward
      compatibility.
      
      Adding an AUTOFS4_FS fragment to the autofs Kconfig and a module alias
      for autofs4 is sufficient for almost all cases. Not keeping the fs/autofs4
      remnants will prevent "insmod <path>/autofs4/autofs4.ko" from working,
      but insmod shouldn't be used in automation scripts in place of
      modprobe(8) anyway.
      
      There were some comments about things to look out for with the module
      rename in the fs/autofs4/Kconfig that is removed by this patch; see the
      commit patch if you are interested.
      
      One potential problem with this change is that when the
      fs/autofs/Kconfig fragment for AUTOFS4_FS is removed any AUTOFS4_FS
      entries will be removed from the kernel config, resulting in no autofs
      file system being built if there is no AUTOFS_FS entry also.
      
      This would have also happened if the fs/autofs4 remnants had remained
      and is most likely to be a problem with automated builds.
      
      Please check your build configurations before the removal which will
      occur after the next couple of kernel releases.
      Acked-by: Ian Kent <raven@themaw.net>
      [ With edits and commit message from Ian Kent ]
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a2225d93
  28. 08 June 2018, 1 commit
  29. 30 March 2018, 1 commit
  30. 28 November 2017, 1 commit
  31. 02 November 2017, 1 commit
    •
      License cleanup: add SPDX GPL-2.0 license identifier to files with no license · b2441318
      Committed by Greg Kroah-Hartman
      Many source files in the tree are missing licensing information, which
      makes it harder for compliance tools to determine the correct license.
      
      By default all files without license information are under the default
      license of the kernel, which is GPL version 2.
      
      Update the files which contain no license information with the 'GPL-2.0'
      SPDX license identifier.  The SPDX identifier is a legally binding
      shorthand, which can be used instead of the full boilerplate text.
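      
      Concretely, the identifier is added as the first line of each file,
      using the comment style appropriate to the file type (the file names
      below are only placeholders):
      
      	// SPDX-License-Identifier: GPL-2.0
      	/* example.c: C source files use the C++-style comment form. */
      
      	/* SPDX-License-Identifier: GPL-2.0 */
      	/* example.h: headers use the classic block-comment form. */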
      
      This patch is based on work done by Thomas Gleixner and Kate Stewart and
      Philippe Ombredanne.
      
      How this work was done:
      
      Patches were generated and checked against linux-4.14-rc6 for a subset of
      the use cases:
       - file had no licensing information in it.
       - file was a */uapi/* one with no licensing information in it,
       - file was a */uapi/* one with existing licensing information,
      
      Further patches will be generated in subsequent months to fix up cases
      where non-standard license headers were used, and references to license
      had to be inferred by heuristics based on keywords.
      
      The analysis to determine which SPDX License Identifier to be applied to
      a file was done in a spreadsheet of side by side results from the
      output of two independent scanners (ScanCode & Windriver) producing SPDX
      tag:value files created by Philippe Ombredanne.  Philippe prepared the
      base worksheet, and did an initial spot review of a few 1000 files.
      
      The 4.13 kernel was the starting point of the analysis with 60,537 files
      assessed.  Kate Stewart did a file by file comparison of the scanner
      results in the spreadsheet to determine which SPDX license identifier(s)
      to be applied to the file. She confirmed any determination that was not
      immediately clear with lawyers working with the Linux Foundation.
      
      Criteria used to select files for SPDX license identifier tagging was:
       - Files considered eligible had to be source code files.
       - Make and config files were included as candidates if they contained >5
         lines of source
       - File already had some variant of a license header in it (even if <5
         lines).
      
      All documentation files were explicitly excluded.
      
      The following heuristics were used to determine which SPDX license
      identifiers to apply.
      
       - when both scanners couldn't find any license traces, file was
         considered to have no license information in it, and the top level
         COPYING file license applied.
      
         For non */uapi/* files that summary was:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|-------
         GPL-2.0                                              11139
      
         and resulted in the first patch in this series.
      
         If that file was a */uapi/* path one, it was "GPL-2.0 WITH
         Linux-syscall-note" otherwise it was "GPL-2.0".  Results of that was:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|-------
         GPL-2.0 WITH Linux-syscall-note                        930
      
         and resulted in the second patch in this series.
      
       - if a file had some form of licensing information in it, and was one
         of the */uapi/* ones, it was denoted with the Linux-syscall-note if
         any GPL family license was found in the file or had no licensing in
         it (per prior point).  Results summary:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|------
         GPL-2.0 WITH Linux-syscall-note                       270
         GPL-2.0+ WITH Linux-syscall-note                      169
         ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)    21
         ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)    17
         LGPL-2.1+ WITH Linux-syscall-note                      15
         GPL-1.0+ WITH Linux-syscall-note                       14
         ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)    5
         LGPL-2.0+ WITH Linux-syscall-note                       4
         LGPL-2.1 WITH Linux-syscall-note                        3
         ((GPL-2.0 WITH Linux-syscall-note) OR MIT)              3
         ((GPL-2.0 WITH Linux-syscall-note) AND MIT)             1
      
         and that resulted in the third patch in this series.
      
       - when the two scanners agreed on the detected license(s), that became
         the concluded license(s).
      
       - when there was disagreement between the two scanners (one detected a
         license but the other didn't, or they both detected different
         licenses) a manual inspection of the file occurred.
      
       - In most cases a manual inspection of the information in the file
         resulted in a clear resolution of the license that should apply (and
         which scanner probably needed to revisit its heuristics).
      
       - When it was not immediately clear, the license identifier was
         confirmed with lawyers working with the Linux Foundation.
      
       - If there was any question as to the appropriate license identifier,
         the file was flagged for further research and to be revisited later
         in time.
      
      In total, over 70 hours of logged manual review was done on the
      spreadsheet to determine the SPDX license identifiers to apply to the
      source files by Kate, Philippe, Thomas and, in some cases, confirmation
      by lawyers working with the Linux Foundation.
      
      Kate also obtained a third independent scan of the 4.13 code base from
      FOSSology, and compared selected files where the other two scanners
      disagreed against that SPDX file, to see if there were new insights.  The
      Windriver scanner is based on an older version of FOSSology in part, so
      they are related.
      
      Thomas did random spot checks in about 500 files from the spreadsheets
      for the uapi headers and agreed with SPDX license identifier in the
      files he inspected. For the non-uapi files Thomas did random spot checks
      in about 15000 files.
      
      In the initial set of patches against 4.14-rc6, 3 files were found to have
      copy/paste license identifier errors, and have been fixed to reflect the
      correct identifier.
      
      Additionally Philippe spent 10 hours this week doing a detailed manual
      inspection and review of the 12,461 patched files from the initial patch
      version early this week with:
       - a full scancode scan run, collecting the matched texts, detected
         license ids and scores
       - reviewing anything where there was a license detected (about 500+
         files) to ensure that the applied SPDX license was correct
       - reviewing anything where there was no detection but the patch license
         was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
         SPDX license was correct
      
      This produced a worksheet with 20 files needing minor correction.  This
      worksheet was then exported into 3 different .csv files for the
      different types of files to be modified.
      
      These .csv files were then reviewed by Greg.  Thomas wrote a script to
      parse the csv files and add the proper SPDX tag to the file, in the
      format that the file expected.  This script was further refined by Greg
      based on the output to detect more types of files automatically and to
      distinguish between header and source .c files (which need different
      comment types).  Finally, Greg ran the script using the .csv files to
      generate the patches.
      Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
      Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      b2441318
  32. 15 December 2016, 1 commit
    •
      logfs: remove from tree · 1d0fd57a
      Committed by Christoph Hellwig
      Logfs was introduced to the kernel in 2009, and hasn't seen any
      non-drive-by changes since 2012, while having lots of unsolved issues
      including the complete lack of error handling, with more and more
      issues popping up without any fixes.
      
      The logfs.org domain has been bouncing mail, and the maintainer
      on the non-logfs.org domain hasn't responded to past queries either.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      1d0fd57a
  33. 21 June 2016, 1 commit
    •
      fs: introduce iomap infrastructure · ae259a9c
      Committed by Christoph Hellwig
      Add infrastructure for multipage buffered writes.  This is implemented
      using a main iterator that applies an actor function to a range that
      can be written.
      
      This infrastructure is used to implement a buffered write helper, one
      to zero file ranges and one to implement the ->page_mkwrite VM
      operation.  All of them borrow a fair amount of code from fs/buffer.c
      for now by using an internal version of __block_write_begin that
      gets passed an iomap and builds the corresponding buffer head.
      
      The file system gets a set of paired ->iomap_begin and ->iomap_end
      calls which allow it to map/reserve a range and get a notification
      once the write code is finished with it.
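      
      Schematically, the filesystem side of that contract looks like the
      hedged sketch below; the foofs_* names are invented, the bodies are
      stubs, and the exact signatures should be taken from include/linux/iomap.h
      (later kernels extend ->iomap_begin with an additional srcmap argument).
      
      	#include <linux/iomap.h>
      
      	static int foofs_iomap_begin(struct inode *inode, loff_t pos,
      				     loff_t length, unsigned flags,
      				     struct iomap *iomap)
      	{
      		/* Map or reserve [pos, pos + length) and describe the
      		 * covering extent in *iomap. */
      		return 0;
      	}
      
      	static int foofs_iomap_end(struct inode *inode, loff_t pos,
      				   loff_t length, ssize_t written,
      				   unsigned flags, struct iomap *iomap)
      	{
      		/* Notification that the write code is done with the range,
      		 * e.g. trim unused reservations here. */
      		return 0;
      	}
      
      	static const struct iomap_ops foofs_iomap_ops = {
      		.iomap_begin	= foofs_iomap_begin,
      		.iomap_end	= foofs_iomap_end,
      	};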
      
      Based on earlier code from Dave Chinner.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Bob Peterson <rpeterso@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      
      ae259a9c
  34. 18 March 2016, 1 commit
    •
      fs crypto: move per-file encryption from f2fs tree to fs/crypto · 0b81d077
      Committed by Jaegeuk Kim
      This patch adds the renamed functions moved from the f2fs crypto files.
      
      1. definitions for per-file encryption used by ext4 and f2fs.
      
      2. crypto.c for encrypt/decrypt functions
       a. IO preparation:
        - fscrypt_get_ctx / fscrypt_release_ctx
       b. before IOs:
        - fscrypt_encrypt_page
        - fscrypt_decrypt_page
        - fscrypt_zeroout_range
       c. after IOs:
        - fscrypt_decrypt_bio_pages
        - fscrypt_pullback_bio_page
        - fscrypt_restore_control_page
      
      3. policy.c supporting context management.
       a. For ioctls:
        - fscrypt_process_policy
        - fscrypt_get_policy
       b. For context permission
        - fscrypt_has_permitted_context
        - fscrypt_inherit_context
      
      4. keyinfo.c to handle permissions
        - fscrypt_get_encryption_info
        - fscrypt_free_encryption_info
      
      5. fname.c to support filename encryption
       a. general wrapper functions
        - fscrypt_fname_disk_to_usr
        - fscrypt_fname_usr_to_disk
        - fscrypt_setup_filename
        - fscrypt_free_filename
      
       b. specific filename handling functions
        - fscrypt_fname_alloc_buffer
        - fscrypt_fname_free_buffer
      
      6. Makefile and Kconfig
      
      Cc: Al Viro <viro@ftp.linux.org.uk>
      Signed-off-by: Michael Halcrow <mhalcrow@google.com>
      Signed-off-by: Ildar Muslukhov <ildarm@google.com>
      Signed-off-by: Uday Savagaonkar <savagaon@google.com>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
      0b81d077
  35. 23 February 2016, 3 commits
    •
      mbcache2: rename to mbcache · 7a2508e1
      Committed by Jan Kara
      Since the old mbcache code is gone, let's rename the new code to mbcache,
      since the number 2 is now meaningless. This is just a mechanical replacement.
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      7a2508e1
    •
      mbcache: remove mbcache · ecd1e644
      Committed by Jan Kara
      Both ext2 and ext4 are now converted to mbcache2. Remove the old mbcache
      code.
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      ecd1e644
    •
      mbcache2: reimplement mbcache · f9a61eb4
      Committed by Jan Kara
      The original mbcache was designed to have more features than what ext?
      filesystems ended up using. It supported an entry being in multiple hashes,
      it had a home-grown rwlocking of each entry, and one cache could cache
      entries from multiple filesystems. This genericity also resulted in more
      complex locking, larger cache entries, and generally more code
      complexity.
      
      This is a reimplementation of the mbcache functionality to exactly fit the
      purpose ext? filesystems use it for. Cache entries are now considerably
      smaller (7 instead of 13 longs), the code is considerably smaller as
      well (414 vs 913 lines of code), and IMO also simpler. The new code is
      also much more lightweight.
      
      I have measured the speed using artificial xattr-bench benchmark, which
      spawns P processes, each process sets xattr for F different files, and
      the value of xattr is randomly chosen from a pool of V values. Averages
      of runtimes for 5 runs for various combinations of parameters are below.
      The first value in each cell is old mbache, the second value is the new
      mbcache.
      
      V=10
      F\P	1		2		4		8		16		32		64
      10	0.158,0.157	0.208,0.196	0.500,0.277	0.798,0.400	3.258,0.584	13.807,1.047	61.339,2.803
      100	0.172,0.167	0.279,0.222	0.520,0.275	0.825,0.341	2.981,0.505	12.022,1.202	44.641,2.943
      1000	0.185,0.174	0.297,0.239	0.445,0.283	0.767,0.340	2.329,0.480	6.342,1.198	16.440,3.888
      
      V=100
      F\P	1		2		4		8		16		32		64
      10	0.162,0.153	0.200,0.186	0.362,0.257	0.671,0.496	1.433,0.943	3.801,1.345	7.938,2.501
      100	0.153,0.160	0.221,0.199	0.404,0.264	0.945,0.379	1.556,0.485	3.761,1.156	7.901,2.484
      1000	0.215,0.191	0.303,0.246	0.471,0.288	0.960,0.347	1.647,0.479	3.916,1.176	8.058,3.160
      
      V=1000
      F\P	1		2		4		8		16		32		64
      10	0.151,0.129	0.210,0.163	0.326,0.245	0.685,0.521	1.284,0.859	3.087,2.251	6.451,4.801
      100	0.154,0.153	0.211,0.191	0.276,0.282	0.687,0.506	1.202,0.877	3.259,1.954	8.738,2.887
      1000	0.145,0.179	0.202,0.222	0.449,0.319	0.899,0.333	1.577,0.524	4.221,1.240	9.782,3.579
      
      V=10000
      F\P	1		2		4		8		16		32		64
      10	0.161,0.154	0.198,0.190	0.296,0.256	0.662,0.480	1.192,0.818	2.989,2.200	6.362,4.746
      100	0.176,0.174	0.236,0.203	0.326,0.255	0.696,0.511	1.183,0.855	4.205,3.444	19.510,17.760
      1000	0.199,0.183	0.240,0.227	1.159,1.014	2.286,2.154	6.023,6.039	---,10.933	---,36.620
      
      V=100000
      F\P	1		2		4		8		16		32		64
      10	0.171,0.162	0.204,0.198	0.285,0.230	0.692,0.500	1.225,0.881	2.990,2.243	6.379,4.771
      100	0.151,0.171	0.220,0.210	0.295,0.255	0.720,0.518	1.226,0.844	3.423,2.831	19.234,17.544
      1000	0.192,0.189	0.249,0.225	1.162,1.043	2.257,2.093	5.853,4.997	---,10.399	---,32.198
      
      We see that the new code is faster in pretty much all the cases and
      starting from 4 processes there are significant gains with the new code
      resulting in up to 20-times shorter runtimes. Also for large numbers of
      cached entries all values for the old code could not be measured as the
      kernel started hitting softlockups and died before the test completed.
      Signed-off-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      f9a61eb4
  36. 15 October 2015, 1 commit
    •
      ext4: promote ext4 over ext2 in the default probe order · 9172796b
      Committed by Darrick J. Wong
      Prevent clean ext3 filesystems from mounting by default with the ext2
      driver (with no journal!) by putting ext4 ahead of ext2 in the default
      probe order.  This will have the effect of mounting ext2 filesystems
      with ext4.ko by default, which is a safer failure than hoping the user
      notices that their journalled ext3 is now running without a journal!
      
      Users who require ext2.ko for ext2 can either disable ext4.ko or
      explicitly request ext2 via "mount -t ext2" or "rootfstype=ext2".
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Theodore Ts'o <tytso@mit.edu>
      9172796b