1. 19 December 2014, 1 commit
    • hfsplus: fix longname handling · 89ac9b4d
      Committed by Sougata Santra
      Longname is not correctly handled by the hfsplus driver.  An attempt to
      create a file or directory with a long name (>255 characters) succeeds,
      but produces an entry whose name is truncated to HFSPLUS_MAX_STRLEN and
      whose catalog key is incorrect, leaving the volume in an inconsistent
      state.  This patch fixes the issue.
      
      Lookup is always called first to create a negative entry, so a check in
      lookup alone would probably be enough to fix this issue; however, I chose
      to propagate the error to the other iops as well.
      
      Please NOTE: I have factored out hfsplus_cat_build_key_with_cnid from
      hfsplus_cat_build_key to avoid unnecessary branching.
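
      A rough sketch of the resulting split is shown below (illustrative only;
      the real kernel signatures and constants may differ slightly from these
      assumptions).  hfsplus_cat_build_key() now returns the conversion error,
      while the new hfsplus_cat_build_key_with_cnid() covers the name-less
      case that previously shared the same branch:

        /* Sketch only -- simplified from this changelog, not the exact diff. */
        int hfsplus_cat_build_key(struct super_block *sb, hfsplus_btree_key *key,
                                  u32 parent, const struct qstr *str)
        {
                int len, err;

                key->cat.parent = cpu_to_be32(parent);
                err = hfsplus_asc2uni(sb, &key->cat.name, HFSPLUS_MAX_STRLEN,
                                      str->name, str->len);
                if (unlikely(err < 0))
                        return err;     /* -ENAMETOOLONG now reaches the caller */

                len = be16_to_cpu(key->cat.name.length);
                key->key_len = cpu_to_be16(6 + 2 * len);
                return 0;
        }

        void hfsplus_cat_build_key_with_cnid(struct super_block *sb,
                                             hfsplus_btree_key *key, u32 parent)
        {
                /* No name to convert, so no error can occur here. */
                key->cat.parent = cpu_to_be32(parent);
                key->cat.name.length = 0;
                key->key_len = cpu_to_be16(6);
        }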
      
      Thanks a lot.
      
        TEST:
        ------
        dir="TEST_DIR"
        cdir=`pwd`
        name255="_123456789_123456789_123456789_123456789_123456789_123456789\
        _123456789_123456789_123456789_123456789_123456789_123456789_123456789\
        _123456789_123456789_123456789_123456789_123456789_123456789_123456789\
        _123456789_123456789_123456789_123456789_123456789_1234"
        name256="${name255}5"
      
        mkdir $dir
        cd $dir
        touch $name255
        rm -f $name255
        touch $name256
        ls -la
        cd $cdir
        rm -rf $dir
      
        RESULT:
        -------
        [sougata@ultrabook tmp]$ cdir=`pwd`
        [sougata@ultrabook tmp]$
        name255="_123456789_123456789_123456789_123456789_123456789_123456789\
         > _123456789_123456789_123456789_123456789_123456789_123456789_123456789\
         > _123456789_123456789_123456789_123456789_123456789_123456789_123456789\
         > _123456789_123456789_123456789_123456789_123456789_1234"
        [sougata@ultrabook tmp]$ name256="${name255}5"
        [sougata@ultrabook tmp]$
        [sougata@ultrabook tmp]$ mkdir $dir
        [sougata@ultrabook tmp]$ cd $dir
        [sougata@ultrabook TEST_DIR]$ touch $name255
        [sougata@ultrabook TEST_DIR]$ rm -f $name255
        [sougata@ultrabook TEST_DIR]$ touch $name256
        [sougata@ultrabook TEST_DIR]$ ls -la
        ls: cannot access
        _123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_1234:
        No such file or directory
        total 0
        drwxrwxr-x 1 sougata sougata 3 Feb 20 19:56 .
        drwxrwxrwx 1 root    root    6 Feb 20 19:56 ..
        -????????? ? ?       ?       ?            ?
        _123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_123456789_1234
        [sougata@ultrabook TEST_DIR]$ cd $cdir
        [sougata@ultrabook tmp]$ rm -rf $dir
        rm: cannot remove `TEST_DIR': Directory not empty
      
      The -ENAMETOOLONG returned from hfsplus_asc2uni was not propagated to the
      iops.  This allowed hfsplus to create files and directories with names
      truncated to HFSPLUS_MAX_STRLEN and with incorrect keys, leaving the
      filesystem in an inconsistent state.  This patch fixes the issue.
      Signed-off-by: Sougata Santra <sougata@tuxera.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Cc: Vyacheslav Dubeyko <slava@dubeyko.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  2. 07 June 2014, 2 commits
    • hfsplus: emit proper file type from readdir · 97a62eae
      Committed by Sergei Antonov
      hfsplus_readdir() incorrectly returned DT_REG for symbolic links and
      special files.  Return DT_REG, DT_LNK, DT_FIFO, DT_CHR, DT_BLK, DT_SOCK,
      or DT_UNKNOWN according to the mode field in the catalog record.
      Programs relying on type information from readdir will now work correctly
      with HFS+.
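
      A hedged sketch of the mapping the fix introduces (the helper name below
      is hypothetical and the real patch may structure this differently; the
      type is derived from the BE16 mode read out of the catalog file record):

        /* Hypothetical helper: map a catalog-record mode to a DT_* type. */
        static unsigned int hfsplus_dt_type(umode_t mode)
        {
                if (S_ISREG(mode))
                        return DT_REG;
                if (S_ISLNK(mode))
                        return DT_LNK;
                if (S_ISFIFO(mode))
                        return DT_FIFO;
                if (S_ISCHR(mode))
                        return DT_CHR;
                if (S_ISBLK(mode))
                        return DT_BLK;
                if (S_ISSOCK(mode))
                        return DT_SOCK;
                return DT_UNKNOWN;
        }

        /*
         * In hfsplus_readdir(), roughly:
         *      mode = be16_to_cpu(entry.file.permissions.mode);
         *      dir_emit(ctx, strbuf, len, cnid, hfsplus_dt_type(mode));
         * instead of unconditionally emitting DT_REG.
         */
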
      Signed-off-by: Sergei Antonov <saproj@gmail.com>
      Cc: Anton Altaparmakov <aia21@cam.ac.uk>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Vyacheslav Dubeyko <slava@dubeyko.com>
      Cc: Hin-Tak Leung <htl10@users.sourceforge.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • hfsplus: fix worst-case unicode to char conversion of file names and attributes · 017f8da4
      Committed by Hin-Tak Leung
      This is a series of 3 patches which corrects issues in HFS+ concerning
      the use of non-English file names and attributes.  Names and attributes
      are stored internally as UTF-16 units up to a fixed maximum size, and are
      converted to and from the user representation by NLS.  The code
      incorrectly assumes that NLS string lengths are equal to Unicode lengths,
      which is only true for English ASCII usage.
      
      This patch (of 3):
      
      The HFS Plus Volume Format specification (TN1150) states that file names
      are stored internally as a maximum of 255 Unicode characters, as defined
      by The Unicode Standard, Version 2.0 [Unicode, Inc.  ISBN 0-201-48345-9].
      File names are converted by the NLS system on Linux before being
      presented to the user.
      
      255 CJK characters convert to UTF-8 at up to 3 bytes per Unicode
      character, and to GB18030 at up to 4 bytes per character.  Since anything
      beyond 85 characters (85 x 3 = 255 bytes) overflows the old 255-byte
      buffer, trying in a UTF-8 locale to list files with names of more than
      85 CJK characters results in:
      
          $ ls /mnt
          ls: reading directory /mnt: File name too long
      
      The receiving buffer passed to hfsplus_uni2asc() needs to be 255 x
      NLS_MAX_CHARSET_SIZE bytes, not the 255 bytes the code has always used.
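
      Concretely, the fix is to size the receiving buffer for the worst case
      rather than for the Unicode length.  A minimal sketch, assuming the
      usual kernel constants (the exact call sites vary per caller):

        /*
         * Sketch: hfsplus_uni2asc() can emit one multi-byte NLS sequence per
         * UTF-16 unit, so allocate HFSPLUS_MAX_STRLEN * NLS_MAX_CHARSET_SIZE
         * bytes plus a terminator instead of the old 255-byte buffer.
         */
        char *strbuf;

        strbuf = kmalloc(NLS_MAX_CHARSET_SIZE * HFSPLUS_MAX_STRLEN + 1,
                         GFP_KERNEL);
        if (!strbuf)
                return -ENOMEM;
        /* ... convert with hfsplus_uni2asc() into strbuf ... */
        kfree(strbuf);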
      
      A similar consideration applies to attributes, which are stored
      internally as a maximum of 127 UTF-16BE units.  See the XNU source for an
      up-to-date reference on attributes.
      
      Strictly speaking, the maximum value of NLS_MAX_CHARSET_SIZE = 6 is not
      attainable in the case of conversion to UTF-8, as going beyond 3 bytes
      requires the use of surrogate pairs, i.e.  consuming two input units.
      
      Thanks Anton Altaparmakov for reviewing an earlier version of this
      change.
      
      This patch fixes all callers of hfsplus_uni2asc() and enables the use of
      long non-English file names in HFS+.  Getting, setting, and general use
      of long non-English attributes require further work in the following
      patches of this series.
      
      [akpm@linux-foundation.org: fix build]
      Signed-off-by: Hin-Tak Leung <htl10@users.sourceforge.net>
      Reviewed-by: Anton Altaparmakov <anton@tuxera.com>
      Cc: Vyacheslav Dubeyko <slava@dubeyko.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Sougata Santra <sougata@tuxera.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  3. 01 February 2014, 1 commit
  4. 26 January 2014, 1 commit
  5. 12 September 2013, 1 commit
  6. 29 June 2013, 1 commit
  7. 01 May 2013, 1 commit
  8. 28 February 2013, 1 commit
  9. 23 February 2013, 1 commit
  10. 23 July 2012, 1 commit
    • hfsplus: get rid of write_super · 9e6c5829
      Committed by Artem Bityutskiy
      This patch makes hfsplus stop using the VFS '->write_super()' method and
      the 's_dirt' superblock flag, because they are on their way out.
      
      The whole "superblock write-out" VFS infrastructure is served by the
      'sync_supers()' kernel thread, which wakes up every 5 (by default) seconds
      and writes out all dirty superblocks using the '->write_super()' call-back.
      The problem with this thread is that it wastes power by waking up the
      system every 5 seconds, even if there are no dirty superblocks or no
      client file-systems that need it (e.g., btrfs does not use
      '->write_super()').  So we want to kill it completely; to do that, we need
      to make file-systems stop using the '->write_super()' VFS service, and
      then remove it together with the kernel thread.
      
      Tested using fsstress from the LTP project.
      Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  11. 14 July 2012, 2 commits
  12. 05 May 2012, 1 commit
  13. 04 January 2012, 3 commits
  14. 02 November 2011, 1 commit
  15. 07 July 2011, 1 commit
  16. 28 May 2011, 1 commit
  17. 26 May 2011, 2 commits
  18. 13 January 2011, 1 commit
  19. 07 January 2011, 1 commit
    • fs: dcache reduce branches in lookup path · fb045adb
      Committed by Nick Piggin
      Reduce some branches and memory accesses in dcache lookup by adding dentry
      flags to indicate common d_ops are set, rather than having to check them.
      This saves a pointer memory access (dentry->d_op) in common path lookup
      situations, and saves another pointer load and branch in cases where we
      have d_op but not the particular operation.
      
      Patched with:
      
      git grep -E '[.>]([[:space:]])*d_op([[:space:]])*=' | xargs sed -e 's/\([^\t ]*\)->d_op = \(.*\);/d_set_d_op(\1, \2);/' -e 's/\([^\t ]*\)\.d_op = \(.*\);/d_set_d_op(\&\1, \2);/' -i
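
      The effect on a hot-path check is roughly the following (a hedged
      sketch; the DCACHE_OP_* flag is set by d_set_d_op() when the dentry is
      initialised):

        /* Before: load dentry->d_op, load the method pointer, then branch. */
        if (dentry->d_op && dentry->d_op->d_compare)
                use_custom_compare = true;

        /* After: a single flag test on d_flags, which is already hot. */
        if (dentry->d_flags & DCACHE_OP_COMPARE)
                use_custom_compare = true;
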
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
  20. 17 December 2010, 2 commits
  21. 23 November 2010, 1 commit
  22. 27 October 2010, 1 commit
  23. 26 October 2010, 1 commit
  24. 14 October 2010, 2 commits
    • hfsplus: create correct initial catalog entries for device files · 90e61690
      Committed by Christoph Hellwig
      Make sure the initial insertion of the catalog entry already contains the
      device number, by calling init_special_inode early and writing out the
      dev field of the on-disk permission structure.  The latter is facilitated
      by sharing the almost identical hfsplus_set_perms helpers between initial
      catalog entry creation and ->write_inode.
      
      Unless we crashed just after mknod, this bug was harmless, as the inode
      is marked dirty at the end of hfsplus_mknod and hfsplus_write_inode will
      update the catalog entry with the correct value.
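
      Roughly, the ordering change amounts to the following (a sketch based on
      this changelog; names and signatures are simplified assumptions, not the
      exact diff):

        /* In hfsplus_mknod(), sketch: initialise the device number before the
         * catalog entry is built, so the shared perms helper can write it into
         * the on-disk permission structure at creation time rather than only
         * later in ->write_inode. */
        if (S_ISBLK(mode) || S_ISCHR(mode))
                init_special_inode(inode, mode, rdev);

        res = hfsplus_create_cat(inode->i_ino, dir, &dentry->d_name, inode);
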
      Signed-off-by: Christoph Hellwig <hch@tuxera.com>
    • hfsplus: fix link corruption · f6089ff8
      Committed by Christoph Hellwig
      HFS+ implements hard links using indirect catalog entries that refer to a
      hidden directory.  The link target is cached in the dev field of the HFS+
      specific inode, which is also used for the device number of device files
      and, internally, for passing the nlink value of the indirect node from
      hfsplus_cat_write_inode to a helper function.  Now, if we happen to write
      out the indirect node while hfsplus_link is creating the catalog entry,
      we'll get a link pointing to the linkid of the current nlink value.  This
      can easily be reproduced by a large enough loop of local git-clone
      operations.
      
      Stop abusing the dev field in the HFS+ inode for short term storage by
      refactoring the way the permission structure in the catalog entry is
      set up, and rename the dev field to linkid to avoid any confusion.
      
      While we're at it, also prevent creating hard links to special files, as
      the HFS+ dev and linkid fields share the same space in the on-disk
      structure.
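
      The guard is essentially a mode check at the top of the link operation
      (a sketch based on this description rather than the exact diff):

        /* In hfsplus_link(), sketch: special files keep their device number in
         * the same on-disk slot that hard links use for the linkid, so refuse
         * to hard link anything that is not a regular file. */
        if (!S_ISREG(inode->i_mode))
                return -EPERM;
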
      Signed-off-by: Christoph Hellwig <hch@tuxera.com>
  25. 01 October 2010, 7 commits
  26. 17 May 2010, 1 commit
  27. 11 April 2008, 1 commit