1. 11 Sep 2011 (11 commits)
    • mtd: rename MTD_MODE_* to MTD_FILE_MODE_* · beb133fc
      Brian Norris authored
      These modes hold their state only for the life of their file descriptor,
      and they overlap functionality with the MTD_OPS_* modes. Particularly,
      MTD_MODE_RAW and MTD_OPS_RAW cover the same function: to provide raw
      (i.e., without ECC) access to the flash. In fact, although it may not be
      clear, MTD_MODE_RAW implied that operations should enable the
      MTD_OPS_RAW mode.
      
      Thus, we should be specific on what each mode means. This is a start,
      where MTD_FILE_MODE_* actually represents a "file mode," not necessarily
      a true global MTD mode.
      Signed-off-by: Brian Norris <computersforpeace@gmail.com>
      Signed-off-by: Artem Bityutskiy <artem.bityutskiy@intel.com>
    • mtd: rename MTD_OOB_* to MTD_OPS_* · 0612b9dd
      Brian Norris authored
      These modes are not necessarily for OOB only. Particularly, MTD_OOB_RAW
      affected operations on in-band page data as well. To clarify these
      options and to emphasize that their effect is applied per-operation, we
      change the primary prefix to MTD_OPS_.
      Signed-off-by: Brian Norris <computersforpeace@gmail.com>
      Signed-off-by: Artem Bityutskiy <artem.bityutskiy@intel.com>
    • mtd: support reading OOB without ECC · c46f6483
      Brian Norris authored
      This fixes issues with `nanddump -n' and the MEMREADOOB[64] ioctls on
      hardware that performs error correction when reading only OOB data. A
      driver for such hardware needs to know when we're doing a raw vs. a
      normal read, but mtd_do_read_oob does not pass such information to the
      lower layers (e.g., NAND). We should pass MTD_OOB_RAW or MTD_OOB_PLACE
      based on the MTD file mode.
      
      For now, most drivers can get away with just setting:
      
        chip->ecc.read_oob_raw = chip->ecc.read_oob
      
      This is done by default; but for systems that behave as described above,
      you must supply your own replacement function.
      
      This was tested with nandsim as well as on actual SLC NAND.
      Signed-off-by: Brian Norris <computersforpeace@gmail.com>
      Cc: Jim Quinlan <jim2101024@gmail.com>
      Signed-off-by: Artem Bityutskiy <artem.bityutskiy@intel.com>
    • mtd: support writing OOB without ECC · 9ce244b3
      Brian Norris authored
      This fixes issues with `nandwrite -n -o' and the MEMWRITEOOB[64] ioctls
      on hardware that writes ECC when writing OOB. The problem arises as
      follows: `nandwrite -n' can write page data to flash without applying
      ECC, but when used with the `-o' option, ECC is applied (incorrectly),
      contrary to the `--noecc' option.
      
      I found that this is the case because my hardware computes and writes
      ECC data to flash upon either OOB write or page write. Thus, to support
      a proper "no ECC" write, my driver must know when we're performing a raw
      OOB write vs. a normal ECC OOB write. However, MTD does not pass any raw
      mode information to the write_oob functions.  This patch addresses the
      problems by:
      
      1) Passing MTD_OOB_RAW down to lower layers, instead of just defaulting
         to MTD_OOB_PLACE
      2) Handling MTD_OOB_RAW within the NAND layer's `nand_do_write_oob'
      3) Adding a new (replaceable) function pointer in struct ecc_ctrl; this
         function should support writing OOB without ECC data. Current
         hardware often can use the same OOB write function when writing
         either with or without ECC
      
      This was tested with nandsim as well as on actual SLC NAND.
      Signed-off-by: Brian Norris <computersforpeace@gmail.com>
      Cc: Jim Quinlan <jim2101024@gmail.com>
      Signed-off-by: Artem Bityutskiy <artem.bityutskiy@intel.com>
    • mtd: do not assume oobsize is power of 2 · 305b93f1
      Brian Norris authored
      Previous generations of MTDs all used OOB sizes that were powers of 2
      (e.g., 64, 128). However, newer generations of flash, especially NAND,
      use irregular OOB sizes that are not powers of 2 (e.g., 218, 224, 448).
      This means we cannot use masks like "mtd->oobsize - 1" to assume that we
      will get a proper bitmask for OOB operations.
      
      These masks are really only intended to hide the "page" portion of the
      offset, leaving any OOB offset intact, so a masking with the writesize
      (which *is* always a power of 2) is valid and makes more sense.
      
      This has been tested for read/write of NAND devices (nanddump/nandwrite)
      using nandsim and actual NAND flash.
      Signed-off-by: Brian Norris <computersforpeace@gmail.com>
      Signed-off-by: Artem Bityutskiy <artem.bityutskiy@intel.com>
    • mtd: spelling fixes · 92394b5c
      Brian Norris authored
      Signed-off-by: Brian Norris <computersforpeace@gmail.com>
      Signed-off-by: Artem Bityutskiy <artem.bityutskiy@intel.com>
    • mtd: replace DEBUG() with pr_debug() · 289c0522
      Brian Norris authored
      Start moving away from the MTD_DEBUG_LEVEL messages. The dynamic
      debugging feature is a generic kernel feature that provides more
      flexibility.
      
      (See Documentation/dynamic-debug-howto.txt)
      
      Also fix some punctuation, indentation, and capitalization that went
      along with the affected lines.
      Signed-off-by: Brian Norris <computersforpeace@gmail.com>
      Signed-off-by: Artem Bityutskiy <artem.bityutskiy@intel.com>
    • mtd: edit NAND-related comment · c478d7e4
      Brian Norris authored
      This comment was unclear regarding which NAND functions do and do not
      support ECC on the spare area. This update should reflect the current
      status of the NAND system but can be updated if changes are made in
      the standard functions.
      Signed-off-by: Brian Norris <computersforpeace@gmail.com>
      Signed-off-by: Artem Bityutskiy <dedekind1@gmail.com>
    • mtd: nand: handle ECC errors in OOB · 041e4575
      Brian Norris authored
      While the standard NAND OOB functions do not do ECC on the spare area,
      it is possible for a driver to supply its own OOB ECC functions (e.g., HW
      ECC). nand_do_read_oob should act like nand_do_read_ops in checking the
      ECC stats and returning -EBADMSG or -EUCLEAN on uncorrectable errors or
      correctable bitflips, respectively. These error codes could be used in
      flash-based BBT code or in YAFFS, for example.
      
      Doing this, however, messes with the behavior of mtd_do_readoob. Now,
      mtd_do_readoob should check whether we had -EUCLEAN or -EBADMSG errors
      and discard those as "non-fatal" so that the ioctls can still succeed
      with (possibly uncorrected) data.
      Signed-off-by: Brian Norris <computersforpeace@gmail.com>
      Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
    • mtd: spelling, capitalization, uniformity · 7854d3f7
      Brian Norris authored
      Therefor -> Therefore
      [Intern], [Internal] -> [INTERN]
      [REPLACABLE] -> [REPLACEABLE]
      syndrom, syndom -> syndrome
      ecc -> ECC
      buswith -> buswidth
      endianess -> endianness
      dont -> don't
      occures -> occurs
      independend -> independent
      wihin -> within
      erease -> erase
      blockes -> blocks
      ...
      Signed-off-by: Brian Norris <computersforpeace@gmail.com>
      Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
    • mtd: mtdchar: add missing initializer on raw write · bf514081
      Peter Wippich authored
      On writes in MODE_RAW, the mtd_oob_ops struct is not sufficiently
      initialized, which may cause nandwrite to fail. With this patch
      it is possible to write raw NAND/OOB data without additional ECC
      (either for testing, or when some sectors need a different OOB
      layout, e.g., for a bootloader), like so:
      
        nandwrite -n -r -o /dev/mtd0 <myfile>
      Signed-off-by: Peter Wippich <pewi@gw-instruments.de>
      Cc: stable@kernel.org
      Tested-by: Ricard Wanderlof <ricardw@axis.com>
      Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
  2. 24 Jul 2011 (1 commit)
    • VFS: mount lock scalability for internal mounts · 423e0ab0
      Tim Chen authored
      A number of file systems that don't have a mount point (e.g., sockfs
      and pipefs) are not marked as long term. Therefore, in
      mntput_no_expire, all locks in the vfs_mount lock are taken instead
      of just the local cpu's lock to aggregate reference counts when we
      release a reference to file objects.  In fact, only the local lock
      needs to be taken to update ref counts, as these file systems are in
      no danger of going away until we are ready to unregister them.
      
      The attached patch marks file systems using kern_mount without a
      mount point as long term.  The contention on the vfs_mount lock
      is now eliminated.  Before un-registering such a file system,
      kern_unmount should be called to remove the long term flag and
      make the mount point ready to be freed.
      Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  3. 25 May 2011 (2 commits)
  4. 31 Mar 2011 (1 commit)
  5. 17 Jan 2011 (1 commit)
    • sanitize vfsmount refcounting changes · f03c6599
      Al Viro authored
      Instead of splitting refcount between (per-cpu) mnt_count
      and (SMP-only) mnt_longrefs, make all references contribute
      to mnt_count again and keep track of how many are longterm
      ones.
      
      Accounting rules for longterm count:
      	* 1 for each fs_struct.root.mnt
      	* 1 for each fs_struct.pwd.mnt
      	* 1 for having non-NULL ->mnt_ns
      	* decrement to 0 happens only under vfsmount lock exclusive
      
      That allows nice common case for mntput() - since we can't drop the
      final reference until after mnt_longterm has reached 0 due to the rules
      above, mntput() can grab vfsmount lock shared and check mnt_longterm.
      If it turns out to be non-zero (which is the common case), we know
      that this is not the final mntput() and can just blindly decrement
      percpu mnt_count.  Otherwise we grab vfsmount lock exclusive and
      do usual decrement-and-check of percpu mnt_count.
      
      For fs_struct.c we have mnt_make_longterm() and mnt_make_shortterm();
      namespace.c uses the latter in places where we don't already hold
      vfsmount lock exclusive and opencodes a few remaining spots where
      we need to manipulate mnt_longterm.
      
      Note that we mostly revert the code outside of fs/namespace.c back
      to what we used to have; in particular, normal code doesn't need
      to care about two kinds of references, etc.  And we get to keep
      the optimization Nick's variant had bought us...
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  6. 13 Jan 2011 (1 commit)
  7. 07 Jan 2011 (1 commit)
    • fs: scale mntget/mntput · b3e19d92
      Nick Piggin authored
      The problem that this patch aims to fix is vfsmount refcounting scalability.
      We need to take a reference on the vfsmount for every successful path lookup,
      and lookups often go to the same mount point.
      
      The fundamental difficulty is that a "simple" reference count can never be made
      scalable, because any time a reference is dropped, we must check whether that
      was the last reference. To do that requires communication with all other CPUs
      that may have taken a reference count.
      
      We can make refcounts more scalable in a couple of ways, involving keeping
      distributed counters, and checking for the global-zero condition less
      frequently.
      
      - check the global sum once every interval (this will delay zero detection
        for some interval, so it's probably a showstopper for vfsmounts).
      
      - keep a local count and only taking the global sum when local reaches 0 (this
        is difficult for vfsmounts, because we can't hold preempt off for the life of
        a reference, so a counter would need to be per-thread or tied strongly to a
        particular CPU which requires more locking).
      
      - keep a local difference of increments and decrements, which allows us to sum
        the total difference and hence find the refcount when summing all CPUs. Then,
        keep a single integer "long" refcount for slow and long lasting references,
        and only take the global sum of local counters when the long refcount is 0.
      
      This last scheme is what I implemented here. Attached mounts and process root
      and working directory references are "long" references, and everything else is
      a short reference.
      
      This allows scalable vfsmount references during path walking over mounted
      subtrees and unattached (lazy umounted) mounts with processes still running
      in them.
      
      This results in one fewer atomic op in the fastpath: mntget is now just a
      per-CPU inc, rather than an atomic inc; and mntput just requires a spinlock
      and non-atomic decrement in the common case. However code is otherwise bigger
      and heavier, so single threaded performance is basically a wash.
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
  8. 04 Dec 2010 (2 commits)
  9. 29 Oct 2010 (1 commit)
  10. 25 Oct 2010 (4 commits)
  11. 16 Sep 2010 (1 commit)
    • mtd: autoconvert trivial BKL users to private mutex · 5aa82940
      Arnd Bergmann authored
      All these files use the big kernel lock in a trivial
      way to serialize their private file operations,
      typically resulting from an earlier semi-automatic
      pushdown from VFS.
      
      None of these drivers appears to want to lock against
      other code, and they all use the BKL as the top-level
      lock in their file operations, meaning that there
      is no lock-order inversion problem.
      
      Consequently, we can remove the BKL completely,
      replacing it with a per-file mutex in every case.
      Using a scripted approach means we can avoid
      typos.
      
      file=$1
      name=$2
      if grep -q lock_kernel ${file} ; then
          if grep -q 'include.*linux.mutex.h' ${file} ; then
                  sed -i '/include.*<linux\/smp_lock.h>/d' ${file}
          else
                  sed -i 's/include.*<linux\/smp_lock.h>.*$/include <linux\/mutex.h>/g' ${file}
          fi
          sed -i ${file} \
              -e "/^#include.*linux.mutex.h/,$ {
                      1,/^\(static\|int\|long\)/ {
                           /^\(static\|int\|long\)/istatic DEFINE_MUTEX(${name}_mutex);
      
      } }"  \
          -e "s/\(un\)*lock_kernel\>[ ]*()/mutex_\1lock(\&${name}_mutex)/g" \
          -e '/[      ]*cycle_kernel_lock();/d'
      else
          sed -i -e '/include.*\<smp_lock.h\>/d' ${file}  \
                      -e '/cycle_kernel_lock()/d'
      fi
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Cc: David Woodhouse <David.Woodhouse@intel.com>
      Cc: linux-mtd@lists.infradead.org
  12. 09 Aug 2010 (2 commits)
  13. 04 Aug 2010 (1 commit)
  14. 02 Aug 2010 (1 commit)
  15. 22 May 2010 (1 commit)
    • drivers/mtd: Use memdup_user · df1f1d1c
      Julia Lawall authored
      Use memdup_user when user data is immediately copied into the
      allocated region.
      
      The semantic patch that makes this change is as follows:
      (http://coccinelle.lip6.fr/)
      
      // <smpl>
      @@
      expression from,to,size,flag;
      position p;
      identifier l1,l2;
      @@
      
      -  to = \(kmalloc@p\|kzalloc@p\)(size,flag);
      +  to = memdup_user(from,size);
         if (
      -      to==NULL
      +      IS_ERR(to)
                       || ...) {
         <+... when != goto l1;
      -  -ENOMEM
      +  PTR_ERR(to)
         ...+>
         }
      -  if (copy_from_user(to, from, size) != 0) {
      -    <+... when != goto l2;
      -    -EFAULT
      -    ...+>
      -  }
      // </smpl>
      Signed-off-by: Julia Lawall <julia@diku.dk>
      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
  16. 18 May 2010 (1 commit)
  17. 17 May 2010 (1 commit)
  18. 25 Feb 2010 (4 commits)
  19. 29 May 2009 (3 commits)