1. 26 May 2011, 1 commit
  2. 25 May 2011, 18 commits
    • A
      fs/ncpfs/inode.c: suppress used-uninitialised warning · 2a5cac17
      Authored by Andrew Morton
      We get this spurious warning:
      
        fs/ncpfs/inode.c: In function 'ncp_fill_super':
        fs/ncpfs/inode.c:451: warning: 'data.mounted_vol[1u]' may be used uninitialized in this function
        fs/ncpfs/inode.c:451: warning: 'data.mounted_vol[2u]' may be used uninitialized in this function
        fs/ncpfs/inode.c:451: warning: 'data.mounted_vol[3u]' may be used uninitialized in this function
        ...
      
      It's not a bug, but we can easily silence it with a memset().
      Reported-by: Harry Wei <jiaweiwei.xiyou@gmail.com>
      Cc: Petr Vandrovec <petr@vandrovec.name>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2a5cac17
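The fix pattern above, zeroing the struct before a partial initialization, can be sketched in plain C. The struct and field names below are illustrative stand-ins, not the real ncp_mount_data layout:

```c
#include <string.h>

/* Illustrative stand-in for the mount-data struct: only some fields
 * are assigned on certain code paths, which is what made gcc warn
 * that the rest "may be used uninitialized". */
struct mount_data {
    int version;
    char mounted_vol[16];
};

/* Zeroing the whole struct up front means a later partial
 * initialization can no longer leave indeterminate bytes. */
void fill_mount_data(struct mount_data *data)
{
    memset(data, 0, sizeof(*data));
    data->version = 3;
    data->mounted_vol[0] = 'v';  /* only byte 0 is set explicitly */
}
```

The memset costs a few cycles but makes the warning, spurious or not, go away without restructuring the code.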
    • A
      fscache: remove dead code under CONFIG_WORKQUEUE_DEBUGFS · e50c1f60
      Authored by Amerigo Wang
      There is no CONFIG_WORKQUEUE_DEBUGFS any more, so this code is dead.
      Signed-off-by: WANG Cong <amwang@redhat.com>
      Cc: David Howells <dhowells@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e50c1f60
    • S
      proc: allocate storage for numa_maps statistics once · 5b52fc89
      Authored by Stephen Wilson
      In show_numa_map() we collect statistics into a numa_maps structure.
      Since the number of NUMA nodes can be very large, this structure is not a
      candidate for stack allocation.
      
      Instead of going through a kmalloc()+kfree() cycle each time show_numa_map()
      is invoked, perform the allocation just once when /proc/pid/numa_maps is
      opened.
      
      Performing the allocation when numa_maps is opened, and thus before a
      reference to the target task's mm is taken, eliminates a potential
      stalemate with the oom-killer, as originally described by Hugh
      Dickins:
      
        ... imagine what happens if the system is out of memory, and the mm
        we're looking at is selected for killing by the OOM killer: while
        we wait in __get_free_page for more memory, no memory is freed
        from the selected mm because it cannot reach exit_mmap while we hold
        that reference.
      Signed-off-by: Stephen Wilson <wilsons@start.ca>
      Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5b52fc89
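The allocate-once pattern described above can be sketched in userspace C. The names below are illustrative and stand in for the kernel's seq_file open/show/release callbacks; they are not the actual proc API:

```c
#include <stdlib.h>

/* Stand-in for the large per-node statistics buffer that is too big
 * for the stack when there are many NUMA nodes. */
struct numa_stats { unsigned long pages[64]; };

struct session {
    struct numa_stats *stats;   /* allocated once, at open time */
};

/* open(): the single allocation, done before any mm reference
 * would be taken, so no allocation can stall while holding it. */
int session_open(struct session *s)
{
    s->stats = calloc(1, sizeof(*s->stats));
    return s->stats ? 0 : -1;
}

/* show(): no kmalloc()/kfree() per invocation, just reuse. */
void session_show(struct session *s, int node)
{
    s->stats->pages[node]++;
}

void session_release(struct session *s)
{
    free(s->stats);
    s->stats = NULL;
}
```

The key property is that the only allocation happens in open(), so show() can never block waiting for memory that the target mm would have to free.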
    • S
      proc: make struct proc_maps_private truly private · f2beb798
      Authored by Stephen Wilson
      Now that mm/mempolicy.c is no longer implementing /proc/pid/numa_maps
      there is no need to export struct proc_maps_private to the world.  Move it
      to fs/proc/internal.h instead.
      Signed-off-by: Stephen Wilson <wilsons@start.ca>
      Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f2beb798
    • S
      mm: proc: move show_numa_map() to fs/proc/task_mmu.c · f69ff943
      Authored by Stephen Wilson
      Moving show_numa_map() from mempolicy.c to task_mmu.c solves several
      issues.
      
        - Having the show() operation "miles away" from the corresponding
          seq_file iteration operations is a maintenance burden.
      
        - The need to export ad hoc info like struct proc_maps_private is
          eliminated.
      
        - The implementation of show_numa_map() can be improved in a simple
          manner by cooperating with the other seq_file operations (start,
          stop, etc) -- something that would be messy to do without this
          change.
      Signed-off-by: Stephen Wilson <wilsons@start.ca>
      Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f69ff943
    • E
      tmpfs: implement generic xattr support · b09e0fa4
      Authored by Eric Paris
      Implement generic xattrs for tmpfs filesystems.  The Fedora project, while
      trying to replace suid apps with file capabilities, realized that tmpfs,
      which is used on the build systems, does not support file capabilities and
      thus cannot be used to build packages which use file capabilities.  Xattrs
      are also needed for overlayfs.
      
      The xattr interface is a bit odd.  If a filesystem does not implement any
      {get,set,list}xattr functions the VFS will call into some random LSM hooks
      and the running LSM can then implement some method for handling xattrs.
      SELinux for example provides a method to support security.selinux but no
      other security.* xattrs.
      
      As it stands today, when one enables CONFIG_TMPFS_POSIX_ACL, tmpfs has
      xattr handler routines specifically to handle ACLs.  Because of this,
      tmpfs would lose the VFS/LSM helpers that support the running LSM.  To
      make up for that, tmpfs had stub functions that did nothing but call into
      the LSM hooks which implement the helpers.
      
      This new patch does not use the LSM fallback functions and instead just
      implements a native get/set/list xattr feature for the full security.* and
      trusted.* namespace like a normal filesystem.  This means that tmpfs can
      now support both security.selinux and security.capability, which was not
      previously possible.
      
      The basic implementation is that I attach a:
      
      struct shmem_xattr {
      	struct list_head list; /* anchored by shmem_inode_info->xattr_list */
      	char *name;
      	size_t size;
      	char value[0];
      };
      
      Into the struct shmem_inode_info for each xattr that is set.  This
      implementation could easily support the user.* namespace as well, except
      some care needs to be taken to prevent large amounts of unswappable memory
      being allocated for unprivileged users.
      
      [mszeredi@suse.cz: new config option, support trusted.*, support symlinks]
      Signed-off-by: Eric Paris <eparis@redhat.com>
      Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
      Acked-by: Serge Hallyn <serge.hallyn@ubuntu.com>
      Tested-by: Serge Hallyn <serge.hallyn@ubuntu.com>
      Cc: Kyle McMartin <kyle@mcmartin.ca>
      Acked-by: Hugh Dickins <hughd@google.com>
      Tested-by: Jordi Pujol <jordipujolp@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b09e0fa4
    • Y
      vmscan: change shrinker API by passing shrink_control struct · 1495f230
      Authored by Ying Han
      Change each shrinker's API by consolidating the existing parameters into a
      shrink_control struct.  This will simplify adding further features without
      touching every shrinker.
      
      [akpm@linux-foundation.org: fix build]
      [akpm@linux-foundation.org: fix warning]
      [kosaki.motohiro@jp.fujitsu.com: fix up new shrinker API]
      [akpm@linux-foundation.org: fix xfs warning]
      [akpm@linux-foundation.org: update gfs2]
      Signed-off-by: Ying Han <yinghan@google.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Minchan Kim <minchan.kim@gmail.com>
      Acked-by: Pavel Emelyanov <xemul@openvz.org>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Rik van Riel <riel@redhat.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Dave Hansen <dave@linux.vnet.ibm.com>
      Cc: Steven Whitehouse <swhiteho@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1495f230
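The consolidation technique in the two shrinker commits above is plain parameter bundling: once callers pass a single control struct, new fields can be added without touching every shrinker's signature. A minimal sketch, with names modeled on (but not identical to) the kernel's struct shrink_control:

```c
/* One struct carries what used to be positional parameters; adding a
 * field later does not change any shrinker's function signature. */
struct shrink_control {
    unsigned long nr_to_scan;   /* how many objects to try to free */
    unsigned int gfp_mask;      /* allocation context of the caller */
};

/* A toy shrinker: reports how many objects it would free from a
 * fixed-size cache, capped by what the cache actually holds. */
unsigned long cache_shrink(struct shrink_control *sc)
{
    unsigned long cached = 128;
    return sc->nr_to_scan < cached ? sc->nr_to_scan : cached;
}
```

With the old positional API, adding (say) a per-memcg target would have meant editing every shrinker in the tree; with the struct, only the struct definition and the shrinkers that care need to change.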
    • Y
      vmscan: change shrink_slab() interfaces by passing shrink_control · a09ed5e0
      Authored by Ying Han
      Consolidate the existing parameters to shrink_slab() into a new
      shrink_control struct.  This is needed later to pass the same struct to
      shrinkers.
      Signed-off-by: Ying Han <yinghan@google.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Minchan Kim <minchan.kim@gmail.com>
      Acked-by: Pavel Emelyanov <xemul@openvz.org>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Rik van Riel <riel@redhat.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Dave Hansen <dave@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a09ed5e0
    • P
      mm: Convert i_mmap_lock to a mutex · 3d48ae45
      Authored by Peter Zijlstra
      Straightforward conversion of i_mmap_lock to a mutex.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Hugh Dickins <hughd@google.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: David Miller <davem@davemloft.net>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Nick Piggin <npiggin@kernel.dk>
      Cc: Namhyung Kim <namhyung@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      3d48ae45
    • P
      mm: Remove i_mmap_lock lockbreak · 97a89413
      Authored by Peter Zijlstra
      Hugh says:
       "The only significant loser, I think, would be page reclaim (when
        concurrent with truncation): could spin for a long time waiting for
        the i_mmap_mutex it expects would soon be dropped? "
      
      Counter points:
       - cpu contention makes the spin stop (need_resched())
       - zap pages should be freeing pages at a higher rate than reclaim
         ever can
      
      I think the simplification of the truncate code is definitely worth it.
      
      Effectively reverts: 2aa15890 ("mm: prevent concurrent
      unmap_mapping_range() on the same inode") and takes out the code that
      caused its problem.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: David Miller <davem@davemloft.net>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Nick Piggin <npiggin@kernel.dk>
      Cc: Namhyung Kim <namhyung@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      97a89413
    • P
      mm: mmu_gather rework · d16dfc55
      Authored by Peter Zijlstra
      Rework the existing mmu_gather infrastructure.
      
      The direct purpose of these patches was to allow preemptible mmu_gather,
      but even without that I think these patches provide an improvement to the
      status quo.
      
      The first 9 patches rework the mmu_gather infrastructure.  For review
      purpose I've split them into generic and per-arch patches with the last of
      those a generic cleanup.
      
      The next patch provides generic RCU page-table freeing, and the followup
      is a patch converting s390 to use this.  I've also got 4 patches from
      DaveM lined up (not included in this series) that uses this to implement
      gup_fast() for sparc64.
      
      Then there is one patch that extends the generic mmu_gather batching.
      
      After that follow the mm preemptibility patches, these make part of the mm
      a lot more preemptible.  It converts i_mmap_lock and anon_vma->lock to
      mutexes which together with the mmu_gather rework makes mmu_gather
      preemptible as well.
      
      Making i_mmap_lock a mutex also enables a clean-up of the truncate code.
      
      This also allows for preemptible mmu_notifiers, something that XPMEM I
      think wants.
      
      Furthermore, it removes the new and universally detested unmap_mutex.
      
      This patch:
      
      Remove the first obstacle towards a fully preemptible mmu_gather.
      
      The current scheme assumes mmu_gather is always done with preemption
      disabled and uses per-cpu storage for the page batches.  Change this to
      try and allocate a page for batching and in case of failure, use a small
      on-stack array to make some progress.
      
      Preemptible mmu_gather is desired in general and usable once i_mmap_lock
      becomes a mutex.  Doing it before the mutex conversion saves us from
      having to rework the code by moving the mmu_gather bits inside the
      pte_lock.
      
      Also avoid flushing the tlb batches from under the pte lock, this is
      useful even without the i_mmap_lock conversion as it significantly reduces
      pte lock hold times.
      
      [akpm@linux-foundation.org: fix comment typo]
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: David Miller <davem@davemloft.net>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Tony Luck <tony.luck@intel.com>
      Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Acked-by: Hugh Dickins <hughd@google.com>
      Acked-by: Mel Gorman <mel@csn.ul.ie>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Nick Piggin <npiggin@kernel.dk>
      Cc: Namhyung Kim <namhyung@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d16dfc55
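The "try to allocate a page, fall back to a small on-stack array" scheme described in this patch can be sketched in userspace C. This is an illustrative model, not the kernel's actual mmu_gather code; `simulate_alloc_failure` exists only so the fallback path can be exercised:

```c
#include <stdlib.h>

#define BIG_BATCH   512   /* batch size when allocation succeeds */
#define LOCAL_BATCH 8     /* small embedded fallback */

struct gather {
    void **pages;                    /* heap batch, or local_pages */
    unsigned int max;
    void *local_pages[LOCAL_BATCH];  /* fallback storage */
};

/* Try to get a full-size batch buffer; on failure, fall back to the
 * small embedded array so some progress is still possible. */
void gather_init(struct gather *g, int simulate_alloc_failure)
{
    void **heap = simulate_alloc_failure
                      ? NULL
                      : malloc(BIG_BATCH * sizeof(void *));
    if (heap) {
        g->pages = heap;
        g->max = BIG_BATCH;
    } else {
        g->pages = g->local_pages;   /* degraded but functional */
        g->max = LOCAL_BATCH;
    }
}

void gather_fini(struct gather *g)
{
    if (g->pages != g->local_pages)
        free(g->pages);
}
```

The point of the design is that an allocation failure degrades throughput (smaller batches, more flushes) rather than correctness, which is what lets the per-cpu-storage assumption, and hence the preemption-disabled requirement, be dropped.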
    • M
      mm: make expand_downwards() symmetrical with expand_upwards() · d05f3169
      Authored by Michal Hocko
      Currently we have expand_upwards exported while expand_downwards is
      accessible only via expand_stack or expand_stack_downwards.
      
      check_stack_guard_page is a nice example of the asymmetry.  It uses
      expand_stack for VM_GROWSDOWN while expand_upwards is called for
      VM_GROWSUP case.
      
      Let's clean this up by exporting both functions and make those names
      consistent.  Let's use expand_{upwards,downwards} because expanding
      doesn't always involve stack manipulation (an example is
      ia64_do_page_fault which uses expand_upwards for registers backing store
      expansion).  expand_downwards has to be defined for both
      CONFIG_STACK_GROWS{UP,DOWN} because get_arg_page calls the downwards
      version in the early process initialization phase for growsup
      configuration.
      Signed-off-by: Michal Hocko <mhocko@suse.cz>
      Acked-by: Hugh Dickins <hughd@google.com>
      Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d05f3169
    • A
      fs/9p: Don't clunk dentry fid when we fail to get a writeback inode · 398c4f0e
      Authored by Aneesh Kumar K.V
      The dentry fid gets clunked via the dput.
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
      398c4f0e
    • E
      9p: remove experimental tag from tested configurations · 87211cd8
      Authored by Eric Van Hensbergen
      The 9p client is currently undergoing regular regression and
      stress testing as a by-product of the virtfs work.  I think it's
      finally time to take the experimental tags off the well-tested
      code paths.
      Signed-off-by: Eric Van Hensbergen <ericvh@gmail.com>
      87211cd8
    • E
    • S
      ceph: fix cap flush race reentrancy · db354052
      Authored by Sage Weil
      In e9964c10 we changed cap flushing to do a delicate dance because some
      inodes on the cap_dirty list could be in a migrating state (got EXPORT but
      not IMPORT) in which we couldn't actually flush and move from
      dirty->flushing, breaking the while (!empty) { process first } loop
      structure.  It worked for a single sync thread, but was not reentrant and
      triggered infinite loops when multiple syncers came along.
      
      Instead, move inodes with dirty to a separate cap_dirty_migrating list
      when in the limbo export-but-no-import state, allowing us to go back to
      the simple loop structure (which was reentrant).  This is cleaner and more
      robust.
      
      Audited the cap_dirty users and this looks fine:
      list_empty(&ci->i_dirty_item) is still a reliable indicator of whether we
      have dirty caps (which list we're on is irrelevant) and list_del_init()
      calls still do the right thing.
      Signed-off-by: Sage Weil <sage@newdream.net>
      db354052
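The separate-list fix described above can be sketched in userspace C. Plain arrays stand in for the kernel's linked lists and all names are illustrative: entries that cannot be flushed yet are parked on a "migrating" list, so the simple "while not empty: process the head" loop always shrinks the dirty list and stays reentrant.

```c
#define MAXN 16

struct dirty_set {
    int dirty[MAXN];     int ndirty;      /* inodes with dirty caps */
    int migrating[MAXN]; int nmigrating;  /* export-but-no-import limbo */
    int flushed[MAXN];   int nflushed;
};

/* Example predicate: pretend even inode numbers are flushable now. */
int flushable_even(int ino)
{
    return (ino % 2) == 0;
}

/* Every iteration removes the head, so the loop terminates even when
 * some entries cannot be flushed: they are parked, not re-queued. */
void flush_all(struct dirty_set *s, int (*can_flush)(int))
{
    while (s->ndirty > 0) {
        int ino = s->dirty[--s->ndirty];
        if (can_flush(ino))
            s->flushed[s->nflushed++] = ino;
        else
            s->migrating[s->nmigrating++] = ino;
    }
}
```

The broken variant, by contrast, left unflushable entries on the dirty list, so two concurrent syncers could keep finding the same stuck head forever.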
    • S
      ceph: avoid inode lookup on nfs fh reconnect · 45e3d3ee
      Authored by Sage Weil
      If we get the inode from the MDS, we have a reference in req; don't do a
      fresh lookup.
      Signed-off-by: Sage Weil <sage@newdream.net>
      45e3d3ee
    • S
      ceph: use LOOKUPINO to make unconnected nfs fh more reliable · 3c454cf2
      Authored by Sage Weil
      If we are unable to locate an inode by ino, ask the MDS using the new
      LOOKUPINO command.
      Signed-off-by: Sage Weil <sage@newdream.net>
      3c454cf2
  3. 24 May 2011, 2 commits
  4. 23 May 2011, 6 commits
    • T
      block: move bd_set_size() above rescan_partitions() in __blkdev_get() · ff2a9941
      Authored by Tejun Heo
      02e35228 (block: rescan partitions on invalidated devices on
      -ENOMEDIA too) relocated partition rescan above explicit bd_set_size()
      to simplify condition check.  As rescan_partitions() does its own bdev
      size setting, this doesn't break anything; however,
      rescan_partitions() prints out the following messages when adjusting
      bdev size, which can be confusing.
      
        sda: detected capacity change from 0 to 146815737856
        sdb: detected capacity change from 0 to 146815737856
      
      This patch restores the original order and removes the warning
      messages.
      
      stable: Please apply together with 02e35228 (block: rescan
              partitions on invalidated devices on -ENOMEDIA too).
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-by: Tony Luck <tony.luck@gmail.com>
      Tested-by: Tony Luck <tony.luck@gmail.com>
      Cc: stable@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ff2a9941
    • D
      dlm: make plock operation killable · 901025d2
      Authored by David Teigland
      Allow processes blocked on plock requests to be interrupted
      when they are killed.  This leaves the problem of cleaning
      up the lock state in userspace.  This has three parts:
      
      1. Add a flag to unlock operations sent to userspace
      indicating the file is being closed.  Userspace will
      then look for and clear any waiting plock operations that
      were abandoned by an interrupted process.
      
      2. Queue an unlock-close operation (like in 1) to clean up
      userspace from an interrupted plock request.  This is needed
      because the vfs will not send a cleanup-unlock if it sees no
      locks on the file, which it won't if the interrupted operation
      was the only one.
      
      3. Do not use replies from userspace for unlock-close operations
      because they are unnecessary (they are just cleaning up for the
      process which did not make an unlock call).  This also simplifies
      the new unlock-close generated from point 2.
      Signed-off-by: David Teigland <teigland@redhat.com>
      901025d2
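The three points above can be condensed into a small C sketch. The struct, flag, and field names here are invented for the example and are not dlm's actual kernel/userspace interface: an unlock queued because the file is closing carries a "close" flag so userspace can clean up abandoned operations, and no reply is expected for it.

```c
#define OP_UNLOCK 1
#define FL_CLOSE  0x1   /* file is being closed; userspace cleans up */

struct plock_op {
    int optype;
    int flags;
    int wait_for_reply;  /* 0 means fire-and-forget cleanup */
};

/* Build the unlock-close operation from points 1-3 above. */
void queue_unlock_close(struct plock_op *op)
{
    op->optype = OP_UNLOCK;
    op->flags |= FL_CLOSE;    /* point 1: signal cleanup to userspace */
    op->wait_for_reply = 0;   /* point 3: a reply would be useless,
                               * the process is already gone */
}
```

Not waiting for a reply is what keeps point 2 simple: the kernel can queue the cleanup operation and move on, even though the interrupted process will never collect a result.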
    • T
      timerfd: Manage cancelable timers in timerfd · 9ec26907
      Authored by Thomas Gleixner
      Peter is concerned about the extra scan of CLOCK_REALTIME_COS in the
      timer interrupt.  Yes, I did not think about it, because the solution
      was so elegant.  I didn't like the extra list in timerfd when it was
      proposed some time ago, but with an RCU-based list the list walk is
      less horrible than the original global lock, which was held over the
      list iteration.
      Requested-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Peter Zijlstra <peterz@infradead.org>
      9ec26907
    • T
      block: move bd_set_size() above rescan_partitions() in __blkdev_get() · 7e69723f
      Authored by Tejun Heo
      02e35228 (block: rescan partitions on invalidated devices on
      -ENOMEDIA too) relocated partition rescan above explicit bd_set_size()
      to simplify condition check.  As rescan_partitions() does its own bdev
      size setting, this doesn't break anything; however,
      rescan_partitions() prints out the following messages when adjusting
      bdev size, which can be confusing.
      
        sda: detected capacity change from 0 to 146815737856
        sdb: detected capacity change from 0 to 146815737856
      
      This patch restores the original order and removes the warning
      messages.
      
      stable: Please apply together with 02e35228 (block: rescan
              partitions on invalidated devices on -ENOMEDIA too).
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-by: Tony Luck <tony.luck@gmail.com>
      Tested-by: Tony Luck <tony.luck@gmail.com>
      Cc: stable@kernel.org
      
      Stable note: 2.6.39 only.
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
      7e69723f
    • A
      UBIFS: switch to dynamic printks · 56e46742
      Authored by Artem Bityutskiy
      Switch to debugging using dynamic printk (pr_debug()).  There is no good
      reason to carry custom debugging prints when there is such a cool and
      powerful generic dynamic printk infrastructure; see
      Documentation/dynamic-debug-howto.txt.  With dynamic printks we can switch
      on/off individual prints: per-file, per-function and per-format messages.
      This means that instead of doing the old-fashioned
      
      echo 1 > /sys/module/ubifs/parameters/debug_msgs
      
      to enable general messages, we can do:
      
      echo 'format "UBIFS DBG gen" +ptlf' > control
      
      to enable general messages and additionally ask the dynamic printk
      infrastructure to print the process ID, line number and function name.  So
      there is no reason to keep UBIFS-specific crud when a more powerful
      generic facility exists.
      Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
      56e46742
    • H
      fs: add missing prefetch.h include · 9ce6e0be
      Authored by Heiko Carstens
      Fixes this build error on s390 and probably other archs as well:
      
        fs/inode.c: In function 'new_inode':
        fs/inode.c:894:2: error: implicit declaration of function 'spin_lock_prefetch'
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      [ Happens on architectures that don't define their own prefetch
        functions in <asm/processor.h>, and instead rely on the default
        ones in <linux/prefetch.h>   - Linus]
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      9ce6e0be
  5. 22 May 2011, 1 commit
  6. 21 May 2011, 5 commits
  7. 20 May 2011, 7 commits