1. 24 Sep 2010: 1 commit
  2. 23 Sep 2010: 1 commit
    • nfs: introduce mount option '-olocal_lock' to make locks local · 5eebde23
      Authored by Suresh Jayaraman
      Since 2.6.12, NFS clients have supported flock locks by emulating them with
      fcntl byte-range locks. Because of this, some Windows applications that appear
      to take both flock (a share-mode lock mapped to flock by Samba) and fcntl
      locks sequentially on the same file cannot lock, as they falsely assume the
      file is already locked. The problem was reported on a setup with Windows
      clients accessing Excel files on a Samba-exported share that is itself an NFS
      mount from a NetApp filer.
      
      Older NFS clients (< 2.6.12) did not see this problem as flock locks were
      considered local. To support legacy flock behavior, this patch adds a mount
      option "-olocal_lock=" which can take the following values:
      
         'none'  		- Neither flock locks nor POSIX locks are local
         'flock' 		- flock locks are local
         'posix' 		- fcntl/POSIX locks are local
         'all'		- Both flock locks and POSIX locks are local
      
      Testing:
      
         - This patch was tested by mounting with -olocal_lock set to each value and
           observing the NLM calls in captured network traffic (a userspace sketch
           of the flock() and fcntl() calls exercised follows this list).
      
           'none'  - NLM calls were seen for both flock() and fcntl(); the flock
                     lock was granted, the fcntl lock was denied
           'flock' - no NLM calls for flock(); an NLM call was seen for fcntl() and
                     was granted
           'posix' - an NLM call was seen for flock() and was granted; no NLM call
                     for fcntl()
           'all'   - no NLM calls were seen for either flock() or fcntl()
      
         - No bugs were seen during general NFSv4 locking/unlocking or during NFSv4
           reboot recovery.
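
         For reference, a minimal userspace sketch of the two lock types exercised
         above; the file path is a placeholder and error handling is kept minimal:

            #include <fcntl.h>
            #include <sys/file.h>
            #include <unistd.h>

            int main(void)
            {
                struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET };
                int fd = open("/mnt/nfs/testfile", O_RDWR);  /* placeholder path on the NFS mount */

                if (fd < 0)
                    return 1;

                flock(fd, LOCK_EX);       /* BSD flock lock (emulated via byte-range locks since 2.6.12) */
                fcntl(fd, F_SETLK, &fl);  /* POSIX byte-range lock covering the whole file */

                flock(fd, LOCK_UN);
                close(fd);
                return 0;
            }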
      
      Cc: Neil Brown <neilb@suse.de>
      Signed-off-by: Suresh Jayaraman <sjayaraman@suse.de>
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      5eebde23
  3. 22 Sep 2010: 1 commit
  4. 18 Sep 2010: 4 commits
  5. 17 Sep 2010: 5 commits
  6. 29 Aug 2010: 1 commit
  7. 28 Aug 2010: 1 commit
  8. 27 Aug 2010: 1 commit
  9. 25 Aug 2010: 2 commits
  10. 24 Aug 2010: 2 commits
  11. 23 Aug 2010: 5 commits
  12. 22 Aug 2010: 1 commit
    • workqueue: Add basic tracepoints to track workqueue execution · e36c886a
      Authored by Arjan van de Ven
      With the introduction of the new unified workqueue thread pools, we lost one
      feature: it is no longer possible to know which worker is causing the CPU to
      wake out of idle. The result is that PowerTOP now reports a lot of
      "kworker/a:b" entries instead of more readable results.
      
      This patch adds a pair of tracepoints to the new workqueue code,
      similar in style to the timer/hrtimer tracepoints.
      
      With this pair of tracepoints, the next PowerTOP can correctly
      report which work item caused the wakeup (and how long it took):
      
      Interrupt (43)            i915      time   3.51ms    wakeups 141
      Work      ieee80211_iface_work      time   0.81ms    wakeups  29
      Work              do_dbs_timer      time   0.55ms    wakeups  24
      Process                   Xorg      time  21.36ms    wakeups   4
      Timer    sched_rt_period_timer      time   0.01ms    wakeups   1
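
      As a rough illustration of the style (a sketch only; the exact event names and
      fields added by the patch may differ), such a tracepoint pair is typically
      declared with TRACE_EVENT, in the manner of the timer/hrtimer trace headers,
      and called from the work-item dispatch path:

         /* Sketch of one event of the pair; a matching workqueue_execute_end
          * event would mirror it. Names and fields here are illustrative. */
         TRACE_EVENT(workqueue_execute_start,

             TP_PROTO(struct work_struct *work),

             TP_ARGS(work),

             TP_STRUCT__entry(
                 __field(void *, work)
                 __field(void *, function)
             ),

             TP_fast_assign(
                 __entry->work = work;
                 __entry->function = work->func;
             ),

             TP_printk("work struct %p: function %pf", __entry->work, __entry->function)
         );

      The worker would then bracket the work callback with
      trace_workqueue_execute_start(work) and trace_workqueue_execute_end(work),
      letting PowerTOP attribute the wakeup to the work function.
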
      Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e36c886a
  13. 21 Aug 2010: 5 commits
  14. 20 Aug 2010: 1 commit
  15. 19 Aug 2010: 4 commits
  16. 18 Aug 2010: 5 commits
    • ALSA: emu10k1 - delay the PCM interrupts (add pcm_irq_delay parameter) · 56385a12
      Authored by Jaroslav Kysela
      With some hardware combinations, the PCM interrupts from the emu10k1 chip are
      acknowledged before the period boundary. The mid-level PCM code gets confused
      and the playback stream is interrupted.
      
      Shifting the interrupt processing by 2 samples appears to be enough to fix
      this issue, and this default value does not harm other, unaffected hardware.
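
      A generic sketch of how such an option is commonly wired up as a module
      parameter (the exact name, type, default, and per-card handling in the emu10k1
      driver may differ):

         #include <linux/moduleparam.h>

         /* Illustrative only: real ALSA drivers often use per-card parameter arrays. */
         static int pcm_irq_delay = 2;  /* delay in samples; 2 is the default described above */
         module_param(pcm_irq_delay, int, 0444);
         MODULE_PARM_DESC(pcm_irq_delay,
                          "Delay PCM interrupt acknowledgement by this many samples");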
      
      More information: Kernel bugzilla bug#16300
      
      [A compile warning fixed by tiwai]
      Signed-off-by: Jaroslav Kysela <perex@perex.cz>
      Cc: <stable@kernel.org>
      Signed-off-by: Takashi Iwai <tiwai@suse.de>
      56385a12
    • fs: scale files_lock · 6416ccb7
      Authored by Nick Piggin
      Improve scalability of files_lock by adding per-cpu, per-sb files lists,
      protected with an lglock. The lglock provides fast access to the per-cpu lists
      to add and remove files. It also provides a snapshot of all the per-cpu lists
      (although this is very slow).
      
      One difficulty with this approach is that a file can be removed from the list
      by another CPU. We must track which per-cpu list the file is on with a new
      variable in the file struct (packed into a hole on 64-bit archs). Scalability
      could suffer if files are frequently removed from a different CPU's list.
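
      A sketch of the idea (identifiers such as f_sb_list_cpu and files_lglock
      follow the description above but are not necessarily the exact names used in
      the patch):

         /* Add: take only this CPU's lock and remember which list the file is on. */
         static void file_sb_list_add(struct file *file, struct super_block *sb)
         {
             lg_local_lock(files_lglock);
             file->f_sb_list_cpu = smp_processor_id();  /* new field, packed into a hole */
             list_add(&file->f_u.fu_list,
                      per_cpu_ptr(sb->s_files, file->f_sb_list_cpu));
             lg_local_unlock(files_lglock);
         }

         /* Remove: may run on another CPU, so lock the list the file was added to. */
         static void file_sb_list_del(struct file *file)
         {
             if (!list_empty(&file->f_u.fu_list)) {
                 lg_local_lock_cpu(files_lglock, file->f_sb_list_cpu);
                 list_del_init(&file->f_u.fu_list);
                 lg_local_unlock_cpu(files_lglock, file->f_sb_list_cpu);
             }
         }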
      
      However, loads with frequent removal of files imply a short interval between
      adding and removing the files, and the scheduler tries to avoid moving
      processes too far away. Also, even in the case of cross-CPU removal, the
      hardware has much more opportunity to parallelise cacheline transfers with N
      cachelines than with 1.
      
      A worst-case test of 1 CPU allocating files that are subsequently freed by N
      CPUs degenerates to contending on a single lock, which is no worse than
      before. When more than one CPU is allocating files, even if they are always
      freed by different CPUs, there will be more parallelism than in the
      single-lock case.
      
      Testing results:
      
      On a 2-socket, 8-core Opteron, I measured the number of times the lock is
      taken to remove a file, the number of times it is removed by the same CPU
      that added it, and the number of times it is removed by the same node that
      added it.
      
      Booting:    locks=  25049 cpu-hits=  23174 (92.5%) node-hits=  23945 (95.6%)
      kbuild -j16 locks=2281913 cpu-hits=2208126 (96.8%) node-hits=2252674 (98.7%)
      dbench 64   locks=4306582 cpu-hits=4287247 (99.6%) node-hits=4299527 (99.8%)
      
      So a file is removed on the same CPU it was added on over 90% of the time,
      and within the same node over 95% of the time.
      
      Tim Chen ran some numbers for a 64 thread Nehalem system performing a compile.
      
                      throughput
      2.6.34-rc2      24.5
      +patch          24.9
      
                      us      sys     idle    IO wait (in %)
      2.6.34-rc2      51.25   28.25   17.25   3.25
      +patch          53.75   18.5    19      8.75
      
      So significantly less CPU time spent in kernel code, higher idle time and
      slightly higher throughput.
      
      The single-threaded performance difference was within the noise of
      microbenchmarks. That is not to say no penalty exists: the code is larger and
      more memory accesses are required, so it will be slightly slower.
      
      Cc: linux-kernel@vger.kernel.org
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      6416ccb7
    • lglock: introduce special lglock and brlock spin locks · 2dc91abe
      Authored by Nick Piggin
      This patch introduces "local-global" locks (lglocks). These can be used to:
      
      - Provide fast exclusive access to per-CPU data, with exclusive access to
        another CPU's data allowed but possibly subject to contention, and to provide
        very slow exclusive access to all per-CPU data.
       - Provide very fast and scalable read serialisation, together with very slow
         exclusive serialisation of data (not necessarily per-CPU data).
      
      Brlocks are also implemented as a shorthand for the latter use case.
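
      A usage sketch, assuming a macro-style interface keyed by the lock's name (the
      exact API introduced by the patch may differ slightly):

         DEFINE_LGLOCK(my_lglock);        /* illustrative lock name */

         static void update_local(void)
         {
             lg_local_lock(my_lglock);    /* fast: serialises only this CPU's data */
             /* ... modify this CPU's per-CPU data ... */
             lg_local_unlock(my_lglock);
         }

         static void walk_all_cpus(void)
         {
             lg_global_lock(my_lglock);   /* very slow: excludes every CPU */
             /* ... iterate over all per-CPU data ... */
             lg_global_unlock(my_lglock);
         }

         DEFINE_BRLOCK(my_brlock);        /* brlock shorthand for the read-mostly case */

         static void reader(void)
         {
             br_read_lock(my_brlock);     /* very fast, scalable read side */
             /* ... read the protected data ... */
             br_read_unlock(my_brlock);
         }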
      
      Thanks to Paul for local/global naming convention.
      
      Cc: linux-kernel@vger.kernel.org
      Cc: Al Viro <viro@ZenIV.linux.org.uk>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      2dc91abe
    • tty: fix fu_list abuse · d996b62a
      Authored by Nick Piggin
      tty code abuses fu_list, which causes a bug in remount,ro handling.
      
      If a tty device node is opened on a filesystem and the last link to the inode
      is then removed, the filesystem will still be allowed to be remounted
      read-only. This is because fs_may_remount_ro does not find the zero-link tty
      inode on the file sb list (the tty code incorrectly removed it to use the list
      field for its own purposes). This can result in a filesystem with errors after
      it is marked "clean".
      
      Taking the idea from Christoph's initial patch, allocate a tty private struct
      at file->private_data and put the required list fields in there, linking file
      and tty. This makes tty nodes behave the same way as other device nodes,
      avoids meddling with the VFS, and fixes this bug.
      
      The error handling is not trivial in the tty code, so for this bugfix, I take
      the simple approach of using __GFP_NOFAIL and don't worry about memory errors.
      This is not a problem because our allocator doesn't fail small allocs as a rule
      anyway. So proper error handling is left as an exercise for tty hackers.
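
      A sketch of that private struct and its allocation at open time (identifiers
      are illustrative and may not match the patch exactly; tty_files_lock is the
      spinlock added in the files_lock cleanup below):

         struct tty_file_private {
             struct tty_struct *tty;
             struct file *file;
             struct list_head list;      /* linked on the tty's own file list */
         };

         static void tty_add_file(struct tty_struct *tty, struct file *file)
         {
             struct tty_file_private *priv;

             /* Small allocation; __GFP_NOFAIL keeps the bugfix free of error paths. */
             priv = kmalloc(sizeof(*priv), GFP_KERNEL | __GFP_NOFAIL);
             priv->tty = tty;
             priv->file = file;
             file->private_data = priv;

             spin_lock(&tty_files_lock);
             list_add(&priv->list, &tty->tty_files);
             spin_unlock(&tty_files_lock);
         }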
      
      [ Arguably the filesystem's device inode would ideally be divorced from the
      driver's pseudo inode when it is opened, but in practice it's not clear
      whether that will ever be worth implementing. ]
      
      Cc: linux-kernel@vger.kernel.org
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
      Cc: Greg Kroah-Hartman <gregkh@suse.de>
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      d996b62a
    • fs: cleanup files_lock locking · ee2ffa0d
      Authored by Nick Piggin
      Lock tty_files with a new spinlock, tty_files_lock; provide helpers to
      manipulate the per-sb files list; unexport the files_lock spinlock.
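
      A sketch of what the new lock and helpers might look like at this stage,
      before the later per-CPU/lglock conversion (signatures are illustrative):

         /* New spinlock protecting the tty layer's own list of open files. */
         DEFINE_SPINLOCK(tty_files_lock);

         /* files_lock stays private to the VFS; callers go through helpers. */
         static DEFINE_SPINLOCK(files_lock);

         void file_sb_list_add(struct file *file, struct super_block *sb)
         {
             spin_lock(&files_lock);
             list_add(&file->f_u.fu_list, &sb->s_files);
             spin_unlock(&files_lock);
         }

         void file_sb_list_del(struct file *file)
         {
             if (!list_empty(&file->f_u.fu_list)) {
                 spin_lock(&files_lock);
                 list_del_init(&file->f_u.fu_list);
                 spin_unlock(&files_lock);
             }
         }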
      
      Cc: linux-kernel@vger.kernel.org
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
      Acked-by: Andi Kleen <ak@linux.intel.com>
      Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      ee2ffa0d