  1. 11 May 2007, 3 commits
  2. 09 May 2007, 3 commits
    • Introduce a handy list_first_entry macro · b5e61818
      Authored by Pavel Emelianov
      There are many places in the kernel where a construction like
      
         foo = list_entry(head->next, struct foo_struct, list);
      
      is used.
      The code might look more descriptive and neat if it used the macro
      
         list_first_entry(head, type, member) \
                   list_entry((head)->next, type, member)
      
      Here is the macro itself along with examples of its usage in the generic
      code.  If it turns out to be useful, I can prepare a set of patches to
      inject it into arch-specific code, drivers, networking, etc.
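      
      For illustration, a hypothetical before/after (struct foo_struct and the
      foo variable are made-up names):
      
         /* Hypothetical example; the struct is made up for illustration. */
         struct foo_struct {
                 int data;
                 struct list_head list;
         };
      
         struct foo_struct *foo;
      
         /* before */
         foo = list_entry(head->next, struct foo_struct, list);
      
         /* after */
         foo = list_first_entry(head, struct foo_struct, list);
      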
      Signed-off-by: Pavel Emelianov <xemul@openvz.org>
      Signed-off-by: Kirill Korotaev <dev@openvz.org>
      Cc: Randy Dunlap <randy.dunlap@oracle.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Zach Brown <zach.brown@oracle.com>
      Cc: Davide Libenzi <davidel@xmailserver.org>
      Cc: John McCutchan <ttb@tentacle.dhs.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: john stultz <johnstul@us.ibm.com>
      Cc: Ram Pai <linuxram@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b5e61818
    • header cleaning: don't include smp_lock.h when not used · e63340ae
      Authored by Randy Dunlap
      Remove includes of <linux/smp_lock.h> where it is not used/needed.
      Suggested by Al Viro.
      
      Builds cleanly on x86_64, i386, alpha, ia64, powerpc, sparc,
      sparc64, and arm (all 59 defconfigs).
      Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e63340ae
    • epoll: optimizations and cleanups · 6192bd53
      Authored by Davide Libenzi
      Epoll is doing multiple passes over the ready set at the moment, because of
      the constraints on the f_op->poll() call.  Looking at the code again, I
      noticed that we already hold the epoll semaphore in read, and this
      (together with other locking conditions that hold while doing an
      epoll_wait()) can lead to a smarter way [1] to "ship" events to userspace
      (in a single pass).
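      
      As an illustration only (a simplified sketch, not the actual fs/eventpoll.c
      code), the single-pass idea amounts to stealing the whole ready list under
      the spinlock and then walking the private copy outside of it:
      
         LIST_HEAD(txlist);
         struct epitem *epi, *tmp;
         unsigned long flags;
      
         spin_lock_irqsave(&ep->lock, flags);
         /* Steal the whole ready list in one shot. */
         list_splice_init(&ep->rdllist, &txlist);
         spin_unlock_irqrestore(&ep->lock, flags);
      
         /* Walk the private list and ship events to userspace in one pass. */
         list_for_each_entry_safe(epi, tmp, &txlist, rdllink) {
                 /* revents = epi->ffd.file->f_op->poll(...); copy to user ... */
         }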
      
      This is a stress application that can be used to test the new code.  It
      spawns multiple threads and calls epoll_wait() and epoll_ctl() from many
      threads.  Stress tested on my dual Opteron 254 without any problems.
      
      http://www.xmailserver.org/totalmess.c
      
      This is not a benchmark, just something that tries to stress and exploit
      possible problems with the new code.
      Also, I made a stupid micro-benchmark:
      
      http://www.xmailserver.org/epwbench.c
      
      [1] Considering that epoll must be thread-safe, there are five ways we can
          be hit during an epoll_wait() transfer loop (ep_send_events()):
      
          1) The epoll fd going away and calling ep_free
             This just can't happen, since we did an fget() in sys_epoll_wait
      
          2) An epoll_ctl(EPOLL_CTL_DEL)
             This can't happen because epoll_ctl() gets ep->sem in write, and
             we're holding it in read during ep_send_events()
      
          3) An fd stored inside the epoll fd going away
             This can't happen because in eventpoll_release_file() we get
             ep->sem in write, and we're holding it in read during
             ep_send_events()
      
          4) Another epoll_wait() happening on another thread
             They both can be inside ep_send_events() at the same time, we get
             (splice) the ready-list under the spinlock, so each one will get
             its own ready list. Note that an fd cannot be at the same time
             inside more than one ready list, because ep_poll_callback() will
             not re-queue it if it sees it already linked:
      
             if (ep_is_linked(&epi->rdllink))
                      goto is_linked;
      
             Another case that can happen is two concurrent epoll_wait() calls
             coming in with a userspace event buffer of size, say, ten.
             Suppose there are 50 events ready in the list. The first
             epoll_wait() will "steal" the whole list, while the second, seeing
             no events, will go to sleep. But at the end of ep_send_events() in
             the first epoll_wait(), we will re-inject surplus ready fds, and we
             will trigger the proper wake_up to the second epoll_wait().
      
          5) ep_poll_callback() hitting us asynchronously
             This is the tricky part. As said above, the ep_is_linked() test
             done inside ep_poll_callback() guarantees that, as long as the
             item appears linked to a list, ep_poll_callback() will not try
             to re-queue it (that is, it will not read or write any of its
             members). When we do a list_del() in ep_send_events(), the item
             will still satisfy the ep_is_linked() test (whatever data is
             written in prev/next, it will never be its own pointer), so
             ep_poll_callback() will still leave us alone. It is only after
             the eventual smp_mb()+INIT_LIST_HEAD(&epi->rdllink) that the item
             becomes visible to ep_poll_callback() again, but at that point
             we're already past it.
      
      [akpm@osdl.org: 80 cols]
      Signed-off-by: Davide Libenzi <davidel@xmailserver.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6192bd53
  3. 09 December 2006, 1 commit
  4. 08 December 2006, 2 commits
  5. 12 October 2006, 1 commit
    • [PATCH] epoll_pwait() · b611967d
      Authored by Davide Libenzi
      Implement the epoll_pwait system call, which extends the event wait
      mechanism with the same logic that ppoll and pselect use.  The definition
      of epoll_pwait is:
      
      int epoll_pwait(int epfd, struct epoll_event *events, int maxevents,
                       int timeout, const sigset_t *sigmask, size_t sigsetsize);
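      
      For illustration, a minimal usage sketch via the glibc-style wrapper (which
      hides the sigsetsize argument); error handling omitted:
      
         /* Block SIGINT in normal operation, but let it interrupt the wait. */
         sigset_t blocked, waitmask;
         struct epoll_event evs[16];
         int n;
      
         sigemptyset(&blocked);
         sigaddset(&blocked, SIGINT);
         sigprocmask(SIG_BLOCK, &blocked, NULL);
      
         sigemptyset(&waitmask);
         n = epoll_pwait(epfd, evs, 16, -1, &waitmask);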
      
      The difference between the vanilla epoll_wait and epoll_pwait is that the
      latter allows the caller to specify a signal mask to be set while waiting
      for events.  Hence epoll_pwait will wait until either a monitored event
      occurs or an unmasked signal is delivered.  If sigmask is NULL, the
      epoll_pwait system call will act exactly like epoll_wait.  For the POSIX
      definition of
      pselect, information is available here:
      
      http://www.opengroup.org/onlinepubs/009695399/functions/select.html
      
      Signed-off-by: Davide Libenzi <davidel@xmailserver.org>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Andi Kleen <ak@muc.de>
      Cc: Michael Kerrisk <mtk-manpages@gmx.net>
      Cc: Ulrich Drepper <drepper@redhat.com>
      Cc: Roland McGrath <roland@redhat.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      b611967d
  6. 03 October 2006, 1 commit
  7. 27 September 2006, 1 commit
  8. 28 August 2006, 1 commit
  9. 04 July 2006, 1 commit
  10. 26 June 2006, 1 commit
    • [PATCH] epoll: use unlocked wqueue operations · 3419b23a
      Authored by Davide Libenzi
      A few days ago Arjan signaled a lockdep red flag on epoll locks,
      specifically between the epoll's device structure lock (->lock) and the wait
      queue head lock (->lock).
      
      As I explained in another email, and directly to Arjan, this can't happen
      in reality because of the explicit check at eventpoll.c:592, which does not
      allow dropping an epoll fd inside the same epoll fd.  Since lockdep works
      on per-structure locks, it will never be able to know about policies
      enforced in other parts of the code.
      
      It was decided some time ago to allow dropping epoll fds inside other
      epoll fds, which triggers a rather tricky wakeup operation (due to
      possibly reentrant callback-driven wakeups) handled by the
      ep_poll_safewake() function.  While looking again at the code, though, I
      noticed that all the operations done on the epoll's main structure wait
      queue head (->wq) are already protected by the epoll lock (->lock), so
      locked-style functions can be used to manipulate the ->wq member.  This
      both saves a lock acquisition and makes lockdep happy.
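      
      The pattern, sketched for illustration only (not the literal eventpoll.c
      code): because ->wq is only ever touched with ep->lock held, the variants
      that skip the waitqueue's internal lock are enough:
      
         unsigned long flags;
         wait_queue_t wait;
      
         init_waitqueue_entry(&wait, current);
      
         spin_lock_irqsave(&ep->lock, flags);
         /*
          * ep->lock already serializes every access to ep->wq, so the
          * no-internal-locking helper is sufficient here.
          */
         __add_wait_queue(&ep->wq, &wait);
         spin_unlock_irqrestore(&ep->lock, flags);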
      
      Running totalmess on my dual opteron for a while did not reveal any problem
      so far:
      
      http://www.xmailserver.org/totalmess.c
      
      Signed-off-by: Davide Libenzi <davidel@xmailserver.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      3419b23a
  11. 23 June 2006, 1 commit
    • [PATCH] VFS: Permit filesystem to override root dentry on mount · 454e2398
      Authored by David Howells
      Extend the get_sb() filesystem operation to take an extra argument that
      permits the VFS to pass in the target vfsmount that defines the mountpoint.
      
      The filesystem is then required to manually set the superblock and root dentry
      pointers.  For most filesystems, this should be done with simple_set_mnt()
      which will set the superblock pointer and then set the root dentry to the
      superblock's s_root (as per the old default behaviour).
      
      The get_sb() op now returns an integer as there's now no need to return the
      superblock pointer.
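      
      For example, a typical converted get_sb() might look like the sketch below
      (the foo_* names are placeholders, as in the documentation):
      
         static int foo_get_sb(struct file_system_type *fs_type, int flags,
                               const char *dev_name, void *data,
                               struct vfsmount *mnt)
         {
                 /*
                  * The convenience helper fills in mnt->mnt_sb and mnt->mnt_root
                  * (through simple_set_mnt()) and returns 0 or an error code.
                  */
                 return get_sb_nodev(fs_type, flags, data, foo_fill_super, mnt);
         }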
      
      This patch permits a superblock to be implicitly shared amongst several mount
      points, such as can be done with NFS to avoid potential inode aliasing.  In
      such a case, simple_set_mnt() would not be called, and instead the mnt_root
      and mnt_sb would be set directly.
      
      The patch also makes the following changes:
      
       (*) the get_sb_*() convenience functions in the core kernel now take a vfsmount
           pointer argument and return an integer, so most filesystems have to change
           very little.
      
       (*) If one of the convenience functions is not used, then get_sb() should
           normally call simple_set_mnt() to instantiate the vfsmount. This will
           always return 0, and so can be tail-called from get_sb().
      
       (*) generic_shutdown_super() now calls shrink_dcache_sb() to clean up the
           dcache upon superblock destruction rather than shrink_dcache_anon().
      
           This is required because the superblock may now have multiple trees that
           aren't actually bound to s_root, but that still need to be cleaned up. The
           currently called functions assume that the whole tree is rooted at s_root,
           and that anonymous dentries are not the roots of trees, which results in
           dentries being left unculled.
      
           However, with the way NFS superblock sharing is currently set to be
           implemented, these assumptions are violated: the root of the filesystem is
           simply a dummy dentry and inode (the real inode for '/' may well be
           inaccessible), and all the vfsmounts are rooted on anonymous[*] dentries
           with child trees.
      
           [*] Anonymous until discovered from another tree.
      
       (*) The documentation has been adjusted, including the additional change
           of ext2_* into foo_* throughout.
      
      [akpm@osdl.org: convert ipath_fs, do other stuff]
      Signed-off-by: David Howells <dhowells@redhat.com>
      Acked-by: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Nathan Scott <nathans@sgi.com>
      Cc: Roland Dreier <rolandd@cisco.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      454e2398
  12. 21 April 2006, 1 commit
  13. 11 April 2006, 1 commit
  14. 29 March 2006, 1 commit
  15. 27 March 2006, 1 commit
  16. 26 March 2006, 1 commit
    • [PATCH] POLLRDHUP/EPOLLRDHUP handling for half-closed devices notifications · f348d70a
      Authored by Davide Libenzi
      Implement the half-closed devices notification by adding a new POLLRDHUP
      (and its alias EPOLLRDHUP) bit to the existing poll/select sets.  Since
      there was concern about changing the existing POLLHUP handling, which does
      not correctly report half-closed devices, this implementation leaves the
      current POLLHUP reporting unchanged and simply adds a new bit that is set
      in the few places where it makes sense.  The same thing was discussed and
      conceptually agreed quite some time ago:
      
      http://lkml.org/lkml/2003/7/12/116
      
      Since this new event bit is added to the existing Linux poll infrastructure,
      even the existing poll/select system calls will be able to use it.  As for
      the existing POLLHUP handling, the patch leaves it as is.  The
      pollrdhup-2.6.16.rc5-0.10.diff defines the POLLRDHUP for all the existing
      archs and sets the bit in the six relevant files.  The other attached diff
      is the simple change required to sys/epoll.h to add the EPOLLRDHUP
      definition.
      
      There is "a stupid program" to test POLLRDHUP delivery here:
      
       http://www.xmailserver.org/pollrdhup-test.c
      
      It tests poll(2), but since the delivery is the same, epoll(2) will work
      equally well.
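      
      A minimal sketch of how an application could use the new bit (on glibc,
      POLLRDHUP is only exposed when _GNU_SOURCE is defined); error handling
      omitted:
      
         #define _GNU_SOURCE
         #include <poll.h>
      
         /* Returns non-zero once the peer has shut down its writing side,
            without having to read() up to EOF first. */
         int peer_half_closed(int fd)
         {
                 struct pollfd pfd = { .fd = fd, .events = POLLRDHUP };
      
                 poll(&pfd, 1, 0);
                 return (pfd.revents & POLLRDHUP) != 0;
         }
      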
      Signed-off-by: Davide Libenzi <davidel@xmailserver.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Michael Kerrisk <mtk-manpages@gmx.net>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      f348d70a
  17. 23 March 2006, 2 commits
  18. 28 September 2005, 1 commit
  19. 18 September 2005, 1 commit
  20. 24 June 2005, 1 commit
  21. 06 May 2005, 1 commit
  22. 17 April 2005, 1 commit
    • Linux-2.6.12-rc2 · 1da177e4
      Authored by Linus Torvalds
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!
      1da177e4