1. 03 Apr 2009 (2 commits)
    • vfs: skip I_CLEAR state inodes · b6fac63c
      Authored by Wu Fengguang
      clear_inode() switches the inode state from I_FREEING to I_CLEAR, and does so
      _outside_ of inode_lock.  So any test for I_FREEING is incomplete without a
      coupled test for I_CLEAR.
      
      So add I_CLEAR tests to drop_pagecache_sb(), generic_sync_sb_inodes() and
      add_dquot_ref().
      
      Masayoshi MIZUMA discovered the bug in drop_pagecache_sb(), and Jan Kara
      reminded us to fix the other two cases.
      
      Masayoshi MIZUMA has a nice panic flow:
      
      =====================================================================
                  [process A]               |        [process B]
       |                                    |
       |    prune_icache()                  | drop_pagecache()
       |      spin_lock(&inode_lock)        |   drop_pagecache_sb()
       |      inode->i_state |= I_FREEING;  |       |
       |      spin_unlock(&inode_lock)      |       V
       |          |                         |     spin_lock(&inode_lock)
       |          V                         |         |
       |      dispose_list()                |         |
       |        list_del()                  |         |
       |        clear_inode()               |         |
       |          inode->i_state = I_CLEAR  |         |
       |            |                       |         V
       |            |                       |      if (inode->i_state & (I_FREEING|I_WILL_FREE))
       |            |                       |              continue;           <==== NOT MATCH
       |            |                       |
       |            |                       | (DANGER from here on! Accessing disposing inode!)
       |            |                       |
       |            |                       |      __iget()
       |            |                       |        list_move() <===== PANIC on poisoned list !!
       V            V                       |
      (time)
      =====================================================================
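
      As a rough sketch of the shape of the fix (not the literal diff from the commit;
      the same coupled test is also needed in generic_sync_sb_inodes() and
      add_dquot_ref()), the I_FREEING check gains an I_CLEAR test so that inodes
      already cleared outside inode_lock are skipped too:

      	/* sketch: inside the inode walk of drop_pagecache_sb() */
      	if (inode->i_state & (I_FREEING | I_CLEAR | I_WILL_FREE))
      		continue;	/* also skips inodes clear_inode() already moved to I_CLEAR */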
      Reported-by: Masayoshi MIZUMA <m.mizuma@jp.fujitsu.com>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Cc: <stable@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b6fac63c
    • nommu: fix a number of issues with the per-MM VMA patch · 33e5d769
      Authored by David Howells
      Fix a number of issues with the per-MM VMA patch:
      
       (1) Make mmap_pages_allocated an atomic_long_t, just in case this is used on
           a NOMMU system with more than 2G pages.  Makes no difference on a 32-bit
           system.
      
       (2) Report vma->vm_pgoff * PAGE_SIZE as a 64-bit value, not a 32-bit value,
           lest it overflow.
      
       (3) Move the allocation of the vm_area_struct slab back to fork.c.
      
       (4) Use KMEM_CACHE() for both vm_area_struct and vm_region slabs.
      
       (5) Use BUG_ON() rather than if () BUG().
      
       (6) Make the default validate_nommu_regions() a static inline rather than a
           #define.
      
       (7) Make free_page_series()'s objection to pages with a refcount != 1 more
           informative.
      
       (8) Adjust the __put_nommu_region() banner comment to indicate that the
           semaphore must be held for writing.
      
       (9) Limit the number of warnings about munmaps of non-mmapped regions.
      Reported-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: David Howells <dhowells@redhat.com>
      Cc: Greg Ungerer <gerg@snapgear.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      33e5d769
  2. 02 Apr 2009 (1 commit)
  3. 01 Apr 2009 (29 commits)
    • autofs4: fix lookup deadlock · 8f63aaa8
      Authored by Ian Kent
      A deadlock can occur when user space uses a signal (autofs version 4 uses
      SIGCHLD for this) to effect expire completion.
      
      The order of events is:
      
      The expire process completes, but before it is able to send SIGCHLD to its parent
      ...
      
      Another process walks onto a different mount point and drops the directory
      inode semaphore prior to sending the request to the daemon as it must ...
      
      A third process does an lstat on the expired mount point, causing it to wait
      on expire completion while (unfortunately) holding the directory semaphore.
      
      The mount request then arrives at the daemon, which does an lstat, and we
      deadlock.
      
      For some time I was concerned about releasing the directory semaphore around
      the expire wait in autofs4_lookup as well as for the mount call back.  I
      finally realized that the last round of changes in this function made the
      expiring dentry and the lookup dentry separate and distinct so the check and
      possible wait can be done anywhere prior to the mount call back.  This patch
      moves the check to just before the mount call back and inside the directory
      inode mutex release.
      Signed-off-by: Ian Kent <raven@themaw.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8f63aaa8
    • autofs4: cleanup expire code duplication · 56fcef75
      Authored by Ian Kent
      A significant portion of the autofs_dev_ioctl_expire() and
      autofs4_expire_multi() functions is duplicated code.  This patch cleans that
      up.
      Signed-off-by: Ian Kent <raven@themaw.net>
      Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      56fcef75
    • ecryptfs: use kzfree() · 00fcf2cb
      Authored by Johannes Weiner
      Use kzfree() instead of memset() + kfree().
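
      For illustration only (a sketch of the pattern, not the actual eCryptfs hunks;
      the variable names here are made up), kzfree() collapses the zero-then-free
      sequence into one call:

      	/* before: zero the buffer, then free it */
      	memset(key_buf, 0, key_len);
      	kfree(key_buf);

      	/* after: kzfree() zeroes the allocation before freeing it */
      	kzfree(key_buf);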
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: Pekka Enberg <penberg@cs.helsinki.fi>
      Acked-by: Tyler Hicks <tyhicks@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      00fcf2cb
    • ramfs: add support for "mode=" mount option · c3b1b1cb
      Authored by Wu Fengguang
      Addresses http://bugzilla.kernel.org/show_bug.cgi?id=12843
      
      "I use ramfs instead of tmpfs for /tmp because I don't use swap on my
      laptop.  Some apps need 1777 mode for /tmp directory, but ramfs does not
      support 'mode=' mount option."
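
      A minimal user-space check of the new option might look like this (illustrative
      only: the target directory is an example and mounting requires root):

      #include <stdio.h>
      #include <sys/mount.h>

      int main(void)
      {
      	/* roughly: mount -t ramfs -o mode=1777 ramfs /mnt/ramtmp */
      	if (mount("ramfs", "/mnt/ramtmp", "ramfs", 0, "mode=1777")) {
      		perror("mount");
      		return 1;
      	}
      	return 0;
      }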
      Reported-by: Avan Anishchuk <matimatik@gmail.com>
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c3b1b1cb
    • epoll keyed wakeups: make eventfd use keyed wakeups · 39510888
      Authored by Davide Libenzi
      Introduce keyed event wakeups inside the eventfd code.
      Signed-off-by: Davide Libenzi <davidel@xmailserver.org>
      Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: David Miller <davem@davemloft.net>
      Cc: William Lee Irwin III <wli@movementarian.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      39510888
    • epoll keyed wakeups: teach epoll about hints coming with the wakeup key · 2dfa4eea
      Authored by Davide Libenzi
      Use the events hint now sent by some devices to avoid unnecessary wakeups
      for events that are of no interest to the caller.  This code handles both
      devices that are sending keyed events and the ones that are not (and even
      the ones that sometimes send events and sometimes don't).
      
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: Davide Libenzi <davidel@xmailserver.org>
      Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: David Miller <davem@davemloft.net>
      Cc: William Lee Irwin III <wli@movementarian.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2dfa4eea
    • eventfd: improve support for semaphore-like behavior · bcd0b235
      Authored by Davide Libenzi
      People started using eventfd in a semaphore-like way where before they
      were using pipes.
      
      That is, counter-based resource access: a "wait()" returns immediately,
      decrementing the counter by one, if the counter is greater than zero, and
      otherwise blocks.  A "post(count)" adds count to the counter, releasing the
      appropriate number of waiters.  In eventfd the "post" (write) part is fine,
      while the "wait" (read) does not dequeue 1, but the whole counter value.
      
      The problem with eventfd is that a read() on the fd returns and wipes the
      whole counter, making the use of it as semaphore a little bit more
      cumbersome.  You can do a read() followed by a write() of COUNTER-1, but
      IMO it's pretty easy and cheap to make this work w/out extra steps.  This
      patch introduces a new eventfd flag that tells eventfd to only dequeue 1
      from the counter, allowing simple read/write to make it behave like a
      semaphore.  Simple test here:
      
      http://www.xmailserver.org/eventfd-sem.c
      
      To be back-compatible with earlier kernels, userspace applications should
      probe for the availability of this feature via
      
      #ifdef EFD_SEMAPHORE
      	fd = eventfd2 (CNT, EFD_SEMAPHORE);
      	if (fd == -1 && errno == EINVAL)
      		<fallback>
      #else
      		<fallback>
      #endif
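
      A small, self-contained demo of the semaphore-like behavior (assumes glibc's
      eventfd() wrapper and headers that define EFD_SEMAPHORE; the counts are
      arbitrary):

      #include <stdio.h>
      #include <stdint.h>
      #include <unistd.h>
      #include <sys/eventfd.h>

      int main(void)
      {
      	uint64_t val;
      	int fd = eventfd(3, EFD_SEMAPHORE);	/* counter starts at 3 */

      	if (fd == -1) {
      		perror("eventfd");
      		return 1;
      	}
      	/* with EFD_SEMAPHORE each read dequeues exactly 1 instead of wiping the counter */
      	if (read(fd, &val, sizeof(val)) != sizeof(val)) {
      		perror("read");
      		return 1;
      	}
      	printf("read returned %llu, counter is now 2\n", (unsigned long long)val);
      	val = 2;
      	if (write(fd, &val, sizeof(val)) != sizeof(val))	/* post(2): counter back to 4 */
      		perror("write");
      	close(fd);
      	return 0;
      }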
      Signed-off-by: Davide Libenzi <davidel@xmailserver.org>
      Cc: <linux-api@vger.kernel.org>
      Tested-by: Michael Kerrisk <mtk.manpages@gmail.com>
      Cc: Ulrich Drepper <drepper@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      bcd0b235
    • epoll: use real type instead of void * · 4f0989db
      Authored by Tony Battersby
      eventpoll.c uses void * in one place for no obvious reason; change it to
      use the real type instead.
      Signed-off-by: Tony Battersby <tonyb@cybernetics.com>
      Acked-by: Davide Libenzi <davidel@xmailserver.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4f0989db
    • epoll: clean up ep_modify · e057e15f
      Authored by Tony Battersby
      ep_modify() doesn't need to set event.data from within the ep->lock
      spinlock as the comment suggests.  The only place event.data is used is
      ep_send_events_proc(), and this is protected by ep->mtx instead of
      ep->lock.  Also update the comment for mutex_lock() at the top of
      ep_scan_ready_list(), which mentions epoll_ctl(EPOLL_CTL_DEL) but not
      epoll_ctl(EPOLL_CTL_MOD).
      
      ep_modify() can also use spin_lock_irq() instead of spin_lock_irqsave().
      Signed-off-by: Tony Battersby <tonyb@cybernetics.com>
      Acked-by: Davide Libenzi <davidel@xmailserver.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e057e15f
    • epoll: remove unnecessary xchg · d1bc90dd
      Authored by Tony Battersby
      xchg in ep_unregister_pollwait() is unnecessary because it is protected by
      either epmutex or ep->mtx (the same protection as ep_remove()).
      
      If xchg was necessary, it would be insufficient to protect against
      problems: if multiple concurrent calls to ep_unregister_pollwait() were
      possible then a second caller that returns without doing anything because
      nwait == 0 could return before the waitqueues are removed by the first
      caller, which looks like it could lead to problematic races with
      ep_poll_callback().
      
      So remove xchg and add comments about the locking.
      Signed-off-by: Tony Battersby <tonyb@cybernetics.com>
      Acked-by: Davide Libenzi <davidel@xmailserver.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d1bc90dd
    • epoll: remember the event if epoll_wait returns -EFAULT · d0305882
      Authored by Tony Battersby
      If epoll_wait returns -EFAULT, the event that was being returned when the
      fault was encountered will be forgotten.  This is not a big deal since
      EFAULT will happen only if a buggy userspace program passes in a bad
      address, in which case what happens later usually doesn't matter.
      However, it is easy to remember the event for later, and this patch makes
      a simple change to do that.
      Signed-off-by: Tony Battersby <tonyb@cybernetics.com>
      Acked-by: Davide Libenzi <davidel@xmailserver.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d0305882
    • epoll: don't use current in irq context · abff55ce
      Authored by Tony Battersby
      ep_call_nested() (formerly ep_poll_safewake()) uses "current" (without
      dereferencing it) to detect callback recursion, but it may be called from
      irq context where the use of current is generally discouraged.  It would
      be better to use get_cpu() and put_cpu() to detect the callback recursion.
      Signed-off-by: Tony Battersby <tonyb@cybernetics.com>
      Acked-by: Davide Libenzi <davidel@xmailserver.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      abff55ce
    • epoll: remove debugging code · bb57c3ed
      Authored by Davide Libenzi
      Remove debugging code from epoll.  There's no need for it to be included
      into mainline code.
      Signed-off-by: Davide Libenzi <davidel@xmailserver.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      bb57c3ed
    • epoll: fix epoll's own poll (update) · 296e236e
      Authored by Davide Libenzi
      Signed-off-by: Davide Libenzi <davidel@xmailserver.org>
      Cc: Pavel Pisa <pisa@cmp.felk.cvut.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      296e236e
    • epoll: fix epoll's own poll · 5071f97e
      Authored by Davide Libenzi
      Fix a bug inside the epoll's f_op->poll() code, that returns POLLIN even
      though there are no actual ready monitored fds.  The bug shows up if you
      add an epoll fd inside another fd container (poll, select, epoll).
      
      The problem is that the callback-based wakeups used by epoll do not carry
      (patches will follow, to fix this) any information about the events that
      actually happened.  So the callback code, since it can't call the file*'s
      ->poll() inside the callback, chains the file* into a ready-list.
      
      So, suppose you added an fd with EPOLLOUT only and some data shows up on
      the fd: the file* mapped by the fd will be added to the ready-list (via the
      wakeup callback).  During normal epoll_wait() use, this condition is
      sorted out at the time we're actually able to call the file*'s
      f_op->poll().
      
      Inside the old epoll's f_op->poll() though, only a quick check
      !list_empty(ready-list) was performed, and this could have led to
      reporting POLLIN even though no ready fds would show up at a following
      epoll_wait().  In order to correctly report the ready status for an epoll
      fd, the ready-list must be checked to see if any really available fd+event
      would be ready in a following epoll_wait().
      
      This operation (calling f_op->poll() from inside f_op->poll()), like wakeups,
      must be handled with care because epoll fds can be added to other epoll
      fds.
      
      Test code:
      
      /*
       *  epoll_test by Davide Libenzi (Simple code to test epoll internals)
       *  Copyright (C) 2008  Davide Libenzi
       *
       *  This program is free software; you can redistribute it and/or modify
       *  it under the terms of the GNU General Public License as published by
       *  the Free Software Foundation; either version 2 of the License, or
       *  (at your option) any later version.
       *
       *  This program is distributed in the hope that it will be useful,
       *  but WITHOUT ANY WARRANTY; without even the implied warranty of
       *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
       *  GNU General Public License for more details.
       *
       *  You should have received a copy of the GNU General Public License
       *  along with this program; if not, write to the Free Software
       *  Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
       *
       *  Davide Libenzi <davidel@xmailserver.org>
       *
       */
      
      #include <sys/types.h>
      #include <unistd.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>
      #include <errno.h>
      #include <signal.h>
      #include <limits.h>
      #include <poll.h>
      #include <sys/epoll.h>
      #include <sys/wait.h>
      
      #define EPWAIT_TIMEO	(1 * 1000)
      #ifndef POLLRDHUP
      #define POLLRDHUP 0x2000
      #endif
      
      #define EPOLL_MAX_CHAIN	100L
      
      #define EPOLL_TF_LOOP (1 << 0)
      
      struct epoll_test_cfg {
      	long size;
      	long flags;
      };
      
      static int xepoll_create(int n) {
      	int epfd;
      
      	if ((epfd = epoll_create(n)) == -1) {
      		perror("epoll_create");
      		exit(2);
      	}
      
      	return epfd;
      }
      
      static void xepoll_ctl(int epfd, int cmd, int fd, struct epoll_event *evt) {
      	if (epoll_ctl(epfd, cmd, fd, evt) < 0) {
      		perror("epoll_ctl");
      		exit(3);
      	}
      }
      
      static void xpipe(int *fds) {
      	if (pipe(fds)) {
      		perror("pipe");
      		exit(4);
      	}
      }
      
      static pid_t xfork(void) {
      	pid_t pid;
      
      	if ((pid = fork()) == (pid_t) -1) {
      		perror("pipe");
      		exit(5);
      	}
      
      	return pid;
      }
      
      static int run_forked_proc(int (*proc)(void *), void *data) {
      	int status;
      	pid_t pid;
      
      	if ((pid = xfork()) == 0)
      		exit((*proc)(data));
      	if (waitpid(pid, &status, 0) != pid) {
      		perror("waitpid");
      		return -1;
      	}
      
      	return WIFEXITED(status) ? WEXITSTATUS(status): -2;
      }
      
      static int check_events(int fd, int timeo) {
      	struct pollfd pfd;
      
      	fprintf(stdout, "Checking events for fd %d\n", fd);
      	memset(&pfd, 0, sizeof(pfd));
      	pfd.fd = fd;
      	pfd.events = POLLIN | POLLOUT;
      	if (poll(&pfd, 1, timeo) < 0) {
      		perror("poll()");
      		return 0;
      	}
      	if (pfd.revents & POLLIN)
      		fprintf(stdout, "\tPOLLIN\n");
      	if (pfd.revents & POLLOUT)
      		fprintf(stdout, "\tPOLLOUT\n");
      	if (pfd.revents & POLLERR)
      		fprintf(stdout, "\tPOLLERR\n");
      	if (pfd.revents & POLLHUP)
      		fprintf(stdout, "\tPOLLHUP\n");
      	if (pfd.revents & POLLRDHUP)
      		fprintf(stdout, "\tPOLLRDHUP\n");
      
      	return pfd.revents;
      }
      
      static int epoll_test_tty(void *data) {
      	int epfd, ifd = fileno(stdin), res;
      	struct epoll_event evt;
      
      	if (check_events(ifd, 0) != POLLOUT) {
      		fprintf(stderr, "Something is cooking on STDIN (%d)\n", ifd);
      		return 1;
      	}
      	epfd = xepoll_create(1);
      	fprintf(stdout, "Created epoll fd (%d)\n", epfd);
      	memset(&evt, 0, sizeof(evt));
      	evt.events = EPOLLIN;
      	xepoll_ctl(epfd, EPOLL_CTL_ADD, ifd, &evt);
      	if (check_events(epfd, 0) & POLLIN) {
      		res = epoll_wait(epfd, &evt, 1, 0);
      		if (res == 0) {
      			fprintf(stderr, "Epoll fd (%d) is ready when it shouldn't!\n",
      				epfd);
      			return 2;
      		}
      	}
      
      	return 0;
      }
      
      static int epoll_wakeup_chain(void *data) {
      	struct epoll_test_cfg *tcfg = data;
      	int i, res, epfd, bfd, nfd, pfds[2];
      	pid_t pid;
      	struct epoll_event evt;
      
      	memset(&evt, 0, sizeof(evt));
      	evt.events = EPOLLIN;
      
      	epfd = bfd = xepoll_create(1);
      
      	for (i = 0; i < tcfg->size; i++) {
      		nfd = xepoll_create(1);
      		xepoll_ctl(bfd, EPOLL_CTL_ADD, nfd, &evt);
      		bfd = nfd;
      	}
      	xpipe(pfds);
      	if (tcfg->flags & EPOLL_TF_LOOP)
      	{
      		xepoll_ctl(bfd, EPOLL_CTL_ADD, epfd, &evt);
      		/*
       		 * If we're testing for a loop, we want the wakeup
       		 * triggered by the write to the pipe done in the child
       		 * process to trigger a fake event. So we add the pipe
       		 * read side with EPOLLOUT events. This will trigger
       		 * an addition to the ready-list, but no real events
       		 * will be there. Then the epoll kernel code will proceed
       		 * to call f_op->poll() of the epfd, triggering the
       		 * loop we want to test.
      		 */
      		evt.events = EPOLLOUT;
      	}
      	xepoll_ctl(bfd, EPOLL_CTL_ADD, pfds[0], &evt);
      
      	/*
      	 * The pipe write must come after the poll(2) call inside
      	 * check_events(). This tests the nested wakeup code in
      	 * fs/eventpoll.c:ep_poll_safewake()
       	 * By having the check_events() (hence poll(2)) happen first,
       	 * we have the poll wait queue filled up, and the write(2) in the
      	 * child will trigger the wakeup chain.
      	 */
      	if ((pid = xfork()) == 0) {
      		sleep(1);
      		write(pfds[1], "w", 1);
      		exit(0);
      	}
      
      	res = check_events(epfd, 2000) & POLLIN;
      
      	if (waitpid(pid, NULL, 0) != pid) {
      		perror("waitpid");
      		return -1;
      	}
      
      	return res;
      }
      
      static int epoll_poll_chain(void *data) {
      	struct epoll_test_cfg *tcfg = data;
      	int i, res, epfd, bfd, nfd, pfds[2];
      	pid_t pid;
      	struct epoll_event evt;
      
      	memset(&evt, 0, sizeof(evt));
      	evt.events = EPOLLIN;
      
      	epfd = bfd = xepoll_create(1);
      
      	for (i = 0; i < tcfg->size; i++) {
      		nfd = xepoll_create(1);
      		xepoll_ctl(bfd, EPOLL_CTL_ADD, nfd, &evt);
      		bfd = nfd;
      	}
      	xpipe(pfds);
      	if (tcfg->flags & EPOLL_TF_LOOP)
      	{
      		xepoll_ctl(bfd, EPOLL_CTL_ADD, epfd, &evt);
      		/*
       		 * If we're testing for a loop, we want the wakeup
       		 * triggered by the write to the pipe done in the child
       		 * process to trigger a fake event. So we add the pipe
       		 * read side with EPOLLOUT events. This will trigger
       		 * an addition to the ready-list, but no real events
       		 * will be there. Then the epoll kernel code will proceed
       		 * to call f_op->poll() of the epfd, triggering the
       		 * loop we want to test.
      		 */
      		evt.events = EPOLLOUT;
      	}
      	xepoll_ctl(bfd, EPOLL_CTL_ADD, pfds[0], &evt);
      
      	/*
       	 * The pipe write must come before the poll(2) call inside
      	 * check_events(). This tests the nested f_op->poll calls code in
      	 * fs/eventpoll.c:ep_eventpoll_poll()
      	 * By having the pipe write(2) happen first, we make the kernel
       	 * epoll code load the ready lists, and the following poll(2)
      	 * done inside check_events() will test nested poll code in
      	 * ep_eventpoll_poll().
      	 */
      	if ((pid = xfork()) == 0) {
      		write(pfds[1], "w", 1);
      		exit(0);
      	}
      	sleep(1);
      	res = check_events(epfd, 1000) & POLLIN;
      
      	if (waitpid(pid, NULL, 0) != pid) {
      		perror("waitpid");
      		return -1;
      	}
      
      	return res;
      }
      
      int main(int ac, char **av) {
      	int error;
      	struct epoll_test_cfg tcfg;
      
      	fprintf(stdout, "\n********** Testing TTY events\n");
      	error = run_forked_proc(epoll_test_tty, NULL);
      	fprintf(stdout, error == 0 ?
      		"********** OK\n": "********** FAIL (%d)\n", error);
      
      	tcfg.size = 3;
      	tcfg.flags = 0;
      	fprintf(stdout, "\n********** Testing short wakeup chain\n");
      	error = run_forked_proc(epoll_wakeup_chain, &tcfg);
      	fprintf(stdout, error == POLLIN ?
      		"********** OK\n": "********** FAIL (%d)\n", error);
      
      	tcfg.size = EPOLL_MAX_CHAIN;
      	tcfg.flags = 0;
      	fprintf(stdout, "\n********** Testing long wakeup chain (HOLD ON)\n");
      	error = run_forked_proc(epoll_wakeup_chain, &tcfg);
      	fprintf(stdout, error == 0 ?
      		"********** OK\n": "********** FAIL (%d)\n", error);
      
      	tcfg.size = 3;
      	tcfg.flags = 0;
      	fprintf(stdout, "\n********** Testing short poll chain\n");
      	error = run_forked_proc(epoll_poll_chain, &tcfg);
      	fprintf(stdout, error == POLLIN ?
      		"********** OK\n": "********** FAIL (%d)\n", error);
      
      	tcfg.size = EPOLL_MAX_CHAIN;
      	tcfg.flags = 0;
      	fprintf(stdout, "\n********** Testing long poll chain (HOLD ON)\n");
      	error = run_forked_proc(epoll_poll_chain, &tcfg);
      	fprintf(stdout, error == 0 ?
      		"********** OK\n": "********** FAIL (%d)\n", error);
      
      	tcfg.size = 3;
      	tcfg.flags = EPOLL_TF_LOOP;
      	fprintf(stdout, "\n********** Testing loopy wakeup chain (HOLD ON)\n");
      	error = run_forked_proc(epoll_wakeup_chain, &tcfg);
      	fprintf(stdout, error == 0 ?
      		"********** OK\n": "********** FAIL (%d)\n", error);
      
      	tcfg.size = 3;
      	tcfg.flags = EPOLL_TF_LOOP;
      	fprintf(stdout, "\n********** Testing loopy poll chain (HOLD ON)\n");
      	error = run_forked_proc(epoll_poll_chain, &tcfg);
      	fprintf(stdout, error == 0 ?
      		"********** OK\n": "********** FAIL (%d)\n", error);
      
      	return 0;
      }
      Signed-off-by: Davide Libenzi <davidel@xmailserver.org>
      Cc: Pavel Pisa <pisa@cmp.felk.cvut.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5071f97e
    • ntfs: remove private wrapper of endian helpers · 63cd8854
      Authored by Harvey Harrison
      The base versions handle constant folding now and are shorter than these
      private wrappers, use them directly.
      Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
      Cc: Anton Altaparmakov <aia21@cantab.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      63cd8854
    • filesystem freeze: allow SysRq emergency thaw to thaw frozen filesystems · c2d75438
      Authored by Eric Sandeen
      Now that the filesystem freeze operation has been elevated to the VFS, and
      is just an ioctl away, some sort of safety net for unintentionally frozen
      root filesystems may be in order.
      
      The timeout thaw originally proposed did not get merged, but perhaps
      something like this would be useful in emergencies.
      
      For example, freeze /path/to/mountpoint may freeze your root filesystem if
      you forgot that you had that unmounted.
      
      I chose 'j' as the last remaining character other than 'h' which is sort
      of reserved for help (because help is generated on any unknown character).
      
      I've tested this on a non-root fs with multiple (nested) freezers, as well
      as on a system rendered unresponsive due to a frozen root fs.
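
      For reference, the emergency thaw can be triggered from the keyboard
      (Alt-SysRq-j) or, as a small sketch, by poking /proc/sysrq-trigger (requires
      root and CONFIG_MAGIC_SYSRQ; 'j' is the key added by this patch):

      #include <stdio.h>

      int main(void)
      {
      	/* same effect as: echo j > /proc/sysrq-trigger */
      	FILE *f = fopen("/proc/sysrq-trigger", "w");

      	if (!f) {
      		perror("fopen");
      		return 1;
      	}
      	fputc('j', f);
      	fclose(f);
      	return 0;
      }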
      
      [randy.dunlap@oracle.com: emergency thaw only if CONFIG_BLOCK enabled]
      Signed-off-by: Eric Sandeen <sandeen@redhat.com>
      Cc: Takashi Sato <t-sato@yk.jp.nec.com>
      Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c2d75438
    • vmscan: fix it to take care of nodemask · 327c0e96
      Authored by KAMEZAWA Hiroyuki
      try_to_free_pages() is used for the direct reclaim of up to
      SWAP_CLUSTER_MAX pages when watermarks are low.  The caller to
      alloc_pages_nodemask() can specify a nodemask of nodes that are allowed to
      be used but this is not passed to try_to_free_pages().  This can lead to
      unnecessary reclaim of pages that are unusable by the caller, and in the
      worst case lead to allocation failure because progress is not being made
      where it is needed.
      
      This patch passes the nodemask used for alloc_pages_nodemask() to
      try_to_free_pages().
      Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Acked-by: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      327c0e96
    • ramfs-nommu: use generic lru cache · 2678958e
      Authored by Johannes Weiner
      Instead of open-coding the lru-list-add pagevec batching when expanding a
      file mapping from zero, defer to the appropriate page cache function that
      also takes care of adding the page to the lru list.
      
      This is cleaner, saves code and reduces the stack footprint by 16 words
      worth of pagevec.
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: David Howells <dhowells@redhat.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.com>
      Cc: MinChan Kim <minchan.kim@gmail.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      Cc: Greg Ungerer <gerg@snapgear.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2678958e
    • mm: page_mkwrite change prototype to match fault: fix sysfs · 851a039c
      Authored by Hugh Dickins
      Fix warnings and return values in sysfs bin_page_mkwrite(), fixing
      fs/sysfs/bin.c: In function `bin_page_mkwrite':
      fs/sysfs/bin.c:250: warning: passing argument 2 of `bb->vm_ops->page_mkwrite' from incompatible pointer type
      fs/sysfs/bin.c: At top level:
      fs/sysfs/bin.c:280: warning: initialization from incompatible pointer type
      
      Expects to have my [PATCH next] sysfs: fix some bin_vm_ops errors
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: "Eric W. Biederman" <ebiederm@aristanetworks.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      851a039c
    • fs: fix page_mkwrite error cases in core code and btrfs · 56a76f82
      Authored by Nick Piggin
      page_mkwrite is called with neither the page lock nor the ptl held.  This
      means a page can be concurrently truncated or invalidated out from
      underneath it.  Callers are supposed to prevent truncate races themselves,
      however previously the only thing they can do in case they hit one is to
      raise a SIGBUS.  A sigbus is wrong for the case that the page has been
      invalidated or truncated within i_size (eg.  hole punched).  Callers may
      also have to perform memory allocations in this path, where again, SIGBUS
      would be wrong.
      
      The previous patch ("mm: page_mkwrite change prototype to match fault")
      made it possible to properly specify errors.  Convert the generic buffer.c
      code and btrfs to return sane error values (in the case of page removed
      from pagecache, VM_FAULT_NOPAGE will cause the fault handler to exit
      without doing anything, and the fault will be retried properly).
      
      This fixes core code, and converts btrfs as a template/example.  All other
      filesystems defining their own page_mkwrite should be fixed in a similar
      manner.
      Acked-by: Chris Mason <chris.mason@oracle.com>
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      56a76f82
    • mm: page_mkwrite change prototype to match fault · c2ec175c
      Authored by Nick Piggin
      Change the page_mkwrite prototype to take a struct vm_fault, and return
      VM_FAULT_xxx flags.  There should be no functional change.
      
      This makes it possible to return much more detailed error information to
      the VM (and also can provide more information eg.  virtual_address to the
      driver, which might be important in some special cases).
      
      This is required for a subsequent fix.  And will also make it easier to
      merge page_mkwrite() with fault() in future.
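
      Roughly, the interface change looks like this (a sketch of the hook's shape
      only, not the full diff; the struct vm_fault fields named here are the ones
      of that era):

      /* before: the hook was handed the struct page directly */
      int (*page_mkwrite)(struct vm_area_struct *vma, struct page *page);

      /* after: the hook is handed a struct vm_fault (vmf->page,
       * vmf->virtual_address, ...) and returns VM_FAULT_xxx flags,
       * just like ->fault() */
      int (*page_mkwrite)(struct vm_area_struct *vma, struct vm_fault *vmf);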
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Cc: Chris Mason <chris.mason@oracle.com>
      Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
      Cc: Miklos Szeredi <miklos@szeredi.hu>
      Cc: Steven Whitehouse <swhiteho@redhat.com>
      Cc: Mark Fasheh <mfasheh@suse.com>
      Cc: Joel Becker <joel.becker@oracle.com>
      Cc: Artem Bityutskiy <dedekind@infradead.org>
      Cc: Felix Blyakher <felixb@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c2ec175c
    • mm: reintroduce and deprecate rlimit based access for SHM_HUGETLB · 2584e517
      Authored by Ravikiran G Thirumalai
      Allow non-root users with sufficient mlock rlimits to allocate hugetlb-backed
      shm for now, but deprecate this.  It is being deprecated because the
      mlock-based rlimit checks for SHM_HUGETLB are not consistent with mmap-based
      huge page allocations.
      Signed-off-by: Ravikiran Thirumalai <kiran@scalex86.org>
      Reviewed-by: Mel Gorman <mel@csn.ul.ie>
      Cc: William Lee Irwin III <wli@holomorphy.com>
      Cc: Adam Litke <agl@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2584e517
    • mm: fix SHM_HUGETLB to work with users in hugetlb_shm_group · 8a0bdec1
      Authored by Ravikiran G Thirumalai
      Fix hugetlb subsystem so that non root users belonging to
      hugetlb_shm_group can actually allocate hugetlb backed shm.
      
      Currently non-root users cannot even map one large page using SHM_HUGETLB
      when they belong to the gid in /proc/sys/vm/hugetlb_shm_group.  This is
      because the allocation size is verified against the RLIMIT_MEMLOCK resource
      limit even if the user belongs to hugetlb_shm_group.
      
      This patch
      1. Fixes hugetlb subsystem so that users with CAP_IPC_LOCK and users
         belonging to hugetlb_shm_group don't need to be restricted with
         RLIMIT_MEMLOCK resource limits
      2. This patch also disables mlock based rlimit checking (which will
         be reinstated and marked deprecated in a subsequent patch).
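
      For reference, the user-space pattern this enables looks roughly like the
      sketch below (illustrative only: the 2MB huge page size is x86-specific,
      huge pages must be reserved, and the caller must be in hugetlb_shm_group or
      hold CAP_IPC_LOCK):

      #include <stdio.h>
      #include <sys/ipc.h>
      #include <sys/shm.h>

      #ifndef SHM_HUGETLB
      #define SHM_HUGETLB 04000	/* in case the libc headers lack it */
      #endif

      int main(void)
      {
      	size_t size = 2UL * 1024 * 1024;	/* one 2MB huge page */
      	int id = shmget(IPC_PRIVATE, size, SHM_HUGETLB | IPC_CREAT | SHM_R | SHM_W);

      	if (id < 0) {
      		perror("shmget(SHM_HUGETLB)");
      		return 1;
      	}
      	void *p = shmat(id, NULL, 0);
      	if (p == (void *)-1)
      		perror("shmat");
      	else
      		shmdt(p);
      	shmctl(id, IPC_RMID, NULL);	/* clean up the segment */
      	return 0;
      }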
      Signed-off-by: Ravikiran Thirumalai <kiran@scalex86.org>
      Reviewed-by: Mel Gorman <mel@csn.ul.ie>
      Cc: William Lee Irwin III <wli@holomorphy.com>
      Cc: Adam Litke <agl@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8a0bdec1
    • vfs: add/use account_page_dirtied() · e3a7cca1
      Authored by Edward Shishkin
      Add a helper function account_page_dirtied().  Use that from two
      callsites.  reiser4 adds a function which adds a third callsite.
      
      Signed-off-by: Edward Shishkin <edward.shishkin@gmail.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e3a7cca1
    • proc tty: remove struct tty_operations::read_proc · 0f043a81
      Authored by Alexey Dobriyan
      struct tty_operations::proc_fops took its place and there is one less
      create_proc_read_entry() user now!
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0f043a81
    • proc tty: add struct tty_operations::proc_fops · ae149b6b
      Authored by Alexey Dobriyan
      This is used for the gradual switch of TTY drivers away from ->read_proc,
      which helps with the gradual removal of ->read_proc across the whole tree.
      
      As a side effect, fix a possible race condition where ->data is initialized
      after the PDE is hooked into the proc tree.
      
      ->proc_fops takes precedence over ->read_proc.
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ae149b6b
    • Btrfs: try to free metadata pages when we free btree blocks · d57e62b8
      Authored by Chris Mason
      COW means we cycle through blocks fairly quickly, and once we
      free an extent on disk, it doesn't make much sense to keep the pages around.
      
      This commit tries to immediately free the page when we free the extent,
      which lowers our memory footprint significantly.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      d57e62b8
    • Btrfs: add extra flushing for renames and truncates · 5a3f23d5
      Authored by Chris Mason
      Renames and truncates are both common ways to replace old data with new
      data.  The filesystem can make an effort to make sure the new data is
      on disk before actually replacing the old data.
      
      This is especially important for rename, which many applications use as
      though it were atomic for both the data and the metadata involved.  The
      current btrfs code will happily replace a file that is fully on disk
      with one that was just created and still has pending IO.
      
      If we crash after transaction commit but before the IO is done, we'll end
      up replacing a good file with a zero length file.  The solution used
      here is to create a list of inodes that need special ordering and force
      them to disk before the commit is done.  This is similar to the
      ext3 style data=ordering, except it is only done on selected files.
      
      Btrfs is able to get away with this because it does not wait on commits
      very often, even for fsync (which uses a sub-commit).
      
      For renames, we order the file when it wasn't already
      on disk and when it is replacing an existing file.  Larger files
      are sent to filemap_flush right away (before the transaction handle is
      opened).
      
      For truncates, we order if the file goes from non-zero size down to
      zero size.  This is a little different, because at the time of the
      truncate the file has no dirty bytes to order.  But, we flag the inode
      so that it is added to the ordered list on close (via release method).  We
      also immediately add it to the ordered list of the current transaction
      so that we can try to flush down any writes the application sneaks in
      before commit.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      5a3f23d5
  4. 31 Mar 2009 (8 commits)