1. 04 December 2008, 1 commit
    • time: catch xtime_nsec underflows and fix them · 6c9bacb4
      Committed by John Stultz
      Impact: fix time warp bug
      
      Alex Shi, along with Yanmin Zhang, has been noticing occasional time
      inconsistencies recently. Through their great diagnosis, they found that
      the xtime_nsec value used in update_wall_time() was occasionally going
      negative. After looking through the code for a while, I realized we have
      the possibility of an underflow when three conditions are met in
      update_wall_time():
      
        1) We have accumulated a second's worth of nanoseconds, so we
           increment xtime.tv_sec and appropriately decrement xtime_nsec.
           (This doesn't cause xtime_nsec to go negative, but it can cause it
            to be small).
      
        2) The remaining offset value is large, but just slightly less than
           cycle_interval.
      
        3) clocksource_adjust() is speeding up the clock, causing a
           corrective amount (compensating for the increase in the multiplier
           being multiplied against the unaccumulated offset value) to be
           subtracted from xtime_nsec.
      
      This can cause xtime_nsec to underflow.
      
      Unfortunately, since we notify the NTP subsystem via second_overflow()
      whenever we accumulate a full second, and this affects the error
      accumulation that has already occurred, we cannot simply revert the
      accumulated second from xtime nor move the second accumulation to after
      the clocksource_adjust call without a change in behavior.
      
      This leaves us with (at least) two options:
      
      1) Simply return from clocksource_adjust() without making a change if we
         notice the adjustment would cause xtime_nsec to go negative.
      
      This would work, but I'm concerned that if a large adjustment was needed
      (due to the error being large), it may be possible to get stuck with an
      ever increasing error that becomes too large to correct (since it may
      always force xtime_nsec negative). This may just be paranoia on my part.
      
      2) Catch xtime_nsec if it is negative, then add the amount by which it
         is negative back to both xtime_nsec and the error.
      
      This second method is consistent with how we've handled earlier rounding
      issues, and also has the benefit that the error being added is always in
      the opposite direction and always equal to or smaller than the correction
      being applied. So the risk of a corner case where things get out of
      control is lessened.
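
      A minimal sketch of what option 2 boils down to inside
      update_wall_time(), assuming the 2.6.28-era clocksource fields
      (xtime_nsec, error, shift) and NTP_SCALE_SHIFT; illustrative, not
      the verbatim patch:

        /* If the adjustment pushed xtime_nsec below zero, add the
         * shortfall back to both xtime_nsec and the error so the
         * correction is re-applied gradually instead of warping time. */
        if ((s64)clock->xtime_nsec < 0) {
                s64 neg = -(s64)clock->xtime_nsec;

                clock->xtime_nsec = 0;
                clock->error += neg << (NTP_SCALE_SHIFT - clock->shift);
        }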
      
      This patch fixes bug 11970, as tested by Yanmin Zhang
      http://bugzilla.kernel.org/show_bug.cgi?id=11970
      
      Reported-by: alex.shi@intel.com
      Signed-off-by: John Stultz <johnstul@us.ibm.com>
      Acked-by: "Zhang, Yanmin" <yanmin_zhang@linux.intel.com>
      Tested-by: "Zhang, Yanmin" <yanmin_zhang@linux.intel.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      6c9bacb4
  2. 24 November 2008, 1 commit
  3. 20 November 2008, 5 commits
    • cgroups: fix a serious bug in cgroupstats · 33d283be
      Committed by Li Zefan
      Try this, and you'll get an oops immediately:
       # cd Documentation/accounting/
       # gcc -o getdelays getdelays.c
       # mount -t cgroup -o debug xxx /mnt
       # ./getdelays -C /mnt/tasks
      
      Because a normal file's dentry->d_fsdata is a pointer to struct cftype,
      not struct cgroup.
      
      After the patch, it returns EINVAL if we try to get cgroupstats
      from a normal file.
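
      A hedged sketch of the shape of the fix (dentry_is_cgroup_dir() is a
      hypothetical helper used only for illustration, not a real cgroup API):

        static int cgroupstats_build(struct cgroupstats *stats,
                                     struct dentry *dentry)
        {
                /* Only a cgroup directory carries a struct cgroup in
                 * d_fsdata; a normal control file points at a struct
                 * cftype instead, so refuse it. */
                if (!dentry_is_cgroup_dir(dentry))
                        return -EINVAL;

                /* ... account the tasks of (struct cgroup *)dentry->d_fsdata ... */
                return 0;
        }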
      
      Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
      Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
      Acked-by: Paul Menage <menage@google.com>
      Cc: <stable@kernel.org>		[2.6.25.x, 2.6.26.x, 2.6.27.x]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      33d283be
    • sprint_symbol(): use less stack · 966c8c12
      Committed by Hugh Dickins
      sprint_symbol(), itself used when dumping stacks, has been wasting 128
      bytes of stack: look up the symbol directly into the buffer supplied by the
      caller, instead of using a locally declared namebuf.
      
      I believe the name != buffer strcpy() is obsolete: the design here dates
      from when module symbol lookup pointed into a supposedly const but sadly
      volatile table; nowadays it copies, but an uncalled strcpy() looks better
      here than the risk of a recursive BUG_ON().
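
      Roughly, the change looks like the sketch below (not the verbatim
      patch): kallsyms_lookup() resolves straight into the caller's buffer,
      and the name != buffer strcpy() stays only as a safety net.

        int sprint_symbol(char *buffer, unsigned long address)
        {
                const char *name;
                unsigned long offset, size;
                char *modname;

                /* resolve directly into 'buffer'; no 128-byte local namebuf */
                name = kallsyms_lookup(address, &size, &offset, &modname, buffer);
                if (!name)
                        return sprintf(buffer, "0x%lx", address);

                if (name != buffer)     /* normally a no-op these days */
                        strcpy(buffer, name);

                /* ... append +offset/size [modname] as before ... */
                return strlen(buffer);
        }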
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      966c8c12
    • cgroup: fix potential deadlock in pre_destroy · 3fa59dfb
      Committed by KAMEZAWA Hiroyuki
      As Balbir pointed out, memcg's pre_destroy handler has a potential deadlock.

      It has the following lock sequence:
      
      	cgroup_mutex (cgroup_rmdir)
      	    -> pre_destroy -> mem_cgroup_pre_destroy-> force_empty
      		-> cpu_hotplug.lock. (lru_add_drain_all->
      				      schedule_work->
                                            get_online_cpus)
      
      But cpuset has the following:
      	cpu_hotplug.lock (call notifier)
      		-> cgroup_mutex. (within notifier)
      
      So this lock sequence should be fixed.

      Considering how pre_destroy works, it's not necessary to hold
      cgroup_mutex while calling it.

      As a side effect, we don't have to wait on this mutex while memcg's
      force_empty runs (which can take a long time when there are tons of pages).
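
      A rough sketch of the reordering (helper and field names are
      assumptions, shown only to illustrate the lock ordering):

        static int cgroup_rmdir(struct inode *unused_dir, struct dentry *dentry)
        {
                struct cgroup *cgrp = dentry->d_fsdata;

                /* pre_destroy may take cpu_hotplug.lock via
                 * lru_add_drain_all(), so call it without cgroup_mutex */
                cgroup_call_pre_destroy(cgrp);

                mutex_lock(&cgroup_mutex);
                /* ... re-check that the cgroup is empty and finish rmdir ... */
                mutex_unlock(&cgroup_mutex);
                return 0;
        }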
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Acked-by: Balbir Singh <balbir@linux.vnet.ibm.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Paul Menage <menage@google.com>
      Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      3fa59dfb
    • cpuset: update top cpuset's mems after adding a node · f481891f
      Committed by Miao Xie
      After adding a node into the machine, top cpuset's mems isn't updated.
      
      By reviewing the code, we found that the update function
      
        cpuset_track_online_nodes()
      
      was invoked after node_states[N_ONLINE] changes.  This is wrong because
      N_ONLINE just means the node has a pgdat; when a node has (or gains)
      memory, we use N_HIGH_MEMORY.  So we should invoke the update function
      after node_states[N_HIGH_MEMORY] changes, just as its commit message says.

      This patch fixes it.  We now use a memory hotplug notifier instead of
      calling cpuset_track_online_nodes() directly.
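
      A sketch of the notifier approach (the priority value and the exact
      body are assumptions here):

        static int cpuset_track_online_nodes(struct notifier_block *self,
                                             unsigned long action, void *arg)
        {
                cgroup_lock();
                switch (action) {
                case MEM_ONLINE:
                case MEM_OFFLINE:
                        /* refresh the top cpuset from N_HIGH_MEMORY, not N_ONLINE */
                        top_cpuset.mems_allowed = node_states[N_HIGH_MEMORY];
                        break;
                }
                cgroup_unlock();
                return NOTIFY_OK;
        }

        void __init cpuset_init_smp(void)
        {
                /* ... */
                hotplug_memory_notifier(cpuset_track_online_nodes, 10);
        }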
      Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
      Acked-by: Yasunori Goto <y-goto@jp.fujitsu.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Paul Menage <menage@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f481891f
    • reintroduce accept4 · de11defe
      Committed by Ulrich Drepper
      Introduce a new accept4() system call.  The addition of this system call
      matches analogous changes in 2.6.27 (dup3(), eventfd2(), signalfd4(),
      inotify_init1(), epoll_create1(), pipe2()) which added new system calls
      that differed from analogous traditional system calls in adding a flags
      argument that can be used to access additional functionality.
      
      The accept4() system call is exactly the same as accept(), except that
      it adds a flags bit-mask argument.  Two flags are initially implemented.
      (Most of the new system calls in 2.6.27 also had both of these flags.)
      
      SOCK_CLOEXEC causes the close-on-exec (FD_CLOEXEC) flag to be enabled
      for the new file descriptor returned by accept4().  This is a useful
      security feature to avoid leaking information in a multithreaded
      program where one thread is doing an accept() at the same time as
      another thread is doing a fork() plus exec().  More details here:
      http://udrepper.livejournal.com/20407.html ("Secure File Descriptor
      Handling", Ulrich Drepper).
      
      The other flag is SOCK_NONBLOCK, which causes the O_NONBLOCK flag
      to be enabled on the new open file description created by accept4().
      (This flag is merely a convenience, saving the use of additional calls
      to fcntl(F_GETFL) and fcntl(F_SETFL) to achieve the same result.)
      
      Here's a test program.  Works on x86-32.  Should work on x86-64, but
      I (mtk) don't have a system to hand to test with.
      
      It tests accept4() with each of the four possible combinations of
      SOCK_CLOEXEC and SOCK_NONBLOCK set/clear in 'flags', and verifies
      that the appropriate flags are set on the file descriptor/open file
      description returned by accept4().
      
      I tested Ulrich's patch in this thread by applying against 2.6.28-rc2,
      and it passes according to my test program.
      
      /* test_accept4.c
      
        Copyright (C) 2008, Linux Foundation, written by Michael Kerrisk
             <mtk.manpages@gmail.com>
      
        Licensed under the GNU GPLv2 or later.
      */
      #define _GNU_SOURCE
      #include <unistd.h>
      #include <sys/syscall.h>
      #include <sys/socket.h>
      #include <netinet/in.h>
      #include <stdlib.h>
      #include <fcntl.h>
      #include <stdio.h>
      #include <string.h>
      
      #define PORT_NUM 33333
      
      #define die(msg) do { perror(msg); exit(EXIT_FAILURE); } while (0)
      
      /**********************************************************************/
      
      /* The following is what we need until glibc gets a wrapper for
        accept4() */
      
      /* Flags for socket(), socketpair(), accept4() */
      #ifndef SOCK_CLOEXEC
      #define SOCK_CLOEXEC    O_CLOEXEC
      #endif
      #ifndef SOCK_NONBLOCK
      #define SOCK_NONBLOCK   O_NONBLOCK
      #endif
      
      #ifdef __x86_64__
      #define SYS_accept4 288
      #elif __i386__
      #define USE_SOCKETCALL 1
      #define SYS_ACCEPT4 18
      #else
      #error "Sorry -- don't know the syscall # on this architecture"
      #endif
      
      static int
      accept4(int fd, struct sockaddr *sockaddr, socklen_t *addrlen, int flags)
      {
         printf("Calling accept4(): flags = %x", flags);
         if (flags != 0) {
             printf(" (");
             if (flags & SOCK_CLOEXEC)
                 printf("SOCK_CLOEXEC");
             if ((flags & SOCK_CLOEXEC) && (flags & SOCK_NONBLOCK))
                 printf(" ");
             if (flags & SOCK_NONBLOCK)
                 printf("SOCK_NONBLOCK");
             printf(")");
         }
         printf("\n");
      
      #if USE_SOCKETCALL
         long args[6];
      
         args[0] = fd;
         args[1] = (long) sockaddr;
         args[2] = (long) addrlen;
         args[3] = flags;
      
         return syscall(SYS_socketcall, SYS_ACCEPT4, args);
      #else
         return syscall(SYS_accept4, fd, sockaddr, addrlen, flags);
      #endif
      }
      
      /**********************************************************************/
      
      static int
      do_test(int lfd, struct sockaddr_in *conn_addr,
             int closeonexec_flag, int nonblock_flag)
      {
         int connfd, acceptfd;
         int fdf, flf, fdf_pass, flf_pass;
         struct sockaddr_in claddr;
         socklen_t addrlen;
      
         printf("=======================================\n");
      
         connfd = socket(AF_INET, SOCK_STREAM, 0);
         if (connfd == -1)
             die("socket");
         if (connect(connfd, (struct sockaddr *) conn_addr,
                     sizeof(struct sockaddr_in)) == -1)
             die("connect");
      
         addrlen = sizeof(struct sockaddr_in);
         acceptfd = accept4(lfd, (struct sockaddr *) &claddr, &addrlen,
                            closeonexec_flag | nonblock_flag);
         if (acceptfd == -1) {
             perror("accept4()");
             close(connfd);
             return 0;
         }
      
         fdf = fcntl(acceptfd, F_GETFD);
         if (fdf == -1)
             die("fcntl:F_GETFD");
         fdf_pass = ((fdf & FD_CLOEXEC) != 0) ==
                    ((closeonexec_flag & SOCK_CLOEXEC) != 0);
         printf("Close-on-exec flag is %sset (%s); ",
                 (fdf & FD_CLOEXEC) ? "" : "not ",
                 fdf_pass ? "OK" : "failed");
      
         flf = fcntl(acceptfd, F_GETFL);
         if (flf == -1)
             die("fcntl:F_GETFD");
         flf_pass = ((flf & O_NONBLOCK) != 0) ==
                    ((nonblock_flag & SOCK_NONBLOCK) !=0);
         printf("nonblock flag is %sset (%s)\n",
                 (flf & O_NONBLOCK) ? "" : "not ",
                 flf_pass ? "OK" : "failed");
      
         close(acceptfd);
         close(connfd);
      
         printf("Test result: %s\n", (fdf_pass && flf_pass) ? "PASS" : "FAIL");
         return fdf_pass && flf_pass;
      }
      
      static int
      create_listening_socket(int port_num)
      {
         struct sockaddr_in svaddr;
         int lfd;
         int optval;
      
         memset(&svaddr, 0, sizeof(struct sockaddr_in));
         svaddr.sin_family = AF_INET;
         svaddr.sin_addr.s_addr = htonl(INADDR_ANY);
         svaddr.sin_port = htons(port_num);
      
         lfd = socket(AF_INET, SOCK_STREAM, 0);
         if (lfd == -1)
             die("socket");
      
         optval = 1;
         if (setsockopt(lfd, SOL_SOCKET, SO_REUSEADDR, &optval,
                        sizeof(optval)) == -1)
             die("setsockopt");
      
         if (bind(lfd, (struct sockaddr *) &svaddr,
                  sizeof(struct sockaddr_in)) == -1)
             die("bind");
      
         if (listen(lfd, 5) == -1)
             die("listen");
      
         return lfd;
      }
      
      int
      main(int argc, char *argv[])
      {
         struct sockaddr_in conn_addr;
         int lfd;
         int port_num;
         int passed;
      
         passed = 1;
      
         port_num = (argc > 1) ? atoi(argv[1]) : PORT_NUM;
      
         memset(&conn_addr, 0, sizeof(struct sockaddr_in));
         conn_addr.sin_family = AF_INET;
         conn_addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
         conn_addr.sin_port = htons(port_num);
      
         lfd = create_listening_socket(port_num);
      
         if (!do_test(lfd, &conn_addr, 0, 0))
             passed = 0;
         if (!do_test(lfd, &conn_addr, SOCK_CLOEXEC, 0))
             passed = 0;
         if (!do_test(lfd, &conn_addr, 0, SOCK_NONBLOCK))
             passed = 0;
         if (!do_test(lfd, &conn_addr, SOCK_CLOEXEC, SOCK_NONBLOCK))
             passed = 0;
      
         close(lfd);
      
         exit(passed ? EXIT_SUCCESS : EXIT_FAILURE);
      }
      
      [mtk.manpages@gmail.com: rewrote changelog, updated test program]
      Signed-off-by: Ulrich Drepper <drepper@redhat.com>
      Tested-by: Michael Kerrisk <mtk.manpages@gmail.com>
      Acked-by: Michael Kerrisk <mtk.manpages@gmail.com>
      Cc: <linux-api@vger.kernel.org>
      Cc: <linux-arch@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      de11defe
  4. 19 November 2008, 5 commits
    • ftrace: fix dyn ftrace filter selection · 32464779
      Committed by Steven Rostedt
      Impact: clean up and fix for dyn ftrace filter selection
      
      The previous logic of the dynamic ftrace selection of enabling
      or disabling functions was complex and incorrect. This patch simplifies
      the code and corrects the usage. This simplification also makes the
      code more robust.
      
      Here is the correct logic:
      
        Given a function that can be traced by dynamic ftrace:
      
        If the function is not to be traced, disable it if it was enabled.
        (this is if the function is in the set_ftrace_notrace file)
      
        (filter is on if there exists any functions in set_ftrace_filter file)
      
        If the filter is on, and we are enabling functions:
          If the function is in set_ftrace_filter, enable it if it is not
            already enabled.
          If the function is not in set_ftrace_filter, disable it if it is not
            already disabled.
      
        Otherwise, if the filter is off and we are enabling function tracing:
          Enable the function if it is not already enabled.
      
        Otherwise, if we are disabling function tracing:
          Disable the function if it is not already disabled.
      
      This code now sets or clears the ENABLED flag in the record, and at the
      end it will enable the function if the flag is set, or disable the function
      if the flag is cleared.
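
      The rules above can be restated compactly as a decision about the
      desired state of a record; the sketch below is illustrative (the flag
      names follow the dynamic-ftrace records of that era, but the helper
      itself is not part of the patch):

        static int ftrace_record_should_trace(struct dyn_ftrace *rec,
                                              int enable, int filter_on)
        {
                if (rec->flags & FTRACE_FL_NOTRACE)     /* set_ftrace_notrace wins */
                        return 0;
                if (!enable)                            /* disabling function tracing */
                        return 0;
                if (filter_on)                          /* honour set_ftrace_filter */
                        return (rec->flags & FTRACE_FL_FILTER) != 0;
                return 1;                               /* no filter: trace it */
        }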
      
      The parameters for the function that implements the above logic are also
      simplified. Instead of passing in confusing "new" and "old" values that
      might be swapped if the "enabled" flag is not set (the old logic even
      always had one of them NULL and had to fill it in), the new logic simply
      passes in one parameter called "nop". A "call" is calculated in the code,
      and at the end of the logic, when we know whether we need to disable or
      enable the function, we can use the "nop" and "call" properly.
      
      This code is more robust than the previous version.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      32464779
    • ftrace: make filtered functions effective on setting · 82043278
      Committed by Steven Rostedt
      Impact: fix filter selection to apply when set
      
      It can be confusing when set_filter_functions is set (or cleared)
      and the functions being recorded by the dynamic tracer do not
      match.
      
      This patch causes the code to be updated if the function tracer is
      enabled and the filter is changed.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      82043278
    • ftrace: fix set_ftrace_filter · f10ed36e
      Committed by Steven Rostedt
      Impact: fix of output of set_ftrace_filter
      
      The commit "ftrace: do not show freed records in
                   available_filter_functions"
      
      Removed a bit too much from the set_ftrace_filter code, where we now see
      all functions in the set_ftrace_filter file even when we set a filter.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      f10ed36e
    • trace: introduce missing mutex_unlock() · 641d2f63
      Committed by Vegard Nossum
      Impact: fix tracing buffer mutex leak in case of allocation failure
      
      This error was spotted by this semantic patch:
      
        http://www.emn.fr/x-info/coccinelle/mut.html
      
      It looks correct as far as I can tell. Please review.
      Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com>
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      641d2f63
    • suspend: use WARN not WARN_ON to print the message · a6a0c4ca
      Committed by Arjan van de Ven
      By using WARN(), kerneloops.org can collect which component is causing
      the delay and build statistics about it. suspend_test_finish() is
      currently the number 2 item, but unless we can collect who's causing
      it, we're not going to be able to fix the hot-topic ones.
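
      The practical difference is just that WARN() can carry a formatted
      message along with the backtrace; a hedged example (the threshold and
      label below are illustrative only):

        WARN(msec > TEST_SUSPEND_SECONDS * 1000,
             "Component: %s, time: %u\n", label, msec);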
      Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a6a0c4ca
  5. 18 November 2008, 5 commits
    • tracing: kernel/trace/trace.c: introduce missing kfree() · 0bb943c7
      Committed by Julia Lawall
      Impact: fix memory leak
      
      Error handling code following a kzalloc should free the allocated data.
      
      The semantic match that finds the problem is as follows:
      (http://www.emn.fr/x-info/coccinelle/)
      
      // <smpl>
      @r exists@
      local idexpression x;
      statement S;
      expression E;
      identifier f,l;
      position p1,p2;
      expression *ptr != NULL;
      @@
      
      (
      if ((x@p1 = \(kmalloc\|kzalloc\|kcalloc\)(...)) == NULL) S
      |
      x@p1 = \(kmalloc\|kzalloc\|kcalloc\)(...);
      ...
      if (x == NULL) S
      )
      <... when != x
           when != if (...) { <+...x...+> }
      x->f = E
      ...>
      (
       return \(0\|<+...x...+>\|ptr\);
      |
       return@p2 ...;
      )
      
      @script:python@
      p1 << r.p1;
      p2 << r.p2;
      @@
      
      print "* file: %s kmalloc %s return %s" % (p1[0].file,p1[0].line,p2[0].line)
      // </smpl>
      Signed-off-by: Julia Lawall <julia@diku.dk>
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      0bb943c7
    • relay: fix cpu offline problem · 98ba4031
      Committed by Lai Jiangshan
      relay_open() closes the allocated buffers when it fails, but if a CPU
      has been offlined, some buffers will not be closed. This patch fixes
      that.

      It also cleans up relay_reset().
      Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      98ba4031
    • kernel/profile.c: fix section mismatch warning · e270219f
      Committed by Rakib Mullick
      Impact: fix section mismatch warning in kernel/profile.c
      
      Here, the profile_nop function is called from the non-init function
      create_hash_tables(void), which generates a section mismatch warning.
      Previously, create_hash_tables(void) was an init function, so removing
      __init from create_hash_tables(void) requires profile_nop to be
      non-init as well.
      
      This patch makes profile_nop function inline and fixes the
      following warning:
      
       WARNING: vmlinux.o(.text+0x6ebb6): Section mismatch in reference from
       the function create_hash_tables() to the function
       .init.text:profile_nop()
       The function create_hash_tables() references
       the function __init profile_nop().
       This is often because create_hash_tables lacks a __init
       annotation or the annotation of profile_nop is wrong.
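
      The fix essentially amounts to the following (a sketch of what the
      commit describes, not necessarily the exact hunk):

        /* no longer __init, and inline, so create_hash_tables() may
         * reference it without a section mismatch */
        static inline void profile_nop(void *unused)
        {
        }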
      Signed-off-by: Rakib Mullick <rakib.mullick@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      e270219f
    • cpuset: fix regression when failed to generate sched domains · 700018e0
      Committed by Li Zefan
      Impact: properly rebuild sched-domains on kmalloc() failure
      
      When cpuset failed to generate sched domains due to kmalloc()
      failure, the scheduler should fallback to the single partition
      'fallback_doms' and rebuild sched domains, but now it only
      destroys but not rebuilds sched domains.
      
      The regression was introduced by:
      
      | commit dfb512ec
      | Author: Max Krasnyansky <maxk@qualcomm.com>
      | Date:   Fri Aug 29 13:11:41 2008 -0700
      |
      |    sched: arch_reinit_sched_domains() must destroy domains to force rebuild
      
      After the above commit, partition_sched_domains(0, NULL, NULL) will
      only destroy sched domains and partition_sched_domains(1, NULL, NULL)
      will create the default sched domain.
      Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Max Krasnyansky <maxk@qualcomm.com>
      Cc: <stable@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      700018e0
    • Remove -mno-spe flags as they dont belong · 65ecc14a
      Committed by Kumar Gala
      For some unknown reason, Steven Rostedt added in disabling of the SPE
      instruction generation for e500-based PPC cores in commit
      6ec56232.
      
      We are removing it because:
      
      1. It generates e500 kernels that don't work
      2. It's not the correct set of flags to do this
      3. We handle this in the arch/powerpc/Makefile already
      4. Even after talking to Steven, it's unknown why he did this
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      Tested-and-Acked-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      65ecc14a
  6. 17 November 2008, 2 commits
  7. 16 November 2008, 4 commits
    • function tracing: fix wrong pos computing when read buffer has been fulfilled · 5821e1b7
      Committed by walimis
      Impact: make output of available_filter_functions complete
      
      phenomenon:
      
      The first value of dyn_ftrace_total_info is not equal to
      `cat available_filter_functions | wc -l`, but they should be equal.
      
      root cause:
      
      When printing functions with seq_printf in t_show, if the read buffer
      is overflowed by the current function record, then that function
      won't be printed to user space through the read buffer; it is simply
      dropped, so we never see it.

      So every time the last function to fill the read buffer overflows it,
      that function is dropped.

      This also applies to set_ftrace_filter if set_ftrace_filter has
      more bytes than the read buffer.
      
      fix:
      
      By checking the return value of seq_printf: if it is less than 0, we
      know the function was not printed. We then decrease the position to
      force the function to be printed next time, into the next read buffer.

      Another little fix is to show the correct count of allocated pages.
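
      A sketch of the check described above (the iterator field names are
      assumptions):

        static int t_show(struct seq_file *m, void *v)
        {
                struct ftrace_iterator *iter = m->private;
                struct dyn_ftrace *rec = v;
                char str[KSYM_SYMBOL_LEN];
                int ret;

                kallsyms_lookup(rec->ip, NULL, NULL, NULL, str);

                ret = seq_printf(m, "%s\n", str);
                if (ret < 0)
                        iter->pos--;    /* didn't fit; emit it again on the next read */

                return 0;
        }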
      Signed-off-by: walimis <walimisdev@gmail.com>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      5821e1b7
    • sched: fix kernel warning on /proc/sched_debug access · 29d7b90c
      Committed by Ingo Molnar
      Luis Henriques reported that with CONFIG_PREEMPT=y + CONFIG_PREEMPT_DEBUG=y +
      CONFIG_SCHED_DEBUG=y + CONFIG_LATENCYTOP=y enabled, the following warning
      triggers when using latencytop:
      
      > [  775.663239] BUG: using smp_processor_id() in preemptible [00000000] code: latencytop/6585
      > [  775.663303] caller is native_sched_clock+0x3a/0x80
      > [  775.663314] Pid: 6585, comm: latencytop Tainted: G        W 2.6.28-rc4-00355-g9c7c3546 #1
      > [  775.663322] Call Trace:
      > [  775.663343]  [<ffffffff803a94e4>] debug_smp_processor_id+0xe4/0xf0
      > [  775.663356]  [<ffffffff80213f7a>] native_sched_clock+0x3a/0x80
      > [  775.663368]  [<ffffffff80213e19>] sched_clock+0x9/0x10
      > [  775.663381]  [<ffffffff8024550d>] proc_sched_show_task+0x8bd/0x10e0
      > [  775.663395]  [<ffffffff8034466e>] sched_show+0x3e/0x80
      > [  775.663408]  [<ffffffff8031039b>] seq_read+0xdb/0x350
      > [  775.663421]  [<ffffffff80368776>] ? security_file_permission+0x16/0x20
      > [  775.663435]  [<ffffffff802f4198>] vfs_read+0xc8/0x170
      > [  775.663447]  [<ffffffff802f4335>] sys_read+0x55/0x90
      > [  775.663460]  [<ffffffff8020c67a>] system_call_fastpath+0x16/0x1b
      > ...
      
      This breakage was caused by me via:
      
        7cbaef9c: sched: optimize sched_clock() a bit
      
      Change the calls to cpu_clock().
      Reported-by: Luis Henriques <henrix@sapo.pt>
      29d7b90c
    • Fix inotify watch removal/umount races · 8f7b0ba1
      Committed by Al Viro
      Inotify watch removals suck violently.
      
      To kick the watch out we need (in this order) inode->inotify_mutex and
      ih->mutex.  That's fine if we have a hold on inode; however, for all
      other cases we need to make damn sure we don't race with umount.  We can
      *NOT* just grab a reference to a watch - inotify_unmount_inodes() will
      happily sail past it and we'll end with reference to inode potentially
      outliving its superblock.
      
      Ideally we just want to grab an active reference to superblock if we
      can; that will make sure we won't go into inotify_umount_inodes() until
      we are done.  Cleanup is just deactivate_super().
      
      However, that leaves a messy case - what if we *are* racing with
      umount() and active references to superblock can't be acquired anymore?
      We can bump ->s_count, grab ->s_umount, which will almost certainly wait
      until the superblock is shut down and the watch in question is pining
      for fjords.  That's fine, but there is a problem - we might have hit the
      window between ->s_active getting to 0 / ->s_count - below S_BIAS (i.e.
      the moment when superblock is past the point of no return and is heading
      for shutdown) and the moment when deactivate_super() acquires
      ->s_umount.
      
      We could just do drop_super(), yield(), and retry, but that's rather
      antisocial and this stuff is luser-triggerable.  OTOH, having grabbed
      ->s_umount and having found that we'd got there first (i.e.  that
      ->s_root is non-NULL) we know that we won't race with
      inotify_umount_inodes().
      
      So we could grab a reference to watch and do the rest as above, just
      with drop_super() instead of deactivate_super(), right? Wrong.  We had
      to drop ih->mutex before we could grab ->s_umount.  So the watch
      could've been gone already.
      
      That still can be dealt with - we need to save watch->wd, do idr_find()
      and compare its result with our pointer.  If they match, we either have
      the damn thing still alive or we'd lost not one but two races at once,
      the watch had been killed and a new one got created with the same ->wd
      at the same address.  That couldn't have happened in inotify_destroy(),
      but inotify_rm_wd() could run into that.  Still, "new one got created"
      is not a problem - we have every right to kill it or leave it alone,
      whatever's more convenient.
      
      So we can use idr_find(...) == watch && watch->inode->i_sb == sb as
      "grab it and kill it" check.  If it's been our original watch, we are
      fine, if it's a newcomer - nevermind, just pretend that we'd won the
      race and kill the fscker anyway; we are safe since we know that its
      superblock won't be going away.
      
      And yes, this is far beyond mere "not very pretty"; so's the entire
      concept of inotify to start with.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      Acked-by: Greg KH <greg@kroah.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8f7b0ba1
    • Move "exit_robust_list" into mm_release() · 8141c7f3
      Committed by Linus Torvalds
      We don't want to get rid of the futexes just at exit() time, we want to
      drop them when doing an execve() too, since that gets rid of the
      previous VM image too.
      
      Doing it at mm_release() time means that we automatically always do it
      when we disassociate a VM map from the task.
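
      The shape of the change (a sketch, not the verbatim diff):

        void mm_release(struct task_struct *tsk, struct mm_struct *mm)
        {
                /* Get rid of any futexes when releasing the mm */
        #ifdef CONFIG_FUTEX
                if (unlikely(tsk->robust_list))
                        exit_robust_list(tsk);
        #ifdef CONFIG_COMPAT
                if (unlikely(tsk->compat_robust_list))
                        compat_exit_robust_list(tsk);
        #endif
        #endif
                /* ... the existing mm_release() work continues here ... */
        }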
      
      Reported-by: pageexec@freemail.hu
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Brad Spengler <spender@grsecurity.net>
      Cc: Alex Efros <powerman@powerman.name>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8141c7f3
  8. 13 November 2008, 6 commits
  9. 12 November 2008, 4 commits
    • sched: fix stale value in average load per task · a2d47777
      Committed by Balbir Singh
      Impact: fix load balancer load average calculation accuracy
      
      cpu_avg_load_per_task() returns a stale value when nr_running is 0:
      an older value calculated when nr_running was non-zero.
      
      This patch returns and sets rq->avg_load_per_task to zero when nr_running
      is 0.
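
      A sketch of what the fix amounts to (field names follow the scheduler
      of that era and are assumptions here):

        static unsigned long cpu_avg_load_per_task(int cpu)
        {
                struct rq *rq = cpu_rq(cpu);

                if (rq->nr_running)
                        rq->avg_load_per_task = rq->load.weight / rq->nr_running;
                else
                        rq->avg_load_per_task = 0;      /* don't report a stale value */

                return rq->avg_load_per_task;
        }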
      
      Compile and boot tested on an x86_64 box.
      Signed-off-by: Balbir Singh <balbir@linux.vnet.ibm.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      a2d47777
    • ring-buffer: no preempt for sched_clock() · 47e74f2b
      Committed by Steven Rostedt
      Impact: disable preemption when calling sched_clock()
      
      ring_buffer_time_stamp() still uses sched_clock() as its counter,
      but it is a bug to call it with preemption enabled. This requirement
      should not be pushed onto the ring_buffer_time_stamp() callers, so
      ring_buffer_time_stamp() needs to disable preemption when calling
      sched_clock().
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      47e74f2b
    • hrtimer: clean up unused callback modes · 621a0d52
      Committed by Peter Zijlstra
      Impact: cleanup
      
      git grep HRTIMER_CB_IRQSAFE revealed half the callback modes are actually
      unused.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      621a0d52
    • ring-buffer: buffer record on/off switch · a3583244
      Committed by Steven Rostedt
      Impact: enable/disable ring buffer recording API added
      
      Several kernel developers have requested that there be a way to stop
      recording into the ring buffers with a simple switch that can also
      be enabled from userspace. This patch adds a new kernel API to the
      ring buffers called:
      
       tracing_on()
       tracing_off()
      
      When tracing_off() is called, none of the ring buffers will be able to
      record into their buffers.
      
      tracing_on() will enable the ring buffers again.
      
      These two act like an on/off switch. That is, there is no counting of the
      number of times tracing_off or tracing_on has been called.
      
      A new file is added to the debugfs/tracing directory called
      
        tracing_on
      
      This allows for userspace applications to also flip the switch.
      
        echo 0 > debugfs/tracing/tracing_on
      
      disables the tracing.
      
        echo 1 > /debugfs/tracing/tracing_on
      
      enables it.
      
      Note, this does not disable or enable any tracers. It only sets or clears
      a flag that needs to be set in order for the ring buffers to write to
      their buffers. It is a global flag, and affects all ring buffers.
      
      The buffers start out with tracing_on enabled.
      
      There are now three flags that control recording into the buffers:
      
       tracing_on: which affects all ring buffer tracers.
      
       buffer->record_disabled: which affects an allocated buffer, which may be set
           if an anomaly is detected, and tracing is disabled.
      
       cpu_buffer->record_disabled: which is set by tracing_stop() or if an
           anomaly is detected. tracing_start can not reenable this if
           an anomaly occurred.
      
      The userspace debugfs/tracing/tracing_enabled is implemented with
      tracing_stop() but the user space code can not enable it if the kernel
      called tracing_stop().
      
      Userspace can enable tracing_on again even if the kernel disabled it.
      It is just a switch used to stop tracing if a condition was hit.
      tracing_on is not for protecting critical areas in the kernel nor is
      it for stopping tracing if an anomaly occurred. This is because userspace
      can reenable it at any time.
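
      A minimal usage sketch (the condition is illustrative): freeze the
      ring buffers the moment something suspicious is seen so the trace
      leading up to it is preserved, then flip the switch back on later.

        if (unlikely(suspicious_condition))
                tracing_off();          /* stop all ring-buffer writes */

        /* later, from userspace:
         *   echo 1 > /debugfs/tracing/tracing_on
         */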
      
      Side effect: With this patch, I discovered a dead variable in ftrace.c
        called tracing_on. This patch removes it.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      a3583244
  10. 11 November 2008, 7 commits
    • sched: release buddies on yield · 2002c695
      Committed by Peter Zijlstra
      Clear buddies on yield, so that the buddy rules don't schedule them
      despite them being placed right-most.
      
      This fixed a performance regression with yield-happy binary JVMs.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Tested-by: Lin Ming <ming.m.lin@intel.com>
      2002c695
    • timers: handle HRTIMER_CB_IRQSAFE_UNLOCKED correctly from softirq context · 5d5254f0
      Committed by Gautham R Shenoy
      Impact: fix incorrect locking triggered during hotplug-intense stress-tests
      
      While migrating the CB_IRQSAFE_UNLOCKED timers during a cpu-offline,
      we queue them on the cb_pending list, so that they won't go
      stale.
      
      Thus, when the callbacks of the timers run from the softirq context,
      they could run into potential deadlocks, since these callbacks
      assume that they're running with irq's disabled, thereby annoying
      lockdep!
      
      Fix this by emulating hardirq context while running these callbacks from
      the hrtimer softirq.
      
      =================================
      [ INFO: inconsistent lock state ]
      2.6.27 #2
      --------------------------------
      inconsistent {in-hardirq-W} -> {hardirq-on-W} usage.
      ksoftirqd/0/4 [HC0[0]:SC1[1]:HE1:SE0] takes:
       (&rq->lock){++..}, at: [<c011db84>] sched_rt_period_timer+0x9e/0x1fc
      {in-hardirq-W} state was registered at:
        [<c014103c>] __lock_acquire+0x549/0x121e
        [<c0107890>] native_sched_clock+0x88/0x99
        [<c013aa12>] clocksource_get_next+0x39/0x3f
        [<c0139abc>] update_wall_time+0x616/0x7df
        [<c0141d6b>] lock_acquire+0x5a/0x74
        [<c0121724>] scheduler_tick+0x3a/0x18d
        [<c047ed45>] _spin_lock+0x1c/0x45
        [<c0121724>] scheduler_tick+0x3a/0x18d
        [<c0121724>] scheduler_tick+0x3a/0x18d
        [<c012c436>] update_process_times+0x3a/0x44
        [<c013c044>] tick_periodic+0x63/0x6d
        [<c013c062>] tick_handle_periodic+0x14/0x5e
        [<c010568c>] timer_interrupt+0x44/0x4a
        [<c0150c9f>] handle_IRQ_event+0x13/0x3d
        [<c0151c14>] handle_level_irq+0x79/0xbd
        [<c0105634>] do_IRQ+0x69/0x7d
        [<c01041e4>] common_interrupt+0x28/0x30
        [<c047007b>] aac_probe_one+0x1a3/0x3f3
        [<c047ec2d>] _spin_unlock_irqrestore+0x36/0x39
        [<c01512b4>] setup_irq+0x1be/0x1f9
        [<c065d70b>] start_kernel+0x259/0x2c5
        [<ffffffff>] 0xffffffff
      irq event stamp: 50102
      hardirqs last  enabled at (50102): [<c047ebf4>] _spin_unlock_irq+0x20/0x23
      hardirqs last disabled at (50101): [<c047edc2>] _spin_lock_irq+0xa/0x4b
      softirqs last  enabled at (50088): [<c0128ba6>] do_softirq+0x37/0x4d
      softirqs last disabled at (50099): [<c0128ba6>] do_softirq+0x37/0x4d
      
      other info that might help us debug this:
      no locks held by ksoftirqd/0/4.
      
      stack backtrace:
      Pid: 4, comm: ksoftirqd/0 Not tainted 2.6.27 #2
       [<c013f6cb>] print_usage_bug+0x13e/0x147
       [<c013fef5>] mark_lock+0x493/0x797
       [<c01410b1>] __lock_acquire+0x5be/0x121e
       [<c0141d6b>] lock_acquire+0x5a/0x74
       [<c011db84>] sched_rt_period_timer+0x9e/0x1fc
       [<c047ed45>] _spin_lock+0x1c/0x45
       [<c011db84>] sched_rt_period_timer+0x9e/0x1fc
       [<c011db84>] sched_rt_period_timer+0x9e/0x1fc
       [<c01210fd>] finish_task_switch+0x41/0xbd
       [<c0107890>] native_sched_clock+0x88/0x99
       [<c011dae6>] sched_rt_period_timer+0x0/0x1fc
       [<c0136dda>] run_hrtimer_pending+0x54/0xe5
       [<c011dae6>] sched_rt_period_timer+0x0/0x1fc
       [<c0128afb>] __do_softirq+0x7b/0xef
       [<c0128ba6>] do_softirq+0x37/0x4d
       [<c0128c12>] ksoftirqd+0x56/0xc5
       [<c0128bbc>] ksoftirqd+0x0/0xc5
       [<c0134649>] kthread+0x38/0x5d
       [<c0134611>] kthread+0x0/0x5d
       [<c0104477>] kernel_thread_helper+0x7/0x10
       =======================
      Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      5d5254f0
    • fix for account_group_exec_runtime(), make sure ->signal can't be freed under rq->lock · ad474cac
      Committed by Oleg Nesterov
      Impact: fix hang/crash on ia64 under high load
      
      This is ugly, but the simplest patch by far.
      
      Unlike other similar routines, account_group_exec_runtime() could be
      called "implicitly" from within scheduler after exit_notify(). This
      means we can race with the parent doing release_task(), we can't just
      check ->signal != NULL.
      
      Change __exit_signal() to do spin_unlock_wait(&task_rq(tsk)->lock)
      before __cleanup_signal() to make sure ->signal can't be freed under
      task_rq(tsk)->lock. Note that task_rq_unlock_wait() doesn't care
      about the case when tsk changes cpu/rq under us, this should be OK.
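
      A sketch of the resulting ordering in __exit_signal() (surrounding
      code elided):

        if (sig) {
                /*
                 * Make sure ->signal can't go away while the scheduler
                 * may still dereference it under task_rq(tsk)->lock,
                 * see account_group_exec_runtime().
                 */
                task_rq_unlock_wait(tsk);
                __cleanup_signal(sig);
        }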
      
      Thanks to Ingo who nacked my previous buggy patch.
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Reported-by: Doug Chapman <doug.chapman@hp.com>
      ad474cac
    • ring-buffer: prevent infinite looping on time stamping · 4143c5cb
      Committed by Steven Rostedt
      Impact: removal of unnecessary looping
      
      The lockless part of the ring buffer allows for reentry into the code
      from interrupts. A timestamp is taken, a test is performed, and if it
      detects that an interrupt occurred that did tracing, it tries again.
      
      The problem arises if the timestamp code itself causes a trace.
      The detection will detect this and loop again. The difference between
      this and an interrupt doing tracing, is that this will fail every time,
      and cause an infinite loop.
      
      Currently, we test if the loop happens 1000 times, and if so, it will
      produce a warning and disable the ring buffer.
      
      The problem with this approach is that it makes it difficult to perform
      some types of tracing (tracing the timestamp code itself).
      
      Each trace entry has a delta timestamp from the previous entry.
      If a trace entry is reserved but an interrupt occurs and traces before
      the previous entry is committed, the delta timestamp for that entry will
      be zero. This actually makes sense in terms of tracing, because the
      interrupt entry happened before the preempted entry was committed, so
      one may consider the two happening at the same time. The order is
      still preserved in the buffer.
      
      With this idea, instead of trying to get a new timestamp if an interrupt
      made it in between the timestamp and the test, the entry could simply
      make the delta zero and continue. This will prevent interrupts or
      tracers in the timer code from causing the above loop.
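
      Schematically (nested_commit_detected() is a hypothetical helper that
      stands in for the real reentry test):

        ts = ring_buffer_time_stamp(cpu);

        if (nested_commit_detected(cpu_buffer, ts))
                delta = 0;      /* count it as simultaneous with the nested entry */
        else
                delta = ts - cpu_buffer->write_stamp;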
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      4143c5cb
    • ftrace: disable tracing on resize · bf5e6519
      Committed by Steven Rostedt
      Impact: fix for bug on resize
      
      This patch addresses the bug found here:
      
       http://bugzilla.kernel.org/show_bug.cgi?id=11996
      
      When ftrace converted to the new unified trace buffer, the resizing of
      the buffer was not protected as much as it was originally. If tracing
      is performed while the resize occurs, then the buffer can be corrupted.
      
      This patch disables all ftrace buffer modifications before a resize
      takes place.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      bf5e6519
    • nohz: disable tick_nohz_kick_tick() for now · ae99286b
      Committed by Thomas Gleixner
      Impact: nohz powersavings and wakeup regression
      
      commit fb02fbc1 (NOHZ: restart tick
      device from irq_enter()) causes a serious wakeup regression.
      
      While the patch is correct, it does not take into account that spurious
      wakeups happen on x86. A fix for this issue is available, but we just
      revert to the .27 behaviour and let long-running softirqs screw
      themselves.
      
      Disable it for now.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      ae99286b
    • irq: call __irq_enter() before calling the tick_idle_check · ee5f80a9
      Committed by Thomas Gleixner
      Impact: avoid spurious ksoftirqd wakeups
      
      The tick idle check, which is called from irq_enter(), was run before
      the call to __irq_enter(), so the in_interrupt() bits in preempt_count
      were not yet set. That way the raising of a softirq woke up softirqd
      for nothing, as the softirq was handled on return from interrupt anyway.
      
      Call __irq_enter() before calling into the tick idle check code.
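
      A sketch of the reordering described above (tick_check_idle() stands
      in for the tick idle check of that era and is an assumption here):

        void irq_enter(void)
        {
                int cpu = smp_processor_id();

                if (idle_cpu(cpu) && !in_interrupt()) {
                        __irq_enter();          /* mark hardirq context first...  */
                        tick_check_idle(cpu);   /* ...so a raised softirq is not
                                                   handed to softirqd needlessly  */
                } else {
                        __irq_enter();
                }
        }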
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      ee5f80a9