1. 20 Aug 2010, 1 commit
  2. 18 Aug 2010, 1 commit
    • Make do_execve() take a const filename pointer · d7627467
      Authored by David Howells
      Make do_execve() take a const filename pointer so that kernel_execve() compiles
      correctly on ARM:
      
      arch/arm/kernel/sys_arm.c:88: warning: passing argument 1 of 'do_execve' discards qualifiers from pointer target type
      
      This also requires the argv and envp arguments to be consted twice, once for
      the pointer array and once for the strings the array points to.  This is
      because do_execve() passes a pointer to the filename (now const) to
      copy_strings_kernel().  A simpler alternative would be to cast the filename
      pointer in do_execve() when it's passed to copy_strings_kernel().
      
      do_execve() may not change any of the strings it is passed in the argv
      or envp lists, as some of them are in .rodata, so marking these strings as
      const should be fine.
      
      Further kernel_execve() and sys_execve() need to be changed to match.
      
      This has been test built on x86_64, frv, arm and mips.
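      For reference, the resulting prototype looks roughly like this (a sketch
      matching the description above; the exact parameter layout may differ
      between trees):
      
      /* const applies both to the argv/envp pointer arrays and to the
       * strings they point to, matching what copy_strings_kernel() expects */
      int do_execve(const char *filename,
                    const char __user *const __user *argv,
                    const char __user *const __user *envp,
                    struct pt_regs *regs);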
      Signed-off-by: David Howells <dhowells@redhat.com>
      Tested-by: Ralf Baechle <ralf@linux-mips.org>
      Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d7627467
  3. 11 Aug 2010, 2 commits
  4. 10 Aug 2010, 2 commits
  5. 01 Aug 2010, 1 commit
    • workqueue: mark init_workqueues() as early_initcall() · 6ee0578b
      Authored by Suresh Siddha
      Mark init_workqueues() as early_initcall() so that it is initialized
      before SMP bringup. init_workqueues() registers a hotcpu notifier and thus
      should cope with processors that are brought online after the workqueues
      are initialized.
      
      The x86 SMP bringup code uses workqueues and uses a workaround for the
      cold boot process (as the workqueues are initialized after smp_init()).
      Marking init_workqueues() as early_initcall() paves the way for
      cleaning up this code.
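      A minimal sketch of the mechanism (illustrative; the body of
      init_workqueues() is elided):
      
      /* kernel/workqueue.c: run during the early initcalls, i.e. before
       * smp_init(), instead of being called explicitly from init/main.c */
      static int __init init_workqueues(void)
      {
              /* ... register the hotcpu notifier, create default workqueues ... */
              return 0;
      }
      early_initcall(init_workqueues);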
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      6ee0578b
  6. 28 Jul 2010, 3 commits
  7. 23 Jul 2010, 1 commit
  8. 09 Jul 2010, 1 commit
  9. 30 Jun 2010, 1 commit
  10. 29 Jun 2010, 1 commit
  11. 28 Jun 2010, 1 commit
    • percpu: allow limited allocation before slab is online · 099a19d9
      Authored by Tejun Heo
      This patch updates the percpu allocator so that it can serve a limited
      amount of allocation before slab comes online.  This is primarily to
      allow slab to depend on a working percpu allocator.
      
      Two parameters, PERCPU_DYNAMIC_EARLY_SIZE and SLOTS, determine how
      much memory space and how many allocation map slots are reserved.  If this
      reserved area is exhausted, WARN_ON_ONCE() will trigger and allocation
      will fail until slab comes online.
      
      The following changes are made to implement early alloc.
      
      * pcpu_mem_alloc() now checks slab_is_available()
      
      * Chunks are allocated using pcpu_mem_alloc()
      
      * Init paths make sure ai->dyn_size is at least as large as
        PERCPU_DYNAMIC_EARLY_SIZE.
      
      * Initial alloc maps are allocated in __initdata and copied to
        kmalloc'd areas once slab is online.
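      A rough sketch of the first point, assuming the helper keeps a
      kzalloc/vmalloc split for small and large requests (not the exact patch):
      
      static void *pcpu_mem_alloc(size_t size)
      {
              /* the __initdata reserve should carry us until slab is up;
               * if it was exhausted earlier, warn once and fail the allocation
               * instead of crashing inside the slab allocator */
              if (WARN_ON_ONCE(!slab_is_available()))
                      return NULL;
      
              if (size <= PAGE_SIZE) {
                      return kzalloc(size, GFP_KERNEL);
              } else {
                      void *ptr = vmalloc(size);
                      if (ptr)
                              memset(ptr, 0, size);
                      return ptr;
              }
      }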
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      099a19d9
  12. 10 Jun 2010, 1 commit
    • ACPI: Do not try to set up acpi processor stuff on cores exceeding maxcpus= · 75cbfb97
      Authored by Thomas Renninger
      The patch is against the latest Linus master branch and is expected to be a
      safe bug fix.
      
      You get:
      ACPI: HARDWARE addr space,NOT supported yet
      for each ACPI-defined CPU whose status is active but which exceeds the
      maxcpus= count.
      
      As these "not booted" CPUs do not run an idle routine,
      and echo X >/proc/acpi/processor/*/throttling did not work,
      I couldn't find a way to really access the not-onlined/not-booted
      cores.  Still, this should get fixed:
      the /proc/acpi/processor/X directories of cores exceeding maxcpus
      should not show up.
      
      I wonder whether this could get cleaned up some day by truncating the
      possible cpu mask and nr_cpu_ids to setup_max_cpus early
      (and then no longer exporting setup_max_cpus).
      But that would require touching a lot of other places...
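      A hedged sketch of the kind of guard this implies in the processor
      driver's add/probe path (the placement and the exact condition are
      assumptions):
      
      /* skip cores that the maxcpus= limit kept from booting, so no
       * /proc/acpi/processor/X directory is created for them
       * (condition shown is an assumption) */
      if (pr->id >= setup_max_cpus && pr->id != 0)
              return 0;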
      Signed-off-by: Thomas Renninger <trenn@suse.de>
      CC: travis@sgi.com
      CC: linux-acpi@vger.kernel.org
      CC: lenb@kernel.org
      Signed-off-by: Len Brown <len.brown@intel.com>
      75cbfb97
  13. 09 Jun 2010, 2 commits
  14. 25 May 2010, 1 commit
    • mem-hotplug: avoid multiple zones sharing same boot strapping boot_pageset · 1f522509
      Authored by Haicheng Li
      For each newly populated zone of a hot-added node, we need to update its
      pagesets with dynamically allocated per_cpu_pageset structs for all
      possible CPUs:
      
          1) Detach zone->pageset from the shared boot_pageset
             at the end of __build_all_zonelists().
      
          2) Use a mutex to protect zone->pageset while it is still
             shared in online_pages().
      
      Otherwise, multiple zones of different nodes would share the same
      bootstrapping boot_pageset for the same CPU, which finally causes the
      kernel panic below:
      
        ------------[ cut here ]------------
        kernel BUG at mm/page_alloc.c:1239!
        invalid opcode: 0000 [#1] SMP
        ...
        Call Trace:
         [<ffffffff811300c1>] __alloc_pages_nodemask+0x131/0x7b0
         [<ffffffff81162e67>] alloc_pages_current+0x87/0xd0
         [<ffffffff81128407>] __page_cache_alloc+0x67/0x70
         [<ffffffff811325f0>] __do_page_cache_readahead+0x120/0x260
         [<ffffffff81132751>] ra_submit+0x21/0x30
         [<ffffffff811329c6>] ondemand_readahead+0x166/0x2c0
         [<ffffffff81132ba0>] page_cache_async_readahead+0x80/0xa0
         [<ffffffff8112a0e4>] generic_file_aio_read+0x364/0x670
         [<ffffffff81266cfa>] nfs_file_read+0xca/0x130
         [<ffffffff8117b20a>] do_sync_read+0xfa/0x140
         [<ffffffff8117bf75>] vfs_read+0xb5/0x1a0
         [<ffffffff8117c151>] sys_read+0x51/0x80
         [<ffffffff8103c032>] system_call_fastpath+0x16/0x1b
        RIP  [<ffffffff8112ff13>] get_page_from_freelist+0x883/0x900
         RSP <ffff88000d1e78a8>
        ---[ end trace 4bda28328b9990db ]
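      A loose sketch of the two numbered steps above (the mutex and helper
      names are illustrative):
      
      static DEFINE_MUTEX(zonelists_mutex);   /* guards the pageset hand-off */
      
      /* 1) at the end of __build_all_zonelists(): give every populated zone
       *    its own per-cpu pagesets instead of the shared boot_pageset */
      for_each_populated_zone(zone)
              setup_zone_pageset(zone);       /* helper name is illustrative */
      
      /* 2) in online_pages(): serialize the window where the new zone's
       *    pageset may still be the shared bootstrap one */
      mutex_lock(&zonelists_mutex);
      /* ... rebuild zonelists / update the zone's pagesets ... */
      mutex_unlock(&zonelists_mutex);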
      
      [akpm@linux-foundation.org: merge fix]
      Signed-off-by: Haicheng Li <haicheng.li@linux.intel.com>
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Reviewed-by: Andi Kleen <andi.kleen@intel.com>
      Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1f522509
  15. 21 May 2010, 2 commits
    • x86, kgdb, init: Add early and late debug states · 0b4b3827
      Authored by Jason Wessel
      The kernel debugger can operate well before mm_init(), but the x86
      hardware breakpoint code, which uses the perf API, requires that the
      kernel allocators be initialized.
      
      This means the kernel debug core needs to provide an optional
      arch-specific callback to allow these initialization functions to run after
      the kernel has been further initialized.
      
      The kdb shell already had a similar restriction, with an early
      initialization and a late initialization.  kdb_init() was moved into
      the debug core's version of the late init, which is called
      dbg_late_init().
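      A hedged sketch of what the late-init half can look like (the condition
      and the KDB init stage names are assumptions based on the description
      above):
      
      void __init dbg_late_init(void)
      {
              /* pieces that need working allocators, e.g. the x86
               * hw-breakpoint setup that goes through the perf API */
              if (debug_core_io_registered)   /* illustrative condition */
                      kgdb_arch_late();       /* optional arch callback */
              kdb_init(KDB_INIT_FULL);        /* kdb's late-stage init */
      }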
      
      CC: kgdb-bugreport@lists.sourceforge.net
      Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
      0b4b3827
    • kdb: core for kgdb back end (2 of 2) · 67fc4e0c
      Authored by Jason Wessel
      This patch contains the hooks and instrumentation into the kernel which
      live outside the kernel/debug directory and which the kdb core
      will call to run commands like lsmod, dmesg, bt, etc.
      
      CC: linux-arch@vger.kernel.org
      Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
      Signed-off-by: Martin Hicks <mort@sgi.com>
      67fc4e0c
  16. 16 May 2010, 2 commits
    • lockup_detector: Introduce CONFIG_HARDLOCKUP_DETECTOR · 23637d47
      Authored by Frederic Weisbecker
      This new config option further simplifies the lockup detector
      dependencies and makes it easier to sort cleanly
      between archs that support the new generic lockup detector and those
      that still have their own, especially for archs in the
      middle of this migration.
      
      Instead of checking whether we have CONFIG_LOCKUP_DETECTOR +
      CONFIG_PERF_EVENTS_NMI each time an arch wants to know if it needs
      to build its own lockup detector, take a shortcut with this new
      config. It is enabled only if the hardlockup detection part of
      the whole lockup detector is on.
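      An illustrative before/after of what this buys arch code (the guarded
      bodies are elided):
      
      /* before: every arch spelled out the full dependency pair */
      #if defined(CONFIG_LOCKUP_DETECTOR) && defined(CONFIG_PERF_EVENTS_NMI)
      /* ... generic hardlockup detector hooks ... */
      #endif
      
      /* after: the pair is folded into one option */
      #ifdef CONFIG_HARDLOCKUP_DETECTOR
      /* ... generic hardlockup detector hooks ... */
      #endif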
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Don Zickus <dzickus@redhat.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      23637d47
    • lockup_detector: Adapt CONFIG_PERF_EVENT_NMI to other archs · c01d4323
      Authored by Frederic Weisbecker
      CONFIG_PERF_EVENT_NMI is something that needs to be enabled by the
      arch. This is fine on x86, as PERF_EVENTS is built in, but if other
      archs select it, they will need to handle the PERF_EVENTS dependency.
      
      Instead, handle the dependency in the generic layer:
      
      - archs tell what they support through HAVE_PERF_EVENTS_NMI
      - PERF_EVENTS_NMI is enabled automatically if we have both PERF_EVENTS
        and HAVE_PERF_EVENTS_NMI.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Don Zickus <dzickus@redhat.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      c01d4323
  17. 13 May 2010, 1 commit
    • lockup_detector: Combine nmi_watchdog and softlockup detector · 58687acb
      Authored by Don Zickus
      The new nmi_watchdog (which uses the perf event subsystem) is very
      similar in structure to the softlockup detector.  Using Ingo's
      suggestion, I combined the two functionalities into one file:
      kernel/watchdog.c.
      
      Now both the nmi_watchdog (or hardlockup detector) and softlockup
      detector sit on top of the perf event subsystem, which is run every
      60 seconds or so to see if there are any lockups.
      
      To detect hardlockups, cpus not responding to interrupts, I
      implemented an hrtimer that runs 5 times for every perf event
      overflow event.  If that stops counting on a cpu, then the cpu is
      most likely in trouble.
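      A sketch of that check, assuming a per-cpu counter that the hrtimer bumps
      and the perf NMI path samples (names are illustrative):
      
      static DEFINE_PER_CPU(unsigned long, hrtimer_interrupts);
      static DEFINE_PER_CPU(unsigned long, hrtimer_interrupts_saved);
      
      /* called from the perf NMI overflow handler: if the hrtimer has not
       * fired since the previous NMI sample, interrupts are stuck on this CPU */
      static int is_hardlockup(void)
      {
              unsigned long hrint = __get_cpu_var(hrtimer_interrupts);
      
              if (__get_cpu_var(hrtimer_interrupts_saved) == hrint)
                      return 1;
      
              __get_cpu_var(hrtimer_interrupts_saved) = hrint;
              return 0;
      }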
      
      To detect softlockups, tasks not yielding to the scheduler, I used the
      previous kthread idea, which now gets kicked every time the hrtimer fires.
      If the kthread isn't being scheduled, neither is anything else, and the
      warning is printed to the console.
      
      I tested this on x86_64 and both the softlockup and hardlockup paths
      work.
      
      V2:
      - cleaned up the Kconfig and softlockup combination
      - surrounded hardlockup cases with #ifdef CONFIG_PERF_EVENTS_NMI
      - separated out the softlockup case from the perf event subsystem
      - re-arranged the enabling/disabling nmi watchdog from proc space
      - added cpumasks for hardlockup failure cases
      - removed fallback to soft events if no PMU exists for hard events
      
      V3:
      - comment cleanups
      - drop support for older softlockup code
      - per_cpu cleanups
      - completely remove software clock base hardlockup detector
      - use per_cpu masking on hard/soft lockup detection
      - #ifdef cleanups
      - rename config option NMI_WATCHDOG to LOCKUP_DETECTOR
      - documentation additions
      
      V4:
      - documentation fixes
      - convert per_cpu to __get_cpu_var
      - powerpc compile fixes
      
      V5:
      - split apart warn flags for hard and soft lockups
      
      TODO:
      - figure out how to make an arch-agnostic clock2cycles call
        (if possible) to feed into perf events as a sample period
      
      [fweisbec: merged conflict patch]
      Signed-off-by: Don Zickus <dzickus@redhat.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Eric Paris <eparis@redhat.com>
      Cc: Randy Dunlap <randy.dunlap@oracle.com>
      LKML-Reference: <1273266711-18706-2-git-send-email-dzickus@redhat.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      58687acb
  18. 27 Apr 2010, 1 commit
    • blk-cgroup: config options re-arrangement · afc24d49
      Authored by Vivek Goyal
      This patch fixes a few usability and configurability issues.
      
      o All the cgroup-based controller options are configurable from the
        "General Setup/Control Group Support/" menu; blkio is the only exception.
        Hence make this option visible in the above menu and configurable from
        there, to bring it in line with the rest of the cgroup-based controllers.
      
      o Get rid of CONFIG_DEBUG_CFQ_IOSCHED.
      
        This option currently does two things.
      
        - Enable printing of cgroup paths in blktrace
        - Enables CONFIG_DEBUG_BLK_CGROUP, which in turn displays additional stat
          files in cgroup.
      
        If we are using group scheduling, blktrace data is not of much use
        if cgroup information is not present. To get this data, one currently has
        to also enable CONFIG_DEBUG_CFQ_IOSCHED, which in turn brings the overhead
        of all the additional debug stat files, which is not desired.
      
        Hence, this patch moves printing of cgroup paths under
        CONFIG_CFQ_GROUP_IOSCHED.
      
        This allows us to get rid of CONFIG_DEBUG_CFQ_IOSCHED completely. Now all
        the debug stat files are controlled only by CONFIG_DEBUG_BLK_CGROUP which
        can be enabled through config menu.
      Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
      Acked-by: Divyesh Shah <dpshah@google.com>
      Reviewed-by: Gui Jianfeng <guijianfeng@cn.fujitsu.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      afc24d49
  19. 25 Apr 2010, 1 commit
  20. 03 Apr 2010, 1 commit
  21. 30 Mar 2010, 1 commit
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h · 5a0e3ad6
      Authored by Tejun Heo
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      The percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include those
      headers directly instead of assuming their availability.  As this
      conversion needs to touch a large number of source files, the following
      script is used as the basis of the conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following.
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there, i.e. if only gfp is used,
        gfp.h; if slab is used, slab.h.
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to place the new include so that its order conforms
        to its surroundings.  It's put in the include block which contains
        core kernel includes, in the same order that the rest are ordered -
        alphabetical, Christmas tree, rev-Xmas-tree, or at the end if there
        doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints out
        an error message indicating which .h file needs to be added to the
        file.
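      The effect on a typical .c file, illustrated (the included headers are
      hypothetical examples):
      
      /* before: kmalloc()/GFP_KERNEL only compiled because sched.h dragged
       * in percpu.h, which dragged in slab.h and gfp.h implicitly */
      #include <linux/sched.h>
      
      /* after: the file includes what it actually uses */
      #include <linux/sched.h>
      #include <linux/gfp.h>
      #include <linux/slab.h>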
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed manual addition, and for others adding it to an
         implementation .h or embedding .c file was more appropriate.
         This step added inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them, as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored, as stuff from gfp.h was usually
         widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on the arch to make things
         build (like ipr on powerpc/64, which failed due to a missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as bisection point.
      
      Given the fact that I had only a couple of failures from tests on step
      6, I'm fairly confident about the coverage of this conversion patch.
      If there is a breakage, it's likely to be something in one of the arch
      headers, which should be easily discoverable on most builds of
      the specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      5a0e3ad6
  22. 25 Mar 2010, 1 commit
  23. 13 Mar 2010, 1 commit
    • cgroup: implement eventfd-based generic API for notifications · 0dea1168
      Authored by Kirill A. Shutemov
      This patchset introduces an eventfd-based API for notifications in cgroups
      and implements memory notifications on top of it.
      
      It uses statistics in the memory controller to track memory usage.
      
      Output of time(1) on building kernel on tmpfs:
      
      Root cgroup before changes:
      	make -j2  506.37 user 60.93s system 193% cpu 4:52.77 total
      Non-root cgroup before changes:
      	make -j2  507.14 user 62.66s system 193% cpu 4:54.74 total
      Root cgroup after changes (0 thresholds):
      	make -j2  507.13 user 62.20s system 193% cpu 4:53.55 total
      Non-root cgroup after changes (0 thresholds):
      	make -j2  507.70 user 64.20s system 193% cpu 4:55.70 total
      Root cgroup after changes (1 threshold, never crossed):
      	make -j2  506.97 user 62.20s system 193% cpu 4:53.90 total
      Non-root cgroup after changes (1 threshold, never crossed):
      	make -j2  507.55 user 64.08s system 193% cpu 4:55.63 total
      
      This patch:
      
      Introduce the write-only file "cgroup.event_control" in every cgroup.
      
      To register a new notification handler you need to:
      - create an eventfd;
      - open a control file to be monitored. The callbacks register_event() and
        unregister_event() must be defined for the control file;
      - write "<event_fd> <control_fd> <args>" to cgroup.event_control.
        Interpretation of args is defined by the control file implementation.
      
      The eventfd will be woken up by the control file implementation, or when
      the cgroup is removed.
      
      To unregister a notification handler, just close the eventfd.
      
      If you need notification functionality for a control file you have to
      implement callbacks register_event() and unregister_event() in the
      struct cftype.
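      A hedged userspace sketch of the registration sequence described above;
      the cgroup mount point and the monitored control file (a memory usage
      file from the rest of this patchset) are assumptions, and error handling
      is omitted:
      
      #include <fcntl.h>
      #include <stdio.h>
      #include <stdint.h>
      #include <string.h>
      #include <unistd.h>
      #include <sys/eventfd.h>
      
      int main(void)
      {
              char cmd[64];
              uint64_t counter;
      
              /* 1) create an eventfd */
              int efd = eventfd(0, 0);
              /* 2) open the control file to be monitored */
              int cfd = open("/cgroups/foo/memory.usage_in_bytes", O_RDONLY);
              /* 3) write "<event_fd> <control_fd> <args>"; here <args> is a
               *    byte threshold, whose meaning is up to the control file */
              int ecfd = open("/cgroups/foo/cgroup.event_control", O_WRONLY);
      
              snprintf(cmd, sizeof(cmd), "%d %d %llu", efd, cfd,
                       (unsigned long long)(100 << 20));
              write(ecfd, cmd, strlen(cmd));
      
              /* blocks until the control file (or cgroup removal) fires */
              read(efd, &counter, sizeof(counter));
      
              /* closing the eventfd unregisters the notification */
              close(efd);
              return 0;
      }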
      
      [kamezawa.hiroyu@jp.fujitsu.com: Kconfig fix]
      Signed-off-by: Kirill A. Shutemov <kirill@shutemov.name>
      Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Paul Menage <menage@google.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
      Cc: Pavel Emelyanov <xemul@openvz.org>
      Cc: Dan Malek <dan@embeddedalley.com>
      Cc: Vladislav Buzov <vbuzov@embeddedalley.com>
      Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Cc: Alexander Shishkin <virtuoso@slind.org>
      Cc: Davide Libenzi <davidel@xmailserver.org>
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0dea1168
  24. 07 Mar 2010, 4 commits
  25. 04 Mar 2010, 1 commit
    • init: Open /dev/console from rootfs · 2bd3a997
      Authored by Eric W. Biederman
      To avoid potential problems with an empty /dev, open /dev/console
      from rootfs instead of waiting to mount our root filesystem and
      opening it there.  This effectively guarantees that there will
      be a device node, and it won't be on a filesystem that we will
      ever unmount, so there are no issues with leaving /dev/console
      open and pinning the filesystem.
      
      This is actually more effective than automatically mounting
      devtmpfs on /dev, because it removes the occasionally
      problematic assumption that /dev/console exists from the boot
      code.
      
      With this patch I was able to throw busybox on my /boot partition
      (which has no /dev directory) and boot into userspace without
      problems.
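      A sketch of the idea as it would appear in the early init path (shape
      only, not the exact diff):
      
      /* rootfs already carries a /dev/console node, so this can run before
       * the real root filesystem is mounted; fds 0, 1 and 2 then always exist */
      if (sys_open((const char __user *) "/dev/console", O_RDWR, 0) < 0)
              printk(KERN_WARNING "Warning: unable to open an initial console.\n");
      
      (void) sys_dup(0);      /* stdout */
      (void) sys_dup(0);      /* stderr */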
      
      The only possible negative consequence I can think of is that
      someone out there deliberately did not use a character device
      that is major 5, minor 1 for /dev/console.  Does anyone know of a
      situation in which that could make sense?
      Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      2bd3a997
  26. 26 Feb 2010, 1 commit
  27. 25 Feb 2010, 2 commits
    • rcu: Accelerate grace period if last non-dynticked CPU · 8bd93a2c
      Authored by Paul E. McKenney
      Currently, rcu_needs_cpu() simply checks whether the current CPU
      has an outstanding RCU callback, which means that the last CPU
      to go into dyntick-idle mode might wait a few ticks for the
      relevant grace periods to complete.  However, if all the other
      CPUs are in dyntick-idle mode, and if this CPU is in a quiescent
      state (which it is for RCU-bh and RCU-sched any time that we are
      considering going into dyntick-idle mode), then the grace period
      is instantly complete.
      
      This patch therefore repeatedly invokes the RCU grace-period
      machinery in order to force any needed grace periods to complete
      quickly.  It does so a limited number of times in order to
      prevent starvation by an RCU callback function that might pass
      itself to call_rcu().
      
      However, if any CPU other than the current one is not in
      dyntick-idle mode, fall back to the simple check (with a fix for a bug
      noted by Lai Jiangshan).  Also, take advantage of the last
      grace-period forcing, an opportunity noted by Steve
      Rostedt.  And apply the simplified #ifdef condition suggested by
      Frederic Weisbecker.
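      A simplified sketch of the bounded retry described above (the helpers and
      the per-CPU drain counter are illustrative; the real code also handles
      holdoff and the separate RCU flavors):
      
      #define RCU_NEEDS_CPU_FLUSHES 5                 /* illustrative bound */
      static DEFINE_PER_CPU(int, rcu_dyntick_drain);
      
      int rcu_needs_cpu(int cpu)
      {
              /* another CPU is still out of dyntick-idle mode: the plain
               * "do I have callbacks?" check is good enough */
              if (!rcu_all_other_cpus_idle(cpu))      /* illustrative helper */
                      return rcu_needs_cpu_quick_check(cpu);
      
              /* try at most RCU_NEEDS_CPU_FLUSHES times to push grace
               * periods through, so a self-reposting callback can't starve us */
              if (per_cpu(rcu_dyntick_drain, cpu) <= 0)
                      per_cpu(rcu_dyntick_drain, cpu) = RCU_NEEDS_CPU_FLUSHES;
              else if (--per_cpu(rcu_dyntick_drain, cpu) <= 0)
                      return rcu_needs_cpu_quick_check(cpu);
      
              rcu_drive_grace_periods(cpu);           /* illustrative helper */
              return rcu_needs_cpu_quick_check(cpu);
      }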
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      LKML-Reference: <1266887105-1528-15-git-send-email-paulmck@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      8bd93a2c
    • sched: Use lockdep-based checking on rcu_dereference() · d11c563d
      Authored by Paul E. McKenney
      Update the rcu_dereference() usages to take advantage of the new
      lockdep-based checking.
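      For example, a protected pointer can be annotated with the conditions
      that make the access legal (the struct and field names here are
      illustrative):
      
      struct foo {
              spinlock_t lock;
              struct bar *rcu_protected_ptr;
      };
      
      static struct bar *foo_get_bar(struct foo *f)
      {
              /* legal either inside rcu_read_lock() or with f->lock held;
               * lockdep complains at runtime if neither condition holds */
              return rcu_dereference_check(f->rcu_protected_ptr,
                                           rcu_read_lock_held() ||
                                           lockdep_is_held(&f->lock));
      }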
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      LKML-Reference: <1266887105-1528-6-git-send-email-paulmck@linux.vnet.ibm.com>
      [ -v2: fix allmodconfig missing symbol export build failure on x86 ]
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      d11c563d
  28. 18 Feb 2010, 2 commits