1. 10 May 2007, 14 commits
    • make cancel_rearming_delayed_work() work on any workqueue, not just keventd_wq · 1634c48f
      Authored by Oleg Nesterov
      cancel_rearming_delayed_workqueue(wq, dwork) doesn't need the first
      parameter.  We don't hang on un-queued dwork any longer, and work->data
      doesn't change its type.  This means we can always figure out "wq" from
      dwork when it is needed.
      
      Remove this parameter, and rename the function to
      cancel_rearming_delayed_work().  Re-create an inline "obsolete"
      cancel_rearming_delayed_workqueue(wq) which just calls
      cancel_rearming_delayed_work().
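      
      A minimal sketch of the obsolete wrapper, assuming the 2007-era workqueue
      types:
      
      	static inline void
      	cancel_rearming_delayed_workqueue(struct workqueue_struct *wq,
      					  struct delayed_work *dwork)
      	{
      		/* "wq" is unused: it can now be derived from dwork itself */
      		cancel_rearming_delayed_work(dwork);
      	}
      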
      Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • workqueue: kill run_scheduled_work() · 7097a87a
      Authored by Oleg Nesterov
      Because it has no callers.
      
      Actually, I think the whole idea of run_scheduled_work() was not right: it
      is not good to mix "unqueue this work" and "execute its ->func()" in one
      function.
      Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Define and use new events, CPU_LOCK_ACQUIRE and CPU_LOCK_RELEASE · baaca49f
      Authored by Gautham R Shenoy
      This is an attempt to provide an alternate mechanism for postponing
      a hotplug event instead of using a global mechanism like lock_cpu_hotplug.
      
      The proposal is to add two new events, namely CPU_LOCK_ACQUIRE and
      CPU_LOCK_RELEASE. The notification for these two events would be sent
      out before and after a cpu-hotplug event, respectively.
      
      During the CPU_LOCK_ACQUIRE event, a cpu-hotplug-aware subsystem is
      supposed to acquire any per-subsystem hotcpu mutex (e.g. workqueue_mutex
      in kernel/workqueue.c).
      
      During the CPU_LOCK_RELEASE event, the cpu-hotplug-aware subsystem is
      supposed to release the per-subsystem hotcpu mutex.
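      
      As a rough sketch (the callback and mutex names here are hypothetical),
      a cpu-hotplug-aware subsystem's notifier might handle the two new events
      like this:
      
      	static DEFINE_MUTEX(my_subsys_mutex);
      
      	static int my_subsys_cpu_callback(struct notifier_block *nfb,
      					  unsigned long action, void *hcpu)
      	{
      		switch (action) {
      		case CPU_LOCK_ACQUIRE:
      			/* taken before any pre-hotplug event is handled */
      			mutex_lock(&my_subsys_mutex);
      			break;
      		case CPU_LOCK_RELEASE:
      			/* released after all post-hotplug events are done */
      			mutex_unlock(&my_subsys_mutex);
      			break;
      		}
      		return NOTIFY_OK;
      	}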
      
      The reasons for defining new events as opposed to reusing the existing events
      like CPU_UP_PREPARE/CPU_UP_FAILED/CPU_ONLINE for locking/unlocking of
      per-subsystem hotcpu mutexes are as follows:
      
      	- CPU_LOCK_ACQUIRE: All hotcpu mutexes are taken before subsystems
      	start handling pre-hotplug events like CPU_UP_PREPARE/CPU_DOWN_PREPARE
      	etc., thus ensuring a clean handling of these events.
      
      	- CPU_LOCK_RELEASE: The hotcpu mutexes will be released only after
      	all subsystems have handled post-hotplug events like CPU_DOWN_FAILED,
      	CPU_DEAD, CPU_ONLINE etc., thereby ensuring that there are no subsequent
      	clashes amongst the interdependent subsystems after a cpu hotplug.
      
      This patch also uses __raw_notifier_call_chain in _cpu_up to take care
      of the dependency between the two consecutive calls to
      raw_notifier_call_chain.
      
      [akpm@linux-foundation.org: fix a bug]
      Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Extend notifier_call_chain to count nr_calls made · 6f7cc11a
      Authored by Gautham R Shenoy
      Since 2.6.18-something, the community has been bugged by the problem of
      providing a clean and stable mechanism to postpone a cpu-hotplug event,
      as lock_cpu_hotplug was badly broken.
      
      This is another proposal towards solving that problem.  This one is along the
      lines of the solution provided in kernel/workqueue.c
      
      Instead of having a global mechanism like lock_cpu_hotplug, we allow the
      subsystems to define their own per-subsystem hot cpu mutexes.  These would
      be taken (released) wherever we are currently calling
      lock_cpu_hotplug (unlock_cpu_hotplug).
      
      Also, in the per-subsystem hotcpu callback function, we take this mutex
      before we handle any pre-cpu-hotplug events and release it once we finish
      handling the post-cpu-hotplug events.  A standard means for doing this has
      been provided in [PATCH 2/4] and demonstrated in [PATCH 3/4].
      
      The ordering of these per-subsystem mutexes might still prove to be a
      problem, but hopefully lockdep should help us get out of that muddle.
      
      The patch set to be applied against linux-2.6.19-rc5 is as follows:
      
      [PATCH 1/4] :	Extend notifier_call_chain with an option to specify the
      		number of notifications to be sent and also count the
      		number of notifications actually sent.
      
      [PATCH 2/4] :	Define events CPU_LOCK_ACQUIRE and CPU_LOCK_RELEASE
      		and send out notifications for these in _cpu_up and
      		_cpu_down. This would help us standardise the acquire and
      		release of the subsystem locks in the hotcpu
      		callback functions of these subsystems.
      
      [PATCH 3/4] :	Eliminate lock_cpu_hotplug from kernel/sched.c.
      
      [PATCH 4/4] :	In workqueue_cpu_callback function, acquire(release) the
      		workqueue_mutex while handling
      		CPU_LOCK_ACQUIRE(CPU_LOCK_RELEASE).
      
      If the per-subsystem-locking approach survives the test of time, we can expect
      a slow phasing out of lock_cpu_hotplug, which has not yet been eliminated in
      these patches :)
      
      This patch:
      
      Provide notifier_call_chain with an option to call only a specified number
      of notifiers and also record the number of calls to notifiers made.
      
      The need for this enhancement was identified in the post entitled
      "Slab - Eliminate lock_cpu_hotplug from slab"
      (http://lkml.org/lkml/2006/10/28/92) by Ravikiran G Thirumalai and
      Andrew Morton.
      
      This patch adds two additional parameters to the notifier_call_chain API
      (sketched below), namely
       - int nr_to_calls : Number of notifier_functions to be called.
       		     The don't care value is -1.
      
       - unsigned int *nr_calls : Records the total number of notifier_functions
      			    called by notifier_call_chain. The don't care
      			    value is NULL.
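      
      A sketch of how the extended chain walker might look (treat the exact
      signature as an assumption based on the description above; the caller is
      expected to hold rcu_read_lock()):
      
      	static int notifier_call_chain(struct notifier_block **nl,
      				       unsigned long val, void *v,
      				       int nr_to_calls, unsigned int *nr_calls)
      	{
      		int ret = NOTIFY_DONE;
      		struct notifier_block *nb = rcu_dereference(*nl);
      
      		while (nb && nr_to_calls) {
      			ret = nb->notifier_call(nb, val, v);
      			if (nr_calls)
      				(*nr_calls)++;	/* count every notifier invoked */
      			if ((ret & NOTIFY_STOP_MASK) == NOTIFY_STOP_MASK)
      				break;
      			nb = rcu_dereference(nb->next);
      			nr_to_calls--;	/* starting at -1 never reaches zero: call all */
      		}
      		return ret;
      	}
      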
      
      [michal.k.k.piotrowski@gmail.com: build fix]
      Credit: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Gautham R Shenoy <ego@in.ibm.com>
      Signed-off-by: Michal Piotrowski <michal.k.k.piotrowski@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • relay: use plain timer instead of delayed work · 7c9cb383
      Authored by Tom Zanussi
      relay doesn't need to use schedule_delayed_work() for waking readers
      when a simple timer will do.
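      
      For illustration, the switch amounts to something like this (a sketch
      using the 2007-era timer API; the exact field names are an assumption):
      
      	static void wakeup_readers(unsigned long data)
      	{
      		struct rchan_buf *buf = (struct rchan_buf *)data;
      
      		wake_up_interruptible(&buf->read_wait);
      	}
      
      	/* at buffer setup, instead of INIT_DELAYED_WORK(): */
      	setup_timer(&buf->timer, wakeup_readers, (unsigned long)buf);
      
      	/* when data arrives, instead of schedule_delayed_work(): */
      	mod_timer(&buf->timer, jiffies + 1);
      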
      Signed-off-by: Tom Zanussi <zanussi@comcast.net>
      Cc: Satyam Sharma <satyam.sharma@gmail.com>
      Cc: Oleg Nesterov <oleg@tv-sign.ru>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • kblockd: use flush_work · 19a75d83
      Authored by Andrew Morton
      Switch the kblockd flushing from a global flush to a more specific
      flush_work().
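      
      A sketch of the shape of the change, assuming flush_work() takes the
      target workqueue as its first argument at this point in the API's history,
      and with an illustrative work item:
      
      	/* before: blocks behind everything queued on kblockd */
      	flush_workqueue(kblockd_workqueue);
      
      	/* after: waits only for the one work item the caller cares about */
      	flush_work(kblockd_workqueue, &q->unplug_work);
      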
      
      (akpm: bypassed maintainers, sorry.  There are other patches which depend on
      this)
      
      Cc: "Maciej W. Rozycki" <macro@linux-mips.org>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Jens Axboe <axboe@suse.de>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Oleg Nesterov <oleg@tv-sign.ru>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • implement flush_work() · b89deed3
      Authored by Oleg Nesterov
      A basic problem with flush_scheduled_work() is that it blocks behind _all_
      presently-queued works, rather than just the work which the caller wants to
      flush.  If the caller holds some lock, and one of the queued works happens
      to want that lock as well, then accidental deadlocks can occur.
      
      One example of this is the phy layer: it wants to flush work while holding
      rtnl_lock().  But if a linkwatch event happens to be queued, the phy code will
      deadlock because the linkwatch callback function takes rtnl_lock.
      
      So we implement a new function which will flush a *single* work - just the one
      which the caller wants to free up.  Thus we avoid the accidental deadlocks
      which can arise from unrelated subsystems' callbacks taking shared locks.
      
      flush_work() non-blockingly dequeues the work_struct which we want to kill,
      then it waits for its handler to complete on all CPUs.
      
      Add ->current_work to "struct cpu_workqueue_struct"; it points to the
      currently running "struct work_struct". When flush_work(work) detects
      ->current_work == work, it inserts a barrier at the _head_ of ->worklist
      (and thus right _after_ that work) and waits for completion. This means
      that the next work fired on that CPU will be this barrier, or another
      barrier queued by concurrent flush_work(), so the caller of flush_work()
      will be woken before any "regular" work has a chance to run.
      
      When wait_on_work() unlocks workqueue_mutex (or whatever we choose to protect
      against CPU hotplug), the CPU may go away. But in that case take_over_work()
      will move the barrier we queued to another CPU; it will be fired sometime,
      and wait_on_work() will be woken.
      
      Actually, we are doing cleanup_workqueue_thread()->kthread_stop() before
      take_over_work(), so cwq->thread should complete its ->worklist (and thus
      the barrier), because currently we don't check kthread_should_stop() in
      run_workqueue(). But even if we did, everything should be ok.
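      
      A usage sketch (work names hypothetical; flush_work_keventd() is the
      keventd wrapper mentioned in the note below):
      
      	static void my_state_machine(struct work_struct *work);
      	static DECLARE_WORK(state_work, my_state_machine);
      
      	/* before: blocks behind _all_ queued works, deadlock-prone under locks */
      	flush_scheduled_work();
      
      	/* after: waits only for state_work's handler to complete */
      	flush_work_keventd(&state_work);
      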
      
      [akpm@osdl.org: cleanup]
      [akpm@osdl.org: add flush_work_keventd() wrapper]
      Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mutex_lock_interruptible(): add __must_check · 18d8362d
      Authored by Andrew Morton
      It's not sane to use mutex_lock_interruptible() and to then ignore the result.
      
      Ditto down_interruptible(), but I'm lazy.
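      
      A minimal sketch of the pattern the annotation enforces (my_mutex is
      hypothetical):
      
      	int ret = mutex_lock_interruptible(&my_mutex);
      	if (ret)
      		return ret;	/* interrupted by a signal: do not proceed */
      	/* ... critical section ... */
      	mutex_unlock(&my_mutex);
      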
      
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Move sig_kernel_* et al macros to linux/signal.h · 55c0d1f8
      Authored by Roland McGrath
      This patch moves the sig_kernel_* and related macros from kernel/signal.c
      to linux/signal.h, and cleans them up slightly.  I need the sig_kernel_*
      macros for default signal behavior in the utrace code, and want to avoid
      duplication or overhead to share the knowledge.
      Signed-off-by: Roland McGrath <roland@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mca: add integrated device bus matching · 8813d1c0
      Authored by James Bottomley
      The MCA bus has a few "integrated" functions, which are effectively virtual
      slots on the bus.  The problem is that these special functions don't have
      dedicated pos IDs, so we have to manufacture ids for them outside the pos
      space ...  and these ids can't be matched by the standard matching function,
      so add a special registration that requests a list of pos ids or a particular
      integrated function.
      Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Remove hardcoding of hard_smp_processor_id on UP systems · 2f4dfe20
      Authored by Fernando Luis Vazquez Cao
      With the advent of kdump, the assumption that the boot CPU when booting a UP
      kernel is always the CPU with a particular hardware ID, often 0 (usually
      referred to as the BSP on some architectures), is no longer valid.  The
      reason is that the dump capture kernel boots on the crashed CPU (the CPU
      that invoked crash_kexec), which may or may not be that particular CPU.
      
      Move definition of hard_smp_processor_id for the UP case to
      architecture-specific code ("asm/smp.h") where it belongs, so that each
      architecture can provide its own implementation.
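      
      A sketch of what the per-architecture UP definition in asm/smp.h might look
      like on an architecture where the old assumption still happens to hold:
      
      	#ifndef CONFIG_SMP
      	/* on this arch, the sole CPU's hardware ID really is 0 */
      	#define hard_smp_processor_id()	0
      	#endif
      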
      Signed-off-by: Fernando Luis Vazquez Cao <fernando@oss.ntt.co.jp>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Acked-by: Andi Kleen <ak@suse.de>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Vivek Goyal <vgoyal@in.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Display all possible partitions when the root filesystem failed to mount · dd2a345f
      Authored by Dave Gilbert
      Display all possible partitions when the root filesystem is not mounted.
      This helps to track down spelling mistakes and missing drivers.
      
      Updated to work with newer kernels.
      
      Example output:
      
      VFS: Cannot open root device "foobar" or unknown-block(0,0)
      Please append a correct "root=" boot option; here are the available partitions:
      0800    8388608 sda driver: sd
        0801     192748 sda1
        0802    8193150 sda2
      0810    4194304 sdb driver: sd
      Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
      
      [akpm@linux-foundation.org: cleanups, fix printk warnings]
      Signed-off-by: Jan Engelhardt <jengelh@gmx.de>
      Cc: Dave Gilbert <linux@treblig.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • PM: Separate hibernation code from suspend code · a3d25c27
      Authored by Rafael J. Wysocki
      [ With Johannes Berg <johannes@sipsolutions.net> ]
      
      Separate the hibernation (aka suspend-to-disk) code from the other suspend
      code.  In particular:
      
       * Remove the definitions related to hibernation from include/linux/pm.h
       * Introduce struct hibernation_ops and a new hibernate() function to
         hibernate the system, defined in include/linux/suspend.h (see the
         sketch after this list)
       * Separate suspend code in kernel/power/main.c from hibernation-related code
         in kernel/power/disk.c and kernel/power/user.c (with the help of
         hibernation_ops)
       * Switch ACPI (the only user of pm_ops.pm_disk_mode) to hibernation_ops
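      
      A sketch of the new interface as described above (the exact member names
      are an assumption):
      
      	struct hibernation_ops {
      		int (*prepare)(void);
      		int (*enter)(void);
      		void (*finish)(void);
      	};
      
      	extern void hibernation_set_ops(struct hibernation_ops *ops);
      	extern int hibernate(void);
      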
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
      Cc: Greg KH <greg@kroah.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Nigel Cunningham <nigel@nigel.suspend2.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Declare {compat_}sys_utimensat · 97416ce8
      Authored by Stephen Rothwell
      This is needed before Powerpc can wire up the syscall.
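      
      The declarations presumably take the shape below (a sketch following the
      utimensat(2) argument layout; treat the exact compat types as an
      assumption):
      
      	asmlinkage long sys_utimensat(int dfd, char __user *filename,
      				      struct timespec __user *utimes,
      				      int flags);
      	asmlinkage long compat_sys_utimensat(unsigned int dfd,
      				      char __user *filename,
      				      struct compat_timespec __user *t,
      				      int flags);
      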
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  2. 09 May 2007, 26 commits