1. 24 June 2005, 40 commits
    • [PATCH] kprobes: Temporary disarming of reentrant probe · ea32c65c
      Prasanna S Panchamukhi authored
      In situations where a kprobes handler calls a routine which has a probe on it,
      kprobes_handler() disarms the new probe forever.  This patch removes the
      above limitation by temporarily disarming the new probe.  When another
      probe hits while handling the old probe, kprobes_handler() saves the
      previous kprobe state and handles the new probe without calling the new
      probe's registered handlers.  kprobe_post_handler() restores the previous
      kprobe state and normal execution continues.
      
      However, on the x86_64 architecture, reentrancy is provided only through
      pre_handler().  If a routine with a probe on it is called from a
      post_handler(), then the probes on that routine are disarmed forever, since
      the exception stack gets changed after the processor single-steps the
      instruction of the new probe.
      
      This patch includes generic changes to support temporary disarming on
      reentrancy of probes.
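
      A minimal sketch of the save/restore bookkeeping described above (the
      variable and helper names follow the arch kprobes code, but treat this as
      illustrative rather than the exact patch):

      static struct kprobe *kprobe_prev;
      static unsigned long kprobe_status_prev;

      /* called when a new probe hits while an old one is being handled */
      static inline void save_previous_kprobe(void)
      {
      	kprobe_prev = current_kprobe;
      	kprobe_status_prev = kprobe_status;
      }

      /* called from the post handler to resume handling of the old probe */
      static inline void restore_previous_kprobe(void)
      {
      	current_kprobe = kprobe_prev;
      	kprobe_status = kprobe_status_prev;
      }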
      Signed-off-by: Prasanna S Panchamukhi <prasanna@in.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      ea32c65c
    • [PATCH] Kprobes IA64: cmp ctype unc support · 1674eafc
      Anil S Keshavamurthy authored
      When patching the original instruction with the break instruction, kprobes
      currently tries to retain the original qualifying predicate (qp).  However,
      cmp.crel.ctype with ctype == unc is a special case: the instruction always
      needs to be executed irrespective of the qp.  Hence, if the instruction we
      are patching is of this type, we should not copy the original qp to the
      break instruction; we always want the break fault to happen so that we can
      emulate the instruction.
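
      As a sketch of that decision (the decoder helpers here are hypothetical;
      the real patch works on the encoded bundle):

      /* qp 0 is the hardwired always-true predicate p0, so a break
       * instruction predicated on it always faults */
      unsigned long qp = get_slot_qp(bundle, slot);	/* hypothetical */

      if (is_cmp_ctype_unc(bundle, slot))		/* hypothetical */
      	qp = 0;			/* do not retain the original qp */

      patch_break_inst(bundle, slot, qp);		/* hypothetical */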
      
      This patch is based on feedback given by David Mosberger.
      Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      1674eafc
    • [PATCH] Kprobes ia64 cleanup · 8bc76772
      Rusty Lynch authored
      A cleanup of the ia64 kprobes implementation such that all of the bundle
      manipulation logic is concentrated in arch_prepare_kprobe().
      
      With the current design for kprobes, the arch specific code only has a
      chance to return failure inside the arch_prepare_kprobe() function.
      
      This patch moves all of the work that was happening in arch_copy_kprobe()
      and most of the work that was happening in arch_arm_kprobe() into
      arch_prepare_kprobe().  By doing this we can add further robustness checks
      in arch_prepare_kprobe() and refuse to insert kprobes that will cause
      problems.
      Signed-off-by: Rusty Lynch <Rusty.lynch@intel.com>
      Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      8bc76772
    • [PATCH] Kprobes/IA64: support kprobe on branch/call instructions · cd2675bf
      Anil S Keshavamurthy authored
      This patch adds the support required for kprobes on branch/call
      instructions.
      Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      cd2675bf
    • [PATCH] Kprobes/IA64: architecture specific JProbes support · b2761dc2
      Anil S Keshavamurthy authored
      This patch adds IA64 architecture-specific JProbes support on top of
      Kprobes.
      Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Signed-off-by: Rusty Lynch <Rusty.lynch@intel.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      b2761dc2
    • [PATCH] Kprobes/IA64: arch specific handling · fd7b231f
      Anil S Keshavamurthy authored
      This is the IA64 arch-specific handling for Kprobes.
      Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Signed-off-by: Rusty Lynch <Rusty.lynch@intel.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      fd7b231f
    • [PATCH] Kprobes/IA64: kdebug die notification mechanism · 7213b252
      Anil S Keshavamurthy authored
      As many of you know, kprobes exists in the mainline kernel for various
      architectures including i386, x86_64, ppc64 and sparc64.  The patches
      following this mail are a port of Kprobes and Jprobes to IA64.

      I have tested these patches for kprobes and Jprobes and they seem to work
      fine.  I have tested this patch by inserting kprobes on various slots and
      various templates, including various types of branch instructions.

      I have also tested this patch using the tool
      http://marc.theaimsgroup.com/?l=linux-kernel&m=111657358022586&w=2 and
      kprobes for IA64 works great.

      Here is the list of TODO items; patches for these will appear soon.

      1) Support kprobes on "mov r1=ip" type of instruction
      2) Support Kprobes and Jprobes to exist on the same address
      3) Support Return probes
      4) Architecture independent cleanup of kprobes
      
      This patch adds the kdebug die notification mechanism needed by Kprobes.
      
      For a break instruction in a branch-type slot, imm21 is ignored and the
      value zero is placed in the IIM register; hence we need to handle kprobes
      in the switch case for zero.
      Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Signed-off-by: Rusty Lynch <Rusty.lynch@intel.com>
      
      From: Rusty Lynch <rusty.lynch@intel.com>
      
      At the point in traps.c where we receive a break with a zero value, we
      cannot say whether the break was the result of a kprobe or some other
      debug facility.

      This simple patch changes the informational string to a more correct "break
      0" value, and applies to the 2.6.12-rc2-mm2 tree with all the kprobes
      patches that were just recently included for the next mm cut.
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      7213b252
    • [PATCH] kprobes: moves lock-unlock to non-arch kprobe_flush_task · 0aa55e4d
      Hien Nguyen authored
      This patch moves the lock/unlock of the arch specific kprobe_flush_task()
      to the non-arch specific kprobe_flush_task().
      Signed-off-by: Hien Nguyen <hien@us.ibm.com>
      Acked-by: Prasanna S Panchamukhi <prasanna@in.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      0aa55e4d
    • [PATCH] Move kprobe [dis]arming into arch specific code · 7e1048b1
      Rusty Lynch authored
      The architecture independent code of the current kprobes implementation
      arms and disarms kprobes at registration time.  The problem is that the
      code assumes that arming and disarming are just done by a simple write
      of some magic value to an address.  This is problematic for ia64, where
      our instructions look more like structures and we cannot insert break
      points by just doing something like:
      
      *p->addr = BREAKPOINT_INSTRUCTION;
      
      The following patch to 2.6.12-rc4-mm2 adds two new architecture dependent
      functions:
      
           * void arch_arm_kprobe(struct kprobe *p)
           * void arch_disarm_kprobe(struct kprobe *p)
      
      and then adds the new functions for each of the architectures that already
      implement kprobes (sparc64/ppc64/i386/x86_64).
      
      I thought arch_[dis]arm_kprobe was the most descriptive of what was really
      happening, but each of the architectures already had a disarm_kprobe()
      function that was really a "disarm and do some other clean-up items as
      needed when you stumble across a recursive kprobe."  So...  I took the
      liberty of changing the code that was calling disarm_kprobe() to call
      arch_disarm_kprobe(), and then do the cleanup in the block of code dealing
      with the recursive kprobe case.

      So far this patch has been tested on i386, x86_64, and ppc64, but it still
      needs to be tested on sparc64.
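
      On i386, for instance, the two hooks boil down to swapping the first
      opcode byte (a sketch of the arch side; other architectures substitute
      their own instruction-patching logic):

      void arch_arm_kprobe(struct kprobe *p)
      {
      	*p->addr = BREAKPOINT_INSTRUCTION;
      	flush_icache_range((unsigned long) p->addr,
      			   (unsigned long) p->addr + sizeof(kprobe_opcode_t));
      }

      void arch_disarm_kprobe(struct kprobe *p)
      {
      	*p->addr = p->opcode;	/* restore the saved original opcode */
      	flush_icache_range((unsigned long) p->addr,
      			   (unsigned long) p->addr + sizeof(kprobe_opcode_t));
      }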
      Signed-off-by: Rusty Lynch <rusty.lynch@intel.com>
      Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      7e1048b1
    • [PATCH] x86_64 specific function return probes · 73649dab
      Rusty Lynch authored
      The following patch adds the x86_64 architecture specific implementation
      for function return probes.
      
      Function return probes are a mechanism built on top of kprobes that allows
      a caller to register a handler to be called when a given function exits.
      For example, to instrument the return path of sys_mkdir:
      
      static int sys_mkdir_exit(struct kretprobe_instance *i, struct pt_regs *regs)
      {
      	printk("sys_mkdir exited\n");
      	return 0;
      }
      static struct kretprobe return_probe = {
      	.handler = sys_mkdir_exit,
      };
      
      <inside setup function>
      
      return_probe.kp.addr = (kprobe_opcode_t *) kallsyms_lookup_name("sys_mkdir");
      if (register_kretprobe(&return_probe)) {
      	printk(KERN_DEBUG "Unable to register return probe!\n");
      	/* do error path */
      }
      
      <inside cleanup function>
      unregister_kretprobe(&return_probe);
      
      The way this works is that:
      
      * At system initialization time, kernel/kprobes.c installs a kprobe
        on a function called kretprobe_trampoline() that is implemented in
        arch/x86_64/kernel/kprobes.c (more on this later).
      
      * When a return probe is registered using register_kretprobe(),
        kernel/kprobes.c will install a kprobe on the first instruction of the
        targeted function with the pre handler set to arch_prepare_kretprobe()
        which is implemented in arch/x86_64/kernel/kprobes.c.
      
      * arch_prepare_kretprobe() will prepare a kretprobe instance that stores:
        - nodes for hanging this instance in an empty or free list
        - a pointer to the return probe
        - the original return address
        - a pointer to the stack address
      
        With all this stowed away, arch_prepare_kretprobe() then sets the return
        address for the targeted function to a special trampoline function called
        kretprobe_trampoline(), implemented in arch/x86_64/kernel/kprobes.c (see
        the sketch after this list).
      
      * The kprobe completes as normal, with control passing back to the target
        function that executes as normal, and eventually returns to our trampoline
        function.
      
      * Since a kprobe was installed on kretprobe_trampoline() during system
        initialization, control passes back to kprobes via the architecture
        specific function trampoline_probe_handler(), which will look up the
        instance in an hlist maintained by kernel/kprobes.c, and then call
        the handler function.

      * When trampoline_probe_handler() is done, the kprobes infrastructure
        single-steps the original instruction (in this case just a nop), and
        then calls trampoline_post_handler().  trampoline_post_handler() then
        looks up the instance again, puts the instance back on the free list,
        and then makes a long jump back to the original return instruction.
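
      A sketch of the return-address switch described above, modelled on the
      x86_64 side (get_free_rp_inst() and add_rp_inst() stand for the
      free/used-list helpers in kernel/kprobes.c):

      void arch_prepare_kretprobe(struct kretprobe *rp, struct pt_regs *regs)
      {
      	unsigned long *sara = (unsigned long *) regs->rsp;
      	struct kretprobe_instance *ri;

      	if ((ri = get_free_rp_inst(rp)) != NULL) {
      		ri->rp = rp;
      		ri->stack_addr = sara;
      		ri->ret_addr = (kprobe_opcode_t *) *sara;

      		/* hijack the return address: the function will "return"
      		 * into kretprobe_trampoline() instead of its caller */
      		*sara = (unsigned long) &kretprobe_trampoline;

      		add_rp_inst(ri);
      	} else {
      		rp->nmissed++;
      	}
      }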
      
      So to recap, to instrument the exit path of a function this implementation
      will cause four interruptions:
      
        - A breakpoint at the very beginning of the function allowing us to
          switch out the return address
        - A single step interruption to execute the original instruction that
          we replaced with the break instruction (normal kprobe flow)
        - A breakpoint in the trampoline function where our instrumented function
          returned to
        - A single step interruption to execute the original instruction that
          we replaced with the break instruction (normal kprobe flow)
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      73649dab
    • [PATCH] kprobes: function-return probes · b94cce92
      Hien Nguyen authored
      This patch adds function-return probes to kprobes for the i386
      architecture.  This enables you to establish a handler to be run when a
      function returns.
      
      1. API
      
      Two new functions are added to kprobes:
      
      	int register_kretprobe(struct kretprobe *rp);
      	void unregister_kretprobe(struct kretprobe *rp);
      
      2. Registration and unregistration
      
      2.1 Register
      
        To register a function-return probe, the user populates the following
        fields in a kretprobe object and calls register_kretprobe() with the
        kretprobe address as an argument:
      
        kp.addr - the function's address
      
        handler - this function is run after the ret instruction executes, but
        before control returns to the return address in the caller.
      
        maxactive - The maximum number of instances of the probed function that
        can be active concurrently.  For example, if the function is non-
        recursive and is called with a spinlock or mutex held, maxactive = 1
        should be enough.  If the function is non-recursive and can never
        relinquish the CPU (e.g., via a semaphore or preemption), NR_CPUS should
        be enough.  maxactive is used to determine how many kretprobe_instance
        objects to allocate for this particular probed function.  If maxactive <=
        0, it is set to a default value (if CONFIG_PREEMPT maxactive=max(10, 2 *
        NR_CPUS) else maxactive=NR_CPUS)
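
        As a sketch, the default selection described above amounts to:

          /* in register_kretprobe(), if the caller left maxactive <= 0 */
          if (rp->maxactive <= 0) {
          #ifdef CONFIG_PREEMPT
          	rp->maxactive = max(10, 2 * NR_CPUS);
          #else
          	rp->maxactive = NR_CPUS;
          #endif
          }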
      
        For example:
      
          struct kretprobe rp;
          rp.kp.addr = /* entrypoint address */
          rp.handler = /* return probe handler */
          rp.maxactive = /* e.g., 1 or NR_CPUS or 0, see the above explanation */
          register_kretprobe(&rp);
      
        The following field may also be of interest:
      
        nmissed - Initialized to zero when the function-return probe is
        registered, and incremented every time the probed function is entered but
        there is no kretprobe_instance object available for establishing the
        function-return probe (i.e., because maxactive was set too low).
      
      2.2 Unregister
      
        To unregister a function-return probe, the user calls
        unregister_kretprobe() with the same kretprobe object as registered
        previously.  If a probed function is running when the return probe is
        unregistered, the function will return as expected, but the handler won't
        be run.
      
      3. Limitations
      
      3.1 This patch supports only the i386 architecture, but patches for
          x86_64 and ppc64 are anticipated soon.
      
      3.2 Return probes operate by replacing the return address in the stack
          (or in a known register, such as the lr register for ppc).  This may
          cause __builtin_return_address(0), when invoked from the return-probed
          function, to return the address of the return-probe trampoline.
      
      3.3 This implementation uses the "Multiprobes at an address" feature in
          2.6.12-rc3-mm3.
      
      3.4 Due to a limitation in multi-probes, you cannot currently establish
          a return probe and a jprobe on the same function.  A patch to remove
          this limitation is being tested.
      
      This feature is required by SystemTap (http://sourceware.org/systemtap),
      and reflects ideas contributed by several SystemTap developers, including
      Will Cohen and Ananth Mavinakayanahalli.
      Signed-off-by: Hien Nguyen <hien@us.ibm.com>
      Signed-off-by: Prasanna S Panchamukhi <prasanna@in.ibm.com>
      Signed-off-by: Frederik Deweerdt <frederik.deweerdt@laposte.net>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      b94cce92
    • [PATCH] quota: consolidate code surrounding vfs_quota_on_mount · 84de856e
      Christoph Hellwig authored
      Move some code duplicated in both callers into vfs_quota_on_mount.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Acked-by: Jan Kara <jack@ucw.cz>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      84de856e
    • [PATCH] add check to /proc/devices read routines · ac20427e
      Neil Horman authored
      Patch to add a check to get_chrdev_list and get_blkdev_list to prevent
      reads of /proc/devices from spilling over the provided page if more than
      4096 bytes of string data are generated from all the registered character
      and block devices in a system.
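
      The guard is essentially a bounds check while walking the registered
      devices (a sketch with an assumed list structure, not the exact patch):

      static int get_chrdev_list(char *page)
      {
      	struct device_entry *dev;	/* structure name assumed */
      	int len = 0;

      	len += sprintf(page + len, "Character devices:\n");
      	list_for_each_entry(dev, &chrdev_list, list) {
      		/* stop before another line could spill past one page */
      		if (len >= PAGE_SIZE - 50)	/* margin illustrative */
      			break;
      		len += sprintf(page + len, "%3d %s\n",
      			       dev->major, dev->name);
      	}
      	return len;
      }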
      Signed-off-by: Neil Horman <nhorman@redhat.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: <viro@parcelfarce.linux.theplanet.co.uk>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      ac20427e
    • [PATCH] streamline preempt_count type across archs · dcd497f9
      Jesper Juhl authored
      The preempt_count member of struct thread_info is currently either defined
      as int, unsigned int or __s32 depending on arch.  This patch makes the type
      of preempt_count an int on all archs.
      
      Having preempt_count be an unsigned type prevents the catching of
      preempt_count < 0 bugs, and using int on some archs and __s32 on others is
      not exactly "neat" - much nicer when it's just int all over.
      
      A previous version of this patch was already ACK'ed by Robert Love, and the
      only change in this version of the patch compared to the one he ACK'ed is
      that this one also makes sure the preempt_count member is consistently
      commented.
      Signed-off-by: Jesper Juhl <juhl-lkml@dif.dk>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      dcd497f9
    • [PATCH] optimise loop driver a bit · 35a82d1a
      Nick Piggin authored
      Looks like locking can be optimised quite a lot.  Increase lock widths
      slightly so lo_lock is taken fewer times per request.  Also it was quite
      trivial to cover lo_pending with that lock, and remove the atomic
      requirement.  This also makes memory ordering explicitly correct, which is
      nice (not that I particularly saw any mem ordering bugs).
      
      The test was reading 4 250MB files in parallel on an ext2-on-tmpfs
      filesystem (1K block size, 4K page size).  The system is a 2-socket Xeon
      with HT (4 threads).
      
      intel:/home/npiggin# umount /dev/loop0 ; mount /dev/loop0 /mnt/loop ; /usr/bin/time ./mtloop.sh
      
      Before:
      0.24user 5.51system 0:02.84elapsed 202%CPU (0avgtext+0avgdata 0maxresident)k
      0.19user 5.52system 0:02.88elapsed 198%CPU (0avgtext+0avgdata 0maxresident)k
      0.19user 5.57system 0:02.89elapsed 198%CPU (0avgtext+0avgdata 0maxresident)k
      0.22user 5.51system 0:02.90elapsed 197%CPU (0avgtext+0avgdata 0maxresident)k
      0.19user 5.44system 0:02.91elapsed 193%CPU (0avgtext+0avgdata 0maxresident)k
      
      After:
      0.07user 2.34system 0:01.68elapsed 143%CPU (0avgtext+0avgdata 0maxresident)k
      0.06user 2.37system 0:01.68elapsed 144%CPU (0avgtext+0avgdata 0maxresident)k
      0.06user 2.39system 0:01.68elapsed 145%CPU (0avgtext+0avgdata 0maxresident)k
      0.06user 2.36system 0:01.68elapsed 144%CPU (0avgtext+0avgdata 0maxresident)k
      0.06user 2.42system 0:01.68elapsed 147%CPU (0avgtext+0avgdata 0maxresident)k
      Signed-off-by: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      35a82d1a
    • [PATCH] create a kstrdup library function · 543537bd
      Paulo Marques authored
      This patch creates a new kstrdup library function and changes the "local"
      implementations in several places to use this function.
      
      Most of the changes come from the sound and net subsystems.  The sound part
      had already been acknowledged by Takashi Iwai and the net part by David S.
      Miller.
      
      I left UML alone for now because I would need more time to read the code
      carefully before making changes there.
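
      The new helper is essentially the following (a sketch; the 2005 patch
      spelled the flags parameter with the gfp type of the day):

      char *kstrdup(const char *s, gfp_t gfp)
      {
      	size_t len;
      	char *buf;

      	if (!s)
      		return NULL;

      	len = strlen(s) + 1;		/* include the trailing NUL */
      	buf = kmalloc(len, gfp);
      	if (buf)
      		memcpy(buf, s, len);
      	return buf;
      }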
      Signed-off-by: Paulo Marques <pmarques@grupopie.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      543537bd
    • [PATCH] fix for prune_icache()/forced final iput() races · 991114c6
      Alexander Viro authored
      Based on analysis and a patch from Russ Weight <rweight@us.ibm.com>.
      
      There is a race condition that can occur if an inode is allocated and then
      released (using iput) during the ->fill_super functions.  The race
      condition is between kswapd and mount.
      
      For most filesystems this can only happen in an error path when kswapd is
      running concurrently.  For isofs, however, the error can occur in a more
      common code path (which is how the bug was found).
      
      The logic here is "we want final iput() to free inode *now* instead of
      letting it sit in cache if fs is going down or had not quite come up".  The
      problem is with kswapd seeing such inodes in the middle of being killed and
      happily taking over.
      
      The clean solution would be to tell kswapd to leave those inodes alone and
      let our final iput deal with them.  I.e.  add a new flag
      (I_FORCED_FREEING), set it before write_inode_now() there and make
      prune_icache() leave those alone.
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      991114c6
    • [PATCH] timers: introduce try_to_del_timer_sync() · fd450b73
      Oleg Nesterov authored
      This patch splits del_timer_sync() into 2 functions.  The new one,
      try_to_del_timer_sync(), returns -1 when it hits an executing timer.

      It can be used in interrupt context, or when the caller holds locks which
      can prevent completion of the timer's handler.
      
      NOTE.  Currently it can't be used in interrupt context in the UP case,
      because ->running_timer is used only with CONFIG_SMP.

      Should the need arise, it is possible to kill the #ifdef CONFIG_SMP in
      set_running_timer(); it is cheap.
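
      A typical usage pattern looks like this (a sketch; dev and its lock are
      hypothetical):

      /* delete a timer while holding a lock that the timer handler also
       * takes - plain del_timer_sync() could deadlock here */
      for (;;) {
      	if (try_to_del_timer_sync(&dev->timer) >= 0)
      		break;		/* deleted, or it was not pending */
      	/* the handler is running right now: back off and retry */
      	spin_unlock_irq(&dev->lock);
      	cpu_relax();
      	spin_lock_irq(&dev->lock);
      }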
      Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      fd450b73
    • [PATCH] timers fixes/improvements · 55c888d6
      Oleg Nesterov authored
      This patch tries to solve the following problems:
      
      1. del_timer_sync() is racy. The timer can be fired again after
         del_timer_sync() has checked all cpus and before it rechecks
         timer_pending().
      
      2. It has scalability problems. All cpus are scanned to determine
         if the timer is running on that cpu.
      
         With this patch del_timer_sync is O(1) and no slower than plain
         del_timer(pending_timer), unless it has to actually wait for
         completion of the currently running timer.
      
         The only restriction is that the recurring timer should not use
         add_timer_on().
      
      3. The timers are not serialized wrt themselves.

         If CPU_0 does mod_timer(jiffies+1) while the timer is currently
         running on CPU_1, it is quite possible that the local interrupt on
         CPU_0 will start that timer before it has finished on CPU_1.
      
      4. The timer locking is suboptimal. __mod_timer() takes 3 locks
         at once and still requires wmb() in del_timer/run_timers.
      
         The new implementation takes 2 locks sequentially and does not
         need memory barriers.
      
      Currently ->base != NULL means that the timer is pending. In that case
      ->base.lock is used to lock the timer. __mod_timer() also takes timer->lock
      because ->base can be NULL.
      
      This patch uses timer->entry.next != NULL as indication that the timer is
      pending. So it does __list_del(), entry->next = NULL instead of list_del()
      when the timer is deleted.
      
      The ->base field is used for hashed locking only, it is initialized
      in init_timer() which sets ->base = per_cpu(tvec_bases). When the
      tvec_bases.lock is locked, it means that all timers which are tied
      to this base via timer->base are locked, and the base itself is locked
      too.
      
      So __run_timers/migrate_timers can safely modify all timers which could
      be found on ->tvX lists (pending timers).
      
      When the timer's base is locked, and the timer removed from ->entry list
      (which means that _run_timers/migrate_timers can't see this timer), it is
      possible to set timer->base = NULL and drop the lock: the timer remains
      locked.
      
      This patch adds lock_timer_base() helper, which waits for ->base != NULL,
      locks the ->base, and checks it is still the same.
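
      A sketch of that helper, modelled on the patch's kernel/timer.c (the
      timer_base_s struct is the one introduced below):

      static struct timer_base_s *lock_timer_base(struct timer_list *timer,
      					    unsigned long *flags)
      {
      	struct timer_base_s *base;

      	for (;;) {
      		base = timer->base;
      		if (likely(base != NULL)) {
      			spin_lock_irqsave(&base->lock, *flags);
      			if (likely(base == timer->base))
      				return base;	/* same base: timer is locked */
      			/* the timer migrated; unlock and retry */
      			spin_unlock_irqrestore(&base->lock, *flags);
      		}
      		cpu_relax();
      	}
      }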
      
      __mod_timer() schedules the timer on the local CPU and changes its base.
      However, it does not lock both old and new bases at once. It locks the
      timer via lock_timer_base(), deletes the timer, sets ->base = NULL, and
      unlocks the old base. Then __mod_timer() locks new_base, sets ->base =
      new_base, and adds this timer. This simplifies the code, because an AB-BA
      deadlock is not possible. __mod_timer() also ensures that the timer's base
      is not changed while the timer's handler is running on the old base.
      
      __run_timers() and del_timer() do not change ->base anymore, they only
      clear the pending flag.
      
      So del_timer_sync() can test timer->base->running_timer == timer to detect
      whether it is running or not.
      
      We don't need timer_list->lock anymore, this patch kills it.
      
      We also don't need barriers. del_timer() and __run_timers() used smp_wmb()
      before clearing timer's pending flag. It was needed because __mod_timer()
      did not lock old_base if the timer is not pending, so __mod_timer()->list_add()
      could race with del_timer()->list_del(). With this patch these functions are
      serialized through base->lock.
      
      One problem: TIMER_INITIALIZER can't use per_cpu(tvec_bases), so this
      patch adds a global
      
              struct timer_base_s {
                      spinlock_t lock;
                      struct timer_list *running_timer;
              } __init_timer_base;
      
      which is used by TIMER_INITIALIZER. The corresponding fields in the
      tvec_t_base_s struct are replaced by a struct timer_base_s t_base member.
      
      It is indeed ugly. But this can't have scalability problems. The global
      __init_timer_base.lock is used only when __mod_timer() is called for the
      first time AND the timer was compile-time initialized. After that the
      timer migrates to the local CPU.
      Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Renaud Lienhart <renaud.lienhart@free.fr>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      55c888d6
    • [PATCH] blk: remove BLK_TAGS_{PER_LONG|MASK} · f7d37d02
      Tejun Heo authored
      Replace BLK_TAGS_PER_LONG with BITS_PER_LONG and remove unused BLK_TAGS_MASK.
      Signed-off-by: Tejun Heo <htejun@gmail.com>
      Acked-by: Jens Axboe <axboe@suse.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      f7d37d02
    • [PATCH] blk: remove blk_queue_tag->real_max_depth optimization · fa72b903
      Tejun Heo authored
      blk_queue_tag->real_max_depth was used to optimize out unnecessary
      allocations/frees on tag resize.  However, the whole thing was very broken:
      tag_map was never allocated to real_max_depth, resulting in access beyond
      the end of the map, and bits in [max_depth..real_max_depth] were set when
      initializing a map and copied when resizing, resulting in pre-occupied
      tags.

      As the gain of the optimization is very small, well, almost nil, remove
      the whole thing.
      Signed-off-by: Tejun Heo <htejun@gmail.com>
      Acked-by: Jens Axboe <axboe@suse.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      fa72b903
    • [PATCH] xen: x86_64: Add macro for debugreg · e9129e56
      Vincent Hanquez authored
      Add 2 macros to set and get debugreg on x86_64.  This is useful for Xen
      because it will then only need to redefine each macro as a hypervisor call.
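
      The two macros amount to a mov to/from the debug registers (a sketch of
      the x86_64 flavour):

      #define get_debugreg(var, register)			\
      		__asm__("movq %%db" #register ", %0"	\
      			: "=r" (var))
      #define set_debugreg(value, register)		\
      		__asm__("movq %0, %%db" #register	\
      			: /* no output */		\
      			: "r" (value))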
      Signed-off-by: Vincent Hanquez <vincent.hanquez@cl.cam.ac.uk>
      Cc: Ian Pratt <m+Ian.Pratt@cl.cam.ac.uk>
      Cc: Andi Kleen <ak@muc.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      e9129e56
    • [PATCH] xen: x86: Rename usermode macro · fa1e1bdf
      Vincent Hanquez authored
      Rename user_mode to user_mode_vm and add a user_mode macro similar to the
      x86-64 one.

      This is useful for Xen because the linux xen kernel does not run at the
      same privilege level as a vanilla linux kernel, and with this we just need
      to redefine user_mode().
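
      A sketch of the resulting i386 definitions (user_mode() checks only the
      CS privilege level; user_mode_vm() additionally counts vm86 mode as user
      mode):

      #define user_mode(regs)    (3 & (regs)->xcs)
      #define user_mode_vm(regs) ((VM_MASK & (regs)->eflags) || user_mode(regs))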
      Signed-off-by: Vincent Hanquez <vincent.hanquez@cl.cam.ac.uk>
      Cc: Ian Pratt <m+Ian.Pratt@cl.cam.ac.uk>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      fa1e1bdf
    • [PATCH] xen: x86: add macro for debugreg · f5012310
      Vincent Hanquez authored
      Add 2 macros to set and get debugreg on x86.  This is useful for Xen
      because it will then only need to redefine each macro as a hypervisor call.
      Signed-off-by: Vincent Hanquez <vincent.hanquez@cl.cam.ac.uk>
      Cc: Ian Pratt <m+Ian.Pratt@cl.cam.ac.uk>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      f5012310
    • [PATCH] eliminate duplicate rdpmc definition · 32ecd42b
      Jan Beulich authored
      Eliminate duplicate definition of rdpmc in x86-64's mtrr.h.
      Signed-off-by: Jan Beulich <jbeulich@novell.com>
      Cc: Andi Kleen <ak@muc.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      32ecd42b
    • [PATCH] x86: cpu_khz type fix · a3a255e7
      Andrew Morton authored
      x86_64's cpu_khz is unsigned int and there is no reason why x86 needs to use
      unsigned long.
      
      So make cpu_khz unsigned int on x86 as well.
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      a3a255e7
    • [PATCH] x86: #include asm/uaccess.h in asm/checksum.h · a9ed8817
      Alexey Dobriyan authored
      csum_and_copy_to_user is static inline and uses VERIFY_WRITE.  This patch
      allows asm/uaccess.h to be removed from i386_ksyms.c without dependency
      surprises.
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      a9ed8817
    • [PATCH] ia64: Selectable Timer Interrupt Frequency · b5d23e5b
      Christoph Lameter authored
      This allows a selectable timer interrupt frequency of 100, 250 or 1000 HZ.
      Reducing the timer frequency may have important performance benefits on
      large systems.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      b5d23e5b
    • [PATCH] i386: Selectable Frequency of the Timer Interrupt · 59121003
      Christoph Lameter authored
      Make the timer frequency selectable. The timer interrupt may cause bus
      and memory contention in large NUMA systems since the interrupt occurs
      on each processor HZ times per second.
      Signed-off-by: Christoph Lameter <christoph@lameter.com>
      Signed-off-by: Shai Fultheim <shai@scalex86.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      59121003
    • [PATCH] Do not enforce unique IO_APIC_ID check for xAPIC systems (i386) · ca05fea6
      Natalie Protasevich authored
      This patch is per Andi's request to remove NO_IOAPIC_CHECK from genapic and
      use heuristics to prevent the unique I/O APIC ID check for systems that
      don't need it.  The patch disables the unique I/O APIC ID check for
      Xeon-based and other platforms that don't use the serial APIC bus for
      interrupt delivery.  Andi stated that AMD systems don't need unique
      IO_APIC_IDs either.
      Signed-off-by: Natalie Protasevich <Natalie.Protasevich@unisys.com>
      Cc: Andi Kleen <ak@muc.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      ca05fea6
    • [PATCH] NUMA aware block device control structure allocation · 1946089a
      Christoph Lameter authored
      Patch to allocate the control structures for ide devices on the node of
      the device itself (for NUMA systems).  The patch depends on the Slab API
      change patch by Manfred and me (in mm) and the pcidev_to_node patch that I
      posted today.

      Does some realignment too.
      Signed-off-by: Justin M. Forbes <jmforbes@linuxtx.org>
      Signed-off-by: Christoph Lameter <christoph@lameter.com>
      Signed-off-by: Pravin Shelar <pravin@calsoftinc.com>
      Signed-off-by: Shobhit Dayal <shobhit@calsoftinc.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      1946089a
    • [PATCH] x86/x86_64: pcibus_to_node · 8c5a0908
      Christoph Lameter authored
      Define pcibus_to_node to be able to figure out which NUMA node contains a
      given PCI device.  This defines pcibus_to_node(bus) in
      include/linux/topology.h and adjusts the macros for i386 and x86_64 that
      already provided a way to determine the cpumask of a pci device.
      
      x86_64 was changed to not build an array of cpumasks anymore.  Instead an
      array of nodes is built, which can be used to generate the cpumask via
      node_to_cpumask.
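
      The generic fallback plus the cpumask derivation look roughly like this
      (a sketch of the generic topology headers):

      #ifndef pcibus_to_node
      #define pcibus_to_node(bus)	(-1)
      #endif

      #ifndef pcibus_to_cpumask
      #define pcibus_to_cpumask(bus)				\
      	(pcibus_to_node(bus) == -1 ?			\
      		CPU_MASK_ALL :				\
      		node_to_cpumask(pcibus_to_node(bus)))
      #endif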
      Signed-off-by: Christoph Lameter <christoph@lameter.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      8c5a0908
    • [PATCH] ppc64: pcibus_to_node fix · e164f557
      Christoph Lameter authored
      asm-generic/topology.h must also be included if CONFIG_NUMA is set in order
      to provide the fallback pcibus_to_node function.
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      e164f557
    • [PATCH] m32r: build fix for asm-m32r/topology.h · 4a352936
      Hirokazu Takata authored
      Use asm-generic/topology.h to fix yet another pcibus_to_node() build error.
      
      Cc: Christoph Lameter <clameter@engr.sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      4a352936
    • [PATCH] Platform SMIs and their interference with tsc based delay calibration · 8a9e1b0f
      Venkatesh Pallipadi authored
      Issue:
      Current tsc-based delay calibration can result in significant errors in
      the loops_per_jiffy count when platform events like SMIs
      (System Management Interrupts, which are non-maskable) are present. This
      could lead to a kernel panic(). This issue is becoming more visible with
      the 2.6 kernel (as the default HZ is 1000) and on platforms with higher
      SMI handling latencies. During boot time, SMIs are mostly used by the BIOS
      (for things like legacy keyboard emulation).
      
      Description:
      The pseudocode for the current delay calibration with tsc-based delay
      looks like this:
      (0) Estimate a value for loops_per_jiffy
      (1) While (loops_per_jiffy estimate is not accurate enough)
      (2)   wait for jiffy transition (jiffy1)
      (3)   note down the current tsc (tsc1)
      (4)   loop until tsc becomes tsc1 + loops_per_jiffy
      (5)   check whether jiffy changed since jiffy1 or not and refine the
            loops_per_jiffy estimate
      
      Consider the following cases.
      Case 1:
      If SMIs happen between (2) and (3) above, we can end up with a
      loops_per_jiffy value that is too low.  This results in shorter delays and
      the kernel can panic() during boot (mostly at IOAPIC timer initialization,
      in timer_irq_works(), as we don't have enough timer interrupts in the
      specified interval).
      
      Case 2:
      If SMIs happen between (3) and (4) above, then we can end up with a
      loops_per_jiffy value that is too high.  And with the current i386 code, a
      too-high lpj value (greater than 17M) can result in an overflow in
      delay.c:__const_udelay(), again resulting in shorter delays and a panic().
      
      Solution:
      The patch below makes the calibration routine aware of asynchronous events
      like SMIs.  We increase the delay calibration time and also identify any
      significant errors (greater than 12.5%) in the calibration and notify the
      user.
      
      Patch below changes both i386 and x86-64 architectures to use this
      new and improved calibrate_delay_direct() routine.
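
      The cross-checking idea can be sketched as follows (the cycle-counter and
      jiffy helpers here are hypothetical stand-ins, and the 12.5% threshold
      comes from the text above):

      #define LPJ_SAMPLES	5

      static unsigned long estimate_lpj_once(void)
      {
      	unsigned long long t1, t2;

      	wait_for_jiffy_edge();		/* hypothetical: align to a tick */
      	t1 = read_cycle_counter();	/* hypothetical: e.g. rdtsc */
      	wait_n_jiffies(10);		/* hypothetical */
      	t2 = read_cycle_counter();
      	return (unsigned long) ((t2 - t1) / 10);
      }

      unsigned long calibrate_delay_direct_sketch(void)
      {
      	unsigned long lpj, min = ~0UL, max = 0;
      	int i;

      	for (i = 0; i < LPJ_SAMPLES; i++) {
      		lpj = estimate_lpj_once();
      		if (lpj < min)
      			min = lpj;
      		if (lpj > max)
      			max = lpj;
      	}
      	/* a single SMI disturbs only some samples, so a wide spread
      	 * between estimates flags an unreliable calibration */
      	if (max - min > max / 8)	/* > 12.5% disagreement */
      		printk("delay calibration disturbed (SMI?), "
      		       "loops_per_jiffy may be inaccurate\n");
      	return (min + max) / 2;		/* consolidation is illustrative */
      }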
      Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
      Signed-off-by: Adrian Bunk <bunk@stusta.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      8a9e1b0f
    • [PATCH] add x86-64 specific support for sparsemem · bbfceef4
      Matt Tolentino authored
      This patch adds in the necessary support for sparsemem such that x86-64
      kernels may use sparsemem as an alternative to discontigmem for NUMA
      kernels.  Note that this does not preclude one from continuing to build
      NUMA kernels using discontigmem, but merely allows the option to build
      NUMA kernels with sparsemem.
      
      Interestingly, the use of sparsemem in lieu of discontigmem in NUMA kernels
      results in reduced text size for otherwise equivalent kernels as shown in
      the example builds below:
      
         text	   data	    bss	    dec	    hex	filename
      2371036	 765884	1237108	4374028	 42be0c	vmlinux.discontig
      2366549	 776484	1302772	4445805	 43d66d	vmlinux.sparse
      Signed-off-by: Matt Tolentino <matthew.e.tolentino@intel.com>
      Signed-off-by: Dave Hansen <haveblue@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      bbfceef4
    • [PATCH] reorganize x86-64 NUMA and DISCONTIGMEM config options · 2b97690f
      Matt Tolentino authored
      In order to use the alternative sparsemem implementation for NUMA kernels,
      we need to reorganize the config options.  This patch effectively abstracts
      out the CONFIG_DISCONTIGMEM options to CONFIG_NUMA in most cases.  Thus,
      the discontigmem implementation may be employed as always, but the
      sparsemem implementation may be used alternatively.
      Signed-off-by: Matt Tolentino <matthew.e.tolentino@intel.com>
      Signed-off-by: Dave Hansen <haveblue@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      2b97690f
    • [PATCH] ppc64: sparsemem memory model · 145e6642
      Andy Whitcroft authored
      Provide the architecture specific implementation for SPARSEMEM for PPC64
      systems.
      Signed-off-by: Andy Whitcroft <apw@shadowen.org>
      Signed-off-by: Dave Hansen <haveblue@us.ibm.com>
      Signed-off-by: Mike Kravetz <kravetz@us.ibm.com> (in part)
      Signed-off-by: Martin Bligh <mbligh@aracnet.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      145e6642
    • [PATCH] ppc64: add early_pfn_to_nid · 510f8fa7
      Andy Whitcroft authored
      Provide an implementation of early_pfn_to_nid for PPC64.  This is used by
      memory models to determine the node from which to take allocations before the
      memory allocators are fully initialised.
      Signed-off-by: Andy Whitcroft <apw@shadowen.org>
      Signed-off-by: Dave Hansen <haveblue@us.ibm.com>
      Signed-off-by: Martin Bligh <mbligh@aracnet.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      510f8fa7
    • [PATCH] sparsemem hotplug base · 29751f69
      Andy Whitcroft authored
      Make sparse's initialization accessible at runtime.  This allows sparse
      mappings to be created after boot in a hotplug situation.
      
      This patch is separated from the previous one just to give an indication
      of how much of the sparse infrastructure is *just* for hotplug memory.
      
      The section_mem_map doesn't really store a pointer.  It stores something that
      is convenient to do some math against to get a pointer.  It isn't valid to
      just do *section_mem_map, so I don't think it should be stored as a pointer.
      
      There are a couple of things I'd like to store about a section.  First of all,
      the fact that it is !NULL does not mean that it is present.  There could be
      such a combination where section_mem_map *is* NULL, but the math gets you
      properly to a real mem_map.  So, I don't think that check is safe.
      
      Since we're storing 32-bit-aligned structures, we have a few bits in the
      bottom of the pointer to play with.  Use one bit to encode whether there's
      really a mem_map there, and the other one to tell whether there's a valid
      section there.  We need to distinguish between the two because sometimes
      there's a gap between when a section is discovered to be present and when we
      can get the mem_map for it.
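
      A sketch of the encoding this describes (two low bits of section_mem_map
      carry state, the rest encodes the mem_map):

      #define SECTION_MARKED_PRESENT	(1UL << 0)
      #define SECTION_HAS_MEM_MAP	(1UL << 1)
      #define SECTION_MAP_LAST_BIT	(1UL << 2)
      #define SECTION_MAP_MASK	(~(SECTION_MAP_LAST_BIT - 1))

      static inline struct page *__section_mem_map_addr(struct mem_section *ms)
      {
      	unsigned long map = ms->section_mem_map;

      	map &= SECTION_MAP_MASK;	/* strip the flag bits */
      	return (struct page *) map;
      }

      static inline int valid_section(struct mem_section *section)
      {
      	return (section && (section->section_mem_map & SECTION_HAS_MEM_MAP));
      }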
      Signed-off-by: Dave Hansen <haveblue@us.ibm.com>
      Signed-off-by: Andy Whitcroft <apw@shadowen.org>
      Signed-off-by: Jack Steiner <steiner@sgi.com>
      Signed-off-by: Bob Picco <bob.picco@hp.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      29751f69