1. 30 Mar 2012, 1 commit
  2. 29 Mar 2012, 1 commit
  3. 13 Jan 2012, 1 commit
  4. 14 Jun 2011, 1 commit
  5. 24 May 2011, 1 commit
  6. 18 Aug 2010, 1 commit
    • Make do_execve() take a const filename pointer · d7627467
      Authored by David Howells
      Make do_execve() take a const filename pointer so that kernel_execve() compiles
      correctly on ARM:
      
      arch/arm/kernel/sys_arm.c:88: warning: passing argument 1 of 'do_execve' discards qualifiers from pointer target type
      
      This also requires the argv and envp arguments to be consted twice, once for
      the pointer array and once for the strings the array points to.  This is
      because do_execve() passes a pointer to the filename (now const) to
      copy_strings_kernel().  A simpler alternative would be to cast the filename
      pointer in do_execve() when it's passed to copy_strings_kernel().
      
      do_execve() may not change any of the strings it is passed as part of the argv
      or envp lists, as some of them are in .rodata, so marking these strings as
      const should be fine.
      
      Further kernel_execve() and sys_execve() need to be changed to match.
      
      This has been test built on x86_64, frv, arm and mips.
      Signed-off-by: David Howells <dhowells@redhat.com>
      Tested-by: Ralf Baechle <ralf@linux-mips.org>
      Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
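      As a rough illustration of the "consted twice" argv/envp typing described
      above, here is a small userspace sketch; the helper and variable names are
      hypothetical and not taken from the kernel:

        #include <stdio.h>

        /* Both the pointer array and the strings it points to are const,
         * so string literals living in .rodata cannot be modified here. */
        static void show_args(const char *const argv[])
        {
                int i;

                for (i = 0; argv[i] != NULL; i++)
                        printf("argv[%d] = %s\n", i, argv[i]);
        }

        int main(void)
        {
                const char *args[] = { "/bin/true", "--version", NULL };

                show_args(args);
                return 0;
        }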
  7. 16 Aug 2010, 1 commit
  8. 30 Mar 2010, 1 commit
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking... · 5a0e3ad6
      Authored by Tejun Heo
      include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      The percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include those
      headers directly instead of assuming their availability.  As this conversion
      needs to touch a large number of source files, the following script is
      used as the basis of the conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following.
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there, i.e. if only gfp is used,
        gfp.h, and if slab is used, slab.h.
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to put the new include such that its order conforms
        to its surroundings.  It's put in the include block which contains
        core kernel includes, in the same order that the rest are ordered -
        alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
        doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints out
        an error message indicating which .h file needs to be added to the
        file.
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed manual addition, and for others adding it to an
         implementation .h or embedding .c file was more appropriate.  This
         step added inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them, as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored, as stuff from gfp.h was usually
         widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on the arch to make things
         build (like ipr on powerpc/64, which failed due to a missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as a bisection point.
      
      Given the fact that I had only a couple of failures from the tests in step
      7, I'm fairly confident about the coverage of this conversion patch.
      If there is a breakage, it's likely to be something in one of the arch
      headers, which should be easily discoverable on most builds of
      the specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
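      A minimal sketch of what the conversion means for a single file, assuming a
      hypothetical piece of code that uses kmalloc() and GFP_KERNEL: the needed
      headers are included directly instead of being picked up implicitly through
      percpu.h:

        #include <linux/gfp.h>    /* GFP_KERNEL and other allocation flags */
        #include <linux/slab.h>   /* kmalloc(), kfree() */

        struct example_buf {
                char data[64];
        };

        /* Hypothetical helpers: they rely only on the headers included above. */
        static struct example_buf *example_buf_alloc(void)
        {
                return kmalloc(sizeof(struct example_buf), GFP_KERNEL);
        }

        static void example_buf_free(struct example_buf *buf)
        {
                kfree(buf);
        }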
  9. 26 Jan 2010, 1 commit
    • sh: Mass ctrl_in/outX to __raw_read/writeX conversion. · 9d56dd3b
      Authored by Paul Mundt
      The old ctrl in/out routines are non-portable and unsuitable for
      cross-platform use. While drivers/sh has already been sanitized, there
      is still quite a lot of code that is not. This converts the arch/sh/ bits
      over, which permits us to flag the routines as deprecated whilst still
      building with -Werror for the architecture code, and to ensure that
      future users are not added.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
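      A hedged before/after sketch of the kind of conversion this describes; the
      iomem base and the bit being set are made up for illustration:

        #include <linux/io.h>

        static void example_set_enable_bit(void __iomem *base)
        {
                unsigned long v;

                /* old, non-portable style:
                 *   v = ctrl_inl((unsigned long)base);
                 *   ctrl_outl(v | 0x1, (unsigned long)base);
                 */
                v = __raw_readl(base);
                __raw_writel(v | 0x1, base);
        }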
  10. 20 Jan 2010, 1 commit
    • sh: machine_ops based reboot support. · fbb82b03
      Authored by Paul Mundt
      This provides a machine_ops-based reboot interface loosely cloned from
      x86, and converts the native sh32 and sh64 cases over to it.
      
      Necessary both for tying in SMP support and also enabling platforms like
      SDK7786 to add support for their microcontroller-based power managers.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
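      The machine_ops idea is the usual struct-of-function-pointers pattern; a
      rough sketch with assumed (not actual SH) names:

        struct example_machine_ops {
                void (*restart)(char *cmd);
                void (*halt)(void);
                void (*power_off)(void);
        };

        static void native_machine_restart(char *cmd)
        {
                /* the native sh32/sh64 reset sequence would go here */
        }

        /* Platforms such as SDK7786 could override individual hooks. */
        static struct example_machine_ops machine_ops = {
                .restart = native_machine_restart,
        };

        void example_machine_restart(char *cmd)
        {
                if (machine_ops.restart)
                        machine_ops.restart(cmd);
        }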
  11. 13 Jan 2010, 1 commit
    • sh: Move over to dynamically allocated FPU context. · 0ea820cf
      Authored by Paul Mundt
      This follows the x86 xstate changes and implements a task_xstate slab
      cache that is dynamically sized to match one of hard FP/soft FP/FPU-less.
      
      This also tidies up and consolidates some of the SH-2A/SH-4 FPU
      fragmentation. Now fpu state restorers are commonly defined, with the
      init_fpu()/fpu_init() mess reworked to follow the x86 convention.
      The fpu_init() register initialization has been replaced by xstate setup
      followed by writing out to hardware via the standard restore path.
      
      As init_fpu() now performs a slab allocation, a secondary lighter-weight
      restorer is also introduced for the context switch.
      
      In the future the DSP state will be rolled in here, too.
      
      More work remains for math emulation and the SH-5 FPU, which presently
      uses its own special (UP-only) interfaces.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
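      A sketch of the dynamically sized per-task xstate cache idea, with
      illustrative names and a made-up size; the real code sizes the cache to
      match the detected hard FP / soft FP / FPU-less configuration:

        #include <linux/init.h>
        #include <linux/slab.h>

        static struct kmem_cache *task_xstate_cachep;
        static unsigned int xstate_size = 512;  /* hypothetical, set at boot */

        static int __init example_xstate_cache_init(void)
        {
                task_xstate_cachep = kmem_cache_create("task_xstate", xstate_size,
                                                       16, SLAB_PANIC, NULL);
                return 0;
        }

        /* Allocated lazily, e.g. the first time a task touches the FPU. */
        static void *example_alloc_xstate(void)
        {
                return kmem_cache_alloc(task_xstate_cachep, GFP_KERNEL);
        }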
  12. 12 Jan 2010, 1 commit
  13. 05 Jan 2010, 1 commit
  14. 08 Dec 2009, 1 commit
    • sh: hw-breakpoints: Add preliminary support for SH-4A UBC. · 09a07294
      Authored by Paul Mundt
      This adds preliminary support for the SH-4A UBC to the hw-breakpoints API.
      Presently only a single channel is implemented, and the ptrace interface
      still needs to be converted. This is the first step to cleaning up the
      long-standing UBC mess, making the UBC more generally accessible, and
      finally making it SMP safe.
      
      An additional abstraction will be layered on top of this as with the perf
      events code to permit the various CPU families to wire up support for
      their own specific UBCs, as many variations exist.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
  15. 25 Nov 2009, 1 commit
  16. 24 Nov 2009, 2 commits
    • sh: Minor optimisations to FPU handling · d3ea9fa0
      Authored by Stuart Menefy
      A number of small optimisations to FPU handling, in particular:
      
       - move the task USEDFPU flag from the thread_info flags field (which
         is accessed asynchronously to the thread) to a new status field,
         which is only accessed by the thread itself. This allows locking to
         be removed in most cases, or reduced to a preempt_lock(). This
         mimics the i386 behaviour (see the sketch after this commit).
      
       - move the modification of regs->sr and thread_info->status flags out
         of save_fpu() to __unlazy_fpu(). This gives the compiler a better
         chance to optimise things, as well as making save_fpu() symmetrical
         with restore_fpu() and init_fpu().
      
       - implement prepare_to_copy(), so that when creating a thread, we can
         unlazy the FPU prior to copying the thread data structures.
      
      Also make sure that the FPU is disabled while in the kernel, in
      particular while booting, and for newly created kernel threads.
      
      In a very artificial benchmark, the execution time for 2500000
      context switches was reduced from 50 to 45 seconds.
      Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
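      The sketch referred to above: moving the flag to a thread-local status word
      means plain (non-atomic) accesses suffice. The bit name and field layout
      here are illustrative, not the actual SH definitions:

        #define TS_USEDFPU  0x0002  /* hypothetical status bit */

        struct example_thread_info {
                unsigned long flags;   /* accessed asynchronously, needs atomic ops */
                unsigned long status;  /* touched only by the owning thread */
        };

        static inline void example_set_used_fpu(struct example_thread_info *ti)
        {
                ti->status |= TS_USEDFPU;  /* plain store, no locking needed */
        }

        static inline int example_used_fpu(struct example_thread_info *ti)
        {
                return ti->status & TS_USEDFPU;
        }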
    • sh: add sleazy FPU optimization · a0458b07
      Authored by Giuseppe CAVALLARO
      sh port of the sleazy-fpu feature currently implemented for some architectures
      such as i386.
      
      Right now the SH kernel has a 100% lazy fpu behaviour.
      This is of course great for applications that have very sporadic or no FPU use.
      However, for very frequent FPU users you take an extra trap every context
      switch.
      The patch below adds a simple heuristic to this code: after 5 consecutive
      context switches of FPU use, the lazy behaviour is disabled and the context
      gets restored on every context switch.
      After 256 switches, this is reset and the 100% lazy behaviour is restored.
      
      Tests with LMbench showed no regression.
      I saw a little improvement due to the prefetching (~2%).
      
      The tests below also show that, with this sleazy patch, the number of FPU
      exceptions is indeed reduced.
      To test this, I hacked the lat_ctx LMbench benchmark to use the FPU a little more.
      
         Sleazy implementation
         ===========================================
         switch_to calls             |  79326
         sleazy calls                |  42577
         do_fpu_state_restore calls  |  59232
         restore_fpu calls           |  59032
      
         Exceptions: 0x800 (FPU disabled): 16604
      
         100% lazy (default implementation)
         ===========================================
         switch_to calls             |  79690
         do_fpu_state_restore calls  |  53299
         restore_fpu calls           |  53101
      
         Exceptions: 0x800 (FPU disabled): 53273
      Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
      Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
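      The heuristic boils down to a small per-task counter; a sketch with
      illustrative names (the wrap of an unsigned char is what resets the
      behaviour after 256 switches):

        struct example_task {
                unsigned char fpu_counter;  /* consecutive FPU-using switches */
        };

        static void example_restore_fpu(struct example_task *t)
        {
                /* load the task's FPU registers back into the hardware */
        }

        static void example_switch_in(struct example_task *next)
        {
                if (next->fpu_counter > 5)
                        /* frequent FPU user: restore eagerly, skip the lazy trap */
                        example_restore_fpu(next);
                /* otherwise stay lazy; the "FPU disabled" trap restores on demand */
        }

        static void example_fpu_trap(struct example_task *curr)
        {
                curr->fpu_counter++;  /* unsigned char: wraps after 256, back to lazy */
                example_restore_fpu(curr);
        }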
  17. 27 Oct 2009, 1 commit
  18. 24 Aug 2009, 1 commit
  19. 11 Jul 2009, 1 commit
    • sh: Mark __switch_to() as __notrace_funcgraph · 7816fecd
      Authored by Matt Fleming
      Annotate __switch_to() so that the function graph tracer does not try to
      trace it. Use __notrace_funcgraph, as opposed to notrace, so that other
      tracers can continue to trace __switch_to().
      
      The reason that we don't want to trace __switch_to() with the function
      graph tracer is because of how the return address stack in task_struct
      is implemented. When we enter __switch_to we store the real return
      address on prev's ret_stack. When we return from __switch_to() we've
      patched the return address on the kernel stack to be
      return_to_handler. Calling return_to_handler, we do:
      
             -> ftrace_return_to_handler()
                -> ftrace_pop_return_trace()
      
      Which tries to pop the real return address from current->ret_stack. The
      problem being that we stored the return address on prev->ret_stack, but
      current now points to next, and next->ret_stack doesn't contain the
      correct return address (and is possibly even empty).
      Signed-off-by: Matt Fleming <matt@console-pimps.org>
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
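      The annotation itself is small; a sketch of the shape of the change (the
      __switch_to() prototype below is paraphrased, not copied from the SH code):

        #include <linux/ftrace.h>
        #include <linux/sched.h>

        /* Excluded from the function graph tracer only; other tracers,
         * including the plain function tracer, still see __switch_to(). */
        __notrace_funcgraph struct task_struct *
        __switch_to(struct task_struct *prev, struct task_struct *next)
        {
                /* ... save prev's registers, load next's ... */
                return prev;
        }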
  20. 19 Jun 2009, 1 commit
  21. 18 Jun 2009, 1 commit
  22. 08 May 2009, 1 commit
  23. 04 Apr 2009, 1 commit
    • sh: Fix up DSP context save/restore. · 01ab1039
      Authored by Michael Trimarchi
      There were a number of issues with the DSP context save/restore code,
      mostly left-over relics from when it was introduced on SH3-DSP with
      little follow-up testing, resulting in things like task_pt_dspregs()
      referencing incorrect state on the stack.
      
      This follows the MIPS convention of tracking the DSP state in the
      thread_struct and handling the state save/restore in switch_to() and
      finish_arch_switch() respectively. The regset interface is also updated,
      which allows us to finally be rid of task_pt_dspregs() and the special
      cased task_pt_regs().
      Signed-off-by: Michael Trimarchi <michael@evidence.eu.com>
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
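      A rough sketch of keeping the DSP state in thread_struct and switching it
      at context-switch time; the structure and helper names here are
      illustrative, not the actual SH code:

        struct example_dsp_state {
                unsigned long regs[14];   /* made-up register count */
        };

        struct example_thread_struct {
                int dsp_enabled;
                struct example_dsp_state dsp;
        };

        static void example_switch_dsp(struct example_thread_struct *prev,
                                       struct example_thread_struct *next)
        {
                if (prev->dsp_enabled) {
                        /* save the DSP registers into prev->dsp */
                }
                if (next->dsp_enabled) {
                        /* restore the DSP registers from next->dsp */
                }
        }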
  24. 03 Apr 2009, 1 commit
  25. 22 Dec 2008, 5 commits
  26. 21 Sep 2008, 2 commits
    • sh: Add FPU registers to regset interface. · e7ab3cd2
      Authored by Paul Mundt
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
    • sh: Trivial trace_mark() instrumentation for core events. · 3d58695e
      Authored by Paul Mundt
      This implements a few trace points across events that are deemed
      interesting:
      
      	- The page fault handler / TLB miss
      	- IPC calls
      	- Kernel thread creation
      
      The original LTTng patch had the slow-path instrumented, which
      fails to account for the vast majority of events. In general
      placing this in the fast-path is not a huge performance hit, as
      we don't take page faults for kernel addresses.
      
      The other bits of interest are some of the other trap handlers, as
      well as the syscall entry/exit (which is better off being handled
      through the tracehook API). Most of the other trap handlers are corner
      cases where alternate means of notification exist, so there is little
      value in placing extra trace points in these locations.
      
      Based on top of the points provided both by the LTTng instrumentation
      patch as well as the patch shipping in the ST-Linux tree, albeit in a
      stripped down form.
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
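      For reference, a marker of this kind would look roughly like the sketch
      below, using the kernel markers API of that era; the event name, format
      string and surrounding function are made up:

        #include <linux/marker.h>

        static void example_handle_page_fault(unsigned long address, int writeaccess)
        {
                trace_mark(example_page_fault, "address %lu write %d",
                           address, writeaccess);

                /* ... actual fault handling ... */
        }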
  27. 08 Sep 2008, 2 commits
  28. 28 Jul 2008, 1 commit
    • sh/kernel/ cleanups · 4c1cfab1
      Authored by Adrian Bunk
      This patch contains the following cleanups:
      - make the following needlessly global code static:
        - cf-enabler.c: cf_init()
        - cpu/clock.c: __clk_enable()
        - cpu/clock.c: __clk_disable()
        - process_32.c: default_idle()
        - time_32.c: struct clocksource_sh
        - timers/timer-tmu.c: struct tmu_timer_ops
      - remove the following unused functions (no CONFIG_BLK_DEV_FD on sh):
        - process_{32,64}.c: disable_hlt()
        - process_{32,64}.c: enable_hlt()
      Signed-off-by: Adrian Bunk <bunk@kernel.org>
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
  29. 19 Jul 2008, 1 commit
    • nohz: prevent tick stop outside of the idle loop · b8f8c3cf
      Authored by Thomas Gleixner
      Jack Ren and Eric Miao tracked down the following long-standing
      problem in the NOHZ code:
      
      	scheduler switch to idle task
      	enable interrupts
      
      Window starts here
      
      	----> interrupt happens (does not set NEED_RESCHED)
      	      	irq_exit() stops the tick
      
      	----> interrupt happens (does set NEED_RESCHED)
      
      	return from schedule()
      	
      	cpu_idle(): preempt_disable();
      
      Window ends here
      
      The interrupts can happen at any point inside the race window. The
      first interrupt stops the tick, the second one causes the scheduler to
      rerun and switch away from idle again and we end up with the tick
      disabled.
      
      The fact that it needs two interrupts, where the first one does not set
      NEED_RESCHED and the second one does, made the bug obscure and extremely
      hard to reproduce and analyse. Kudos to Jack and Eric.
      
      Solution: Limit the NOHZ functionality to the idle loop to make sure
      that we can not run into such a situation ever again.
      
      cpu_idle()
      {
      	preempt_disable();
      
      	while(1) {
      		 tick_nohz_stop_sched_tick(1); <- tell NOHZ code that we
      		 			          are in the idle loop
      
      		 while (!need_resched())
      		       halt();
      
      		 tick_nohz_restart_sched_tick(); <- disables NOHZ mode
      		 preempt_enable_no_resched();
      		 schedule();
      		 preempt_disable();
      	}
      }
      
      In hindsight we should have done this forever, but ... 
      
      /me grabs a large brown paperbag.
      
      Debugged-by: Jack Ren <jack.ren@marvell.com>
      Debugged-by: eric miao <eric.y.miao@gmail.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  30. 26 Mar 2008, 1 commit
  31. 28 Jan 2008, 3 commits