1. 21 Jul 2007 (1 commit)
  2. 20 Jul 2007 (5 commits)
    • [IA64] Delete iosapic_free_rte() · bf903d0a
      Authored by Yasuaki Ishimatsu
      >   arch/ia64/kernel/iosapic.c:597: warning: 'iosapic_free_rte' defined but not used
      >
      > This isn't spurious, the only call to iosapic_free_rte() has been removed, but there
      > is still a call to iosapic_alloc_rte() ... which means we must have a memory leak.
      
      I did it on purpose (and gave the warning a miss...) and I consider
      iosapic_free_rte() no longer needed.
      
      I decided to retain iosapic_rte_info to keep the gsi-to-irq binding
      after a device is disabled. Indeed it needs some extra memory, but it
      is only "sizeof(iosapic_rte_info) * <the number of removed devices>"
      bytes, and there is no memory leak because re-enabled devices reuse the
      iosapic_rte_info which they used before disabling.
      Signed-off-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      bf903d0a
    • [IA64] fallocate system call · 3d7559e6
      Authored by David Chinner
      sys_fallocate for ia64. This uses empty slot #1303, erroneously
      marked as reserved for move_pages (which had already been allocated
      as syscall #1276).
      Signed-off-by: Dave Chinner <dgc@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      3d7559e6
    • use the new percpu interface for shared data · f34e3b61
      Authored by Fenghua Yu
      Currently most of the per cpu data, which is accessed by different cpus,
      has a ____cacheline_aligned_in_smp attribute.  Move all this data to the
      new per cpu shared data section: .data.percpu.shared_aligned.
      
      This will separate the percpu data which is referenced frequently by other
      cpus from the local-only percpu data.
      Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
      Acked-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Christoph Lameter <clameter@sgi.com>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Cc: Andi Kleen <ak@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f34e3b61
    • define new percpu interface for shared data · 5fb7dc37
      Authored by Fenghua Yu
      The per cpu data section contains two types of data. One set is
      exclusively accessed by the local cpu, and the other set is per cpu
      but also shared by remote cpus. In the current kernel, these two sets
      are not clearly separated. This can cause a cacheline to be shared
      between the two sets of data, which results in unnecessary bouncing
      of the cacheline between cpus.
      
      One way to fix the problem is to cacheline align the remotely accessed per
      cpu data, both at the beginning and at the end.  Because of the padding at
      both ends, this will likely cause some memory wastage and also the
      interface to achieve this is not clean.
      
      This patch:
      
      Moves the remotely accessed per cpu data (which is currently marked
      as ____cacheline_aligned_in_smp) into a different section, where all the data
      elements are cacheline aligned. As such, this cleanly differentiates the
      local-only data from the remotely accessed data.
      Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
      Acked-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Christoph Lameter <clameter@sgi.com>
      Cc: <linux-arch@vger.kernel.org>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Cc: Andi Kleen <ak@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5fb7dc37
    • jprobes: make jprobes a little safer for users · 3d7e3382
      Authored by Michael Ellerman
      I realise jprobes are a razor-blades-included type of interface, but that
      doesn't mean we can't try and make them safer to use.  This guy I know once
      wrote code like this:
      
      struct jprobe jp = { .kp.symbol_name = "foo", .entry = "jprobe_foo" };
      
      And then his kernel exploded. Oops.
      
      This patch adds an arch hook, arch_deref_entry_point() (I don't like it
      either) which takes the void * in a struct jprobe, and gives back the text
      address that it represents.
      
      We can then use that in register_jprobe() to check that the entry point we're
      passed is actually in the kernel text, rather than just some random value.
      Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
      Cc: Prasanna S Panchamukhi <prasanna@in.ibm.com>
      Acked-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      3d7e3382
  3. 18 Jul 2007 (14 commits)
  4. 17 Jul 2007 (1 commit)
    • serial: convert early_uart to earlycon for 8250 · 18a8bd94
      Authored by Yinghai Lu
      Because SERIAL_PORT_DFNS is removed from include/asm-i386/serial.h and
      include/asm-x86_64/serial.h, the serial8250_ports need to be probed late
      in the serial initializing stage. The console_init=>serial8250_console_init=>
      register_console=>serial8250_console_setup path will return -ENODEV, and
      console ttyS0 cannot be enabled at that time. We need to wait until
      uart_add_one_port in drivers/serial/serial_core.c calls register_console
      to get console ttyS0. That is too late.
      
      Make early_uart use early_param, so the uart console can be used earlier.
      Make it a bootconsole with the CON_BOOT flag, so it can use the console
      handover feature and will switch to the corresponding normal serial
      console automatically.
      
      The new command line will be:
      	console=uart8250,io,0x3f8,9600n8
      	console=uart8250,mmio,0xff5e0000,115200n8
      or
      	earlycon=uart8250,io,0x3f8,9600n8
      	earlycon=uart8250,mmio,0xff5e0000,115200n8
      
      It will print at a very early stage:
      	Early serial console at I/O port 0x3f8 (options '9600n8')
      	console [uart0] enabled
      Later, for the console handover, it will print:
      	console handover: boot [uart0] -> real [ttyS0]
      
      Signed-off-by: <yinghai.lu@sun.com>
      Cc: Andi Kleen <ak@suse.de>
      Cc: Bjorn Helgaas <bjorn.helgaas@hp.com>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Gerd Hoffmann <kraxel@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      18a8bd94
  5. 14 Jul 2007 (1 commit)
    • [IA64] ar.itc access must really be after xtime_lock.sequence has been read · 829a9996
      Authored by Hidetoshi Seto
      The ".acq" semantics of the load only apply w.r.t. other data access.
      Reading the clock (ar.itc) isn't a data access so strange things can
      happen here.  Specifically the read of ar.itc can be launched as soon
      as the read of xtime_lock.sequence is ISSUED.  Since this may cache
      miss, and that might cause a thread switch, and there may be cache
      contention for the line containing xtime_lock, it may be a long time
      before the actual value is returned, so the ar.itc value may be very
      stale.
      
      Move the consumption of r28 up before the read of ar.itc to make sure
      that we really have the current value of xtime_lock.sequence before
      looking at ar.itc.
      Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      829a9996
  6. 12 Jul 2007 (2 commits)
    • [IA64] Support multiple CPUs going through OS_MCA · 1612b18c
      Authored by Russ Anderson
      Linux does not gracefully deal with multiple processors going
      through OS_MCA as part of the same MCA event. The first cpu
      into OS_MCA grabs the ia64_mca_serialize lock. Subsequent
      cpus wait for that lock, preventing them from reporting in as
      rendezvoused. The first cpu waits 5 seconds then complains
      that all the cpus have not rendezvoused. The first cpu then
      handles its MCA, frees up all the rendezvoused cpus, and
      releases the ia64_mca_serialize lock. One of the subsequent
      cpus going through OS_MCA then gets the ia64_mca_serialize
      lock, waits another 5 seconds, and then complains that none of
      the other cpus have rendezvoused.
      
      This patch allows multiple CPUs to gracefully go through OS_MCA.
      
      The first CPU into ia64_mca_handler() grabs the mca_count lock.
      Subsequent CPUs into ia64_mca_handler() are added to the set of cpus
      that need to go through OS_MCA (a bit set in mca_cpu) and report
      in as rendezvoused, but spin waiting their turn.
      
      The first CPU sees everyone rendezvous, handles its MCA, wakes up
      one of the other CPUs waiting to process their MCA (by clearing
      one mca_cpu bit), and then waits for the other cpus to complete
      their MCA handling. The next CPU handles its MCA and the process
      repeats until all the CPUs have handled their MCA. When the last
      CPU has handled its MCA, it sets monarch_cpu to -1, releasing all
      the CPUs.
      
      In testing, this works more reliably and faster.
      
      Thanks to Keith Owens for suggesting numerous improvements
      to this code.
      Signed-off-by: Russ Anderson <rja@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      1612b18c
    • [IA64] silence GCC ia64 unused variable warnings · 256a7e09
      Authored by Jes Sorensen
      Tell GCC to stop spewing out unnecessary warnings for unused variables
      passed to functions as pointers for ia64 files.
      Signed-off-by: Jes Sorensen <jes@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      256a7e09
  7. 10 Jul 2007 (3 commits)
    • [IA64] Stop bit for brl instruction · c6255e98
      Authored by Christian Kandeler
      The SDM says that a brl instruction must be followed by a stop bit.
      Fix the instance in BRL_COND_FSYS_BUBBLE_DOWN where it isn't.
      Signed-off-by: Christian Kandeler <christian.kandeler@hob.de>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      c6255e98
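      As a hedged sketch (the mnemonic completers and target name are illustrative, not the actual BRL_COND_FSYS_BUBBLE_DOWN site), the rule is simply that a stop follows the brl:

```asm
        brl.cond.sptk.many some_target  // long branch
        ;;                              // stop bit required after brl
```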
    • [IA64] Don't set psr.ic and psr.i simultaneously · 83ce6ef8
      Authored by Tony Luck
      It's not a good idea to use "ssm psr.ic | psr.i" to simultaneously
      enable interrupts and interrupt state collection, the two bits can
      take effect asynchronously, so it is possible for an interrupt to
      be serviced while psr.ic is still zero.
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      83ce6ef8
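      A hedged sketch of the safe ordering (not the literal patch; the exact serialize instruction used varies by context in the kernel sources):

```asm
        ssm     psr.ic      // enable interruption state collection first
        ;;
        srlz.d              // make the psr.ic change take effect
        ;;
        ssm     psr.i       // only now allow interrupts to be serviced
```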
    • sched: zap the migration init / cache-hot balancing code · 0437e109
      Authored by Ingo Molnar
      the SMP load-balancer uses the boot-time migration-cost estimation
      code to attempt to improve the quality of balancing. The reason for
      this code is that the discrete priority queues do not preserve
      the order of scheduling accurately, so the load-balancer skips
      tasks that were running on a CPU 'recently'.
      
      this code is fundamentally fragile: the boot-time migration cost detector
      doesn't really work on systems with large L3 caches, it caused boot
      delays on large systems, and the whole cache-hot concept made the
      balancing code pretty nondeterministic as well.
      
      (and hey, i wrote most of it, so i can say it out loud that it sucks ;-)
      
      under CFS the same purpose of cache affinity can be achieved without
      any special cache-hot special-case: tasks are sorted in the 'timeline'
      tree and the SMP balancer picks tasks from the left side of the
      tree, thus the most cache-cold task is balanced automatically.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      0437e109
  8. 27 Jun 2007 (2 commits)
  9. 25 May 2007 (2 commits)
  10. 24 May 2007 (1 commit)
  11. 23 May 2007 (2 commits)
  12. 19 May 2007 (2 commits)
  13. 17 May 2007 (1 commit)
  14. 16 May 2007 (1 commit)
  15. 15 May 2007 (2 commits)
    • [IA64] s/scalibility/scalability/ · c47953cf
      Authored by Tony Luck
      A previous spelling patch from Simon Arlott broke one spot that
      didn't need fixing (reported by Simon within 35 minutes of the
      patch ... but not until after I'd applied it to GIT and pushed). :-(
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      c47953cf
    • [IA64] kdump on INIT needs multi-nodes sync-up (v.2) · 311f594d
      Authored by Jay Lan
      The current implementation of kdump on INIT events would enter
      kdump processing on the DIE_INIT_MONARCH_ENTER and DIE_INIT_SLAVE_ENTER
      events. Thus, the monarch cpu would go ahead and boot up the kdump
      kernel out of sync with the slave cpus.
      
      On SN shub2 systems, this out-of-sync situation causes some slave
      cpus on different nodes to enter POD.
      
      This patch moves kdump entry points to DIE_INIT_MONARCH_LEAVE and
      DIE_INIT_SLAVE_LEAVE. It also sets kdump_in_progress variable in
      the DIE_INIT_MONARCH_PROCESS event to not dump all active stack
      traces to the console in the case of kdump.
      
      I have tested this patch on an SN machine and a HP RX2600.
      Signed-off-by: Jay Lan <jlan@sgi.com>
      Acked-by: Zou Nan hai <nanhai.zou@intel.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      311f594d