1. 29 October 2005, 1 commit
  2. 28 October 2005, 4 commits
    • [PATCH] gfp_t: remaining bits of arch/* · 53f9fc93
      Committed by Al Viro
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] gfp_t: dma-mapping (ia64) · 06a54497
      Committed by Al Viro
      ... and related annotations for amd64 - swiotlb code is shared, but
      prototypes are not.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [IA64] ptrace - find memory sharers on children list · 4ac0068f
      Committed by Cliff Wickman
      In arch/ia64/kernel/ptrace.c there is a test for a peek or poke of a
      register image (in register backing storage).
      The test can be unnecessarily long (and it occurs while holding the
      tasklist_lock), especially on a large system with thousands of active tasks.
      
      The ptrace caller (presumably a debugger) specifies the pid of
      its target and an address to peek or poke.  But the debugger could be
      attached to several tasks.
      The idea of find_thread_for_addr() is to find whether the target address
      is in the RBS for any of those tasks.
      
      Currently it searches the thread-list of the target pid.  If that search
      does not find a match, and the shared mm-struct's user count indicates
      that there are other tasks sharing this address space (a rare occurrence),
      a search is made of all the tasks in the system.
      
      Another approach can drastically shorten this procedure.
      It depends upon the fact that in order to peek or poke from/to any task,
      the debugger must first attach to that task.  And when it does, the
      attached task is made a child of the debugger (is chained to its children list).
      
      Therefore we can search just the debugger's children list.
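      
      A minimal sketch of that narrowed search, assuming the 2.6-era task lists
      the message describes (the helper name and the mm-based match are
      illustrative, not the actual patch):
      
          /* Walk only the debugger's children list; every ptrace-attached
           * task is chained there via its "sibling" field. */
          #include <linux/sched.h>
          #include <linux/list.h>
          
          static struct task_struct *
          find_attached_task_for_mm(struct task_struct *debugger, struct mm_struct *mm)
          {
                  struct task_struct *child;
          
                  list_for_each_entry(child, &debugger->children, sibling) {
                          if (child->mm == mm)
                                  return child;   /* its RBS may hold the peek/poke address */
                  }
                  return NULL;
          }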
      Signed-off-by: Cliff Wickman <cpw@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
    • [IA64] - Avoid slow TLB purges on SGI Altix systems · c1902aae
      Committed by Dean Roe
      flush_tlb_all() can be a scaling issue on large SGI Altix systems
      since it uses the global call_lock and always executes on all cpus.
      When a process enters flush_tlb_range() to purge TLBs for another
      process, it is possible to avoid flush_tlb_all() and instead allow
      sn2_global_tlb_purge() to purge TLBs only where necessary.
      
      This patch modifies flush_tlb_range() so that this case can be handled
      by platform TLB purge functions and updates ia64_global_tlb_purge()
      accordingly.  sn2_global_tlb_purge() now calculates the region register
      value from the mm argument introduced with this patch.
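      
      A hedged sketch of the idea (not the literal diff): flush_tlb_range()
      hands the mm to the platform purge hook so the hook can derive the
      region register and purge selectively, rather than punting to a global
      flush_tlb_all().
      
          #include <linux/mm.h>
          #include <linux/sched.h>
          
          /* machvec hook (sn2_global_tlb_purge() on Altix); the mm argument is new */
          extern void platform_global_tlb_purge(struct mm_struct *mm,
                                                unsigned long start, unsigned long end,
                                                unsigned long nbits);
          
          static void flush_tlb_range_sketch(struct vm_area_struct *vma,
                                             unsigned long start, unsigned long end)
          {
                  struct mm_struct *mm = vma->vm_mm;
                  unsigned long nbits = PAGE_SHIFT;       /* purge granularity, log2 bytes */
          
                  if (mm != current->active_mm) {
                          /* old code called flush_tlb_all() here: global call_lock, all cpus */
                          platform_global_tlb_purge(mm, start, end, nbits);
                          return;
                  }
                  /* ... local purge of the current address space as before ... */
          }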
      Signed-off-by: Dean Roe <roe@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
  3. 26 October 2005, 9 commits
  4. 20 October 2005, 1 commit
    • [PATCH] swiotlb: make sure initial DMA allocations really are in DMA memory · 281dd25c
      Committed by Yasunori Goto
      This introduces a limit parameter to the core bootmem allocator; the new
      parameter indicates that physical memory allocated by the bootmem
      allocator should be within the requested limit.
      
      We also introduce alloc_bootmem_low_pages_limit, alloc_bootmem_node_limit,
      and alloc_bootmem_low_pages_node_limit APIs, but alloc_bootmem_low_pages_limit
      is the only API used for swiotlb.
      
      The existing alloc_bootmem_low_pages() API could instead have been
      changed and made to pass the right limit to the core allocator.  But that
      would make the patch more intrusive for 2.6.14, as other arches use
      alloc_bootmem_low_pages().  We may do that post 2.6.14 as a
      cleanup.
      
      With this, swiotlb gets memory within 4G for both x86_64 and ia64
      arches.
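      
      A hedged sketch of the resulting interface (wrapper names are from this
      message; the argument order of the core helper is an assumption):
      
          #include <linux/bootmem.h>
          
          /* core bootmem allocator, now with a physical-address ceiling ("limit") */
          extern void *__alloc_bootmem_limit(unsigned long size, unsigned long align,
                                             unsigned long goal, unsigned long limit);
          
          #define alloc_bootmem_low_pages_limit(x, limit) \
                  __alloc_bootmem_limit((x), PAGE_SIZE, 0, (limit))
          
          /* swiotlb can then ask for bounce buffers below 4GB, e.g. (illustrative): */
          /*      io_tlb_start = alloc_bootmem_low_pages_limit(bytes, 0x100000000UL);  */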
      Signed-off-by: Yasunori Goto <y-goto@jp.fujitsu.com>
      Cc: Ravikiran G Thirumalai <kiran@scalex86.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  5. 07 October 2005, 1 commit
    • [IA64] Avoid kernel hang during CMC interrupt storm · 76e677e2
      Committed by Bryan Sutula
      I've noticed a kernel hang during a storm of CMC interrupts, which was
      tracked down to the continual execution of the interrupt handler.
      
      There's code in the CMC handler that's supposed to disable CMC
      interrupts and switch to polling mode when it sees a bunch of CMCs.
      Because disabling CMCs across all CPUs isn't safe in interrupt context,
      the disable is done with a schedule_work().  But with continual CMC
      interrupts, the schedule_work() never gets executed.
      
      The following patch immediately disables CMC interrupts for the current
      CPU.  This then allows (at least) one CPU to ignore CMC interrupts,
      execute the schedule_work() code, and disable CMC interrupts on the rest
      of the CPUs.
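      
      A hedged sketch of that flow, simplified from the ia64 MCA code (the
      storm counter, threshold, and flag names are illustrative):
      
          #include <linux/interrupt.h>
          #include <linux/workqueue.h>
          #include <linux/smp.h>
          
          #define CMC_STORM_THRESHOLD     15      /* illustrative */
          
          extern void ia64_mca_cmc_vector_disable(void *dummy);  /* this cpu's CMC vector off */
          
          static void cmc_disable_everywhere(void *unused)
          {
                  /* process context: safe to IPI every cpu and disable its CMC vector */
                  on_each_cpu(ia64_mca_cmc_vector_disable, NULL, 1, 0);
          }
          static DECLARE_WORK(cmc_disable_work, cmc_disable_everywhere, NULL);
          
          static int cmc_polling_enabled, cmc_count;
          
          static irqreturn_t cmc_int_handler_sketch(int irq, void *arg, struct pt_regs *regs)
          {
                  if (!cmc_polling_enabled && ++cmc_count >= CMC_STORM_THRESHOLD) {
                          cmc_polling_enabled = 1;
          
                          /* new: shut off the CMC vector on THIS cpu right away, in
                           * interrupt context, so at least one cpu stops being flooded
                           * and can actually run the scheduled work below */
                          ia64_mca_cmc_vector_disable(NULL);
          
                          /* disabling the remaining cpus still needs process context */
                          schedule_work(&cmc_disable_work);
                  }
                  return IRQ_HANDLED;
          }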
      Acked-by: Keith Owens <kaos@sgi.com>
      Signed-off-by: Bryan Sutula <Bryan.Sutula@hp.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
  6. 05 October 2005, 4 commits
  7. 29 September 2005, 1 commit
  8. 24 September 2005, 2 commits
  9. 23 September 2005, 3 commits
  10. 20 September 2005, 3 commits
  11. 18 September 2005, 1 commit
  12. 17 September 2005, 3 commits
  13. 16 September 2005, 2 commits
    • [IA64] Cleanup use of various #defines related to nodes · 24ee0a6d
      Committed by Jack Steiner
      Some of the SN code & #defines related to compact nodes & IO discovery
      have gotten stale over the years. This patch attempts to clean them up.
      Some of the various SN MAX_xxx #defines were also unclear & misused.
      
      The primary changes are (see the header sketch after this list):
      
      	- use MAX_NUMNODES. This is the generic linux #define for the number
      	  of nodes that are known to the generic kernel. Arrays & loops
      	  for constructs that are 1:1 with linux-defined nodes should
      	  use the linux #define - not an SN equivalent.
      
      	- use MAX_COMPACT_NODES for MAX_NUMNODES + NUM_TIOS. This is the
      	  number of nodes in the SSI system. Compact nodes are a hack to
      	  get around the IA64 architectural limit of 256 nodes. Large SGI
      	  systems have more than 256 nodes. When we upgrade to ACPI 3.0,
      	  I _hope_ that all nodes will be real nodes that are known to
      	  the generic kernel. That will allow us to delete the notion
      	  of "compact nodes".
      
      	- add MAX_NUMALINK_NODES for the total number of nodes that
      	  are in the numalink domain - all partitions.
      
      	- simplified (understandable) scan_for_ionodes()
      
      	- small amount of cleanup related to cnodes
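      
      A hedged header sketch of those relationships (the TIO count and the
      numalink factor below are placeholders, not the real SN2 values):
      
          #include <linux/numa.h>                        /* MAX_NUMNODES */
          
          #define NUM_TIOS              MAX_NUMNODES     /* placeholder TIO (I/O) node count */
          
          /* every node in one SSI system: compute nodes plus TIO nodes */
          #define MAX_COMPACT_NODES     (MAX_NUMNODES + NUM_TIOS)
          
          /* every node reachable over numalink, across all partitions */
          #define MAX_NUMALINK_NODES    (4 * MAX_COMPACT_NODES)    /* placeholder factor */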
      Signed-off-by: Jack Steiner <steiner@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
    • [IA64] Update default configs · f2b518d7
      Committed by Bjorn Helgaas
      PNP and PNPACPI turned on
      
          i8042 recently changed from ACPI to PNP detection.  Without PNP, it
          probes legacy I/O ports for the keyboard controller, which causes an
          MCA on HP boxes.
      
          Also, I'm about to remove 8250_acpi.c, so we'll need PNP to detect
          non-PCI serial ports.  Until 8250_acpi.c is removed, some systems
          will see serial ports reported twice (once from 8250_acpi.c and again
          from 8250_pnp.c).  This is harmless.
      
          PNPACPI is still marked EXPERIMENTAL, but I'm not aware of any
          outstanding issues on ia64.
      
      IDE_GENERIC turned off (except for SGI simulator, all ia64 IDE is PCI)
      
          ide-generic probes compiled-in legacy I/O ports for IDE devices, which
          again causes an MCA.  It would be nicer to just get rid of all the
          legacy junk from include/asm-ia64/ide.h, but that is a bit riskier
          because it could break ide-cs and the HDIO_REGISTER_HWIF ioctl
          (http://www.ussg.iu.edu/hypermail/linux/kernel/0508.2/0049.html).
      
      Here's the essence of the patch:
      
          -# CONFIG_PNP is not set
          +CONFIG_PNP=y
          +CONFIG_PNPACPI=y
      
          -CONFIG_IDE_GENERIC=y
          +# CONFIG_IDE_GENERIC is not set
      
      Tested on tiger, bigsur, and zx1.
      Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
  14. 15 September 2005, 3 commits
    • [LIB]: Consolidate _atomic_dec_and_lock() · 4db2ce01
      Committed by David S. Miller
      Several implementations were essentially a common piece of C code using
      the cmpxchg() macro.  Put the implementation in one spot that everyone
      can share, and convert sparc64 over to using this.
      
      Alpha is the lone arch-specific implementation, which codes up a
      special fast path for the common case in order to avoid GP reloading
      which a pure C version would require.
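      
      A hedged sketch of the consolidated helper: a cmpxchg() fast path that
      decrements while the count stays above one, and a locked slow path for
      the final reference (this mirrors the generic lib/ version in spirit,
      not verbatim):
      
          #include <linux/spinlock.h>
          #include <asm/atomic.h>
          #include <asm/system.h>                 /* cmpxchg() */
          
          int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
          {
                  int counter, newcount;
          
                  for (;;) {
                          counter = atomic_read(atomic);
                          newcount = counter - 1;
                          if (!newcount)
                                  break;          /* would hit zero: take the lock */
          
                          newcount = cmpxchg(&atomic->counter, counter, newcount);
                          if (newcount == counter)
                                  return 0;       /* decremented without locking */
                  }
          
                  spin_lock(lock);
                  if (atomic_dec_and_test(atomic))
                          return 1;               /* caller holds the lock, count is zero */
                  spin_unlock(lock);
                  return 0;
          }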
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [IA64] more robust zx1/sx1000 machvec support · 0b9afede
      Committed by Alex Williamson
      Machine vector selection has always been a bit of a hack given how
      early in system boot it needs to be done.  Services like ACPI namespace
      are not available and there are non-trivial problems to moving them to
      early boot.  However, there's no reason we can't change to a different
      machvec later in boot when the services we need are available.  By
      adding an entry point for later initialization of the swiotlb, we can add
      an error path for the hpzx1 machvec initialization and fall back to the
      DIG machine vector if IOMMU hardware isn't found in the system.  Since
      ia64 uses 4GB for zone DMA (no ISA support), it's trivial to allocate a
      contiguous range from the slab for bounce buffer usage.
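      
      A hedged sketch of that fallback (every name here is illustrative rather
      than quoted from the patch):
      
          #include <linux/init.h>
          #include <linux/kernel.h>
          
          extern int hp_ioc_iommu_present(void);  /* hypothetical IOMMU probe result */
          extern int swiotlb_late_init(void);     /* hypothetical late-init entry point */
          extern void machvec_init(const char *name);
          
          static void __init hpzx1_machvec_setup_sketch(void)
          {
                  if (hp_ioc_iommu_present())
                          return;                 /* IOMMU found: keep the hpzx1 machvec */
          
                  /* error path: no IOMMU hardware, so bounce through the swiotlb,
                   * whose buffers now come from the slab within the 4GB DMA zone */
                  if (swiotlb_late_init())
                          panic("No IOMMU and no memory for swiotlb bounce buffers");
          
                  machvec_init("dig");            /* fall back to the DIG machine vector */
          }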
      Signed-off-by: Alex Williamson <alex.williamson@hp.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
    • [PATCH] error path in setup_arg_pages() misses vm_unacct_memory() · 2fd4ef85
      Committed by Hugh Dickins
      Pavel Emelianov and Kirill Korotaev observe that fs and arch users of
      security_vm_enough_memory tend to forget to vm_unacct_memory when a
      failure occurs further down (typically in setup_arg_pages variants).
      
      These are all users of insert_vm_struct, and that reservation will only
      be unaccounted on exit if the vma is marked VM_ACCOUNT: which in some
      cases it is (hidden inside VM_STACK_FLAGS) and in some cases it isn't.
      
      So x86_64 32-bit and ppc64 vDSO ELFs have been leaking memory into
      Committed_AS each time they're run.  But don't add VM_ACCOUNT to them,
      it's inappropriate to reserve against the very unlikely case that gdb
      be used to COW a vDSO page - we ought to do something about that in
      do_wp_page, but there are yet other inconsistencies to be resolved.
      
      The safe and economical way to fix this is to let insert_vm_struct do
      the security_vm_enough_memory check when it finds VM_ACCOUNT is set.
      
      And the MIPS irix_brk has been calling security_vm_enough_memory before
      calling do_brk which repeats it, doubly accounting and so also leaking.
      Remove that, and all the fs and arch calls to security_vm_enough_memory:
      give it a less misleading name later on.
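      
      A hedged sketch of the fix (simplified, not the literal mm/mmap.c diff):
      the accounting check moves into insert_vm_struct(), keyed off VM_ACCOUNT,
      so callers can no longer forget the matching vm_unacct_memory() on their
      error paths.
      
          #include <linux/mm.h>
          #include <linux/security.h>
          #include <linux/errno.h>
          
          int insert_vm_struct_sketch(struct mm_struct *mm, struct vm_area_struct *vma)
          {
                  unsigned long npages = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
          
                  /* charge Committed_AS here, and only for accountable mappings */
                  if ((vma->vm_flags & VM_ACCOUNT) && security_vm_enough_memory(npages))
                          return -ENOMEM;
          
                  /* ... find the insertion point and link the vma into mm as before ... */
                  return 0;
          }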
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-Off-By: Kirill Korotaev <dev@sw.ru>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  15. 13 September 2005, 2 commits