1. 02 December 2006 (6 commits)
    • Altix: Initial ACPI support - ROM shadowing. · a2302c68
      Authored by John Keller
      Support a shadowed ROM when running with an ACPI capable PROM.
      
      Define a new dev.resource flag IORESOURCE_ROM_BIOS_COPY to
      describe the case of a BIOS shadowed ROM, which can then
      be used to avoid pci_map_rom() making an unneeded call to
      pci_enable_rom().
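
      A minimal sketch of the intended use (the function below is illustrative, not the
      actual patch; only IORESOURCE_ROM_BIOS_COPY, PCI_ROM_RESOURCE and pci_enable_rom()
      are existing names):

        /* Sketch: when firmware already shadowed the ROM, the resource is plain
         * memory, so enabling the ROM BAR decode is not needed. */
        void __iomem *map_rom_sketch(struct pci_dev *pdev, size_t *size)
        {
                struct resource *res = &pdev->resource[PCI_ROM_RESOURCE];

                if (!(res->flags & IORESOURCE_ROM_BIOS_COPY)) {
                        if (pci_enable_rom(pdev))       /* real ROM BAR: enable decode */
                                return NULL;
                }
                *size = res->end - res->start + 1;
                return ioremap(res->start, *size);
        }
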
      Signed-off-by: John Keller <jpk@sgi.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
      
    • Altix: Add initial ACPI IO support · 8ea6091f
      Authored by John Keller
      First phase in introducing ACPI support to SN.
      In this phase, when running with an ACPI capable PROM,
      the DSDT will define the root busses and all SN nodes
      (SGIHUB, SGITIO). An ACPI bus driver will be registered
      for the node devices, with the acpi_pci_root_driver being
      used for the root busses. An ACPI vendor descriptor is
      now used to pass platform specific information for both
      nodes and busses, eliminating the need for the current
      SAL calls. Also, with ACPI support, SN fixup code is no longer
      needed to initiate the PCI bus scans, as the acpi_pci_root_driver
      does that.
      
      However, to maintain backward compatibility with non-ACPI capable
      PROMs, none of the current 'fixup' code could be deleted, though
      much restructuring has been done. For example, the bulk of the code
      in io_common.c is relocated code that is now common regardless
      of which PROM is running, while io_acpi_init.c and io_init.c contain
      routines specific to an ACPI capable or non-ACPI capable PROM, respectively.
      
      A new pci bus fixup platform vector has been created to provide
      a hook for invoking platform specific bus fixup from pcibios_fixup_bus().
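
      A rough sketch of the new hook's shape; apart from pcibios_fixup_bus(),
      io_acpi_init.c and io_init.c, every name here is illustrative:

        /* Sketch: generic ia64 code calls through the platform vector so SN
         * can hook bus fixup regardless of which PROM is running. */
        void pcibios_fixup_bus(struct pci_bus *bus)
        {
                platform_pci_fixup_bus(bus);            /* illustrative vector entry */
        }

        void sn_pci_fixup_bus(struct pci_bus *bus)      /* SN's implementation */
        {
                if (sn_prom_is_acpi_capable())          /* illustrative test */
                        sn_acpi_bus_fixup(bus);         /* io_acpi_init.c path */
                else
                        sn_bus_fixup(bus);              /* io_init.c path */
        }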
      
      The size of io_space[] has been increased to support systems with
      large IO configurations.
      Signed-off-by: John Keller <jpk@sgi.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
      
    • PCI: Delete unused extern in powermac/pci.c · e08cf02f
      Authored by Matthew Wilcox
      This file no longer uses pci_cache_line_size, so delete the declaration.
      Signed-off-by: Matthew Wilcox <matthew@wil.cx>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Acked-by: Jeff Garzik <jeff@garzik.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
    • PCI: Use pci_generic_prep_mwi on sparc64 · ebf5a248
      Authored by Matthew Wilcox
      The setting of the CACHE_LINE_SIZE register in sparc64's pci
      initialisation code isn't quite adequate as the device may have
      incompatible requirements.  The generic code tests for this, so switch
      sparc64 over to using it.
      
      Since sparc64 has different L1 cache line size and PCI cache line size,
      it would need to override the generic code like i386 and ia64 do.  We
      know what the cache line size is at compile time though, so introduce a
      new optional constant PCI_CACHE_LINE_BYTES.
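
      A sketch of how the compile-time override slots in (simplified from what the
      generic code would do; only PCI_CACHE_LINE_BYTES, L1_CACHE_BYTES and
      pci_cache_line_size are existing names):

        /* Sketch: architectures that know their PCI cache line size at build
         * time define PCI_CACHE_LINE_BYTES; everyone else gets L1_CACHE_BYTES. */
        #ifndef PCI_CACHE_LINE_BYTES
        #define PCI_CACHE_LINE_BYTES L1_CACHE_BYTES
        #endif

        /* The config-space register counts 32-bit dwords, not bytes. */
        static u8 pci_cache_line_size = PCI_CACHE_LINE_BYTES >> 2;
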
      Signed-off-by: Matthew Wilcox <matthew@wil.cx>
      Signed-off-by: David Miller <davem@davemloft.net>
      Acked-by: Jeff Garzik <jeff@garzik.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
    • PCI: Use pci_generic_prep_mwi on ia64 · 3efe2d84
      Authored by Matthew Wilcox
      The pci_generic_prep_mwi() code does everything that pcibios_prep_mwi()
      does on ia64.  All we need to do is be sure that pci_cache_line_size
      is set appropriately, and we can delete pcibios_prep_mwi().
      
      Using SMP_CACHE_BYTES as the default was wrong on uniprocessor machines
      as it is only 8 bytes.  The default in the generic code of L1_CACHE_BYTES
      is at least as good.
      Signed-off-by: Matthew Wilcox <matthew@wil.cx>
      Acked-by: Jeff Garzik <jeff@garzik.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
    • PCI: quirks: fix the festering mess that claims to handle IDE quirks · 368c73d4
      Authored by Alan Cox
      The number of permutations of crap we do is amazing and almost all of it
      has the wrong effect in 2.6.
      
      At the heart of this is the PCI SFF magic which says that compatibility
      mode PCI IDE controllers use ISA IRQ routing and hard-coded addresses,
      not the BAR values. The old quirks variously clear them, set them,
      adjust them, and then IDE ignores the result.
      
      In order to drive all this garbage out and to do it portably we need to
      handle the SFF rules directly and properly. Because we know the device
      BARs 0-3 are not used in compatibility mode, we load them with the values
      that are implied (and indeed which many controllers actually
      thoughtfully put there in this mode anyway).
      
      This removes special cases in the IDE layer and libata, which now know
      that BARs 0/1/2/3 always contain the correct address. It means our
      resource allocation map is accurate from boot, not "mostly accurate"
      after ide is loaded, and it shoots lots of code. There is also lots more
      code and magic constant knowledge to shoot once this is in and settled.
      
      Been in my test tree for a while both with drivers/ide and with libata.
      Wants some -mm shakedown in case I've missed something dumb or there are
      corner cases lurking.
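
      A sketch of the compatibility-mode fixup described above (the function name is
      illustrative; the fixed addresses come straight from the SFF spec, and
      IORESOURCE_IO flag handling is omitted for brevity):

        /* Sketch: a channel in compatibility mode decodes the legacy ISA
         * addresses, so reflect those in BARs 0-3 instead of trusting junk. */
        static void ide_sff_compat_bars(struct pci_dev *pdev)
        {
                u8 progif = pdev->class & 0xff;         /* low byte is the prog-if */

                if (!(progif & 0x1)) {                  /* primary: compat mode */
                        pdev->resource[0].start = 0x1F0;
                        pdev->resource[0].end   = 0x1F7;
                        pdev->resource[1].start = 0x3F6;
                        pdev->resource[1].end   = 0x3F6;
                }
                if (!(progif & 0x4)) {                  /* secondary: compat mode */
                        pdev->resource[2].start = 0x170;
                        pdev->resource[2].end   = 0x177;
                        pdev->resource[3].start = 0x376;
                        pdev->resource[3].end   = 0x376;
                }
        }
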
      Signed-off-by: Alan Cox <alan@redhat.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
  2. 30 November 2006 (1 commit)
  3. 29 November 2006 (4 commits)
  4. 27 November 2006 (2 commits)
  5. 26 November 2006 (1 commit)
    • [PATCH] uml: make execvp safe for our usage · 5d48545e
      Authored by Paolo 'Blaisorblade' Giarrusso
      Reimplement execvp for our purposes - after we call fork() it is fundamentally
      unsafe to use the kernel allocator, since current is not valid there.  So we simply
      pass a preallocated buffer to our modified execvp().  This fixes a real bug
      and works very well in testing (I've indirectly seen warning messages from the
      forked thread - they went onto the pipe connected to its stdout and were read
      as a number by UML when calling read_output(); I verified that the number
      obtained corresponded to "BUG:").
      
      The added use of __cant_sleep() is not a new bug, since __cant_sleep() is
      already used in the same function.  Passing an atomicity parameter would be
      better, but it would require a huge change; stating that this function must not
      be called in atomic context and can sleep is a better idea (we will make sure of
      this gradually).
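
      A plain userspace sketch of the idea, not the UML code itself: the "dir/file"
      candidate path is built in a caller-supplied buffer, so nothing is allocated
      after fork().  strchrnul() is a glibc extension; the function name is illustrative.

        #define _GNU_SOURCE
        #include <stdlib.h>
        #include <string.h>
        #include <unistd.h>

        int execvp_noalloc(char *buf, size_t len, const char *file, char *const argv[])
        {
                const char *path = getenv("PATH");

                if (!path || strchr(file, '/'))
                        return execv(file, argv);

                while (*path) {
                        const char *colon = strchrnul(path, ':');
                        size_t dlen = colon - path;     /* empty components not special-cased */

                        if (dlen + strlen(file) + 2 > len)
                                return -1;              /* candidate would not fit */
                        memcpy(buf, path, dlen);
                        buf[dlen] = '/';
                        strcpy(buf + dlen + 1, file);
                        execv(buf, argv);               /* returns only on failure */
                        path = *colon ? colon + 1 : colon;
                }
                return -1;
        }
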
      Signed-off-by: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
      Acked-by: Jeff Dike <jdike@addtoit.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  6. 23 November 2006 (2 commits)
  7. 22 November 2006 (3 commits)
  8. 21 November 2006 (4 commits)
  9. 20 November 2006 (2 commits)
  10. 18 November 2006 (3 commits)
    • x86: be more careful when walking back the frame pointer chain · 808dbbb6
      Authored by Linus Torvalds
      When showing the stack backtrace, make sure that we reject not only an
      unchanging frame pointer but also a frame pointer that moves back down
      the stack.  It must always grow upward (toward older stack frames).
      
      I doubt this has triggered, but a subtly corrupt stack with extremely
      unlucky contents could cause us to loop forever on a bogus endless frame
      pointer chain.
      
      This review was triggered by much worse problems happening in some of
      the other stack unwinding code.
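
      A sketch of the strengthened walk (helper and variable names are illustrative;
      valid_stack_ptr is the i386 helper quoted further down in this log):

        static void walk_frames_sketch(struct thread_info *tinfo, unsigned long ebp)
        {
                /* Only follow a frame pointer that stays on this stack and
                 * strictly moves toward older frames (i.e. higher addresses). */
                while (valid_stack_ptr(tinfo, (void *)ebp)) {
                        unsigned long addr    = *(unsigned long *)(ebp + 4); /* return address */
                        unsigned long new_ebp = *(unsigned long *)ebp;       /* caller's ebp */

                        printk(" [<%08lx>]\n", addr);
                        if (new_ebp <= ebp)             /* unchanged or moving down: stop */
                                break;
                        ebp = new_ebp;
                }
        }
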
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] i386/x86_64: ACPI cpu_idle_wait() fix · dc1829a4
      Authored by Ingo Molnar
      The scheduler on Andreas Friedrich's hyperthreading system stopped
      working properly: the scheduler would never move tasks to another CPU!
      The last known working kernel was 2.6.8.
      
      After a couple of attempts to corner the bug, the following smoking gun
      was found:
      
        BIOS reported wrong ACPI id for the processor
        CPU#1: set_cpus_allowed(), swapper:1, 3 -> 2
         [<c0103bbe>] show_trace_log_lvl+0x34/0x4a
         [<c0103ceb>] show_trace+0x2c/0x2e
         [<c01045f8>] dump_stack+0x2b/0x2d
         [<c0116a77>] set_cpus_allowed+0x52/0xec
         [<c0101d86>] cpu_idle_wait+0x2e/0x100
         [<c0259c57>] acpi_processor_power_exit+0x45/0x58
         [<c0259752>] acpi_processor_remove+0x46/0xea
         [<c025c6fb>] acpi_start_single_object+0x47/0x54
         [<c025cee5>] acpi_bus_register_driver+0xa4/0xd3
         [<c04ab2d7>] acpi_processor_init+0x57/0x77
         [<c01004d7>] init+0x146/0x2fd
         [<c0103a87>] kernel_thread_helper+0x7/0x10
      
      a quick look at cpu_idle_wait() shows how broken that code is
      on i386: it changes the init task's affinity map but never
      restores it ...
      
      and because all userspace tasks get forked by init, they all
      inherited that single-CPU affinity mask. x86_64 cloned this
      bug too.
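
      A sketch of the fix's shape (heavily simplified and with an illustrative name;
      the real routine also waits for each CPU to leave the old idle handler):

        void cpu_idle_wait_sketch(void)
        {
                cpumask_t saved = current->cpus_allowed;
                unsigned int cpu;

                for_each_online_cpu(cpu) {
                        /* Bind to each CPU in turn so its idle loop takes an
                         * interrupt and re-reads pm_idle. */
                        set_cpus_allowed(current, cpumask_of_cpu(cpu));
                }

                /* The missing piece in the old code: restore the caller's mask.
                 * Without this, init - and every task later forked from it -
                 * stays pinned to whichever CPU was visited last. */
                set_cpus_allowed(current, saved);
        }
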
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Cc: Andreas Friedrich <andreas.friedrich@fujitsu-siemens.com>
      Cc: Wolfgang Erig <Wolfgang.Erig@fujitsu-siemens.com>
      Cc: Andrew Morton <akpm@osdl.org>
      Cc: Adrian Bunk <bunk@stusta.de>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] x86_64: stack unwinder crash fix · 0796bdb7
      Authored by Ingo Molnar
      the new dwarf2 unwinder crashes while trying to dump the stack:
      
        Leftover inexact backtrace:
      
        Unable to handle kernel paging request at ffffffff82800000 RIP:
         [<ffffffff8026cf26>] dump_trace+0x35b/0x3d2
        PGD 203027 PUD 205027 PMD 0
        Oops: 0000 [2] PREEMPT SMP
        CPU 0
        Modules linked in:
        Pid: 30, comm: khelper Not tainted 2.6.19-rc6-rt1 #11
        RIP: 0010:[<ffffffff8026cf26>]  [<ffffffff8026cf26>] dump_trace+0x35b/0x3d2
        RSP: 0000:ffff81003fb9d848  EFLAGS: 00010006
        RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
        RDX: 0000000000000000 RSI: ffffffff805b3520 RDI: 0000000000000000
        RBP: ffffffff827ffff9 R08: ffffffff80aad000 R09: 0000000000000005
        R10: ffffffff80aae000 R11: ffffffff8037961b R12: ffff81003fb9d858
        R13: 0000000000000000 R14: ffffffff80598460 R15: ffffffff80ab1fc0
        FS:  0000000000000000(0000) GS:ffffffff806c4200(0000) knlGS:0000000000000000
        CS:  0010 DS: 0018 ES: 0018 CR0: 000000008005003b
        CR2: ffffffff82800000 CR3: 0000000000201000 CR4: 00000000000006e0
      
      this crash happened because it did not sanitize the dwarf2 data it
      got, and got an unaligned stack pointer - which happily walked past
      the process stack (and eventually reached the end of kernel memory
      and pagefaulted there) due to this naive iteration condition:
      
              HANDLE_STACK (((long) stack & (THREAD_SIZE-1)) != 0);
      
      note that i386 is a lot more conservative when it comes to trusting
      stack pointers:
      
        static inline int valid_stack_ptr(struct thread_info *tinfo, void *p)
        {
               return  p > (void *)tinfo &&
                       p < (void *)tinfo + THREAD_SIZE - 3;
        }
      
      but the x86_64 code did not take this bit of i386 code.
      
      The fix is to align the stack pointer.
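
      A sketch of the hardening (the helper name is illustrative; the range test
      mirrors the i386 helper quoted above):

        /* Sketch: sanitize a stack pointer recovered from dwarf2 data before
         * walking it. */
        static unsigned long *sanitize_sp_sketch(struct thread_info *tinfo,
                                                 unsigned long *stack)
        {
                stack = (unsigned long *)((unsigned long)stack & ~(sizeof(long) - 1));

                if ((void *)stack <= (void *)tinfo ||
                    (void *)stack >= (void *)tinfo + THREAD_SIZE - 3)
                        return NULL;            /* outside the current thread stack */
                return stack;
        }
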
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Cc: Andi Kleen <ak@suse.de>
      Cc: Jan Beulich <jbeulich@novell.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  11. 17 November 2006 (6 commits)
  12. 16 November 2006 (2 commits)
    • [IA64] bte_unaligned_copy() transfers one extra cache line. · cbf093e8
      Authored by Robin Holt
      When called to do a transfer whose start offset within the cache line
      differs between source and destination, and whose length ends the source
      of the copy exactly on a cache line boundary, one extra line gets copied
      into a temporary buffer.  This is normally not an issue
      since the buffer is a kernel buffer and only the requested information
      gets copied into the user buffer.
      
      The problem arises when the source ends at the very last physical page
      of memory.  That last cache line does not exist and results in the SHUB
      chip raising an MCA.
      Signed-off-by: Robin Holt <holt@sgi.com>
      Signed-off-by: Dean Nelson <dcn@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
    • [PATCH] Use delayed disable mode of ioapic edge triggered interrupts · 45c99533
      Authored by Eric W. Biederman
      Komuro reports that ISA interrupts do not work after a disable_irq(),
      causing some PCMCIA drivers to not work, with messages like
      
      	eth0: Asix AX88190: io 0x300, irq 3, hw_addr xx:xx:xx:xx:xx:xx
      	eth0: found link beat
      	eth0: autonegotiation complete: 100baseT-FD selected
      	eth0: interrupt(s) dropped!
      	eth0: interrupt(s) dropped!
      	eth0: interrupt(s) dropped!
      	...
      
      Linus Torvalds <torvalds@osdl.org> said:
      
        "Now, edge-triggered interrupts are a _lot_ harder to mask, because the
         Intel APIC is an unbelievable piece of sh*t, and has the edge-detect logic
         _before_ the mask logic, so if a edge happens _while_ the device is
         masked, you'll never ever see the edge ever again (unmasking will not
         cause a new edge, so you simply lost the interrupt).
      
         So when you "mask" an edge-triggered IRQ, you can't really mask it at all,
         because if you did that, you'd lose it forever if the IRQ comes in while
         you masked it. Instead, we're supposed to leave it active, and set a flag,
         and IF the IRQ comes in, we just remember it, and mask it at that point
         instead, and then on unmasking, we have to replay it by sending a
         self-IPI."
      
      This trivial patch solves the problem.
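
      A sketch of the delayed-disable flow in genirq terms (the function names are
      illustrative; the actual patch is a trivial switch to this behaviour for
      ioapic edge interrupts):

        /* "Disabling" an edge IRQ: do not mask the ioapic pin, or an edge
         * arriving while masked would be lost forever; just note the request. */
        static void edge_disable_sketch(struct irq_desc *desc)
        {
                desc->status |= IRQ_DISABLED;
        }

        /* If an edge arrives while logically disabled, latch it instead of
         * losing it; the edge has been seen, so nothing is dropped. */
        static void edge_handle_sketch(struct irq_desc *desc)
        {
                if (unlikely(desc->status & IRQ_DISABLED)) {
                        desc->status |= IRQ_PENDING;
                        return;
                }
                /* ... normal edge handling ... */
        }

        /* Re-enabling: replay anything that was latched while disabled. */
        static void edge_enable_sketch(struct irq_desc *desc)
        {
                desc->status &= ~IRQ_DISABLED;
                if (desc->status & IRQ_PENDING) {
                        desc->status &= ~IRQ_PENDING;
                        /* replay the remembered edge, e.g. by sending a self-IPI
                         * so the normal handler runs again */
                }
        }
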
      Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Acked-by: Komuro <komurojun-mbn@nifty.com>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  13. 15 November 2006 (2 commits)
    • [PATCH] hugetlb: prepare_hugepage_range check offset too · 68589bc3
      Authored by Hugh Dickins
      (David:)
      
      If hugetlbfs_file_mmap() returns a failure to do_mmap_pgoff() - for example,
      because the given file offset is not hugepage aligned - then do_mmap_pgoff
      will go to the unmap_and_free_vma backout path.
      
      But at this stage the vma hasn't been marked as hugepage, and the backout path
      will call unmap_region() on it.  That will eventually call down to the
      non-hugepage version of unmap_page_range().  On ppc64, at least, that will
      cause serious problems if there are any existing hugepage pagetable entries in
      the vicinity - for example if there are any other hugepage mappings under the
      same PUD.  unmap_page_range() will trigger a bad_pud() on the hugepage pud
      entries.  I suspect this will also cause bad problems on ia64, though I don't
      have a machine to test it on.
      
      (Hugh:)
      
      prepare_hugepage_range() should check file offset alignment when it checks
      virtual address and length, to stop MAP_FIXED with a bad huge offset from
      unmapping before it fails further down.  PowerPC should apply the same
      prepare_hugepage_range alignment checks as ia64 and all the others do.
      
      Then none of the alignment checks in hugetlbfs_file_mmap are required (nor
      is the check for too small a mapping); but even so, move up setting of
      VM_HUGETLB and add a comment to warn of what David Gibson discovered - if
      hugetlbfs_file_mmap fails before setting it, do_mmap_pgoff's unmap_region
      when unwinding from error will go the non-huge way, which may cause bad
      behaviour on architectures (powerpc and ia64) which segregate their huge
      mappings into a separate region of the address space.
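
      A sketch of the strengthened check, close to the generic form described
      (per-arch versions add their own region tests):

        /* Sketch: fail a huge mapping up front if the address, length, or
         * file offset is not hugepage aligned, before anything is unmapped. */
        int prepare_hugepage_range(unsigned long addr, unsigned long len, pgoff_t pgoff)
        {
                if (pgoff & (~HPAGE_MASK >> PAGE_SHIFT))
                        return -EINVAL;         /* offset not on a hugepage boundary */
                if (len & ~HPAGE_MASK)
                        return -EINVAL;
                if (addr & ~HPAGE_MASK)
                        return -EINVAL;
                return 0;
        }
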
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Acked-by: Adam Litke <agl@us.ibm.com>
      Acked-by: David Gibson <david@gibson.dropbear.id.au>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] fix via586 irq routing for pirq 5 · f3ac8432
      Authored by Daniel Ritz
      Fix interrupt routing for VIA 586 bridges.  pirq can be 5, which needs to be
      mapped to INTD, but currently the access functions can only handle pirq
      1-4.  This is similar to the other VIA chipsets, where pirq 4 and 5 are both
      mapped to INTD.  Fixes bugzilla #7490
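
      A sketch of the mapping being described (all names and nibble constants here
      are placeholders; the real router nibble offsets live in the i386 PCI irq
      routing code):

        /* Sketch: pirq values 1..5 select a router register nibble; 4 and 5
         * both land on the INTD nibble, as on the other VIA routers. */
        enum { NIB_INTA = 0, NIB_INTB = 1, NIB_INTC = 2, NIB_INTD = 3 };  /* placeholders */

        static int via586_pirq_nibble(int pirq)
        {
                static const int nib[5] = {
                        NIB_INTA, NIB_INTB, NIB_INTC, NIB_INTD, NIB_INTD
                };

                if (pirq < 1 || pirq > 5)
                        return -1;
                return nib[pirq - 1];
        }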
      
      Cc: Daniel Paschka <monkey20181@gmx.net>
      Cc: Adrian Bunk <bunk@stusta.de>
      Signed-off-by: Daniel Ritz <daniel.ritz@gmx.ch>
      Cc: <stable@kernel.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  14. 14 November 2006 (2 commits)
    • [PATCH] x86-64: Fix race in exit_idle · 9446868b
      Authored by Andi Kleen
      When another interrupt happens in exit_idle the exit idle notifier
      could be called an incorrect number of times.
      
      Add a test_and_clear_bit_pda and use it to handle the bit
      atomically against interrupts to avoid this.
      
      Pointed out by Stephane Eranian
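
      A sketch of the shape of the fix (the function name and pda field name
      are illustrative):

        void exit_idle_sketch(void)
        {
                /* Atomically claim the "leaving idle" event.  If a nested
                 * interrupt already ran the exit path, the bit is clear and we
                 * return, so the idle notifier fires exactly once per exit. */
                if (test_and_clear_bit_pda(0, isidle) == 0)
                        return;
                __exit_idle();
        }
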
      Signed-off-by: Andi Kleen <ak@suse.de>
    • [PATCH] x86-64: Fix vgetcpu when CONFIG_HOTPLUG_CPU is disabled · 8c131af1
      Authored by Andi Kleen
      The vgetcpu per-CPU initialization previously relied on CPU hotplug
      events for all CPUs to initialize the per-CPU state. That only
      worked on kernels with CONFIG_HOTPLUG_CPU enabled.  On the
      others, some CPUs didn't get their state initialized properly
      and vgetcpu wouldn't work.
      
      Change the initialization sequence to instead run in a normal
      initcall (which runs after the normal CPU bootup) and initialize
      all running CPUs there. Later hotplug CPUs are still handled
      with a hotplug notifier.
      
      This actually simplifies the code somewhat.
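
      A sketch of the described sequencing (function names are illustrative):

        /* Sketch: do the per-CPU setup from an initcall, which runs after all
         * boot CPUs are online, with or without CONFIG_HOTPLUG_CPU. */
        static int vgetcpu_cpu_notify_sketch(struct notifier_block *nb,
                                             unsigned long action, void *hcpu)
        {
                if (action == CPU_ONLINE)
                        vgetcpu_set_cpu_data((long)hcpu);   /* illustrative per-CPU setup */
                return NOTIFY_DONE;
        }

        static int __init vgetcpu_init_sketch(void)
        {
                int cpu;

                for_each_online_cpu(cpu)
                        vgetcpu_set_cpu_data(cpu);

                /* CPUs hot-added later are still covered by the notifier. */
                hotcpu_notifier(vgetcpu_cpu_notify_sketch, 0);
                return 0;
        }
        __initcall(vgetcpu_init_sketch);
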
      Signed-off-by: Andi Kleen <ak@suse.de>