1. 11 May 2008 (5 commits)
  2. 09 May 2008 (1 commit)
  3. 08 May 2008 (3 commits)
  4. 07 May 2008 (2 commits)
    • pcspkr: fix dependencies · e5e1d3cb
      Committed by Stas Sergeev
      Fix the pcspkr dependencies: make the pcspkr platform
      drivers depend on a platform device, and
      not the other way around.
      Signed-off-by: Stas Sergeev <stsp@aknet.ru>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Dmitry Torokhov <dtor@mail.ru>
      CC: Vojtech Pavlik <vojtech@suse.cz>
      CC: Michael Opdenacker <michael-lists@free-electrons.com>
      [fixed for 2.6.26-rc1 by tiwai]
      Signed-off-by: Takashi Iwai <tiwai@suse.de>
      e5e1d3cb
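      The dependency direction described in this commit can be illustrated with a small
      platform device/driver pair: the platform code registers a bare "pcspkr" platform
      device, and the beeper side binds to it as a platform driver, so the driver depends
      on the device rather than the other way around. This is a minimal sketch only; the
      pcspkr_sketch_* names and probe logic are hypothetical and are not the actual pcspkr
      driver or its Kconfig change.

      /* Sketch: a bare "pcspkr" platform device plus a platform driver that binds to it. */
      #include <linux/err.h>
      #include <linux/init.h>
      #include <linux/module.h>
      #include <linux/platform_device.h>

      static struct platform_device *pcspkr_pdev;

      /* Driver side: probed only when a "pcspkr" platform device exists. */
      static int pcspkr_sketch_probe(struct platform_device *pdev)
      {
              dev_info(&pdev->dev, "pcspkr device found, setting up beeper\n");
              return 0;
      }

      static struct platform_driver pcspkr_sketch_driver = {
              .driver = { .name = "pcspkr" },
              .probe  = pcspkr_sketch_probe,
      };

      static int __init pcspkr_sketch_init(void)
      {
              int err;

              /* In the real tree the device is registered by platform code;
               * both halves live in one module here only so the sketch compiles. */
              pcspkr_pdev = platform_device_register_simple("pcspkr", -1, NULL, 0);
              if (IS_ERR(pcspkr_pdev))
                      return PTR_ERR(pcspkr_pdev);

              err = platform_driver_register(&pcspkr_sketch_driver);
              if (err)
                      platform_device_unregister(pcspkr_pdev);
              return err;
      }
      module_init(pcspkr_sketch_init);

      static void __exit pcspkr_sketch_exit(void)
      {
              platform_driver_unregister(&pcspkr_sketch_driver);
              platform_device_unregister(pcspkr_pdev);
      }
      module_exit(pcspkr_sketch_exit);

      MODULE_LICENSE("GPL");

      The point of the sketch is only the direction of the dependency: the driver binds to
      a device that the platform provides, instead of the device registration depending on
      which beeper driver happens to be built.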
    • x86: fix PAE pmd_bad bootup warning · aeed5fce
      Committed by Hugh Dickins
      Fix warning from pmd_bad() at bootup on a HIGHMEM64G HIGHPTE x86_32.
      
      That came from commit 9fc34113 ("x86: debug pmd_bad()"),
      but we understand now that the typecasting was wrong for PAE in the previous
      version: pagetable pages above 4GB looked bad and stopped Arjan from booting.
      
      And revert commit cded932b ("x86: fix pmd_bad
      and pud_bad to support huge pages").  It was the wrong way round: we shouldn't
      weaken every pmd_bad and pud_bad check to let huge pages slip through - in
      part they check that we _don't_ have a huge page where it's not expected.
      
      Put the x86 pmd_bad() and pud_bad() definitions back to what they have long
      been: they can be improved (x86_32 should use PTE_MASK, to stop PAE thinking
      junk in the upper word is good; and x86_64 should follow x86_32's stricter
      comparison, to stop thinking any subset of required bits is good); but that
      should be a later patch.
      
      Fix Hans' good observation that follow_page() will never find pmd_huge()
      because that would have already failed the pmd_bad test: test pmd_huge in
      between the pmd_none and pmd_bad tests.  Tighten x86's pmd_huge() check?
      No, once it's a hugepage entry, it can get quite far from a good pmd: for
      example, PROT_NONE leaves it with only ACCESSED of the KERN_PGTABLE bits.
      
      However... though follow_page() contains this and another test for huge
      pages, so it's nice to keep it working on them, where does it actually get
      called on a huge page?  get_user_pages() checks is_vm_hugetlb_page(vma)
      to call alternative hugetlb processing, as do unmap_vmas() and others.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Earlier-version-tested-by: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jeff Chua <jeff.chua.linux@gmail.com>
      Cc: Hans Rosenfeld <hans.rosenfeld@amd.com>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      aeed5fce
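      Two points in this commit lend themselves to a small self-contained sketch: a
      pmd_bad()-style check that masks off the page-frame address and then requires the
      remaining flag bits to equal the expected table bits exactly, and the follow_page()-style
      ordering where the huge-page test sits between the pmd_none and pmd_bad tests. The
      constants and helpers below (SK_*, sk_*) are illustrative assumptions modelled on plain
      64-bit values, not the kernel's real pmd_t definitions.

      /* Standalone model of the check ordering discussed above. */
      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      /* Illustrative constants, not the real x86 definitions. */
      #define SK_PTE_MASK     0x000ffffffffff000ULL  /* page-frame address bits */
      #define SK_KERNPG_TABLE 0x0000000000000063ULL  /* PRESENT|RW|ACCESSED|DIRTY */
      #define SK_PSE          0x0000000000000080ULL  /* huge-page (PSE) bit */

      typedef uint64_t sk_pmd_t;

      static bool sk_pmd_none(sk_pmd_t pmd) { return pmd == 0; }
      static bool sk_pmd_huge(sk_pmd_t pmd) { return pmd & SK_PSE; }

      /* Stricter pmd_bad(): mask off the page-frame address, then require the
       * flag bits to equal the expected table bits exactly, so junk in the
       * upper PAE word and partial flag subsets are both rejected. */
      static bool sk_pmd_bad(sk_pmd_t pmd)
      {
              return (pmd & ~SK_PTE_MASK) != SK_KERNPG_TABLE;
      }

      /* follow_page()-style ordering: the huge test sits between none and bad,
       * because a huge entry (e.g. PROT_NONE leaving little but ACCESSED set)
       * is nowhere near a "good" table pmd and would otherwise be rejected. */
      static const char *sk_classify_pmd(sk_pmd_t pmd)
      {
              if (sk_pmd_none(pmd))
                      return "none";
              if (sk_pmd_huge(pmd))
                      return "huge page";
              if (sk_pmd_bad(pmd))
                      return "bad";
              return "page table";
      }

      int main(void)
      {
              sk_pmd_t table = 0x12345000ULL | SK_KERNPG_TABLE; /* good table pmd */
              sk_pmd_t huge  = SK_PSE | 0x20ULL;                /* huge, ACCESSED only */
              sk_pmd_t junk  = 0x1ULL;                          /* bogus flag bits */

              printf("table: %s\n", sk_classify_pmd(table));
              printf("huge:  %s\n", sk_classify_pmd(huge));
              printf("junk:  %s\n", sk_classify_pmd(junk));
              return 0;
      }

      Reordering sk_classify_pmd() to test sk_pmd_bad() before sk_pmd_huge() would misreport
      the huge entry as bad, which is the follow_page() ordering problem this commit fixes.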
  5. 06 May 2008 (3 commits)
  6. 05 May 2008 (11 commits)
  7. 04 May 2008 (15 commits)