1. 23 Sep 2009: 1 commit
  2. 22 Sep 2009: 3 commits
  3. 21 Sep 2009: 14 commits
  4. 20 Sep 2009: 7 commits
  5. 19 Sep 2009: 3 commits
  6. 18 Sep 2009: 6 commits
    • x86: SGI UV: Map MMIO-High memory range · daf7b9c9
      Jack Steiner authored
      UV depends on the MMRHI space being identity mapped. The patch:
      
      	x86: Make 64-bit efi_ioremap use ioremap on MMIO regions
      
      changed this to map EFI regions at a different address using
      ioremap. Add the identity mapping back in uv_system_init.
      
      ( Note: this code was previously present but was deleted when the
        BIOS added the ranges to the EFI map - the earlier EFI code
        identity-mapped the ranges. )
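      
      A minimal sketch of what that identity mapping looks like in the
      UV init path. The helper below follows the x2apic_uv_x.c
      conventions of this era; treat the exact names and register
      layout as assumptions, not the literal patch:
      
        /*
         * Sketch: identity-map a UV MMIO-High window during
         * uv_system_init(). init_extra_mapping_uc() installs a
         * virtual == physical, uncached mapping.
         */
        static __init void map_high(char *id, unsigned long base,
                                    int shift, int max_pnode)
        {
                unsigned long bytes, paddr;
      
                paddr = base << shift;                    /* window start */
                bytes = (1UL << shift) * (max_pnode + 1); /* window size  */
                printk(KERN_INFO "UV: map %s_HI 0x%lx - 0x%lx\n",
                       id, paddr, paddr + bytes);
                init_extra_mapping_uc(paddr, bytes);
        }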
      Signed-off-by: Jack Steiner <steiner@sgi.com>
      LKML-Reference: <20090909154339.GA7946@sgi.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86: SGI UV: Add volatile semantics to macros that access chipset registers · 8dc579e8
      Jack Steiner authored
      Add volatile semantics to the SGI UV read/write macros that are
      used to access chipset memory-mapped registers. No direct
      references to volatile are made; instead, the readq/writeq
      macros are used.
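      
      A sketch of the shape of this change on one accessor (the helper
      names follow the uv_hub.h macros of this era and are assumptions
      here):
      
        /* Before: a plain dereference; the pointer is not
         * volatile-qualified, so the compiler may reorder or
         * coalesce the accesses. */
        static inline unsigned long
        uv_read_global_mmr64(int pnode, unsigned long offset)
        {
                return *uv_global_mmr64_address(pnode, offset);
        }
      
        /* After: readq() supplies the volatile access semantics
         * required for memory-mapped chipset registers. */
        static inline unsigned long
        uv_read_global_mmr64(int pnode, unsigned long offset)
        {
                return readq(uv_global_mmr64_address(pnode, offset));
        }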
      Signed-off-by: Jack Steiner <steiner@sgi.com>
      Cc: linux-mm@kvack.org
      Cc: dwalker@fifo99.com
      Cc: cfriesen@nortel.com
      LKML-Reference: <20090910143149.GA14273@sgi.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86: SGI UV: Fix IPI macros · d2374aec
      Jack Steiner authored
      The UV BIOS has changed the way interrupt remapping is done,
      which affects the id used for sending IPIs: the upper id bits no
      longer need to be masked off.
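      
      Illustratively, the fix drops the mask applied to the APIC id
      when composing the IPI MMR value. The field names below come
      from the UV headers; the old mask width is an assumption:
      
        /* Before: upper id bits were stripped before the write */
        val = (1UL << UVH_IPI_INT_SEND_SHFT) |
              ((apicid & 0x3f) << UVH_IPI_INT_APIC_ID_SHFT) |
              (vector << UVH_IPI_INT_VECTOR_SHFT);
      
        /* After: the full id is written, matching the new BIOS
         * interrupt remapping */
        val = (1UL << UVH_IPI_INT_SEND_SHFT) |
              (apicid << UVH_IPI_INT_APIC_ID_SHFT) |
              (vector << UVH_IPI_INT_VECTOR_SHFT);
        uv_write_global_mmr64(pnode, UVH_IPI_INT, val);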
      Signed-off-by: Jack Steiner <steiner@sgi.com>
      Cc: <stable@kernel.org>
      LKML-Reference: <20090909154104.GA25083@sgi.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86: apic: Convert BUG() to BUG_ON() · c2777f98
      Daniel Walker authored
      This was done using Coccinelle's BUG_ON semantic patch.
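      
      The classic form of that semantic patch, and the code shape it
      produces (the condition below is only an example):
      
        @@
        expression e;
        @@
        - if (e) BUG();
        + BUG_ON(e);
      
        /* before */
        if (!addr)
                BUG();
      
        /* after */
        BUG_ON(!addr);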
      Signed-off-by: Daniel Walker <dwalker@fifo99.com>
      Cc: Julia Lawall <julia@diku.dk>
      LKML-Reference: <1252777220-30796-1-git-send-email-dwalker@fifo99.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86: Remove final bits of CONFIG_X86_OLD_MCE · bc3eb707
      Andi Kleen authored
      Caught by Linus.
      Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Cc: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
      Cc: Huang Ying <ying.huang@intel.com>
      Cc: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
      Cc: Borislav Petkov <borislav.petkov@amd.com>
      [ fixed up context conflict manually. ]
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86, pat: don't use rb-tree based lookup in reserve_memtype() · dcb73bf4
      Suresh Siddha authored
      The recent enhancement of the rb-tree based lookup exposed a bug
      in the lookup mechanism of reserve_memtype(), which ensures that
      there are no conflicting memtype requests for the memory range.
      
      memtype_rb_search() returns an entry whose start address is <=
      the new start address, and from there we traverse the linear
      linked list to check if there are any conflicts with the existing
      mappings. As the rb-tree is keyed on the start address of the
      memory range, it is quite possible that several overlapping
      mappings have a start address much lower than the new requested
      start but an end >= the new requested end. This results in
      conflicting memtype mappings.
      
      The same bug exists in the old code, which traverses the linear
      linked list starting from cached_entry; the new rb-tree code
      merely exposes it more easily.
      
      For now, don't use memtype_rb_search(); always start the search
      from the head of the linear linked list in reserve_memtype(). On
      most systems the linear linked list grows to only a few tens of
      entries (as we track the memory type of RAM pages using struct
      page), so we should be OK for now.
      
      We still retain the rb-tree and use it to speed up free_memtype(),
      which doesn't have the same bug (there we know exactly what we
      are searching for).
      
      Also use list_for_each_entry_from() in free_memtype() so that the
      search starts from the rb-tree lookup result.
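      
      A sketch of the failure mode and of the conflict test itself; the
      addresses and the helper below are illustrative, not the kernel
      code:
      
        /*
         * existing A: [0x1000, 0x9000)  starts far below the request
         * existing B: [0x4000, 0x5000)  what a "start <= new start"
         *                               search returns as nearest to
         *                               new = [0x6000, 0x7000)
         *
         * A forward walk that begins at B never looks back at A, yet
         * A still overlaps the new range. Scanning from the list head
         * catches it.
         */
        static int ranges_conflict(u64 a_start, u64 a_end, int a_type,
                                   u64 b_start, u64 b_end, int b_type)
        {
                /* overlapping ranges with different memory types */
                return a_start < b_end && b_start < a_end &&
                       a_type != b_type;
        }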
      Reported-by: Markus Trippelsdorf <markus@trippelsdorf.de>
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
      LKML-Reference: <1253136483.4119.12.camel@sbs-t61.sc.intel.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
  7. 16 Sep 2009: 6 commits