1. 10 Nov 2009, 1 commit
    • x86: pat: Remove ioremap_default() · 2fb8f4e6
      Authored by Xiaotian Feng
      Commit:
      
        b6ff32d9: x86, PAT: Consolidate code in pat_x_mtrr_type() and reserve_memtype()
      
      consolidated reserve_memtype() and pat_x_mtrr_type(), which made
      ioremap_default() identical to ioremap_cache().
      
      Remove the redundant function and change the only caller to use
      ioremap_cache().
      Signed-off-by: Xiaotian Feng <dfeng@redhat.com>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
      LKML-Reference: <1257845005-7938-1-git-send-email-dfeng@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  2. 08 Nov 2009, 1 commit
  3. 11 Sep 2009, 1 commit
  4. 27 Aug 2009, 1 commit
  5. 11 Apr 2009, 1 commit
  6. 10 Apr 2009, 2 commits
  7. 25 Mar 2009, 1 commit
  8. 22 Mar 2009, 1 commit
  9. 13 Mar 2009, 2 commits
  10. 09 Mar 2009, 2 commits
    • x86-32: make sure virt_addr_valid() returns false for fixmap addresses · 0feca851
      Authored by Jeremy Fitzhardinge
      I found that virt_addr_valid() was returning true for fixmap addresses.
      
      I'm not sure whether pfn_valid() is supposed to include this test,
      but there's no harm in being explicit.
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      Cc: Jiri Slaby <jirislaby@gmail.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      LKML-Reference: <49B166D6.2080505@goop.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86: don't define __this_fixmap_does_not_exist() · 8827247f
      Authored by Wang Chen
      Impact: improve out-of-range fixmap index debugging
      
      Commit 1b42f516 defined the __this_fixmap_does_not_exist()
      function with a WARN_ON(1) in it.
      
      This causes the linker to not report an error when
      __this_fixmap_does_not_exist() is called with a
      non-constant parameter.
      
      Ingo defined __this_fixmap_does_not_exist() because he wanted to
      obtain the virtual addresses of the nesting-level fixmap slots
      via a non-constant index.
      
      But we can fix this and still keep the link-time check:
      
      We can compute the four slot virtual addresses at link time and
      store them in the slot_virt[] array.
      
      The ioremap-leak detection code can then index slot_virt[] with a
      non-constant index.
      Signed-off-by: Wang Chen <wangchen@cn.fujitsu.com>
      LKML-Reference: <49B2075B.4070509@cn.fujitsu.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  11. 05 Mar 2009, 2 commits
  12. 12 Feb 2009, 1 commit
  13. 22 Jan 2009, 1 commit
  14. 16 Jan 2009, 1 commit
    • x86: fix assumed to be contiguous leaf page tables for kmap_atomic region (take 2) · a3c6018e
      Authored by Jan Beulich
      Debugging and original patch from Nick Piggin <npiggin@suse.de>
      
      The early fixmap pmd entry inserted at the very top of the KVA is causing the
      subsequent fixmap mapping code to not provide physically linear pte pages over
      the kmap atomic portion of the fixmap (which relies on said property to
      calculate pte addresses).
      
      This has caused weird boot failures in kmap_atomic much later in the
      boot process (initial userspace faults) on a 32-bit PAE system with a
      larger number of CPUs (smaller CPU counts tend not to run over into
      the next page, so they do not expose the problem).
      
      Solve this by attempting to clear out the page table, and copy any of
      its entries to the new one. Also, add a BUG() if a nonlinear condition
      is encountered and cannot be resolved, which might save some hours of
      debugging if this fragile scheme ever breaks again...
      
      Once we have such logic, we can also use it to eliminate the early ioremap
      trickery around the page table setup for the fixmap area. This also fixes
      potential issues with FIX_* entries sharing the leaf page table with the early
      ioremap ones getting discarded by early_ioremap_clear() and not restored by
      early_ioremap_reset(). It at once eliminates the temporary (and configuration,
      namely NR_CPUS, dependent) unavailability of early fixed mappings during the
      time the fixmap area page tables get constructed.
      
      Finally, also replace the hard coded calculation of the initial table space
      needed for the fixmap area with a proper one, allowing kernels configured for
      large CPU counts to actually boot.
      
      Based-on: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Jan Beulich <jbeulich@novell.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  15. 12 Dec 2008, 1 commit
  16. 29 Oct 2008, 1 commit
    • x86: start annotating early ioremap pointers with __iomem · 1d6cf1fe
      Authored by Harvey Harrison
      Impact: some new sparse warnings in e820.c etc, but no functional change.
      
      As with regular ioremap, iounmap etc, annotate with __iomem.
      
      Fixes the following sparse warnings; it will produce some new ones
      elsewhere in arch/x86 that will get worked out over time.
      
      arch/x86/mm/ioremap.c:402:9: warning: cast removes address space of expression
      arch/x86/mm/ioremap.c:406:10: warning: cast adds address space to expression (<asn:2>)
      arch/x86/mm/ioremap.c:782:19: warning: Using plain integer as NULL pointer
      Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  17. 13 Oct 2008, 4 commits
  18. 12 Oct 2008, 1 commit
    • x86, early_ioremap: fix fencepost error · c613ec1a
      Authored by Alan Cox
      The x86 implementation of early_ioremap has an off-by-one error. If we
      get an object which ends on the first byte of a page, we undermap it by
      one page, and this causes a crash on boot with the ASUS P5QL, whose DMI
      table happens to fit this alignment.
      
      The size computation is currently
      
      	last_addr = phys_addr + size - 1;
      	npages = (PAGE_ALIGN(last_addr) - phys_addr)
      
      (Consider a request for 1 byte at alignment 0...)
      
      Closes #11693
      
      Debugging work by Ian Campbell/Felix Geyer
      Signed-off-by: Alan Cox <alan@redhat.com>
      Cc: <stable@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  19. 11 Oct 2008, 1 commit
  20. 26 Sep 2008, 1 commit
  21. 21 Aug 2008, 1 commit
  22. 16 Aug 2008, 1 commit
    • x86: Fix ioremap off by one BUG · e213e877
      Authored by Andi Kleen
      Jean Delvare's machine triggered this BUG
      
      acpi_os_map_memory phys ffff0000 size 65535
      ------------[ cut here ]------------
      kernel BUG at arch/x86/mm/pat.c:233!
      
      with ACPI in the backtrace.
      
      Adding some debugging output showed that ACPI calls
      
      acpi_os_map_memory phys ffff0000 size 65535
      
      And ioremap/PAT does this check in 32-bit arithmetic, so addr + size
      wraps around and the BUG_ON() in reserve_memtype() triggers incorrectly.
      
              BUG_ON(start >= end); /* end is exclusive */
      
      But reserve_memtype already uses u64:
      
      int reserve_memtype(u64 start, u64 end,
      
      so the 32bit truncation must happen in the caller. Presumably in ioremap
      when it passes this information to reserve_memtype().
      
      This patch does this computation in 64bit.
      
      http://bugzilla.kernel.org/show_bug.cgi?id=11346
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
  23. 25 Jul 2008, 1 commit
  24. 23 Jul 2008, 1 commit
  25. 10 Jul 2008, 1 commit
  26. 08 Jul 2008, 4 commits
    • x86, 64-bit: adjust mapping of physical pagetables to work with Xen · 4f9c11dd
      Authored by Jeremy Fitzhardinge
      This makes a few changes to the construction of the initial
      pagetables to work better with paravirt_ops/Xen.  The main areas
      are:
      
       1. Support non-PSE mapping of memory, since Xen doesn't currently
          allow 2M pages to be mapped in guests.
      
       2. Make sure that the ioremap aliases of all pages are dropped before
          attaching the new page to the pagetable.  This avoids having
          writable aliases of pagetable pages.
      
       3. Preserve existing pagetable entries, rather than overwriting.  It's
          possible that a fair amount of pagetable has already been constructed,
          so reuse what's already in place rather than ignoring and overwriting it.
      
      The algorithm relies on the invariant that any page which is part of
      the kernel pagetable is itself mapped in the linear memory area.  This
      way, it can avoid using ioremap on a pagetable page.
      
      The invariant holds because it maps memory from low to high addresses,
      and also allocates memory from low to high.  Each allocated page can
      map at least 2M of address space, so the mapped area will always
      progress much faster than the allocated area.  It relies on the early
      boot code mapping enough pages to get started.
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      Cc: xen-devel <xen-devel@lists.xensource.com>
      Cc: Stephen Tweedie <sct@redhat.com>
      Cc: Eduardo Habkost <ehabkost@redhat.com>
      Cc: Mark McLoughlin <markmc@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86, 64-bit: unify early_ioremap · 4583ed51
      Authored by Jeremy Fitzhardinge
      The 32-bit early_ioremap will work equally well for 64-bit, so just use it.
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      Cc: xen-devel <xen-devel@lists.xensource.com>
      Cc: Stephen Tweedie <sct@redhat.com>
      Cc: Eduardo Habkost <ehabkost@redhat.com>
      Cc: Mark McLoughlin <markmc@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • build: add __page_aligned_data and __page_aligned_bss · a7bf0bd5
      Authored by Jeremy Fitzhardinge
      Making a variable page-aligned by using
      __attribute__((section(".data.page_aligned"))) is fragile because if
      sizeof(variable) is not also a multiple of page size, it leaves
      variables in the remainder of the section unaligned.
      
      This patch introduces two new qualifiers, __page_aligned_data and
      __page_aligned_bss to set the section *and* the alignment of
      variables.  This makes page-aligned variables more robust because the
      linker will make sure they're aligned properly.  Unfortunately it
      requires *all* page-aligned data to use these macros...
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86: add sparse annotations to ioremap · 6e92a5a6
      Authored by Thomas Gleixner
      arch/x86/mm/ioremap.c:308:11: error: incompatible types in comparison expression (different address spaces)
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  27. 24 Jun 2008, 1 commit
  28. 19 Jun 2008, 2 commits
    • x86, MM: virtual address debug, v2 · a1bf9631
      Authored by Jiri Slaby
      I've removed the test from phys_to_nid() and made __phys_addr() a
      function only when the debugging is enabled (on x86_32).
      Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
      Cc: tglx@linutronix.de
      Cc: hpa@zytor.com
      Cc: Mike Travis <travis@sgi.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: <x86@kernel.org>
      Cc: linux-mm@kvack.org
      Cc: Jiri Slaby <jirislaby@gmail.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • MM: virtual address debug · 59ea7463
      Authored by Jiri Slaby
      Add some (configurable) expensive sanity checking to catch wrong address
      translations on x86.
      
      - create the linux/mmdebug.h file so it can be included from asm
        headers without creating unsolvable loops in header files
      - __phys_addr on x86_32 became a function in ioremap.c, since the
        PAGE_OFFSET, is_vmalloc_addr and VMALLOC_* non-constants are
        undefined if declared in page_32.h
      - add __phys_addr_const for initializing doublefault_tss.__cr3
      
      Tested on 386, 386pae, x86_64 and x86_64 numa=fake=2.
      
      Contains Andi's enable numa virtual address debug patch.
      Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  29. 12 Jun 2008, 1 commit