1. 14 Oct 2010 (1 commit)
    • xen: Cope with unmapped pages when initializing kernel pagetable · fef5ba79
      Committed by Jeremy Fitzhardinge
      Xen requires that all pages containing pagetable entries be mapped
      read-only.  If the pages used for the initial pagetable are already mapped,
      then we can change the mapping to RO.  However, if they are initially
      unmapped, we need to make sure that when they are later mapped, they
      are also mapped RO.
      
      We do this by relying on the fact that the kernel pagetable memory is
      pre-allocated in the range e820_table_start - e820_table_end, so any pfn
      within this range should be mapped read-only.  However, the pagetable setup
      code early_ioremaps the pages to write their entries, so we must make sure
      that mappings created in the early_ioremap fixmap area are mapped RW.
      (Those mappings are removed before the pages are presented to Xen
      as pagetable pages.)  A sketch of the pfn range check follows this entry.
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      LKML-Reference: <4CB63A80.8060702@goop.org>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
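
      The range check described above can be sketched as follows; this is a
      minimal, self-contained illustration, not the kernel code itself. The
      helper name, the table_start/table_end parameters (standing in for
      e820_table_start/e820_table_end) and the is_early_ioremap_fixmap flag
      are assumptions made for the example.

      #include <stdbool.h>

      /* Hedged sketch: a pfn inside the preallocated kernel pagetable range must
       * end up mapped read-only, because Xen will later treat those pages as
       * pagetable pages.  Pages currently being written through the early_ioremap
       * fixmap are the exception and stay RW until that temporary mapping is
       * removed. */
      static bool pfn_must_be_ro(unsigned long pfn,
                                 unsigned long table_start,
                                 unsigned long table_end,
                                 bool is_early_ioremap_fixmap)
      {
              if (is_early_ioremap_fixmap)
                      return false;   /* keep RW so the entries can still be written */
              return pfn >= table_start && pfn < table_end;
      }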
  2. 21 Jul 2010 (1 commit)
  3. 10 Jul 2010 (2 commits)
  4. 30 Apr 2010 (1 commit)
    • x86: Fix 'reservetop=' functionality · e67a807f
      Committed by Liang Li
      When the 'reservetop=0xbadc0de' kernel parameter is specified,
      the kernel stops booting due to an early_ioremap bug that
      relates to commit 8827247f.
      
      The root cause of the boot failure is that 'slot_virt[i]' is
      initialized in setup_arch()->early_ioremap_init(), but later in
      setup_arch() the function 'parse_early_param' modifies
      'FIXADDR_TOP' when 'reservetop=0xbadc0de' is specified.
      
      The simplest fix might be to use __fix_to_virt(idx0) to get the
      updated value of 'FIXADDR_TOP' in '__early_ioremap' instead of
      referencing the old value from slot_virt[slot] directly.
      
      Changelog since v0:
      
      -v1: When reservetop is handled, FIXADDR_TOP gets adjusted;
           hence check prev_map, then re-initialize slot_virt and the
           PMD based on the new FIXADDR_TOP.
      
      -v2: Place fixup_early_ioremap(), which calls early_ioremap_init(),
           in reserve_top_address() to re-initialize slot_virt and the
           corresponding PMD when parse_reservetop() runs.
      
      -v3: Move fixup_early_ioremap() out of reserve_top_address() to
           make sure other clients of reserve_top_address(), like
           xen/lguest, won't be broken. (A sketch of the resulting fixup
           follows this entry.)
      Signed-off-by: Liang Li <liang.li@windriver.com>
      Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Acked-by: Yinghai Lu <yinghai@kernel.org>
      Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      Cc: Wang Chen <wangchen@cn.fujitsu.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      LKML-Reference: <1272621711-8683-1-git-send-email-liang.li@windriver.com>
      [ fixed three small cleanliness details in fixup_early_ioremap() ]
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
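
      A minimal sketch of the v3 shape described above, assuming a helper
      that runs after 'reservetop=' has been parsed; prev_map[], the reinit
      callback (standing in for early_ioremap_init()) and the slot count are
      simplified stand-ins for kernel state, and re-pointing the PMD is
      omitted.

      #include <stddef.h>

      #define FIX_BTMAPS_SLOTS 4

      /* Stand-in for kernel state: one "previous mapping" pointer per early slot. */
      static void *prev_map[FIX_BTMAPS_SLOTS];

      /* Hedged sketch: once FIXADDR_TOP has been lowered by 'reservetop=',
       * slot_virt[] and the backing page table must be rederived.  That is only
       * safe while no early mapping is live, so bail out (the kernel warns) if
       * one is found; otherwise redo the initialization from the new top. */
      static void fixup_early_ioremap_sketch(void (*reinit)(void))
      {
              int i;

              for (i = 0; i < FIX_BTMAPS_SLOTS; i++)
                      if (prev_map[i])
                              return;   /* a live early mapping: unexpected, do nothing */

              reinit();
      }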
  5. 02 Feb 2010 (2 commits)
  6. 30 Dec 2009 (1 commit)
    • x86: Lift restriction on the location of FIX_BTMAP_* · 499a5f1e
      Committed by Jan Beulich
      The early ioremap fixmap entries cover half (or for 32-bit
      non-PAE, a quarter) of a page table, yet so far they were
      unconditionally aligned to a 256-entry boundary. This is
      not necessary if the range of page table entries falls
      into a single page table anyway.
      
      This buys back, for (theoretically) 50% of all configurations
      (25% of all non-PAE ones), at least some of the lowmem
      necessarily lost with commit e621bd18. (A sketch of the
      single-page-table check follows this entry.)
      Signed-off-by: Jan Beulich <jbeulich@novell.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      LKML-Reference: <4B2BB66F0200007800026AD6@vpn.id2.novell.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
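
      A sketch of the "does the whole early-ioremap window fit in one leaf
      page table" test implied above; the constants mirror the kernel's
      NR_FIX_BTMAPS/FIX_BTMAPS_SLOTS, while the function name and the
      ptes_per_pt parameter (512 with PAE/64-bit, 1024 on 32-bit non-PAE)
      are illustrative assumptions.

      #include <stdbool.h>

      #define NR_FIX_BTMAPS     64UL
      #define FIX_BTMAPS_SLOTS   4UL
      #define TOTAL_FIX_BTMAPS  (NR_FIX_BTMAPS * FIX_BTMAPS_SLOTS)   /* 256 entries */

      /* Hedged sketch: extra alignment of FIX_BTMAP_* is only needed when the
       * 256 early-ioremap entries would straddle a leaf page-table boundary;
       * if the first and last index land in the same table, no alignment (and
       * no wasted lowmem) is required. */
      static bool btmap_fits_one_page_table(unsigned long first_index,
                                            unsigned long ptes_per_pt)
      {
              unsigned long last_index = first_index + TOTAL_FIX_BTMAPS - 1;

              return (first_index / ptes_per_pt) == (last_index / ptes_per_pt);
      }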
  7. 10 Nov 2009 (1 commit)
    • x86: pat: Remove ioremap_default() · 2fb8f4e6
      Committed by Xiaotian Feng
      Commit:
      
        b6ff32d9: x86, PAT: Consolidate code in pat_x_mtrr_type() and reserve_memtype()
      
      consolidated reserve_memtype() and pat_x_mtrr_type(), which
      made ioremap_default() the same as ioremap_cache().
      
      Remove the redundant function and change the only caller to use
      ioremap_cache().
      Signed-off-by: Xiaotian Feng <dfeng@redhat.com>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
      LKML-Reference: <1257845005-7938-1-git-send-email-dfeng@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  8. 08 Nov 2009 (1 commit)
  9. 11 Sep 2009 (1 commit)
  10. 27 Aug 2009 (1 commit)
  11. 11 Apr 2009 (1 commit)
  12. 10 Apr 2009 (2 commits)
  13. 25 Mar 2009 (1 commit)
  14. 22 Mar 2009 (1 commit)
  15. 13 Mar 2009 (2 commits)
  16. 09 Mar 2009 (2 commits)
    • x86-32: make sure virt_addr_valid() returns false for fixmap addresses · 0feca851
      Committed by Jeremy Fitzhardinge
      I found that virt_addr_valid() was returning true for fixmap addresses.
      
      I'm not sure whether pfn_valid() is supposed to include this test,
      but there's no harm in being explicit. (A sketch of the explicit
      check follows this entry.)
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      Cc: Jiri Slaby <jirislaby@gmail.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      LKML-Reference: <49B166D6.2080505@goop.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
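
      A sketch of the explicit check, with all bounds passed in as
      parameters; the kernel's __virt_addr_valid() works against
      PAGE_OFFSET, high_memory and the FIXADDR_* constants instead, so the
      function name and signature here are assumptions made for
      illustration.

      #include <stdbool.h>

      /* Hedged sketch: an address inside the fixmap window must make
       * virt_addr_valid() return false, even though pfn arithmetic would
       * still yield a number for it. */
      static bool virt_addr_valid_sketch(unsigned long addr,
                                         unsigned long page_offset,
                                         unsigned long high_memory,
                                         unsigned long fixaddr_start,
                                         unsigned long fixaddr_top)
      {
              if (addr >= fixaddr_start && addr < fixaddr_top)
                      return false;   /* fixmap addresses are never valid lowmem */
              return addr >= page_offset && addr < high_memory;
      }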
    • x86: don't define __this_fixmap_does_not_exist() · 8827247f
      Committed by Wang Chen
      Impact: improve out-of-range fixmap index debugging
      
      Commit "1b42f516"
      defined the __this_fixmap_does_not_exist() function
      with a WARN_ON(1) in it.
      
      This causes the linker to not report an error when
      __this_fixmap_does_not_exist() is called with a
      non-constant parameter.
      
      Ingo defined __this_fixmap_does_not_exist() because he wanted to
      get the virtual addresses of the nested early_ioremap fixmap slots
      via a non-constant index.
      
      But we can fix this and still keep the link-time check:
      
      We can compute the four slots' virtual addresses at link time and
      store them in the slot_virt[] array.
      
      We can then refer to slot_virt[] with a non-constant index in the
      ioremap-leak detection code. (A sketch of this arrangement follows
      this entry.)
      Signed-off-by: Wang Chen <wangchen@cn.fujitsu.com>
      LKML-Reference: <49B2075B.4070509@cn.fujitsu.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
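
      The arrangement can be sketched like this: the four slot addresses are
      derived from constant fixmap indices (so, in the real code, the
      undefined __this_fixmap_does_not_exist() reference still catches bad
      constants at link time), and runtime code only indexes the cached
      array. fix_to_virt_sketch(), the fixaddr_top/fix_btmap_begin
      parameters and the slot count are illustrative stand-ins for the
      kernel's __fix_to_virt()/FIX_BTMAP_BEGIN machinery.

      #define PAGE_SHIFT_SKETCH  12UL
      #define NR_FIX_BTMAPS      64UL
      #define FIX_BTMAPS_SLOTS    4UL

      /* Fixmap indices count down from the top of the fixmap area. */
      #define fix_to_virt_sketch(top, idx) \
              ((top) - ((unsigned long)(idx) << PAGE_SHIFT_SKETCH))

      static unsigned long slot_virt[FIX_BTMAPS_SLOTS];

      /* Hedged sketch: fill slot_virt[] once, from constant indices. */
      static void init_slot_virt(unsigned long fixaddr_top,
                                 unsigned long fix_btmap_begin)
      {
              int i;

              for (i = 0; i < FIX_BTMAPS_SLOTS; i++)
                      slot_virt[i] = fix_to_virt_sketch(fixaddr_top,
                                      fix_btmap_begin - NR_FIX_BTMAPS * i);
      }

      /* Later, e.g. in the ioremap-leak check, a non-constant index is fine. */
      static unsigned long slot_addr(int slot)
      {
              return slot_virt[slot];
      }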
  17. 05 Mar 2009 (2 commits)
  18. 12 Feb 2009 (1 commit)
  19. 22 Jan 2009 (1 commit)
  20. 16 Jan 2009 (1 commit)
    • x86: fix assumed to be contiguous leaf page tables for kmap_atomic region (take 2) · a3c6018e
      Committed by Jan Beulich
      Debugging and original patch from Nick Piggin <npiggin@suse.de>
      
      The early fixmap pmd entry inserted at the very top of the KVA is causing the
      subsequent fixmap mapping code to not provide physically linear pte pages over
      the kmap atomic portion of the fixmap (which relies on said property to
      calculate pte addresses).
      
      This has caused weird boot failures in kmap_atomic much later in the boot
      process (initial userspace faults) on a 32-bit PAE system with a larger number
      of CPUs (smaller CPU counts tend not to run over into the next page, so they
      don't show up the problem).
      
      Solve this by attempting to clear out the page table, copying any of its
      entries to the new one. Also, add a BUG() if a nonlinear condition is
      encountered and can't be resolved, which might save some hours of debugging
      if this fragile scheme ever breaks again. (A sketch of the entry-migration
      idea follows this entry.)
      
      Once we have such logic, we can also use it to eliminate the early ioremap
      trickery around the page table setup for the fixmap area. This also fixes
      potential issues with FIX_* entries sharing the leaf page table with the early
      ioremap ones getting discarded by early_ioremap_clear() and not restored by
      early_ioremap_reset(). It at once eliminates the temporary (and configuration,
      namely NR_CPUS, dependent) unavailability of early fixed mappings during the
      time the fixmap area page tables get constructed.
      
      Finally, also replace the hard coded calculation of the initial table space
      needed for the fixmap area with a proper one, allowing kernels configured for
      large CPU counts to actually boot.
      
      Based-on: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Jan Beulich <jbeulich@novell.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
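
      A minimal sketch of the entry-migration idea; the array
      representation, assert() in place of BUG(), and the PTRS_PER_PTE
      value chosen here are simplifying assumptions, not the kernel
      implementation.

      #include <assert.h>
      #include <stdint.h>

      #define PTRS_PER_PTE 512   /* PAE/64-bit leaf table; 1024 on 32-bit non-PAE */

      /* Hedged sketch: when the fixmap area gets its final leaf page table, any
       * entries already installed through the early fixmap pmd must be migrated
       * into it, so the kmap_atomic region ends up with physically linear pte
       * pages.  An unresolvable conflict is treated as a hard bug. */
      static void migrate_early_ptes(uint64_t new_pt[PTRS_PER_PTE],
                                     const uint64_t old_pt[PTRS_PER_PTE])
      {
              int i;

              for (i = 0; i < PTRS_PER_PTE; i++) {
                      if (!old_pt[i])
                              continue;
                      /* the slot must be free (or identical) in the new table */
                      assert(!new_pt[i] || new_pt[i] == old_pt[i]);
                      new_pt[i] = old_pt[i];
              }
      }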
  21. 12 Dec 2008 (1 commit)
  22. 29 Oct 2008 (1 commit)
    • x86: start annotating early ioremap pointers with __iomem · 1d6cf1fe
      Committed by Harvey Harrison
      Impact: some new sparse warnings in e820.c etc, but no functional change.
      
      As with regular ioremap, iounmap etc, annotate with __iomem.
      
      Fixes the following sparse warnings; it will produce some new ones
      elsewhere in arch/x86 that will get worked out over time. (A sketch of
      the annotation follows this entry.)
      
      arch/x86/mm/ioremap.c:402:9: warning: cast removes address space of expression
      arch/x86/mm/ioremap.c:406:10: warning: cast adds address space to expression (<asn:2>)
      arch/x86/mm/ioremap.c:782:19: warning: Using plain integer as NULL pointer
      Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
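
      For reference, a sketch of what the annotation looks like; the
      __CHECKER__ fallback mirrors the kernel's compiler.h convention of the
      time, while the *_sketch prototypes are illustrative rather than the
      exact signatures touched by the patch.

      /* Under sparse (__CHECKER__), __iomem places a pointer in a separate
       * address space so mixing it with ordinary pointers is reported;
       * for normal compilation it expands to nothing. */
      #ifdef __CHECKER__
      # define __iomem __attribute__((noderef, address_space(2)))
      #else
      # define __iomem
      #endif

      void __iomem *early_ioremap_sketch(unsigned long phys_addr, unsigned long size);
      void early_iounmap_sketch(void __iomem *addr, unsigned long size);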
  23. 13 Oct 2008 (4 commits)
  24. 12 Oct 2008 (1 commit)
    • x86, early_ioremap: fix fencepost error · c613ec1a
      Committed by Alan Cox
      The x86 implementation of early_ioremap has an off-by-one error. If we get
      an object which ends on the first byte of a page, we undermap by one page,
      and this causes a crash on boot with the ASUS P5QL, whose DMI table happens
      to fit this alignment.
      
      The size computation is currently
      
      	last_addr = phys_addr + size - 1;
      	npages = (PAGE_ALIGN(last_addr) - phys_addr)
      
      (Consider a request for 1 byte at alignment 0: last_addr is then 0 and
      PAGE_ALIGN(0) is 0, so no page gets mapped. A runnable demonstration
      follows this entry.)
      
      Closes #11693
      
      Debugging work by Ian Campbell/Felix Geyer
      Signed-off-by: Alan Cox <alan@rehat.com>
      Cc: <stable@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
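
      A small, runnable demonstration of the fencepost, assuming a 4 KiB
      page size and using "align one past the end" as the corrected form:

      #include <assert.h>

      #define PAGE_SIZE      4096UL
      #define PAGE_ALIGN(x)  (((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

      /* A 1-byte request at offset 0: last_addr = 0, PAGE_ALIGN(0) = 0, so the
       * old formula maps nothing; aligning one past the end maps the page that
       * is actually touched. */
      int main(void)
      {
              unsigned long phys_addr = 0, size = 1;
              unsigned long last_addr = phys_addr + size - 1;

              unsigned long buggy_bytes = PAGE_ALIGN(last_addr) - phys_addr;
              unsigned long fixed_bytes = PAGE_ALIGN(last_addr + 1) - phys_addr;

              assert(buggy_bytes == 0);               /* under-mapped by one page */
              assert(fixed_bytes == PAGE_SIZE);       /* one full page, as expected */
              return 0;
      }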
  25. 11 Oct 2008 (1 commit)
  26. 26 Sep 2008 (1 commit)
  27. 21 Aug 2008 (1 commit)
  28. 16 Aug 2008 (1 commit)
    • x86: Fix ioremap off by one BUG · e213e877
      Committed by Andi Kleen
      Jean Delvare's machine triggered this BUG
      
      acpi_os_map_memory phys ffff0000 size 65535
      ------------[ cut here ]------------
      kernel BUG at arch/x86/mm/pat.c:233!
      
      with ACPI in the backtrace.
      
      Adding some debugging output showed that ACPI calls
      
      acpi_os_map_memory phys ffff0000 size 65535
      
      And ioremap/PAT does this check in 32-bit arithmetic, so addr+size wraps
      and the BUG in reserve_memtype() triggers incorrectly.
      
              BUG_ON(start >= end); /* end is exclusive */
      
      But reserve_memtype already uses u64:
      
      int reserve_memtype(u64 start, u64 end,
      
      so the 32-bit truncation must happen in the caller, presumably in ioremap
      when it passes this information to reserve_memtype().
      
      This patch does the computation in 64-bit instead. (A runnable
      demonstration of the wrap follows this entry.)
      
      http://bugzilla.kernel.org/show_bug.cgi?id=11346
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
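
      A small, runnable demonstration of the truncation, using the values
      from the report (phys ffff0000, size 65535) and a 4 KiB page size; the
      macro names are local to the example.

      #include <assert.h>
      #include <stdint.h>

      #define PAGE_SIZE_32       4096u
      #define PAGE_ALIGN_32(x)   (((uint32_t)(x) + PAGE_SIZE_32 - 1) & ~(uint32_t)(PAGE_SIZE_32 - 1))

      /* Page-aligning the exclusive end of the request overflows 32 bits and
       * wraps to 0, so reserve_memtype() would see start >= end; doing the
       * same arithmetic in 64 bits keeps the end above the start. */
      int main(void)
      {
              uint32_t phys = 0xffff0000u;
              uint32_t size = 65535u;

              uint32_t end32 = PAGE_ALIGN_32(phys + size);
              uint64_t end64 = ((uint64_t)phys + size + PAGE_SIZE_32 - 1) &
                               ~(uint64_t)(PAGE_SIZE_32 - 1);

              assert(end32 == 0);                     /* wrapped past 4 GiB */
              assert(end64 == 0x100000000ULL);        /* correct exclusive end */
              return 0;
      }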
  29. 25 Jul 2008 (1 commit)
  30. 23 Jul 2008 (1 commit)
  31. 10 Jul 2008 (1 commit)