1. 07 Feb 2008, 1 commit
  2. 04 Feb 2008, 1 commit
  3. 30 Jan 2008, 3 commits
  4. 11 Oct 2007, 2 commits
  5. 19 Aug 2007, 1 commit
  6. 30 Jul 2007, 1 commit
  7. 25 Jul 2007, 1 commit
  8. 23 Jul 2007, 1 commit
  9. 17 Jul 2007, 1 commit
  10. 03 May 2007, 9 commits
    • [PATCH] x86-64: Remove unused stext symbol · b8716890
      Andi Kleen committed
      Suggested by Jan Beulich.
      Signed-off-by: Andi Kleen <ak@suse.de>
    • [PATCH] x86: tighten kernel image page access rights · 6fb14755
      Jan Beulich committed
      On x86-64, kernel memory freed after init can be entirely unmapped instead
      of just getting 'poisoned' by overwriting with a debug pattern.
      
      On i386 and x86-64 (under CONFIG_DEBUG_RODATA), kernel text and bug table
      can also be write-protected.
      
      Compared to the first version, this one prevents re-creating deleted
      mappings in the kernel image range on x86-64, if those got removed
      previously. This, together with the original changes, prevents temporarily
      having inconsistent mappings when cacheability attributes are being
      changed on such pages (e.g. from AGP code).  While such duplicate
      mappings don't exist on i386, the same change is made there too, both for
      consistency and because checking pte_present() before using various other
      pte_XXX functions is a requirement anyway.  At the same time, the i386
      code is adjusted to use pte_huge() instead of open coding it; a sketch
      of the resulting ordering rule follows below.
      
      AK: split out cpa() changes
      Signed-off-by: Jan Beulich <jbeulich@novell.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
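      A minimal sketch of that ordering rule, using the pte accessors of that
      era (the split_large_page() call is simplified here, not the real
      signature):

        /* Sketch only: pte_present() must be checked before any other
         * pte_XXX accessor, since the mapping may have been removed
         * entirely; pte_huge() replaces an open-coded _PAGE_PSE test. */
        static void sketch_change_attr(pte_t *kpte, unsigned long address,
                                       pgprot_t prot)
        {
                if (!pte_present(*kpte))
                        return;         /* nothing mapped here any more */
                if (pte_huge(*kpte))
                        split_large_page(address, prot);  /* simplified */
                else
                        set_pte(kpte, pte_modify(*kpte, prot));
        }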
    • [PATCH] x86-64: Relocatable Kernel Support · 1ab60e0f
      Vivek Goyal committed
      This patch modifies the x86_64 kernel so that it can be loaded and run
      at any 2M aligned address, below 512G.  The technique used is to
      compile the decompressor with -fPIC and modify it so the decompressor
      is fully relocatable.  For the main kernel the page tables are
      modified so the kernel remains at the same virtual address.  In
      addition a variable phys_base is kept that holds the physical address
      the kernel is loaded at.  __pa_symbol is modified to add that offset
      when we take the address of a kernel symbol.
      
      When loaded with a normal bootloader, the decompressor will decompress
      the kernel to 2M and it will run there.  This both ensures the
      relocation code is always exercised and makes it easier to use 2M
      pages for the kernel and the CPU.
      
      AK: changed to not make RELOCATABLE default in Kconfig
      Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
      Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
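      The __pa_symbol() change can be illustrated in a few standalone lines
      (dummy values for demonstration, not the exact kernel macro):

        #include <stdio.h>

        /* __START_KERNEL_map is the fixed virtual base the kernel links at;
         * phys_base records where the image actually landed in memory. */
        #define __START_KERNEL_map 0xffffffff80000000UL
        static unsigned long phys_base;  /* set early from the load address */

        #define __pa_symbol(x) \
                ((unsigned long)(x) - __START_KERNEL_map + phys_base)

        int main(void)
        {
                unsigned long sym = 0xffffffff80200000UL; /* a kernel symbol */

                phys_base = 0x4000000UL;  /* say the kernel landed at 64M */
                printf("symbol %#lx -> phys %#lx\n", sym, __pa_symbol(sym));
                return 0;
        }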
    • [PATCH] x86-64: Remove the identity mapping as early as possible · cfd243d4
      Vivek Goyal committed
      With the rewrite of the SMP trampoline and the early page
      allocator there is nothing that needs identity mapped pages,
      once we start executing C code.
      
      So add zap_identity_mappings into head64.c and remove
      zap_low_mappings() from much later in the code.  The functions
      are subtly different, hence the name change.
      
      This also kills boot_level4_pgt which was from an earlier
      attempt to move the identity mappings as early as possible,
      and is now no longer needed.  Essentially I have replaced
      boot_level4_pgt with trampoline_level4_pgt in trampoline.S
      Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
      Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
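      The new helper is essentially a one-liner; roughly (paraphrasing the
      head64.c addition, not quoting the patch verbatim):

        /* Clear the pgd entry covering virtual address 0, dropping the
         * boot-time identity mapping once C code is running, then flush. */
        static void __init zap_identity_mappings(void)
        {
                pgd_t *pgd = pgd_offset_k(0UL);

                pgd_clear(pgd);
                __flush_tlb();
        }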
    • [PATCH] x86-64: 64bit ACPI wakeup trampoline · d8e1baf1
      Vivek Goyal committed
      o Moved wakeup_level4_pgt into the wakeup routine so we can
        run the kernel above 4G.
      
      o Now we first go to 64bit mode and continue to run from the trampoline,
        and only then start accessing kernel symbols and restoring processor
        context.  This enables us to resume even in a relocatable kernel, when
        the kernel might not be loaded at the physical addr it was compiled for.
      
      o Removed the need for modifying any existing kernel page table.
      
      o Increased the size of the wakeup routine to 8K. This is required as
        the wakeup page tables are on the trampoline itself and have to be at
        a 4K boundary, hence one page is not sufficient.
      Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
      Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
    • [PATCH] x86-64: 64bit PIC SMP trampoline · 90b1c208
      Vivek Goyal committed
      This modifies the SMP trampoline and all of the associated code so
      it can jump to a 64bit kernel loaded at an arbitrary address.
      
      The dependencies on having an identity-mapped page in the kernel
      page tables for SMP bootup have all been removed.
      
      In addition the trampoline has been modified to verify
      that long mode is supported.  Asking if long mode is implemented is
      downright silly, but we have traditionally had some of these checks,
      and they can't hurt anything.  So when the totally ludicrous happens,
      we just might handle it correctly.
      Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
      Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
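      The check itself lives in 16/32-bit trampoline assembly, but the same
      CPUID query can be made from a small standalone C program (leaf
      0x80000001, EDX bit 29 is the "LM" long-mode flag):

        #include <stdio.h>
        #include <cpuid.h>  /* GCC/Clang helper for the CPUID instruction */

        int main(void)
        {
                unsigned int eax, ebx, ecx, edx;

                /* Extended feature flags; bail out if the leaf is missing. */
                if (!__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx)) {
                        puts("extended CPUID leaf not available");
                        return 1;
                }
                printf("long mode %ssupported\n",
                       (edx & (1u << 29)) ? "" : "not ");
                return 0;
        }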
    • [PATCH] x86-64: cleanup segments · 30f47289
      Vivek Goyal committed
      Move __KERNEL32_CS up into the unused gdt entry.  __KERNEL32_CS is
      used when entering the kernel so putting it first is useful when
      trying to keep boot gdt sizes to a minimum.
      
      Set the accessed bit on all gdt entries.  We don't care,
      so there is no need for the CPU to burn the extra cycles,
      and it potentially allows the pages to be immutable.  Plus
      it is confusing when debugging if your gdt entries mysteriously
      change.
      Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
      Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
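      For reference, the accessed flag is bit 40 of a segment descriptor
      (bit 0 of its type field), so pre-setting it turns a 0x9a kernel-code
      access byte into 0x9b; a standalone sketch (the descriptor value is
      illustrative of that era's cpu_gdt_table):

        #include <stdio.h>

        #define GDT_ACCESSED_BIT (1ULL << 40)   /* "accessed" flag */

        int main(void)
        {
                /* 64-bit kernel code segment without the accessed bit... */
                unsigned long long kernel_cs = 0x00af9a000000ffffULL;

                /* ...and with it pre-set, so the CPU never rewrites the GDT. */
                printf("%#llx\n", kernel_cs | GDT_ACCESSED_BIT);
                return 0;
        }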
    • [PATCH] x86-64: Clean up the early boot page table · 67dcbb6b
      Vivek Goyal committed
      - Merge physmem_pgt and ident_pgt, removing physmem_pgt.  The merge
        is broken as soon as mm/init.c:init_memory_mapping is run.
      - As physmem_pgt is gone don't export it in pgtable.h.
      - Use defines from pgtable.h for page permissions.
      - Fix the physical memory identity mapping so it is at the correct
        address.
      - Remove the physical memory mapping from wakeup_level4_pgt; it
        is at the wrong address, so we can't possibly be using it.
      - Simplify NEXT_PAGE: the work to calculate the phys_ alias
        of the labels was very cool.  Unfortunately it was a brittle
        special-purpose hack that makes maintenance more difficult.
        Instead just use label - __START_KERNEL_map like we do
        everywhere else in assembly.
      Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
      Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
    • [PATCH] x86-64: Kill temp boot pmds · dafe41ee
      Vivek Goyal committed
      Early in the boot process we need the ability to set
      up temporary mappings, before our normal mechanisms are
      initialized.  Currently this is used to map pages that
      are part of the page tables we are building and pages
      during the dmi scan.
      
      The core problem is that we are using the user portion of
      the page tables to implement this, which means that while
      this mechanism is active we cannot catch NULL pointer dereferences,
      and we deviate from the normal ways of handling things.
      
      In this patch I modify early_ioremap to map pages into
      the kernel portion of address space, roughly where
      we will later put modules, and I make the discovery of
      which addresses we can use dynamic, which removes all
      kinds of static limits and removes the dependencies
      on implementation details between different parts of the code.

      Now alloc_low_page() and unmap_low_page() use
      early_ioremap() and early_iounmap() to allocate/map and
      unmap a page, as sketched below.
      Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
      Signed-off-by: Vivek Goyal <vgoyal@in.ibm.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
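      Roughly, the new pattern (simplified from the mm/init.c of that era;
      bounds checks and the after-bootmem path are omitted):

        /* table_end tracks the next free boot-time page frame. */
        static unsigned long table_end;

        static void *alloc_low_page(unsigned long *phys)
        {
                unsigned long pfn = table_end++;  /* next free frame */
                void *adr;

                *phys = pfn << PAGE_SHIFT;
                /* Map it high, in the kernel portion of the address
                 * space, instead of via user-range temp boot pmds. */
                adr = early_ioremap(*phys, PAGE_SIZE);
                memset(adr, 0, PAGE_SIZE);
                return adr;
        }

        static void unmap_low_page(void *adr)
        {
                early_iounmap(adr, PAGE_SIZE);
        }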
  11. 13 Feb 2007, 1 commit
    • [PATCH] x86-64: x86_64 - Fix FS/GS registers for VT execution · ffb60175
      Zachary Amsden committed
      Initialize FS and GS to __KERNEL_DS as well.  The actual value of them is not
      important, but it is important to reload them in protected mode.  At this time,
      they still retain the real mode values from initial boot.  VT disallows
      execution of code under such conditions, which means hardware virtualization
      cannot be used to boot the kernel on Intel platforms, making the boot time
      painfully slow.
      
      This requires moving the GS load before the load of GS_BASE, so just move
      all the segment loads there to keep them together in the code.
      Signed-off-by: Zachary Amsden <zach@vmware.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
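      The ordering matters because loading the %gs selector clobbers the GS
      base; an inline-asm sketch of the constraint (not the head.S code
      verbatim, and percpu_base is an illustrative name):

        static void load_boot_segments(unsigned long percpu_base)
        {
                unsigned int sel = __KERNEL_DS;

                /* Give every data segment register a valid protected-mode
                 * selector; VT refuses to run with stale real-mode values. */
                asm volatile("movl %0, %%ds\n\t"
                             "movl %0, %%es\n\t"
                             "movl %0, %%fs\n\t"
                             "movl %0, %%gs"
                             : : "r" (sel));

                /* The %gs load above zeroed GS_BASE, so the MSR write
                 * must come after it, never before. */
                wrmsrl(MSR_GS_BASE, percpu_base);
        }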
  12. 26 Sep 2006, 3 commits
    • [PATCH] Reload CS when startup_64 is used. · 26374c7b
      Eric W. Biederman committed
      In long mode the %cs is largely a relic.  However there are a few cases
      like iret where it matters that we have a valid value.  Without this
      patch it is possible to enter the kernel in startup_64 without setting
      %cs to a valid value.  With this patch we don't care what %cs value
      we enter the kernel with, so long as the cs shadow register indicates
      it is a privileged code segment.
      
      Thanks to Magnus Damm for finding this problem and posting the
      first workable patch.  I have moved the jump that sets %cs down a
      few instructions so we don't need to take an extra jump, which
      keeps the code simpler.
      Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
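      Since %cs cannot be written with a plain mov in long mode, the reload
      is done with a far return; a minimal inline-asm sketch of the idiom
      (not the head.S instructions verbatim):

        static void reload_cs(void)
        {
                /* Push the desired CS and a return RIP; lretq then pops
                 * RIP and CS together, forcing %cs to a known value. */
                asm volatile("pushq %0\n\t"
                             "pushq $1f\n\t"
                             "lretq\n"
                             "1:" : : "i" (__KERNEL_CS));
        }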
    • [PATCH] Remove obsolete CVS $Id$ from assembler files in arch/x86_64/kernel/* · 44cc4526
      Andi Kleen committed
      CVS hasn't been used for a long time for them.
      Signed-off-by: Andi Kleen <ak@suse.de>
    • [PATCH] Add the vgetcpu vsyscall · c08c8205
      Vojtech Pavlik committed
      This patch adds a vgetcpu vsyscall, which depending on the CPU's RDTSCP
      capability uses either RDTSCP or CPUID to obtain the CPU and node
      numbers and pass them to the program.
      
      AK: Lots of changes over Vojtech's original code:
      - Better prototype for vgetcpu().
      - Pass the cpu and node numbers as separate arguments
        to avoid mistakes when going from SMP to NUMA.
      - Add a fast time-stamp-based cache using a user-supplied
        argument to speed things up further.
      - Use the fast method from Chuck Ebbert to retrieve node/cpu from
        the GDT limit instead of CPUID.
      - Make sure RDTSCP init is always executed after the node is known.
      - Drop a printk.
      Signed-off-by: Vojtech Pavlik <vojtech@suse.cz>
      Signed-off-by: Andi Kleen <ak@suse.de>
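      The fast path can be reproduced from userspace (a standalone sketch;
      it assumes a CPU with RDTSCP and relies on the kernel initializing
      TSC_AUX to (node << 12) | cpu, the encoding the vsyscall decodes):

        #include <stdio.h>

        int main(void)
        {
                unsigned int aux;

                /* RDTSCP returns the TSC in EDX:EAX and TSC_AUX in ECX. */
                asm volatile("rdtscp" : "=c" (aux) : : "eax", "edx");
                printf("cpu %u, node %u\n", aux & 0xfff, aux >> 12);
                return 0;
        }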
  13. 31 Aug 2006, 1 commit
  14. 26 Mar 2006, 2 commits
  15. 18 Feb 2006, 1 commit
  16. 17 Jan 2006, 1 commit
  17. 12 Jan 2006, 2 commits
  18. 15 Nov 2005, 1 commit
  19. 05 Oct 2005, 1 commit
    • [PATCH] x86_64: Drop global bit from early low mappings · 944d2647
      Andi Kleen committed
      Drop global bit from early low mappings
      
      Suggested by Linus, originally also proposed by Suresh.
      
      This fixes a race condition with early start of udev, originally
      tracked down by Suresh B. Siddha.  The problem was that switching
      to the user space VM would not clear the global low mappings
      for the beginning of memory, which led to memory corruption.
      
      Drop the global bits.
      
      The kernel mapping stays global because it should stay constant.
      Signed-off-by: Andi Kleen <ak@suse.de>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
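      The mechanism: TLB entries created from a pte with the global bit set
      survive CR3 reloads, so a global low mapping can leak into the first
      user address space; a two-line illustration (the G bit position is
      architectural, KERNEL_LARGE_FLAGS is a hypothetical name):

        #define _PAGE_GLOBAL (1UL << 8) /* G bit: entry survives CR3 writes */

        /* Early low mappings must not carry it; the fixed kernel map may. */
        #define EARLY_LOW_FLAGS (KERNEL_LARGE_FLAGS & ~_PAGE_GLOBAL)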
  20. 13 Sep 2005, 1 commit
  21. 29 Jul 2005, 1 commit
  22. 26 Jun 2005, 1 commit
    • [PATCH] kexec: x86_64: add CONFIG_PHYSICAL_START · d0537508
      Eric W. Biederman committed
      For one kernel to report a crash another kernel has created, we need
      to have 2 kernels loaded simultaneously in memory.  To accomplish this
      the two kernels need to be built to run at different physical addresses.

      This patch adds the CONFIG_PHYSICAL_START option to the x86_64 kernel
      so we can do just that.  You need to know what you are doing, and what
      the ramifications are, before changing this value; most users
      won't care, so I have made it depend on CONFIG_EMBEDDED.
      
      bzImage kernels will work and run at a different address when compiled
      with this option, but they will still load at 1MB.  If you need a kernel
      loaded at a different address as well, you need to boot a vmlinux.
      Signed-off-by: Eric Biederman <ebiederm@xmission.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
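      The option feeds straight into the link address; roughly, following
      the i386 pattern in include/asm-x86_64/page.h of that era (a sketch,
      exact lines may differ):

        /* Physical address the kernel is built to run at (default 1MB),
         * and the virtual start address it is linked at. */
        #define __PHYSICAL_START        CONFIG_PHYSICAL_START
        #define __START_KERNEL          (__START_KERNEL_map + __PHYSICAL_START)
        #define __START_KERNEL_map      0xffffffff80000000UL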
  23. 17 Apr 2005, 2 commits