1. 28 July 2009, 1 commit
    • mm: Pass virtual address to [__]p{te,ud,md}_free_tlb() · 9e1b32ca
      Committed by Benjamin Herrenschmidt
      
      Upcoming patches to support the new 64-bit "BookE" powerpc architecture
      will need the virtual address corresponding to the PTE page when
      freeing it, due to the way the HW table walker works.
      
      Basically, the TLB can be loaded with "large" pages that cover the whole
      virtual space (well, sort of, half of it actually) represented by a PTE
      page, and which contain an "indirect" bit indicating that this TLB entry
      RPN points to an array of PTEs from which the TLB can then create direct
      entries. Thus, in order to invalidate those when PTE pages are deleted,
      we need the virtual address to pass to tlbilx or tlbivax instructions.
      
      The old trick of stashing the address somewhere in the PTE page's
      struct page is too ugly; the address is readily available at almost
      all call sites, and almost everybody implements these as macros anyway,
      so we may as well add the argument everywhere. I added it to the pmd
      and pud variants for consistency. (A sketch of the resulting interface
      follows this entry.)
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Acked-by: David Howells <dhowells@redhat.com> [MN10300 & FRV]
      Acked-by: Nick Piggin <npiggin@suse.de>
      Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com> [s390]
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
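      A hedged sketch of the resulting interface (illustrative, not the
      verbatim patch): each free hook now also carries the virtual address of
      the page-table page being freed, which a BookE-style architecture can
      feed to its TLB-invalidate instructions. The need_flush bookkeeping and
      the booke_invalidate_indirect() helper are assumptions for illustration.

          /* Generic wrapper: now also passes the VA covered by the PTE page. */
          #define pte_free_tlb(tlb, ptep, address)                \
                  do {                                            \
                          (tlb)->need_flush = 1;                  \
                          __pte_free_tlb(tlb, ptep, address);     \
                  } while (0)

          /* A BookE-style arch hook could then use the address like this: */
          static inline void __pte_free_tlb(struct mmu_gather *tlb,
                                            pgtable_t pte, unsigned long address)
          {
                  booke_invalidate_indirect(address); /* hypothetical tlbilx wrapper */
                  tlb_remove_page(tlb, pte);
          }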
  2. 24 July 2009, 4 commits
    • [S390] vdso: clock_gettime of CLOCK_THREAD_CPUTIME_ID with noexec=on · 1277580f
      Committed by Martin Schwidefsky
      The combination of noexec=on and a clock_gettime call with clock id
      CLOCK_THREAD_CPUTIME_ID is broken. The vdso code switches to the
      access register mode to get access to the per-cpu data structure to
      execute the magic ectg instruction. After the ectg instruction the
      code always switches back to the primary mode but for noexec=on the
      correct mode is the secondary mode. The effect of the bug is that the
      user-space program loses access to all mappings without PROT_EXEC,
      e.g. the stack. The problem is fixed by restoring the mode that was
      active before the switch to access-register mode; a sketch of the
      flow follows this entry.
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
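      A minimal sketch of the control flow of the fix, written in C rather
      than the actual vdso assembly; sacf() and
      current_address_space_control() are hypothetical wrappers, and the
      operand values follow the s390 SACF instruction (0 = primary,
      256 = secondary, 512 = access-register mode).

          #define SACF_PRIMARY    0
          #define SACF_SECONDARY  256
          #define SACF_ACCESS_REG 512

          static unsigned long thread_cputime(void)
          {
                  /* Remember the caller's mode: secondary when noexec=on. */
                  unsigned int saved = current_address_space_control();
                  unsigned long clock;

                  sacf(SACF_ACCESS_REG);          /* needed to reach per-cpu data */
                  clock = ectg_thread_cputime();  /* hypothetical ectg wrapper */

                  /* The buggy code did sacf(SACF_PRIMARY) unconditionally here. */
                  sacf(saved);                    /* the fix: restore previous mode */
                  return clock;
          }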
    • [S390] vdso: fix per cpu area allocation · 3a6ba460
      Committed by Heiko Carstens
      The vdso per-cpu area allocation in smp_prepare_cpus() happens with
      GFP_KERNEL but with irqs disabled, which triggers this warning:
      
      Badness at kernel/lockdep.c:2280
      Modules linked in:
      CPU: 0 Not tainted 2.6.30 #2
      Process swapper (pid: 1, task: 000000003fe88000, ksp: 000000003fe87eb8)
      Krnl PSW : 0400c00180000000 0000000000083360 (lockdep_trace_alloc+0xec/0xf8)
      [...]
      Call Trace:
      ([<00000000000832b6>] lockdep_trace_alloc+0x42/0xf8)
       [<00000000000b1880>] __alloc_pages_internal+0x3e8/0x5c4
       [<00000000000b1b4a>] __get_free_pages+0x3a/0xb0
       [<0000000000026546>] vdso_alloc_per_cpu+0x6a/0x18c
       [<00000000005eff82>] smp_prepare_cpus+0x322/0x594
       [<00000000005e8232>] kernel_init+0x76/0x398
       [<000000000001bb1e>] kernel_thread_starter+0x6/0xc
       [<000000000001bb18>] kernel_thread_starter+0x0/0xc
      
      Fix this by moving the allocation out of the irqs-disabled section,
      as sketched below.
      Reported-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
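      A minimal sketch of the pattern behind the fix, with illustrative
      names rather than the actual s390 code: do the GFP_KERNEL allocation
      while irqs are still enabled, and only then enter the irqs-disabled
      section.

          static void *vdso_area;

          static void prepare_cpu(void)
          {
                  unsigned long flags;

                  /* Before the fix this allocation sat inside the
                   * irqs-disabled region below, which a GFP_KERNEL
                   * (potentially sleeping) allocation must never do. */
                  vdso_area = (void *)__get_free_pages(GFP_KERNEL, 1);

                  local_irq_save(flags);
                  /* ... setup that genuinely needs irqs off ... */
                  local_irq_restore(flags);
          }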
    • [S390] hibernation: fix register corruption on machine checks · c63b196a
      Committed by Heiko Carstens
      On hibernation, swsusp_arch_suspend() saves all CPU register contents.
      Machine checks must be disabled while it runs, since it stores
      register contents to their lowcore save areas: the same place where
      register contents would be saved on a machine check. To avoid that
      corruption, machine checks are disabled for the duration. They must
      also be disabled in the new PSW mask for program checks, since
      swsusp_arch_suspend() may itself generate program checks. (A sketch
      of the idea follows this entry.)
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
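      A hedged sketch of the idea in C (the real code is s390 assembly):
      clear the machine-check bit in the current PSW mask and in the
      program-check new PSW before touching the lowcore save areas.
      read_psw_mask(), load_psw_mask(), pgm_new_psw and
      store_registers_to_save_areas() are hypothetical stand-ins;
      PSW_MASK_MCHECK names the machine-check enable bit.

          static void save_cpu_registers(void)
          {
                  unsigned long mask = read_psw_mask();   /* hypothetical */

                  load_psw_mask(mask & ~PSW_MASK_MCHECK); /* mchecks off */
                  pgm_new_psw.mask &= ~PSW_MASK_MCHECK;   /* also off for any
                                                             program checks we raise */

                  store_registers_to_save_areas();        /* uses lowcore areas */

                  load_psw_mask(mask);                    /* restore */
          }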
    • [S390] hibernation: fix lowcore handling · 5f954c34
      Committed by Heiko Carstens
      Our swsusp_arch_suspend() backend implementation disables prefixing
      by setting the contents of the prefix register to 0. However,
      common-code functions that might access per-cpu data structures are
      called afterwards. Since the lowcore contains, e.g., the per-cpu base
      pointer, this isn't safe. Fix this by first copying the hibernating
      cpu's lowcore to absolute address zero, as sketched below.
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
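      A minimal sketch of the fix, assuming a hypothetical
      copy_lowcore_to_absolute_zero() helper; set_prefix() wraps the s390
      spx instruction.

          static void disable_prefixing(void)
          {
                  /* While prefixing is active, accesses to "address 0" are
                   * redirected to the prefix area, so first make absolute
                   * address zero hold a copy of this cpu's lowcore ... */
                  copy_lowcore_to_absolute_zero();

                  /* ... then common code can still reach lowcore fields,
                   * e.g. the per-cpu base pointer, with the prefix at 0. */
                  set_prefix(0);
          }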
  3. 23 July 2009, 1 commit
  4. 21 July 2009, 1 commit
  5. 19 July 2009, 1 commit
  6. 18 July 2009, 1 commit
    • vmlinux.lds.h: restructure BSS linker script macros · 04e448d9
      Committed by Tim Abbott
      The BSS section macros in vmlinux.lds.h currently place the .sbss
      input section outside the bounds of [__bss_start, __bss_end].  On all
      architectures except for microblaze that handle both .sbss and
      __bss_start/__bss_end, this is wrong: the .sbss input section is
      within the range [__bss_start, __bss_end].  Relatedly, the example
      code at the top of the file actually has __bss_start/__bss_end defined
      twice; I believe the right fix here is to define them in the
      BSS_SECTION macro but not in the BSS macro.
      
      Another problem with the current macros is that several
      architectures have an ALIGN(4) or some other small number just before
      __bss_stop in their linker scripts.  The BSS_SECTION macro currently
      hardcodes this to 4, when it should really be an argument.  It also
      ignores its sbss_align argument; fix that.
      
      mn10300 is the only user at present of any of the macros touched by
      this patch.  It looks like mn10300 was actually incorrectly converted
      to use the new BSS() macro (the alignment of 4 prior to conversion
      was a __bss_stop alignment, but the argument to the BSS macro is a
      start alignment).  So fix this as well; the restructured macro is
      sketched after this entry.
      
      I'd like acks from Sam and David on this one.  Also CCing Paul, since
      he has a patch from me which will need to be updated to use
      BSS_SECTION(0, PAGE_SIZE, 4) once this gets merged.
      Signed-off-by: Tim Abbott <tabbott@ksplice.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: David Howells <dhowells@redhat.com>
      Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
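      A hedged sketch of the restructured macro (paraphrasing its intent,
      not quoting vmlinux.lds.h verbatim), assuming SBSS() and BSS() emit
      the .sbss and .bss output sections: __bss_start/__bss_stop are
      defined exactly once, .sbss lands inside those bounds, and the stop
      alignment becomes an argument instead of a hardcoded 4.

          #define BSS_SECTION(sbss_align, bss_align, stop_align)  \
                  . = ALIGN(sbss_align);                          \
                  __bss_start = .;                                \
                  SBSS(sbss_align)  /* now inside the bounds */   \
                  BSS(bss_align)                                  \
                  . = ALIGN(stop_align); /* argument, not 4 */    \
                  __bss_stop = .;

      An architecture's linker script would then invoke it as, e.g.,
      BSS_SECTION(0, PAGE_SIZE, 4), the form mentioned above for the patch
      Paul has queued.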
  7. 17 July 2009, 7 commits
  8. 16 July 2009, 20 commits
  9. 15 July 2009, 2 commits
  10. 14 July 2009, 2 commits