1. 29 June 2013: 3 commits
  2. 24 June 2013: 12 commits
  3. 22 June 2013: 1 commit
    • aout32 coredump compat fix · 945fb136
      Committed by Al Viro
      dump_seek() does SEEK_CUR, not SEEK_SET; the native binfmt_aout
      handles it correctly (it seeks by PAGE_SIZE - sizeof(struct user),
      bringing the current position to PAGE_SIZE), while the compat one
      seeks by a full PAGE_SIZE and ends up at PAGE_SIZE + what was
      already written...
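      A minimal arithmetic sketch of the off-by-delta (stand-in sizes,
      not the kernel code):

        /* dump_seek() advances relative to the current position (SEEK_CUR). */
        #include <stdio.h>

        int main(void)
        {
                long page = 4096, user_sz = 296;  /* stand-in for sizeof(struct user) */
                long pos = user_sz;               /* position after dumping struct user */

                /* native path: relative seek by PAGE_SIZE - sizeof(struct user) */
                printf("native lands at %ld\n", pos + (page - user_sz)); /* 4096 */
                /* compat path before the fix: relative seek by a full PAGE_SIZE */
                printf("compat lands at %ld\n", pos + page);             /* 4392 */
                return 0;
        }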
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  4. 21 June 2013: 1 commit
  5. 20 June 2013: 8 commits
    • x86: Fix trigger_all_cpu_backtrace() implementation · b52e0a7c
      Committed by Michel Lespinasse
      The following change fixes the x86 implementation of
      trigger_all_cpu_backtrace(), which had previously (accidentally,
      as far as I can tell) been disabled so that it always returned
      false, as on architectures that do not implement this function.
      
      trigger_all_cpu_backtrace(), as defined in include/linux/nmi.h,
      should call arch_trigger_all_cpu_backtrace() if available, or
      return false if the underlying arch doesn't implement this
      function.
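      Roughly, the dispatch that paragraph describes (a sketch of the
      pattern, not the verbatim header):

        #include <stdbool.h>

        /* Use the arch hook when the arch defines it, else report failure. */
        #ifdef arch_trigger_all_cpu_backtrace
        static inline bool trigger_all_cpu_backtrace(void)
        {
                arch_trigger_all_cpu_backtrace();
                return true;
        }
        #else
        static inline bool trigger_all_cpu_backtrace(void)
        {
                return false;
        }
        #endif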
      
      x86 did provide a suitable arch_trigger_all_cpu_backtrace()
      implementation, but it wasn't actually being used because it was
      declared in asm/nmi.h, which linux/nmi.h doesn't include. Also,
      linux/nmi.h couldn't easily be fixed by including asm/nmi.h,
      because that file is not available on all architectures.
      
      I am proposing to fix this by moving the x86 definition of
      arch_trigger_all_cpu_backtrace() to asm/irq.h.
      
      Tested via: echo l > /proc/sysrq-trigger
      
      Before the change, this uses a fallback implementation which
      shows backtraces on active CPUs (using
      smp_call_function_interrupt()).

      After the change, this shows NMI backtraces on all CPUs.
      Signed-off-by: Michel Lespinasse <walken@google.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Link: http://lkml.kernel.org/r/1370518875-1346-1-git-send-email-walken@google.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • ARM: kernel: implement stack pointer save array through MPIDR hashing · 7604537b
      Committed by Lorenzo Pieralisi
      The current implementation of cpu_{suspend}/cpu_{resume} relies on
      the MPIDR to index the array of pointers where the context is saved
      and restored. This approach works as long as the MPIDR can be
      considered a linear index, so that the pointer array can simply be
      dereferenced using the MPIDR[7:0] value.
      On ARM multi-cluster systems, where the MPIDR may not be a linear
      index, a mapping function must be applied to it before it can be
      used for array look-ups, so that the stack pointer array is
      dereferenced properly.
      
      This patch adds code to the cpu_{suspend}/cpu_{resume}
      implementation that relies on a shift-and-OR hashing method to map
      an MPIDR value to a set of buckets precomputed at boot, yielding a
      collision-free mapping from MPIDR to context pointers.
      
      The hashing algorithm must be simple, fast, and implementable with
      few instructions, since in the cpu_resume path the mapping is
      carried out with the MMU and I-cache off, hence code and data are
      fetched from DRAM with no caching available. Simplicity is traded
      for a small increase in dynamically allocated memory for the stack
      pointer buckets, which should anyway be fairly limited on most
      systems.
      
      Memory for the context pointers is allocated in an early_initcall,
      with its size precomputed and stashed previously in kernel data
      structures. The memory is allocated through kmalloc; this
      guarantees contiguous physical addresses for the allocation, which
      is fundamental to the correct functioning of the resume mechanism,
      since it relies on the context pointer array being a chunk of
      contiguous physical memory. Virtual-to-physical address conversion
      for the context pointer array base is carried out at boot, to avoid
      fiddling with virt_to_phys conversions in the cpu_resume path,
      which is quite fragile and should be optimized to execute as few
      instructions as possible.
      Virtual and physical context pointer base array addresses are
      stashed in a struct that is accessible from assembly using values
      generated through the asm-offsets.c mechanism.
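      A sketch of the shape of that struct (field names are an
      assumption, not taken verbatim from the patch):

        /* Both addresses are stashed at boot so the MMU-off resume
         * path never has to translate on the fly. */
        struct sleep_save_sp {
                u32 *save_ptr_stash;       /* virtual base of the array */
                u32 save_ptr_stash_phys;   /* physical base, via virt_to_phys() */
        };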
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Colin Cross <ccross@android.com>
      Cc: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
      Cc: Amit Kucheria <amit.kucheria@linaro.org>
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Reviewed-by: Dave Martin <Dave.Martin@arm.com>
      Reviewed-by: Nicolas Pitre <nico@linaro.org>
      Tested-by: Shawn Guo <shawn.guo@linaro.org>
      Tested-by: Kevin Hilman <khilman@linaro.org>
      Tested-by: Stephen Warren <swarren@wwwdotorg.org>
    • ARM: kernel: build MPIDR hash function data structure · 8cf72172
      Committed by Lorenzo Pieralisi
      On ARM SMP systems, cores are identified by their MPIDR register.
      The MPIDR guidelines in the ARM ARM do not strictly enforce an
      MPIDR layout; they only provide recommendations that, if followed,
      split the MPIDR on 32-bit ARM platforms into three affinity levels.
      In multi-cluster systems like big.LITTLE, if the affinity
      guidelines are followed, the MPIDR can no longer be considered an
      index. This means that the association between a logical CPU in the
      kernel and the HW CPU identifier becomes somewhat more complicated,
      requiring methods like hashing to associate a given MPIDR with a
      CPU logical index, in order for the look-up to be carried out in an
      efficient and scalable way.
      
      This patch provides a kernel function that, starting from the
      cpu_logical_map, implements collision-free hashing of MPIDR values
      by checking all significant bits of the MPIDR affinity-level
      bitfields. The hashing can then be carried out through bit shifting
      and ORing; the resulting hash algorithm is collision-free, though
      not minimal, and can be executed with few assembly instructions.
      The MPIDR is filtered through an MPIDR mask that is built by
      checking all bits that toggle in the set of MPIDRs corresponding to
      possible CPUs. Bits that do not toggle carry no information, so
      they do not contribute to the resulting hash.
      
      Pseudo code:
      
      /* check all bits that toggle, so they are required */
      for (i = 1, mpidr_mask = 0; i < num_possible_cpus(); i++)
      	mpidr_mask |= (cpu_logical_map(i) ^ cpu_logical_map(0));
      
      /*
       * Build shifts to be applied to aff0, aff1, aff2 values to hash the mpidr
       * fls() returns the last bit set in a word, 0 if none
       * ffs() returns the first bit set in a word, 0 if none
       */
      fs0 = mpidr_mask[7:0] ? ffs(mpidr_mask[7:0]) - 1 : 0;
      fs1 = mpidr_mask[15:8] ? ffs(mpidr_mask[15:8]) - 1 : 0;
      fs2 = mpidr_mask[23:16] ? ffs(mpidr_mask[23:16]) - 1 : 0;
      ls0 = fls(mpidr_mask[7:0]);
      ls1 = fls(mpidr_mask[15:8]);
      ls2 = fls(mpidr_mask[23:16]);
      bits0 = ls0 - fs0;
      bits1 = ls1 - fs1;
      bits2 = ls2 - fs2;
      aff0_shift = fs0;
      aff1_shift = 8 + fs1 - bits0;
      aff2_shift = 16 + fs2 - (bits0 + bits1);
      u32 hash(u32 mpidr) {
      	u32 l0, l1, l2;
      	u32 mpidr_masked = mpidr & mpidr_mask;
      	l0 = mpidr_masked & 0xff;
      	l1 = mpidr_masked & 0xff00;
      	l2 = mpidr_masked & 0xff0000;
      	return (l0 >> aff0_shift | l1 >> aff1_shift | l2 >> aff2_shift);
      }
      
      The hashing algorithm relies on the inherent properties set in the ARM ARM
      recommendations for the MPIDR. Exotic configurations, where for instance the
      MPIDR values at a given affinity level have large holes, can end up requiring
      big hash tables since the compression of values that can be achieved through
      shifting is somewhat crippled when holes are present. The kernel warns if
      the number of buckets of the resulting hash table exceeds the number of
      possible CPUs by a factor of 4, which is a symptom of a very sparse HW
      MPIDR configuration.
      
      The hash algorithm is quite simple and can easily be implemented in
      assembly code, for use in code paths where the kernel virtual
      address space is not set up (i.e. cpu_resume) and instruction and
      data fetches are strongly ordered, so the code must be compact and
      must carry out few data accesses.
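      A self-contained C version of the pseudo-code above, with made-up
      example MPIDRs (illustrative only, not the kernel implementation):

        #include <stdio.h>

        static unsigned int mpidr_mask;
        static unsigned int aff0_shift, aff1_shift, aff2_shift;

        static int fls32(unsigned int x) { return x ? 32 - __builtin_clz(x) : 0; }
        static int ffs32(unsigned int x) { return x ? __builtin_ctz(x) + 1 : 0; }

        static void build_hash(const unsigned int *mpidrs, int n)
        {
                unsigned int m0, m1, m2;
                int fs0, fs1, fs2, bits0, bits1;

                mpidr_mask = 0;
                for (int i = 1; i < n; i++)   /* only toggling bits carry information */
                        mpidr_mask |= mpidrs[i] ^ mpidrs[0];

                m0 = mpidr_mask & 0xff;
                m1 = (mpidr_mask >> 8) & 0xff;
                m2 = (mpidr_mask >> 16) & 0xff;
                fs0 = m0 ? ffs32(m0) - 1 : 0;
                fs1 = m1 ? ffs32(m1) - 1 : 0;
                fs2 = m2 ? ffs32(m2) - 1 : 0;
                bits0 = fls32(m0) - fs0;
                bits1 = fls32(m1) - fs1;
                aff0_shift = fs0;
                aff1_shift = 8 + fs1 - bits0;
                aff2_shift = 16 + fs2 - (bits0 + bits1);
        }

        static unsigned int hash(unsigned int mpidr)
        {
                unsigned int m = mpidr & mpidr_mask;

                return (m & 0xff) >> aff0_shift |
                       (m & 0xff00) >> aff1_shift |
                       (m & 0xff0000) >> aff2_shift;
        }

        int main(void)
        {
                /* hypothetical big.LITTLE layout: cluster in aff1, cpu in aff0 */
                unsigned int mpidrs[] = { 0x000, 0x001, 0x100, 0x101 };

                build_hash(mpidrs, 4);
                for (int i = 0; i < 4; i++)   /* prints buckets 0, 1, 2, 3 */
                        printf("mpidr 0x%03x -> bucket %u\n", mpidrs[i], hash(mpidrs[i]));
                return 0;
        }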
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Colin Cross <ccross@android.com>
      Cc: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
      Cc: Amit Kucheria <amit.kucheria@linaro.org>
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Reviewed-by: Dave Martin <Dave.Martin@arm.com>
      Reviewed-by: Nicolas Pitre <nico@linaro.org>
      Tested-by: Shawn Guo <shawn.guo@linaro.org>
      Tested-by: Kevin Hilman <khilman@linaro.org>
      Tested-by: Stephen Warren <swarren@wwwdotorg.org>
    • perf: arm64: Record the user-mode PC in the call chain. · abc41254
      Committed by Jed Davis
      With this change, we no longer lose the innermost entry in the user-mode
      part of the call chain.  See also the x86 port, which includes the ip,
      and the corresponding change in arch/arm.
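      The essence of the change, as a fragment-level sketch (assuming
      the arm64 perf_callchain_user() walker; not the verbatim diff):

        /* Record the interrupted user PC first, so the innermost
         * user-mode frame is not lost... */
        perf_callchain_store(entry, regs->pc);
        /* ...then walk the user stack frames as before. */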
      Signed-off-by: Jed Davis <jld@mozilla.com>
      Acked-by: Ingo Molnar <mingo@kernel.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • powerpc: Fix bad pmd error with book3E config · 8bbd9f04
      Committed by Aneesh Kumar K.V
      Book3E uses the hugepd at the PMD level and does not encode a pte
      directly at the pmd level. So it will find the lower bits of the
      pmd set, and the pmd_bad check throws an error. In fact, the
      current code will never take the free_hugepd_range call at all,
      because it clears the pmd if it finds a hugepd pointer. Fix this by
      clearing a bad pmd only if it is not a hugepd pointer.
      
      This is a regression introduced by e2b3d202
      ("powerpc: Switch 16GB and 16MB explicit hugepages to a different page table format").
      Reported-by: Scott Wood <scottwood@freescale.com>
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • x86: Fix section mismatch on load_ucode_ap · 94978599
      Committed by Paul Gortmaker
      We are in the process of removing all the __cpuinit annotations.
      While working on that change, an existing problem became
      evident:
      
        WARNING: arch/x86/kernel/built-in.o(.text+0x198f2): Section mismatch
        in reference from the function cpu_init() to the function
        .init.text:load_ucode_ap()
        The function cpu_init() references the function __init
        load_ucode_ap().  This is often because cpu_init lacks a __init
        annotation or the annotation of load_ucode_ap is wrong.
      
      This now appears because in my working tree, cpu_init() is no
      longer tagged as __cpuinit, and so the audit picks up the mismatch.
      The second hypothesis from the audit is the correct one: there was
      an incorrect __init tag on the prototype in the header (while
      __cpuinit was used on the function itself).
      
      The audit is telling us that the prototype's __init annotation took
      effect and the function did land in the .init.text section.  Checking
      with objdump on a mainline tree that still has __cpuinit shows that
      the __cpuinit on the function takes precedence over the __init on the
      prototype, but that won't be true once we make __cpuinit a no-op.
      
      Even though we are removing __cpuinit, we temporarily align both
      the function and the prototype on __cpuinit so that the changeset
      can be applied to stable trees if desired.
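      A minimal demo of the underlying behaviour (stub macro and made-up
      file; compile with gcc -c and inspect with objdump -t):

        /* An attribute on the declaration is inherited by the
         * definition, so the prototype's annotation decides which
         * section the function lands in. */
        #define __init __attribute__((__section__(".init.text")))

        extern void __init load_ucode_ap(void);  /* annotation on the prototype... */

        void load_ucode_ap(void) { }             /* ...puts this in .init.text */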
      
      [ hpa: build fix only, no object code change ]
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: stable <stable@vger.kernel.org> # 3.9+
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      Link: http://lkml.kernel.org/r/1371654926-11729-1-git-send-email-paul.gortmaker@windriver.com
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
    • mn10300: Fix include dependency in irqflags.h et al. · c0691143
      Committed by David Daney
      We need to pick up the definition of raw_smp_processor_id() from
      asm/smp.h.  For the !SMP case, we need to supply a definition of
      raw_smp_processor_id().
      
      Because of the include dependencies we cannot use smp_call_func_t
      in asm/smp.h, but we do need linux/thread_info.h.
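      For the !SMP case, the customary fallback has this shape (a sketch
      of the usual UP definition, as in linux/smp.h):

        #ifndef CONFIG_SMP
        #define raw_smp_processor_id() 0  /* uniprocessor: always CPU 0 */
        #endif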
      Signed-off-by: David Daney <david.daney@cavium.com>
      Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Acked-by: David Howells <dhowells@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • metag: fix mm/hugetlb.c build breakage · 418a133b
      Committed by James Hogan
      Commit 106c992a ("mm/hugetlb: add more arch-defined huge_pte
      functions") added an include of <asm-generic/hugetlb.h> to each
      architecture's <asm/hugetlb.h> (except s390). Unfortunately metag
      was missed, which resulted in build errors when hugetlbfs is
      enabled (see below).
      
      Add the include for metag too to fix the build errors:
      
        mm/hugetlb.c In function 'make_huge_pte':
        mm/hugetlb.c +2250 : error: implicit declaration of function 'huge_pte_mkwrite'
        mm/hugetlb.c +2250 : error: implicit declaration of function 'huge_pte_mkdirty'
        ...
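      The one-line shape of the fix, per the commit description (the
      exact header path is an assumption):

        /* arch/metag/include/asm/hugetlb.h */
        #include <asm-generic/hugetlb.h>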
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  6. 19 June 2013: 15 commits