1. 03 Nov 2014 (3 commits)
    • s390/cpum_sf: Remove initialization of PMU event index · eaf785d5
      Committed by Hendrik Brueckner
      Git commit c719f560
      ("perf: Fix and clean up initialization of pmu::event_idx") removed
      the PMU event index callback for all architectures but x86, so
      remove the initialization of the event index here as well.
      Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
    • s390/signal: add sparse annotations · 37d2cd9d
      Committed by Martin Schwidefsky
      Fix the following warnings from the sparse code checker:
      
      arch/s390/kernel/signal.c:374:38: warning: cast removes address space of expression
      arch/s390/kernel/signal.c:374:65: warning: incorrect type in initializer (different address spaces)
      arch/s390/kernel/signal.c:374:65:    expected unsigned short [noderef] [usertype] <asn:1>*svc
      arch/s390/kernel/signal.c:374:65:    got void *
      
      arch/s390/kernel/compat_signal.c:437:38: warning: cast removes address space of expression
      arch/s390/kernel/compat_signal.c:437:65: warning: incorrect type in initializer (different address spaces)
      arch/s390/kernel/compat_signal.c:437:65:    expected unsigned short [noderef] [usertype] <asn:1>*svc
      arch/s390/kernel/compat_signal.c:437:65:    got void *
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
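
      A minimal sketch of the kind of change these warnings call for (names
      are illustrative, not the actual signal.c code): the cast must carry
      the __user address-space annotation instead of stripping it.

      	/* before: the initializer yields a plain (void *), which loses
      	 * the address space and triggers the sparse warnings above */
      	/* after: keep the __user annotation through the cast */
      	static unsigned short __user *svc_address(struct pt_regs *regs)
      	{
      		return (unsigned short __user *) regs->psw.addr;
      	}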
    • s390/cmpxchg: use compiler builtins · f318a122
      Committed by Martin Schwidefsky
      The kernel build for s390 fails with gcc compilers of version 3.x,
      so set the minimum required gcc version to 4.3.
      
      As the atomic builtins are available with all gcc 4.x compilers,
      use the __sync_val_compare_and_swap and __sync_bool_compare_and_swap
      functions to replace the complex macro and inline assembler magic
      in include/asm/cmpxchg.h. The compiler can just do it and generates
      better code with the builtins.
      
      While we are at it, use __sync_bool_compare_and_swap for the
      _raw_compare_and_swap function in the spinlock code as well.
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
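
      A user-space sketch of the two builtins (just their semantics, not the
      kernel's cmpxchg.h; gcc compiles each to a single compare-and-swap
      instruction on s390):

      	#include <stdio.h>

      	int main(void)
      	{
      		int lock = 0;

      		/* returns the previous value; the swap happened iff old == 0 */
      		int old = __sync_val_compare_and_swap(&lock, 0, 1);

      		/* returns whether the swap happened - the shape used for
      		 * _raw_compare_and_swap in the spinlock code */
      		int ok = __sync_bool_compare_and_swap(&lock, 1, 2);

      		/* prints: old=0 ok=1 lock=2 */
      		printf("old=%d ok=%d lock=%d\n", old, ok, lock);
      		return 0;
      	}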
  2. 27 Oct 2014 (4 commits)
    • s390/kprobes: make use of NOKPROBE_SYMBOL() · 7a5388de
      Committed by Heiko Carstens
      Use NOKPROBE_SYMBOL() instead of the __kprobes annotation.
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
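
      The pattern of the change on a hypothetical function (handle_trap is
      made up for illustration):

      	#include <linux/kprobes.h>

      	/* before: static int __kprobes handle_trap(struct pt_regs *regs) */

      	/* after: plain definition plus an explicit blacklist entry */
      	static int handle_trap(struct pt_regs *regs)
      	{
      		return 0;	/* code that must never be probed itself */
      	}
      	NOKPROBE_SYMBOL(handle_trap);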
    • s390/ftrace,kprobes: allow to patch first instruction · c933146a
      Committed by Heiko Carstens
      If the function tracer is enabled, allow setting kprobes on the first
      instruction of a function (which is the function trace caller):
      
      If no kprobe is set, enabling and disabling function tracing of a
      function simply patches the first instruction. It is either a nop
      (currently an unconditional branch that skips the mcount block)
      or a branch to the ftrace_caller() function.
      
      If a kprobe is placed on a function tracer calling instruction, we
      encode in the remaining bytes after the breakpoint instruction (an
      illegal opcode) whether the original instruction was a nop or a branch.
      This is possible because the instruction used for the nop and the
      branch is six bytes long, while the breakpoint is only two bytes.
      The first two bytes therefore contain the illegal opcode, and the last
      four bytes contain either "0" for nop or "1" for branch. The kprobes
      code then executes/simulates the correct instruction.
      
      Instruction patching for kprobes and the function tracer is always
      done with stop_machine(), so there are no races where an instruction
      is patched concurrently on a different cpu.
      Besides that, the program check handler, which executes the function
      trace caller instruction, is never executed concurrently with any
      stop_machine() execution.
      
      This allows us to keep the full fault-based kprobes handling, which
      generates correct pt_regs contents automatically.
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
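
      The six-byte layout described above, sketched as a struct (the struct
      and field names are assumptions for illustration):

      	#include <linux/types.h>

      	/* a kprobed ftrace call site: a 2-byte breakpoint, leaving
      	 * 4 bytes to remember what the patched instruction was */
      	struct kprobed_ftrace_insn {
      		u16 opc;	/* the illegal opcode used as breakpoint */
      		u32 orig;	/* 0: nop, 1: branch to ftrace_caller() */
      	} __packed;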
    • s390/vdso: fix stack corruption · 9b2efe03
      Committed by Heiko Carstens
      The kernel-provided vdso functions do not get a stack frame from the
      calling function and therefore may not change the stack contents,
      unless they allocate space on their own.
      
      This problem was exposed by 070b7be6 ("s390/vdso: replace stck with
      stcke"), which writes 16 bytes instead of 8 into the stack frame. The
      additional 8 bytes, however, were in use by the caller (glibc) to save
      data, so that data was corrupted by the vdso code.
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
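
      The size mismatch at the core of the bug, sketched in kernel-style
      inline assembly (mirroring the approach used in
      arch/s390/include/asm/timex.h; treat the exact form as an assumption):

      	static inline void store_clocks(char *clk8, char *clk16)
      	{
      		typedef struct { char _[8]; } tod_t;
      		typedef struct { char _[16]; } tod_ext_t;

      		/* STCK stores 8 bytes */
      		asm volatile("stck %0" : "=Q" (*(tod_t *) clk8) : : "cc");
      		/* STCKE stores 16 bytes - the vdso must reserve all 16,
      		 * or the extra 8 clobber the caller's stack data */
      		asm volatile("stcke %0" : "=Q" (*(tod_ext_t *) clk16) : : "cc");
      	}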
    • s390/time: use stck clock fast for do_account_vtime · 1f759bb3
      Committed by Martin Schwidefsky
      The last high-frequency call site of the STCK instruction is
      do_account_vtime. Replace it with the faster STCKF instruction.
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
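
      A sketch of the fast variant (close to the kernel's
      get_tod_clock_fast() in arch/s390/include/asm/timex.h, though the
      exact form here is an assumption):

      	static inline unsigned long long get_tod_clock_fast(void)
      	{
      		unsigned long long clk;

      		/* STCKF skips the serialization STCK performs, making it
      		 * cheaper on hot paths such as do_account_vtime() */
      		asm volatile("stckf %0" : "=Q" (clk) : : "cc");
      		return clk;
      	}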
  3. 17 Oct 2014 (2 commits)
  4. 10 Oct 2014 (1 commit)
  5. 09 Oct 2014 (8 commits)
  6. 26 Sep 2014 (2 commits)
  7. 25 Sep 2014 (5 commits)
  8. 24 Sep 2014 (1 commit)
    • ARCH: AUDIT: audit_syscall_entry() should not require the arch · 91397401
      Committed by Eric Paris
      We have a function through which the arch can be queried:
      syscall_get_arch(). So rather than have every single piece of
      arch-specific code use and/or duplicate syscall_get_arch(), just have
      the audit code call syscall_get_arch() itself.
      Based-on-patch-by: Richard Briggs <rgb@redhat.com>
      Signed-off-by: Eric Paris <eparis@redhat.com>
      Cc: linux-alpha@vger.kernel.org
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: linux-ia64@vger.kernel.org
      Cc: microblaze-uclinux@itee.uq.edu.au
      Cc: linux-mips@linux-mips.org
      Cc: linux@lists.openrisc.net
      Cc: linux-parisc@vger.kernel.org
      Cc: linuxppc-dev@lists.ozlabs.org
      Cc: linux-s390@vger.kernel.org
      Cc: linux-sh@vger.kernel.org
      Cc: sparclinux@vger.kernel.org
      Cc: user-mode-linux-devel@lists.sourceforge.net
      Cc: linux-xtensa@linux-xtensa.org
      Cc: x86@kernel.org
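
      The effect on the entry hook, sketched (the wrapper shape is an
      assumption based on the description above):

      	/*
      	 * before: each arch passed its own constant, e.g.
      	 *     audit_syscall_entry(AUDIT_ARCH_S390X, major, a0, a1, a2, a3);
      	 * after: the arch argument is gone and the audit code asks for it
      	 */
      	static inline void audit_syscall_entry(int major, unsigned long a0,
      					       unsigned long a1, unsigned long a2,
      					       unsigned long a3)
      	{
      		__audit_syscall_entry(syscall_get_arch(), major, a0, a1, a2, a3);
      	}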
  9. 09 Sep 2014 (8 commits)
  10. 04 Sep 2014 (1 commit)
    • seccomp,x86,arm,mips,s390: Remove nr parameter from secure_computing · a4412fc9
      Committed by Andy Lutomirski
      The secure_computing function took a syscall number parameter, but
      it only paid any attention to that parameter if seccomp mode 1 was
      enabled.  Rather than coming up with a kludge to get the parameter
      to work in mode 2, just remove the parameter.
      
      To avoid churn in arches that don't have seccomp filters (and may
      not even support syscall_get_nr right now), this leaves the
      parameter in secure_computing_strict, which is now a real function.
      
      For ARM, this is a bit ugly due to the fact that ARM conditionally
      supports seccomp filters.  Fixing that would probably only be a
      couple of lines of code, but it should be coordinated with the audit
      maintainers.
      
      This will be a slight slowdown on some arches.  The right fix is to
      pass in all of seccomp_data instead of trying to make just the
      syscall nr part be fast.
      
      This is a prerequisite for making two-phase seccomp work cleanly.
      
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: linux-s390@vger.kernel.org
      Cc: x86@kernel.org
      Cc: Kees Cook <keescook@chromium.org>
      Signed-off-by: Andy Lutomirski <luto@amacapital.net>
      Signed-off-by: Kees Cook <keescook@chromium.org>
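
      The resulting split, sketched (the exact guard conditions are
      assumptions):

      	/* filter-capable arches: the syscall number parameter is gone;
      	 * the seccomp code fetches it itself when needed */
      	static inline int secure_computing(void)
      	{
      		if (unlikely(test_thread_flag(TIF_SECCOMP)))
      			return __secure_computing();
      		return 0;
      	}

      	/* mode-1-only arches keep the parameter; now a real function */
      	extern void secure_computing_strict(int this_syscall);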
  11. 01 Sep 2014 (2 commits)
  12. 27 Aug 2014 (1 commit)
    • s390: Replace __get_cpu_var uses · eb7e7d76
      Committed by Christoph Lameter
      __get_cpu_var() is used for multiple purposes in the kernel source. One of
      them is address calculation via the form &__get_cpu_var(x).  This calculates
      the address for the instance of the percpu variable of the current processor
      based on an offset.
      
      Other use cases are for storing and retrieving data from the current
      processor's percpu area. __get_cpu_var() can be used as an lvalue
      when writing data or on the right side of an assignment.
      
      __get_cpu_var() is defined as:
      
      #define __get_cpu_var(var) (*this_cpu_ptr(&(var)))
      
      __get_cpu_var() only ever does an address determination. However,
      store and retrieve operations could use a segment prefix (or a global
      register on other platforms) to avoid the address calculation.
      
      this_cpu_write() and this_cpu_read() can directly take an offset into a
      percpu area and use optimized assembly code to read and write per cpu
      variables.
      
      This patch converts __get_cpu_var into either an explicit address
      calculation using this_cpu_ptr() or into a use of this_cpu operations
      that use the offset. Thereby address calculations are avoided and
      fewer registers are used when code is generated.
      
      At the end of the patch set all uses of __get_cpu_var have been
      removed, so the macro is removed too.
      
      The patch set includes passes over all arches as well. Once these
      operations are used throughout, specialized macros can be defined in
      non-x86 arches as well in order to optimize per cpu access, e.g. by
      using a global register that may be set to the per cpu base.
      
      Transformations done to __get_cpu_var()
      
      1. Determine the address of the percpu instance of the current processor.
      
      	DEFINE_PER_CPU(int, y);
      	int *x = &__get_cpu_var(y);
      
          Converts to
      
      	int *x = this_cpu_ptr(&y);
      
      2. Same as #1 but this time an array structure is involved.
      
      	DEFINE_PER_CPU(int, y[20]);
      	int *x = __get_cpu_var(y);
      
          Converts to
      
      	int *x = this_cpu_ptr(y);
      
      3. Retrieve the content of the current processors instance of a per cpu
      variable.
      
      	DEFINE_PER_CPU(int, y);
      	int x = __get_cpu_var(y)
      
         Converts to
      
      	int x = __this_cpu_read(y);
      
      4. Retrieve the content of a percpu struct
      
      	DEFINE_PER_CPU(struct mystruct, y);
      	struct mystruct x = __get_cpu_var(y);
      
         Converts to
      
      	memcpy(&x, this_cpu_ptr(&y), sizeof(x));
      
      5. Assignment to a per cpu variable
      
      	DEFINE_PER_CPU(int, y)
      	__get_cpu_var(y) = x;
      
         Converts to
      
      	this_cpu_write(y, x);
      
      6. Increment/Decrement etc of a per cpu variable
      
      	DEFINE_PER_CPU(int, y);
      	__get_cpu_var(y)++
      
         Converts to
      
      	this_cpu_inc(y)
      
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      CC: linux390@de.ibm.com
      Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Christoph Lameter <cl@linux.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
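
      Putting transformations #3 and #6 together on a hypothetical percpu
      counter:

      	#include <linux/percpu.h>

      	DEFINE_PER_CPU(int, hit_count);

      	static void record_hit(void)
      	{
      		/* old: __get_cpu_var(hit_count)++; */
      		this_cpu_inc(hit_count);	/* offset-based, no address calc */
      	}

      	static int read_hits(void)
      	{
      		/* old: return __get_cpu_var(hit_count); */
      		return __this_cpu_read(hit_count);
      	}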
  13. 12 Aug 2014 (2 commits)