1. 29 April 2015, 1 commit
    • tile: modify arch_spin_unlock_wait() semantics · 14c3dec2
      Committed by Chris Metcalf
      Rather than trying to wait until all possible lockers have
      unlocked the lock, we now only wait until the current locker
      (if any) has released the lock.
      
      The old code was correct, but the new code works more like the x86
      code and thus hopefully is more appropriate under contention.
      See commit 78bff1c8 ("x86/ticketlock: Fix spin_unlock_wait()
      livelock") for x86.
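      
      A minimal C sketch of the new semantics for a ticket-style lock
      (field names, READ_ONCE() and cpu_relax() usage are illustrative,
      not the actual tile code):
      
        struct ticket_lock {
                unsigned short head;    /* ticket currently being served */
                unsigned short tail;    /* next ticket to be handed out */
        };
      
        static void spin_unlock_wait_sketch(struct ticket_lock *lock)
        {
                unsigned short start = READ_ONCE(lock->head);
      
                /* Not locked right now: nothing to wait for. */
                if (start == READ_ONCE(lock->tail))
                        return;
      
                /* Wait only until the current holder releases the lock,
                 * i.e. until head moves past the value we sampled;
                 * later lockers are deliberately ignored. */
                while (READ_ONCE(lock->head) == start)
                        cpu_relax();
        }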
      Signed-off-by: Chris Metcalf <cmetcalf@ezchip.com>
  2. 27 April 2015, 1 commit
  3. 24 April 2015, 6 commits
    • x86: fix special __probe_kernel_write() tail zeroing case · d869844b
      Committed by Linus Torvalds
      Commit cae2a173 ("x86: clean up/fix 'copy_in_user()' tail zeroing")
      fixed the failure case tail zeroing of one special case of the x86-64
      generic user-copy routine, namely when used for the user-to-user case
      ("copy_in_user()").
      
      But in the process it broke an even more unusual case: using the user
      copy routine for kernel-to-kernel copying.
      
      Now, normally kernel-kernel copies are obviously done using memcpy(),
      but we have a couple of special cases when we use the user-copy
      functions.  One is when we pass a kernel buffer to a regular user-buffer
      routine, using set_fs(KERNEL_DS).  That's a "normal" case, and continued
      to work fine, because it never takes any faults (with the possible
      exception of a silent and successful vmalloc fault).
      
      But Jan Beulich pointed out another, very unusual, special case: when we
      use the user-copy routines not because it's a path that expects a user
      pointer, but for a couple of ftrace/kgdb cases that want to do a kernel
      copy, but do so using "unsafe" buffers, and use the user-copy routine to
      gracefully handle faults.  IOW, for probe_kernel_write().
      
      And that broke for the case of a faulting kernel destination, because we
      saw the kernel destination and wanted to try to clear the tail of the
      buffer.  Which doesn't work, since that's what faults.
      
      This only triggers for things like kgdb and ftrace users (e.g. trying
      to set a breakpoint on read-only memory), but it's definitely a bug.
      The fix is to not compare against the kernel address start (TASK_SIZE),
      but instead use the same limits "access_ok()" uses.
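      
      Roughly, the tail-clearing guard in the user-copy tail handler
      changes like this (simplified sketch; treat the helper names as
      approximations of the x86 uaccess limit machinery):
      
        /* 'to' is the destination, 'len' the bytes left uncopied after
         * a fault. */
      
        /* Before: anything at or above TASK_SIZE_MAX was assumed to be
         * a kernel buffer and always cleared -- which itself faults for
         * probe_kernel_write()'s bad destinations:
         *
         *      if ((unsigned long)to >= TASK_SIZE_MAX)
         *              memset(to, 0, len);
         */
      
        /* After: use the same limit access_ok() consults.  Under the
         * normal USER_DS limit a kernel destination still gets its tail
         * zeroed, but under set_fs(KERNEL_DS) -- the probe_kernel_write()
         * case -- the possibly-faulting destination is left alone. */
        if ((unsigned long)to >= user_addr_max())
                memset(to, 0, len);
        return len;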
      Reported-and-tested-by: Jan Beulich <jbeulich@suse.com>
      Cc: stable@vger.kernel.org # 4.0
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • crypto: x86/sha512_ssse3 - fixup for asm function prototype change · 00425bb1
      Committed by Ard Biesheuvel
      Patch e68410eb ("crypto: x86/sha512_ssse3 - move SHA-384/512
      SSSE3 implementation to base layer") changed the prototypes of the
      core asm SHA-512 implementations so that they are compatible with
      the prototype used by the base layer.
      
      However, in one instance, the register that was used for passing the
      input buffer was reused as a scratch register later on in the code.
      Since the input buffer param changed places with the digest param
      (which needs to be written back before the function returns), this
      resulted in the scratch register being dereferenced in a memory write
      operation, causing a GPF.
      
      Fix this by changing the scratch register to use the same register as
      the input buffer param again.
      
      Fixes: e68410eb ("crypto: x86/sha512_ssse3 - move SHA-384/512 SSSE3 implementation to base layer")
      Reported-by: Bobby Powers <bobbypowers@gmail.com>
      Tested-by: Bobby Powers <bobbypowers@gmail.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • nios2: rework cache · 1a70db49
      Committed by Ley Foon Tan
      - flush the dcache before flushing the instruction cache (see the sketch below)
      - rework update_mmu_cache and flush_dcache_page
      - add shmparam.h
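      
      A minimal sketch of the flush ordering above (the helper names are
      assumptions about the nios2 internals, not the exact functions):
      
        static void flush_cache_range_sketch(unsigned long start, unsigned long end)
        {
                /* Write back the data cache first so instruction fetches
                 * see the new bytes ... */
                __flush_dcache(start, end);     /* assumed helper */
                /* ... and only then invalidate the instruction cache. */
                __flush_icache(start, end);     /* assumed helper */
        }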
      Signed-off-by: Ley Foon Tan <lftan@altera.com>
    • nios2: Add types.h header required for __u32 type · 2009337e
      Committed by Ezequiel Garcia
      Reported by the header checker (CONFIG_HEADERS_CHECK=y):
      
        CHECK   usr/include/asm/ (31 files)
      ./usr/include/asm/ptrace.h:77: found __[us]{8,16,32,64} type without #include <linux/types.h>
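      
      The fix is to include the UAPI types header ahead of the __u32 uses
      in the exported header; an illustrative excerpt (not the full file):
      
        /* arch/nios2/include/uapi/asm/ptrace.h (sketch) */
        #include <linux/types.h>
      
        struct user_pt_regs {
                __u32 regs[49];         /* array size shown for illustration */
        };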
      Signed-off-by: Ezequiel Garcia <ezequiel@vanguardiasur.com.ar>
      Acked-by: Ley Foon Tan <lftan@altera.com>
    • d91e14b3
    • blackfin: Wire up missing syscalls · 4f650a59
      Committed by Chen Gang
      The syscalls below are not wired up, which breaks the samples/kdbus
      build in the next-20150401 tree; the related warnings and error
      (a wiring sketch follows the log):
      
          CALL    scripts/checksyscalls.sh
        <stdin>:1223:2: warning: #warning syscall kcmp not implemented [-Wcpp]
        <stdin>:1226:2: warning: #warning syscall finit_module not implemented [-Wcpp]
        <stdin>:1229:2: warning: #warning syscall sched_setattr not implemented [-Wcpp]
        <stdin>:1232:2: warning: #warning syscall sched_getattr not implemented [-Wcpp]
        <stdin>:1235:2: warning: #warning syscall renameat2 not implemented [-Wcpp]
        <stdin>:1238:2: warning: #warning syscall seccomp not implemented [-Wcpp]
        <stdin>:1241:2: warning: #warning syscall getrandom not implemented [-Wcpp]
        <stdin>:1244:2: warning: #warning syscall memfd_create not implemented [-Wcpp]
        <stdin>:1247:2: warning: #warning syscall bpf not implemented [-Wcpp]
        <stdin>:1250:2: warning: #warning syscall execveat not implemented [-Wcpp]
        [...]
          HOSTCC  samples/kdbus/kdbus-workers
        samples/kdbus/kdbus-workers.c: In function ‘prime_new’:
        samples/kdbus/kdbus-workers.c:930:18: error: ‘__NR_memfd_create’ undeclared (first use in this function)
          p->fd = syscall(__NR_memfd_create, "prime-area", MFD_CLOEXEC);
                          ^
        samples/kdbus/kdbus-workers.c:930:18: note: each undeclared identifier is reported only once for each function it appears in
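      
      Wiring up a syscall on blackfin means adding its __NR_* number to the
      UAPI unistd.h and a matching entry in the syscall table; a sketch for
      one of them (the number is illustrative, not the real assignment):
      
        /* arch/blackfin/include/uapi/asm/unistd.h (sketch) */
        #define __NR_memfd_create       390     /* illustrative number */
      
        /* arch/blackfin/mach-common/entry.S then gains the matching slot:
         *      .long _sys_memfd_create
         */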
      Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
  4. 23 April 2015, 17 commits
  5. 22 April 2015, 8 commits
  6. 21 April 2015, 7 commits
    • ARM: 8344/1: VDSO: honor CONFIG_VDSO in Makefile · f80f6531
      Committed by Nathan Lynch
      When CONFIG_VDSO=n, the build normally does not enter arch/arm/vdso/
      because arch/arm/Makefile does not add it to core-y.
      
      However, if the user runs 'make arch/arm/vdso/' the VDSO targets will
      get visited.  This is because the VDSO Makefile itself does not
      consider the value of CONFIG_VDSO.
      
      It is arguably better and more consistent behavior to generate an
      empty built-in.o when CONFIG_VDSO=n and the user attempts to build
      arch/arm/vdso/.  It's nicer because it doesn't try to build things
      that Kconfig dependencies are there to prevent (e.g. the dependency on
      AEABI), and it's less confusing than building objects that won't be
      used in the final image.
      Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • ARM: 8343/1: VDSO: add build artifacts to .gitignore · 2b507a2d
      Committed by Nathan Lynch
      vdsomunge and vdso.so.raw are outputs that don't get matched by the
      normal ignore rules.
      Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • ARM: Fix nommu booting · 0a9024e8
      Committed by Russell King
      Commit bf35706f ("ARM: 8314/1: replace PROCINFO embedded branch with
      relative offset") broke booting on nommu platforms as it didn't update
      the nommu boot code.  This patch fixes that oversight.
      
      Fixes: bf35706f ("ARM: 8314/1: replace PROCINFO embedded branch with relative offset")
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
    • KVM: PPC: Book3S HV: Use msgsnd for signalling threads on POWER8 · 66feed61
      Committed by Paul Mackerras
      This uses msgsnd where possible for signalling other threads within
      the same core on POWER8 systems, rather than IPIs through the XICS
      interrupt controller.  This includes waking secondary threads to run
      the guest, the interrupts generated by the virtual XICS, and the
      interrupts to bring the other threads out of the guest when exiting.
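      
      A rough sketch of the core-local doorbell path (helper and macro
      names are recalled from the powerpc code of that era and should be
      treated as assumptions):
      
        static void rm_send_ipi_sketch(int cpu)
        {
                /* Target is a sibling thread on the same POWER8 core:
                 * send a directed hypervisor doorbell with msgsnd. */
                if (cpu_has_feature(CPU_FTR_ARCH_207S) &&
                    cpu_first_thread_sibling(cpu) ==
                    cpu_first_thread_sibling(raw_smp_processor_id())) {
                        unsigned long msg = PPC_DBELL_TYPE(PPC_DBELL_SERVER) |
                                            cpu_thread_in_core(cpu);
                        smp_mb();
                        __asm__ __volatile__ (PPC_MSGSND(%0) : : "r" (msg));
                        return;
                }
      
                /* Different core: fall back to an IPI through the XICS
                 * presentation controller (a write to the target's MFRR). */
                xics_ipi_sketch(cpu);           /* hypothetical fallback */
        }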
      
      Aggregated statistics from debugfs across vcpus for a guest with 32
      vcpus, 8 threads/vcore, running on a POWER8, show this before the
      change:
      
       rm_entry:     3387.6ns (228 - 86600, 1008969 samples)
        rm_exit:     4561.5ns (12 - 3477452, 1009402 samples)
        rm_intr:     1660.0ns (12 - 553050, 3600051 samples)
      
      and this after the change:
      
       rm_entry:     3060.1ns (212 - 65138, 953873 samples)
        rm_exit:     4244.1ns (12 - 9693408, 954331 samples)
        rm_intr:     1342.3ns (12 - 1104718, 3405326 samples)
      
      for a test of booting Fedora 20 big-endian to the login prompt.
      
      The time taken for a H_PROD hcall (which is handled in the host
      kernel) went down from about 35 microseconds to about 16 microseconds
      with this change.
      
      The noinline added to kvmppc_run_core turned out to be necessary for
      good performance, at least with gcc 4.9.2 as packaged with Fedora 21
      and a little-endian POWER8 host.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S HV: Translate kvmhv_commence_exit to C · eddb60fb
      Committed by Paul Mackerras
      This replaces the assembler code for kvmhv_commence_exit() with C code
      in book3s_hv_builtin.c.  It also moves the IPI sending code that was
      in book3s_hv_rm_xics.c into a new kvmhv_rm_send_ipi() function so it
      can be used by kvmhv_commence_exit() as well as icp_rm_set_vcpu_irq().
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S HV: Streamline guest entry and exit · 6af27c84
      Committed by Paul Mackerras
      On entry to the guest, secondary threads now wait for the primary to
      switch the MMU after loading up most of their state, rather than before.
      This means that the secondary threads get into the guest sooner, in the
      common case where the secondary threads get to kvmppc_hv_entry before
      the primary thread.
      
      On exit, the first thread out increments the exit count and interrupts
      the other threads (to get them out of the guest) before saving most
      of its state, rather than after.  That means that the other threads
      exit sooner and means that the first thread doesn't spend so much
      time waiting for the other threads at the point where the MMU gets
      switched back to the host.
      
      This pulls out the code that increments the exit count and interrupts
      other threads into a separate function, kvmhv_commence_exit().
      This also makes sure that r12 and vcpu->arch.trap are set correctly
      in some corner cases.
      
      Statistics from /sys/kernel/debug/kvm/vm*/vcpu*/timings show the
      improvement.  Aggregating across vcpus for a guest with 32 vcpus,
      8 threads/vcore, running on a POWER8, gives this before the change:
      
       rm_entry:     avg 4537.3ns (222 - 48444, 1068878 samples)
        rm_exit:     avg 4787.6ns (152 - 165490, 1010717 samples)
        rm_intr:     avg 1673.6ns (12 - 341304, 3818691 samples)
      
      and this after the change:
      
       rm_entry:     avg 3427.7ns (232 - 68150, 1118921 samples)
        rm_exit:     avg 4716.0ns (12 - 150720, 1119477 samples)
        rm_intr:     avg 1614.8ns (12 - 522436, 3850432 samples)
      
      showing a substantial reduction in the time spent per guest entry in
      the real-mode guest entry code, and smaller reductions in the real
      mode guest exit and interrupt handling times.  (The test was to start
      the guest and boot Fedora 20 big-endian to the login prompt.)
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>
    • KVM: PPC: Book3S HV: Use bitmap of active threads rather than count · 7d6c40da
      Committed by Paul Mackerras
      Currently, the entry_exit_count field in the kvmppc_vcore struct
      contains two 8-bit counts, one of the threads that have started entering
      the guest, and one of the threads that have started exiting the guest.
      This changes it to an entry_exit_map field which contains two bitmaps
      of 8 bits each.  The advantage of doing this is that it gives us a
      bitmap of which threads need to be signalled when exiting the guest.
      That means that we no longer need to use the trick of setting the
      HDEC to 0 to pull the other threads out of the guest, which led in
      some cases to a spurious HDEC interrupt on the next guest entry.
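      
      A small sketch of why a map is more useful than a count here (the
      byte packing is an assumption based on the description above):
      
        /* entry_exit_map: low byte = bitmap of threads that have entered
         * the guest, next byte = bitmap of threads that started exiting. */
        static u8 threads_to_signal_sketch(u32 entry_exit_map)
        {
                u8 entered = entry_exit_map & 0xff;
                u8 exiting = (entry_exit_map >> 8) & 0xff;
      
                /* Threads still in the guest are exactly the ones that
                 * must be signalled when the exit begins. */
                return entered & ~exiting;
        }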
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Alexander Graf <agraf@suse.de>