1. 01 Oct 2016, 4 commits
  2. 14 Sep 2016, 1 commit
    • ARC: uaccess: get_user to zero out dest in case of fault · 05d9d0b9
      Vineet Gupta committed
      Al reported a potential issue with ARC get_user(): it wasn't zeroing
      out the destination in case of a fault due to a bad address etc.
      
      Verified using the following test:
      
      | {
      |  	u32 bogus1 = 0xdeadbeef;
      |	u64 bogus2 = 0xdead;
      |	int rc1, rc2;
      |
      |  	pr_info("Orig values %x %llx\n", bogus1, bogus2);
      |	rc1 = get_user(bogus1, (u32 __user *)0x40000000);
      |	rc2 = get_user(bogus2, (u64 __user *)0x50000000);
      |	pr_info("access %d %d, new values %x %llx\n",
      |		rc1, rc2, bogus1, bogus2);
      | }
      
      | [ARCLinux]# insmod /mnt/kernel-module/qtn.ko
      | Orig values deadbeef dead
      | access -14 -14, new values 0 0
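
      The required semantics, sketched in plain C on top of copy_from_user()
      rather than the ARC inline asm the patch actually touches
      (sketch_get_user_u32() is a hypothetical helper):

          #include <linux/uaccess.h>

          /* On a faulting user address, return -EFAULT *and* zero the
           * destination, so the caller never sees stale data. */
          static long sketch_get_user_u32(u32 *dst, const u32 __user *src)
          {
                  u32 tmp;

                  if (copy_from_user(&tmp, src, sizeof(tmp))) {
                          *dst = 0;       /* the zeroing that was missing on ARC */
                          return -EFAULT;
                  }
                  *dst = tmp;
                  return 0;
          }
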
      Reported-by: Al Viro <viro@ZenIV.linux.org.uk>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: linux-snps-arc@lists.infradead.org
      Cc: linux-kernel@vger.kernel.org
      Cc: stable@vger.kernel.org
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  3. 20 Aug 2016, 5 commits
    • ARC: export __udivdi3 for modules · c57653dc
      Vineet Gupta committed
      Some module using div_u64() was failing to link because the libgcc
      64-bit divide assist routine was not being exported for modules.
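
      A minimal sketch of the fix (the exact file and declaration in arch/arc
      may differ; the prototype is libgcc's standard one):

          #include <linux/export.h>

          /* libgcc's 64-bit unsigned divide helper, which the failing modules
           * ended up calling via div_u64(). It is defined by the compiler
           * runtime and only needs to be exported here. */
          extern unsigned long long __udivdi3(unsigned long long, unsigned long long);

          EXPORT_SYMBOL(__udivdi3);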
      
      Reported-by: avinashp@quantenna.com
      Cc: stable@vger.kernel.org
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
    • ARC: mm: fix build breakage with STRICT_MM_TYPECHECKS · 1c3c9093
      Vineet Gupta committed
      |  CC      mm/memory.o
      | In file included from ../mm/memory.c:53:0:
      | ../include/linux/pfn_t.h: In function ‘pfn_t_pte’:
      | ../include/linux/pfn_t.h:78:2: error: conversion to non-scalar type requested
      |  return pfn_pte(pfn_t_to_pfn(pfn), pgprot);
      
      With STRICT_MM_TYPECHECKS, pte_t is a struct, and the offending code
      forces a cast which ends up shifting a struct, hence the gcc error.
      
      Note that in the recent past some arches (aarch64, s390) made
      STRICT_MM_TYPECHECKS the default, but we don't for ARC as this leads to
      slightly worse generated code, given the ARC ABI's rules for returning
      structs (which pte_t would become).
      
      Quoting from ARC ABI...
      
        "Results of type struct are returned in a caller-supplied temporary
        variable whose address is passed in r0.
        For such functions, the arguments are shifted so that they are
        passed in r1 and up."
      
      So:
       - the struct to be returned would be allocated on the stack, requiring
         extra code at call sites
       - the callee updates stack memory to facilitate the return (vs. a
         simple MOV into the return reg r0)
      
      Hence STRICT_MM_TYPECHECKS is not enabled by default for ARC.
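
      For reference, the two shapes of pte_t look roughly like this (a sketch
      following the usual asm-generic convention; the exact ARC definitions
      may differ):

          #ifdef STRICT_MM_TYPECHECKS
          /* struct form: type-safe, but per the ARC ABI a pte_t-returning helper
           * hands the result back via a caller-supplied temporary (address in r0) */
          typedef struct { unsigned long pte; } pte_t;
          #define pte_val(x)      ((x).pte)
          #define __pte(x)        ((pte_t) { (x) })
          #else
          /* scalar form: the result comes back in r0 like any other integer */
          typedef unsigned long pte_t;
          #define pte_val(x)      (x)
          #define __pte(x)        (x)
          #endif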
      
      Cc: <stable@vger.kernel.org>   #4.4+
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
    • ARC: export kmap · d77976c4
      Vineet Gupta committed
      |  MODPOST 7 modules
      | ERROR: "kmap" [fs/ext2/ext2.ko] undefined!
      | ../scripts/Makefile.modpost:91: recipe for target '__modpost' failed
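
      The fix amounts to exporting the symbol next to its definition; a minimal
      sketch (the placement, e.g. arch/arc/mm/highmem.c, is assumed):

          #include <linux/export.h>
          #include <linux/highmem.h>

          /* kmap() lives in the arch highmem code; modules such as ext2 can
           * only link against it once the symbol is exported. */
          EXPORT_SYMBOL(kmap);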
      
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
    • ARC: Support syscall ABI v4 · 840c054f
      Vineet Gupta committed
      The syscall ABI includes the gcc function calling ABI, since a syscall
      implies a userland caller and a kernel callee.
      
      The current gcc ABI (v3) for the ARCv2 ISA required 64-bit data to be
      passed in even-odd register pairs (potentially punching register holes
      when passing such values as args). This was partly driven by the fact
      that the double-word LDD/STD instructions in ARCv2 expect this register
      alignment, so gcc enforcing it avoids an extra MOV at the cost of a few
      unused registers (which we have plenty of anyway).
      
      This, however, was rejected as part of upstreaming the gcc port to HS,
      so the new ABI v4 doesn't enforce the even-odd register restriction.
      
      Do note that for ARCompact ISA builds v3 and v4 are practically the same in
      terms of gcc code generation.
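
      As an illustration of the difference (the register mapping follows the
      description above; the syscall itself is made up):

          /* Hypothetical syscall taking a 64-bit argument:
           *
           *   long sys_example(int fd, long long off, int whence);
           *
           * ABI v3 (even-odd pairs): fd -> r0, off -> r2:r3 (r1 left as a hole), whence -> r4
           * ABI v4 (packed):         fd -> r0, off -> r1:r2,                     whence -> r3
           */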
      
      In terms of change management, we infer the new ABI if gcc 6.x onwards
      is used for building the kernel.
      
      This also needs a stable backport to enable older kernels to work with
      new tools/user-space.
      
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
    • ARC: use correct offset in pt_regs for saving/restoring user mode r25 · 86147e3c
      Liav Rehana committed
      User-mode callee regs are explicitly collected before signal delivery or
      a breakpoint trap. r25 is special for the kernel as it serves as the task
      pointer, so the user-mode value is clobbered very early. It is saved in
      pt_regs, where generally only scratch (aka caller-saved) regs are saved.
      
      The code accessing the corresponding pt_regs location had a subtle bug:
      it used a load/store with offset scaling, whereas the offset was already
      correct in bytes. Fix this by replacing LD.AS with a standard LD.
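
      The bug class, illustrated in C (the struct layout and offset name are
      made up; the real code is ARC assembly in the entry path):

          struct regs_example { unsigned long r[32]; };           /* stand-in for pt_regs */
          #define R25_BYTE_OFF    (25 * sizeof(unsigned long))    /* already a byte offset */

          static unsigned long load_user_r25(struct regs_example *p)
          {
                  /* Correct, the equivalent of a plain LD: byte addressing with
                   * the precomputed offset. The buggy LD.AS variant scaled the
                   * offset again by the access size, i.e. effectively
                   * (char *)p + R25_BYTE_OFF * sizeof(unsigned long). */
                  return *(unsigned long *)((char *)p + R25_BYTE_OFF);
          }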
      
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Liav Rehana <liavr@mellanox.com>
      Reviewed-by: Alexey Brodkin <abrodkin@synopsys.com>
      [vgupta: rewrote title and commit log]
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
  4. 11 Aug 2016, 2 commits
    • ARC: Elide redundant setup of DMA callbacks · 45c3b08a
      Vineet Gupta committed
      For resources shared by all cores, such as the SLC and IOC, only the
      master core needs to do any setup / enabling / disabling etc.
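
      A sketch of the idea (the function and helpers here are illustrative,
      not the actual ARC code):

          #include <linux/smp.h>

          void setup_slc(void);   /* hypothetical helpers for the shared resources */
          void setup_ioc(void);

          void example_shared_cache_setup(void)
          {
                  /* Per-CPU init runs on every core, but the SLC and IOC are
                   * chip-wide, so only the master core touches them. */
                  if (smp_processor_id() != 0)
                          return;

                  setup_slc();
                  setup_ioc();
          }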
      
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
    • ARC: Call trace_hardirqs_on() before enabling irqs · 18b43e89
      Daniel Mentz committed
      trace_hardirqs_on_caller() in lockdep.c expects to be called before, not
      after, interrupts are actually enabled.
      
      The following comment in kernel/locking/lockdep.c substantiates this
      claim:
      
      "
      /*
       * We're enabling irqs and according to our state above irqs weren't
       * already enabled, yet we find the hardware thinks they are in fact
       * enabled.. someone messed up their IRQ state tracing.
       */
      "
      
      An example can be found in include/linux/irqflags.h:
      
      	do { trace_hardirqs_on(); raw_local_irq_enable(); } while (0)
      
      Without this change, we hit the following DEBUG_LOCKS_WARN_ON:
      
      [    7.760000] ------------[ cut here ]------------
      [    7.760000] WARNING: CPU: 0 PID: 1 at kernel/locking/lockdep.c:2711 resume_user_mode_begin+0x48/0xf0
      [    7.770000] DEBUG_LOCKS_WARN_ON(!irqs_disabled())
      [    7.780000] Modules linked in:
      [    7.780000] CPU: 0 PID: 1 Comm: init Not tainted 4.7.0-00003-gc668bb9-dirty #366
      [    7.790000]
      [    7.790000] Stack Trace:
      [    7.790000]   arc_unwind_core.constprop.1+0xa4/0x118
      [    7.800000]   warn_slowpath_fmt+0x72/0x158
      [    7.800000]   resume_user_mode_begin+0x48/0xf0
      [    7.810000] ---[ end trace 6f6a7a8fae20d2f0 ]---
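
      The required ordering, as a minimal C sketch (the actual fix is in the
      ARC entry code; the helper name here is made up):

          #include <linux/irqflags.h>

          static inline void irq_enable_traced(void)
          {
                  trace_hardirqs_on();    /* tell lockdep first, while IRQs are still off */
                  raw_local_irq_enable(); /* only then actually enable interrupts */
          }
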
      Signed-off-by: Daniel Mentz <danielmentz@google.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
  5. 04 Aug 2016, 1 commit
    • dma-mapping: use unsigned long for dma_attrs · 00085f1e
      Krzysztof Kozlowski committed
      The dma-mapping core and the implementations do not change the DMA
      attributes passed by pointer, so the pointer could point to const data.
      However, the attributes do not have to be a bitfield; a plain unsigned
      long will do fine:
      
      1. It is just simpler, both in terms of reading the code and setting
         attributes: instead of initializing local attributes on the stack
         and passing a pointer to them to dma_set_attr(), just set the bits
         (see the before/after sketch below).
      
      2. It brings safety and const-correctness checking, because the
         attributes are passed by value.
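
      A before/after usage sketch (the surrounding call site is illustrative;
      the attribute and function names are the dma-mapping ones referenced in
      this patch):

          /* old API: attributes carried in a struct dma_attrs on the stack */
          DEFINE_DMA_ATTRS(attrs);
          dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);
          buf = dma_alloc_attrs(dev, size, &dma_handle, GFP_KERNEL, &attrs);

          /* new API: attributes are just bits in an unsigned long */
          buf = dma_alloc_attrs(dev, size, &dma_handle, GFP_KERNEL,
                                DMA_ATTR_WRITE_COMBINE);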
      
      Semantic patches for this change (at least most of them):
      
          virtual patch
          virtual context
      
          @r@
          identifier f, attrs;
      
          @@
          f(...,
          - struct dma_attrs *attrs
          + unsigned long attrs
          , ...)
          {
          ...
          }
      
          @@
          identifier r.f;
          @@
          f(...,
          - NULL
          + 0
           )
      
      and
      
          // Options: --all-includes
          virtual patch
          virtual context
      
          @r@
          identifier f, attrs;
          type t;
      
          @@
          t f(..., struct dma_attrs *attrs);
      
          @@
          identifier r.f;
          @@
          f(...,
          - NULL
          + 0
           )
      
      Link: http://lkml.kernel.org/r/1468399300-5399-2-git-send-email-k.kozlowski@samsung.com
      Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
      Acked-by: Vineet Gupta <vgupta@synopsys.com>
      Acked-by: Robin Murphy <robin.murphy@arm.com>
      Acked-by: Hans-Christian Noren Egtvedt <egtvedt@samfundet.no>
      Acked-by: Mark Salter <msalter@redhat.com> [c6x]
      Acked-by: Jesper Nilsson <jesper.nilsson@axis.com> [cris]
      Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch> [drm]
      Reviewed-by: Bart Van Assche <bart.vanassche@sandisk.com>
      Acked-by: Joerg Roedel <jroedel@suse.de> [iommu]
      Acked-by: Fabien Dessenne <fabien.dessenne@st.com> [bdisp]
      Reviewed-by: Marek Szyprowski <m.szyprowski@samsung.com> [vb2-core]
      Acked-by: David Vrabel <david.vrabel@citrix.com> [xen]
      Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> [xen swiotlb]
      Acked-by: Joerg Roedel <jroedel@suse.de> [iommu]
      Acked-by: Richard Kuo <rkuo@codeaurora.org> [hexagon]
      Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> [m68k]
      Acked-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> [s390]
      Acked-by: Bjorn Andersson <bjorn.andersson@linaro.org>
      Acked-by: Hans-Christian Noren Egtvedt <egtvedt@samfundet.no> [avr32]
      Acked-by: Vineet Gupta <vgupta@synopsys.com> [arc]
      Acked-by: Robin Murphy <robin.murphy@arm.com> [arm64 and dma-iommu]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  6. 03 Aug 2016, 1 commit
  7. 29 Jul 2016, 1 commit
    • ARC: mm: don't lose PTE_SPECIAL in pte_modify() · 3925a16a
      Vineet Gupta committed
      LTP madvise05 was generating an mm splat:
      
      | [ARCLinux]# /sd/ltp/testcases/bin/madvise05
      | BUG: Bad page map in process madvise05  pte:80e08211 pmd:9f7d4000
      | page:9fdcfc90 count:1 mapcount:-1 mapping:  (null) index:0x0 flags: 0x404(referenced|reserved)
      | page dumped because: bad pte
      | addr:200b8000 vm_flags:00000070 anon_vma:  (null) mapping:  (null) index:1005c
      | file:  (null) fault:  (null) mmap:  (null) readpage:  (null)
      | CPU: 2 PID: 6707 Comm: madvise05
      
      And for newer kernels, the system was rendered unusable afterwards.
      
      The problem was mprotect->pte_modify() clearing PTE_SPECIAL (which is
      set to identify the special zero page wired to the pte). When the pte
      was finally unmapped, the special-casing for the zero page was not done;
      instead it was treated as a "normal" page, tripping over the map counts etc.
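
      The fix boils down to preserving the special bit across pte_modify(); a
      rough sketch (the mask and bit names follow common arch conventions and
      may not match ARC exactly):

          /* Bits preserved across a protection change; having _PAGE_SPECIAL in
           * this mask keeps the zero-page marker intact through mprotect(). */
          #define _PAGE_CHG_MASK  (PAGE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY | _PAGE_SPECIAL)

          static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
          {
                  return __pte((pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot));
          }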
      
      This fixes ARC STAR 9001053308
      
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
  8. 28 Jul 2016, 1 commit
    • Disable "maybe-uninitialized" warning globally · 6e8d666e
      Linus Torvalds committed
      Several build configurations had already disabled this warning because
      it generates a lot of false positives.  But some had not, and it was
      still enabled for "allmodconfig" builds, for example.
      
      Looking at the warnings produced, every single one I looked at was a
      false positive, and the warnings are frequent enough (and big enough)
      that they can easily hide real problems that you don't notice in the
      noise generated by -Wmaybe-uninitialized.
      
      The warning is good in theory, but this is a classic case of a warning
      that causes more problems than the warning can solve.
      
      If gcc gets better at avoiding false positives, we may be able to
      re-enable this warning.  But as is, we're better off without it, and I
      want to be able to see the *real* warnings.
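
      A typical shape of the false positives being silenced (illustrative only;
      whether gcc warns depends on version and inlining):

          int lookup_value(int key, int *val);    /* hypothetical: sets *val only on success */

          static int example(int key)
          {
                  int ret, val;

                  ret = lookup_value(key, &val);
                  if (ret)
                          return ret;

                  /* Some gcc versions warn that 'val' may be used uninitialized
                   * here, even though this point is only reached when it was set. */
                  return val;
          }
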
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  9. 27 Jul 2016, 1 commit
  10. 21 Jul 2016, 1 commit
  11. 20 Jul 2016, 1 commit
  12. 15 Jul 2016, 1 commit
  13. 14 Jul 2016, 1 commit
  14. 28 Jun 2016, 4 commits
  15. 25 Jun 2016, 1 commit
  16. 24 Jun 2016, 1 commit
  17. 20 Jun 2016, 1 commit
  18. 16 Jun 2016, 2 commits
  19. 14 Jun 2016, 1 commit
    • locking/spinlock, arch: Update and fix spin_unlock_wait() implementations · 726328d9
      Peter Zijlstra committed
      This patch updates/fixes all spin_unlock_wait() implementations.
      
      The update is in semantics; where it previously was only a control
      dependency, we now upgrade to a full load-acquire to match the
      store-release from the spin_unlock() we waited on. This ensures that
      when spin_unlock_wait() returns, we're guaranteed to observe the full
      critical section we waited on.
      
      This fixes a number of spin_unlock_wait() users that (not
      unreasonably) rely on this.
      
      I also fixed a number of ticket lock versions to only wait on the
      current lock holder, instead of for a full unlock, as this is
      sufficient.
      
      Furthermore, again for ticket locks, I added an smp_rmb() in between
      the initial ticket load and the spin loop testing the current value,
      because I could not convince myself the address dependency is
      sufficient, especially if the loads are of different sizes.
      
      I'm more than happy to remove this smp_rmb() again if people are
      certain the address dependency does indeed work as expected.
      
      Note: PPC32 will be fixed independently
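
      A generic sketch of the strengthened semantics (not any particular
      architecture's code; it leans on the smp_acquire__after_ctrl_dep()
      helper introduced alongside this series):

          #include <linux/spinlock.h>

          static inline void example_spin_unlock_wait(raw_spinlock_t *lock)
          {
                  /* Spin until the lock is observed free... */
                  while (raw_spin_is_locked(lock))
                          cpu_relax();
                  /* ...then upgrade the control dependency to ACQUIRE so the
                   * critical section we waited on is fully visible, pairing
                   * with the RELEASE in spin_unlock(). */
                  smp_acquire__after_ctrl_dep();
          }
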
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: chris@zankel.net
      Cc: cmetcalf@mellanox.com
      Cc: davem@davemloft.net
      Cc: dhowells@redhat.com
      Cc: james.hogan@imgtec.com
      Cc: jejb@parisc-linux.org
      Cc: linux@armlinux.org.uk
      Cc: mpe@ellerman.id.au
      Cc: ralf@linux-mips.org
      Cc: realmz6@gmail.com
      Cc: rkuo@codeaurora.org
      Cc: rth@twiddle.net
      Cc: schwidefsky@de.ibm.com
      Cc: tony.luck@intel.com
      Cc: vgupta@synopsys.com
      Cc: ysato@users.sourceforge.jp
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  20. 13 Jun 2016, 2 commits
  21. 02 Jun 2016, 3 commits
  22. 31 May 2016, 2 commits
  23. 30 May 2016, 2 commits