1. 17 Jul 2017, 2 commits
  2. 03 Jul 2017, 4 commits
  3. 02 Jul 2017, 1 commit
    • L
      Merge branch 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip · e18aca02
       Committed by Linus Torvalds
      Pull x86 fixes from Thomas Gleixner:
       "Fixlets for x86:
      
         - Prevent kexec crash when KASLR is enabled, which was caused by an
           address calculation bug
      
         - Restore the freeing of PUDs on memory hot remove
      
         - Correct a negated pointer check in the intel uncore performance
           monitoring driver
      
         - Plug a memory leak in an error exit path in the RDT code"
      
      * 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
        x86/intel_rdt: Fix memory leak on mount failure
        x86/boot/KASLR: Fix kexec crash due to 'virt_addr' calculation bug
        x86/boot/KASLR: Add checking for the offset of kernel virtual address randomization
        perf/x86/intel/uncore: Fix wrong box pointer check
        x86/mm/hotplug: Fix BUG_ON() after hot-remove by not freeing PUD
      e18aca02
  4. 01 Jul 2017, 11 commits
  5. 30 Jun 2017, 22 commits
    • B
      x86/boot/KASLR: Fix kexec crash due to 'virt_addr' calculation bug · 8eabf42a
       Committed by Baoquan He
       Kernel text KASLR is separated into physical address and virtual
       address randomization. For virtual address randomization, we only
       randomize to get an offset between 16M and KERNEL_IMAGE_SIZE.
       So the initial value of 'virt_addr' should be LOAD_PHYSICAL_ADDR,
       not the original kernel loading address 'output'.
      
       The bug will cause a kernel boot failure if the kernel is loaded at a
       position other than the 16M address decided at compile time.
       Kexec/kdump is such a practical case.
      
      To fix it, just assign LOAD_PHYSICAL_ADDR to virt_addr as initial
      value.
       Tested-by: Dave Young <dyoung@redhat.com>
       Signed-off-by: Baoquan He <bhe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Fixes: 8391c73c ("x86/KASLR: Randomize virtual address separately")
       Link: http://lkml.kernel.org/r/1498567146-11990-3-git-send-email-bhe@redhat.com
       Signed-off-by: Ingo Molnar <mingo@kernel.org>
      8eabf42a
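
       A minimal C sketch of the fix described in the commit above. The exact
       call site is an assumption here (the initialisation is usually placed in
       extract_kernel() in arch/x86/boot/compressed/misc.c):

           /* Virtual address randomization always starts from the compile-time
            * link address, not from wherever the kernel happens to be loaded
            * physically (which differs under kexec/kdump). */
           unsigned long virt_addr = LOAD_PHYSICAL_ADDR;  /* was: (unsigned long)output */
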
    • B
      x86/boot/KASLR: Add checking for the offset of kernel virtual address randomization · b892cb87
       Committed by Baoquan He
       For kernel text KASLR, the virtual address is confined to a 1G area,
       [0xffffffff80000000, 0xffffffffc0000000). For the implementation of
       virtual address randomization, we only randomize to get an offset
       between 16M and 1G, then add this offset to the starting address,
       0xffffffff80000000. Here 16M is the offset which is decided at the
       linking stage. So the value of the local variable 'virt_addr', which
       represents the offset, plus the kernel output size cannot exceed
       KERNEL_IMAGE_SIZE.
      
      Add a debug check for the offset. If out of bounds, print error
      message and hang there.
       Suggested-by: Ingo Molnar <mingo@kernel.org>
       Signed-off-by: Baoquan He <bhe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
       Link: http://lkml.kernel.org/r/1498567146-11990-2-git-send-email-bhe@redhat.com
       Signed-off-by: Ingo Molnar <mingo@kernel.org>
      b892cb87
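
       A sketch of the kind of bounds check the commit above adds; the exact
       condition, message text and call site are assumptions:

           /* The randomized offset plus the decompressed kernel size must stay
            * within the KERNEL_IMAGE_SIZE virtual mapping; otherwise report the
            * error and hang. */
           if (virt_addr + max(output_len, kernel_total_size) > KERNEL_IMAGE_SIZE)
                   error("Destination virtual address is beyond the kernel mapping area");
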
    • S
      tracing/kprobes: Allow to create probe with a module name starting with a digit · 9e52b325
       Committed by Sabrina Dubroca
      Always try to parse an address, since kstrtoul() will safely fail when
      given a symbol as input. If that fails (which will be the case for a
      symbol), try to parse a symbol instead.
      
      This allows creating a probe such as:
      
          p:probe/vlan_gro_receive 8021q:vlan_gro_receive+0
      
      Which is necessary for this command to work:
      
          perf probe -m 8021q -a vlan_gro_receive
      
      Link: http://lkml.kernel.org/r/fd72d666f45b114e2c5b9cf7e27b91de1ec966f1.1498122881.git.sd@queasysnail.net
      
      Cc: stable@vger.kernel.org
      Fixes: 413d37d1 ("tracing: Add kprobe-based event tracer")
       Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
       Signed-off-by: Sabrina Dubroca <sd@queasysnail.net>
       Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      9e52b325
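
       A simplified sketch of the new parsing order in create_trace_kprobe();
       helper and variable names follow kernel/trace/trace_kprobe.c of that era,
       with the surrounding argument handling elided:

           /* Try to parse the target as a raw address first; kstrtoul() simply
            * fails for strings such as "8021q:vlan_gro_receive+0". */
           ret = kstrtoul(argv[1], 0, (unsigned long *)&addr);
           if (ret) {
                   /* Not a number, so treat it as [module:]symbol[+offset]. */
                   symbol = argv[1];
                   ret = traceprobe_split_symbol_offset(symbol, &offset);
                   if (ret)
                           return ret;
           }
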
    • J
      MIPS: Avoid accidental raw backtrace · 85423636
       Committed by James Hogan
      Since commit 81a76d71 ("MIPS: Avoid using unwind_stack() with
      usermode") show_backtrace() invokes the raw backtracer when
      cp0_status & ST0_KSU indicates user mode to fix issues on EVA kernels
      where user and kernel address spaces overlap.
      
       However this is used by show_stack(), which creates its own pt_regs on
       the stack and leaves cp0_status uninitialised in most of the code paths.
       This results in non-deterministic use of the raw backtracer
       depending on the previous stack content.
      
      show_stack() deals exclusively with kernel mode stacks anyway, so
      explicitly initialise regs.cp0_status to KSU_KERNEL (i.e. 0) to ensure
      we get a useful backtrace.
      
      Fixes: 81a76d71 ("MIPS: Avoid using unwind_stack() with usermode")
       Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Cc: <stable@vger.kernel.org> # 3.15+
       Patchwork: https://patchwork.linux-mips.org/patch/16656/
       Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      85423636
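
       A sketch of the fix in show_stack() (arch/mips/kernel/traps.c); only the
       initialisation of the ad-hoc pt_regs is shown, the rest is elided:

           void show_stack(struct task_struct *task, unsigned long *sp)
           {
                   struct pt_regs regs;

                   /* show_stack() only ever walks kernel stacks, so make sure the
                    * on-stack pt_regs never looks like user mode and thereby
                    * triggers the raw backtracer. */
                   regs.cp0_status = KSU_KERNEL;
                   /* ... existing stack-walking code ... */
           }
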
    • P
      MIPS: Perform post-DMA cache flushes on systems with MAARs · cad482c1
       Committed by Paul Burton
      Recent CPUs from Imagination Technologies such as the I6400 or P6600 are
      able to speculatively fetch data from memory into caches. This means
      that if used in a system with non-coherent DMA they require that caches
      be invalidated after a device performs DMA, and before the CPU reads the
      DMA'd data, in order to ensure that stale values weren't speculatively
      prefetched.
      
      Such CPUs also introduced Memory Accessibility Attribute Registers
      (MAARs) in order to control the regions in which they are allowed to
      speculate. Thus we can use the presence of MAARs as a good indication
      that the CPU requires the above cache maintenance. Use the presence of
      MAARs to determine the result of cpu_needs_post_dma_flush() in the
      default case, in order to handle these recent CPUs correctly.
      
      Note that the return type of cpu_needs_post_dma_flush() is changed to
      bool, such that it's clearer what's happening when cpu_has_maar is cast
      to bool for the return value. If this patch were backported to a
      pre-v4.7 kernel then MIPS_CPU_MAAR was 1ull<<34, so when cast to an int
      we would incorrectly return 0. It so happens that MIPS_CPU_MAAR is
      currently 1ull<<30, so when truncated to an int gives a non-zero value
      anyway, but even so the implicit conversion from long long int to bool
      makes it clearer to understand what will happen than the implicit
      conversion from long long int to int would. The bool return type also
      fits this usage better semantically, so seems like an all-round win.
      
      Thanks to Ed for spotting the issue for pre-v4.7 kernels & suggesting
      the return type change.
       Signed-off-by: Paul Burton <paul.burton@imgtec.com>
       Reviewed-by: Bryan O'Donoghue <pure.logic@nexus-software.ie>
       Tested-by: Bryan O'Donoghue <pure.logic@nexus-software.ie>
      Cc: Ed Blake <ed.blake@imgtec.com>
      Cc: linux-mips@linux-mips.org
       Patchwork: https://patchwork.linux-mips.org/patch/16363/
       Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      cad482c1
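
       A sketch of the resulting helper, assuming it lives in
       arch/mips/mm/dma-default.c as it did in mainline at the time:

           static inline bool cpu_needs_post_dma_flush(struct device *dev)
           {
                   if (plat_device_is_coherent(dev))
                           return false;

                   switch (boot_cpu_type()) {
                   case CPU_R10000:
                   case CPU_R12000:
                   case CPU_BMIPS5000:
                           return true;
                   default:
                           /* Presence of MAARs suggests the CPU can speculatively
                            * prefetch data, so it needs the post-DMA invalidate. */
                           return cpu_has_maar;
                   }
           }
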
    • P
      MIPS: Fix IRQ tracing & lockdep when rescheduling · d8550860
       Committed by Paul Burton
      When the scheduler sets TIF_NEED_RESCHED & we call into the scheduler
      from arch/mips/kernel/entry.S we disable interrupts. This is true
      regardless of whether we reach work_resched from syscall_exit_work,
      resume_userspace or by looping after calling schedule(). Although we
      disable interrupts in these paths we don't call trace_hardirqs_off()
      before calling into C code which may acquire locks, and we therefore
      leave lockdep with an inconsistent view of whether interrupts are
      disabled or not when CONFIG_PROVE_LOCKING & CONFIG_DEBUG_LOCKDEP are
      both enabled.
      
      Without tracing this interrupt state lockdep will print warnings such
      as the following once a task returns from a syscall via
      syscall_exit_partial with TIF_NEED_RESCHED set:
      
      [   49.927678] ------------[ cut here ]------------
      [   49.934445] WARNING: CPU: 0 PID: 1 at kernel/locking/lockdep.c:3687 check_flags.part.41+0x1dc/0x1e8
      [   49.946031] DEBUG_LOCKS_WARN_ON(current->hardirqs_enabled)
      [   49.946355] CPU: 0 PID: 1 Comm: init Not tainted 4.10.0-00439-gc9fd5d362289-dirty #197
      [   49.963505] Stack : 0000000000000000 ffffffff81bb5d6a 0000000000000006 ffffffff801ce9c4
      [   49.974431]         0000000000000000 0000000000000000 0000000000000000 000000000000004a
      [   49.985300]         ffffffff80b7e487 ffffffff80a24498 a8000000ff160000 ffffffff80ede8b8
      [   49.996194]         0000000000000001 0000000000000000 0000000000000000 0000000077c8030c
      [   50.007063]         000000007fd8a510 ffffffff801cd45c 0000000000000000 a8000000ff127c88
      [   50.017945]         0000000000000000 ffffffff801cf928 0000000000000001 ffffffff80a24498
      [   50.028827]         0000000000000000 0000000000000001 0000000000000000 0000000000000000
      [   50.039688]         0000000000000000 a8000000ff127bd0 0000000000000000 ffffffff805509bc
      [   50.050575]         00000000140084e0 0000000000000000 0000000000000000 0000000000040a00
      [   50.061448]         0000000000000000 ffffffff8010e1b0 0000000000000000 ffffffff805509bc
      [   50.072327]         ...
      [   50.076087] Call Trace:
      [   50.079869] [<ffffffff8010e1b0>] show_stack+0x80/0xa8
      [   50.086577] [<ffffffff805509bc>] dump_stack+0x10c/0x190
      [   50.093498] [<ffffffff8015dde0>] __warn+0xf0/0x108
      [   50.099889] [<ffffffff8015de34>] warn_slowpath_fmt+0x3c/0x48
      [   50.107241] [<ffffffff801c15b4>] check_flags.part.41+0x1dc/0x1e8
      [   50.114961] [<ffffffff801c239c>] lock_is_held_type+0x8c/0xb0
      [   50.122291] [<ffffffff809461b8>] __schedule+0x8c0/0x10f8
      [   50.129221] [<ffffffff80946a60>] schedule+0x30/0x98
      [   50.135659] [<ffffffff80106278>] work_resched+0x8/0x34
      [   50.142397] ---[ end trace 0cb4f6ef5b99fe21 ]---
      [   50.148405] possible reason: unannotated irqs-off.
      [   50.154600] irq event stamp: 400463
      [   50.159566] hardirqs last  enabled at (400463): [<ffffffff8094edc8>] _raw_spin_unlock_irqrestore+0x40/0xa8
      [   50.171981] hardirqs last disabled at (400462): [<ffffffff8094eb98>] _raw_spin_lock_irqsave+0x30/0xb0
      [   50.183897] softirqs last  enabled at (400450): [<ffffffff8016580c>] __do_softirq+0x4ac/0x6a8
      [   50.195015] softirqs last disabled at (400425): [<ffffffff80165e78>] irq_exit+0x110/0x128
      
      Fix this by using the TRACE_IRQS_OFF macro to call trace_hardirqs_off()
      when CONFIG_TRACE_IRQFLAGS is enabled. This is done before invoking
      schedule() following the work_resched label because:
      
       1) Interrupts are disabled regardless of the path we take to reach
          work_resched() & schedule().
      
       2) Performing the tracing here avoids the need to do it in paths which
          disable interrupts but don't call out to C code before hitting a
          path which uses the RESTORE_SOME macro that will call
          trace_hardirqs_on() or trace_hardirqs_off() as appropriate.
      
      We call trace_hardirqs_on() using the TRACE_IRQS_ON macro before calling
      syscall_trace_leave() for similar reasons, ensuring that lockdep has a
      consistent view of state after we re-enable interrupts.
       Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Fixes: 1da177e4 ("Linux-2.6.12-rc2")
      Cc: linux-mips@linux-mips.org
      Cc: stable <stable@vger.kernel.org>
       Patchwork: https://patchwork.linux-mips.org/patch/15385/
       Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      d8550860
    • P
      MIPS: pm-cps: Drop manual cache-line alignment of ready_count · 161c51cc
       Committed by Paul Burton
      We allocate memory for a ready_count variable per-CPU, which is accessed
      via a cached non-coherent TLB mapping to perform synchronisation between
      threads within the core using LL/SC instructions. In order to ensure
      that the variable is contained within its own data cache line we
      allocate 2 lines worth of memory & align the resulting pointer to a line
      boundary. This is however unnecessary, since kmalloc is guaranteed to
      return memory which is at least cache-line aligned (see
      ARCH_DMA_MINALIGN). Stop the redundant manual alignment.
      
      Besides cleaning up the code & avoiding needless work, this has the side
      effect of avoiding an arithmetic error found by Bryan on 64 bit systems
      due to the 32 bit size of the former dlinesz. This led the ready_count
      variable to have its upper 32b cleared erroneously for MIPS64 kernels,
      causing problems when ready_count was later used on MIPS64 via cpuidle.
       Signed-off-by: Paul Burton <paul.burton@imgtec.com>
       Fixes: 3179d37e ("MIPS: pm-cps: add PM state entry code for CPS systems")
       Reported-by: Bryan O'Donoghue <bryan.odonoghue@imgtec.com>
       Reviewed-by: Bryan O'Donoghue <bryan.odonoghue@imgtec.com>
       Tested-by: Bryan O'Donoghue <bryan.odonoghue@imgtec.com>
      Cc: linux-mips@linux-mips.org
      Cc: stable <stable@vger.kernel.org> # v3.16+
       Patchwork: https://patchwork.linux-mips.org/patch/15383/
       Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      161c51cc
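
       A simplified sketch of the allocation after the change; the variable
       names are illustrative rather than copied from arch/mips/kernel/pm-cps.c:

           u32 *ready_count;

           /* kmalloc() guarantees at least ARCH_DMA_MINALIGN (a full cache line)
            * of alignment, so over-allocating two lines and rounding the pointer
            * up by hand is unnecessary. */
           ready_count = kmalloc(sizeof(*ready_count), GFP_KERNEL);
           if (!ready_count)
                   return -ENOMEM;
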
    • D
      ARM: 8685/1: ensure memblock-limit is pmd-aligned · 9e25ebfe
       Committed by Doug Berger
       The pmd containing memblock_limit is cleared by prepare_page_table(),
       which creates the opportunity for early_alloc() to allocate unmapped
       memory if memblock_limit is not pmd-aligned, causing a boot-time hang.
      
      Commit 965278dc ("ARM: 8356/1: mm: handle non-pmd-aligned end of RAM")
      attempted to resolve this problem, but there is a path through the
      adjust_lowmem_bounds() routine where if all memory regions start and
      end on pmd-aligned addresses the memblock_limit will be set to
      arm_lowmem_limit.
      
      Since arm_lowmem_limit can be affected by the vmalloc early parameter,
      the value of arm_lowmem_limit may not be pmd-aligned. This commit
      corrects this oversight such that memblock_limit is always rounded
      down to pmd-alignment.
      
      Fixes: 965278dc ("ARM: 8356/1: mm: handle non-pmd-aligned end of RAM")
       Signed-off-by: Doug Berger <opendmb@gmail.com>
       Suggested-by: Mark Rutland <mark.rutland@arm.com>
       Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
      9e25ebfe
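
       A sketch of the end of adjust_lowmem_bounds() after the fix; the
       surrounding context is an assumption:

           if (!memblock_limit)
                   memblock_limit = arm_lowmem_limit;

           /* Round down unconditionally, so that a vmalloc=-adjusted
            * arm_lowmem_limit can never leave memblock_limit unaligned. */
           memblock_limit = round_down(memblock_limit, PMD_SIZE);

           memblock_set_current_limit(memblock_limit);
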
    • L
      Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net · 4d8a991d
       Committed by Linus Torvalds
      Pull networking fixes from David Miller:
      
       1) Need to access netdev->num_rx_queues behind an accessor in netvsc
          driver otherwise the build breaks with some configs, from Arnd
          Bergmann.
      
       2) Add dummy xfrm_dev_event() so that build doesn't fail when
          CONFIG_XFRM_OFFLOAD is not set. From Hangbin Liu.
      
        3) Don't OOPS when pfkey_msg2xfrm_state() signals an error, from Dan
           Carpenter.
      
       4) Fix MCDI command size for filter operations in sfc driver, from
          Martin Habets.
      
       5) Fix UFO segmenting so that we don't calculate incorrect checksums,
          from Michal Kubecek.
      
       6) When ipv6 datagram connects fail, reset destination address and
          port. From Wei Wang.
      
       7) TCP disconnect must reset the cached receive DST, from WANG Cong.
      
       8) Fix sign extension bug on 32-bit in dev_get_stats(), from Eric
          Dumazet.
      
       9) fman driver has to depend on HAS_DMA, from Madalin Bucur.
      
      10) Fix bpf pointer leak with xadd in verifier, from Daniel Borkmann.
      
       11) Fix negative page counts with GRO, from Michal Kubecek.
      
      * git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (41 commits)
        sfc: fix attempt to translate invalid filter ID
        net: handle NAPI_GRO_FREE_STOLEN_HEAD case also in napi_frags_finish()
        bpf: prevent leaking pointer via xadd on unpriviledged
        arcnet: com20020-pci: add missing pdev setup in netdev structure
        arcnet: com20020-pci: fix dev_id calculation
        arcnet: com20020: remove needless base_addr assignment
        Trivial fix to spelling mistake in arc_printk message
        arcnet: change irq handler to lock irqsave
        rocker: move dereference before free
        mlxsw: spectrum_router: Fix NULL pointer dereference
        net: sched: Fix one possible panic when no destroy callback
        virtio-net: serialize tx routine during reset
        net: usb: asix88179_178a: Add support for the Belkin B2B128
        fsl/fman: add dependency on HAS_DMA
        net: prevent sign extension in dev_get_stats()
        tcp: reset sk_rx_dst in tcp_disconnect()
        net: ipv6: reset daddr and dport in sk if connect() fails
        bnx2x: Don't log mc removal needlessly
        bnxt_en: Fix netpoll handling.
        bnxt_en: Add missing logic to handle TPA end error conditions.
        ...
      4d8a991d
    • L
      Merge tag 'for-4.12/dm-fixes-5' of... · 27bc3440
       Committed by Linus Torvalds
      Merge tag 'for-4.12/dm-fixes-5' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm
      
      Pull device mapper fixes from Mike Snitzer:
      
       - dm thinp fix for crash that will occur when metadata device failure
         races with discard passdown to the underlying data device.
      
       - dm raid fix to not access the superblock's >= 1.9.0 'sectors' member
         unconditionally.
      
      * tag 'for-4.12/dm-fixes-5' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm:
        dm thin: do not queue freed thin mapping for next stage processing
        dm raid: fix oops on upgrading to extended superblock format
      27bc3440
    • L
      Merge branch 'for-linus' of git://git.kernel.dk/linux-block · 374bf883
       Committed by Linus Torvalds
      Pull block fixes from Jens Axboe:
       "Two fixes that should go into this release.
      
        One is an nvme regression fix from Keith, fixing a missing queue
        freeze if the controller is being reset. This causes the reset to
        hang.
      
        The other is a fix for a leak of the bio protection info, if smaller
        sized O_DIRECT is used. This fix should be more involved as we have
         other problematic paths in the kernel, but given that this isn't a
        regression in this series, we'll tackle those for 4.13"
      
      * 'for-linus' of git://git.kernel.dk/linux-block:
        block: provide bio_uninit() free freeing integrity/task associations
        nvme/pci: Fix stuck nvme reset
      374bf883
    • E
      sfc: fix attempt to translate invalid filter ID · d58299a4
       Committed by Edward Cree
      When filter insertion fails with no rollback, we were trying to convert
       EFX_EF10_FILTER_ID_INVALID to an id to store in 'ids' (which is either
       vlan->uc or vlan->mc).  This would WARN_ON_ONCE and then record a bogus
       filter ID of 0x1fff, neither of which is a good thing.
      
      Fixes: 0ccb998b ("sfc: fix filter_id misinterpretation in edge case")
       Signed-off-by: Edward Cree <ecree@solarflare.com>
       Signed-off-by: David S. Miller <davem@davemloft.net>
      d58299a4
    • M
      net: handle NAPI_GRO_FREE_STOLEN_HEAD case also in napi_frags_finish() · e44699d2
       Committed by Michal Kubeček
      Recently I started seeing warnings about pages with refcount -1. The
      problem was traced to packets being reused after their head was merged into
      a GRO packet by skb_gro_receive(). While bisecting the issue pointed to
      commit c21b48cc ("net: adjust skb->truesize in ___pskb_trim()") and
      I have never seen it on a kernel with it reverted, I believe the real
      problem appeared earlier when the option to merge head frag in GRO was
      implemented.
      
      Handling NAPI_GRO_FREE_STOLEN_HEAD state was only added to GRO_MERGED_FREE
      branch of napi_skb_finish() so that if the driver uses napi_gro_frags()
      and head is merged (which in my case happens after the skb_condense()
      call added by the commit mentioned above), the skb is reused including the
      head that has been merged. As a result, we release the page reference
      twice and eventually end up with negative page refcount.
      
      To fix the problem, handle NAPI_GRO_FREE_STOLEN_HEAD in napi_frags_finish()
      the same way it's done in napi_skb_finish().
      
      Fixes: d7e8883c ("net: make GRO aware of skb->head_frag")
       Signed-off-by: Michal Kubecek <mkubecek@suse.cz>
       Signed-off-by: David S. Miller <davem@davemloft.net>
      e44699d2
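
       A sketch of the corresponding case in napi_frags_finish(), mirroring what
       napi_skb_finish() already did; helper names are as in net/core/dev.c of
       that time:

           case GRO_MERGED_FREE:
                   if (NAPI_GRO_CB(skb)->free == NAPI_GRO_FREE_STOLEN_HEAD)
                           napi_skb_free_stolen_head(skb);
                   else
                           napi_reuse_skb(napi, skb);
                   break;
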
    • D
      bpf: prevent leaking pointer via xadd on unpriviledged · 6bdf6abc
       Committed by Daniel Borkmann
       Leaking kernel addresses to unprivileged users is generally disallowed;
       for example, the verifier rejects the following:
      
        0: (b7) r0 = 0
        1: (18) r2 = 0xffff897e82304400
        3: (7b) *(u64 *)(r1 +48) = r2
        R2 leaks addr into ctx
      
      Doing pointer arithmetic on them is also forbidden, so that they
      don't turn into unknown value and then get leaked out. However,
      there's xadd as a special case, where we don't check the src reg
      for being a pointer register, e.g. the following will pass:
      
        0: (b7) r0 = 0
        1: (7b) *(u64 *)(r1 +48) = r0
        2: (18) r2 = 0xffff897e82304400 ; map
        4: (db) lock *(u64 *)(r1 +48) += r2
        5: (95) exit
      
       We could store the pointer into skb->cb, lose the type context,
      and then read it out from there again to leak it eventually out
      of a map value. Or more easily in a different variant, too:
      
         0: (bf) r6 = r1
         1: (7a) *(u64 *)(r10 -8) = 0
         2: (bf) r2 = r10
         3: (07) r2 += -8
         4: (18) r1 = 0x0
         6: (85) call bpf_map_lookup_elem#1
         7: (15) if r0 == 0x0 goto pc+3
         R0=map_value(ks=8,vs=8,id=0),min_value=0,max_value=0 R6=ctx R10=fp
         8: (b7) r3 = 0
         9: (7b) *(u64 *)(r0 +0) = r3
        10: (db) lock *(u64 *)(r0 +0) += r6
        11: (b7) r0 = 0
        12: (95) exit
      
        from 7 to 11: R0=inv,min_value=0,max_value=0 R6=ctx R10=fp
        11: (b7) r0 = 0
        12: (95) exit
      
      Prevent this by checking xadd src reg for pointer types. Also
      add a couple of test cases related to this.
      
      Fixes: 1be7f75d ("bpf: enable non-root eBPF programs")
      Fixes: 17a52670 ("bpf: verifier (add verifier core)")
       Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
       Acked-by: Alexei Starovoitov <ast@kernel.org>
       Acked-by: Martin KaFai Lau <kafai@fb.com>
       Acked-by: Edward Cree <ecree@solarflare.com>
       Signed-off-by: David S. Miller <davem@davemloft.net>
      6bdf6abc
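
       A sketch of the check added to the verifier's xadd handling; the exact
       message text is illustrative:

           /* xadd reads the source register much like a store would, so a
            * pointer in src_reg must not be allowed to escape into memory for
            * unprivileged programs. */
           if (is_pointer_value(env, insn->src_reg)) {
                   verbose("R%d leaks addr into mem\n", insn->src_reg);
                   return -EACCES;
           }
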
    • K
      perf/x86/intel/uncore: Fix wrong box pointer check · 80c65fdb
       Committed by Kan Liang
       We should not init a NULL box; it will cause a system crash.
       The issue looks to be caused by a typo.

       This was not noticed because in practice there is no NULL box. Also,
       most boxes are enabled by default, so the init code is not critical.
      
      Fixes: fff4b87e ("perf/x86/intel/uncore: Make package handling more robust")
       Signed-off-by: Kan Liang <kan.liang@intel.com>
       Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: stable@vger.kernel.org
      Link: http://lkml.kernel.org/r/20170629190926.2456-1-kan.liang@intel.com
      80c65fdb
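
       A sketch of the inverted check being fixed; function and field names here
       are assumptions based on the commit description:

           box = pmu->boxes[pkgid];
           /* The buggy code tested '!box', i.e. it only initialised the box when
            * the pointer was NULL, and then dereferenced it. */
           if (box && atomic_inc_return(&box->refcnt) == 1)
                   uncore_box_init(box);
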
    • D
      Merge branch 'arcnet-fixes' · 00778f7c
       Committed by David S. Miller
      Michael Grzeschik says:
      
      ====================
      arcnet: Collection of latest fixes
      
       Here we sum up the recent fixes I collected while using and
       stabilising the framework. They prevent a possible deadlock and fix
       the calculation of the dev_id, which can be set up by a rotary
       encoder. Besides that, we add a trivial spelling patch and fix some
       wrong and missing assignments, which improves the code footprint.
      ====================
       Signed-off-by: David S. Miller <davem@davemloft.net>
      00778f7c
    • M
      arcnet: com20020-pci: add missing pdev setup in netdev structure · 2a0ea04c
       Committed by Michael Grzeschik
       We add the pdev data to the PCI device's netdev structure. This way
       the interfaces get consistent device names in userspace (udev).
       Signed-off-by: Michael Grzeschik <m.grzeschik@pengutronix.de>
       Signed-off-by: David S. Miller <davem@davemloft.net>
      2a0ea04c
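
       The essence of the fix is a single SET_NETDEV_DEV() call in the PCI probe
       path; a hedged sketch:

           /* Tie the netdev to its PCI parent so udev can derive consistent,
            * topology-based interface names. */
           SET_NETDEV_DEV(dev, &pdev->dev);
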
    • M
      arcnet: com20020-pci: fix dev_id calculation · cb108619
       Committed by Michael Grzeschik
      The dev_id was miscalculated. Only the two bits 4-5 are relevant for the
      MA1 card. PCIARC1 and PCIFB2 use the four bits 4-7 for id selection.
       Signed-off-by: Michael Grzeschik <m.grzeschik@pengutronix.de>
       Signed-off-by: David S. Miller <davem@davemloft.net>
      cb108619
    • M
      arcnet: com20020: remove needless base_addr assignment · 0d494fcf
       Committed by Michael Grzeschik
      The assignment is superfluous.
       Signed-off-by: Michael Grzeschik <m.grzeschik@pengutronix.de>
       Signed-off-by: David S. Miller <davem@davemloft.net>
      0d494fcf
    • M
      arcnet: change irq handler to lock irqsave · 5b858403
       Committed by Michael Grzeschik
       This patch prevents the following deadlock in the arcnet driver.
      
      [   41.273910] ======================================================
      [   41.280397] [ INFO: SOFTIRQ-safe -> SOFTIRQ-unsafe lock order detected ]
      [   41.287433] 4.4.0-00034-gc0ae784 #536 Not tainted
      [   41.292366] ------------------------------------------------------
      [   41.298863] arcecho/233 [HC0[0]:SC0[2]:HE0:SE0] is trying to acquire:
      [   41.305628]  (&(&lp->lock)->rlock){+.+...}, at: [<bf083bc8>] arcnet_send_packet+0x60/0x1c0 [arcnet]
      [   41.315199]
      [   41.315199] and this task is already holding:
      [   41.321324]  (_xmit_ARCNET#2){+.-...}, at: [<c06b934c>] packet_direct_xmit+0xfc/0x1c8
      [   41.329593] which would create a new lock dependency:
      [   41.334893]  (_xmit_ARCNET#2){+.-...} -> (&(&lp->lock)->rlock){+.+...}
      [   41.341801]
      [   41.341801] but this new dependency connects a SOFTIRQ-irq-safe lock:
      [   41.350108]  (_xmit_ARCNET#2){+.-...}
      ... which became SOFTIRQ-irq-safe at:
      [   41.357539]   [<c06f8fc8>] _raw_spin_lock+0x30/0x40
      [   41.362677]   [<c063ab8c>] dev_watchdog+0x5c/0x264
      [   41.367723]   [<c0094edc>] call_timer_fn+0x6c/0xf4
      [   41.372759]   [<c00950b8>] run_timer_softirq+0x154/0x210
      [   41.378340]   [<c0036b30>] __do_softirq+0x144/0x298
      [   41.383469]   [<c0036fb4>] irq_exit+0xcc/0x130
      [   41.388138]   [<c0085c50>] __handle_domain_irq+0x60/0xb4
      [   41.393728]   [<c0014578>] __irq_svc+0x58/0x78
      [   41.398402]   [<c0010274>] arch_cpu_idle+0x24/0x3c
      [   41.403443]   [<c007127c>] cpu_startup_entry+0x1f8/0x25c
      [   41.409029]   [<c09adc90>] start_kernel+0x3c0/0x3cc
      [   41.414170]
      [   41.414170] to a SOFTIRQ-irq-unsafe lock:
      [   41.419931]  (&(&lp->lock)->rlock){+.+...}
      ... which became SOFTIRQ-irq-unsafe at:
      [   41.427996] ...  [<c06f8fc8>] _raw_spin_lock+0x30/0x40
      [   41.433409]   [<bf083d54>] arcnet_interrupt+0x2c/0x800 [arcnet]
      [   41.439646]   [<c0089120>] handle_nested_irq+0x8c/0xec
      [   41.445063]   [<c03c1170>] regmap_irq_thread+0x190/0x314
      [   41.450661]   [<c0087244>] irq_thread_fn+0x1c/0x34
      [   41.455700]   [<c0087548>] irq_thread+0x13c/0x1dc
      [   41.460649]   [<c0050f10>] kthread+0xe4/0xf8
      [   41.465158]   [<c000f810>] ret_from_fork+0x14/0x24
      [   41.470207]
      [   41.470207] other info that might help us debug this:
      [   41.470207]
      [   41.478627]  Possible interrupt unsafe locking scenario:
      [   41.478627]
      [   41.485763]        CPU0                    CPU1
      [   41.490521]        ----                    ----
      [   41.495279]   lock(&(&lp->lock)->rlock);
      [   41.499414]                                local_irq_disable();
      [   41.505636]                                lock(_xmit_ARCNET#2);
      [   41.511967]                                lock(&(&lp->lock)->rlock);
      [   41.518741]   <Interrupt>
      [   41.521490]     lock(_xmit_ARCNET#2);
      [   41.525356]
      [   41.525356]  *** DEADLOCK ***
      [   41.525356]
      [   41.531587] 1 lock held by arcecho/233:
      [   41.535617]  #0:  (_xmit_ARCNET#2){+.-...}, at: [<c06b934c>] packet_direct_xmit+0xfc/0x1c8
      [   41.544355]
      the dependencies between SOFTIRQ-irq-safe lock and the holding lock:
      [   41.552362] -> (_xmit_ARCNET#2){+.-...} ops: 27 {
      [   41.557357]    HARDIRQ-ON-W at:
      [   41.560664]                     [<c06f8fc8>] _raw_spin_lock+0x30/0x40
      [   41.567445]                     [<c063ba28>] dev_deactivate_many+0x114/0x304
      [   41.574866]                     [<c063bc3c>] dev_deactivate+0x24/0x38
      [   41.581646]                     [<c0630374>] linkwatch_do_dev+0x40/0x74
      [   41.588613]                     [<c06305d8>] __linkwatch_run_queue+0xec/0x140
      [   41.596120]                     [<c0630658>] linkwatch_event+0x2c/0x34
      [   41.602991]                     [<c004af30>] process_one_work+0x188/0x40c
      [   41.610131]                     [<c004b200>] worker_thread+0x4c/0x480
      [   41.616912]                     [<c0050f10>] kthread+0xe4/0xf8
      [   41.623048]                     [<c000f810>] ret_from_fork+0x14/0x24
      [   41.629735]    IN-SOFTIRQ-W at:
      [   41.633039]                     [<c06f8fc8>] _raw_spin_lock+0x30/0x40
      [   41.639820]                     [<c063ab8c>] dev_watchdog+0x5c/0x264
      [   41.646508]                     [<c0094edc>] call_timer_fn+0x6c/0xf4
      [   41.653190]                     [<c00950b8>] run_timer_softirq+0x154/0x210
      [   41.660425]                     [<c0036b30>] __do_softirq+0x144/0x298
      [   41.667201]                     [<c0036fb4>] irq_exit+0xcc/0x130
      [   41.673518]                     [<c0085c50>] __handle_domain_irq+0x60/0xb4
      [   41.680754]                     [<c0014578>] __irq_svc+0x58/0x78
      [   41.687077]                     [<c0010274>] arch_cpu_idle+0x24/0x3c
      [   41.693769]                     [<c007127c>] cpu_startup_entry+0x1f8/0x25c
      [   41.701006]                     [<c09adc90>] start_kernel+0x3c0/0x3cc
      [   41.707791]    INITIAL USE at:
      [   41.711003]                    [<c06f8fc8>] _raw_spin_lock+0x30/0x40
      [   41.717696]                    [<c063ba28>] dev_deactivate_many+0x114/0x304
      [   41.725026]                    [<c063bc3c>] dev_deactivate+0x24/0x38
      [   41.731718]                    [<c0630374>] linkwatch_do_dev+0x40/0x74
      [   41.738593]                    [<c06305d8>] __linkwatch_run_queue+0xec/0x140
      [   41.746011]                    [<c0630658>] linkwatch_event+0x2c/0x34
      [   41.752789]                    [<c004af30>] process_one_work+0x188/0x40c
      [   41.759847]                    [<c004b200>] worker_thread+0x4c/0x480
      [   41.766541]                    [<c0050f10>] kthread+0xe4/0xf8
      [   41.772596]                    [<c000f810>] ret_from_fork+0x14/0x24
      [   41.779198]  }
      [   41.780945]  ... key      at: [<c124d620>] netdev_xmit_lock_key+0x38/0x1c8
      [   41.788192]  ... acquired at:
      [   41.791309]    [<c007bed8>] lock_acquire+0x70/0x90
      [   41.796361]    [<c06f9140>] _raw_spin_lock_irqsave+0x40/0x54
      [   41.802324]    [<bf083bc8>] arcnet_send_packet+0x60/0x1c0 [arcnet]
      [   41.808844]    [<c06b9380>] packet_direct_xmit+0x130/0x1c8
      [   41.814622]    [<c06bc7e4>] packet_sendmsg+0x3b8/0x680
      [   41.820034]    [<c05fe8b0>] sock_sendmsg+0x14/0x24
      [   41.825091]    [<c05ffd68>] SyS_sendto+0xb8/0xe0
      [   41.829956]    [<c05ffda8>] SyS_send+0x18/0x20
      [   41.834638]    [<c000f780>] ret_fast_syscall+0x0/0x1c
      [   41.839954]
      [   41.841514]
      the dependencies between the lock to be acquired and SOFTIRQ-irq-unsafe lock:
      [   41.850302] -> (&(&lp->lock)->rlock){+.+...} ops: 5 {
      [   41.855644]    HARDIRQ-ON-W at:
      [   41.858945]                     [<c06f8fc8>] _raw_spin_lock+0x30/0x40
      [   41.865726]                     [<bf083d54>] arcnet_interrupt+0x2c/0x800 [arcnet]
      [   41.873607]                     [<c0089120>] handle_nested_irq+0x8c/0xec
      [   41.880666]                     [<c03c1170>] regmap_irq_thread+0x190/0x314
      [   41.887901]                     [<c0087244>] irq_thread_fn+0x1c/0x34
      [   41.894593]                     [<c0087548>] irq_thread+0x13c/0x1dc
      [   41.901195]                     [<c0050f10>] kthread+0xe4/0xf8
      [   41.907338]                     [<c000f810>] ret_from_fork+0x14/0x24
      [   41.914025]    SOFTIRQ-ON-W at:
      [   41.917328]                     [<c06f8fc8>] _raw_spin_lock+0x30/0x40
      [   41.924106]                     [<bf083d54>] arcnet_interrupt+0x2c/0x800 [arcnet]
      [   41.931981]                     [<c0089120>] handle_nested_irq+0x8c/0xec
      [   41.939028]                     [<c03c1170>] regmap_irq_thread+0x190/0x314
      [   41.946264]                     [<c0087244>] irq_thread_fn+0x1c/0x34
      [   41.952954]                     [<c0087548>] irq_thread+0x13c/0x1dc
      [   41.959548]                     [<c0050f10>] kthread+0xe4/0xf8
      [   41.965689]                     [<c000f810>] ret_from_fork+0x14/0x24
      [   41.972379]    INITIAL USE at:
      [   41.975595]                    [<c06f8fc8>] _raw_spin_lock+0x30/0x40
      [   41.982283]                    [<bf083d54>] arcnet_interrupt+0x2c/0x800 [arcnet]
      [   41.990063]                    [<c0089120>] handle_nested_irq+0x8c/0xec
      [   41.997027]                    [<c03c1170>] regmap_irq_thread+0x190/0x314
      [   42.004172]                    [<c0087244>] irq_thread_fn+0x1c/0x34
      [   42.010766]                    [<c0087548>] irq_thread+0x13c/0x1dc
      [   42.017267]                    [<c0050f10>] kthread+0xe4/0xf8
      [   42.023314]                    [<c000f810>] ret_from_fork+0x14/0x24
      [   42.029903]  }
      [   42.031648]  ... key      at: [<bf0854cc>] __key.42091+0x0/0xfffff0f8 [arcnet]
      [   42.039255]  ... acquired at:
      [   42.042372]    [<c007bed8>] lock_acquire+0x70/0x90
      [   42.047413]    [<c06f9140>] _raw_spin_lock_irqsave+0x40/0x54
      [   42.053364]    [<bf083bc8>] arcnet_send_packet+0x60/0x1c0 [arcnet]
      [   42.059872]    [<c06b9380>] packet_direct_xmit+0x130/0x1c8
      [   42.065634]    [<c06bc7e4>] packet_sendmsg+0x3b8/0x680
      [   42.071030]    [<c05fe8b0>] sock_sendmsg+0x14/0x24
      [   42.076069]    [<c05ffd68>] SyS_sendto+0xb8/0xe0
      [   42.080926]    [<c05ffda8>] SyS_send+0x18/0x20
      [   42.085601]    [<c000f780>] ret_fast_syscall+0x0/0x1c
      [   42.090918]
      [   42.092481]
      [   42.092481] stack backtrace:
      [   42.097065] CPU: 0 PID: 233 Comm: arcecho Not tainted 4.4.0-00034-gc0ae784 #536
      [   42.104751] Hardware name: Generic AM33XX (Flattened Device Tree)
      [   42.111183] [<c0017ec8>] (unwind_backtrace) from [<c00139d0>] (show_stack+0x10/0x14)
      [   42.119337] [<c00139d0>] (show_stack) from [<c02a82c4>] (dump_stack+0x8c/0x9c)
      [   42.126937] [<c02a82c4>] (dump_stack) from [<c0078260>] (check_usage+0x4bc/0x63c)
      [   42.134815] [<c0078260>] (check_usage) from [<c0078438>] (check_irq_usage+0x58/0xb0)
      [   42.142964] [<c0078438>] (check_irq_usage) from [<c007aaa0>] (__lock_acquire+0x1524/0x20b0)
      [   42.151740] [<c007aaa0>] (__lock_acquire) from [<c007bed8>] (lock_acquire+0x70/0x90)
      [   42.159886] [<c007bed8>] (lock_acquire) from [<c06f9140>] (_raw_spin_lock_irqsave+0x40/0x54)
      [   42.168768] [<c06f9140>] (_raw_spin_lock_irqsave) from [<bf083bc8>] (arcnet_send_packet+0x60/0x1c0 [arcnet])
      [   42.179115] [<bf083bc8>] (arcnet_send_packet [arcnet]) from [<c06b9380>] (packet_direct_xmit+0x130/0x1c8)
      [   42.189182] [<c06b9380>] (packet_direct_xmit) from [<c06bc7e4>] (packet_sendmsg+0x3b8/0x680)
      [   42.198059] [<c06bc7e4>] (packet_sendmsg) from [<c05fe8b0>] (sock_sendmsg+0x14/0x24)
      [   42.206199] [<c05fe8b0>] (sock_sendmsg) from [<c05ffd68>] (SyS_sendto+0xb8/0xe0)
      [   42.213978] [<c05ffd68>] (SyS_sendto) from [<c05ffda8>] (SyS_send+0x18/0x20)
      [   42.221388] [<c05ffda8>] (SyS_send) from [<c000f780>] (ret_fast_syscall+0x0/0x1c)
       Signed-off-by: Michael Grzeschik <m.grzeschik@pengutronix.de>

          ---
          v1 -> v2: removed unneeded zero assignment of flags
       Signed-off-by: David S. Miller <davem@davemloft.net>
      5b858403
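
       A sketch of the locking change in the interrupt handler; only the
       spin_lock calls are the point of the patch, the rest is elided:

           irqreturn_t arcnet_interrupt(int irq, void *dev_id)
           {
                   struct net_device *dev = dev_id;
                   struct arcnet_local *lp = netdev_priv(dev);
                   unsigned long flags;

                   /* Plain spin_lock() was used before; taking lp->lock without
                    * disabling interrupts is what creates the SOFTIRQ-safe ->
                    * SOFTIRQ-unsafe lock ordering that lockdep reports above. */
                   spin_lock_irqsave(&lp->lock, flags);
                   /* ... existing interrupt handling ... */
                   spin_unlock_irqrestore(&lp->lock, flags);

                   return IRQ_HANDLED;
           }
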
    • D
      rocker: move dereference before free · acb4b7df
       Committed by Dan Carpenter
       My static checker complains that ofdpa_neigh_del() can sometimes free
       "found". It just makes sense to use it before deleting it.
      
      Fixes: ecf244f7 ("rocker: fix maybe-uninitialized warning")
       Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
       Signed-off-by: David S. Miller <davem@davemloft.net>
      acb4b7df
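
       A sketch of the reordering in rocker's ofdpa code; the names follow the
       commit description and the surrounding details are assumed:

           /* Read everything we still need from 'found' before
            * ofdpa_neigh_del() gets a chance to free it. */
           *index = found->index;
           ofdpa_neigh_del(trans, found);
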