1. 29 Nov 2015, 1 commit
    • ARM: 8454/1: OF implies OF_FLATTREE · aa7d5f18
      Committed by Arnd Bergmann
      On the ARM architecture, individual platforms select CONFIG_USE_OF if they
      need it, but all device tree code is keyed off CONFIG_OF. When building
      a platform without DT support and manually enabling CONFIG_OF, we now
      get a number of build errors, e.g.
      
      arch/arm/kernel/devtree.c: In function 'setup_machine_fdt':
      arch/arm/kernel/devtree.c:215:19: error: implicit declaration of function 'early_init_dt_verify' [-Werror=implicit-function-declaration]
      
      We could now try to separate the use case of booting from DT from the
      case of using only the dynamic OF implementation, but that seems more
      complicated than the benefit would justify.
      
      This simply changes the ARM Kconfig file to always enable OF_RESERVED_MEM
      and OF_EARLY_FLATTREE when CONFIG_OF is enabled. These options add a little
      extra code when we just want the dynamic OF implementation, but that seems
      like a rather obscure case, and this version solves all CONFIG_OF related
      randconfig regressions.
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Fixes: 0166dc11 ("of: make CONFIG_OF user selectable")
      Acked-by: Rob Herring <robh@kernel.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      aa7d5f18
  2. 28 Nov 2015, 1 commit
  3. 27 Nov 2015, 6 commits
  4. 26 Nov 2015, 7 commits
    • Revert "arm64: Mark kernel page ranges contiguous" · 667c2759
      Committed by Catalin Marinas
      This reverts commit 348a65cd.
      
      Incorrect page table manipulation that does not respect the ARM ARM
      recommended break-before-make sequence may lead to TLB conflicts. The
      contiguous PTE patch makes the system even more susceptible to such
      errors by changing the mapping from a single page to a contiguous range
      of pages. An additional TLB invalidation would reduce the risk window,
      however, the correct fix is to switch to a temporary swapper_pg_dir.
      Once the correct workaround is done, the reverted commit will be
      re-applied.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Reported-by: Jeremy Linton <jeremy.linton@arm.com>
      667c2759
    • arm64: mm: keep reserved ASIDs in sync with mm after multiple rollovers · 0ebea808
      Committed by Will Deacon
      Under some unusual context-switching patterns, it is possible to end up
      with multiple threads from the same mm running concurrently with
      different ASIDs:
      
      1. CPU x schedules task t with mm p containing ASID a and generation g
         This task doesn't block and the CPU doesn't context switch.
         So:
           * per_cpu(active_asid, x) = {g,a}
           * p->context.id = {g,a}
      
      2. Some other CPU generates an ASID rollover. The global generation is
         now (g + 1). CPU x is still running t, with no context switch and
         so per_cpu(reserved_asid, x) = {g,a}
      
      3. CPU y schedules task t', which shares mm p with t. The generation
         mismatches, so we take the slowpath and hit the reserved ASID from
         CPU x. p is then updated so that p->context.id = {g + 1,a}
      
      4. CPU y schedules some other task u, which has an mm != p.
      
      5. Some other CPU generates *another* ASID rollover. The global
         generation is now (g + 2). CPU x is still running t, with no context
         switch and so per_cpu(reserved_asid, x) = {g,a}.
      
      6. CPU y once again schedules task t', but now *fails* to hit the
         reserved ASID from CPU x because of the generation mismatch. This
         results in a new ASID being allocated, despite the fact that t is
         still running on CPU x with the same mm.
      
      Consequently, TLBIs (e.g. as a result of CoW) will not be synchronised
      between the two threads.
      
      This patch fixes the problem by updating all of the matching reserved
      ASIDs when we hit on the slowpath (i.e. in step 3 above). This keeps
      the reserved ASIDs in sync with the mm and avoids the problem; a small
      standalone sketch of this bookkeeping follows this entry.
      Reported-by: Tony Thompson <anthony.thompson@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      0ebea808
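      The interaction above is easy to model outside the kernel. The following
      standalone C sketch (hypothetical names and constants, not the kernel
      code) keeps one reserved {generation, asid} snapshot per CPU; the fix is
      that when the slow path reuses a reserved ASID it bumps every reserved
      copy to the new generation, so a later rollover still recognises it.

          #include <stdio.h>
          #include <stdint.h>
          #include <stdbool.h>

          #define ASID_BITS 8
          #define ASID_MASK ((1u << ASID_BITS) - 1)
          #define NR_CPUS   2

          static uint32_t global_gen;               /* current generation */
          static uint32_t reserved_asid[NR_CPUS];   /* per-CPU {generation | asid} */

          static void rollover(void) { global_gen += 1u << ASID_BITS; }

          /* Slow path: the mm's generation is stale; try to keep its ASID. */
          static uint32_t new_context(uint32_t ctx_id, bool apply_fix)
          {
              for (int cpu = 0; cpu < NR_CPUS; cpu++) {
                  if (reserved_asid[cpu] == ctx_id) {
                      uint32_t newid = global_gen | (ctx_id & ASID_MASK);
                      if (apply_fix)                 /* keep all reserved copies in sync */
                          for (int c = 0; c < NR_CPUS; c++)
                              if (reserved_asid[c] == ctx_id)
                                  reserved_asid[c] = newid;
                      return newid;
                  }
              }
              return global_gen | 42;                /* no match: hand out a fresh ASID */
          }

          int main(void)
          {
              for (int fix = 0; fix <= 1; fix++) {
                  global_gen = 1u << ASID_BITS;      /* generation g */
                  uint32_t p_ctx = global_gen | 7;   /* step 1: mm p holds {g, a = 7} */
                  reserved_asid[0] = p_ctx;          /* step 2: CPU x reserves {g, a} */
                  reserved_asid[1] = 0;
                  rollover();                        /* generation is now g + 1 */
                  p_ctx = new_context(p_ctx, fix);   /* step 3: t' reuses the reserved ASID */
                  rollover();                        /* step 5: generation is now g + 2 */
                  p_ctx = new_context(p_ctx, fix);   /* step 6 */
                  printf("%s the fix: mm ends up with ASID %u (started with 7)\n",
                         fix ? "with" : "without", p_ctx & ASID_MASK);
              }
              return 0;
          }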
    • arm64: KASAN depends on !(ARM64_16K_PAGES && ARM64_VA_BITS_48) · f1b9032f
      Committed by Andrey Ryabinin
      On KASAN + 16K_PAGES + 48BIT_VA
       arch/arm64/mm/kasan_init.c: In function ‘kasan_early_init’:
       include/linux/compiler.h:484:38: error: call to ‘__compiletime_assert_95’ declared with attribute error: BUILD_BUG_ON failed: !IS_ALIGNED(KASAN_SHADOW_END, PGDIR_SIZE)
          _compiletime_assert(condition, msg, __compiletime_assert_, __LINE__)
      
      Currently KASAN will not work with 16K_PAGES and 48BIT_VA, so
      forbid such a configuration to avoid the above build failure.
      Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Reported-by: Suzuki K. Poulose <Suzuki.Poulose@arm.com>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      f1b9032f
    • nios2: fix cache coherency · 8e3d7c83
      Committed by Ley Foon Tan
      There is an intermittent cache coherency issue caught in toolchain
      tests. Revert to using flushd.
      Signed-off-by: Ley Foon Tan <lftan@altera.com>
      8e3d7c83
    • ARM/PCI: Move align_resource function pointer to pci_host_bridge structure · 7c7a0e94
      Committed by Gabriele Paoloni
      Commit b3a72384 ("ARM/PCI: Replace pci_sys_data->align_resource with
      global function pointer") introduced an ARM-specific align_resource()
      function pointer.  This is not portable to other arches and doesn't work
      for platforms with two different PCIe host bridge controllers.
      
      Move the function pointer to the pci_host_bridge structure so each host
      bridge driver can specify its own align_resource() function.
      Signed-off-by: Gabriele Paoloni <gabriele.paoloni@huawei.com>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      Reviewed-by: Arnd Bergmann <arnd@arndb.de>
      7c7a0e94
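      The design point above can be illustrated with a standalone C sketch
      (the structure name and the callback signature below are hypothetical,
      not the kernel's struct pci_host_bridge): keeping the hook inside each
      bridge structure lets two host controllers align resources differently,
      which a single global function pointer cannot.

          #include <stdio.h>
          #include <stdint.h>

          typedef uint64_t resource_size_t;

          struct host_bridge {
              const char *name;
              /* per-bridge alignment hook (illustrative signature) */
              resource_size_t (*align_resource)(resource_size_t start,
                                                resource_size_t size,
                                                resource_size_t align);
          };

          static resource_size_t align_4k(resource_size_t start, resource_size_t size,
                                          resource_size_t align)
          {
              (void)size; (void)align;
              return (start + 0xfffULL) & ~0xfffULL;
          }

          static resource_size_t align_64k(resource_size_t start, resource_size_t size,
                                           resource_size_t align)
          {
              (void)size; (void)align;
              return (start + 0xffffULL) & ~0xffffULL;
          }

          int main(void)
          {
              struct host_bridge bridges[] = {
                  { "pcie0", align_4k  },    /* first controller, 4K alignment   */
                  { "pcie1", align_64k },    /* second controller, 64K alignment */
              };

              for (unsigned i = 0; i < 2; i++)
                  printf("%s aligns 0x10010 to 0x%llx\n", bridges[i].name,
                         (unsigned long long)bridges[i].align_resource(0x10010, 0x1000, 0));
              return 0;
          }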
    • ARM: OMAP4+: SMP: use lockless clkdm/pwrdm api in omap4_boot_secondary · 918af9f9
      Committed by Grygorii Strashko
      OMAP CPU hotplug uses cpu1's clocks and power domains to wake CPU1 up
      from low power states (or to turn CPU1 on). This code path is also
      part of system suspend (disable_nonboot_cpus()). On the other hand,
      cpu1's clocks and power domains are used by CPUIdle. All of the above
      functionality is mutually exclusive and, therefore, the lockless
      clkdm/pwrdm api can be used in omap4_boot_secondary().

      This fixes the below backtrace on -RT, which is triggered by
      pwrdm_lock/unlock():
      
      BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:917
       in_atomic(): 1, irqs_disabled(): 0, pid: 118, name: sh
       9 locks held by sh/118:
        #0:  (sb_writers#4){.+.+.+}, at: [<c0144a6c>] vfs_write+0x13c/0x164
        #1:  (&of->mutex){+.+.+.}, at: [<c01b4c70>] kernfs_fop_write+0x48/0x19c
        #2:  (s_active#24){.+.+.+}, at: [<c01b4c78>] kernfs_fop_write+0x50/0x19c
        #3:  (device_hotplug_lock){+.+.+.}, at: [<c03cbff0>] lock_device_hotplug_sysfs+0xc/0x4c
        #4:  (&dev->mutex){......}, at: [<c03cd284>] device_online+0x14/0x88
        #5:  (cpu_add_remove_lock){+.+.+.}, at: [<c003af90>] cpu_up+0x50/0x1a0
        #6:  (cpu_hotplug.lock){++++++}, at: [<c003ae48>] cpu_hotplug_begin+0x0/0xc4
        #7:  (cpu_hotplug.lock#2){+.+.+.}, at: [<c003aec0>] cpu_hotplug_begin+0x78/0xc4
        #8:  (boot_lock){+.+...}, at: [<c002b254>] omap4_boot_secondary+0x1c/0x178
       Preemption disabled at:[<  (null)>]   (null)
      
       CPU: 0 PID: 118 Comm: sh Not tainted 4.1.12-rt11-01998-gb4a62c3-dirty #137
       Hardware name: Generic DRA74X (Flattened Device Tree)
       [<c0017574>] (unwind_backtrace) from [<c0013be8>] (show_stack+0x10/0x14)
       [<c0013be8>] (show_stack) from [<c05a8670>] (dump_stack+0x80/0x94)
       [<c05a8670>] (dump_stack) from [<c05ad158>] (rt_spin_lock+0x24/0x54)
       [<c05ad158>] (rt_spin_lock) from [<c0030dac>] (clkdm_wakeup+0x10/0x2c)
       [<c0030dac>] (clkdm_wakeup) from [<c002b2c0>] (omap4_boot_secondary+0x88/0x178)
       [<c002b2c0>] (omap4_boot_secondary) from [<c0015d00>] (__cpu_up+0xc4/0x164)
       [<c0015d00>] (__cpu_up) from [<c003b09c>] (cpu_up+0x15c/0x1a0)
       [<c003b09c>] (cpu_up) from [<c03cd2d4>] (device_online+0x64/0x88)
       [<c03cd2d4>] (device_online) from [<c03cd360>] (online_store+0x68/0x74)
       [<c03cd360>] (online_store) from [<c01b4ce0>] (kernfs_fop_write+0xb8/0x19c)
       [<c01b4ce0>] (kernfs_fop_write) from [<c0144124>] (__vfs_write+0x20/0xd8)
       [<c0144124>] (__vfs_write) from [<c01449c0>] (vfs_write+0x90/0x164)
       [<c01449c0>] (vfs_write) from [<c01451e4>] (SyS_write+0x44/0x9c)
       [<c01451e4>] (SyS_write) from [<c0010240>] (ret_fast_syscall+0x0/0x54)
       CPU1: smp_ops.cpu_die() returned, trying to resuscitate
      
      Cc: Tero Kristo <t-kristo@ti.com>
      Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
      Signed-off-by: Tony Lindgren <tony@atomide.com>
      918af9f9
    • arm: omap2+: add missing HWMOD_NO_IDLEST in 81xx hwmod data · 29f5b34c
      Committed by Neil Armstrong
      Add missing HWMOD_NO_IDLEST hwmod flag for entries not
      having omap4 clkctrl values.
      The emac0 hwmod flag fixes the davinci_emac driver probe,
      since the return value of the pm_resume() call is now checked.

      This solves the following boot errors:
      [    0.121429] omap_hwmod: l4_ls: _wait_target_ready failed: -16
      [    0.121441] omap_hwmod: l4_ls: cannot be enabled for reset (3)
      [    0.124342] omap_hwmod: l4_hs: _wait_target_ready failed: -16
      [    0.124352] omap_hwmod: l4_hs: cannot be enabled for reset (3)
      [    1.967228] omap_hwmod: emac0: _wait_target_ready failed: -16
      
      Cc: Brian Hutchinson <b.hutchman@gmail.com>
      Signed-off-by: Neil Armstrong <narmstrong@baylibre.com>
      Signed-off-by: Tony Lindgren <tony@atomide.com>
      29f5b34c
  5. 25 Nov 2015, 13 commits
    • arm64: efi: correctly map runtime regions · 3b12acf4
      Committed by Mark Rutland
      The kernel may use a page granularity of 4K, 16K, or 64K depending on
      configuration.
      
      When mapping EFI runtime regions, we use memrange_efi_to_native to round
      the physical base address of a region down to a kernel page boundary,
      and round the size up to a kernel page boundary, adding the residue left
      over from rounding down the physical base address. We do not round down
      the virtual base address.
      
      In __create_mapping we account for the offset of the virtual base from a
      granule boundary, adding the residue to the size before rounding the
      base down to said granule boundary.
      
      Thus we account for the residue twice, and when the residue is non-zero
      will cause __create_mapping to map an additional page at the end of the
      region. Depending on the memory map, this page may be in a region we are
      not intended/permitted to map, or may clash with a different region that
      we wish to map. In typical cases, mapping the next item in the memory
      map will overwrite the erroneously created entry, as we sort the memory
      map in the stub.
      
      As __create_mapping can cope with base addresses which are not page
      aligned, we can instead rely on it to map the region appropriately, and
      simplify efi_virtmap_init by removing the unnecessary code.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Leif Lindholm <leif.lindholm@linaro.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      3b12acf4
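      A small arithmetic sketch of the double counting (illustrative
      addresses, plain C, not kernel code): the residue below the page
      boundary is folded into the size once by the caller and once more by
      the mapping routine, so a region that fits in two pages ends up mapped
      as three.

          #include <stdio.h>

          #define PAGE_SIZE 0x1000UL
          #define PAGE_MASK (~(PAGE_SIZE - 1))

          int main(void)
          {
              unsigned long paddr = 0x40000800UL;   /* region base, not page aligned */
              unsigned long size  = 0x1000UL;       /* region size */

              /* caller: round the physical base down, grow the size by the residue */
              unsigned long residue  = paddr & ~PAGE_MASK;
              unsigned long map_size = (size + residue + PAGE_SIZE - 1) & PAGE_MASK;

              /* mapping routine: the virtual base was not rounded down, so the
               * same residue gets added to the size a second time */
              unsigned long vaddr  = paddr;         /* offset-preserving mapping */
              unsigned long mapped = (map_size + (vaddr & ~PAGE_MASK) + PAGE_SIZE - 1)
                                     & PAGE_MASK;

              printf("region spans %lu page(s); caller asks for %lu; %lu get mapped\n",
                     (residue + size + PAGE_SIZE - 1) / PAGE_SIZE,
                     map_size / PAGE_SIZE, mapped / PAGE_SIZE);
              return 0;
          }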
    • arm64: mm: fix fault_info table xFSC decoding · c03784ee
      Committed by Mark Rutland
      We are missing descriptions for some valid xFSC values in the fault info
      table (e.g. "TLB conflict abort"), and have erroneous descriptions for
      reserved values (e.g. "asynchronous external abort", "debug event").
      
      This patch adds the missing xFSC values, and removes erroneous decoding
      of values reserved by the architecture, as described in ARM DDI 0487A.h.
      
      At the same time, fix the unbalanced brackets for the synchronous
      parity error strings in the table.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      c03784ee
    • ARM: orion5x: Fix legacy get_irqnr_and_base · 4d2ec7e2
      Committed by Nicolas Pitre
      Commit 5be9fc23 ("ARM: orion5x: fix legacy orion5x IRQ numbers") shifted
      IRQ numbers by one but didn't update the get_irqnr_and_base macro
      accordingly.  This macro is involved when CONFIG_MULTI_IRQ_HANDLER
      is not defined.
      
      [jac: 5d6bed2a went in to v4.2, but was backported to v3.18]
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
      Fixes: 5be9fc23 ("ARM: orion5x: fix legacy orion5x IRQ numbers")
      Cc: <stable@vger.kernel.org> # v3.18+
      Signed-off-by: Jason Cooper <jason@lakedaemon.net>
      4d2ec7e2
    • ARM: dove: Fix legacy get_irqnr_and_base · c1c90728
      Committed by Nicolas Pitre
      Commit 5d6bed2a ("ARM: dove: fix legacy dove IRQ numbers") shifted
      IRQ numbers by one but didn't update the get_irqnr_and_base macro
      accordingly.  This macro is involved when CONFIG_MULTI_IRQ_HANDLER
      is not defined.
      
      [jac: 5d6bed2a went in to v4.2, but was backported to v3.18]
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
      Fixes: 5d6bed2a ("ARM: dove: fix legacy dove IRQ numbers")
      Cc: <stable@vger.kernel.org> # v3.18+
      Signed-off-by: Jason Cooper <jason@lakedaemon.net>
      c1c90728
    • KVM: nVMX: remove incorrect vpid check in nested invvpid emulation · b2467e74
      Committed by Haozhong Zhang
      This patch removes the vpid check when emulating nested invvpid
      instruction of type all-contexts invalidation. The existing code is
      incorrect because:
       (1) According to Intel SDM Vol 3, Section "INVVPID - Invalidate
           Translations Based on VPID", invvpid instruction does not check
           vpid in the invvpid descriptor when its type is all-contexts
           invalidation.
       (2) According to the same document, invvpid of type all-contexts
           invalidation does not require that there is an active VMCS, so the
           get_vmcs12() call in the existing code may result in a NULL-pointer
           dereference. In practice, it can crash both KVM itself and L1
           hypervisors that use invvpid (e.g. Xen). A small sketch of the
           corrected type handling follows this entry.
      Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      b2467e74
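      A minimal standalone sketch of the corrected decision (illustrative
      type values, not the real VMX encodings or the KVM code): only the
      single-context flavour validates the vpid from the descriptor, while
      the all-contexts flavour ignores it entirely.

          #include <stdio.h>
          #include <stdint.h>

          enum invvpid_type {
              INVVPID_SINGLE_CONTEXT = 1,   /* illustrative values */
              INVVPID_ALL_CONTEXT    = 2,
          };

          static int emulate_invvpid(enum invvpid_type type, uint16_t vpid)
          {
              switch (type) {
              case INVVPID_SINGLE_CONTEXT:
                  if (vpid == 0)
                      return -1;            /* this type does validate the vpid */
                  /* ... invalidate mappings tagged with 'vpid' ... */
                  return 0;
              case INVVPID_ALL_CONTEXT:
                  /* no vpid check: invalidate mappings for all VPIDs */
                  return 0;
              default:
                  return -1;
              }
          }

          int main(void)
          {
              printf("all-contexts,   vpid 0 -> %d (must succeed)\n",
                     emulate_invvpid(INVVPID_ALL_CONTEXT, 0));
              printf("single-context, vpid 0 -> %d (rejected as before)\n",
                     emulate_invvpid(INVVPID_SINGLE_CONTEXT, 0));
              return 0;
          }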
    • arm64: early_alloc: Fix check for allocation failure · 7142392d
      Committed by Suzuki K. Poulose
      In early_alloc we check whether memblock_alloc failed by checking the
      virtual address of the result, a check that can never catch the
      failure (see the sketch after this entry). This patch fixes it to
      check the actual memblock_alloc result for failure.
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      7142392d
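      A standalone illustration of why the old check could not work, using
      stand-in helpers rather than the real memblock_alloc/phys_to_virt:
      converting a physical address of 0 still yields a non-NULL virtual
      address, so only the allocator's own return value reveals the failure.

          #include <stdio.h>
          #include <stdint.h>

          typedef uint64_t phys_addr_t;
          #define LINEAR_MAP_BASE 0xC0000000UL        /* illustrative offset */

          /* stand-ins for the kernel helpers */
          static phys_addr_t memblock_alloc_model(void) { return 0; }  /* simulated failure */
          static void *phys_to_virt_model(phys_addr_t p)
          {
              return (void *)(uintptr_t)(p + LINEAR_MAP_BASE);
          }

          int main(void)
          {
              phys_addr_t phys = memblock_alloc_model();
              void *virt = phys_to_virt_model(phys);

              printf("checking virt (%p): %s\n", virt,
                     virt ? "looks fine, the failure goes unnoticed" : "failure caught");
              printf("checking phys (%#llx): %s\n", (unsigned long long)phys,
                     phys ? "looks fine" : "failure caught, as the patch now does");
              return 0;
          }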
    • rtc: ds1307: fix kernel splat due to wakeup irq handling · 51c4cfef
      Committed by Felipe Balbi
      Since commit 3fffd128 ("i2c: allow specifying separate wakeup
      interrupt in device tree") we have automatic wakeup irq support for
      i2c devices. That commit missed the fact that rtc-ds1307 had its own
      wakeup irq handling and ended up introducing a kernel splat for at
      least Beagle x15 boards.

      Fix that by reverting the original commit _and_ passing the correct
      interrupt names in the DTS so the i2c core can choose the correct IRQ
      as the wakeup source.
      
      Now that we have automatic wakeirq support, we can
      revert the original commit which did it manually.
      
      Fixes the following warning:
      
      [   10.346582] WARNING: CPU: 1 PID: 263 at linux/drivers/base/power/wakeirq.c:43 dev_pm_attach_wake_irq+0xbc/0xd4()
      [   10.359244] rtc-ds1307 2-006f: wake irq already initialized
      
      Cc: Tony Lindgren <tony@atomide.com>
      Cc: Nishanth Menon <nm@ti.com>
      Signed-off-by: Felipe Balbi <balbi@ti.com>
      Acked-by: Tony Lindgren <tony@atomide.com>
      Acked-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Alexandre Belloni <alexandre.belloni@free-electrons.com>
      51c4cfef
    • arm64: kvm: report original PAR_EL1 upon panic · fbb4574c
      Committed by Mark Rutland
      If we call __kvm_hyp_panic while a guest context is active, we call
      __restore_sysregs before acquiring the system register values for the
      panic, in the process throwing away the PAR_EL1 value at the point of
      the panic.
      
      This patch modifies __kvm_hyp_panic to stash the PAR_EL1 value prior to
      restoring host register values, enabling us to report the original
      values at the point of the panic.
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
      fbb4574c
    • arm64: kvm: avoid %p in __kvm_hyp_panic · 1d7a4e31
      Committed by Mark Rutland
      Currently __kvm_hyp_panic uses %p for values which are not pointers,
      such as the ESR value. This can confusingly lead to "(null)" being
      printed for the value.
      
      Use %x instead, and only use %p for host pointers.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Christoffer Dall <christoffer.dall@linaro.org>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
      1d7a4e31
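      A tiny plain-C illustration of the pitfall (made-up value, not the hyp
      panic code): pushing an integer such as an ESR through a pointer
      conversion can print as "(nil)"/"(null)" when the value happens to be
      zero, whereas an integer format always shows the number.

          #include <stdio.h>
          #include <stdint.h>

          int main(void)
          {
              uint64_t esr = 0;    /* a syndrome value that happens to be zero */

              printf("as a pointer: %p\n", (void *)(uintptr_t)esr);
              printf("as a number : %#llx\n", (unsigned long long)esr);
              return 0;
          }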
    • KVM: arm/arm64: Fix preemptible timer active state crazyness · 7e16aa81
      Committed by Christoffer Dall
      We were setting the physical active state on the GIC distributor in a
      preemptible section, which could cause us to set the active state on
      a different physical CPU from the one we were actually going to run
      on; havoc ensues.

      Since we are no longer descheduling/scheduling soft timers in the
      flush/sync timer functions, simply move the timer flush into a
      non-preemptible section.
      Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
      7e16aa81
    • arm64: KVM: Add workaround for Cortex-A57 erratum 834220 · 498cd5c3
      Committed by Marc Zyngier
      Cortex-A57 parts up to r1p2 can misreport Stage 2 translation faults
      when a Stage 1 permission fault or device alignment fault should
      have been reported.
      
      This patch implements the workaround (which is to validate that the
      Stage-1 translation actually succeeds) by using code patching.
      
      Cc: stable@vger.kernel.org
      Reviewed-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
      498cd5c3
    • arm64: KVM: Fix AArch32 to AArch64 register mapping · c0f09634
      Committed by Marc Zyngier
      When running a 32bit guest under a 64bit hypervisor, the ARMv8
      architecture defines a mapping of the 32bit registers in the 64bit
      space. This includes banked registers that are being demultiplexed
      over the 64bit ones.
      
      On exceptions caused by an operation involving a 32bit register, the
      HW exposes the register number in the ESR_EL2 register. It was so
      far understood that SW had to distinguish between AArch32 and AArch64
      accesses (based on the current AArch32 mode and register number).
      
      It turns out that I misinterpreted the ARM ARM, and the clue is in
      D1.20.1: "For some exceptions, the exception syndrome given in the
      ESR_ELx identifies one or more register numbers from the issued
      instruction that generated the exception. Where the exception is
      taken from an Exception level using AArch32 these register numbers
      give the AArch64 view of the register."
      
      Which means that the HW is already giving us the translated version,
      and that we shouldn't try to interpret it at all (for example, doing
      an MMIO operation from the IRQ mode using the LR register leads to
      very unexpected behaviours).
      
      The fix is thus not to perform a call to vcpu_reg32() at all from
      vcpu_reg(), and use whatever register number is supplied directly.
      The only case we need to find out about the mapping is when we
      actively generate a register access, which only occurs when injecting
      a fault in a guest.
      
      Cc: stable@vger.kernel.org
      Reviewed-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
      c0f09634
    • ARM/arm64: KVM: test properly for a PTE's uncachedness · e6fab544
      Committed by Ard Biesheuvel
      The open coded tests for checking whether a PTE maps a page as
      uncached use a flawed '(pte_val(xxx) & CONST) != CONST' pattern,
      which is not guaranteed to work since the type of a mapping is
      not a set of mutually exclusive bits (a small demonstration follows
      this entry).
      
      For HYP mappings, the type is an index into the MAIR table (i.e, the
      index itself does not contain any information whatsoever about the
      type of the mapping), and for stage-2 mappings it is a bit field where
      normal memory and device types are defined as follows:
      
          #define MT_S2_NORMAL            0xf
          #define MT_S2_DEVICE_nGnRE      0x1
      
      I.e., masking *and* comparing with the latter matches on the former,
      and we have been getting lucky merely because the S2 device mappings
      also have the PTE_UXN bit set, or we would misidentify memory mappings
      as device mappings.
      
      Since the unmap_range() code path (which contains one instance of the
      flawed test) is used both for HYP mappings and stage-2 mappings, and
      considering the difference between the two, it is non-trivial to fix
      this by rewriting the tests in place, as it would involve passing
      down the type of mapping through all the functions.
      
      However, since HYP mappings and stage-2 mappings both deal with host
      physical addresses, we can simply check whether the mapping is backed
      by memory that is managed by the host kernel, and only perform the
      D-cache maintenance if this is the case.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Tested-by: Pavel Fedin <p.fedin@samsung.com>
      Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
      e6fab544
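      A tiny demonstration of the flawed pattern, using the two stage-2 type
      values quoted above (standalone C, not the KVM code): masking with the
      device type and comparing also matches the normal-memory type, because
      0x1 is contained in the 0xf bit pattern.

          #include <stdio.h>

          #define MT_S2_NORMAL       0xf
          #define MT_S2_DEVICE_nGnRE 0x1

          /* flawed '(val & CONST) == CONST' style check for a device mapping */
          static int looks_like_device(unsigned long type)
          {
              return (type & MT_S2_DEVICE_nGnRE) == MT_S2_DEVICE_nGnRE;
          }

          int main(void)
          {
              printf("device mapping -> %d (expected 1)\n",
                     looks_like_device(MT_S2_DEVICE_nGnRE));
              printf("normal mapping -> %d (false positive)\n",
                     looks_like_device(MT_S2_NORMAL));
              return 0;
          }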
  6. 24 Nov 2015, 3 commits
    • ARM: dts: vfxxx: Fix dspi[01] spi-num-chipselects. · 897ed0ca
      Committed by Cory Tusar
      Per the Vybrid Reference Manual (section 3.8.6.1), dspi0 has 6 chip
      select signals associated with it, while dspi1 has only 4.
      Signed-off-by: Cory Tusar <cory.tusar@pid1solutions.com>
      Acked-by: Stefan Agner <stefan@agner.ch>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Shawn Guo <shawnguo@kernel.org>
      897ed0ca
    • ARM: dts: keystone: k2l: fix kernel crash when clk_ignore_unused is not in bootargs · 17e846aa
      Committed by Murali Karicheri
      Currently the kernel crashes randomly when the K2L EVM is booted
      without clk_ignore_unused in the bootargs. This workaround is not
      needed on other K2 devices such as K2HK and K2E, and with this fix we
      can remove the workaround altogether. The netcp driver on K2L uses
      linked RAM on the OSR (On-chip Static RAM) and requires the clock to
      this peripheral to be enabled for proper functioning. This is the
      reason for the kernel crash, so add the clock node to fix this issue.
      
      While at it, remove the workaround documentation as well.
      
      With the fix applied, clk_summary dump shows the clock to OSR enabled.
      
      cat /sys/kernel/debug/clk/clk_summary
       ------cut--------------
         tcp3d-1                   0            0   399360000          0 0
         tcp3d-0                   0            0   399360000          0 0
         osr                       1            1   399360000          0 0
         fftc-0                    0            0   399360000          0 0
       -----cut----------------
      Signed-off-by: Murali Karicheri <m-karicheri2@ti.com>
      Signed-off-by: Santosh Shilimkar <ssantosh@kernel.org>
      17e846aa
    • ARC: dw2 unwind: Remove falllback linear search thru FDE entries · 2e22502c
      Committed by Vineet Gupta
      Fixes STAR 9000953410: "perf callgraph profiling causing RCU stalls"
      
      | perf record -g -c 15000 -e cycles /sbin/hackbench
      |
      | INFO: rcu_preempt self-detected stall on CPU
      | 1: (1 GPs behind) idle=609/140000000000002/0 softirq=2914/2915 fqs=603
      | Task dump for CPU 1:
      
      The in-kernel dwarf unwinder has a fast binary lookup and a fallback
      linear search (which iterates through each of ~11K entries) that takes
      two orders of magnitude longer (~3 million cycles vs. 2000). Routines
      written in hand assembler lack dwarf info (as we don't support
      assembler CFI pseudo-ops yet), so they fail the unwinder binary
      lookup, hit the linear search, and nevertheless fail in the end.

      However, the linear search is pointless, as the binary lookup tables
      are created from the very same entries in the first place: it is
      impossible for the binary lookup to fail while the linear search
      succeeds (see the sketch after this entry). It is a pure waste of
      cycles and is thus removed by this patch.

      This manifested as RCU stalls / an NMI watchdog splat when running
      hackbench under perf with callgraph profiling. The triggering
      condition was the perf counter overflowing in a routine lacking dwarf
      info (like memset), leading to the pathetic 3-million-cycle unwinder
      slow path; by the time it returned, new interrupts were already
      pending (Timer, IPI) and were taken right away. The original memset
      didn't make forward progress, the system kept accruing more interrupts
      and more unwinder delays in a vicious feedback loop, ultimately
      triggering the NMI diagnostic.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
      2e22502c
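      A standalone sketch of the argument above (made-up "FDE" entries, not
      the ARC unwinder code): the linear fallback walks exactly the same
      entries the sorted lookup table was built from, so it can never succeed
      where the binary search missed.

          #include <stdio.h>
          #include <stddef.h>

          struct fde { unsigned long start, len; };

          static const struct fde table[] = {      /* sorted by start address */
              { 0x1000, 0x100 }, { 0x2000, 0x200 }, { 0x4000, 0x80 },
          };
          #define N (sizeof(table) / sizeof(table[0]))

          static const struct fde *binary_lookup(unsigned long pc)
          {
              size_t lo = 0, hi = N;
              while (lo < hi) {
                  size_t mid = (lo + hi) / 2;
                  if (pc < table[mid].start)
                      hi = mid;
                  else if (pc >= table[mid].start + table[mid].len)
                      lo = mid + 1;
                  else
                      return &table[mid];
              }
              return NULL;
          }

          static const struct fde *linear_lookup(unsigned long pc)
          {
              for (size_t i = 0; i < N; i++)
                  if (pc >= table[i].start && pc < table[i].start + table[i].len)
                      return &table[i];
              return NULL;
          }

          int main(void)
          {
              unsigned long pc = 0x3000;   /* e.g. inside a hand-written asm routine */
              printf("binary lookup: %s\n", binary_lookup(pc) ? "hit" : "miss");
              printf("linear lookup: %s (same data, same answer)\n",
                     linear_lookup(pc) ? "hit" : "miss");
              return 0;
          }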
  7. 23 Nov 2015, 5 commits
    • powerpc/tm: Check for already reclaimed tasks · 7f821fc9
      Committed by Michael Neuling
      Currently we can hit a scenario where we'll tm_reclaim() twice.  This
      results in a TM bad thing exception because the second reclaim occurs
      when not in suspend mode.
      
      The scenario in which this can happen is the following.  We attempt to
      deliver a signal to userspace.  To do this we need to obtain the stack
      pointer to write the signal context.  To get this stack pointer we
      must tm_reclaim() in case we need to use the checkpointed stack
      pointer (see get_tm_stackpointer()).  Normally we'd then return
      directly to userspace to deliver the signal without going through
      __switch_to().
      
      Unfortunately, if at this point we get an error (such as a bad
      userspace stack pointer), we need to exit the process.  The exit will
      result in a __switch_to().  __switch_to() will attempt to save the
      process state which results in another tm_reclaim().  This
      tm_reclaim() now causes a TM Bad Thing exception as this state has
      already been saved and the processor is no longer in TM suspend mode.
      Whee!
      
      This patch checks the state of the MSR to ensure we are TM suspended
      before we attempt the tm_reclaim().  If we've already saved the state
      away, we should no longer be in TM suspend mode.  This has the
      additional advantage of checking for a potential TM Bad Thing
      exception.
      
      Found using syscall fuzzer.
      
      Fixes: fb09692e ("powerpc: Add reclaim and recheckpoint functions for context switching transactional memory processes")
      Cc: stable@vger.kernel.org # v3.9+
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      7f821fc9
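      A toy model of the added guard (plain C; the boolean stands in for the
      MSR transaction-suspended state and all names are illustrative):
      reclaiming is only legal while suspended, and once the state has been
      saved the first time a second reclaim must be skipped.

          #include <stdio.h>
          #include <stdbool.h>

          static bool tm_suspended = true;   /* a transaction was suspended */

          static void tm_reclaim_model(void)
          {
              if (!tm_suspended) {
                  printf("reclaim skipped: state already saved (the added check)\n");
                  return;
              }
              printf("reclaiming checkpointed state\n");
              tm_suspended = false;          /* reclaiming leaves the transaction */
          }

          int main(void)
          {
              tm_reclaim_model();   /* signal delivery path */
              tm_reclaim_model();   /* later __switch_to() on the error/exit path */
              return 0;
          }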
    • powerpc/tm: Block signal return setting invalid MSR state · d2b9d2a5
      Committed by Michael Neuling
      Currently we allow both the MSR T and S bits to be set by userspace on
      a signal return.  Unfortunately this is a reserved configuration and
      will cause a TM Bad Thing exception if attempted (via rfid).
      
      This patch checks for this case in both the 32 and 64 bit signals
      code.  If both T and S are set, we mark the context as invalid.
      
      Found using a syscall fuzzer.
      
      Fixes: 2b0a576d ("powerpc: Add new transactional memory state to the signal context")
      Cc: stable@vger.kernel.org # v3.9+
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      d2b9d2a5
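      A sketch of the validation with illustrative bit positions (not the
      real powerpc MSR layout): a context whose MSR has both the
      transactional and suspended bits set is the reserved combination and
      gets rejected instead of being loaded.

          #include <stdio.h>
          #include <stdint.h>

          #define MSR_TS_T (1ULL << 34)   /* "transactional" (illustrative position) */
          #define MSR_TS_S (1ULL << 33)   /* "suspended"     (illustrative position) */

          static int sigreturn_msr_valid(uint64_t msr)
          {
              if ((msr & MSR_TS_T) && (msr & MSR_TS_S))
                  return 0;               /* reserved combination: mark context invalid */
              return 1;
          }

          int main(void)
          {
              printf("T only  -> %s\n",
                     sigreturn_msr_valid(MSR_TS_T) ? "accepted" : "rejected");
              printf("T and S -> %s\n",
                     sigreturn_msr_valid(MSR_TS_T | MSR_TS_S) ? "accepted" : "rejected");
              return 0;
          }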
    • ARM: imx: add platform irq type setting in gpc · 4699ccbf
      Committed by Anson Huang
      The GPC irq domain is a child domain of the GIC, and all platform
      irqs are now inside the GPC domain. During module population, every
      device irq should get the correct type setting in the GIC; however,
      there is no .irq_set_type callback in GPC, so irq_set_type is skipped
      and all irq types in /proc/interrupts show up as "edge", which
      mismatches the irq type settings in the dtb file. Since GPC has no
      irq type setting, just tell the kernel to use irq_chip_set_type_parent.
      Signed-off-by: Anson Huang <Anson.Huang@freescale.com>
      Cc: <stable@vger.kernel.org> # 4.1+
      Reviewed-by: Lucas Stach <l.stach@pengutronix.de>
      Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Shawn Guo <shawnguo@kernel.org>
      4699ccbf
    • ARM: dts: vfxxx: Fix erroneous property in esdhc0 node · 3fa2f949
      Committed by Sanchayan Maity
      Something seems to have gone wrong during the merging of the device
      tree changes with the following patch
      
      "ARM: dts: add property for maximum ADC clock frequencies"
      
      The property "fsl,adck-max-frequency", instead of being applied to
      the ADC1 node, got applied to the esdhc0 node. This patch fixes it.
      Signed-off-by: Sanchayan Maity <maitysanchayan@gmail.com>
      Fixes: def0641e ("ARM: dts: add property for maximum ADC clock frequencies")
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Shawn Guo <shawnguo@kernel.org>
      3fa2f949
    • ARM: shmobile: r8a7793: proper constness with __initconst · c29d387b
      Committed by Nicolas Pitre
      Both the pointer array and the pointed-to data have to be const when
      using __initconst for the annotation to be correct (see the sketch
      after this entry).  This also fixes LTO builds, which otherwise fail
      with section mismatch errors.
      
      Fixes: ec60d95b ("ARM: shmobile: Basic r8a7793 SoC support")
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
      Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
      c29d387b
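      A plain-C illustration of the const-correctness point (the __initconst
      annotation itself is kernel-specific and omitted here): for the data to
      be genuinely read-only, both the array of pointers and the strings they
      point to must be const.

          #include <stdio.h>

          /* only the pointed-to chars are const; the pointer array is writable */
          static const char *boards_wrong[] = { "r8a7793", "example-board" };

          /* both the pointers and the pointed-to data are const */
          static const char *const boards_right[] = { "r8a7793", "example-board" };

          int main(void)
          {
              boards_wrong[0] = "something-else";       /* compiles: array is mutable */
              /* boards_right[0] = "something-else";       would not compile */
              printf("%s %s\n", boards_wrong[0], boards_right[0]);
              return 0;
          }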
  8. 22 Nov 2015, 4 commits