1. 30 Jan 2017 (1 commit)
  2. 14 Jan 2017 (1 commit)
    • ARM: put types.h in uapi · ed79c9d3
      Committed by Nicolas Dichtel
      Due to the way kbuild works, this header was unintentionally exported
      back in 2013 when it was created, despite it not being in a uapi/
      directory.  This is very non-intuitive behaviour by Kbuild.
      
      However, we've had this include exported to userland for almost four
      years, and searching google for "ARM types.h __UINTPTR_TYPE__" gives
      no hint that anyone has complained about it.  So, let's make it
      officially exported in this state.
      Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
  3. 13 Jan 2017 (1 commit)
    • KVM: arm64: Access CNTHCTL_EL2 bit fields correctly on VHE systems · 488f94d7
      Committed by Jintack Lim
      The current KVM world-switch code unintentionally sets the wrong
      bits in CNTHCTL_EL2 when E2H == 1, which may allow the guest OS to
      access the physical timer.  The bit positions in CNTHCTL_EL2 change
      depending on the HCR_EL2.E2H bit: EL1PCEN and EL1PCTEN are bits 1
      and 0 when E2H is not set, but bits 11 and 10 respectively when E2H
      is set.
      
      In fact, on VHE we only need to set those bits once, not for every world
      switch. This is because the host kernel runs in EL2 with HCR_EL2.TGE ==
      1, which makes those bits have no effect for the host kernel execution.
      So we just set those bits once for guests, and that's it.
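      
      A minimal sketch of the two layouts, using illustrative macro names
      (these are not the exact kernel definitions):
      
      	/* CNTHCTL_EL2 timer trap controls; the position of each
      	 * control depends on HCR_EL2.E2H. */
      	#define CNTHCTL_EL1PCTEN	(1 << 0)	/* E2H == 0 */
      	#define CNTHCTL_EL1PCEN		(1 << 1)	/* E2H == 0 */
      	#define CNTHCTL_E2H_EL1PCTEN	(1 << 10)	/* E2H == 1 (VHE) */
      	#define CNTHCTL_E2H_EL1PCEN	(1 << 11)	/* E2H == 1 (VHE) */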
      Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
      Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
  4. 11 Jan 2017 (2 commits)
  5. 13 Dec 2016 (3 commits)
  6. 03 Dec 2016 (1 commit)
  7. 29 Nov 2016 (1 commit)
  8. 23 Nov 2016 (1 commit)
    • Revert "arm: move exports to definitions" · 8478132a
      Committed by Russell King
      This reverts commit 4dd1837d.
      
      Moving the exports for assembly code into the assembly files breaks
      KSYM trimming, but also breaks modversions.
      
      While fixing the KSYM trimming is trivial, fixing modversions brings
      us to a technically worse position than we had prior to the above
      change:
      
      - We end up with the prototype definitions divorced from everything
        else, which means that adding or removing assembly-level ksyms
        becomes more fragile (sketched after this list):
        * if adding a new assembly ksyms export, a missed prototype in
          asm-prototypes.h results in a successful build if no module in
          the selected configuration makes use of the symbol.
        * when removing a ksyms export, asm-prototypes.h will get
          forgotten; with armksyms.c, you'd get a build error if you
          forgot to touch the file.
      
      - We end up with the same number of include files and prototypes;
        they're just in a header file instead of in the .c file with
        their exports.
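      
      For illustration, a hedged sketch of the two arrangements, using
      memcpy as an example symbol (excerpts abbreviated):
      
      	/* Reverted-to arrangement: prototype and export live together
      	 * in a plain C file (arch/arm/kernel/armksyms.c), so both
      	 * modversions and KSYM trimming see them. */
      	#include <linux/string.h>
      	#include <linux/export.h>
      	EXPORT_SYMBOL(memcpy);
      
      	/* Arrangement being reverted: EXPORT_SYMBOL() sits next to the
      	 * code in the .S file, and modversions needs a matching
      	 * prototype in asm-prototypes.h, the header that is easy to
      	 * miss when adding or removing an export: */
      	void *memcpy(void *dest, const void *src, __kernel_size_t n);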
      
      As for lines of code, we don't get much of a size reduction:
       (original commit)
       47 files changed, 131 insertions(+), 208 deletions(-)
       (fix for ksyms trimming)
       7 files changed, 18 insertions(+), 5 deletions(-)
       (two fixes for modversions)
       1 file changed, 34 insertions(+)
       3 files changed, 7 insertions(+), 2 deletions(-)
      which results in a net total of only 25 lines deleted.
      
      As there does not seem to be much benefit from this change of approach,
      revert the change.
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
  9. 17 Nov 2016 (1 commit)
  10. 16 Nov 2016 (2 commits)
    • locking/core, arch: Remove cpu_relax_lowlatency() · 5bd0b85b
      Committed by Christian Borntraeger
      As there are no users left, we can remove cpu_relax_lowlatency()
      implementations from every architecture.
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Noam Camus <noamc@ezchip.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: linuxppc-dev@lists.ozlabs.org
      Cc: virtualization@lists.linux-foundation.org
      Cc: xen-devel@lists.xenproject.org
      Cc: <linux-arch@vger.kernel.org>
      Link: http://lkml.kernel.org/r/1477386195-32736-6-git-send-email-borntraeger@de.ibm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • locking/core: Introduce cpu_relax_yield() · 79ab11cd
      Committed by Christian Borntraeger
      For spinning loops, people often use barrier() or cpu_relax().
      For most architectures cpu_relax() and barrier() are the same, but
      on some architectures cpu_relax() can add latency.
      For example, on power, sparc64 and arc, cpu_relax() can shift the
      CPU towards other hardware threads in an SMT environment.
      On s390, cpu_relax() does even more: it uses a hypercall to the
      hypervisor to give up the timeslice.
      In contrast to the SMT yielding, this can result in larger
      latencies.  In some places this latency is unwanted, so another
      variant, cpu_relax_lowlatency(), was introduced.  Before that
      spreads to more and more places, let's invert the logic and provide
      a cpu_relax_yield() that can be called where yielding is more
      important than latency.  By default this is the same as cpu_relax()
      on all architectures.
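      
      A minimal usage sketch (cpu_relax_yield() takes no arguments at this
      point in the series; the 'done' flag is illustrative):
      
      	/* Spin until another CPU sets 'done'.  Here giving up the
      	 * timeslice matters more than wakeup latency; on most
      	 * architectures this compiles to plain cpu_relax(). */
      	while (!READ_ONCE(done))
      		cpu_relax_yield();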
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Noam Camus <noamc@ezchip.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: linuxppc-dev@lists.ozlabs.org
      Cc: virtualization@lists.linux-foundation.org
      Cc: xen-devel@lists.xenproject.org
      Link: http://lkml.kernel.org/r/1477386195-32736-2-git-send-email-borntraeger@de.ibm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  11. 14 Nov 2016 (1 commit)
  12. 13 Nov 2016 (1 commit)
    • efi: Allow bitness-agnostic protocol calls · 3552fdf2
      Committed by Lukas Wunner
      We already have a macro to invoke boot services which on x86 adapts
      automatically to the bitness of the EFI firmware:  efi_call_early().
      
      The macro allows sharing of functions across arches and bitness variants
      as long as those functions only call boot services.  However in practice
      functions in the EFI stub contain a mix of boot services calls and
      protocol calls.
      
      Add an efi_call_proto() macro for bitness-agnostic protocol calls to
      allow sharing more code across arches as well as deduplicating 32 bit
      and 64 bit code paths.
      
      On x86, implement it using a new efi_table_attr() macro for bitness-
      agnostic table lookups.  Refactor efi_call_early() to make use of the
      same macro.  (The resulting object code remains identical.)
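      
      A hedged sketch of the x86 shape of these macros, per the
      description above (see the commit for the exact definitions):
      
      	/* Pick the 32- or 64-bit layout of an EFI table at runtime. */
      	#define efi_table_attr(table, attr, instance)			\
      		(efi_is_64bit() ?					\
      			((table##_64_t *)(unsigned long)(instance))->attr : \
      			((table##_32_t *)(unsigned long)(instance))->attr)
      
      	/* Call a protocol method through the right-sized pointer. */
      	#define efi_call_proto(protocol, f, instance, ...)		\
      		__efi_early()->call(efi_table_attr(protocol, f, instance), \
      			instance, ##__VA_ARGS__)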
      Signed-off-by: Lukas Wunner <lukas@wunner.de>
      Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
      Cc: Andreas Noever <andreas.noever@gmail.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Jones <pjones@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-efi@vger.kernel.org
      Link: http://lkml.kernel.org/r/20161112213237.8804-8-matt@codeblueprint.co.uk
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  13. 05 Nov 2016 (1 commit)
    • arm/arm64: KVM: Perform local TLB invalidation when multiplexing vcpus on a single CPU · 94d0e598
      Committed by Marc Zyngier
      Architecturally, TLBs are private to the (physical) CPU they're
      associated with. But when multiple vcpus from the same VM are
      being multiplexed on the same CPU, the TLBs are not private
      to the vcpus (and are actually shared across the VMID).
      
      Let's consider the following scenario:
      
      - vcpu-0 maps PA to VA
      - vcpu-1 maps PA' to VA
      
      If run on the same physical CPU, vcpu-1 can hit TLB entries generated
      by vcpu-0 accesses, and access the wrong physical page.
      
      The solution to this is to keep a per-VM map of which vcpu ran last
      on each given physical CPU, and invalidate local TLBs when switching
      to a different vcpu from the same VM.
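      
      A sketch of the check on vcpu load, close to the commit's approach
      (simplified; field names may differ):
      
      	/* Remember which vcpu of this VM ran last on this physical
      	 * CPU; flush the local TLBs for the VMID when it changes. */
      	int *last_ran = this_cpu_ptr(vcpu->kvm->arch.last_vcpu_ran);
      
      	if (*last_ran != vcpu->vcpu_id) {
      		kvm_call_hyp(__kvm_tlb_flush_local_vmid, vcpu);
      		*last_ran = vcpu->vcpu_id;
      	}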
      Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
  14. 01 Nov 2016 (1 commit)
  15. 25 Oct 2016 (1 commit)
  16. 19 Oct 2016 (4 commits)
  17. 12 Oct 2016 (1 commit)
  18. 08 Oct 2016 (1 commit)
    • nmi_backtrace: add more trigger_*_cpu_backtrace() methods · 9a01c3ed
      Committed by Chris Metcalf
      Patch series "improvements to the nmi_backtrace code" v9.
      
      This patch series modifies the trigger_xxx_backtrace() NMI-based remote
      backtracing code to make it more flexible, and makes a few small
      improvements along the way.
      
      The motivation comes from the task isolation code, where there are
      scenarios where we want to be able to diagnose a case where some cpu is
      about to interrupt a task-isolated cpu.  It can be helpful to see both
      where the interrupting cpu is, and also an approximation of where the
      cpu that is being interrupted is.  The nmi_backtrace framework allows us
      to discover the stack of the interrupted cpu.
      
      I've tested that the change works as desired on tile, and build-tested
      x86, arm, mips, and sparc64.  For x86 I confirmed that the generic
      cpuidle stuff as well as the architecture-specific routines are in the
      new cpuidle section.  For arm, mips, and sparc I just build-tested it
      and made sure the generic cpuidle routines were in the new cpuidle
      section, but I didn't attempt to figure out what the platform-specific
      idle routines might be.  That might be more usefully done by someone
      with platform experience in follow-up patches.
      
      This patch (of 4):
      
      Currently you can only request a backtrace of either all cpus, or all
      cpus but yourself.  It can also be helpful to request a remote backtrace
      of a single cpu, and since we want that, the logical extension is to
      support a cpumask as the underlying primitive.
      
      This change modifies the existing lib/nmi_backtrace.c code to take a
      cpumask as its basic primitive, and modifies the linux/nmi.h code to use
      the new "cpumask" method instead.
      
      The existing clients of nmi_backtrace (arm and x86) are converted to
      using the new cpumask approach in this change.
      
      The other users of the backtracing API (sparc64 and mips) are converted
      to use the cpumask approach rather than the all/allbutself approach.
      The mips code ignored the "include_self" boolean but with this change it
      will now also dump a local backtrace if requested.
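      
      A hedged sketch of the resulting wrappers (names per the series
      description; exact signatures may differ):
      
      	/* include/linux/nmi.h: all variants funnel into a single
      	 * cpumask-based primitive. */
      	static inline bool trigger_all_cpu_backtrace(void)
      	{
      		arch_trigger_cpumask_backtrace(cpu_online_mask, false);
      		return true;
      	}
      
      	static inline bool trigger_single_cpu_backtrace(int cpu)
      	{
      		arch_trigger_cpumask_backtrace(cpumask_of(cpu), false);
      		return true;
      	}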
      
      Link: http://lkml.kernel.org/r/1472487169-14923-2-git-send-email-cmetcalf@mellanox.com
      Signed-off-by: Chris Metcalf <cmetcalf@mellanox.com>
      Tested-by: Daniel Thompson <daniel.thompson@linaro.org> [arm]
      Reviewed-by: Aaron Tomlin <atomlin@redhat.com>
      Reviewed-by: Petr Mladek <pmladek@suse.com>
      Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: David Miller <davem@davemloft.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  19. 06 Oct 2016 (1 commit)
    • ARM: fix delays · fb833b1f
      Committed by Russell King
      Commit 215e362d ("ARM: 8306/1: loop_udelay: remove bogomips value
      limitation") tried to increase the bogomips limitation, but in doing
      so messed up udelay such that it always gives about a 5% error in the
      delay, even if we use a timer.
      
      The calculation is:
      
      	loops = UDELAY_MULT * us_delay * ticks_per_jiffy >> UDELAY_SHIFT
      
      Originally, UDELAY_MULT was ((UL(2199023) * HZ) >> 11) and UDELAY_SHIFT
      30.  Assuming HZ=100, us_delay of 1000 and ticks_per_jiffy of 1660000
      (eg, 166MHz timer, 1ms delay) this would calculate:
      
      	((UL(2199023) * HZ) >> 11) * 1000 * 1660000 >> 30
      		=> 165999
      
      With the new values of 2047 * HZ + 483648 * HZ / 1000000 and 31, we get:
      
      	(2047 * HZ + 483648 * HZ / 1000000) * 1000 * 1660000 >> 31
      		=> 158269
      
      which is incorrect.  This is due to a typo - correcting it gives:
      
      	(2147 * HZ + 483648 * HZ / 1000000) * 1000 * 1660000 >> 31
      		=> 165999
      
      In other words, the original value.
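      
      Assuming the one-character fix described above, the corrected
      constants would look like this (sketch of
      arch/arm/include/asm/delay.h):
      
      	#define UDELAY_SHIFT	31
      	#define UDELAY_MULT	(2147UL * HZ + 483648UL * HZ / 1000000UL)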
      
      Fixes: 215e362d ("ARM: 8306/1: loop_udelay: remove bogomips value limitation")
      Cc: <stable@vger.kernel.org>
      Reviewed-by: Nicolas Pitre <nico@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
  20. 29 Sep 2016 (1 commit)
    • ARM: 8617/1: dma: fix dma_max_pfn() · d248220f
      Committed by Roger Quadros
      Since commit 6ce0d200 ("ARM: dma: Use dma_pfn_offset for dma address translation"),
      dma_to_pfn() already returns the PFN with the physical memory start
      offset, so we don't need to add it again.
      
      This fixes a USB mass-storage lock-up problem on systems that can't
      do DMA over the entire physical memory range, e.g. Keystone 2
      systems with 4GB RAM [K2E-EVM], which can only do DMA over the first
      2GB.
      
      What happens there is that, without this patch, the SCSI layer sets
      a wrong bounce-buffer limit in scsi_calculate_bounce_limit() for the
      USB mass-storage device.  dma_max_pfn() evaluates to 0x8fffff and
      bounce_limit is set to 0x8fffff000, whereas the maximum DMA'ble
      physical address on Keystone 2 is 0x87fffffff.  This results in
      non-DMA'ble pages being given to the USB controller and hence the
      lock-up.
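      
      A sketch of the resulting helper, per the description above (the
      exact diff is in the commit):
      
      	/* arch/arm/include/asm/dma-mapping.h: dma_to_pfn() already
      	 * folds in the physical memory start offset, so it must not
      	 * be added a second time. */
      	static inline unsigned long dma_max_pfn(struct device *dev)
      	{
      		return dma_to_pfn(dev, *dev->dma_mask);
      	}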
      
      NOTE: in the above case, USB-SCSI-device's dma_pfn_offset was showing as 0.
      This should have really been 0x780000 as on K2e, LOWMEM_START is 0x80000000
      and HIGHMEM_START is 0x800000000. DMA zone is 2GB so dma_max_pfn should be
      0x87ffff. The incorrect dma_pfn_offset for the USB storage device is because
      USB devices are not correctly inheriting the dma_pfn_offset from the
      USB host controller. This will be fixed by a separate patch.
      
      Fixes: 6ce0d200 ("ARM: dma: Use dma_pfn_offset for dma address translation")
      Cc: stable@vger.kernel.org
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Santosh Shilimkar <santosh.shilimkar@oracle.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Olof Johansson <olof@lixom.net>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Linus Walleij <linus.walleij@linaro.org>
      Reported-by: Grygorii Strashko <grygorii.strashko@ti.com>
      Signed-off-by: Roger Quadros <rogerq@ti.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  21. 24 Sep 2016 (1 commit)
  22. 23 Sep 2016 (1 commit)
    • ARM: gic-v3: Work around definition of gic_write_bpr1 · 3d9cd95f
      Committed by Marc Zyngier
      A new accessor for gic_write_bpr1 is added to arch_gicv3.h in 4.9,
      whilst the CP15 accessors are redefined in a separate branch.
      This leads to a horrible clash, where the new accessor ends up with
      a crap "asm volatile" definition.
      
      Work around this by carrying our own definition of gic_write_bpr1,
      creating a small conflict which will be obvious to resolve.
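      
      A hedged sketch of what a carried definition could look like (the
      CP15 encoding shown for ICC_BPR1 is an assumption; verify it against
      the GICv3 specification):
      
      	static inline void gic_write_bpr1(u32 val)
      	{
      		/* Assumed encoding of ICC_BPR1: p15, 0, c12, c12, 3 */
      		asm volatile("mcr p15, 0, %0, c12, c12, 3" : : "r" (val));
      	}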
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
  23. 22 Sep 2016 (5 commits)
  24. 20 Sep 2016 (1 commit)
  25. 16 Sep 2016 (1 commit)
  26. 13 Sep 2016 (1 commit)
  27. 12 Sep 2016 (1 commit)
    • ARM: 8612/1: LPAE: initialize cache policy correctly · 6b3142b2
      Committed by Stefan Agner
      The cachepolicy variable gets initialized from a masked pmd value.
      So far, the pmd has been masked with flags valid for the 2-level
      page table format, but the 3-level (LPAE) format requires a
      different mask.  On LPAE, this led to a wrong assumption about which
      initial cache policy had been used.  A later check then forces the
      cache policy to writealloc and prints the following warning:
      Forcing write-allocate cache policy for SMP
      
      This patch introduces a new definition PMD_SECT_CACHE_MASK for
      both page table formats which masks in all cache flags in both
      cases.
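      
      A hedged sketch of the new definition (the values are assumptions
      consistent with the description; see the commit for the exact
      masks):
      
      	#ifdef CONFIG_ARM_LPAE
      	/* LPAE: the cache policy lives in the AttrIndx[2:0] field. */
      	#define PMD_SECT_CACHE_MASK	(_AT(pmdval_t, 7) << 2)
      	#else
      	/* Classic 2-level format: TEX/C/B bits carry the policy. */
      	#define PMD_SECT_CACHE_MASK	(PMD_SECT_TEX(1) | \
      					 PMD_SECT_BUFFERABLE | \
      					 PMD_SECT_CACHEABLE)
      	#endif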
      Signed-off-by: Stefan Agner <stefan@agner.ch>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  28. 08 Sep 2016 (2 commits)