1. 09 September 2015 (3 commits)
    • xen: Use correctly the Xen memory terminologies · 0df4f266
      Committed by Julien Grall
      Based on include/xen/mm.h [1], Linux has been mistakenly using MFN
      where GFN is meant; I suspect this is because the first Xen support
      was for PV. This resulted in misimplemented helpers on ARM and
      confused developers about the expected behavior.
      
      For instance, pfn_to_mfn is expected, from its name, to return an MFN.
      However, if we look at the implementation on x86, it returns a GFN.
      
      For clarity, and to avoid new confusion, replace every reference to
      mfn with gfn in the helpers used by PV drivers. The x86 code will
      still keep some references to pfn_to_mfn, which may be used by all
      kinds of guests. No changes have been made to the hypercall fields,
      even though their names may be wrong, in order to stay consistent
      with the definitions in the Xen repository.
      
      Note that page_to_mfn has been renamed to xen_page_to_gfn to avoid a
      name too close to the KVM function gfn_to_page.
      
      Also take the opportunity to simplify constructions such as
      pfn_to_mfn(page_to_pfn(page)) into xen_page_to_gfn (see the sketch
      after this entry). More complex cleanups will come in follow-up
      patches.
      
      [1] http://xenbits.xen.org/gitweb/?p=xen.git;a=commitdiff;h=e758ed14f390342513405dd766e874934573e6cb
      Signed-off-by: Julien Grall <julien.grall@citrix.com>
      Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      Acked-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
      Acked-by: Wei Liu <wei.liu2@citrix.com>
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
      0df4f266
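
      A minimal sketch of the driver-side simplification described above
      (the wrapper function is illustrative, not from the patch):

          #include <linux/mm.h>
          #include <xen/page.h>

          static unsigned long example_gfn_of_page(struct page *page)
          {
                  /* Before: pfn_to_mfn(page_to_pfn(page)) - the name says
                   * MFN, but auto-translated guests get a GFN back. */
                  return xen_page_to_gfn(page);  /* name now matches behavior */
          }
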
    • arm/xen: implement correctly pfn_to_mfn · 5192b35d
      Committed by Julien Grall
      After the commit introducing conversion between DMA and guest
      addresses, all callers of pfn_to_mfn expect to get a GFN (Guest Frame
      Number). On ARM, all guests are auto-translated, so the GFN is equal
      to the Linux PFN (Pseudo-physical Frame Number).
      
      The current implementation may return an MFN if the caller passes a
      PFN associated with a mapped foreign grant. In practice I haven't
      seen the problem on a running guest, but we should fix it for the
      sake of correctness.
      
      Correct the implementation by always returning the PFN passed as a
      parameter (a sketch follows this entry).
      
      A follow-up patch will take care of renaming pfn_to_mfn to a more
      suitable name.
      Signed-off-by: Julien Grall <julien.grall@citrix.com>
      Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
      5192b35d
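
      A sketch of the corrected ARM helper (the in-tree version lives in
      arch/arm/include/asm/xen/page.h; types simplified here):

          static inline unsigned long pfn_to_mfn(unsigned long pfn)
          {
                  /* ARM guests are auto-translated: callers really want a
                   * GFN, and GFN == PFN, so never return a foreign MFN. */
                  return pfn;
          }
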
    • xen: Make clear that swiotlb and biomerge are dealing with DMA address · 32e09870
      Committed by Julien Grall
      The swiotlb is required for programming a DMA address on ARM when a
      device is not protected by an IOMMU.

      In this case, the DMA address should always be equal to the machine
      address. For DOM0 memory, Xen ensures this by keeping an identity
      mapping between guest addresses and host addresses. However, when
      mapping a foreign grant reference, the 1:1 model doesn't work.
      
      For ARM guests, most callers of pfn_to_mfn expect to get a GFN (Guest
      Frame Number), i.e. a PFN (Page Frame Number) from the Linux point of
      view, given that all ARM guests are auto-translated.
      
      Even though the name pfn_to_mfn is misleading, we need to ensure that
      those callers get a GFN and not, by mistake, an MFN. In practice I
      haven't seen errors related to this, but we should fix it for the
      sake of correctness.
      
      In order to fix the implementation of pfn_to_mfn on ARM in a
      follow-up patch, we have to introduce new helpers that return the DMA
      frame number for a given PFN and the inverse (see the sketch after
      this entry).
      
      On x86, the new helpers will be aliases of pfn_to_mfn and mfn_to_pfn.
      
      The helpers will be used in swiotlb and xen_biovec_phys_mergeable.
      
      This is necessary in the latter because we have to ensure that the
      biovec code will not try to merge a biovec using a foreign page with
      another using Linux memory.
      
      Lastly, the helper mfn_to_local_pfn has been renamed to
      bfn_to_local_pfn, given that its only usage was in swiotlb.
      Signed-off-by: Julien Grall <julien.grall@citrix.com>
      Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
      32e09870
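
      A sketch of the new helpers' shape, assuming the bfn naming implied
      by bfn_to_local_pfn above (bfn = bus/DMA frame number; the exact
      helper names and their placement are assumptions):

          /* ARM: local memory is 1:1, so the DMA frame number is the PFN */
          static inline unsigned long pfn_to_bfn(unsigned long pfn)
          {
                  return pfn;
          }

          /* x86 would instead alias the existing translation:
           *     pfn_to_bfn(pfn)  ->  pfn_to_mfn(pfn)
           *     bfn_to_pfn(bfn)  ->  mfn_to_pfn(bfn)
           */
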
  2. 28 August 2015 (1 commit)
  3. 27 August 2015 (2 commits)
    • ARM: software-based priviledged-no-access support · a5e090ac
      Committed by Russell King
      Provide a software-based implementation of the privileged-no-access
      support found in ARMv8.1.
      
      Userspace pages are mapped using a different domain number from the
      kernel and IO mappings.  If we switch the user domain to "no access"
      when we enter the kernel, we can prevent the kernel from touching
      userspace (see the sketch after this entry).
      
      However, the kernel needs to be able to access userspace via the
      various user accessor functions.  With the wrapping in the previous
      patch, we can temporarily enable access when the kernel needs user
      access, and re-disable it afterwards.
      
      This allows us to trap unintended accesses to userspace, e.g. those
      caused by an inadvertent dereference of the LIST_POISON* values,
      which, with appropriate user mappings set up, can be made to succeed.
      This in turn can allow use-after-free bugs to be exploited further
      than would otherwise be possible.
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      a5e090ac
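
      A rough sketch of the mechanism, with illustrative domain numbering
      (architecturally, each domain owns 2 bits in the DACR: 00 = no
      access, 01 = client, 11 = manager):

          #define DOMAIN_USER            1    /* illustrative domain number */
          #define DOMAIN_NOACCESS        0
          #define domain_val(dom, type)  ((type) << (2 * (dom)))

          static inline void set_domain(unsigned int dacr)
          {
                  /* write the Domain Access Control Register */
                  asm volatile("mcr p15, 0, %0, c3, c0, 0"
                               : : "r" (dacr) : "memory");
          }

          /* on kernel entry: make user pages inaccessible to the kernel */
          static inline void user_access_deny(unsigned int dacr)
          {
                  dacr &= ~domain_val(DOMAIN_USER, 3);    /* clear the field */
                  set_domain(dacr | domain_val(DOMAIN_USER, DOMAIN_NOACCESS));
          }
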
    • ARM: entry: provide uaccess assembly macro hooks · 2190fed6
      Committed by Russell King
      Provide hooks into the kernel entry and exit paths to permit control
      of userspace visibility to the kernel.  The intended use is:
      
      - on entry to kernel from user, uaccess_disable will be called to
        disable userspace visibility
      - on exit from kernel to user, uaccess_enable will be called to
        enable userspace visibility
      - on entry from a kernel exception, uaccess_save_and_disable will be
        called to save the current userspace visibility setting, and disable
        access
      - on exit from a kernel exception, uaccess_restore will be called to
        restore the userspace visibility as it was before the exception
        occurred.
      
      These hooks allow us to keep userspace visibility disabled for the
      vast majority of the kernel, except for localised regions where we
      want to explicitly access userspace (see the usage sketch after this
      entry).
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      2190fed6
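
      In C terms, the pairing around a user accessor looks roughly like
      this (a sketch; uaccess_save_and_enable()/uaccess_restore() are the
      C-level wrappers from the companion patches):

          #include <linux/uaccess.h>

          static inline unsigned long
          example_copy_to_user(void __user *to, const void *from, unsigned long n)
          {
                  unsigned int flags = uaccess_save_and_enable(); /* open window */
                  n = arm_copy_to_user(to, from, n);
                  uaccess_restore(flags);                         /* close it */
                  return n;
          }
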
  4. 26 August 2015 (1 commit)
  5. 25 August 2015 (4 commits)
  6. 21 August 2015 (8 commits)
  7. 20 August 2015 (2 commits)
    • arm/xen: Remove helpers which are PV specific · 724afaea
      Committed by Julien Grall
      ARM guests are always HVM. The current implementation assumes a 1:1
      mapping, which is only true for DOM0 and may not hold at all in the
      future.
      
      Furthermore, all the helpers but arbitrary_virt_to_machine are used
      in x86-specific code (or are only compiled for it).
      
      The helper arbitrary_virt_to_machine is only used in PV-specific
      code, so on ARM it should never be called.
      
      Add a BUG() in this helper (sketched after this entry) and drop all
      the others.
      Signed-off-by: Julien Grall <julien.grall@citrix.com>
      Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
      724afaea
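
      A sketch of the resulting ARM stub (return type simplified; the
      point is only that this PV-only path must be unreachable):

          static inline unsigned long arbitrary_virt_to_machine(void *vaddr)
          {
                  BUG();          /* ARM guests are HVM; no PV page tables */
                  return 0;       /* not reached */
          }
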
    • xen/events: Support event channel rebind on ARM · 4a5b6946
      Committed by Julien Grall
      Currently, the event channel rebind code is gated on the presence of
      the vector callback.
      
      The virtual interrupt controller on ARM has the concept of a per-CPU
      interrupt (PPI), which allows us to support per-VCPU event channels.
      Therefore there is no need for a vector callback on ARM.
      
      Xen is already using a free PPI to notify the guest VCPU of an event.
      Furthermore, the Xen initialization code in Linux (see
      arch/arm/xen/enlighten.c) already correctly requests a per-CPU IRQ.
      
      Introduce a new helper, xen_support_evtchn_rebind, to let the
      architecture decide whether rebinding an event channel is supported
      (see the sketch after this entry). It always returns true on ARM and
      keeps the same behavior on x86.
      
      This also allows us to drop the usage of xen_have_vector_callback
      entirely in the ARM code.
      Signed-off-by: Julien Grall <julien.grall@citrix.com>
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
      4a5b6946
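
      A sketch of the helper's two flavors (the exact x86 expression is an
      assumption based on the description above):

          /* ARM: per-VCPU PPIs make rebinding always possible */
          static inline bool xen_support_evtchn_rebind(void)
          {
                  return true;
          }

          /* x86 keeps the existing gate, roughly:
           *     return !xen_hvm_domain() || xen_have_vector_callback;
           */
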
  8. 18 August 2015 (3 commits)
  9. 12 August 2015 (2 commits)
  10. 04 August 2015 (1 commit)
  11. 03 August 2015 (3 commits)
    • ARM: migrate to common PSCI client code · be120397
      Committed by Mark Rutland
      Now that the common PSCI client code has been factored out to
      drivers/firmware, and made safe for 32-bit use, move the 32-bit ARM code
      over to it. This results in a moderate reduction of duplicated lines,
      and will prevent further duplication as the PSCI client code is updated
      for PSCI 1.0 and beyond.
      
      The two legacy platform users of the PSCI invocation code are updated
      to account for interface changes. In both cases the power state
      parameter (which is constant) is now generated using macros (sketched
      after this entry), so that the pack/unpack logic can be killed in
      preparation for PSCI 1.0 power state changes.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Rob Herring <robh@kernel.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Ashwin Chaugule <ashwin.chaugule@linaro.org>
      Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Cc: Russell King <rmk+kernel@arm.linux.org.uk>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      be120397
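
      The "generated using macros" part amounts to packing the constant
      power_state fields at build time. A sketch using the PSCI 0.2 layout
      (StateID in bits [15:0], StateType in bit 16, AffinityLevel in bits
      [25:24]; the macro names here are illustrative):

          #define PSCI_POWER_STATE(id, type, aff_lvl)        \
                  (((id) & 0xffff)          |                \
                   (((type) & 0x1) << 16)   |                \
                   (((aff_lvl) & 0x3) << 24))

          /* e.g. a constant standby (StateType 0) request at affinity level 0 */
          #define MY_PLAT_PSCI_STANDBY    PSCI_POWER_STATE(0, 0, 0)
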
    • locking/static_keys: Add a new static_key interface · 11276d53
      Committed by Peter Zijlstra
      There are various problems and shortcomings with the current
      static_key interface:
      
       - static_key_{true,false}() read like a branch depending on the
         key's current value, instead of the actual likely/unlikely branch
         that depends on the init value.
      
       - static_key_{true,false}() are, as stated above, tied to the
         static_key init values STATIC_KEY_INIT_{TRUE,FALSE}.
      
       - we're limited to the 2 (out of 4) possible options that compile to
         a default NOP because that's what our arch_static_branch() assembly
         emits.
      
      So provide a new static_key interface:
      
        DEFINE_STATIC_KEY_TRUE(name);
        DEFINE_STATIC_KEY_FALSE(name);
      
      These define keys of two different types, each with an initial
      true/false value (a usage sketch follows this entry).
      
      Then allow:
      
         static_branch_likely()
         static_branch_unlikely()
      
      to take a key of either type and emit the right instruction for the
      case.
      
      This means adding a second arch_static_branch_jump() assembly helper
      which emits a JMP by default.
      
      In order to determine the right instruction for the right state,
      encode the branch type in the LSB of jump_entry::key.
      
      This is the final step in removing the naming confusion that has led to
      a stream of avoidable bugs such as:
      
        a833581e ("x86, perf: Fix static_key bug in load_mm_cr4()")
      
      ... but it also allows new static key combinations that will give us
      performance enhancements in the subsequent patches.
      
      Tested-by: Rabin Vincent <rabin@rab.in> # arm
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au> # ppc
      Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com> # s390
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      11276d53
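
      A minimal usage sketch of the new interface (the key and the
      slow-path function are made-up names):

          #include <linux/jump_label.h>

          extern void debug_slow_path(void);

          DEFINE_STATIC_KEY_FALSE(use_debug_path);

          void hot_function(void)
          {
                  /* emits a NOP while the key is false; the branch is
                   * patched to a JMP once the key is enabled at runtime */
                  if (static_branch_unlikely(&use_debug_path))
                          debug_slow_path();
          }

      Flipping the key later (e.g. with static_branch_enable()) repatches
      every such site.
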
    • locking, arch: use WRITE_ONCE()/READ_ONCE() in smp_store_release()/smp_load_acquire() · 76695af2
      Committed by Andrey Konovalov
      Replace the ACCESS_ONCE() macro in smp_store_release() and
      smp_load_acquire() with WRITE_ONCE() and READ_ONCE() on x86, arm,
      arm64, ia64, metag, mips, powerpc, s390, sparc and asm-generic, since
      ACCESS_ONCE() does not work reliably on non-scalar types (the
      resulting asm-generic shape is sketched after this entry).
      
      WRITE_ONCE() and READ_ONCE() were introduced in the following commits:
      
        230fa253 ("kernel: Provide READ_ONCE and ASSIGN_ONCE")
        43239cbe ("kernel: Change ASSIGN_ONCE(val, x) to WRITE_ONCE(x, val)")
      Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Davidlohr Bueso <dbueso@suse.de>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
      Acked-by: Ralf Baechle <ralf@linux-mips.org>
      Cc: Alexander Duyck <alexander.h.duyck@redhat.com>
      Cc: Andre Przywara <andre.przywara@arm.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: linux-arch@vger.kernel.org
      Link: http://lkml.kernel.org/r/1438528264-714-1-git-send-email-andreyknvl@google.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      76695af2
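
      For reference, the asm-generic shape after the change is roughly:

          #define smp_store_release(p, v)                         \
          do {                                                    \
                  compiletime_assert_atomic_type(*p);             \
                  smp_mb();                                       \
                  WRITE_ONCE(*p, v);                              \
          } while (0)

          #define smp_load_acquire(p)                             \
          ({                                                      \
                  typeof(*p) ___p1 = READ_ONCE(*p);               \
                  compiletime_assert_atomic_type(*p);             \
                  smp_mb();                                       \
                  ___p1;                                          \
          })
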
  12. 02 August 2015 (1 commit)
    • ARM: reduce visibility of dmac_* functions · 1234e3fd
      Committed by Russell King
      The dmac_* functions are private to the ARM DMA API implementation, and
      should not be used by drivers.  In order to discourage their use, remove
      their prototypes and macros from asm/*.h.
      
      We have to leave dmac_flush_range() behind, as the Exynos and MSM
      IOMMU code still uses it; once those sites are fixed, it can be moved
      as well.
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      1234e3fd
  13. 01 August 2015 (2 commits)
    • ARM: 8407/1: switch_to: Remove finish_arch_switch · 9ac87c5a
      Committed by Will Deacon
      Fold finish_arch_switch() into switch_to(), in preparation for the
      removal of the finish_arch_switch call from core sched code.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      9ac87c5a
    • ARM: 8392/3: smp: Only expose /sys/.../cpuX/online if hotpluggable · 787047ee
      Committed by Stephen Boyd
      Writes to /sys/.../cpuX/online fail if we determine the platform
      doesn't support hotplug for that CPU. Furthermore, if the cpu_die op
      isn't specified, the system hangs when we try to offline a CPU and it
      comes right back online unexpectedly. Let's figure this out before we
      create the sysfs nodes, so that the online file doesn't even exist if
      it isn't (at least sometimes) possible to hotplug the CPU.
      
      Add a new 'cpu_can_disable' op and repoint all 'cpu_disable'
      implementations at it, because all implementers use the op to
      indicate statically whether a CPU can be hotplugged (a sketch follows
      this entry). With PSCI we may need to add a 'cpu_disable' op so that
      the secure OS can be migrated off the CPU we're trying to hotplug. In
      that case, the 'cpu_can_disable' op will indicate that all CPUs are
      hotpluggable by returning true, but the 'cpu_disable' op will make a
      PSCI migration call and occasionally fail, denying the hotplug of a
      CPU. This shouldn't be any worse than x86, where we may indicate that
      all CPUs are hotpluggable but occasionally can't offline a CPU
      because check_irq_vectors_for_cpu_disable() fails to find a CPU to
      move vectors to.
      
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Nicolas Pitre <nico@linaro.org>
      Cc: Dave Martin <Dave.Martin@arm.com>
      Acked-by: Simon Horman <horms@verge.net.au> [shmobile portion]
      Tested-by: Simon Horman <horms@verge.net.au>
      Cc: Magnus Damm <magnus.damm@gmail.com>
      Cc: <linux-sh@vger.kernel.org>
      Tested-by: Tyler Baker <tyler.baker@linaro.org>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      787047ee
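
      A sketch of a static 'cpu_can_disable' implementation (the SoC and
      its policy are invented for illustration):

          #include <linux/types.h>
          #include <asm/smp.h>

          static bool my_soc_cpu_can_disable(unsigned int cpu)
          {
                  /* static policy: CPU0 handles wakeups, keep it online */
                  return cpu != 0;
          }

          static const struct smp_operations my_soc_smp_ops __initconst = {
                  /* .smp_boot_secondary and friends omitted */
                  .cpu_can_disable        = my_soc_cpu_can_disable,
          };
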
  14. 31 July 2015 (1 commit)
    • arm: perf: factor arm_pmu core out to drivers · fa8ad788
      Committed by Mark Rutland
      To enable sharing of the arm_pmu code with arm64, this patch factors it
      out to drivers/perf/. A new drivers/perf directory is added for
      performance monitor drivers to live under.
      
      MAINTAINERS is updated accordingly. Files added previously without a
      corresponding MAINTAINERS update (perf_regs.c, perf_callchain.c, and
      perf_event.h) are also added.
      
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Linus Walleij <linus.walleij@linaro.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      [will: augmented Kconfig help slightly]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      fa8ad788
  15. 27 July 2015 (2 commits)
  16. 25 July 2015 (2 commits)
    • ARM: add soc memory barrier extension · 4e1f8a6f
      Committed by Russell King
      Add an extension to the heavy barrier code to allow a SoC-specific
      memory barrier function to be provided (a sketch follows this entry).
      This is needed for platforms where the interconnect has weak ordering
      and thus needs assistance to ensure that memory writes are properly
      visible, in the correct order, to other parts of the system.
      Acked-by: Tony Lindgren <tony@atomide.com>
      Acked-by: Richard Woodruff <r-woodruff2@ti.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      4e1f8a6f
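
      The extension boils down to a single hook that the heavy barrier
      calls when set (a sketch; arm_heavy_mb() is the out-of-line barrier
      from the companion patch below, and the hook name and details here
      are illustrative):

          #include <asm/outercache.h>

          /* a SoC with a weakly ordered interconnect sets this at boot */
          void (*soc_mb)(void);

          void arm_heavy_mb(void)
          {
                  if (outer_cache.sync)
                          outer_cache.sync();
                  if (soc_mb)
                          soc_mb();
          }
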
    • ARM: move heavy barrier support out of line · f8130906
      Committed by Russell King
      The existing memory barrier macro causes a significant amount of code
      to be inserted inline at every call site.  For example, in
      gpio_set_irq_type(), we have this for mb():
      
      c0344c08:       f57ff04e        dsb     st
      c0344c0c:       e59f8190        ldr     r8, [pc, #400]  ; c0344da4 <gpio_set_irq_type+0x230>
      c0344c10:       e3590004        cmp     r9, #4
      c0344c14:       e5983014        ldr     r3, [r8, #20]
      c0344c18:       0a000054        beq     c0344d70 <gpio_set_irq_type+0x1fc>
      c0344c1c:       e3530000        cmp     r3, #0
      c0344c20:       0a000004        beq     c0344c38 <gpio_set_irq_type+0xc4>
      c0344c24:       e50b2030        str     r2, [fp, #-48]  ; 0xffffffd0
      c0344c28:       e50bc034        str     ip, [fp, #-52]  ; 0xffffffcc
      c0344c2c:       e12fff33        blx     r3
      c0344c30:       e51bc034        ldr     ip, [fp, #-52]  ; 0xffffffcc
      c0344c34:       e51b2030        ldr     r2, [fp, #-48]  ; 0xffffffd0
      c0344c38:       e5963004        ldr     r3, [r6, #4]
      
      Moving the outer_cache_sync() call out of line reduces the impact of
      the barrier:
      
      c0344968:       f57ff04e        dsb     st
      c034496c:       e35a0004        cmp     sl, #4
      c0344970:       e50b2030        str     r2, [fp, #-48]  ; 0xffffffd0
      c0344974:       0a000044        beq     c0344a8c <gpio_set_irq_type+0x1b8>
      c0344978:       ebf363dd        bl      c001d8f4 <arm_heavy_mb>
      c034497c:       e5953004        ldr     r3, [r5, #4]
      
      This should reduce the cache footprint of this code.  Overall, this
      results in a reduction of around 20K in the kernel size:
      
          text    data      bss      dec     hex filename
      10773970  667392 10369656 21811018 14ccf4a ../build/imx6/vmlinux-old
      10754219  667392 10369656 21791267 14c8223 ../build/imx6/vmlinux-new
      
      Another advantage of this approach is that we can finally resolve the
      issue of SoCs which have their own memory barrier requirements within
      multiplatform kernels (such as OMAP). Here, the bus interconnects
      need additional handling to ensure that writes become visible in the
      correct order (e.g. between dma_map() operations, writes to DMA
      coherent memory, and MMIO accesses).
      Acked-by: Tony Lindgren <tony@atomide.com>
      Acked-by: Richard Woodruff <r-woodruff2@ti.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      f8130906
  17. 18 July 2015 (1 commit)
  18. 17 July 2015 (1 commit)