1. 16 January 2015, 3 commits
  2. 15 January 2015, 1 commit
  3. 11 January 2015, 1 commit
    • KVM: arm/arm64: vgic: add init entry to VGIC KVM device · 065c0034
      Authored by Eric Auger
      Since the advent of VGIC dynamic initialization, the VGIC is
      initialized quite late, on the first vcpu run or "on demand" when
      injecting an IRQ or when the guest sets its registers.
      
      This initialization could be triggered explicitly much earlier by
      user space, as soon as it has provided the required dimensioning
      parameters.
      
      This patch adds a new entry to the VGIC KVM device that allows
      the user to manually request the VGIC init:
      - a new KVM_DEV_ARM_VGIC_GRP_CTRL group is introduced.
      - Its first attribute is KVM_DEV_ARM_VGIC_CTRL_INIT
      
      The rationale behind introducing a group is to be able to add other
      controls later on, if needed.
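      
      A minimal user-space sketch of requesting the init through this new
      attribute, assuming the VGIC device fd was already obtained via
      KVM_CREATE_DEVICE and the dimensioning attributes were set beforehand
      (the new constants come from the arm KVM uapi headers; device creation
      and error handling are elided):
      
        #include <sys/ioctl.h>
        #include <linux/kvm.h>
        
        /* vgic_fd: fd returned by KVM_CREATE_DEVICE for the VGIC device */
        static int vgic_request_init(int vgic_fd)
        {
                struct kvm_device_attr attr = {
                        .group = KVM_DEV_ARM_VGIC_GRP_CTRL,
                        .attr  = KVM_DEV_ARM_VGIC_CTRL_INIT,
                };
        
                /* Initialize the VGIC now instead of waiting for the first vcpu run */
                return ioctl(vgic_fd, KVM_SET_DEVICE_ATTR, &attr);
        }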
      Signed-off-by: Eric Auger <eric.auger@linaro.org>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
  4. 24 December 2014, 3 commits
    • arm64: mm: Add pgd_page to support RCU fast_gup · 5d96e0cb
      Authored by Jungseok Lee
      This patch adds a pgd_page definition in order to keep supporting
      the HAVE_GENERIC_RCU_GUP configuration. In addition, it changes the
      pud_page expression to align with pmd_page for readability.
      
      Introducing pgd_page resolves the following build breakage with the
      4KB page + 4-level page table combination.
      
      mm/gup.c: In function 'gup_huge_pgd':
      mm/gup.c:889:2: error: implicit declaration of function 'pgd_page' [-Werror=implicit-function-declaration]
        head = pgd_page(orig);
        ^
      mm/gup.c:889:7: warning: assignment makes pointer from integer without a cast
        head = pgd_page(orig);
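      
      One plausible shape for the added definition, modeled on the existing
      pud_page/pmd_page macros (the exact masking is an assumption, not a
      quote of the patch):
      
        /* arch/arm64/include/asm/pgtable.h (sketch) */
        #define pgd_page(pgd)	pfn_to_page(__phys_to_pfn(pgd_val(pgd) & PHYS_MASK))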
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Steve Capper <steve.capper@linaro.org>
      Signed-off-by: Jungseok Lee <jungseoklee85@gmail.com>
      [catalin.marinas@arm.com: remove duplicate pmd_page definition]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: defconfig: defconfig update for 3.19 · f7bf130e
      Authored by Will Deacon
      The usual defconfig tweaks, this time:
      
        - FHANDLE and AUTOFS4_FS to keep systemd happy
        - PID_NS, QUOTA and KEYS to keep LTP happy
        - Disable DEBUG_PREEMPT, as this *really* hurts performance
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: kernel: fix __cpu_suspend mm switch on warm-boot · f43c2718
      Authored by Lorenzo Pieralisi
      On arm64 the TTBR0_EL1 register is set either to the reserved TTBR0
      page tables on boot or to the active_mm mappings belonging to user
      space processes; it must never be set to the swapper_pg_dir page
      table mappings.
      
      When a CPU is booted its active_mm is set to init_mm even though its
      TTBR0_EL1 points at the reserved TTBR0 page mappings. This implies
      that when __cpu_suspend is triggered the active_mm can point at
      init_mm even if the current TTBR0_EL1 register contains the reserved
      TTBR0_EL1 mappings.
      
      Therefore, the mm save and restore executed in __cpu_suspend might
      turn out to be erroneous: if current->active_mm corresponds to
      init_mm, on resume from low power the code ends up restoring in
      TTBR0_EL1 the init_mm mappings, which are global and can cause
      speculatively loaded TLB entries to be propagated to user space.
      
      This patch fixes the issue by checking the active_mm pointer before
      restoring the TTBR0 mappings. If the current active_mm == &init_mm,
      the code sets the TTBR0_EL1 to the reserved TTBR0 mapping instead of
      switching back to the active_mm, which is the expected behaviour
      corresponding to the TTBR0_EL1 settings when __cpu_suspend was entered.
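      
      A sketch of that check, using the existing arm64 helpers (the
      surrounding __cpu_suspend/resume context is elided):
      
        struct mm_struct *mm = current->active_mm;
        
        if (mm == &init_mm)
                cpu_set_reserved_ttbr0();	/* keep the reserved TTBR0 mapping */
        else
                cpu_switch_mm(mm->pgd, mm);	/* restore the user mappings */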
      
      Fixes: 95322526 ("arm64: kernel: cpu_{suspend/resume} implementation")
      Cc: <stable@vger.kernel.org> # 3.14+: 18ab7db6
      Cc: <stable@vger.kernel.org> # 3.14+: 714f5992
      Cc: <stable@vger.kernel.org> # 3.14+: c3684fbb
      Cc: <stable@vger.kernel.org> # 3.14+
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  5. 22 December 2014, 1 commit
  6. 18 December 2014, 1 commit
  7. 14 December 2014, 1 commit
  8. 13 December 2014, 4 commits
  9. 12 December 2014, 1 commit
    • arch: Add lightweight memory barriers dma_rmb() and dma_wmb() · 1077fa36
      Authored by Alexander Duyck
      There are a number of situations where the mandatory barriers rmb() and
      wmb() are used to order memory/memory operations in device drivers,
      and those barriers are much heavier than they actually need to be.  For
      example, in the case of PowerPC, wmb() calls the heavy-weight sync
      instruction when for coherent memory operations all that is really needed
      is an lwsync or eieio instruction.
      
      This commit adds a coherent only version of the mandatory memory barriers
      rmb() and wmb().  In most cases this should result in the barrier being the
      same as the SMP barriers for the SMP case, however in some cases we use a
      barrier that is somewhere in between rmb() and smp_rmb().  For example on
      ARM the rmb barriers break down as follows:
      
        Barrier   Call     Explanation
        --------- -------- ----------------------------------
        rmb()     dsb()    Data synchronization barrier - system
        dma_rmb() dmb(osh) Data memory barrier - outer shareable
        smp_rmb() dmb(ish) Data memory barrier - inner shareable
      
      These new barriers are not as safe as the standard rmb() and wmb().
      Specifically they do not guarantee ordering between coherent and incoherent
      memories.  The primary use case for these would be to enforce ordering of
      reads and writes when accessing coherent memory that is shared between the
      CPU and a device.
      
      It may also be noted that there is no dma_mb().  Most architectures don't
      provide a good mechanism for performing a coherent only full barrier without
      resorting to the same mechanism used in mb().  As such there isn't much to
      be gained in trying to define such a function.
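      
      An illustrative driver-style use of the new barriers, assuming a
      descriptor ring in coherent DMA memory shared with a device (desc,
      DESC_OWNED_BY_HW and process() are hypothetical names):
      
        /* Publish a descriptor before handing it to the hardware. */
        desc->addr  = cpu_to_le64(dma_addr);
        desc->len   = cpu_to_le32(len);
        dma_wmb();			/* descriptor contents visible before ownership flips */
        desc->flags = cpu_to_le32(DESC_OWNED_BY_HW);
        
        /* On completion, check ownership before trusting the payload. */
        if (!(le32_to_cpu(desc->flags) & DESC_OWNED_BY_HW)) {
                dma_rmb();		/* read flags before the rest of the descriptor */
                process(le32_to_cpu(desc->len));
        }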
      
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Cc: Michael Ellerman <michael@ellerman.id.au>
      Cc: Michael Neuling <mikey@neuling.org>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: David Miller <davem@davemloft.net>
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  10. 11 December 2014, 5 commits
    • arm64: mm: dump: don't skip final region · fb59d007
      Authored by Mark Rutland
      If the final page table entry we walk is a valid mapping, the page table
      dumping code will not log the region this entry is part of, as the final
      note_page call in ptdump_show will trigger an early return. Luckily this
      isn't seen on contemporary systems as they typically don't have enough
      RAM to extend the linear mapping right to the end of the address space.
      
      In note_page, we log a region when we reach its end (i.e. we hit an
      entry immediately afterwards which has different prot bits or is
      invalid). The final entry has no subsequent entry, so we will not log
      this immediately. We try to cater for this with a subsequent call to
      note_page in ptdump_show, but this returns early as 0 < LOWEST_ADDR, and
      hence we will skip a valid mapping if it spans to the final entry we
      note.
      
      Unlike 32-bit ARM, the pgd with the kernel mapping is never shared with
      user mappings, so we do not need the check to ensure we don't log user
      page tables. Due to the way addr is constructed in the walk_* functions,
      it can never be less than LOWEST_ADDR when walking the page tables, so
      it is not necessary to avoid dereferencing invalid table addresses. The
      existing checks for st->current_prot and st->marker[1].start_address are
      sufficient to ensure we will not print and/or dereference garbage when
      trying to log information.
      
      This patch removes the unnecessary check against LOWEST_ADDR, ensuring
      we log all regions in the kernel page table, including those which span
      right to the end of the address space.
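      
      A sketch of the pre-patch shape being described (the function layout is
      assumed from the text, not quoted from the file):
      
        static int ptdump_show(struct seq_file *m, void *v)
        {
                struct pg_state st = { .seq = m, .marker = address_markers };
        
                walk_pgd(&st, &init_mm, LOWEST_ADDR);
        
                /* Flush the final region; before this patch note_page()
                 * returned early here because 0 < LOWEST_ADDR. */
                note_page(&st, 0, 0, 0);
                return 0;
        }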
      
      Cc: Kees Cook <keescook@chromium.org>
      Acked-by: Laura Abbott <lauraa@codeaurora.org>
      Acked-by: Steve Capper <steve.capper@linaro.org>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: mm: dump: fix shift warning · 35545f0c
      Authored by Mark Rutland
      When building with 48-bit VAs, it's possible to get the following
      warning when building the arm64 page table dumping code:
      
      arch/arm64/mm/dump.c: In function ‘walk_pgd’:
      arch/arm64/mm/dump.c:266:2: warning: right shift count >= width of type
        pgd_t *pgd = pgd_offset(mm, 0);
        ^
      
      As pgd_offset is a macro and the second argument is not cast to any
      particular type, the zero will be given integer type by the compiler.
      As pgd_offset passes the argument to pgd_index, we then try to shift
      the 32-bit integer by at least 39 bits (for 4k pages).
      
      Elsewhere pgd_offset is passed a second argument of unsigned long
      type, so let's do the same here by passing '0UL' rather than '0'.
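      
      A standalone illustration of the warning class, using a stand-in for
      pgd_index() with 4K pages and a 48-bit VA (INDEX and the shift count
      are assumptions made for the example):
      
        #include <stdio.h>
        
        #define PGDIR_SHIFT	39
        #define INDEX(addr)	(((addr) >> PGDIR_SHIFT) & 0x1ff)
        
        int main(void)
        {
                /* INDEX(0) would shift a 32-bit int by 39 bits and trigger
                 * "right shift count >= width of type"; 0UL keeps the
                 * shift well-defined on a 64-bit unsigned long. */
                printf("%lu\n", (unsigned long)INDEX(0UL));
                return 0;
        }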
      
      Cc: Kees Cook <keescook@chromium.org>
      Acked-by: Laura Abbott <lauraa@codeaurora.org>
      Acked-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: psci: Fix build breakage without PM_SLEEP · e5e62d47
      Authored by Krzysztof Kozlowski
      Fix a defconfig build failure when PM_SLEEP is disabled (e.g. by
      disabling SUSPEND) and CPU_IDLE is enabled:
      
      arch/arm64/kernel/psci.c:543:2: error: unknown field ‘cpu_suspend’ specified in initializer
        .cpu_suspend = cpu_psci_cpu_suspend,
        ^
      arch/arm64/kernel/psci.c:543:2: warning: initialization from incompatible pointer type [enabled by default]
      arch/arm64/kernel/psci.c:543:2: warning: (near initialization for ‘cpu_psci_ops.cpu_prepare’) [enabled by default]
      make[1]: *** [arch/arm64/kernel/psci.o] Error 1
      
      The cpu_operations.cpu_suspend field exists only if ARM64_CPU_SUSPEND is
      defined, not CPU_IDLE.
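      
      A sketch of the guard this implies, keying the initializer on the
      config symbol that actually controls the field (other callbacks
      trimmed; not a quote of the patch):
      
        const struct cpu_operations cpu_psci_ops = {
                .name		= "psci",
        #ifdef CONFIG_ARM64_CPU_SUSPEND
                .cpu_suspend	= cpu_psci_cpu_suspend,
        #endif
                /* ... remaining callbacks unchanged ... */
        };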
      Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • mm: fix huge zero page accounting in smaps report · c164e038
      Authored by Kirill A. Shutemov
      Like the small zero page, the huge zero page should not be accounted
      in the smaps report as a normal page.
      
      For small pages we rely on vm_normal_page() to filter out the zero
      page, but vm_normal_page() is not designed to handle pmds.  We only
      get here due to a hackish cast from pmd to pte in smaps_pte_range();
      the pte and pmd formats are not necessarily compatible on each and
      every architecture.
      
      Let's add a separate codepath to handle pmds.  follow_trans_huge_pmd()
      will detect the huge zero page for us.
      
      We need a pmd_dirty() helper to do this properly.  The patch adds it
      to the THP-enabled architectures which don't yet have one.
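      
      One plausible shape for such a helper on an architecture whose huge-pmd
      layout reuses the pte bits (an assumption, not a quote of the patch):
      
        /* arch/arm64/include/asm/pgtable.h (sketch) */
        #define pmd_dirty(pmd)	pte_dirty(pmd_pte(pmd))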
      
      [akpm@linux-foundation.org: use do_div to fix 32-bit build]
      Signed-off-by: N"Kirill A. Shutemov" <kirill@shutemov.name>
      Reported-by: NFengguang Wu <fengguang.wu@intel.com>
      Tested-by: NFengwei Yin <yfw.kernel@gmail.com>
      Signed-off-by: NAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: NLinus Torvalds <torvalds@linux-foundation.org>
    • net, lib: kill arch_fast_hash library bits · 0cb6c969
      Authored by Daniel Borkmann
      As there are now no remaining users of arch_fast_hash(), let's kill
      it entirely.
      
      This basically reverts commit 71ae8aac ("lib: introduce arch
      optimized hash library") and the follow-up work, for example commit
      23721754 ("lib: hash: follow-up fixups for arch hash"),
      commit e3fec2f7 ("lib: Add missing arch generic-y entries for
      asm-generic/hash.h") and last but not least commit 6a02652d
      ("perf tools: Fix include for non x86 architectures").
      
      Cc: Francesco Fusco <fusco@ntop.org>
      Cc: Thomas Graf <tgraf@suug.ch>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  11. 05 December 2014, 3 commits
  12. 04 December 2014, 8 commits
  13. 03 December 2014, 1 commit
  14. 01 December 2014, 1 commit
  15. 29 November 2014, 1 commit
  16. 28 November 2014, 5 commits