1. 29 May 2014, 3 commits
  2. 23 May 2014, 4 commits
  3. 17 May 2014, 5 commits
  4. 12 May 2014, 4 commits
  5. 10 May 2014, 2 commits
    • arm64: head: fix cache flushing and barriers in set_cpu_boot_mode_flag · d0488597
      Committed by Will Deacon
      set_cpu_boot_mode_flag is used to identify which exception levels are
      encountered across the system by CPUs trying to enter the kernel. The
      basic algorithm is: if a CPU is booting at EL2, it will set a flag at
      an offset of #4 from __boot_cpu_mode, a cacheline-aligned variable.
      Otherwise, a flag is set at an offset of zero into the same cacheline.
      This enables us to check that all CPUs booted at the same exception
      level.
      
      This cacheline is written with the stage-1 MMU off (that is, via a
      strongly-ordered mapping) and will bypass any clean lines in the cache,
      leading to potential coherence problems when the variable is later
      checked via the normal, cacheable mapping of the kernel image.
      
      This patch reworks the broken flushing code so that we:
      
        (1) Use a DMB to order the strongly-ordered write of the cacheline
            against the subsequent cache-maintenance operation (by-VA
            operations only hazard against normal, cacheable accesses).
      
        (2) Use a single dc ivac instruction to invalidate any clean lines
            containing a stale copy of the line after it has been updated.
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      d0488597
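      A minimal C sketch (not part of the commit) of the write/DMB/invalidate
      sequence described above; the real fix is arm64 assembly in head.S, and
      the helper name and flag layout here are illustrative only:

      	#include <linux/types.h>
      	#include <asm/virt.h>		/* BOOT_CPU_MODE_EL2 */

      	/* Write the boot-mode flag with the MMU off, then make sure any stale
      	 * clean line is gone before the cacheable mapping of the kernel reads it. */
      	static void sketch_set_cpu_boot_mode_flag(u32 *boot_cpu_mode, u32 mode)
      	{
      		/* EL2 boots record themselves at offset #4, EL1 boots at offset 0 */
      		u32 *flag = boot_cpu_mode + (mode == BOOT_CPU_MODE_EL2 ? 1 : 0);

      		*flag = mode;					/* strongly-ordered write */
      		asm volatile("dmb sy" ::: "memory");		/* order write vs cache op */
      		asm volatile("dc ivac, %0" :: "r" (flag) : "memory"); /* invalidate stale clean line */
      	}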
    • arm64: barriers: make use of barrier options with explicit barriers · 98f7685e
      Committed by Will Deacon
      When calling our low-level barrier macros directly, we can often get away
      with more relaxed behaviour than the default "all accesses, full system"
      option.
      
      This patch updates the users of dsb() to specify the option which they
      actually require.
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      98f7685e
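      A minimal sketch, assuming the post-patch macro shape, of what "barrier
      options" means in practice (the option string is pasted straight into the
      DSB instruction):

      	#define dsb(opt)	asm volatile("dsb " #opt : : : "memory")

      	static void sketch_barrier_usage(void)
      	{
      		dsb(sy);	/* full system, all accesses: the old blanket default */
      		dsb(ish);	/* inner-shareable domain only, loads and stores */
      		dsb(ishst);	/* inner-shareable domain, stores only */
      	}

      Callers that only need to order stores within the inner-shareable domain,
      for example, no longer pay for a full-system barrier.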
  6. 09 May 2014, 6 commits
  7. 08 May 2014, 3 commits
    • arm64: add support for kernel mode NEON in interrupt context · 190f1ca8
      Committed by Ard Biesheuvel
      This patch modifies kernel_neon_begin() and kernel_neon_end(), so
      they may be called from any context. To address the case where only
      a couple of registers are needed, kernel_neon_begin_partial(u32) is
      introduced which takes as a parameter the number of bottom 'n' NEON
      q-registers required. To mark the end of such a partial section, the
      regular kernel_neon_end() should be used.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      190f1ca8
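      A hypothetical caller sketch (the header name is an assumption; the
      begin/end functions are the ones named above): only the bottom four
      q-registers are needed, so a partial section avoids saving the whole
      register file:

      	#include <asm/neon.h>

      	static void sketch_neon_user(void)
      	{
      		kernel_neon_begin_partial(4);	/* preserve only q0-q3 */
      		/* ... NEON code restricted to q0-q3 ... */
      		kernel_neon_end();		/* regular end call closes the partial section */
      	}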
    • arm64: defer reloading a task's FPSIMD state to userland resume · 005f78cd
      Committed by Ard Biesheuvel
      If a task gets scheduled out and back in again and nothing has touched
      its FPSIMD state in the meantime, there is really no reason to reload
      it from memory. Similarly, repeated calls to kernel_neon_begin() and
      kernel_neon_end() will preserve and restore the FPSIMD state every time.
      
      This patch defers the FPSIMD state restore to the last possible moment,
      i.e., right before the task returns to userland. If a task does not return to
      userland at all (for any reason), the existing FPSIMD state is preserved
      and may be reused by the owning task if it gets scheduled in again on the
      same CPU.
      
      This patch adds two more functions to abstract away from straight FPSIMD
      register file saves and restores:
      - fpsimd_restore_current_state -> ensure current's FPSIMD state is loaded
      - fpsimd_flush_task_state -> invalidate live copies of a task's FPSIMD state
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      005f78cd
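      An illustrative, simplified sketch of the deferral pattern (the per-thread
      flag name is an assumption about this series, the surrounding function is
      made up): the context-switch path no longer reloads anything, and the
      return-to-userland path reloads at most once:

      	#include <linux/thread_info.h>
      	#include <asm/fpsimd.h>

      	static void sketch_return_to_user(void)
      	{
      		/* set by the scheduler when the registers may not hold
      		 * current's state; the reload clears it again */
      		if (test_thread_flag(TIF_FOREIGN_FPSTATE))
      			fpsimd_restore_current_state();
      	}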
    • arm64: add abstractions for FPSIMD state manipulation · c51f9269
      Committed by Ard Biesheuvel
      There are two tacit assumptions in the FPSIMD handling code that will no longer
      hold after the next patch that optimizes away some FPSIMD state restores:
      . the FPSIMD registers of this CPU contain the userland FPSIMD state of
        task 'current';
      . when switching to a task, its FPSIMD state will always be restored from
        memory.
      
      This patch adds the following functions to abstract away from straight FPSIMD
      register file saves and restores:
      - fpsimd_preserve_current_state -> ensure current's FPSIMD state is saved
      - fpsimd_update_current_state -> replace current's FPSIMD state
      
      Where necessary, the signal handling and fork code are updated to use the above
      wrappers instead of poking into the FPSIMD registers directly.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      c51f9269
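      A hedged sketch of how the signal path might use the two wrappers (only
      the wrapper names come from the commit; the struct name and the
      surrounding functions are assumptions):

      	#include <asm/fpsimd.h>

      	static void sketch_setup_sigframe(struct fpsimd_state *frame)
      	{
      		fpsimd_preserve_current_state();	/* make the memory copy current first */
      		/* ... then copy the task's saved FPSIMD state into *frame ... */
      	}

      	static void sketch_restore_sigframe(struct fpsimd_state *frame)
      	{
      		/* ... read the state out of the user signal frame ... */
      		fpsimd_update_current_state(frame);	/* replace current's FPSIMD state */
      	}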
  8. 04 May 2014, 2 commits
    • arm64: Use bus notifiers to set per-device coherent DMA ops · 6ecba8eb
      Committed by Catalin Marinas
      Recently, the default DMA ops have been changed to non-coherent for
      alignment with 32-bit ARM platforms (and DT files). This patch adds bus
      notifiers to be able to set the coherent DMA ops (with no cache
      maintenance) for devices explicitly marked as coherent via the
      "dma-coherent" DT property.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      6ecba8eb
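      A hedged sketch of the mechanism (the notifier shape and the DT property
      come from the commit; the ops structure name and the helper used to
      install it are assumptions):

      	#include <linux/device.h>
      	#include <linux/dma-mapping.h>
      	#include <linux/notifier.h>
      	#include <linux/of.h>
      	#include <linux/platform_device.h>

      	static int sketch_dma_notifier(struct notifier_block *nb,
      				       unsigned long event, void *data)
      	{
      		struct device *dev = data;

      		if (event != BUS_NOTIFY_ADD_DEVICE)
      			return NOTIFY_DONE;

      		/* devices explicitly marked coherent get ops with no cache maintenance */
      		if (dev->of_node && of_property_read_bool(dev->of_node, "dma-coherent"))
      			set_dma_ops(dev, &coherent_swiotlb_dma_ops);

      		return NOTIFY_OK;
      	}

      	static struct notifier_block sketch_dma_nb = {
      		.notifier_call = sketch_dma_notifier,
      	};
      	/* registered early, e.g. bus_register_notifier(&platform_bus_type, &sketch_dma_nb) */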
    • arm64: fixmap: fix missing sub-page offset for earlyprintk · f774b7d1
      Committed by Marc Zyngier
      Commit d57c33c5 (add generic fixmap.h) added (among other
      similar things) set_fixmap_io to deal with early ioremap of devices.
      
      More recently, commit bf4b558e (arm64: add early_ioremap support)
      converted the arm64 earlyprintk to use set_fixmap_io. A side effect of
      this conversion is that my virtual machines have stopped booting when
      I pass "earlyprintk=uart8250-8bit,0x3f8" to the guest kernel.
      
      It turns out that the new earlyprintk code doesn't care at all about
      sub-page offsets, and just assumes that the earlyprintk device will
      be page-aligned. Obviously, that doesn't play well with the above example.
      
      Further investigation shows that set_fixmap_io uses __set_fixmap instead
      of __set_fixmap_offset. A fix is to introduce a set_fixmap_offset_io that
      uses the latter, and to remove the superfluous call to fix_to_virt
      (which only returns the value that set_fixmap_io has already given us).
      
      With this applied, my VMs are back in business. Tested on a Cortex-A57
      platform with kvmtool as platform emulation.
      
      Cc: Will Deacon <will.deacon@arm.com>
      Acked-by: Mark Salter <msalter@redhat.com>
      Acked-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      f774b7d1
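      A hedged sketch of the shape of the fix, assuming the generic fixmap
      helpers from fixmap.h (macro bodies paraphrased in the comments):

      	/* the _offset variant returns a mapping that keeps the sub-page bits
      	 * of the physical address, so a device at 0x3f8 is not rounded down
      	 * to its page */
      	#define set_fixmap_offset_io(idx, phys) \
      		__set_fixmap_offset(idx, phys, FIXMAP_PAGE_IO)

      	/* __set_fixmap_offset(idx, phys, flags) expands, roughly, to:
      	 *	__set_fixmap(idx, phys, flags);
      	 *	addr = fix_to_virt(idx) + ((phys) & (PAGE_SIZE - 1));
      	 * and earlyprintk then uses that returned address directly instead of
      	 * making a second, offset-less fix_to_virt() call. */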
  9. 26 April 2014, 1 commit
  10. 25 April 2014, 1 commit
  11. 08 April 2014, 2 commits
  12. 07 April 2014, 1 commit
    • arm64: fix !CONFIG_COMPAT build failures · ff268ff7
      Committed by Mark Salter
      Recent arm64 builds using CONFIG_ARM64_64K_PAGES are failing with:
      
        arch/arm64/kernel/perf_regs.c: In function ‘perf_reg_abi’:
        arch/arm64/kernel/perf_regs.c:41:2: error: implicit declaration of function ‘is_compat_thread’
      
        arch/arm64/kernel/perf_event.c:1398:2: error: unknown type name ‘compat_uptr_t’
      
      This is due to some recent arm64 perf commits with compat support:
      
        commit 23c7d70d:
          ARM64: perf: add support for frame pointer unwinding in compat mode
      
        commit 2ee0d7fd:
          ARM64: perf: add support for perf registers API
      
      Those patches make the arm64 kernel unbuildable if CONFIG_COMPAT is not
      defined and CONFIG_ARM64_64K_PAGES depends on !CONFIG_COMPAT. This patch
      allows the arm64 kernel to build with and without CONFIG_COMPAT.
      Signed-off-by: Mark Salter <msalter@redhat.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      ff268ff7
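      A hedged sketch of the general shape of such a fix (the exact hunks in the
      commit may differ): the compat-only helper is only referenced when
      CONFIG_COMPAT is set, with a 64-bit fallback otherwise:

      	#include <linux/compat.h>
      	#include <linux/perf_event.h>
      	#include <linux/sched.h>

      	static u64 sketch_perf_reg_abi(struct task_struct *task)
      	{
      	#ifdef CONFIG_COMPAT
      		if (is_compat_thread(task_thread_info(task)))
      			return PERF_SAMPLE_REGS_ABI_32;
      	#endif
      		return PERF_SAMPLE_REGS_ABI_64;
      	}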
  13. 05 April 2014, 1 commit
  14. 20 March 2014, 2 commits
    • arm64, debug-monitors: Fix CPU hotplug callback registration · 4b0b68af
      Committed by Srivatsa S. Bhat
      Subsystems that want to register CPU hotplug callbacks, as well as perform
      initialization for the CPUs that are already online, often do it as shown
      below:
      
      	get_online_cpus();
      
      	for_each_online_cpu(cpu)
      		init_cpu(cpu);
      
      	register_cpu_notifier(&foobar_cpu_notifier);
      
      	put_online_cpus();
      
      This is wrong, since it is prone to ABBA deadlocks involving the
      cpu_add_remove_lock and the cpu_hotplug.lock (when running concurrently
      with CPU hotplug operations).
      
      Instead, the correct and race-free way of performing the callback
      registration is:
      
      	cpu_notifier_register_begin();
      
      	for_each_online_cpu(cpu)
      		init_cpu(cpu);
      
      	/* Note the use of the double underscored version of the API */
      	__register_cpu_notifier(&foobar_cpu_notifier);
      
      	cpu_notifier_register_done();
      
      Fix the debug-monitors code in arm64 by using this latter form of callback
      registration.
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Russell King <rmk+kernel@arm.linux.org.uk>
      Cc: Ingo Molnar <mingo@kernel.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      4b0b68af
    • arm64, hw_breakpoint.c: Fix CPU hotplug callback registration · 3d0dc643
      Committed by Srivatsa S. Bhat
      Subsystems that want to register CPU hotplug callbacks, as well as perform
      initialization for the CPUs that are already online, often do it as shown
      below:
      
      	get_online_cpus();
      
      	for_each_online_cpu(cpu)
      		init_cpu(cpu);
      
      	register_cpu_notifier(&foobar_cpu_notifier);
      
      	put_online_cpus();
      
      This is wrong, since it is prone to ABBA deadlocks involving the
      cpu_add_remove_lock and the cpu_hotplug.lock (when running concurrently
      with CPU hotplug operations).
      
      Instead, the correct and race-free way of performing the callback
      registration is:
      
      	cpu_notifier_register_begin();
      
      	for_each_online_cpu(cpu)
      		init_cpu(cpu);
      
      	/* Note the use of the double underscored version of the API */
      	__register_cpu_notifier(&foobar_cpu_notifier);
      
      	cpu_notifier_register_done();
      
      Fix the hw-breakpoint code in arm64 by using this latter form of callback
      registration.
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Lorenzo Pieralisi <Lorenzo.Pieralisi@arm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      3d0dc643
  15. 13 March 2014, 3 commits