- 26 March 2015, 1 commit
-
Committed by Hanjun Guo

Introduce a new function, map_gicc_mpidr(), to allow MPIDRs to be obtained from the GICC structure introduced by ACPI 5.1. Since the MPIDR on ARM64 is 64-bit, phys_cpuid_t is typedef'd as u64.

The ARM architecture defines the MPIDR register as the CPU hardware identifier. This patch adds the code infrastructure to retrieve the MPIDR values from the ARM ACPI GICC structure in order to look up the kernel CPU hardware IDs required by the ACPI core code to identify CPUs.

CC: Rafael J. Wysocki <rjw@rjwysocki.net>
CC: Catalin Marinas <catalin.marinas@arm.com>
CC: Will Deacon <will.deacon@arm.com>
Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Tested-by: Yijing Wang <wangyijing@huawei.com>
Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
Tested-by: Jon Masters <jcm@redhat.com>
Tested-by: Timur Tabi <timur@codeaurora.org>
Tested-by: Robert Richter <rrichter@cavium.com>
Acked-by: Robert Richter <rrichter@cavium.com>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Grant Likely <grant.likely@linaro.org>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
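As a rough illustration of the shape such a lookup helper takes, here is a minimal C sketch assuming the ACPICA struct acpi_madt_generic_interrupt layout (the error-handling details are illustrative, not a copy of the actual patch):

	#include <linux/acpi.h>
	#include <linux/errno.h>

	static int map_gicc_mpidr(struct acpi_subtable_header *entry,
				  int device_declaration, u32 acpi_id,
				  phys_cpuid_t *mpidr)
	{
		struct acpi_madt_generic_interrupt *gicc =
			container_of(entry, struct acpi_madt_generic_interrupt, header);

		/* Ignore CPUs the firmware has not marked as enabled. */
		if (!(gicc->flags & ACPI_MADT_ENABLED))
			return -ENODEV;

		/*
		 * In the GIC interrupt model, logical processors are declared
		 * as Processor Device objects in the DSDT and matched by UID.
		 */
		if (device_declaration && gicc->uid == acpi_id) {
			*mpidr = gicc->arm_mpidr; /* 64-bit, hence phys_cpuid_t = u64 */
			return 0;
		}

		return -EINVAL;
	}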
-
- 25 March 2015, 7 commits
-
Committed by Hanjun Guo

The MADT contains the MPIDR information that is essential for SMP initialization. Parse the GIC CPU interface structures to get the MPIDR values and map them into cpu_logical_map(), and add each enabled CPU with a valid MPIDR to cpu_possible_map.

ACPI 5.1 has only two explicit methods to boot up SMP, PSCI and the Parking Protocol, but the Parking Protocol is currently specified only for ARMv7, so make PSCI the only SMP boot protocol until the ACPI spec or the Parking Protocol spec is updated. Parking Protocol patches for SMP boot will be sent upstream when the new version of the Parking Protocol is ready.

CC: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
CC: Catalin Marinas <catalin.marinas@arm.com>
CC: Will Deacon <will.deacon@arm.com>
CC: Mark Rutland <mark.rutland@arm.com>
Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Tested-by: Yijing Wang <wangyijing@huawei.com>
Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
Tested-by: Jon Masters <jcm@redhat.com>
Tested-by: Timur Tabi <timur@codeaurora.org>
Tested-by: Robert Richter <rrichter@cavium.com>
Acked-by: Robert Richter <rrichter@cavium.com>
Acked-by: Olof Johansson <olof@lixom.net>
Reviewed-by: Grant Likely <grant.likely@linaro.org>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Tomasz Nowicki <tomasz.nowicki@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Committed by Graeme Gregory

There are two flags: PSCI_COMPLIANT and PSCI_USE_HVC. When set, the former signals to the OS that the firmware is PSCI compliant. The latter selects the appropriate conduit for PSCI calls by toggling between Hypervisor Calls (HVC) and Secure Monitor Calls (SMC).

The FADT carries this information as of ACPI 5.1. The FADT is parsed during ACPI table init and copied into struct acpi_gbl_FADT, so use the flags in struct acpi_gbl_FADT for PSCI init. Since ACPI 5.1 does not support self-defined PSCI function IDs, only PSCI 0.2+ is supported in ACPI.

CC: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
CC: Catalin Marinas <catalin.marinas@arm.com>
CC: Will Deacon <will.deacon@arm.com>
Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Tested-by: Yijing Wang <wangyijing@huawei.com>
Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
Tested-by: Jon Masters <jcm@redhat.com>
Tested-by: Timur Tabi <timur@codeaurora.org>
Tested-by: Robert Richter <rrichter@cavium.com>
Acked-by: Robert Richter <rrichter@cavium.com>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Grant Likely <grant.likely@linaro.org>
Signed-off-by: Graeme Gregory <graeme.gregory@linaro.org>
Signed-off-by: Tomasz Nowicki <tomasz.nowicki@linaro.org>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
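A minimal sketch of how those FADT flags translate into a conduit choice. The flag names ACPI_FADT_PSCI_COMPLIANT / ACPI_FADT_PSCI_USE_HVC and the arm_boot_flags field follow the ACPICA FADT definition; the invoke_psci_fn hookup is illustrative:

	#include <linux/acpi.h>

	/* Is the firmware PSCI compliant at all? */
	static inline bool acpi_psci_present(void)
	{
		return acpi_gbl_FADT.arm_boot_flags & ACPI_FADT_PSCI_COMPLIANT;
	}

	/* Should PSCI calls use HVC rather than SMC as the conduit? */
	static inline bool acpi_psci_use_hvc(void)
	{
		return acpi_gbl_FADT.arm_boot_flags & ACPI_FADT_PSCI_USE_HVC;
	}

	/* During PSCI 0.2 init, pick the call conduit accordingly. */
	static void __init psci_pick_conduit(void)
	{
		if (acpi_psci_use_hvc())
			invoke_psci_fn = __invoke_psci_fn_hvc;
		else
			invoke_psci_fn = __invoke_psci_fn_smc;
	}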
-
Committed by Al Stone

This implements the following policy to decide whether ACPI should be used to boot the system:

- acpi=off: ACPI will not be used to boot the system, even if there is no alternative available (e.g., the device tree is empty);
- acpi=force: only ACPI will be used to boot the system; if that fails, there will be no fallback to alternative methods (such as device tree);
- otherwise, ACPI will be used as a fallback if the device tree turns out to lack a platform description; the heuristic for deciding this is whether /chosen is the only node present at depth 1 (see the sketch below).

CC: Catalin Marinas <catalin.marinas@arm.com>
CC: Will Deacon <will.deacon@arm.com>
CC: Rafael J. Wysocki <rjw@rjwysocki.net>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Grant Likely <grant.likely@linaro.org>
Tested-by: Timur Tabi <timur@codeaurora.org>
Signed-off-by: Al Stone <al.stone@linaro.org>
Signed-off-by: Graeme Gregory <graeme.gregory@linaro.org>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
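One way the depth-1 heuristic can be written with the flattened-device-tree scan API, a sketch assuming of_scan_flat_dt() semantics (the iterator returning non-zero stops the scan); the helper names are illustrative:

	#include <linux/of_fdt.h>
	#include <linux/string.h>

	/*
	 * Return 1 as soon as we hit a depth-1 node other than /chosen:
	 * that means the DT carries a real platform description.
	 */
	static int __init dt_scan_depth1_nodes(unsigned long node, const char *uname,
					       int depth, void *data)
	{
		return depth == 1 && strcmp(uname, "chosen") != 0;
	}

	static bool __init dt_has_platform_description(void)
	{
		return of_scan_flat_dt(dt_scan_depth1_nodes, NULL);
	}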
-
Committed by Hanjun Guo

CONFIG_ACPI depends on CONFIG_PCI on x86 and ia64. In the ARM64 server world we will have PCIe in most cases, but some systems may not, so making CONFIG_ACPI depend on CONFIG_PCI on ARM64 satisfies both.

For that case, we need some arch-dependent PCI functions to access the config space before the PCI root bridge is created, and pci_acpi_scan_root() to create the PCI root bus. So introduce some stub functions here to make the ACPI core compile, and revisit them later when they are implemented on ARM64.

CC: Liviu Dudau <Liviu.Dudau@arm.com>
CC: Catalin Marinas <catalin.marinas@arm.com>
CC: Will Deacon <will.deacon@arm.com>
Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Tested-by: Yijing Wang <wangyijing@huawei.com>
Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
Tested-by: Jon Masters <jcm@redhat.com>
Tested-by: Timur Tabi <timur@codeaurora.org>
Tested-by: Robert Richter <rrichter@cavium.com>
Acked-by: Robert Richter <rrichter@cavium.com>
Reviewed-by: Grant Likely <grant.likely@linaro.org>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
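Stubs of this kind typically just fail gracefully until real config-space accessors exist. A sketch of what they might look like, with signatures following the generic raw_pci_read/raw_pci_write and pci_acpi_scan_root hooks:

	#include <linux/pci.h>
	#include <linux/pci-acpi.h>

	/*
	 * Raw config-space accessors used by the ACPI core before a root
	 * bridge exists; nothing to talk to yet, so report failure.
	 */
	int raw_pci_read(unsigned int domain, unsigned int bus,
			 unsigned int devfn, int reg, int len, u32 *val)
	{
		return -ENXIO;
	}

	int raw_pci_write(unsigned int domain, unsigned int bus,
			  unsigned int devfn, int reg, int len, u32 val)
	{
		return -ENXIO;
	}

	/* Root bus creation: to be revisited when PCI is implemented on ARM64. */
	struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
	{
		return NULL;
	}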
-
Committed by Mark Salter

The acpi_os_ioremap() function may be used to map normal RAM or IO regions. The current implementation simply uses ioremap_cache(). This will work for some architectures, but arm64's ioremap_cache() cannot be used to map IO regions which don't support caching. So for arm64, use ioremap() for non-RAM regions.

CC: Rafael J Wysocki <rjw@rjwysocki.net>
CC: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Robert Richter <rrichter@cavium.com>
Tested-by: Timur Tabi <timur@codeaurora.org>
Acked-by: Robert Richter <rrichter@cavium.com>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Grant Likely <grant.likely@linaro.org>
Signed-off-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
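The selection logic this describes is small enough to sketch in full; assuming page_is_ram() as the RAM test, an arm64 override could look like:

	#include <linux/io.h>
	#include <linux/mm.h>

	static inline void __iomem *acpi_os_ioremap(acpi_physical_address phys,
						    acpi_size size)
	{
		/* Device/IO regions must not be mapped cacheable on arm64. */
		if (!page_is_ram(phys >> PAGE_SHIFT))
			return ioremap(phys, size);

		/* Normal RAM (e.g. ACPI tables) can keep a cached mapping. */
		return ioremap_cache(phys, size);
	}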
-
Committed by Al Stone

As we want to get ACPI tables to parse and then use the information for system initialization, we should get the RSDP (Root System Description Pointer) first; it then locates the Extended Root Description Table (XSDT), which contains the 64-bit physical addresses that point to the other boot-time tables.

Introduce acpi.c and its related header file in this patch to provide the fundamental extern variables and functions needed by the ACPI core, and then get the boot-time tables as needed:

- asm/acenv.h for arch-specific ACPICA environments and implementation; it is needed unconditionally by the ACPI core;
- asm/acpi.h for arch-specific variables and functions needed by the ACPI driver core;
- acpi.c for the ARM64-related ACPI implementation for the ACPI driver core.

acpi_boot_table_init() is introduced to get the RSDP and boot-time tables. It will be called in setup_arch() before paging_init(), so we should use the early_memremap() mechanism here to get the RSDP and all the table pointers.

The FADT Major.Minor version was introduced in ACPI 5.1; it is the same as the ACPI version. In ACPI 5.1, some major gaps were fixed for ARM, such as updates to the MADT table for GIC and SMP init. Without those updates we cannot get the MPIDR for SMP init or the GICv2/3-related init information, so we can't boot arm64 with ACPI properly using table versions predating 5.1.

If the firmware provides ACPI tables with an ACPI version less than 5.1, the OS has no way to retrieve the configuration data necessary to init the SMP boot protocol and the GIC properly, so disable ACPI if we get an FADT table with a version less than 5.1 when acpi_boot_table_init() is called (see the sketch below).

CC: Catalin Marinas <catalin.marinas@arm.com>
CC: Will Deacon <will.deacon@arm.com>
CC: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Tested-by: Yijing Wang <wangyijing@huawei.com>
Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
Tested-by: Jon Masters <jcm@redhat.com>
Tested-by: Timur Tabi <timur@codeaurora.org>
Tested-by: Robert Richter <rrichter@cavium.com>
Acked-by: Robert Richter <rrichter@cavium.com>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Grant Likely <grant.likely@linaro.org>
Signed-off-by: Al Stone <al.stone@linaro.org>
Signed-off-by: Graeme Gregory <graeme.gregory@linaro.org>
Signed-off-by: Tomasz Nowicki <tomasz.nowicki@linaro.org>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
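A sketch of the version gate described in the last paragraph. The helper name here is illustrative, while struct acpi_table_fadt's minor_revision field and disable_acpi() come from the ACPI core:

	#include <linux/acpi.h>

	static void __init acpi_check_fadt_version(struct acpi_table_header *table)
	{
		struct acpi_table_fadt *fadt = (struct acpi_table_fadt *)table;

		/* FADT 5.1+ carries the ARM GIC/PSCI information we rely on. */
		if (table->revision > 5 ||
		    (table->revision == 5 && fadt->minor_revision >= 1))
			return;

		pr_err("Unsupported FADT revision %d.%d, should be 5.1+, disabling ACPI\n",
		       table->revision, fadt->minor_revision);
		disable_acpi();
	}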
-
Committed by Mark Salter

Commit 0e63ea48 (arm64/efi: add missing call to early_ioremap_reset()) added a missing call to early_ioremap_reset(). This triggers a BUG if code tries to use early_ioremap() after that early_ioremap_reset(). This is a problem for some ACPI code which needs short-lived temporary mappings after paging_init() but before acpi_early_init() in start_kernel(). This patch adds definitions for __late_set_fixmap() and __late_clear_fixmap(), which avoid the BUG by allowing later use of early_ioremap().

CC: Leif Lindholm <leif.lindholm@linaro.org>
CC: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
Tested-by: Jon Masters <jcm@redhat.com>
Tested-by: Robert Richter <rrichter@cavium.com>
Tested-by: Timur Tabi <timur@codeaurora.org>
Acked-by: Robert Richter <rrichter@cavium.com>
Reviewed-by: Grant Likely <grant.likely@linaro.org>
Signed-off-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
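The generic early_ioremap code falls back to the __late_set_fixmap()/__late_clear_fixmap() hooks once its after_paging_init flag is set, so on arm64 they can simply route to the existing __set_fixmap(). A sketch (FIXMAP_PAGE_CLEAR is the asm-generic "no access" protection):

	/* asm/fixmap.h */
	extern void __set_fixmap(enum fixed_addresses idx,
				 phys_addr_t phys, pgprot_t prot);

	#define __early_set_fixmap	__set_fixmap

	/* Late (post-paging_init) hooks used by early_ioremap()/early_memremap() */
	#define __late_set_fixmap	__set_fixmap
	#define __late_clear_fixmap(idx) __set_fixmap((idx), 0, FIXMAP_PAGE_CLEAR)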
-
- 14 March 2015, 1 commit
-
Committed by Catalin Marinas

The ARM architecture allows the caching of intermediate page table levels, and page table freeing requires a sequence like:

	pmd_clear()
	TLB invalidation
	pte page freeing

With commit 5e5f6dc1 ("arm64: mm: enable HAVE_RCU_TABLE_FREE logic"), the page table freeing batching was moved from tlb_remove_page() to tlb_remove_table(). The former takes care of TLB invalidation, as this is also shared with pte clearing and page cache page freeing. The latter, however, does not invalidate the TLBs for intermediate page table levels, as it probably relies on the architecture code to do it if required. When mm->mm_users < 2, tlb_remove_table() does not do any batching and page table pages are freed before tlb_finish_mmu(), which performs the actual TLB invalidation.

This patch introduces __tlb_flush_pgtable() for arm64 and calls it from {pte,pmd,pud}_free_tlb() directly, without relying on deferred page table freeing (see the sketch below).

Fixes: 5e5f6dc1 ("arm64: mm: enable HAVE_RCU_TABLE_FREE logic")
Reported-by: Jon Masters <jcm@redhat.com>
Tested-by: Jon Masters <jcm@redhat.com>
Tested-by: Steve Capper <steve.capper@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
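A sketch of what such a helper looks like on arm64, using a broadcast per-VA TLBI. The operand encoding (VA[55:12] in the low bits, ASID in bits [63:48]) follows the ARMv8 TLBI VAE1IS format, and the ASID() accessor is assumed from asm/mmu.h:

	/*
	 * asm/tlbflush.h: invalidate any cached intermediate walk entries for
	 * the page table page that mapped 'uaddr', before freeing that page.
	 */
	static inline void __tlb_flush_pgtable(struct mm_struct *mm,
					       unsigned long uaddr)
	{
		unsigned long addr = uaddr >> 12 | ((unsigned long)ASID(mm) << 48);

		dsb(ishst);				/* complete prior PTE updates */
		asm("tlbi vae1is, %0" : : "r" (addr));	/* broadcast invalidate by VA */
		dsb(ish);				/* wait for completion */
	}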
-
- 28 February 2015, 2 commits
-
Committed by Lorenzo Pieralisi

The ARM64 CPUidle driver requires the cpu_do_idle() function, used to enter the shallowest idle state, which is declared in asm/proc-fns.h. The current ARM64 CPUidle driver does not include asm/proc-fns.h explicitly and has so far relied on implicit inclusion from other header files. Owing to some header dependency reshuffling, this currently triggers build failures when CONFIG_ARM64_64K_PAGES=y:

	drivers/cpuidle/cpuidle-arm64.c: In function "arm64_enter_idle_state"
	drivers/cpuidle/cpuidle-arm64.c:42:3: error: implicit declaration of function "cpu_do_idle" [-Werror=implicit-function-declaration]
	   cpu_do_idle();
	   ^

This patch adds an explicit inclusion of the asm/proc-fns.h header file in the arm64 asm/cpuidle.h header file, so that the build breakage is fixed and the required header inclusion is added to the appropriate arch back-end CPUidle header, already included by the CPUidle arm64 driver, where CPUidle arch-related function declarations belong.

Reported-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Catalin Marinas

With commit 3690951f (arm64: Use swiotlb late initialisation), the swiotlb buffer size is limited to MAX_ORDER_NR_PAGES. However, there are platforms with 32-bit-only devices that require bounce buffering via swiotlb. This patch changes the swiotlb initialisation to an early 64MB memblock allocation. In order to get the swiotlb buffer correctly allocated (via memblock_virt_alloc_low_nopanic), this patch also defines ARCH_LOW_ADDRESS_LIMIT as the maximum physical address capable of 32-bit DMA.

Reported-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Tested-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
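In outline, the change amounts to calling swiotlb_init(1), whose default bounce buffer is 64MB carved from memblock, and pointing ARCH_LOW_ADDRESS_LIMIT at the 32-bit DMA limit so the low allocation lands in DMA-reachable memory. A sketch under the assumption that arm64 tracks that limit in a variable such as arm64_dma_phys_limit (the variable name is illustrative):

	/*
	 * asm/processor.h (sketch): upper bound for memblock "low" allocations,
	 * i.e. the highest physical address 32-bit DMA can reach.
	 */
	extern phys_addr_t arm64_dma_phys_limit;
	#define ARCH_LOW_ADDRESS_LIMIT	arm64_dma_phys_limit

	/*
	 * mm/init.c (sketch): initialise swiotlb early, while memblock can
	 * still hand out a contiguous 64MB block, instead of late via the
	 * page allocator, which caps the buffer at MAX_ORDER_NR_PAGES.
	 */
	void __init mem_init(void)
	{
		swiotlb_init(1);	/* verbose; default IO TLB size is 64MB */

		/* ... rest of mem_init() ... */
	}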
-
- 27 February 2015, 2 commits
-
Committed by Feng Kan

Caught during Trinity testing. pte_modify() does not allow modification of the PTE type bit, which caused the test to hang the system. It was found that a PTE cannot transition from an inaccessible page (b00) to a valid page (b11) because the mask does not allow it. This happens when a big block of mmapped memory is set to PROT_NONE and then a small piece is broken off and set to PROT_WRITE | PROT_READ, causing a huge page split.

Signed-off-by: Feng Kan <fkan@apm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Yingjoe Chen

The functions __cpu_flush_user_tlb_range and __cpu_flush_kern_tlb_range were removed in commit fa48e6f7 ("arm64: mm: Optimise tlb flush logic where we have >4K granule"). The global variable cpu_tlb was never used on arm64. Remove them.

Signed-off-by: Yingjoe Chen <yingjoe.chen@mediatek.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 23 February 2015, 2 commits
-
Committed by Marc Zyngier

asm/assembler.h lacks the usual guard against multiple inclusion, leading to a compilation failure if it is accidentally included twice. Using the classic #ifndef/#define/#endif construct solves the issue.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
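For reference, the construct in question; the guard macro name shown follows the usual arm64 header convention and is chosen here for illustration:

	/* arch/arm64/include/asm/assembler.h */
	#ifndef __ASM_ASSEMBLER_H
	#define __ASM_ASSEMBLER_H

	/* ... assembler macro definitions, now seen at most once per translation unit ... */

	#endif	/* __ASM_ASSEMBLER_H */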
-
Committed by Robin Murphy

Fix cbz/cbnz having the mask offset by a bit, and add encodings for tbz/tbnz so that all branch forms are represented.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Zi Shen Lim <zlim.lnx@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 13 February 2015, 1 commit
-
Committed by Andy Lutomirski

If an attacker can cause a controlled kernel stack overflow, overwriting the restart block is a very juicy exploit target, because the restart_block is held in the same memory allocation as the kernel stack. Moving the restart block to struct task_struct prevents this exploit by making the restart_block harder to locate. Note that there are other fields in thread_info that are also easy targets, at least on some architectures.

It's also a decent simplification, since the restart code is more or less identical on all architectures.

[james.hogan@imgtec.com: metag: align thread_info::supervisor_stack]

Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: David Miller <davem@davemloft.net>
Acked-by: Richard Weinberger <richard@nod.at>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Haavard Skinnemoen <hskinnemoen@gmail.com>
Cc: Hans-Christian Egtvedt <egtvedt@samfundet.no>
Cc: Steven Miao <realmz6@gmail.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Aurelien Jacquiot <a-jacquiot@ti.com>
Cc: Mikael Starvik <starvik@axis.com>
Cc: Jesper Nilsson <jesper.nilsson@axis.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Richard Kuo <rkuo@codeaurora.org>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
Cc: Helge Deller <deller@gmx.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
Tested-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Chen Liqin <liqin.linux@gmail.com>
Cc: Lennox Wu <lennox.wu@gmail.com>
Cc: Chris Metcalf <cmetcalf@ezchip.com>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: Chris Zankel <chris@zankel.net>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
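The mechanical change at call sites is tiny; a sketch of the before/after access pattern (the wrapper function is purely illustrative):

	#include <linux/sched.h>
	#include <linux/thread_info.h>

	static inline void reset_restart_block(void)
	{
		/*
		 * Before: the restart block sat in thread_info, i.e. in the
		 * same allocation as the kernel stack, within reach of an
		 * overflow:
		 *
		 *	current_thread_info()->restart_block.fn = do_no_restart_syscall;
		 *
		 * After: it lives in task_struct, allocated separately:
		 */
		current->restart_block.fn = do_no_restart_syscall;
	}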
-
- 12 February 2015, 1 commit
-
Committed by Kirill A. Shutemov

LKP has triggered a compiler warning after my recent patch "mm: account pmd page tables to the process":

	mm/mmap.c: In function 'exit_mmap':
	>> mm/mmap.c:2857:2: warning: right shift count >= width of type [enabled by default]

The code:

	> 2857		WARN_ON(mm_nr_pmds(mm) >
	> 2858			round_up(FIRST_USER_ADDRESS, PUD_SIZE) >> PUD_SHIFT);

On tile, FIRST_USER_ADDRESS is defined as 0, so round_up() yields the same type, int, and shifting an int right by PUD_SHIFT exceeds the width of the type. I think the best way to fix it is to define FIRST_USER_ADDRESS as unsigned long, on every arch for consistency.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reported-by: Wu Fengguang <fengguang.wu@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
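The fix itself is a one-character change per architecture, sketched here for one arch header:

	/*
	 * Before: an int constant, so the round_up()/shift above is evaluated
	 * in int and "shift count >= width of type" triggers on 64-bit shifts.
	 *
	 *	#define FIRST_USER_ADDRESS	0
	 *
	 * After: force unsigned long arithmetic everywhere.
	 */
	#define FIRST_USER_ADDRESS	0UL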
-
- 11 February 2015, 1 commit
-
Committed by Kirill A. Shutemov

We've replaced the remap_file_pages(2) implementation with emulation; nobody creates non-linear mappings anymore. This patch also adjusts __SWP_TYPE_SHIFT and increases the number of bits available for the swap offset.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 06 February 2015, 1 commit
-
Committed by Paolo Bonzini

This patch introduces a new module parameter for the KVM module; when it is present, KVM attempts a bit of polling on every HLT before scheduling itself out via kvm_vcpu_block. This parameter helps a lot for latency-bound workloads; in particular I tested it with O_DSYNC writes with a battery-backed disk in the host. In this case, writes are fast (because the data doesn't have to go all the way to the platters) but they cannot be merged by either the host or the guest. KVM's performance here is usually around 30% of bare metal, or 50% if you use cache=directsync or cache=writethrough (these parameters avoid the guest sending pointless flush requests, and at the same time they are not slow because of the battery-backed cache). The bad performance happens because on every halt the host CPU decides to halt itself too. When the interrupt comes, the vCPU thread is then migrated to a new physical CPU, and in general the latency is horrible because the vCPU thread has to be scheduled back in.

With this patch, performance reaches 60-65% of bare metal and, more importantly, 99% of what you get if you use idle=poll in the guest. This means that the tunable gets rid of this particular bottleneck, and more work can be done to improve performance in the kernel or QEMU.

Of course there is some price to pay; every time an otherwise idle vCPU is interrupted by an interrupt, it will poll unnecessarily and thus impose a little load on the host. The above results were obtained with a mostly random value of the parameter (500000), and the load was around 1.5-2.5% CPU usage on one of the host's cores for each idle guest vCPU.

The patch also adds a new stat, /sys/kernel/debug/kvm/halt_successful_poll, that can be used to tune the parameter. It counts how many HLT instructions received an interrupt during the polling period; each successful poll avoids Linux scheduling the vCPU thread out and back in, and may also avoid a likely trip to C1 and back for the physical CPU.

While the VM is idle, a Linux 4-VCPU VM halts around 10 times per second. Of these halts, almost all are failed polls. During the benchmark, instead, basically all halts end within the polling period, except for a more or less constant stream of 50 per second coming from vCPUs that are not running the benchmark. The wasted time is thus very low.

Things may be slightly different for Windows VMs, which have a ~10 ms timer tick. The effect is also visible on Marcelo's recently introduced latency test for the TSC deadline timer. Though of course a non-RT kernel has awful latency bounds, the latency of the timer is around 8000-10000 clock cycles compared to 20000-120000 without setting halt_poll_ns. For the TSC deadline timer, thus, the effect is both a smaller average latency and a smaller variance.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
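A condensed sketch of the polling path this describes, with the wait-queue slow path elided; kvm_vcpu_check_block() is the helper that reports a pending interrupt (and sets KVM_REQ_UNHALT), and single_task_running() stops the poll as soon as the physical CPU has other work:

	/* virt/kvm/kvm_main.c (sketch) */
	unsigned int halt_poll_ns;	/* 0 = polling disabled */
	module_param(halt_poll_ns, uint, S_IRUGO | S_IWUSR);

	void kvm_vcpu_block(struct kvm_vcpu *vcpu)
	{
		if (halt_poll_ns) {
			ktime_t stop = ktime_add_ns(ktime_get(), halt_poll_ns);

			do {
				/* Returns < 0 once an interrupt is pending. */
				if (kvm_vcpu_check_block(vcpu) < 0) {
					++vcpu->stat.halt_successful_poll;
					return;	/* no schedule-out needed */
				}
			} while (single_task_running() &&
				 ktime_before(ktime_get(), stop));
		}

		/* ... fall back to the normal wait-queue based halt ... */
	}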
-
- 03 February 2015, 1 commit
-
Committed by Catalin Marinas

The comment was right originally, but the _pad array size was wrong. It was fixed in the meantime, but the comment was not updated.

Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 30 January 2015, 4 commits
-
Committed by Marc Zyngier

When handling a fault in stage-2, we need to resync I$ and D$, just to be sure we don't leave any old cache line behind. That's very good, except that we do so using the *user* address. Under heavy load (swapping like crazy), we may end up in a situation where the page gets mapped in stage-2 while being unmapped from userspace by another CPU. At that point, the DC/IC instructions can generate a fault, which we handle with kvm->mmu_lock held. The box quickly deadlocks, and the user is unhappy.

Instead, perform this invalidation through the kernel mapping, which is guaranteed to be present. The box is much happier, and so am I.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Committed by Marc Zyngier

Let's assume a guest has created an uncached mapping and written to that page. Let's also assume that the host uses a cache-coherent IO subsystem. Let's finally assume that the host is under memory pressure and starts to swap things out. Before this "uncached" page is evicted, we need to make sure we invalidate potential speculated, clean cache lines that are sitting there, or the IO subsystem is going to swap out the cached view, losing the data that has been written directly into memory.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Committed by Marc Zyngier

Trying to emulate the behaviour of set/way cache ops is fairly pointless, as there are too many ways we can end up missing stuff. Also, there are some system caches out there that simply ignore set/way operations. So instead of trying to implement them, let's convert them to VA ops, and use them as a way to re-enable the trapping of VM ops. That way, we can detect the point when the MMU/caches are turned off, and do a full VM flush (which is what the guest was trying to do anyway). This allows a 32-bit zImage to boot on the APM thingy, and will probably help bootloaders in general.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Committed by Dave P Martin

Alternate macro mode is not a property of a macro definition, but a gas runtime state that alters the way macros are expanded for ever after (until .noaltmacro is seen). This means that subsequent assembly code that calls other macros can break if fpsimdmacros.h is included. Since these instruction sequences are simple (if dull -- but in a good way), this patch solves the problem by simply expanding the .irp loops. The pre-existing fpsimd_{save,restore} macros weren't rolled with .irp anyway, and the sequences affected are short, so this change restores consistency at little cost.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 28 January 2015, 1 commit
-
Committed by zhichang.yuan

For 64K-page systems, after mapping a PMD section, the corresponding initial page table is not needed any more. That page can be freed.

Signed-off-by: Zhichang Yuan <zhichang.yuan@linaro.org>
[catalin.marinas@arm.com: added BUG_ON() to catch late memblock freeing]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 27 January 2015, 3 commits
-
Committed by Lorenzo Pieralisi

The ARM64_CPU_SUSPEND config option was introduced to make the code providing context save/restore selectable only on platforms requiring power management capabilities. Currently ARM64_CPU_SUSPEND depends on the PM_SLEEP config option, which in turn is set by the SUSPEND config option.

The introduction of CPU_IDLE for arm64 requires that the code configured by ARM64_CPU_SUSPEND (context save/restore) be compiled in, so that the CPU idle driver can rely on CPU operations carrying out context save/restore. The ARM64_CPUIDLE config option (the ARM64 generic idle driver) is therefore forced to select ARM64_CPU_SUSPEND, even if there may be (i.e., PM_SLEEP) failed dependencies, which is not a clean way of handling the kernel configuration option.

For these reasons, this patch removes the ARM64_CPU_SUSPEND config option and makes the context save/restore dependent on CPU_PM, which is selected whenever either SUSPEND or CPU_IDLE is configured, cleaning up dependencies in the process. This way, code previously configured through ARM64_CPU_SUSPEND is compiled in whenever a power management subsystem requires it to be present in the kernel (SUSPEND || CPU_IDLE), which is the behaviour expected on ARM64 kernels.

The cpu_suspend and cpu_init_idle CPU operations are added only if CPU_IDLE is selected, since they are CPU_IDLE-specific methods and should be grouped and defined accordingly. PSCI CPU operations are updated to reflect the introduced changes.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Catalin Marinas

This patch moves the sys_rt_sigreturn_wrapper prototype to arch/arm64/kernel/sys.c and removes the asm/syscalls.h header.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Catalin Marinas

Unlike sys_call_table[], the compat table was implemented in sys32.S, making it impossible to notice discrepancies between the number of compat syscalls and the __NR_compat_syscalls macro, the latter having to be defined in asm/unistd.h since including asm/unistd32.h would cause conflicts on the __NR_* definitions. With this patch, incorrect __NR_compat_syscalls values result in a build-time error.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
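Moving the table into C makes the size check fall out of the language: a designated initializer whose index is >= the declared array size is a compile error. A sketch of the pattern (details such as the alignment attribute are illustrative):

	#include <asm/unistd.h>

	#undef __SYSCALL
	#define __SYSCALL(nr, sym)	[nr] = sym,

	/*
	 * Sized by __NR_compat_syscalls: any entry in unistd32.h whose
	 * number is out of range now fails the build instead of silently
	 * overflowing the table.
	 */
	void * const compat_sys_call_table[__NR_compat_syscalls] __aligned(4096) = {
		[0 ... __NR_compat_syscalls - 1] = sys_ni_syscall,
	#include <asm/unistd32.h>
	};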
-
- 24 January 2015, 6 commits
-
Committed by Will Deacon

arm64 defines its own ucontext structure, which is incompatible with the struct defined by (and exposed to userspace by) the asm-generic headers. glibc carries its own struct definition that is compatible with the arm64 definition, but we should expose our format in the uapi headers in case other libraries want to make use of the ucontext pushed as part of an arm64 sigframe. This patch moves the arm64 asm/ucontext.h to the uapi headers, along with the necessary #include of linux/types.h.

Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Marcus Shawcroft <marcus.shawcroft@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Jiang Liu

Commit 9a46ad6d ("smp: make smp_call_function_many() use logic similar to smp_call_function_single()") unified the way single and multiple cross-CPU function calls are handled. Now only one interrupt is needed for architecture-specific code to support the generic SMP function call interfaces, so kill the redundant single-function-call interrupt.

Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Suzuki K. Poulose

Emulate the deprecated 'setend' instruction for AArch32 tasks:

	setend [le/be] - sets the endianness of EL0

On systems with CPUs which support mixed endian at EL0, the hardware support for the instruction can be enabled by setting the SCTLR_EL1.SED bit. Like the other emulated instructions, it is controlled by an entry in /proc/sys/abi/. For more information see Documentation/arm64/legacy_instructions.txt.

The instruction is emulated by setting/clearing the SPSR_EL1.E bit, which will be reflected in PSTATE.E in the AArch32 context. This patch also restores the native endianness for the execution of signal handlers, since the process could have changed the endianness.

Note: all CPUs on the system must have mixed endian support at EL0. Once the handler is registered, hotplugging a CPU which doesn't support mixed endian could lead to unexpected results/behaviour in applications.

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Punit Agrawal <punit.agrawal@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Suzuki K. Poulose

As of now, each insn_emulation has a CPU hotplug notifier that enables/disables the CPU feature bit for the functionality. This patch rearranges the code so that there is only one notifier, which runs through the list of registered emulation hooks and invokes their corresponding set_hw_mode. We do nothing when a CPU is dying, as we will set the appropriate bits when it comes back online, based on the state of the hooks.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Punit Agrawal <punit.agrawal@arm.com>
[catalin.marinas@arm.com: fix pr_warn compilation error]
[catalin.marinas@arm.com: remove unnecessary "insn" check]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Suzuki K. Poulose

This patch keeps track of mixed endian EL0 support across the system and provides helper functions to export it. The status is a boolean indicating whether all the CPUs on the system support mixed endian at EL0.

Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Catalin Marinas

Since dev_archdata now has a dma_coherent state, combine the coherent and non-coherent operations and remove their declaration, together with set_dma_ops, from the arch dma-mapping.h file.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 23 January 2015, 1 commit
-
Committed by Mark Rutland

The PCI IO space was intended to be 16MiB, at 32MiB below MODULES_VADDR, but commit d1e6dc91 ("arm64: Add architectural support for PCI") extended this to cover the full 32MiB. The final 8KiB of this 32MiB is also allocated for the fixmap, allowing for potential clashes between the two. This change was masked by assumptions in mem_init and the page table dumping code, which assumed the IO space to be 16MiB long through separate hard-coded definitions.

This patch changes the definition of the PCI IO space allocation to live in asm/memory.h, along with the other VA space allocations. As the fixmap allocation depends on the number of fixmap entries, this is moved below the PCI IO space allocation. Both the fixmap and the PCI IO space are guarded with 2MB of padding. Sites that assumed the IO space was 16MiB are moved over to use the new PCI_IO_{START,END} definitions, which will stay in sync with the size of the IO space (now restored to 16MiB).

As a useful side effect, the use of the new PCI_IO_{START,END} definitions prevents a build issue in the dumping code due to a (now redundant) missing include of io.h for PCI_IOBASE.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Laura Abbott <lauraa@codeaurora.org>
Cc: Liviu Dudau <liviu.dudau@arm.com>
Cc: Steve Capper <steve.capper@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
[catalin.marinas@arm.com: reorder FIXADDR and PCI_IO address_markers_idx enum]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
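A sketch of the resulting layout in asm/memory.h, with the sizes from the description: a 16MiB IO window below MODULES_VADDR and 2MB guard regions around it and the fixmap (the exact macro spellings are illustrative):

	/*
	 * asm/memory.h: derive everything from one definition so the IO
	 * space and the fixmap can no longer drift apart or overlap.
	 */
	#define PCI_IO_SIZE	SZ_16M

	#define PCI_IO_END	(MODULES_VADDR - SZ_2M)	/* 2MB guard below modules */
	#define PCI_IO_START	(PCI_IO_END - PCI_IO_SIZE)
	#define FIXADDR_TOP	(PCI_IO_START - SZ_2M)	/* 2MB guard below IO space */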
-
- 22 January 2015, 3 commits
-
Committed by Ard Biesheuvel

Now that the create_mapping() code in mm/mmu.c is able to support setting up kernel page tables at initcall time, we can move the whole virtmap creation to arm64_enable_runtime_services() instead of having a distinct stage during early boot. This also allows us to drop the arm64-specific EFI_VIRTMAP flag.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel-QSEj5FYQhm4dnm+yROfE0A@public.gmane.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Laura Abbott

Add page protections for arm64 similar to those in arm. This is for security reasons, to prevent certain classes of exploits. The current method:

- Map all memory as either RWX or RW. We round to the nearest section to avoid creating page tables before everything is mapped.
- Once everything is mapped, if either end of the RWX section should not be X, we split the PMD and remap as necessary.
- When initmem is to be freed, we change the permissions back to RW (using stop_machine if necessary to flush the TLB).
- If CONFIG_DEBUG_RODATA is set, the read-only sections are set read-only.

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Kees Cook <keescook@chromium.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Laura Abbott

When kernel text is marked as read-only, it cannot be modified directly. Use a fixmap to modify the text instead, in a similar manner to x86 and arm.

Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Kees Cook <keescook@chromium.org>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 21 January 2015, 2 commits
-
Committed by Andre Przywara

With all of the GICv3 code in place now, we allow userland to ask the kernel for a virtual GICv3 in the guest. We also provide the necessary support for guests setting the memory addresses for the virtual distributor and redistributors. This requires some userland code to make use of that feature and explicitly ask for a virtual GICv3. Document that KVM_CREATE_IRQCHIP only works for GICv2, but is considered legacy; using KVM_CREATE_DEVICE is preferred.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Committed by Andre Przywara

For a GICv2 there is always only one (v)CPU involved: the one that does the access. On a GICv3, the access to a CPU redistributor is memory-mapped but not banked, so the (v)CPU affected is determined by looking at the MMIO address region being accessed. To allow passing the affected CPU into the accessors later, extend struct kvm_exit_mmio to add an opaque private pointer parameter. The current GICv2 emulation simply does not use it.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-