- 28 June 2017, 3 commits
-
-
Committed by Paul Burton

Recent CPUs from Imagination Technologies such as the I6400 or P6600 are able to speculatively fetch data from memory into caches. This means that if used in a system with non-coherent DMA they require that caches be invalidated after a device performs DMA, and before the CPU reads the DMA'd data, in order to ensure that stale values weren't speculatively prefetched. Such CPUs also introduced Memory Accessibility Attribute Registers (MAARs) in order to control the regions in which they are allowed to speculate. Thus we can use the presence of MAARs as a good indication that the CPU requires the above cache maintenance. Use the presence of MAARs to determine the result of cpu_needs_post_dma_flush() in the default case, in order to handle these recent CPUs correctly.

Note that the return type of cpu_needs_post_dma_flush() is changed to bool, such that it's clearer what's happening when cpu_has_maar is cast to bool for the return value. If this patch were backported to a pre-v4.7 kernel then MIPS_CPU_MAAR was 1ull<<34, so when cast to an int we would incorrectly return 0. It so happens that MIPS_CPU_MAAR is currently 1ull<<30, so when truncated to an int it gives a non-zero value anyway, but even so the implicit conversion from long long int to bool makes it clearer to understand what will happen than the implicit conversion from long long int to int would. The bool return type also fits this usage better semantically, so it seems like an all-round win.

Thanks to Ed for spotting the issue for pre-v4.7 kernels & suggesting the return type change.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: Bryan O'Donoghue <pure.logic@nexus-software.ie>
Tested-by: Bryan O'Donoghue <pure.logic@nexus-software.ie>
Cc: Ed Blake <ed.blake@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16363/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
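For illustration, a minimal sketch of the reworked helper; the explicitly listed CPU cases are illustrative, while boot_cpu_type() and cpu_has_maar are the kernel's existing helpers:

    /* sketch, arch/mips/mm/dma-default.c shape */
    static inline bool cpu_needs_post_dma_flush(struct device *dev)
    {
    	switch (boot_cpu_type()) {
    	case CPU_R10000:
    	case CPU_R12000:
    	case CPU_BMIPS5000:
    		return true;
    	default:
    		/*
    		 * Presence of MAARs indicates the CPU may speculatively
    		 * prefetch into its caches, so DMA'd memory must be
    		 * invalidated before the CPU reads it. Returning bool
    		 * avoids the int-truncation pitfall described above.
    		 */
    		return cpu_has_maar;
    	}
    }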
-
Committed by David Daney

Follow-on patches for the eBPF JIT require these additional instructions: insn_bgtz, insn_blez, insn_break, insn_ddivu, insn_dmultu, insn_dsbh, insn_dshd, insn_dsllv, insn_dsra32, insn_dsrav, insn_dsrlv, insn_lbu, insn_movn, insn_movz, insn_multu, insn_nor, insn_sb, insn_sh, insn_slti, insn_dinsu, insn_lwu ... so, add them. Sort the insn_* enumeration values alphabetically.

Signed-off-by: David Daney <david.daney@cavium.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: netdev@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16367/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by David Daney

Instead of doing a linear search through the insn_table for each instruction, use the opcode as a direct index into the table. This gives constant time lookup performance as the number of supported opcodes increases. Make the tables const as they are only ever read. For uasm-mips.c sort the table alphabetically and remove duplicate entries; uasm-micromips.c was already sorted and duplicate free. There is a small saving in object size as struct insn loses a field:

    $ size arch/mips/mm/uasm-mips.o arch/mips/mm/uasm-mips.o.save
       text    data     bss     dec     hex filename
      10040       0       0   10040    2738 arch/mips/mm/uasm-mips.o
       9240    1120       0   10360    2878 arch/mips/mm/uasm-mips.o.save

Signed-off-by: David Daney <david.daney@cavium.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: netdev@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16365/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
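A hedged sketch of the direct-index scheme; the entry contents and the insn_invalid bound are illustrative of the uasm style, not the exact table:

    /* designated initialisers index the const table by enum opcode,
     * so lookup needs no scan and struct insn no longer stores the
     * opcode itself */
    static const struct insn insn_table[insn_invalid] = {
    	[insn_addiu] = {M(addiu_op, 0, 0, 0, 0, 0), RS | RT | SIMM},
    	[insn_addu]  = {M(spec_op, 0, 0, 0, 0, addu_op), RS | RT | RD},
    	/* ... alphabetically sorted, one entry per opcode ... */
    };

    static const struct insn *get_insn(enum opcode opc)
    {
    	if (opc < ARRAY_SIZE(insn_table))
    		return &insn_table[opc];	/* O(1) lookup */
    	return NULL;
    }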
-
- 19 June 2017, 1 commit
-
-
Committed by Hugh Dickins

The stack guard page is a useful feature to reduce the risk of the stack smashing into a different mapping. We have been using a single page gap, which is sufficient to prevent having the stack adjacent to a different mapping. But this seems to be insufficient in the light of the stack usage in userspace. E.g. glibc uses alloca() allocations as large as 64kB in many commonly used functions. Others use constructs like gid_t buffer[NGROUPS_MAX], which is 256kB, or stack strings with MAX_ARG_STRLEN. This becomes especially dangerous for suid binaries with the default of no limit on the stack size, because those applications can be tricked into consuming a large portion of the stack so that a single glibc call jumps over the guard page. These attacks are not theoretical, unfortunately.

Make those attacks less probable by increasing the stack guard gap to 1MB (on systems with 4k pages; but make it depend on the page size, because systems with larger base pages might cap stack allocations in PAGE_SIZE units), which should cover larger alloca() and VLA stack allocations. It is obviously not a full fix because the problem is somewhat inherent, but it should reduce the attack space a lot. One could argue that the gap size should be configurable from userspace, but that can be done later when somebody finds that the new 1MB is wrong for some special case applications. For now, add a kernel command line option (stack_guard_gap) to specify the stack gap size (in page units).

Implementation wise, first delete all the old code for the stack guard page: because although we could get away with accounting one extra page in a stack vma, accounting a larger gap can break userspace - case in point, a program run with "ulimit -S -v 20000" failed when the 1MB gap was counted for RLIMIT_AS; similar problems could come with RLIMIT_MLOCK and strict non-overcommit mode. Instead of keeping the gap inside the stack vma, maintain the stack guard gap as a gap between vmas: using vm_start_gap() in place of vm_start (or vm_end_gap() in place of vm_end if VM_GROWSUP) in just those few places which need to respect the gap - mainly arch_get_unmapped_area() - and the vma tree's subtree_gap support for that.

Original-patch-by: Oleg Nesterov <oleg@redhat.com>
Original-patch-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Tested-by: Helge Deller <deller@gmx.de> # parisc
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
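A sketch of the gap-aware helpers described above, assuming the usual vma field names:

    extern unsigned long stack_guard_gap;	/* tunable via stack_guard_gap= */

    static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
    {
    	unsigned long vm_start = vma->vm_start;

    	if (vma->vm_flags & VM_GROWSDOWN) {
    		vm_start -= stack_guard_gap;
    		if (vm_start > vma->vm_start)	/* clamp on underflow */
    			vm_start = 0;
    	}
    	return vm_start;
    }

    static inline unsigned long vm_end_gap(struct vm_area_struct *vma)
    {
    	unsigned long vm_end = vma->vm_end;

    	if (vma->vm_flags & VM_GROWSUP) {
    		vm_end += stack_guard_gap;
    		if (vm_end < vma->vm_end)	/* clamp on overflow */
    			vm_end = -PAGE_SIZE;
    	}
    	return vm_end;
    }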
-
- 08 June 2017, 1 commit
-
-
Committed by Marcin Nowakowski

fixrange_init operates at PMD granularity and expects the addresses to be PMD-size aligned, but currently that might not be the case for PKMAP_BASE unless it is defined properly, so ensure a correct alignment is used before passing the address to fixrange_init. Fixed mappings: align only the start address that is passed to fixrange_init, rather than the value before adding the size, as we may otherwise end up with an uninitialised upper part of the range.

Signed-off-by: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/15948/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
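A hedged sketch of the two call sites, with names assumed from the MIPS page table init code (details may differ):

    #ifdef CONFIG_HIGHMEM
    	/* kmap range: round the start down to a PMD boundary */
    	fixrange_init(PKMAP_BASE & PMD_MASK,
    		      PKMAP_BASE + PAGE_SIZE * LAST_PKMAP, pgd_base);
    #endif

    	/* fixed mappings: align only the start actually passed in, not
    	 * the value the size is added to, so the top of the range is
    	 * still covered */
    	vaddr = __fix_to_virt(__end_of_fixed_addresses - 1);
    	fixrange_init(vaddr & PMD_MASK, vaddr + FIXADDR_SIZE, pgd_base);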
-
- 13 April 2017, 1 commit
-
-
Committed by Paul Burton

Commit 41c594ab ("[MIPS] MT: Improved multithreading support.") added an else case to an if statement in do_page_fault() (which has since gained 2 leading underscores) for some unclear reason. If the condition in the if statement evaluates true then we execute a goto & branch elsewhere anyway, so the else has no effect. Combined with an #if 0 block with misleading indentation introduced in the same commit, it makes the code less clear than it could be.

Remove the unnecessary else statement & de-indent the printk within the #if 0 block in order to make the code easier for humans to parse.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: trivial@kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/15842/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
- 12 April 2017, 1 commit
-
-
Committed by Paul Burton

We always either target MIPS32/MIPS64 or microMIPS, and always include one & only one of uasm-mips.c or uasm-micromips.c. Therefore the abstraction in asm/uasm.h declaring functions for either ISA is redundant & needless. Remove it to simplify the code.

This is largely the result of the following substitutions:

    :%s/ISAOPC(\(.\{-}\))/uasm_i##\1/
    :%s/ISAFUNC(\(.\{-}\))/\1/

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Paul Burton <paul.burton@imgtec.com>
Patchwork: https://patchwork.linux-mips.org/patch/15844/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
- 10 April 2017, 2 commits
-
-
Committed by Alex Belits

Some users must have 4K pages while needing a 48-bit VA space size. The cleanest way to do this is to go to a 4-level page table for this case. Each page table level using order-0 pages adds 9 bits to the VA size (at 4K pages), so for four levels we get 9 * 4 + 12 == 48 bits. For the 4K page size case only, we add support functions for the PUD level of the page table tree; the TLB exception handlers also get an extra level of tree walk.

[david.daney@cavium.com: Forward port to v4.10.]
[david.daney@cavium.com: Forward port to v4.11.]

Signed-off-by: Alex Belits <alex.belits@cavium.com>
Signed-off-by: David Daney <david.daney@cavium.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Alex Belits <alex.belits@cavium.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/15312/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
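The arithmetic behind the 48-bit figure, as illustrative shift definitions (8-byte entries in a 4K table give 2^9 = 512 entries, i.e. 9 bits per level):

    /* 4K pages: each order-0 table holds 512 (2^9) entries */
    #define PMD_SHIFT	(PAGE_SHIFT + (PAGE_SHIFT - 3))	/* 12 + 9 = 21 */
    #define PUD_SHIFT	(PMD_SHIFT + (PAGE_SHIFT - 3))	/* 21 + 9 = 30 */
    #define PGDIR_SHIFT	(PUD_SHIFT + (PAGE_SHIFT - 3))	/* 30 + 9 = 39 */

    /* 9 (PGD) + 9 (PUD) + 9 (PMD) + 9 (PTE) + 12 (offset) = 48 bits */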
-
Committed by David Daney

The follow-on BPF JIT patches use the LHU instruction, so add it.

Signed-off-by: David Daney <david.daney@cavium.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Steven J. Hill <steven.hill@cavium.com>
Cc: linux-mips@linux-mips.org
Cc: netdev@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/15743/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
- 28 March 2017, 2 commits
-
-
Committed by James Hogan

Cache management is implemented separately for Cavium Octeon CPUs, so r4k_blast_[id]cache aren't available. Instead, for Octeon perform a local icache flush using local_flush_icache_range(), and for other platforms which don't use c-r4k.c use __flush_cache_all() / flush_icache_all().

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Daney <david.daney@cavium.com>
Cc: Andreas Herrmann <andreas.herrmann@caviumnetworks.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
-
Committed by James Hogan

The MAAR V bit has been renamed VL, since another bit called VH is added at the top of the register when it is extended to 64 bits on a 32-bit processor with XPA. Rename the V definition, fix the various users, and add definitions for the VH bit. Also add a definition for the MAARI Index field.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
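A sketch of how the renamed and added definitions might look in asm/mipsregs.h; the exact bit positions and cast macros are assumptions:

    #define MIPS_MAAR_VH	(_U64CAST_(1) << 63)	/* new: upper-half valid */
    #define MIPS_MAAR_VL	(_ULCAST_(1) << 0)	/* was MIPS_MAAR_V */
    #define MIPS_MAARI_INDEX	(_ULCAST_(0x3f) << 0)	/* new: MAARI Index field */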
-
- 22 March 2017, 2 commits
-
-
Committed by Huacai Chen

If scache.waysize is 0, r4k___flush_cache_all() will do nothing and thus cause bugs. Though vcache.waysize isn't used anywhere yet, we fix its calculation as well.

Signed-off-by: Huacai Chen <chenhc@lemote.com>
Cc: John Crispin <john@phrozen.org>
Cc: Steven J. Hill <Steven.Hill@caviumnetworks.com>
Cc: Fuxin Zhang <zhangfx@lemote.com>
Cc: Zhangjin Wu <wuzhangjin@gmail.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/15756/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Huacai Chen

On VTLB+FTLB platforms (such as Loongson-3A R2), the FTLB's pagesize is usually configured to be the same as PAGE_SIZE. In such a case, a huge page entry is not suitable to write into the FTLB.

Unfortunately, when a huge page is created, its page table entries haven't been created yet. The TLB refill handler will then fetch an invalid page table entry which has no "HUGE" bit, and this entry may be written to the FTLB. Since it is invalid, the TLB load/store handler will then use tlbwi to write the valid entry at the same place. However, the valid entry is a huge page entry which isn't suitable for the FTLB.

Our solution is to modify build_huge_handler_tail: flush the invalid old entry (whether it is in the FTLB or VTLB; not distinguishing the two reduces branches) and use tlbwr to write the valid new entry.

Signed-off-by: Rui Wang <wangr@lemote.com>
Signed-off-by: Huacai Chen <chenhc@lemote.com>
Cc: John Crispin <john@phrozen.org>
Cc: Steven J. Hill <Steven.Hill@caviumnetworks.com>
Cc: Fuxin Zhang <zhangfx@lemote.com>
Cc: Zhangjin Wu <wuzhangjin@gmail.com>
Cc: Huacai Chen <chenhc@lemote.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/15754/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
- 02 March 2017, 3 commits
-
-
Committed by Ingo Molnar

Update code that relied on sched.h including various MM types for them. This will allow us to remove the <linux/mm_types.h> include from <linux/sched.h>.

Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Ingo Molnar

We are going to split more MM APIs out of <linux/sched.h>, which will have to be picked up from a couple of .c files. The APIs that we are going to move are:

    arch_pick_mmap_layout()
    arch_get_unmapped_area()
    arch_get_unmapped_area_topdown()
    mm_update_next_owner()

Include the header in the files that are going to need it.

Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Committed by Ingo Molnar

We are going to split <linux/sched/signal.h> out of <linux/sched.h>, which will have to be picked up from other headers and a couple of .c files. Create a trivial placeholder <linux/sched/signal.h> file that just maps to <linux/sched.h>, to make this patch obviously correct and bisectable. Include the new header in the files that are going to need it.

Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
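The placeholder amounts to a one-line header; a sketch (the real file may differ in guards or comments):

    /* include/linux/sched/signal.h - placeholder mapping to <linux/sched.h> */
    #include <linux/sched.h>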
-
- 25 February 2017, 1 commit
-
-
Committed by Lucas Stach

The callers of the DMA alloc functions already provide the proper context GFP flags. Make sure to pass them through to the CMA allocator, to make the CMA compaction context aware.

Link: http://lkml.kernel.org/r/20170127172328.18574-3-l.stach@pengutronix.de
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Radim Krcmar <rkrcmar@redhat.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Alexander Graf <agraf@suse.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 03 February 2017, 2 commits
-
-
Committed by James Hogan

Export two TLB exception code generating functions so that KVM can construct a fast TLB refill handler for guest context without reinventing the wheel quite so much.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
-
Committed by James Hogan

Export pmd_init(), invalid_pmd_table and tlbmiss_handler_setup_pgd to GPL kernel modules so that MIPS KVM can use the inline page table management functions and switch between page tables:

- pmd_init() will be used directly by KVM to initialise newly allocated pmd tables with invalid lower level table pointers.
- invalid_pmd_table is used by pud_present(), pud_none(), and pud_clear(), which KVM will use to test and clear pud entries.
- tlbmiss_handler_setup_pgd() will be called by KVM entry code to switch to the appropriate GVA page tables.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
-
- 02 February 2017, 1 commit
-
-
Committed by James Hogan

pgd_alloc() references init_mm, which is not exported to modules. In order for KVM to be able to use pgd_alloc() to allocate GVA page tables, move pgd_alloc() into a new pgtable.c file and export it to modules.

Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
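A sketch of the moved function, close to the MIPS shape but hedged (PGD_ORDER, pgd_init() and the init_mm copy are assumed from the pre-existing code):

    /* arch/mips/mm/pgtable.c */
    #include <linux/export.h>
    #include <linux/mm.h>
    #include <linux/string.h>
    #include <asm/pgalloc.h>

    pgd_t *pgd_alloc(struct mm_struct *mm)
    {
    	pgd_t *ret, *init;

    	ret = (pgd_t *) __get_free_pages(GFP_KERNEL, PGD_ORDER);
    	if (ret) {
    		/* fill user entries with invalid pointers, copy the
    		 * kernel half from init_mm's pgd */
    		init = pgd_offset(&init_mm, 0UL);
    		pgd_init((unsigned long)ret);
    		memcpy(ret + USER_PTRS_PER_PGD, init + USER_PTRS_PER_PGD,
    		       (PTRS_PER_PGD - USER_PTRS_PER_PGD) * sizeof(pgd_t));
    	}
    	return ret;
    }
    EXPORT_SYMBOL_GPL(pgd_alloc);	/* now visible to (GPL) modules */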
-
- 25 January 2017, 1 commit
-
-
Committed by Bart Van Assche

Most dma_map_ops structures are never modified. Constify these structures such that they can be write-protected. This patch has been generated as follows:

    git grep -l 'struct dma_map_ops' | xargs -d\\n sed -i \
      -e 's/struct dma_map_ops/const struct dma_map_ops/g' \
      -e 's/const struct dma_map_ops {/struct dma_map_ops {/g' \
      -e 's/^const struct dma_map_ops;$/struct dma_map_ops;/' \
      -e 's/const const struct dma_map_ops /const struct dma_map_ops /g';
    sed -i -e 's/const \(struct dma_map_ops intel_dma_ops\)/\1/' \
      $(git grep -l 'struct dma_map_ops intel_dma_ops');
    sed -i -e 's/const \(struct dma_map_ops dma_iommu_ops\)/\1/' \
      $(git grep -l 'struct dma_map_ops' | grep ^arch/powerpc);
    sed -i -e '/^struct vmd_dev {$/,/^};$/ s/const \(struct dma_map_ops[[:blank:]]dma_ops;\)/\1/' \
      -e '/^static void vmd_setup_dma_ops/,/^}$/ s/const \(struct dma_map_ops \*dest\)/\1/' \
      -e 's/const \(struct dma_map_ops \*dest = \&vmd->dma_ops\)/\1/' \
      drivers/pci/host/*.c
    sed -i -e '/^void __init pci_iommu_alloc(void)$/,/^}$/ s/dma_ops->/intel_dma_ops./' arch/ia64/kernel/pci-dma.c
    sed -i -e 's/static const struct dma_map_ops sn_dma_ops/static struct dma_map_ops sn_dma_ops/' arch/ia64/sn/pci/pci_dma.c
    sed -i -e 's/(const struct dma_map_ops \*)//' drivers/misc/mic/bus/vop_bus.c

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Juergen Gross <jgross@suse.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: linux-arch@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: Russell King <linux@armlinux.org.uk>
Cc: x86@kernel.org
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
- 03 January 2017, 9 commits
-
-
Committed by Paul Burton

In systems with CM3 & higher, the L2 cache is inclusive of the L1 dcache. Indicate this such that cpu_has_inclusive_pcaches evaluates true and we avoid some unnecessary cache ops during DMA cache maintenance.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14018/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Paul Burton

Physically indexed caches cannot suffer from virtual aliasing, so clear the MIPS_CACHE_ALIASES bit in order to ensure we don't do extra work avoiding aliasing that cannot happen.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14017/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Paul Burton

The L1 data cache in I6400 CPUs is indexed by physical address bits if an entry for the address is present in the DTLB early enough in the pipelined execution of a memory access instruction. If an entry is not present then it's indexed by virtual address bits, but hardware will check in a later pipeline stage, once a DTLB entry has been created, whether the virtual address bits used match the physical address bits, and if not will transparently restart the memory access instruction.

This means that although it isn't always physically indexed, it appears so to software & we can treat the I6400 L1 data cache as being physically indexed in order to avoid considering aliasing.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14016/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Paul Burton

Now that EXPORT_SYMBOL can be used from assembly source, move the EXPORT_SYMBOL invocations for the copy_page & clear_page functions to be alongside their definitions. With this change there are no longer any symbols exported from mips_ksyms.c, so remove the file.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14515/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Paul Burton

It's unclear why this wasn't always the case, but move the EXPORT_SYMBOL invocation for invalid_pte_table to be alongside its definition.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14511/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Paul Burton

When generating TLB exception handling code we write to memory reserved at the handle_tlbl, handle_tlbm & handle_tlbs symbols. Up until now the ISA bit has always been clear simply because the assembly code reserving the space for those functions places no instructions in them. In preparation for marking all LEAF functions as containing code, explicitly clear the ISA bit when calculating the addresses at which to write TLB exception handling code.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14507/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Matt Redfearn

arch_mmap_rnd() uses hard-coded limits of 16MB for the randomisation of mmap within 32-bit processes and 256MB within 64-bit processes. Since v4.4 other arches have supported tuning this value via /proc/sys/vm/mmap_rnd_bits. Add support for this to MIPS.

Set the minimum (default) number of bits of randomisation for 32-bit to 8, which with a 4k pagesize is unchanged from the current 16MB of total randomness. The minimum (default) for 64-bit is 12 bits; again with a 4k pagesize this is the same as the current 256MB. This patch is necessary for MIPS32 to pass the Android CTS tests, with the number of random bits set to 15.

Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Daniel Cashman <dcashman@android.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mips@linux-mips.org
Cc: kernel-hardening@lists.openwall.com
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/14617/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
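A hedged sketch of what the tunable arch_mmap_rnd() looks like; TASK_IS_32BIT_ADDR and the mmap_rnd_*_bits variables are assumed from the description:

    unsigned long arch_mmap_rnd(void)
    {
    	unsigned long rnd;

    #ifdef CONFIG_COMPAT
    	if (TASK_IS_32BIT_ADDR)
    		/* mmap_rnd_compat_bits is the sysctl-tunable value */
    		rnd = get_random_long() & ((1UL << mmap_rnd_compat_bits) - 1);
    	else
    #endif
    		rnd = get_random_long() & ((1UL << mmap_rnd_bits) - 1);

    	/* randomness is applied in whole-page units */
    	return rnd << PAGE_SHIFT;
    }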
-
Committed by Ralf Baechle

Fix the following build error with binutils 2.25:

      CC      arch/mips/mm/sc-ip22.o
    {standard input}: Assembler messages:
    {standard input}:132: Error: number (0x9000000080000000) larger than 32 bits
    {standard input}:159: Error: number (0x9000000080000000) larger than 32 bits
    {standard input}:200: Error: number (0x9000000080000000) larger than 32 bits
    scripts/Makefile.build:293: recipe for target 'arch/mips/mm/sc-ip22.o' failed
    make[1]: *** [arch/mips/mm/sc-ip22.o] Error 1

MIPS has used .set mips3 to temporarily switch the assembler to 64-bit mode in 64-bit kernels virtually forever. Binutils 2.25 broke this behaviour partially by happily accepting 64-bit instructions in .set mips3 mode but puking on 64-bit constants when generating 32-bit ELF. Binutils 2.26 restored the old behaviour again.

Fix the build with binutils 2.25 by open coding the offending

    dli $1, 0x9000000080000000

as

    li $1, 0x9000
    dsll $1, $1, 48

which is ugly but the only thing that will build on all binutils vintages.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Cc: stable@vger.kernel.org
-
Committed by Ralf Baechle

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
- 25 December 2016, 1 commit
-
-
Committed by Linus Torvalds

This was entirely automated, using the script by Al:

    PATT='^[[:blank:]]*#[[:blank:]]*include[[:blank:]]*<asm/uaccess.h>'
    sed -i -e "s!$PATT!#include <linux/uaccess.h>!" \
      $(git grep -l "$PATT"|grep -v ^include/linux/uaccess.h)

to do the replacement at the end of the merge window.

Requested-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 15 December 2016, 1 commit
-
-
Committed by Alexander Duyck

This change allows us to pass DMA_ATTR_SKIP_CPU_SYNC, which lets us avoid invoking cache line invalidation if the driver will just handle it via a sync_for_cpu or sync_for_device call.

Link: http://lkml.kernel.org/r/20161110113513.76501.32321.stgit@ahduyck-blue-test.jf.intel.com
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Keguang Zhang <keguang.zhang@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
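How a map routine can honour the new attribute; a sketch assuming the helper names from the MIPS dma-default.c code:

    static dma_addr_t mips_dma_map_page(struct device *dev, struct page *page,
    				    unsigned long offset, size_t size,
    				    enum dma_data_direction direction,
    				    unsigned long attrs)
    {
    	/* skip cache maintenance when the driver promises to sync
    	 * explicitly later via sync_for_cpu/sync_for_device */
    	if (!plat_device_is_coherent(dev) &&
    	    !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
    		__dma_sync(page, offset, size, direction);

    	return plat_map_dma_mem_page(dev, page) + offset;
    }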
-
- 25 November 2016, 1 commit
-
-
Committed by Matt Redfearn

Since commit 4bcc595c ("printk: reinstate KERN_CONT for printing continuation lines") the output from __do_page_fault on MIPS has been pretty unreadable due to the lack of KERN_CONT markers. Use pr_cont to provide the appropriate markers & restore the expected output.

Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/14544/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
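A minimal hedged example of the pattern (the message text is illustrative, not the exact __do_page_fault output):

    /* one logical line assembled from several printk calls */
    pr_alert("Unable to handle kernel paging request at virtual address %08lx",
    	 address);
    pr_cont(", epc == %08lx\n", epc);
    /* pr_cont continues the preceding line; without it, each call
     * would start a new line since commit 4bcc595c */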
-
- 24 November 2016, 1 commit
-
-
Committed by Paul Burton

Since MIPSr6 the Wired register is split into 2 fields, with the upper 16 bits of the register indicating a limit on the value that the wired entry count in the bottom 16 bits of the register can take. This means that simply reading the Wired register no longer gets us a valid TLB entry index; we instead need to retrieve only the lower 16 bits of the register. Introduce a new num_wired_entries() function which does this on MIPSr6 or higher and simply returns the value of the Wired register on older architecture revisions, and make use of it when reading the number of wired entries.

Since commit e710d666 ("MIPS: tlb-r4k: If there are wired entries, don't use TLBINVF") we have been using a non-zero number of wired entries to determine whether we should avoid use of the tlbinvf instruction (which would invalidate wired entries) and instead loop over TLB entries in local_flush_tlb_all(). This loop begins with the number of wired entries - or, before this patch, some large bogus TLB index on MIPSr6 systems. Thus since the aforementioned commit some MIPSr6 systems with FTLBs have been prone to leaving stale address translations in the FTLB & crashing in various weird & wonderful ways when we later observe the wrong memory.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14557/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
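A sketch of the new helper; the MIPSR6_WIRED_WIRED mask name for the lower 16 bits is an assumption:

    static inline unsigned int num_wired_entries(void)
    {
    	unsigned int wired = read_c0_wired();

    	/* on MIPSr6 the upper 16 bits hold the wired *limit*,
    	 * not part of the count - mask them off */
    	if (cpu_has_mips_r6)
    		wired &= MIPSR6_WIRED_WIRED;

    	return wired;
    }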
-
- 19 October 2016, 1 commit
-
-
Committed by Lorenzo Stoakes

This removes the 'write' and 'force' arguments from get_user_pages_unlocked() and replaces them with 'gup_flags' to make the use of FOLL_FORCE explicit in callers, as use of this flag can result in surprising behaviour (and hence bugs) within the mm subsystem.

Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
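An illustrative call-site change under the signatures of that era (the caller is hypothetical):

    struct page *page;
    long ret;

    /* before: numeric write/force arguments hide the intent */
    ret = get_user_pages_unlocked(addr, 1, 1, 0, &page);

    /* after: the flags, including FOLL_FORCE when needed, are
     * explicit at the call site */
    ret = get_user_pages_unlocked(addr, 1, &page, FOLL_WRITE);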
-
- 07 October 2016, 3 commits
-
-
Committed by Paul Burton

On some MIPS systems, a subset of devices may have DMA coherent with CPU caches. For example, in systems including a MIPS I/O Coherence Unit (IOCU), some devices may be connected to that IOCU whilst others are not.

Prior to this patch, we have a plat_device_is_coherent() function but no implementation which does anything besides return a global true or false, optionally chosen at runtime. For devices such as those described above this is insufficient.

Fix this by tracking DMA coherence on a per-device basis with a dma_coherent field in struct dev_archdata. Setting this from arch_setup_dma_ops() takes care of devices which set the dma-coherent property via device tree, and of any PCI devices beneath a bridge described in DT, automatically.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14349/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
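A hedged sketch of the per-device tracking; field placement and the arch_setup_dma_ops() signature are as of that kernel era and may differ in detail:

    /* arch/mips/include/asm/device.h */
    struct dev_archdata {
    	bool dma_coherent;	/* is this device cache-coherent? */
    };

    /* recorded from DT (dma-coherent property) via the core code */
    void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
    			const struct iommu_ops *iommu, bool coherent)
    {
    	dev->archdata.dma_coherent = coherent;
    }

    static inline bool plat_device_is_coherent(struct device *dev)
    {
    	return dev->archdata.dma_coherent;
    }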
-
Committed by Paul Burton

There are no cases where plat_device_is_coherent() will return zero whilst hw_coherentio is non-zero, and acting any differently in such a case doesn't make much sense - if a device is non-coherent with the CPU caches then access to memory "coherent" with DMA must be uncached. Clean up the nonsensical case.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14348/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Paul Burton

The coherentio variable has previously been used as a boolean value, indicating whether the user specified that coherent I/O should be enabled or disabled. It failed to take into account the case where the user expresses no preference, in which case it makes sense that we should default to coherent I/O if the hardware supports it (hw_coherentio is non-zero). Introduce an enum to clarify the 3 different values of coherentio & use it throughout the code, modifying plat_device_is_coherent() & r4k_cache_init() to take the default case into account.

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Paul Burton <paul.burton@imgtec.com>
Patchwork: https://patchwork.linux-mips.org/patch/14347/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
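A sketch of the three-state enum and its use, with names assumed from the description:

    enum coherent_io_user_state {
    	IO_COHERENCE_DEFAULT,	/* user expressed no preference */
    	IO_COHERENCE_ENABLED,	/* coherentio=1 on the command line */
    	IO_COHERENCE_DISABLED,	/* coherentio=0 on the command line */
    };

    static inline int plat_device_is_coherent(struct device *dev)
    {
    	switch (coherentio) {
    	case IO_COHERENCE_ENABLED:
    		return 1;
    	case IO_COHERENCE_DISABLED:
    		return 0;
    	default:
    		/* no preference: follow hardware capability */
    		return hw_coherentio;
    	}
    }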
-
- 05 October 2016, 2 commits
-
-
Committed by Paul Gortmaker

Historically a lot of these #includes existed because we did not have a distinction between what was modular code and what was providing support to modules via EXPORT_SYMBOL and friends. That changed when we forked out support for the latter into the export.h file. This means we should be able to reduce the usage of module.h in code that is obj-y in a Makefile or bool in Kconfig. The advantage in doing so is that module.h itself sources about 15 other headers, adding significantly to what we feed cpp, and it can obscure what headers we are effectively using.

Since module.h was the source for init.h (for __init) and for export.h (for EXPORT_SYMBOL), we consider each obj-y/bool instance for the presence of either and replace as needed.

Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/14033/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Matt Redfearn

When adding a wired entry to the TLB via add_wired_entry, the TLB is flushed with local_flush_tlb_all, which on CPUs with TLBINV results in the new wired entry being flushed again. The TLBINV instruction operates on all applicable TLB entries and is unaffected by the setting of the Wired register. Therefore if the TLB has any wired entries, fall back to iterating over the entries rather than blasting them all using TLBINVF.

Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: Bjorn Andersson <bjorn.andersson@linaro.org>
Cc: Ohad Ben-Cohen <ohad@wizery.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: lisa.parratt@imgtec.com
Cc: Hugh Dickins <hughd@google.com>
Cc: Huacai Chen <chenhc@lemote.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mips@linux-mips.org
Cc: linux-remoteproc@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/14283/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
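A condensed, hedged sketch of the resulting flush logic (EntryLo zeroing and the per-set tlbinvf handling of the real code are abbreviated):

    static void flush_tlb_all_sketch(void)
    {
    	int entry = read_c0_wired();

    	/* tlbinvf invalidates wired entries too, so only use it
    	 * when nothing is wired */
    	if (cpu_has_tlbinv && !entry) {
    		write_c0_index(0);
    		mtc0_tlbw_hazard();
    		tlbinvf();
    		return;
    	}

    	/* otherwise skip the wired entries and invalidate the rest
    	 * one at a time */
    	while (entry < current_cpu_data.tlbsize) {
    		write_c0_entryhi(UNIQUE_ENTRYHI(entry));
    		write_c0_index(entry);
    		mtc0_tlbw_hazard();
    		tlb_write_indexed();
    		entry++;
    	}
    	tlbw_use_hazard();
    }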
-