- 10 November 2019: 1 commit
-
-
By Jonas Gorski
[ Upstream commit e4f5cb1a9b27c0f94ef4f5a0178a3fde2d3d0e9e ]

The vectors span more than one byte, so mark them as arrays.

Fixes the following build error when building with GCC 8.3:

  In file included from ./include/linux/string.h:19,
                   from ./include/linux/bitmap.h:9,
                   from ./include/linux/cpumask.h:12,
                   from ./arch/mips/include/asm/processor.h:15,
                   from ./arch/mips/include/asm/thread_info.h:16,
                   from ./include/linux/thread_info.h:38,
                   from ./include/asm-generic/preempt.h:5,
                   from ./arch/mips/include/generated/asm/preempt.h:1,
                   from ./include/linux/preempt.h:81,
                   from ./include/linux/spinlock.h:51,
                   from ./include/linux/mmzone.h:8,
                   from ./include/linux/bootmem.h:8,
                   from arch/mips/bcm63xx/prom.c:10:
  arch/mips/bcm63xx/prom.c: In function 'prom_init':
  ./arch/mips/include/asm/string.h:162:11: error: '__builtin_memcpy' forming offset [2, 32] is out of the bounds [0, 1] of object 'bmips_smp_movevec' with type 'char' [-Werror=array-bounds]
     __ret = __builtin_memcpy((dst), (src), __len); \
             ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  arch/mips/bcm63xx/prom.c:97:3: note: in expansion of macro 'memcpy'
     memcpy((void *)0xa0000200, &bmips_smp_movevec, 0x20);
     ^~~~~~
  In file included from arch/mips/bcm63xx/prom.c:14:
  ./arch/mips/include/asm/bmips.h:80:13: note: 'bmips_smp_movevec' declared here
   extern char bmips_smp_movevec;

Fixes: 18a1eef9 ("MIPS: BMIPS: Introduce bmips.h")
Signed-off-by: Jonas Gorski <jonas.gorski@gmail.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Paul Burton <paulburton@kernel.org>
Cc: linux-mips@vger.kernel.org
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: James Hogan <jhogan@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
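
A minimal sketch of the declaration change the commit describes (the wrapper function below is illustrative; in the kernel the copy happens in prom_init()):

    #include <string.h>

    /*
     * Previously declared as "extern char bmips_smp_movevec;", which tells
     * GCC the object is exactly one byte, so copying 0x20 bytes out of it
     * trips -Warray-bounds. An unsized array matches the real extent of the
     * relocatable vector code provided by assembly.
     */
    extern char bmips_smp_movevec[];

    static void install_smp_vector(void)
    {
            /* callers drop the '&' now that the symbol is an array */
            memcpy((void *)0xa0000200, bmips_smp_movevec, 0x20);
    }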
-
- 06 November 2019: 2 commits
-
-
By Thomas Bogendoerfer
[ Upstream commit 46f1619500d022501a4f0389f9f4c349ab46bb86 ]

Commit ac7c3e4ff401 ("compiler: enable CONFIG_OPTIMIZE_INLINING forcibly")
allows the compiler to uninline functions marked as 'inline'. In the case of
__xchg this would cause a reference to the function
__xchg_called_with_bad_pointer, which is an error case for catching bugs and
will not happen for correct code, if __xchg is inlined.

Signed-off-by: Thomas Bogendoerfer <tbogendoerfer@suse.de>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: James Hogan <jhogan@kernel.org>
Cc: linux-mips@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Sasha Levin <sashal@kernel.org>
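
A sketch of the pattern involved, assuming the fix forces inlining with __always_inline; the size-specific helpers below are stand-ins for the real ll/sc-based assembly, not the verbatim asm/cmpxchg.h:

    /*
     * Deliberately never defined: if a call to this survives to link time
     * the build fails, flagging an unsupported operand size. That trick only
     * works if the switch below is folded away at compile time, hence
     * __always_inline rather than plain inline.
     */
    extern unsigned long __xchg_called_with_bad_pointer(void);

    extern unsigned long __xchg_asm32(volatile void *ptr, unsigned long x);
    extern unsigned long __xchg_asm64(volatile void *ptr, unsigned long x);

    static __always_inline unsigned long __xchg(volatile void *ptr,
                                                unsigned long x, int size)
    {
            switch (size) {
            case 4:
                    return __xchg_asm32(ptr, x);    /* ll/sc based  */
            case 8:
                    return __xchg_asm64(ptr, x);    /* lld/scd based */
            default:
                    return __xchg_called_with_bad_pointer();
            }
    }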
-
By Thomas Bogendoerfer
[ Upstream commit 88356d09904bc606182c625575237269aeece22e ]

Commit ac7c3e4ff401 ("compiler: enable CONFIG_OPTIMIZE_INLINING forcibly")
allows the compiler to uninline functions marked as 'inline'. In the case of
cmpxchg this would cause a reference to the function
__cmpxchg_called_with_bad_pointer, which is an error case for catching bugs
and will not happen for correct code, if __cmpxchg is inlined.

Signed-off-by: Thomas Bogendoerfer <tbogendoerfer@suse.de>
[paul.burton@mips.com: s/__cmpxchd/__cmpxchg in subject]
Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: James Hogan <jhogan@kernel.org>
Cc: linux-mips@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
- 12 October 2019: 1 commit
-
-
By Jiaxun Yang
commit d2f965549006acb865c4638f1f030ebcefdc71f6 upstream.

Recently, binutils had split the Loongson-3 Extensions into four ASEs: MMI,
CAM, EXT, EXT2. This patch does the same thing in the kernel and exposes them
in cpuinfo so applications can probe supported ASEs at runtime.

Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Cc: Huacai Chen <chenhc@lemote.com>
Cc: Yunqiang Su <ysu@wavecomp.com>
Cc: stable@vger.kernel.org # v4.14+
Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: linux-mips@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 08 October 2019: 1 commit
-
-
By Zhou Yanjie
[ Upstream commit 053951dda71ecb4b554a2cdbe26f5f6f9bee9dd2 ]

In order to further reduce power consumption, the XBurst core by default
attempts to avoid branch target buffer lookups by detecting & special casing
loops. This feature causes BogoMIPS and lpj to be calculated incorrectly. Set
cp0 config7 bit 4 to disable this feature.

Signed-off-by: Zhou Yanjie <zhouyanjie@zoho.com>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: linux-mips@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: ralf@linux-mips.org
Cc: paul@crapouillou.net
Cc: jhogan@kernel.org
Cc: malat@debian.org
Cc: gregkh@linuxfoundation.org
Cc: tglx@linutronix.de
Cc: allison@lohutok.net
Cc: syq@debian.org
Cc: chenhc@lemote.com
Cc: jiaxun.yang@flygoat.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
- 26 July 2019: 1 commit
-
-
By Stefan Hellermann
[ Upstream commit db13a5ba2732755cf13320f3987b77cf2a71e790 ]

While trying to get the uart with parity working I found that setting even
parity enabled odd parity instead. Fix the register settings to match the
datasheet of the AR9331.

A similar patch was created by 8devices, but not sent upstream:
https://github.com/8devices/openwrt-8devices/commit/77c5586ade3bb72cda010afad3f209ed0c98ea7c

Signed-off-by: Stefan Hellermann <stefan@the2masters.de>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: linux-mips@vger.kernel.org
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
- 03 July 2019: 1 commit
-
-
By Paul Burton
commit 6d4d367d0e9ffab4d64a3436256a6a052dc1195d upstream.

The MIPS GIC contains a block of registers used to map local interrupts to a
particular CPU interrupt pin. Since these registers are found at a consecutive
range of addresses we access them using an index, via the
(read|write)_gic_v[lo]_map accessor functions. We currently use values from
enum mips_gic_local_interrupt as those indices.

Unfortunately whilst enum mips_gic_local_interrupt provides the correct
offsets for bits in the pending & mask registers, the ordering of the map
registers is subtly different... Compared with the ordering of pending & mask
bits, the map registers move the FDC from the end of the list to index 3 after
the timer interrupt. As a result the performance counter & software interrupts
are therefore at indices 4-6 rather than indices 3-5.

Notably this causes problems with performance counter interrupts being
incorrectly mapped on some systems, and presumably will also cause problems
for FDC interrupts.

Introduce a function to map from enum mips_gic_local_interrupt to the index of
the corresponding map register, and use it to ensure we access the map
registers for the correct interrupts.

Signed-off-by: Paul Burton <paul.burton@mips.com>
Fixes: a0dc5cb5 ("irqchip: mips-gic: Simplify gic_local_irq_domain_map()")
Fixes: da61fcf9 ("irqchip: mips-gic: Use irq_cpu_online to (un)mask all-VP(E) IRQs")
Reported-and-tested-by: Archer Yan <ayan@wavecomp.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Jason Cooper <jason@lakedaemon.net>
Cc: stable@vger.kernel.org # v4.14+
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
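
A hypothetical sketch of that index translation (the real helper in the irqchip driver may differ in name and shape; the GIC_LOCAL_INT_* values come from the MIPS GIC header):

    /*
     * Pending/mask bit order ends with FDC; the VL/VO map registers instead
     * place FDC at index 3, directly after the timer, pushing the perfcount
     * and software interrupts up by one.
     */
    static unsigned int gic_local_int_to_map_index(unsigned int intr)
    {
            if (intr == GIC_LOCAL_INT_FDC)
                    return GIC_LOCAL_INT_TIMER + 1;
            if (intr > GIC_LOCAL_INT_TIMER)
                    return intr + 1;
            return intr;
    }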
-
- 27 March 2019: 1 commit
-
-
By Archer Yan
commit 47c25036b60f27b86ab44b66a8861bcf81cde39b upstream.

Insert a branch instruction instead of a NOP to make sure the assembler
doesn't patch code into the forbidden slot. In the jump label function it
might be possible to patch Control Transfer Instructions (CTIs) into the
forbidden slot, which will generate a Reserved Instruction exception on MIPS
release 6.

Signed-off-by: Archer Yan <ayan@wavecomp.com>
Reviewed-by: Paul Burton <paul.burton@mips.com>
[paul.burton@mips.com:
  - Add MIPS prefix to subject.
  - Mark for stable from v4.0, which introduced r6 support, onwards.]
Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: linux-mips@vger.kernel.org
Cc: stable@vger.kernel.org # v4.0+
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 24 March 2019: 1 commit
-
-
By Sean Christopherson
commit 152482580a1b0accb60676063a1ac57b2d12daf6 upstream.

kvm_arch_memslots_updated() is at this point in time an x86-specific hook for
handling MMIO generation wraparound. x86 stashes 19 bits of the memslots
generation number in its MMIO sptes in order to avoid full page fault walks
for repeat faults on emulated MMIO addresses. Because only 19 bits are used,
wrapping the MMIO generation number is possible, if unlikely.
kvm_arch_memslots_updated() alerts x86 that the generation has changed so that
it can invalidate all MMIO sptes in case the effective MMIO generation has
wrapped so as to avoid using a stale spte, e.g. a (very) old spte that was
created with generation==0.

Given that the purpose of kvm_arch_memslots_updated() is to prevent consuming
stale entries, it needs to be called before the new generation is propagated
to memslots. Invalidating the MMIO sptes after updating memslots means that
there is a window where a vCPU could dereference the new memslots generation,
e.g. 0, and incorrectly reuse an old MMIO spte that was created with
(pre-wrap) generation==0.

Fixes: e59dbe09 ("KVM: Introduce kvm_arch_memslots_updated()")
Cc: <stable@vger.kernel.org>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 13 February 2019: 1 commit
-
-
By Linus Walleij
[ Upstream commit 0c901c0566fb4edc2631c3786e5085a037be91f8 ]

Modify the JZ4740 driver to retrieve the card detect and write protect GPIO
pins from GPIO descriptors instead of hard-coded global numbers. Augment the
only board file using this in the process and cut down on passed-in platform
data.

Preserve the code setting the caps2 flags for CD and WP as active low or high
since the slot GPIO code currently ignores the gpiolib polarity inversion
semantics and uses the raw accessors to read the GPIO lines, but set the right
polarity flags in the descriptor table for jz4740.

Cc: Paul Cercueil <paul@crapouillou.net>
Cc: linux-mips@linux-mips.org
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Paul Burton <paul.burton@mips.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
- 10 January 2019: 5 commits
-
-
By Paul Burton
commit 66a4059ba72c23ae74a7c702894ff76c4b7eac1f upstream.

MIPS' asm/mmzone.h includes the machine/platform mmzone.h unconditionally, but
since commit bb53fdf395ee ("MIPS: c-r4k: Add r4k_blast_scache_node for
Loongson-3") it is included by asm/r4kcache.h for all r4k-style configs
regardless of CONFIG_NEED_MULTIPLE_NODES.

This is problematic when CONFIG_NEED_MULTIPLE_NODES=n because both the
loongson3 & ip27 mmzone.h headers unconditionally define the NODE_DATA
preprocessor macro which is already defined by linux/mmzone.h, resulting in
the following build error:

  In file included from ./arch/mips/include/asm/mmzone.h:10,
                   from ./arch/mips/include/asm/r4kcache.h:23,
                   from arch/mips/mm/c-r4k.c:33:
  ./arch/mips/include/asm/mach-loongson64/mmzone.h:48: error: "NODE_DATA" redefined [-Werror]
   #define NODE_DATA(n) (&__node_data[(n)]->pglist)

  In file included from ./include/linux/topology.h:32,
                   from ./include/linux/irq.h:19,
                   from ./include/asm-generic/hardirq.h:13,
                   from ./arch/mips/include/asm/hardirq.h:16,
                   from ./include/linux/hardirq.h:9,
                   from arch/mips/mm/c-r4k.c:11:
  ./include/linux/mmzone.h:907: note: this is the location of the previous definition
   #define NODE_DATA(nid) (&contig_page_data)

Resolve this by only including the machine mmzone.h when
CONFIG_NEED_MULTIPLE_NODES=y, which also removes the need for the empty
mach-generic version of the header which we delete.

Signed-off-by: Paul Burton <paul.burton@mips.com>
Fixes: bb53fdf395ee ("MIPS: c-r4k: Add r4k_blast_scache_node for Loongson-3")
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
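
The resulting header shape is roughly the following, a sketch rather than the verbatim asm/mmzone.h:

    #ifndef _ASM_MMZONE_H_
    #define _ASM_MMZONE_H_

    #include <asm/page.h>

    #ifdef CONFIG_NEED_MULTIPLE_NODES
    /* machine mmzone.h: provides NODE_DATA(), node address macros, etc. */
    # include <mmzone.h>
    #endif

    #endif /* _ASM_MMZONE_H_ */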
-
By Huacai Chen
commit db1ce3f5d01d2d6d5714aefba0159d2cb5167a0b upstream.

Commit 4936084c ("MIPS: Cleanup R10000_LLSC_WAR logic in atomic.h") introduced
a mistake in atomic64_fetch_##op##_relaxed(): it forgot to delete
R10000_LLSC_WAR in the if-condition. So fix it.

Fixes: 4936084c ("MIPS: Cleanup R10000_LLSC_WAR logic in atomic.h")
Signed-off-by: Huacai Chen <chenhc@lemote.com>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: Joshua Kinard <kumba@gentoo.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Steven J . Hill <Steven.Hill@cavium.com>
Cc: Fuxin Zhang <zhangfx@lemote.com>
Cc: Zhangjin Wu <wuzhangjin@gmail.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # 4.19+
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
By Paul Burton
commit ff4dd232ec45a0e45ea69f28f069f2ab22b4908a upstream.

ASIDs have always been stored as unsigned longs, ie. 32 bits on MIPS32
kernels. This is problematic because it is feasible for the ASID version to
overflow & wrap around to zero.

We currently attempt to handle this overflow by simply setting the ASID
version to 1, using asid_first_version(), but we make no attempt to account
for the fact that there may be mm_structs with stale ASIDs that have versions
which we now reuse due to the overflow & wrap around.

Encountering this requires that:

  1) A struct mm_struct X is active on CPU A using ASID (V,n).

  2) That mm is not used on CPU A for the length of time that it takes for
     CPU A's asid_cache to overflow & wrap around to the same version V that
     the mm had in step 1. During this time tasks using the mm could either
     be sleeping or only scheduled on other CPUs.

  3) Some other mm Y becomes active on CPU A and is allocated the same ASID
     (V,n).

  4) mm X now becomes active on CPU A again, and now incorrectly has the same
     ASID as mm Y.

Where struct mm_struct ASIDs are represented above in the format (version,
EntryHi.ASID), and on a typical MIPS32 system version will be 24 bits wide &
EntryHi.ASID will be 8 bits wide.

The length of time required in step 2 is highly dependent upon the CPU &
workload, but for a hypothetical 2GHz CPU running a workload which generates a
new ASID every 10000 cycles this period is around 248 days. Due to this long
period of time & the fact that tasks need to be scheduled in just the right
(or wrong, depending upon your inclination) way, this is obviously a difficult
bug to encounter but it's entirely possible as evidenced by reports.

In order to fix this, simply extend ASIDs to 64 bits even on MIPS32 builds.
This will extend the period of time required for the hypothetical system above
to encounter the problem from 28 days to around 3 trillion years, which feels
safely outside of the realms of possibility.

The cost of this is slightly more generated code in some commonly executed
paths, but this is pretty minimal:

                         | Code Size Gain | Percentage
  -----------------------|----------------|-------------
    decstation_defconfig |           +270 |     +0.00%
    32r2el_defconfig     |           +652 |     +0.01%
    32r6el_defconfig     |          +1000 |     +0.01%

I have been unable to measure any change in performance of the LMbench lat_ctx
or lat_proc tests resulting from the 64b ASIDs on either
32r2el_defconfig+interAptiv or 32r6el_defconfig+I6500 systems.

Signed-off-by: Paul Burton <paul.burton@mips.com>
Suggested-by: James Hogan <jhogan@kernel.org>
References: https://lore.kernel.org/linux-mips/80B78A8B8FEE6145A87579E8435D78C30205D5F3@fzex.ruijie.com.cn/
References: https://lore.kernel.org/linux-mips/1488684260-18867-1-git-send-email-jiwei.sun@windriver.com/
Cc: Jiwei Sun <jiwei.sun@windriver.com>
Cc: Yu Huabing <yhb@ruijie.com.cn>
Cc: stable@vger.kernel.org # 2.6.12+
Cc: linux-mips@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
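
A loose sketch of what the widening amounts to; the type and macro names here are illustrative, not the exact kernel definitions, and the real ASID mask is per-CPU:

    /*
     * Hold (version << asid_bits | EntryHi.ASID) in a fixed 64-bit type even
     * on MIPS32, so the version field is 56 bits rather than 24 and cannot
     * realistically wrap within a machine's lifetime.
     */
    typedef u64 mips_asid_t;

    #define ASID_BITS           8       /* width of EntryHi.ASID */
    #define ASID_MASK           ((mips_asid_t)((1u << ASID_BITS) - 1))
    #define ASID_VERSION(a)     ((a) & ~ASID_MASK)
    #define ASID_FIRST_VERSION  ((mips_asid_t)1 << ASID_BITS)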
-
By Huacai Chen
commit 92aa0718c9fa5160ad2f0e7b5bffb52f1ea1e51a upstream.

This patch is borrowed from ARM64 to ensure pmd_present() returns false after
pmd_mknotpresent(). This is needed for THP.

References: 5bb1cc0f ("arm64: Ensure pmd_present() returns false after pmd_mknotpresent()")
Reviewed-by: James Hogan <jhogan@kernel.org>
Signed-off-by: Huacai Chen <chenhc@lemote.com>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/21135/
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: James Hogan <james.hogan@mips.com>
Cc: Steven J . Hill <Steven.Hill@cavium.com>
Cc: linux-mips@linux-mips.org
Cc: Fuxin Zhang <zhangfx@lemote.com>
Cc: Zhangjin Wu <wuzhangjin@gmail.com>
Cc: <stable@vger.kernel.org> # 3.8+
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
By Huacai Chen
commit bb53fdf395eed103f85061bfff3b116cee123895 upstream.

For multi-node Loongson-3 (NUMA configuration), r4k_blast_scache() can only
flush Node-0's scache. So we add r4k_blast_scache_node() by using (CAC_BASE |
(node_id << NODE_ADDRSPACE_SHIFT)) instead of CKSEG0 as the start address.

Signed-off-by: Huacai Chen <chenhc@lemote.com>
[paul.burton@mips.com: Include asm/mmzone.h from asm/r4kcache.h for
 nid_to_addrbase(). Add asm/mach-generic/mmzone.h to allow inclusion for all
 platforms.]
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/21129/
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: James Hogan <james.hogan@mips.com>
Cc: Steven J . Hill <Steven.Hill@cavium.com>
Cc: linux-mips@linux-mips.org
Cc: Fuxin Zhang <zhangfx@lemote.com>
Cc: Zhangjin Wu <wuzhangjin@gmail.com>
Cc: <stable@vger.kernel.org> # 3.15+
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 08 December 2018: 1 commit
-
-
By Dmitry V. Levin
commit c50cbd85cd7027d32ac5945bb60217936b4f7eaf upstream.

When checking for the TIF_32BIT_REGS flag, mips_get_syscall_arg() should use
the task specified as its argument instead of the current task.

This potentially affects all syscall_get_arguments() users who specify tasks
different from the current.

Fixes: c0ff3c53 ("MIPS: Enable HAVE_ARCH_TRACEHOOK.")
Signed-off-by: Dmitry V. Levin <ldv@altlinux.org>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/21185/
Cc: Elvira Khabirova <lineprinter@altlinux.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: James Hogan <jhogan@kernel.org>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Cc: stable@vger.kernel.org # v3.13+
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 21 November 2018: 1 commit
-
-
By Huacai Chen
[ Upstream commit 360fe725 ]

After commit e509bd7d ("genirq: Allow migration of chained interrupts by
installing default action") Loongson-3 fails here:

  setup_irq(LOONGSON_HT1_IRQ, &cascade_irqaction);

This is because neither chained_action nor cascade_irqaction has the
IRQF_SHARED flag. This causes Loongson-3 resume to fail because the HPET timer
interrupt can't be delivered during S3. So we set the irqchip of the chained
irq to loongson_irq_chip, which doesn't disable the chained irq in CP0.Status.

Cc: stable@vger.kernel.org
Signed-off-by: Huacai Chen <chenhc@lemote.com>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/20434/
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: James Hogan <jhogan@kernel.org>
Cc: linux-mips@linux-mips.org
Cc: Fuxin Zhang <zhangfx@lemote.com>
Cc: Zhangjin Wu <wuzhangjin@gmail.com>
Cc: Huacai Chen <chenhuacai@gmail.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
- 14 November 2018: 1 commit
-
-
By Huacai Chen
[ Upstream commit c61c7def ]

Commit ea7e0480 ("MIPS: VDSO: Always map near top of user memory") set
VDSO_RANDOMIZE_SIZE to 256MB for 64-bit kernels. But taking a look at
arch/mips/mm/mmap.c we can see that MIN_GAP is 128MB, which means the
mmap_base may be at (user_address_top - 128MB). This leaves the stack
surrounded by mmapped areas, so stack expansion fails and causes a
segmentation fault. Therefore, VDSO_RANDOMIZE_SIZE should be less than
MIN_GAP, and this patch reduces it to 64MB.

Signed-off-by: Huacai Chen <chenhc@lemote.com>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Fixes: ea7e0480 ("MIPS: VDSO: Always map near top of user memory")
Patchwork: https://patchwork.linux-mips.org/patch/20910/
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: James Hogan <jhogan@kernel.org>
Cc: linux-mips@linux-mips.org
Cc: Fuxin Zhang <zhangfx@lemote.com>
Cc: Zhangjin Wu <wuzhangjin@gmail.com>
Cc: Huacai Chen <chenhuacai@gmail.com>
Cc: stable@vger.kernel.org # 4.19
Signed-off-by: Sasha Levin <sashal@kernel.org>
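
The change boils down to something like this (a sketch; in the kernel the macro sits in the VDSO mapping code):

    #ifdef CONFIG_64BIT
    /* was (256UL << 20); must stay below MIN_GAP (128 MB) so randomizing the
     * VDSO cannot leave the stack boxed in by mmapped areas */
    #define VDSO_RANDOMIZE_SIZE     (64UL << 20)
    #else
    #define VDSO_RANDOMIZE_SIZE     (1UL << 20)
    #endif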
-
- 29 September 2018: 1 commit
-
-
By Paul Burton
When using the legacy mmap layout, for example triggered using ulimit -s
unlimited, get_unmapped_area() fills memory from bottom to top starting from a
fairly low address near TASK_UNMAPPED_BASE.

This placement is suboptimal if the user application wishes to allocate large
amounts of heap memory using the brk syscall. With the VDSO being located low
in the user's virtual address space, the amount of space available for access
using brk is limited much more than it was prior to the introduction of the
VDSO. For example:

  # ulimit -s unlimited; cat /proc/self/maps
  00400000-004ec000 r-xp 00000000 08:00 71436      /usr/bin/coreutils
  004fc000-004fd000 rwxp 000ec000 08:00 71436      /usr/bin/coreutils
  004fd000-0050f000 rwxp 00000000 00:00 0
  00cc3000-00ce4000 rwxp 00000000 00:00 0          [heap]
  2ab96000-2ab98000 r--p 00000000 00:00 0          [vvar]
  2ab98000-2ab99000 r-xp 00000000 00:00 0          [vdso]
  2ab99000-2ab9d000 rwxp 00000000 00:00 0
  ...

Resolve this by adjusting STACK_TOP to reserve space for the VDSO & providing
an address hint to get_unmapped_area() causing it to use this space even when
using the legacy mmap layout.

We reserve enough space for the VDSO, plus 1MB or 256MB for 32 bit & 64 bit
systems respectively within which we randomize the VDSO base address.
Previously this randomization was taken care of by the mmap base address
randomization performed by arch_mmap_rnd(). The 1MB & 256MB sizes are somewhat
arbitrary but chosen such that we have some randomization without taking up
too much of the user's virtual address space, which is often in short supply
for 32 bit systems.

With this the VDSO is always mapped at a high address, leaving lots of space
for statically linked programs to make use of brk:

  # ulimit -s unlimited; cat /proc/self/maps
  00400000-004ec000 r-xp 00000000 08:00 71436      /usr/bin/coreutils
  004fc000-004fd000 rwxp 000ec000 08:00 71436      /usr/bin/coreutils
  004fd000-0050f000 rwxp 00000000 00:00 0
  00c28000-00c49000 rwxp 00000000 00:00 0          [heap]
  ...
  7f67c000-7f69d000 rwxp 00000000 00:00 0          [stack]
  7f7fc000-7f7fd000 rwxp 00000000 00:00 0
  7fcf1000-7fcf3000 r--p 00000000 00:00 0          [vvar]
  7fcf3000-7fcf4000 r-xp 00000000 00:00 0          [vdso]

Signed-off-by: Paul Burton <paul.burton@mips.com>
Reported-by: Huacai Chen <chenhc@lemote.com>
Fixes: ebb5e78c ("MIPS: Initial implementation of a VDSO")
Cc: Huacai Chen <chenhc@lemote.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # v4.4+
-
- 12 September 2018: 1 commit
-
-
By Hauke Mehrtens
dma_zalloc_coherent() now crashes if no dev pointer is given. Add a dev
pointer to the ltq_dma_channel structure and have the drivers using it fill it
in. This fixes a bug introduced in kernel 4.19.

Signed-off-by: Hauke Mehrtens <hauke@hauke-m.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 07 September 2018: 1 commit
-
-
By Marc Zyngier
kvm_unmap_hva is long gone, and we only have kvm_unmap_hva_range to deal with.
Drop the now obsolete code.

Fixes: fb1522e0 ("KVM: update to new mmu_notifier semantic v2")
Cc: James Hogan <jhogan@kernel.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
-
- 22 August 2018: 1 commit
-
-
By Paul Burton
Some versions of GCC for the MIPS architecture suffer from a bug which can
lead to instructions from beyond an unreachable statement being incorrectly
reordered into earlier branch delay slots if the unreachable statement is the
only content of a case in a switch statement. This can lead to seemingly
random behaviour, such as invalid memory accesses from incorrectly reordered
loads or stores, and link failures on microMIPS builds.

See this potential GCC fix for details:
https://gcc.gnu.org/ml/gcc-patches/2015-09/msg00360.html

Runtime problems resulting from this bug were initially observed using a
maltasmvp_defconfig v4.4 kernel built using GCC 4.9.2 (from a Codescape SDK
2015.06-05 toolchain), with the result being an address exception taken after
log messages about the L1 caches (during probe of the L2 cache):

  Initmem setup node 0 [mem 0x0000000080000000-0x000000009fffffff]
  VPE topology {2,2} total 4
  Primary instruction cache 64kB, VIPT, 4-way, linesize 32 bytes.
  Primary data cache 64kB, 4-way, PIPT, no aliases, linesize 32 bytes
  <AdEL exception here>

This is early enough that the kernel exception vectors are not in use, so any
further output depends upon the bootloader. This is reproducible in QEMU where
no further output occurs - ie. the system hangs here. Given the nature of the
bug it may potentially be hit with differing symptoms. The bug is known to
affect GCC versions as recent as 7.3, and it is unclear whether GCC 8 fixed it
or just happens not to encounter the bug in the testcase found at the link
above due to differing optimizations.

This bug can be worked around by placing a volatile asm statement, which GCC
is prevented from reordering past, prior to the __builtin_unreachable call.
That was actually done already for other reasons by commit 173a3efd ("bug.h:
work around GCC PR82365 in BUG()"), but creates problems for microMIPS builds
due to the lack of a .insn directive.

The microMIPS ISA allows for interlinking with regular MIPS32 code by
repurposing bit 0 of the program counter as an ISA mode bit. To switch modes
one changes the value of this bit in the PC. However typical branch
instructions encode their offsets as multiples of 2-byte instruction
halfwords, which means they cannot change ISA mode - this must be done using
either an indirect branch (a jump-register in MIPS terminology) or a dedicated
jalx instruction. In order to ensure that regular branches don't attempt to
target code in a different ISA which they can't actually switch to, the linker
will check that branch targets are code in the same ISA as the branch.

Unfortunately our empty asm volatile statements don't qualify as code, and the
link for microMIPS builds fails with errors such as:

  arch/mips/mm/dma-default.s:3265: Error: branch to a symbol in another ISA mode
  arch/mips/mm/dma-default.s:5027: Error: branch to a symbol in another ISA mode

Resolve this by adding a .insn directive within the asm statement which
declares that what comes next is code. This may or may not be true, since we
don't really know what comes next, but as this code is in an unreachable path
anyway that doesn't matter since we won't execute it.

We do this in asm/compiler.h & select CONFIG_HAVE_ARCH_COMPILER_H in order to
have this included by linux/compiler_types.h after linux/compiler-gcc.h. This
will result in asm/compiler.h being included in all C compilations via the
-include linux/compiler_types.h argument in c_flags, which should be harmless.

Signed-off-by: Paul Burton <paul.burton@mips.com>
Fixes: 173a3efd ("bug.h: work around GCC PR82365 in BUG()")
Patchwork: https://patchwork.linux-mips.org/patch/20270/
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: linux-mips@linux-mips.org
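
The workaround amounts to roughly the following in asm/compiler.h, sketched here on the assumption that barrier_before_unreachable() from linux/compiler.h is the hook being overridden:

    /*
     * The volatile asm is a barrier GCC will not reorder code across, which
     * stops instructions from beyond __builtin_unreachable() migrating into
     * earlier delay slots; the ".insn" directive tells the assembler that
     * whatever follows is code, so the microMIPS linker's ISA-mode check on
     * branch targets is satisfied even though the path is never executed.
     */
    #undef barrier_before_unreachable
    #define barrier_before_unreachable() asm volatile(".insn")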
-
- 18 August 2018: 1 commit
-
-
By Paul Burton
MIPS_ISA_LEVEL is always defined as the 64 bit ISA that is a compatible
superset of the ISA that the kernel build is targeting, and is used to allow
us to emit instructions that we may detect support for at runtime.

When we use a .set MIPS_ISA_LEVEL directive & are building a 32-bit kernel, we
therefore are temporarily allowing the assembler to generate MIPS64
instructions. Using the move pseudo-instruction whilst this is the case is
problematic because the assembler is likely to emit a daddu instruction which
will generate a reserved instruction exception when executed on a MIPS32
machine.

Unfortunately the combination of commit a0a5ac3c ("MIPS: Fix delay slot bug in
`atomic*_sub_if_positive' for R10000_LLSC_WAR") and commit 4936084c ("MIPS:
Cleanup R10000_LLSC_WAR logic in atomic.h") causes us to do exactly this in
atomic_sub_if_positive(), and the result is MIPS64 daddu instructions in
32-bit kernels.

Fix this by using .set mips0 to restore the default ISA after the ll
instruction, and use .set MIPS_ISA_LEVEL again prior to the sc. This ensures
everything but the ll & sc are assembled using the default ISA for the kernel
build & the move pseudo-instruction is emitted as a MIPS32 addu instruction.

We appear to have another pre-existing instance of the same issue in our
atomic_fetch_*_relaxed() functions, and fix that up too by moving our .set
mips0 such that it occurs prior to use of the move pseudo-instruction.

Signed-off-by: Paul Burton <paul.burton@mips.com>
Fixes: a0a5ac3c ("MIPS: Fix delay slot bug in `atomic*_sub_if_positive' for R10000_LLSC_WAR")
Fixes: 4936084c ("MIPS: Cleanup R10000_LLSC_WAR logic in atomic.h")
Patchwork: https://patchwork.linux-mips.org/patch/20253/
Cc: James Hogan <jhogan@kernel.org>
Cc: Joshua Kinard <kumba@gentoo.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
-
- 11 August 2018: 2 commits
-
-
By Paul Burton
Since at least the beginning of the git era we've declared our TLB exception
handling functions inconsistently. They're actually functions, but we declare
them as arrays of u32 where each u32 is an encoded instruction. This has
always been the case for arch/mips/mm/tlbex.c, and has also been true for
arch/mips/kernel/traps.c since commit 86a1708a ("MIPS: Make tlb exception
handler definitions and declarations match.") which aimed for consistency but
did so by consistently making our C code inconsistent with our assembly.

This is all usually harmless, but when using GCC 7 or newer to build a kernel
targeting microMIPS (ie. CONFIG_CPU_MICROMIPS=y) it becomes problematic. With
microMIPS bit 0 of the program counter indicates the ISA mode. When bit 0 is
zero instructions are decoded using the standard MIPS32 or MIPS64 ISA. When
bit 0 is one instructions are decoded using microMIPS. This means that
function pointers become odd - their least significant bit is one for
microMIPS code. We work around this in cases where we need to access code
using loads & stores with our msk_isa16_mode() macro which simply clears bit 0
of the value it is given:

  #define msk_isa16_mode(x) ((x) & ~0x1)

For example we do this for our TLB load handler in
build_r4000_tlb_load_handler():

  u32 *p = (u32 *)msk_isa16_mode((ulong)handle_tlbl);

We then write code to p, expecting it to be suitably aligned (our LEAF macro
aligns functions on 4 byte boundaries, so (ulong)handle_tlbl will give a value
one greater than a multiple of 4 - ie. the start of a function on a 4 byte
boundary, with the ISA mode bit 0 set).

This worked fine up to GCC 6, but GCC 7 & onwards is smart enough to presume
that handle_tlbl, which we declared as an array of u32s, must be aligned
sufficiently that bit 0 of its address will never be set, and as a result
optimizes out msk_isa16_mode(). This leads to p having an address with bit 0
set, and when we go on to attempt to store code at that address we take an
address error exception due to the unaligned memory access.

This leads to an exception prior to the kernel having configured its own
exception handlers, so we jump to whatever handlers the bootloader configured.
In the case of QEMU this results in a silent hang, since it has no useful
general exception vector.

Fix this by consistently declaring our TLB-related functions as functions. For
handle_tlbl(), handle_tlbs() & handle_tlbm() we do this in asm/tlbex.h & we
make use of the existing declaration of tlbmiss_handler_setup_pgd() in
asm/mmu_context.h. Our TLB handler generation code in arch/mips/mm/tlbex.c is
adjusted to deal with these definitions, in most cases simply by casting the
function pointers to u32 pointers.

This allows us to include asm/mmu_context.h in arch/mips/mm/tlbex.c to get the
definitions of tlbmiss_handler_setup_pgd & pgd_current, removing some needless
duplication. Consistently using msk_isa16_mode() on function pointers means we
no longer need the tlbmiss_handler_setup_pgd_start symbol so that is removed
entirely.

Now that we're declaring our functions as functions, GCC stops optimizing out
msk_isa16_mode() & a microMIPS kernel built with either GCC 7.3.0 or 8.1.0
boots successfully.

Signed-off-by: Paul Burton <paul.burton@mips.com>
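
A simplified sketch of the shape this leaves things in; the generator body is elided:

    /* was: extern u32 handle_tlbl[], handle_tlbs[], handle_tlbm[]; */
    extern void handle_tlbl(void);
    extern void handle_tlbs(void);
    extern void handle_tlbm(void);

    #define msk_isa16_mode(x)       ((x) & ~0x1)    /* clear ISA-mode bit 0 */

    static void build_r4000_tlb_load_handler(void)
    {
            /*
             * With handle_tlbl declared as a function GCC can no longer
             * assume its address has bit 0 clear, so msk_isa16_mode() is not
             * optimized away and p is word-aligned even on microMIPS.
             */
            u32 *p = (u32 *)msk_isa16_mode((unsigned long)handle_tlbl);

            /* ... emit the generated handler instructions through p ... */
            (void)p;
    }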
-
By Paul Burton
We export tlbmiss_handler_setup_pgd in arch/mips/mm/tlbex.c close to a
declaration of it, rather than close to its definition as is standard. We've
supported exporting symbols in assembly code since commit 22823ab4
("EXPORT_SYMBOL() for asm"), so move the export to follow the function's
(stub) definition.

Signed-off-by: Paul Burton <paul.burton@mips.com>
-
- 09 August 2018: 1 commit
-
-
By Paul Burton
In nlm_fmn_send() we have a loop which attempts to send a message multiple
times in order to handle the transient failure condition of a lack of
available credit. When examining the status register to detect the failure we
check for a condition that can never be true, which falls foul of gcc 8's
-Wtautological-compare:

  In file included from arch/mips/netlogic/common/irq.c:65:
  ./arch/mips/include/asm/netlogic/xlr/fmn.h: In function 'nlm_fmn_send':
  ./arch/mips/include/asm/netlogic/xlr/fmn.h:304:22: error: bitwise comparison always evaluates to false [-Werror=tautological-compare]
    if ((status & 0x2) == 1)
                       ^~

In the path taken if this condition were true all we do is print a message to
the kernel console. Since failures seem somewhat expected here (making the
console message questionable anyway) and the condition has clearly never
evaluated true, we simply remove it rather than attempting to fix it to check
status correctly.

Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/20174/
Cc: Ganesan Ramalingam <ganesanr@broadcom.com>
Cc: James Hogan <jhogan@kernel.org>
Cc: Jayachandran C <jnair@caviumnetworks.com>
Cc: John Crispin <john@phrozen.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
-
- 08 August 2018: 2 commits
-
-
By Paul Burton
The code in __write_64bit_c0_split() is used by MIPS32 kernels running on
MIPS64 CPUs to write a 64-bit value to a 64-bit coprocessor 0 register using a
single 64-bit dmtc0 instruction. It does this by combining the 2x 32-bit
registers used to hold the 64-bit value into a single register, which in the
existing code involves three steps:

  1) Zero extend register A which holds bits 31:0 of our data, since it may
     have previously held a sign-extended value.

  2) Shift register B which holds bits 63:32 of our data in bits 31:0 left by
     32 bits, such that the bits forming our data are in the position they'll
     be in the final 64-bit value & bits 31:0 of the register are zero.

  3) Or the two registers together to form the 64-bit value in one 64-bit
     register.

From MIPS r2 onwards we have a dins instruction which can effectively perform
all 3 of those steps using a single instruction.

Add a path for MIPS r2 & beyond which uses dins to take bits 31:0 from
register B & insert them into bits 63:32 of register A, giving us our full
64-bit value in register A with one instruction.

Since we know that MIPS r2 & above support the sel field for the dmtc0
instruction, we don't bother special casing sel==0. Omitting the sel field
would assemble to exactly the same instruction as when we explicitly specify
that it equals zero.

Signed-off-by: Paul Burton <paul.burton@mips.com>
-
By Paul Burton
Commit c22c8043 ("MIPS: Fix input modify in __write_64bit_c0_split()")
modified __write_64bit_c0_split() constraints such that we have both an input
& an output which we hope to assign to the same registers, and modify the
output rather than incorrectly clobbering an input.

The way in which we use both an output & an input parameter with the input
constrained to share the output registers is a little convoluted & also
problematic for clang, which complains if the input & output values have
different widths. For example:

  In file included from kernel/fork.c:98:
  ./arch/mips/include/asm/mmu_context.h:149:19: error: unsupported inline asm: input with type 'unsigned long' matching output with type 'unsigned long long'
          write_c0_entryhi(cpu_asid(cpu, next));
          ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~
  ./arch/mips/include/asm/mmu_context.h:93:2: note: expanded from macro 'cpu_asid'
          (cpu_context((cpu), (mm)) & cpu_asid_mask(&cpu_data[cpu]))
          ^
  ./arch/mips/include/asm/mipsregs.h:1617:65: note: expanded from macro 'write_c0_entryhi'
  #define write_c0_entryhi(val) __write_ulong_c0_register($10, 0, val)
                                ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~
  ./arch/mips/include/asm/mipsregs.h:1430:39: note: expanded from macro '__write_ulong_c0_register'
          __write_64bit_c0_register(reg, sel, val); \
          ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~
  ./arch/mips/include/asm/mipsregs.h:1400:41: note: expanded from macro '__write_64bit_c0_register'
          __write_64bit_c0_split(register, sel, value); \
          ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~
  ./arch/mips/include/asm/mipsregs.h:1498:13: note: expanded from macro '__write_64bit_c0_split'
          : "r,0" (val)); \
            ^~~

We can both fix this build failure & simplify the code somewhat by assigning
the __tmp variable with the input value in C prior to our inline assembly, and
then using a single read-write output operand (ie. a constraint beginning with
+) to provide this value to our assembly.

Signed-off-by: Paul Burton <paul.burton@mips.com>
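
Taken together with the dins path from the previous entry, the macro ends up along these lines (a sketch, not the verbatim mipsregs.h; the real macro keeps a pre-r2 fallback built from dsll/dsrl/or):

    /*
     * dins merges the two 32-bit halves: bits 31:0 of the high half (%M0)
     * are inserted into bits 63:32 of the low half (%L0), which dmtc0 then
     * writes in one go. __tmp is assigned in plain C beforehand and passed
     * as a single read-write ("+r") operand.
     */
    #define __write_64bit_c0_split(source, sel, val)                    \
    do {                                                                \
            unsigned long long __tmp = (val);                           \
            unsigned long __flags;                                      \
                                                                        \
            local_irq_save(__flags);                                    \
            __asm__ __volatile__(                                       \
                    ".set\tpush\n\t"                                    \
                    ".set\t" MIPS_ISA_LEVEL "\n\t"                      \
                    "dins\t%L0, %M0, 32, 32\n\t"                        \
                    "dmtc0\t%L0, " #source ", " #sel "\n\t"             \
                    ".set\tpop"                                         \
                    : "+r" (__tmp));                                    \
            local_irq_restore(__flags);                                 \
    } while (0)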
-
- 02 August 2018: 1 commit
-
-
By Paul Burton
Our sigreturn functions make use of a macro named nabi_no_regargs to declare 8
dummy arguments to a function, forcing the compiler to expect a pt_regs
structure on the stack rather than in argument registers. This is an ugly hack
which unnecessarily causes these sigreturn functions to need to care about the
calling convention of the ABI the kernel is built for. Although this is
abstracted via nabi_no_regargs, it's still ugly & unnecessary.

Remove nabi_no_regargs & the struct pt_regs argument from sigreturn functions,
and instead use current_pt_regs() to find the struct pt_regs on the stack,
which works cleanly regardless of ABI.

Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/20106/
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
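
A sketch of the simplified shape, with the o32 variant shown and the sigcontext restoration elided:

    asmlinkage void sys_sigreturn(void)
    {
            /*
             * was: asmlinkage void sys_sigreturn(nabi_no_regargs struct pt_regs regs),
             * i.e. 8 dummy arguments forcing the register frame onto the stack.
             */
            struct pt_regs *regs = current_pt_regs();

            /* ... restore the saved sigcontext from *regs as before ... */
            (void)regs;
    }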
-
- 31 July 2018: 3 commits
-
-
By Paul Burton
On systems where physical memory begins at a non-zero address, defining
PHYS_OFFSET (which influences ARCH_PFN_OFFSET) can save us time & memory by
avoiding book-keeping for pages from address zero to the start of memory.

Some MIPS platforms already make use of this, but with the definition of
PHYS_OFFSET being compile-time constant it hasn't been possible to enable this
optimization for a kernel which may run on systems with varying physical
memory base addresses.

Introduce a new Kconfig option CONFIG_MIPS_AUTO_PFN_OFFSET which, when
enabled, makes ARCH_PFN_OFFSET a variable & detects it from the boot memory
map (which for example may have been populated from DT). The relationship with
PHYS_OFFSET is reversed, with PHYS_OFFSET now being based on ARCH_PFN_OFFSET.
This is because ARCH_PFN_OFFSET is used far more often, so avoiding the need
for runtime calculation gives us a smaller impact on kernel text size (0.1%
rather than 0.15% for 64r6el_defconfig).

Signed-off-by: Paul Burton <paul.burton@mips.com>
Suggested-by: Vladimir Kondratiev <vladimir.kondratiev@intel.com>
Patchwork: https://patchwork.linux-mips.org/patch/20048/
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
-
By Paul Burton
isa_virt_to_bus() & isa_bus_to_virt() claim to treat ISA bus addresses as
being identical to physical addresses, but they fail to do so in the presence
of a non-zero PHYS_OFFSET. Correct this by having them use virt_to_phys() &
phys_to_virt(), which consolidates the calculations to one place & ensures
that ISA bus addresses do indeed match physical addresses.

Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/20047/
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: Vladimir Kondratiev <vladimir.kondratiev@intel.com>
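
A sketch of the corrected helpers (signatures approximate):

    /* Route through the phys<->virt helpers so PHYS_OFFSET is honoured. */
    static inline unsigned long isa_virt_to_bus(volatile void *address)
    {
            return virt_to_phys(address);
    }

    static inline void *isa_bus_to_virt(unsigned long address)
    {
            return phys_to_virt(address);
    }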
-
By Paul Burton
Converting an address between cached & uncached (typically addresses in
(c)kseg0 & (c)kseg1 or 2 xkphys regions) should not depend upon PHYS_OFFSET in
any way - we're converting from a virtual address in one unmapped region to a
virtual address in another unmapped region.

For some reason our CAC_ADDR() & UNCAC_ADDR() macros make use of PAGE_OFFSET,
which typically includes PHYS_OFFSET. This means that platforms with a
non-zero PHYS_OFFSET typically have to workaround miscalculation by these 2
macros by also defining UNCAC_BASE to a value that isn't really correct.

It appears that an attempt has previously been made to address this with
commit 3f4579252aa1 ("MIPS: make CAC_ADDR and UNCAC_ADDR account for
PHYS_OFFSET") which was later undone by commit ed3ce16c ("Revert "MIPS: make
CAC_ADDR and UNCAC_ADDR account for PHYS_OFFSET"") which also introduced the
ar7 workaround. That attempt at a fix was roughly equivalent, but essentially
caused the CAC_ADDR() & UNCAC_ADDR() macros to cancel out PHYS_OFFSET by
adding & then subtracting it again. In his revert Leonid is correct that using
PHYS_OFFSET makes no sense in the context of these macros, but appears to have
missed its inclusion via PAGE_OFFSET which means PHYS_OFFSET actually had an
effect after the revert rather than before it.

Here we fix this by modifying CAC_ADDR() & UNCAC_ADDR() to stop using
PAGE_OFFSET (& thus PHYS_OFFSET), instead using __pa() & __va() along with
UNCAC_BASE.

For UNCAC_ADDR(), __pa() will convert a cached address to a physical address
which we can simply use as an offset from UNCAC_BASE to obtain an address in
the uncached region. For CAC_ADDR() we can undo the effect of UNCAC_ADDR() by
subtracting UNCAC_BASE and using __va() on the result.

With this change made, remove definitions of UNCAC_BASE from the ar7 & pic32
platforms which appear to have defined them only to workaround this problem.

Signed-off-by: Paul Burton <paul.burton@mips.com>
References: 3f4579252aa1 ("MIPS: make CAC_ADDR and UNCAC_ADDR account for PHYS_OFFSET")
References: ed3ce16c ("Revert "MIPS: make CAC_ADDR and UNCAC_ADDR account for PHYS_OFFSET"")
Patchwork: https://patchwork.linux-mips.org/patch/20046/
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: Vladimir Kondratiev <vladimir.kondratiev@intel.com>
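
Following that description, the reworked macros look roughly like this:

    /* cached -> uncached: physical address used as an offset from UNCAC_BASE */
    #define UNCAC_ADDR(addr)        (UNCAC_BASE + __pa(addr))

    /* uncached -> cached: undo the above, then map back to a cached virtual */
    #define CAC_ADDR(addr)          ((unsigned long)__va((addr) - UNCAC_BASE))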
-
- 28 July 2018: 3 commits
-
-
By Paul Burton
VDSO code should not be using smp_processor_id(), since it is executed in user
mode. Introduce a VDSO-specific path which will cause a compile-time or
link-time error (depending upon support for __compiletime_error) if the VDSO
ever incorrectly attempts to use smp_processor_id().

[Matt Redfearn <matt.redfearn@imgtec.com>: Move before change to
 smp_processor_id in series]
Signed-off-by: Paul Burton <paul.burton@mips.com>
Signed-off-by: Matt Redfearn <matt.redfearn@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/17932/
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: James Hogan <jhogan@kernel.org>
Cc: linux-mips@linux-mips.org
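
A hypothetical sketch of such a guard; the actual macro and symbol names in the kernel may differ:

    #ifdef __VDSO__
    /*
     * Referencing this undefined, error-attributed function makes any use of
     * smp_processor_id() in VDSO code fail at compile time (or at link time
     * on toolchains without __compiletime_error support).
     */
    extern unsigned long __smp_processor_id_in_vdso(void)
            __compiletime_error("smp_processor_id() must not be used in the VDSO");
    #define smp_processor_id() __smp_processor_id_in_vdso()
    #endif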
-
By Christoph Hellwig
mips_swiotlb_ops differs from the generic swiotlb_dma_ops only in that it
contains a mb() barrier after each operation that maps or syncs dma memory to
the device.

The dma operations are defined to not be memory barriers, but instead the
write* operations to kick the DMA off are supposed to contain them. For mips
this is handled by war_io_reorder_wmb(), which evaluates to the stronger wmb()
instead of the pure compiler barrier barrier() for just those platforms that
use swiotlb, so I think we are covered properly.

[paul.burton@mips.com:
  - Include linux/swiotlb.h to fix build failures for configs with
    CONFIG_SWIOTLB=y.]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/20038/
Cc: David Daney <ddaney@caviumnetworks.com>
Cc: Huacai Chen <chenhc@lemote.com>
Cc: linux-mips@linux-mips.org
Cc: iommu@lists.linux-foundation.org
Cc: linux-kernel@vger.kernel.org
-
By Rafał Miłecki
This reverts commit 2a027b47 ("MIPS: BCM47XX: Enable 74K Core ExternalSync for
PCIe erratum").

Enabling ExternalSync caused a regression for the BCM4718A1 (used e.g. in the
Netgear E3000 and ASUS RT-N16): it simply hangs during PCIe initialization.
It's likely that the BCM4717A1 is also affected.

I didn't notice that earlier as the only BCM47XX devices with PCIe I own are:
1) BCM4706 with 2 x 14e4:4331
2) BCM4706 with 14e4:4360 and 14e4:4331
It appears that the BCM4706 is unaffected.

While BCM5300X-ES300-RDS.pdf seems to document that erratum and its
workarounds (according to quotes provided by Tokunori) it seems not even
Broadcom follows them. According to the provided info Broadcom should define
CONF7_ES in their SDK's mipsinc.h and implement the workaround in
si_mips_init(). Checking both didn't reveal such code. It *could* mean
Broadcom also had some problems with the given workaround.

Signed-off-by: Rafał Miłecki <rafal@milecki.pl>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Reported-by: Michael Marley <michael@michaelmarley.com>
Patchwork: https://patchwork.linux-mips.org/patch/20032/
URL: https://bugs.openwrt.org/index.php?do=details&task_id=1688
Cc: Tokunori Ikegami <ikegami@allied-telesis.co.jp>
Cc: Hauke Mehrtens <hauke@hauke-m.de>
Cc: Chris Packham <chris.packham@alliedtelesis.co.nz>
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
-
- 27 July 2018: 1 commit
-
-
By Alexandre Belloni
The RTC definitions were moved to the driver, remove them from the platform
header.

[paul.burton@mips.com:
  - Also remove the unused tx4939_rtcptr which would use struct tx4939_rtc_reg
    if it were ever expanded.]
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/20024/
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-kernel@vger.kernel.org
Cc: linux-mips@linux-mips.org
-
- 25 July 2018: 4 commits
-
-
By Felix Fietkau
This patch adds a few additional cpu feature overrides so that they do not
need to be probed at runtime.

Signed-off-by: Felix Fietkau <nbd@nbd.name>
Signed-off-by: John Crispin <john@phrozen.org>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/19914/
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
-
By Felix Fietkau
This patch disables irq on reboot to fix hang issues that were observed due to
pending interrupts.

Signed-off-by: Felix Fietkau <nbd@nbd.name>
Signed-off-by: John Crispin <john@phrozen.org>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/19913/
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
-
By Matthias Schiffer
This patch adds support for 2 new types of QCA silicon. TP9343 is essentially
the same as the QCA956X but is licensed by TPLink.

Signed-off-by: Weijie Gao <hackpascal@gmail.com>
Signed-off-by: Matthias Schiffer <mschiffer@universe-factory.net>
Signed-off-by: John Crispin <john@phrozen.org>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/19911/
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
-
By Gabor Juhos
This patch adds many new registers for various QCA MIPS SoCs. The patch is an
aggregate of many contributions made to OpenWrt.

Signed-off-by: Gabor Juhos <juhosg@openwrt.org>
Signed-off-by: Henryk Heisig <hyniu@o2.pl>
Signed-off-by: Matthias Schiffer <mschiffer@universe-factory.net>
Signed-off-by: Weijie Gao <hackpascal@gmail.com>
Signed-off-by: Felix Fietkau <nbd@nbd.name>
Signed-off-by: Julien Dusser <julien.dusser@free.fr>
Signed-off-by: John Crispin <john@phrozen.org>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/19910/
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
-