- 03 September 2013, 1 commit

Submitted by Will Deacon
TCR.TBI0 can be used to cause hardware address translation to ignore the top byte of userspace virtual addresses. Whilst not especially useful in standard C programs, this can be used by JITs to `tag' pointers with various pieces of metadata. This patch enables this bit for AArch64 Linux, and adds a new file to Documentation/arm64/ which describes some potential caveats when using tagged virtual addresses.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
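As a rough illustration of what this enables, the userspace sketch below stores a hypothetical tag in the top byte of a heap pointer and dereferences it unchanged. The tag value, shift and mask are illustrative, and (as the new documentation cautions) tags should be stripped before pointers are passed back to the kernel or to libc.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define TAG_SHIFT 56
#define TAG_MASK  (0xffULL << TAG_SHIFT)

int main(void)
{
    int *p = malloc(sizeof(*p));
    uintptr_t tagged;
    int *q;

    *p = 42;

    /* stash hypothetical metadata (0x5a) in bits [63:56] */
    tagged = ((uintptr_t)p & ~TAG_MASK) | (0x5aULL << TAG_SHIFT);
    q = (int *)tagged;

    /* with TCR_EL1.TBI0 set, translation ignores the top byte,
     * so the tagged pointer still reaches the same memory */
    printf("%d\n", *q);

    /* strip the tag before handing the pointer back to libc */
    free((void *)(tagged & ~TAG_MASK));
    return 0;
}
```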

- 02 September 2013, 1 commit

Submitted by Catalin Marinas
This string has been moved to arch/arm64/kernel/cputable.c.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 28 August 2013, 1 commit

Submitted by Catalin Marinas
The map_mem() function limits the current memblock limit to PGDIR_SIZE (the initial swapper_pg_dir mapping) to avoid create_mapping() allocating memory from unmapped areas. However, if the first block is within PGDIR_SIZE and does not end on a PMD_SIZE boundary, create_mapping() will try to allocate a pte page when the 4K page configuration is enabled. Such a page may be returned by memblock_alloc() from the end of that bank (or any subsequent bank within PGDIR_SIZE) which is not mapped yet. The patch limits the current memblock limit to the aligned end of the first bank and gradually increases it as more memory is mapped. It also ensures that the start of the first bank is aligned to PMD_SIZE to avoid pte page allocation for this mapping.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: "Leizhen (ThunderTown, Euler)" <thunder.leizhen@huawei.com>
Tested-by: "Leizhen (ThunderTown, Euler)" <thunder.leizhen@huawei.com>
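The shape of the fix, as a hedged sketch (names follow the description above; the real patch also handles the PMD_SIZE alignment of the first bank):

```c
/*
 * Sketch only: keep the memblock allocation limit just behind the area
 * that is already mapped, so pte pages allocated by create_mapping()
 * always come from mapped memory, then raise it bank by bank.
 */
static void __init map_mem(void)
{
    struct memblock_region *reg;

    /* initially only the swapper_pg_dir mapping (PGDIR_SIZE) is usable */
    memblock_set_current_limit(PHYS_OFFSET + PGDIR_SIZE);

    for_each_memblock(memory, reg) {
        phys_addr_t start = reg->base;
        phys_addr_t end = start + reg->size;

        create_mapping(start, __phys_to_virt(start), end - start);

        /* this bank is mapped now; allow allocations from it too */
        memblock_set_current_limit(end & PMD_MASK);
    }
}
```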

- 19 July 2013, 1 commit

Submitted by Will Deacon
On arm64, cache maintenance faults appear as data aborts with the CM bit set in the ESR. The WnR bit, usually used to distinguish between faulting loads and stores, always reads as 1 and (slightly confusingly) the instructions are treated as reads by the architecture. This patch fixes our fault handling code to treat cache maintenance faults in the same way as loads.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
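A simplified sketch of the resulting check (the bit positions follow the ARMv8 ESR encoding, but the constant and helper names here are illustrative rather than the kernel's):

```c
#define ESR_WNR (1U << 6)   /* write-not-read, as reported */
#define ESR_CM  (1U << 8)   /* fault caused by cache maintenance */

/* Pick the VMA access right to check for a data abort. */
static unsigned long fault_vm_flags(unsigned int esr)
{
    /*
     * Cache maintenance instructions report WnR == 1, but the
     * architecture treats them as reads, so only honour WnR
     * when CM is clear.
     */
    if ((esr & ESR_WNR) && !(esr & ESR_CM))
        return VM_WRITE;
    return VM_READ;
}
```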

- 11 July 2013, 1 commit

Submitted by Michel Lespinasse
Since all architectures have been converted to use vm_unmapped_area(), there is no remaining use for the free_area_cache.

Signed-off-by: Michel Lespinasse <walken@google.com>
Acked-by: Rik van Riel <riel@redhat.com>
Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: David Howells <dhowells@redhat.com>
Cc: Helge Deller <deller@gmx.de>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Richard Henderson <rth@twiddle.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

- 04 July 2013, 6 commits

Submitted by Jiang Liu
Prepare for removing num_physpages and simplify mem_init().

Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Cc: Michal Simek <monstr@monstr.eu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Submitted by Jiang Liu
Prepare for removing num_physpages and simplify mem_init().

Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Submitted by Jiang Liu
Concentrate the code that modifies totalram_pages into the mm core, so the arch memory initialization code doesn't need to take care of it. With these changes applied, only the following functions from the mm core modify the global variable totalram_pages: free_bootmem_late(), free_all_bootmem(), free_all_bootmem_node(), adjust_managed_page_count(). With this patch applied, it will be much easier for us to keep totalram_pages and zone->managed_pages consistent.

Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Acked-by: David Howells <dhowells@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: <sworddragon2@aol.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Jianguo Wu <wujianguo@huawei.com>
Cc: Joonsoo Kim <js1304@gmail.com>
Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Michel Lespinasse <walken@google.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Tang Chen <tangchen@cn.fujitsu.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Wen Congyang <wency@cn.fujitsu.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Submitted by Jiang Liu
Use free_reserved_area() to poison initmem memory pages and kill poison_init_mem() on ARM64.

Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: <sworddragon2@aol.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Jianguo Wu <wujianguo@huawei.com>
Cc: Joonsoo Kim <js1304@gmail.com>
Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Michel Lespinasse <walken@google.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Tang Chen <tangchen@cn.fujitsu.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Wen Congyang <wency@cn.fujitsu.com>
Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
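On the arch side this reduces to something like the sketch below, where free_initmem_default() is the generic wrapper around free_reserved_area() and 0 is the poison byte (the exact arm64 hunk may differ):

```c
/* Sketch: poison the freed __init pages with 0 and hand them back to the
 * buddy allocator; no arm64-private poison_init_mem() is needed. */
void free_initmem(void)
{
    free_initmem_default(0);
}
```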

Submitted by Jiang Liu
Address more review comments from the last round of code review:
1) Enhance free_reserved_area() to support poisoning freed memory with pattern '0'. This could be used to get rid of poison_init_mem() on ARM64.
2) A previous patch disabled memory poisoning for initmem on s390 by mistake, so restore the original behavior.
3) Remove the redundant PAGE_ALIGN() when calling free_reserved_area().

Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: <sworddragon2@aol.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Jianguo Wu <wujianguo@huawei.com>
Cc: Joonsoo Kim <js1304@gmail.com>
Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Michel Lespinasse <walken@google.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Tang Chen <tangchen@cn.fujitsu.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Wen Congyang <wency@cn.fujitsu.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Submitted by Jiang Liu
Change signature of free_reserved_area() according to Russell King's suggestion to fix the following build warnings:

  arch/arm/mm/init.c: In function 'mem_init':
  arch/arm/mm/init.c:603:2: warning: passing argument 1 of 'free_reserved_area' makes integer from pointer without a cast [enabled by default]
     free_reserved_area(__va(PHYS_PFN_OFFSET), swapper_pg_dir, 0, NULL);
     ^
  In file included from include/linux/mman.h:4:0,
                   from arch/arm/mm/init.c:15:
  include/linux/mm.h:1301:22: note: expected 'long unsigned int' but argument is of type 'void *'
   extern unsigned long free_reserved_area(unsigned long start, unsigned long end,

  mm/page_alloc.c: In function 'free_reserved_area':
  >> mm/page_alloc.c:5134:3: warning: passing argument 1 of 'virt_to_phys' makes pointer from integer without a cast [enabled by default]
  In file included from arch/mips/include/asm/page.h:49:0,
                   from include/linux/mmzone.h:20,
                   from include/linux/gfp.h:4,
                   from include/linux/mm.h:8,
                   from mm/page_alloc.c:18:
  arch/mips/include/asm/io.h:119:29: note: expected 'const volatile void *' but argument is of type 'long unsigned int'

  mm/page_alloc.c: In function 'free_area_init_nodes':
  mm/page_alloc.c:5030:34: warning: array subscript is below array bounds [-Warray-bounds]

Also address some minor code review comments.

Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Reported-by: Arnd Bergmann <arnd@arndb.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: <sworddragon2@aol.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Jianguo Wu <wujianguo@huawei.com>
Cc: Joonsoo Kim <js1304@gmail.com>
Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Michel Lespinasse <walken@google.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Tang Chen <tangchen@cn.fujitsu.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Wen Congyang <wency@cn.fujitsu.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Russell King <rmk@arm.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
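For reference, the interface after this change looks roughly as follows (reconstructed from the warning text above; the exact prototype may differ):

```c
/* Pointer-based start/end, a poison byte (a negative value skips the
 * poisoning, per the enhancement described two entries up), and an
 * optional name printed in the "Freeing ... memory" message. */
unsigned long free_reserved_area(void *start, void *end, int poison, char *s);

/* Caller side, matching the arm example quoted in the warnings:
 *   free_reserved_area(__va(PHYS_PFN_OFFSET), swapper_pg_dir, 0, NULL);
 */
```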

- 14 June 2013, 2 commits

Submitted by Steve Capper
Add huge page support to ARM64. Different huge page sizes are supported, depending on the size of normal pages:

PAGE_SIZE is 4KB:
  2MB    - (pmds) these can be allocated at any time.
  1024MB - (puds) usually allocated on bootup with the command line, with something like: hugepagesz=1G hugepages=6

PAGE_SIZE is 64KB:
  512MB  - (pmds) usually allocated on bootup via the command line.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
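A small userspace illustration of consuming these (assuming 4KB base pages, so 2MB pmd-sized huge pages, and that some huge pages have been reserved, e.g. with hugepages=16 on the kernel command line):

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

#define HPAGE_SIZE (2UL * 1024 * 1024)  /* pmd huge page with 4KB PAGE_SIZE */

int main(void)
{
    char *p = mmap(NULL, HPAGE_SIZE, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap(MAP_HUGETLB)");
        return 1;
    }
    p[0] = 1;   /* the first touch faults in a single 2MB huge page */
    munmap(p, HPAGE_SIZE);
    return 0;
}
```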

Submitted by Steve Capper
In paging_init the memblock limit is set to restrict any addresses returned by early_alloc to fit within the initial direct kernel mapping in swapper_pg_dir. This allows map_mem to allocate puds, pmds and ptes from the initial direct kernel mapping. The limit stays low after paging_init() though, meaning any bootmem allocations will be from a restricted subset of memory. Gigabyte huge pages, for instance, are normally allocated from bootmem as their order (18) is too large for the default buddy allocator (MAX_ORDER = 11). This patch restores the memblock limit when map_mem has finished, allowing gigabyte huge pages (and other objects) to be allocated from all of bootmem.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>

- 08 June 2013, 3 commits

Submitted by Catalin Marinas
This function is only used in __sync_icache_dcache(), so remove it and call __flush_dcache_area() directly. The flush_icache_user_range() function is not used in the arm64 kernel.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Will Deacon <will.deacon@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>

Submitted by Catalin Marinas
The D-cache on AArch64 is VIPT non-aliasing, so there is no need to flush it for anonymous pages.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Will Deacon <will.deacon@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>

Submitted by Catalin Marinas
The flush_dcache_page() function is called when the kernel has modified a page cache page. Since the D-cache on AArch64 does not have aliases, this function can simply mark the page as dirty for later flushing via set_pte_at()/__sync_icache_dcache() if the page is executable (to ensure I-D cache coherency).

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Will Deacon <will.deacon@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
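In outline the deferred scheme looks something like this sketch (using the PG_dcache_clean page-flag convention; the real code in arch/arm64/mm/flush.c may differ in detail):

```c
/* Page-cache writers call this; it only records that the page is dirty
 * with respect to the caches. */
void flush_dcache_page(struct page *page)
{
    if (test_bit(PG_dcache_clean, &page->flags))
        clear_bit(PG_dcache_clean, &page->flags);
}

/* set_pte_at() ends up here for executable user mappings and performs
 * the actual maintenance exactly once per dirtying. */
void __sync_icache_dcache(pte_t pte, unsigned long addr)
{
    struct page *page = pte_page(pte);

    if (!test_and_set_bit(PG_dcache_clean, &page->flags)) {
        __flush_dcache_area(page_address(page), PAGE_SIZE);
        __flush_icache_all();
    }
}
```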

- 25 May 2013, 1 commit

Submitted by Catalin Marinas
Currently user faults (page, undefined instruction) are always reported, even though the user may have a signal handler for them. This patch adds an unhandled_signal() check together with printk_ratelimit() for these cases.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
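The shape of the check, as a sketch (the message format and helper name here are illustrative):

```c
/* Report a user fault only when the task has no handler installed for
 * the signal, and rate-limit the output. */
static void report_user_fault(struct task_struct *tsk, struct pt_regs *regs,
                              unsigned long addr, unsigned int esr, int sig)
{
    if (show_unhandled_signals && unhandled_signal(tsk, sig) &&
        printk_ratelimit()) {
        pr_info("%s[%d]: unhandled signal %d at 0x%08lx, esr 0x%03x\n",
                tsk->comm, task_pid_nr(tsk), sig, addr, esr);
        show_regs(regs);
    }
}
```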

- 14 May 2013, 1 commit

Submitted by Sukanto Ghosh
The format of the lower 32-bits of the 64-bit operand to 'dc cisw' is unchanged from the ARMv7 architecture and the upper bits are RES0. This implies that the 'way' field of the operand of 'dc cisw' occupies the bit-positions [31 .. (32-A)]. Due to the use of 64-bit extended operands to 'clz', the existing implementation of __flush_dcache_all is incorrectly placing the 'way' field in the bit-positions [63 .. (64-A)].

Signed-off-by: Sukanto Ghosh <sghosh@apm.com>
Tested-by: Anup Patel <anup.patel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: stable@vger.kernel.org
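Concretely, for a cache level with 2^A ways and 2^L-byte lines, the set/way operand is assembled as in the C sketch below (the encoding described above, not the kernel's assembly); the bug was computing the way shift against a 64-bit width (64 - A) instead of 32 - A.

```c
/* Build the 64-bit operand for "dc cisw":
 *   Way   -> bits [31 : 32 - A]   (A = log2(associativity))
 *   Set   -> bits starting at L   (L = log2(cache line length in bytes))
 *   Level -> bits [3 : 1]
 * Bits [63:32] are RES0, which is why the shift must be 32 - A, not 64 - A. */
static inline unsigned long dc_cisw_operand(unsigned int way, unsigned int set,
                                            unsigned int level,
                                            unsigned int A, unsigned int L)
{
    return ((unsigned long)way << (32 - A)) |
           ((unsigned long)set << L) |
           ((unsigned long)level << 1);
}
```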

- 13 May 2013, 1 commit

Submitted by Will Deacon
During boot, we take the debug OS lock before interrupts are enabled. This is required to prevent clearing of PSTATE.D on the interrupt entry path, which could result in spurious debug exceptions before we've got round to resetting things like the hardware breakpoint registers to a sane state. A problem with this approach is that taking the OS lock prevents an external JTAG debugger from debugging the system, which is especially irritating during boot, where JTAG debugging can be most useful. This patch clears mdscr_el1 rather than taking the lock, clearing the MDE and KDE bits and preventing self-hosted hardware debug exceptions from occurring.

Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: stable@vger.kernel.org
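In effect the early debug setup now does something like this (a sketch; the real change lives in the low-level arm64 debug initialisation):

```c
/* Clear MDSCR_EL1 so that MDE and KDE are 0: self-hosted (kernel) debug
 * exceptions cannot fire before the debug registers are sanely
 * initialised, while an external JTAG debugger keeps working because
 * the OS lock is left untouched. */
static inline void early_mask_self_hosted_debug(void)
{
    asm volatile(
        "msr mdscr_el1, xzr\n\t"
        "isb"
        : : : "memory");
}
```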

- 08 May 2013, 1 commit

Submitted by Catalin Marinas
The ESR.WnR bit is always set on data cache maintenance faults even though the page is not required to have write permission. If a translation fault (page not yet mapped) happens for a read-only user address range, Linux incorrectly assumes a permission fault. This patch adds a check of the ESR.CM bit during page fault handling to ignore the 'write' flag.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Tim Northover <Tim.Northover@arm.com>
Cc: stable@vger.kernel.org

- 30 April 2013, 2 commits

Submitted by Johannes Weiner
The sparse code, when asking the architecture to populate the vmemmap, specifies the section range as a starting page and a number of pages. This is an awkward interface, because none of the arch-specific code actually thinks of the range in terms of 'struct page' units and always translates it to bytes first. In addition, later patches mix huge page and regular page backing for the vmemmap. For this, they need to call vmemmap_populate_basepages() on sub-section ranges with PAGE_SIZE and PMD_SIZE in mind. But these are not necessarily multiples of the 'struct page' size and so this unit is too coarse. Just translate the section range into bytes once in the generic sparse code, then pass byte ranges down the stack.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Ben Hutchings <ben@decadent.org.uk>
Cc: Bernhard Schmidt <Bernhard.Schmidt@lrz.de>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: David S. Miller <davem@davemloft.net>
Tested-by: David S. Miller <davem@davemloft.net>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
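The resulting interface change amounts to replacing a page/count pair with a virtual address range, roughly as sketched below (prototypes reconstructed from the description; exact signatures may vary by tree):

```c
/* Before: the range is expressed in struct page units.
 *   int vmemmap_populate(struct page *start_page, unsigned long nr_pages, int node);
 *
 * After: the sparse core converts to bytes once and passes address ranges
 * down, so callers can also populate sub-section ranges aligned to
 * PAGE_SIZE or PMD_SIZE rather than to multiples of sizeof(struct page). */
int vmemmap_populate(unsigned long start, unsigned long end, int node);
```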

Submitted by Jiang Liu
Use common helper functions to free reserved pages.

Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

- 26 April 2013, 1 commit

Submitted by Steve Capper
show_pte makes use of the *_none_or_clear_bad style functions. If a pgd, pud or pmd is identified as being bad, it will then be cleared. As show_pte appears to be called from either the user or kernel fault handlers, this side effect can lead to unpredictable behaviour, especially as TLB entries are not invalidated. This patch removes the page table sanitisation from show_pte. If a bad pgd, pud or pmd is encountered it is left unmodified.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 26 March 2013, 1 commit

Submitted by Ben Hutchings
The 'CONFIG_' prefix is not implicit in IS_ENABLED().

Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Paul Bolle <pebolle@tiscali.nl>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
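In other words (CONFIG_FOO and do_foo_setup() are stand-ins for whichever option and code are involved):

```c
/* Wrong: "FOO" is not a Kconfig symbol, so this always evaluates to 0,
 * even when CONFIG_FOO=y. */
if (IS_ENABLED(FOO))
    do_foo_setup();

/* Right: spell out the full symbol, prefix included. */
if (IS_ENABLED(CONFIG_FOO))
    do_foo_setup();
```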

- 24 February 2013, 1 commit

Submitted by Tang Chen
Introduce a new API vmemmap_free() to free and remove vmemmap pagetables. Since page table implementations differ, each architecture has to provide its own version of vmemmap_free(), just like vmemmap_populate(). Note: vmemmap_free() is not implemented for ia64, ppc, s390, and sparc.

[mhocko@suse.cz: fix implicit declaration of remove_pagetable]
Signed-off-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Signed-off-by: Jianguo Wu <wujianguo@huawei.com>
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Jiang Liu <jiang.liu@huawei.com>
Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
Cc: Wu Jianguo <wujianguo@huawei.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Michal Hocko <mhocko@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

- 23 January 2013, 1 commit

Submitted by Catalin Marinas
This patch adds support for the "earlyprintk=" parameter on the kernel command line. The format is:

  earlyprintk=<name>[,<addr>][,<options>]

where <name> is the name of the (UART) device, e.g. "pl011", and <addr> is the I/O address. The <options> aren't currently used. The mapping of the earlyprintk device is done very early during kernel boot and there are restrictions on which functions it can call. A special early_io_map() function is added which creates the mapping from the pre-defined EARLY_IOBASE to the device I/O address passed via the kernel parameter. The pgd entry corresponding to EARLY_IOBASE is pre-populated in head.S during kernel boot. Only PL011 is currently supported and it is assumed that the interface is already initialised by the boot loader before the kernel is started.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
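For example, a boot loader could pass something like the line below; the PL011 base address shown here is hypothetical and board-specific:

```
earlyprintk=pl011,0x9000000
```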

- 24 November 2012, 1 commit

Submitted by Catalin Marinas
These functions are empty, so just make them static inline in the header.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 14 November 2012, 2 commits

Submitted by Will Deacon
Commit f483a853 ("arm64: mm: fix booting on systems with no memory below 4GB") sets max_dma32 to the minimum of the maximum pfn and MAX_DMA32_PFN. This value is later used as the base of the NORMAL zone, which is incorrect when MAX_DMA32_PFN is below the minimum pfn (i.e. all memory is above 4GB). This patch fixes the problem by ensuring that max_dma32 is always set to the end of the DMA32 zone.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Submitted by Catalin Marinas
For user space faults the kernel reports "unhandled page fault" and gives the ESR value. With this patch the error message is looked up in the fault info array to give a better description.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 09 November 2012, 1 commit

Submitted by Will Deacon
Booting on a system with all of its memory above the 4GB boundary breaks for two reasons:
(1) we still try to create a non-empty DMA32 zone;
(2) no-bootmem limits allocations to 0xffffffff.
This patch fixes these issues for ARM64.

Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 08 October 2012, 1 commit

Submitted by Catalin Marinas
Following commit 74838b75 (swiotlb: add the late swiotlb initialization function with iotlb memory), swiotlb_init_with_default_size() is a static function. This patch changes the arm64 code to call swiotlb_init() instead and use the default size of 64MB. It is assumed that AArch64 platforms have enough RAM to afford the pre-allocated swiotlb memory. It also removes the #ifdef around this call since CONFIG_SWIOTLB is always enabled.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
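The arm64 initialisation then reduces to roughly the following (a sketch; the wrapper name is illustrative):

```c
void __init arm64_swiotlb_init(void)
{
    /* default 64MB bounce buffer, verbose banner */
    swiotlb_init(1);
}
```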

- 25 September 2012, 1 commit

Submitted by Catalin Marinas
If such a bit exists on a given CPU, it must be set by the firmware or boot loader prior to starting the kernel (see Documentation/arm64/booting.txt).

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 17 September 2012, 8 commits

Submitted by Catalin Marinas
This patch adds the Makefile and Kconfig files required for building an AArch64 kernel.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>

Submitted by Catalin Marinas
This patch adds support for the DMA mapping API. It uses dma_map_ops for flexibility and it currently supports swiotlb. This patch could be simplified further if the DMA accesses are coherent (not mandated by the architecture) or if corresponding hooks are placed in the generic swiotlb code to deal with cache maintenance.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>

Submitted by Catalin Marinas
This patch adds several definitions for device communication, including I/O accessors and ioremap(). The __raw_* accessors are implemented as inline asm to avoid compiler generation of post-indexed accesses (less efficient to emulate in a virtualised environment).

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
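The pattern being described looks roughly like the sketch below: a plain register-offset load/store written as inline asm, so the compiler cannot fold it into a post-indexed access.

```c
/* Sketch of __raw_* style accessors in the spirit described above. */
static inline void __raw_writel(u32 val, volatile void __iomem *addr)
{
    asm volatile("str %w0, [%1]" : : "r" (val), "r" (addr));
}

static inline u32 __raw_readl(const volatile void __iomem *addr)
{
    u32 val;

    asm volatile("ldr %w0, [%1]" : "=r" (val) : "r" (addr));
    return val;
}
```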

Submitted by Catalin Marinas
This patch adds the TLB maintenance functions. There is no distinction made between the I and D TLBs. TLB maintenance operations are automatically broadcast between CPUs in hardware. The inner-shareable operations are always present, even on UP systems. NOTE: A large part of this patch is to be dropped once Peter Z's generic mmu_gather patches are merged.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
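For instance, a full invalidate can be broadcast to the inner-shareable domain with a single instruction, so no IPIs are needed (a sketch of the instruction sequence, not necessarily the kernel's helper):

```c
/* Invalidate all EL1 TLB entries on every CPU in the inner-shareable
 * domain; the hardware broadcast replaces software TLB-shootdown IPIs. */
static inline void flush_tlb_all_is(void)
{
    asm volatile(
        "dsb sy\n\t"         /* complete prior page-table updates */
        "tlbi vmalle1is\n\t" /* invalidate all, inner shareable   */
        "dsb sy\n\t"         /* wait for the invalidation         */
        "isb"                /* resynchronise this CPU            */
        : : : "memory");
}
```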

Submitted by Catalin Marinas
The patch adds the functionality required for cache maintenance. The AArch64 architecture mandates non-aliasing VIPT or PIPT D-cache and VIPT (may have aliases) or ASID-tagged VIVT I-cache. Cache maintenance operations are automatically broadcast in hardware between CPUs.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>

Submitted by Catalin Marinas
This patch adds AArch64 CPU specific functionality. It assumes that the implementation is generic to AArch64 and does not require specific identification. Different CPU implementations may require the setting of various ACTLR_EL1 bits but such information is not currently available and it should ideally be pushed to firmware.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>

Submitted by Catalin Marinas
The patch adds support for thread creation and context switching. The context switching CPU specific code is introduced with the CPU support patch (part of the arch/arm64/mm/proc.S file). AArch64 supports ASID-tagged TLBs and the ASID can be either 8 or 16-bit wide (detectable via the ID_AA64MMFR0_EL1 register).

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>

Submitted by Catalin Marinas
This patch adds support for the handling of MMU faults (the exception entry code was introduced by a previous patch) and page table management. The user translation table is pointed to by TTBR0 and the kernel one (swapper_pg_dir) by TTBR1. There is no translation information shared or address space overlapping between user and kernel page tables.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>