- 01 May 2014, 1 commit
Submitted by Mark Salter
At boot time, before switching to a virtual UEFI memory map, firmware expects UEFI memory and IO regions to be identity mapped whenever the kernel makes runtime services calls. The existing early boot code creates an identity map of the kernel text/data, but this is not sufficient for UEFI. This patch adds a create_id_mapping() function which reuses the core code of the existing create_mapping().

Signed-off-by: Mark Salter <msalter@redhat.com>
[ Fixed error message formatting (%pa). ]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
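A rough sketch of the idea: an identity mapping uses the physical address as the virtual address, so the helper can share the page-table building core of create_mapping(). Only the helper name comes from the commit message; __create_mapping(), idmap_pg_dir and the exact signature below are assumptions for illustration.

```c
/* Sketch only: __create_mapping(), idmap_pg_dir and the signature are assumed. */
static void __init create_id_mapping(phys_addr_t addr, phys_addr_t size, int map_io)
{
	if (pgd_index(addr) >= ARRAY_SIZE(idmap_pg_dir)) {
		pr_warn("BUG: not creating id mapping for %pa\n", &addr);
		return;
	}
	/* identity map: virtual address == physical address */
	__create_mapping(&idmap_pg_dir[pgd_index(addr)], addr, addr, size, map_io);
}
```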
- 08 April 2014, 2 commits
Submitted by Mark Salter
Add support for early IO or memory mappings which are needed before the normal ioremap() is usable. This also adds fixmap support for permanent fixed mappings such as that used by the earlyprintk device register region.

Signed-off-by: Mark Salter <msalter@redhat.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Borislav Petkov <borislav.petkov@amd.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
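As an illustration of how such an early mapping is typically consumed before ioremap() works; the function name, the device address and the SZ_4K size are placeholders, not taken from the patch:

```c
/* early_ioremap()/early_iounmap() come from the arch's early_ioremap header. */
void __init early_probe_device(phys_addr_t dev_phys)
{
	/* temporary, fixmap-backed mapping; only valid during early boot */
	void __iomem *base = early_ioremap(dev_phys, SZ_4K);

	if (!base)
		return;

	/* ... access a few device registers here ... */

	early_iounmap(base, SZ_4K);	/* release the fixmap slot again */
}
```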
Submitted by Mark Salter
Presently, paging_init() calls init_mem_pgprot() to initialize pgprot values used by macros such as PAGE_KERNEL, PAGE_KERNEL_EXEC, etc. The new fixmap and early_ioremap support also needs to use these macros before paging_init() is called. This patch moves the init_mem_pgprot() call out of paging_init() and into setup_arch() so that pgprot_default gets initialized in time for fixmap and early_ioremap.

Signed-off-by: Mark Salter <msalter@redhat.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Borislav Petkov <borislav.petkov@amd.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
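The resulting boot ordering looks roughly like this; a sketch of setup_arch() rather than the literal diff, with surrounding calls elided:

```c
void __init setup_arch(char **cmdline_p)
{
	/* ... */
	init_mem_pgprot();	/* moved here: PAGE_KERNEL & friends become valid early */
	early_ioremap_init();	/* fixmap/early_ioremap can now rely on those pgprots */
	/* ... command line parsing, earlyprintk, DT setup ... */
	paging_init();		/* no longer calls init_mem_pgprot() itself */
	/* ... */
}
```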
- 05 February 2014, 1 commit
Submitted by Catalin Marinas
With the 64K page size configuration, __create_page_tables in head.S maps enough memory to get started, but it uses 64K pages rather than 512M sections, with a single pgd/pud/pmd entry pointing to a pte table. create_mapping() may override that pgd/pud/pmd table entry with a block (section) entry if the RAM size is more than 512MB and correctly aligned. For the end of this block to be accessible, the old TLB entry must be invalidated.

Cc: <stable@vger.kernel.org>
Reported-by: Mark Salter <msalter@redhat.com>
Tested-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
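In code terms the fix amounts to something like the following inside the pmd-level part of create_mapping(); a simplified sketch where prot_sect_kernel and the surrounding helper layout are approximations:

```c
	if (((addr | next | phys) & ~SECTION_MASK) == 0) {
		pmd_t old_pmd = *pmd;

		/* replace the table entry set up by head.S with a block entry */
		set_pmd(pmd, __pmd(phys | prot_sect_kernel));

		/*
		 * If a table entry was already present here (the 64K-page boot
		 * tables), its walk may still be cached: invalidate the stale
		 * TLB entry so the whole block becomes accessible.
		 */
		if (!pmd_none(old_pmd))
			flush_tlb_all();
	}
```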
- 28 August 2013, 1 commit
Submitted by Catalin Marinas
The map_mem() function limits the current memblock limit to PGDIR_SIZE (the initial swapper_pg_dir mapping) to avoid create_mapping() allocating memory from unmapped areas. However, if the first block is within PGDIR_SIZE and does not end on a PMD_SIZE boundary, with the 4K page configuration enabled create_mapping() will try to allocate a pte page. Such a page may be returned by memblock_alloc() from the end of that bank (or any subsequent bank within PGDIR_SIZE), which is not yet mapped.

The patch limits the current memblock limit to the aligned end of the first bank and gradually increases it as more memory is mapped. It also ensures that the start of the first bank is aligned to PMD_SIZE to avoid pte page allocation for this mapping.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: "Leizhen (ThunderTown, Euler)" <thunder.leizhen@huawei.com>
Tested-by: "Leizhen (ThunderTown, Euler)" <thunder.leizhen@huawei.com>
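A sketch of the resulting map_mem() loop; the names follow the arm64 code of that era, and the details are simplified:

```c
	struct memblock_region *reg;
	phys_addr_t limit = PHYS_OFFSET + PGDIR_SIZE;

	memblock_set_current_limit(limit);

	for_each_memblock(memory, reg) {
		phys_addr_t start = reg->base;
		phys_addr_t end = start + reg->size;

		if (start < limit)
			start = ALIGN(start, PMD_SIZE);	/* no pte page needed for this chunk */
		if (end < limit) {
			limit = end & PMD_MASK;		/* only allocate from memory mapped so far */
			memblock_set_current_limit(limit);
		}

		create_mapping(start, __phys_to_virt(start), end - start);
	}
```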
- 14 June 2013, 1 commit
Submitted by Steve Capper
In paging_init the memblock limit is set to restrict any addresses returned by early_alloc to fit within the initial direct kernel mapping in swapper_pg_dir. This allows map_mem to allocate puds, pmds and ptes from the initial direct kernel mapping.

The limit stays low after paging_init() though, meaning any bootmem allocations will be from a restricted subset of memory. Gigabyte huge pages, for instance, are normally allocated from bootmem as their order (18) is too large for the default buddy allocator (MAX_ORDER = 11).

This patch restores the memblock limit when map_mem has finished, allowing gigabyte huge pages (and other objects) to be allocated from all of bootmem.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
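The shape of the change, as a sketch with the bank-mapping loop elided; MEMBLOCK_ALLOC_ANYWHERE is what lifts the restriction:

```c
static void __init map_mem(void)
{
	/* keep early allocations within the initial swapper_pg_dir mapping */
	memblock_set_current_limit(PHYS_OFFSET + PGDIR_SIZE);

	/* ... create_mapping() for every memory bank ... */

	/* all of RAM is mapped: let later bootmem allocations use any of it */
	memblock_set_current_limit(MEMBLOCK_ALLOC_ANYWHERE);
}
```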
- 08 June 2013, 1 commit
Submitted by Catalin Marinas
The D-cache on AArch64 is VIPT non-aliasing, so there is no need to flush it for anonymous pages.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Will Deacon <will.deacon@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
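Roughly, the check looks like this; a sketch that is close to, but not guaranteed identical to, the patch:

```c
void __sync_icache_dcache(pte_t pte, unsigned long addr)
{
	struct page *page = pte_page(pte);

	/* anonymous pages have no address_space; with a VIPT non-aliasing D-cache, skip them */
	if (!page_mapping(page))
		return;

	/* ... flush the D-cache / invalidate the I-cache for executable file mappings ... */
}
```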
- 30 April 2013, 1 commit
Submitted by Johannes Weiner
The sparse code, when asking the architecture to populate the vmemmap, specifies the section range as a starting page and a number of pages. This is an awkward interface, because none of the arch-specific code actually thinks of the range in terms of 'struct page' units and always translates it to bytes first.

In addition, later patches mix huge page and regular page backing for the vmemmap. For this, they need to call vmemmap_populate_basepages() on sub-section ranges with PAGE_SIZE and PMD_SIZE in mind. But these are not necessarily multiples of the 'struct page' size and so this unit is too coarse.

Just translate the section range into bytes once in the generic sparse code, then pass byte ranges down the stack.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Ben Hutchings <ben@decadent.org.uk>
Cc: Bernhard Schmidt <Bernhard.Schmidt@lrz.de>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: David S. Miller <davem@davemloft.net>
Tested-by: David S. Miller <davem@davemloft.net>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
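In terms of the arch hooks, the change boils down to the following signature change; annotations such as __meminit are omitted here:

```c
/* before: the range is expressed in 'struct page' units */
int vmemmap_populate(struct page *start_page, unsigned long nr_pages, int node);

/* after: the generic sparse code converts to bytes once and passes byte ranges down */
int vmemmap_populate(unsigned long start, unsigned long end, int node);
int vmemmap_populate_basepages(unsigned long start, unsigned long end, int node);
```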
- 26 March 2013, 1 commit
Submitted by Ben Hutchings
The 'CONFIG_' prefix is not implicit in IS_ENABLED().

Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Paul Bolle <pebolle@tiscali.nl>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
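For example; the config symbol is only illustrative of the pattern, not necessarily the one touched by the patch, and setup_vmemmap() is a stand-in:

```c
	/* Wrong: the bare token is never a defined macro, so this is always false. */
	if (IS_ENABLED(SPARSEMEM_VMEMMAP))
		setup_vmemmap();

	/* Right: the full CONFIG_ symbol must be spelled out. */
	if (IS_ENABLED(CONFIG_SPARSEMEM_VMEMMAP))
		setup_vmemmap();
```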
- 24 February 2013, 1 commit
Submitted by Tang Chen
Introduce a new API vmemmap_free() to free and remove vmemmap page tables. Since page table implementations differ, each architecture has to provide its own version of vmemmap_free(), just like vmemmap_populate().

Note: vmemmap_free() is not implemented for ia64, ppc, s390, and sparc.

[mhocko@suse.cz: fix implicit declaration of remove_pagetable]
Signed-off-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Signed-off-by: Jianguo Wu <wujianguo@huawei.com>
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Jiang Liu <jiang.liu@huawei.com>
Cc: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
Cc: Wu Jianguo <wujianguo@huawei.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Michal Hocko <mhocko@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
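The shape of the hook, as a sketch; it is shown with the byte-range convention adopted elsewhere in this log, although the parameter list at the time of this patch may have differed, and architectures without memory hot-remove can simply stub it out:

```c
/* No-op implementation for an architecture that never tears vmemmap mappings down. */
void vmemmap_free(unsigned long start, unsigned long end)
{
	/* nothing to do: vmemmap page tables are never removed here */
}
```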
- 23 January 2013, 1 commit
Submitted by Catalin Marinas
This patch adds support for the "earlyprintk=" parameter on the kernel command line. The format is:

earlyprintk=<name>[,<addr>][,<options>]

where <name> is the name of the (UART) device, e.g. "pl011", and <addr> is the I/O address. The <options> are not currently used.

The mapping of the earlyprintk device is done very early during kernel boot and there are restrictions on which functions it can call. A special early_io_map() function is added which creates the mapping from the pre-defined EARLY_IOBASE to the device I/O address passed via the kernel parameter. The pgd entry corresponding to EARLY_IOBASE is pre-populated in head.S during kernel boot.

Only PL011 is currently supported and it is assumed that the interface has already been initialised by the boot loader before the kernel is started.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
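For instance, a PL011 UART at physical address 0x9000000 would be described as follows; the address is a placeholder, so substitute the one for your board:

```
earlyprintk=pl011,0x9000000
```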
- 17 September 2012, 1 commit
Submitted by Catalin Marinas
This patch contains the initialisation of the memory blocks, MMU attributes and the memory map. Only five memory types are defined: Device nGnRnE (equivalent to Strongly Ordered), Device nGnRE (classic Device memory), Device GRE, Normal Non-cacheable and Normal Cacheable. Cache policies are supported via the memory attributes register (MAIR_EL1) and only affect the Normal Cacheable mappings.

This patch also adds the SPARSEMEM_VMEMMAP initialisation.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
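The five types map onto memory attribute indices, one 8-bit field each in MAIR_EL1. The values below reflect the common arm64 definitions and are meant as an illustration rather than a quote of the patch:

```c
#define MT_DEVICE_nGnRnE	0	/* Device, non-gathering, non-reordering, no early ack */
#define MT_DEVICE_nGnRE		1	/* classic Device memory */
#define MT_DEVICE_GRE		2	/* Device, gathering, reordering, early ack */
#define MT_NORMAL_NC		3	/* Normal, non-cacheable */
#define MT_NORMAL		4	/* Normal, cacheable per the MAIR_EL1 attribute */
```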