1. 17 January 2017, 1 commit
    • arm64: Fix swiotlb fallback allocation · 524dabe1
      Authored by Alexander Graf
      Commit b67a8b29 introduced logic to skip swiotlb allocation when all memory
      is DMA accessible anyway.
      
      While this is a great idea, __dma_alloc still calls into the swiotlb code
      unconditionally to allocate memory when there is no CMA memory available.
      The swiotlb code is called so that we at least fall back to get_free_pages().
      
      Without initialization, swiotlb allocation code tries to access io_tlb_list
      which is NULL. That results in a stack trace like this:
      
        Unable to handle kernel NULL pointer dereference at virtual address 00000000
        [...]
        [<ffff00000845b908>] swiotlb_tbl_map_single+0xd0/0x2b0
        [<ffff00000845be94>] swiotlb_alloc_coherent+0x10c/0x198
        [<ffff000008099dc0>] __dma_alloc+0x68/0x1a8
        [<ffff000000a1b410>] drm_gem_cma_create+0x98/0x108 [drm]
        [<ffff000000abcaac>] drm_fbdev_cma_create_with_funcs+0xbc/0x368 [drm_kms_helper]
        [<ffff000000abcd84>] drm_fbdev_cma_create+0x2c/0x40 [drm_kms_helper]
        [<ffff000000abc040>] drm_fb_helper_initial_config+0x238/0x410 [drm_kms_helper]
        [<ffff000000abce88>] drm_fbdev_cma_init_with_funcs+0x98/0x160 [drm_kms_helper]
        [<ffff000000abcf90>] drm_fbdev_cma_init+0x40/0x58 [drm_kms_helper]
        [<ffff000000b47980>] vc4_kms_load+0x90/0xf0 [vc4]
        [<ffff000000b46a94>] vc4_drm_bind+0xec/0x168 [vc4]
        [...]
      
      Thankfully, the swiotlb code just learned how to skip allocations when
      swiotlb_force is set to SWIOTLB_NO_FORCE. This patch configures the
      swiotlb code to use that option if we decide not to initialize the
      swiotlb framework.
      
      Fixes: b67a8b29 ("arm64: mm: only initialize swiotlb when necessary")
      Signed-off-by: Alexander Graf <agraf@suse.de>
      CC: Jisheng Zhang <jszhang@marvell.com>
      CC: Geert Uytterhoeven <geert+renesas@glider.be>
      CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
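
      For reference, the committed fix boils down to the following change in
      mem_init() in arch/arm64/mm/init.c (a sketch, not a verbatim diff):

        void __init mem_init(void)
        {
                if (swiotlb_force == SWIOTLB_FORCE ||
                    max_pfn > (arm64_dma_phys_limit >> PAGE_SHIFT))
                        swiotlb_init(1);
                else
                        /* never initialised, so forbid swiotlb allocations */
                        swiotlb_force = SWIOTLB_NO_FORCE;
                /* ... */
        }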
  2. 19 December 2016, 1 commit
  3. 20 October 2016, 1 commit
    • arm64: remove pr_cont abuse from mem_init · f7881bd6
      Authored by Mark Rutland
      All the lines printed by mem_init are independent, with each ending with
      a newline. While they logically form a large block, none are actually
      continuations of previous lines.
      
      The kernel-side printk code and the userspace dmesg tool differ in their
      handling of KERN_CONT following a newline, and while this isn't always a
      problem kernel-side, it does cause difficulty for userspace. Using
      pr_cont causes the userspace tool not to print the line prefix (e.g.
      timestamps) even when following a newline, mis-aligning the output and
      making it harder to read, e.g.
      
      [    0.000000] Virtual kernel memory layout:
      [    0.000000]     modules : 0xffff000000000000 - 0xffff000008000000   (   128 MB)
          vmalloc : 0xffff000008000000 - 0xffff7dffbfff0000   (129022 GB)
            .text : 0xffff000008080000 - 0xffff0000088b0000   (  8384 KB)
          .rodata : 0xffff0000088b0000 - 0xffff000008c50000   (  3712 KB)
            .init : 0xffff000008c50000 - 0xffff000008d50000   (  1024 KB)
            .data : 0xffff000008d50000 - 0xffff000008e25200   (   853 KB)
             .bss : 0xffff000008e25200 - 0xffff000008e6bec0   (   284 KB)
          fixed   : 0xffff7dfffe7fd000 - 0xffff7dfffec00000   (  4108 KB)
          PCI I/O : 0xffff7dfffee00000 - 0xffff7dffffe00000   (    16 MB)
          vmemmap : 0xffff7e0000000000 - 0xffff800000000000   (  2048 GB maximum)
                    0xffff7e0000000000 - 0xffff7e0026000000   (   608 MB actual)
          memory  : 0xffff800000000000 - 0xffff800980000000   ( 38912 MB)
      [    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=6, Nodes=1
      
      Fix this by using pr_notice consistently for all lines, which both the
      kernel and userspace are happy with.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
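
      A sketch of the fixed style, using the MLM()/MLG() range helpers that
      mem_init() already defines (only the first two lines of the table shown).
      Each line is an independent record, so dmesg prefixes every one:

        pr_notice("Virtual kernel memory layout:\n");
        pr_notice("    modules : 0x%16lx - 0x%16lx   (%6ld MB)\n",
                  MLM(MODULES_VADDR, MODULES_END));
        pr_notice("    vmalloc : 0x%16lx - 0x%16lx   (%6ld GB)\n",
                  MLG(VMALLOC_START, VMALLOC_END));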
  4. 07 September 2016, 1 commit
  5. 22 August 2016, 1 commit
  6. 29 July 2016, 1 commit
    • arm64:acpi: fix the acpi alignment exception when 'mem=' specified · cb0a6502
      Authored by Dennis Chen
      When booting an ACPI enabled kernel with 'mem=x', there is the
      possibility that ACPI data regions from the firmware will lie above the
      memory limit.  Ordinarily these will be removed by
      memblock_enforce_memory_limit(.).
      
      Unfortunately, this means that these regions will then be mapped by
      acpi_os_ioremap(.) as device memory (instead of normal memory), and
      unaligned accesses will then provoke alignment faults.
      
      In this patch we adopt memblock_mem_limit_remove_map instead; it
      preserves these ACPI data regions (marked NOMAP), ensuring that they
      are not mapped as device memory.
      
      For example, below is an alignment exception observed on ARM platform
      when booting the kernel with 'acpi=on mem=8G':
      
        ...
        Unable to handle kernel paging request at virtual address ffff0000080521e7
        pgd = ffff000008aa0000
        [ffff0000080521e7] *pgd=000000801fffe003, *pud=000000801fffd003, *pmd=000000801fffc003, *pte=00e80083ff1c1707
        Internal error: Oops: 96000021 [#1] PREEMPT SMP
        Modules linked in:
        CPU: 1 PID: 1 Comm: swapper/0 Not tainted 4.7.0-rc3-next-20160616+ #172
        Hardware name: AMD Overdrive/Supercharger/Default string, BIOS ROD1001A 02/09/2016
        task: ffff800001ef0000 ti: ffff800001ef8000 task.ti: ffff800001ef8000
        PC is at acpi_ns_lookup+0x520/0x734
        LR is at acpi_ns_lookup+0x4a4/0x734
        pc : [<ffff0000083b8b10>] lr : [<ffff0000083b8a94>] pstate: 60000045
        sp : ffff800001efb8b0
        x29: ffff800001efb8c0 x28: 000000000000001b
        x27: 0000000000000001 x26: 0000000000000000
        x25: ffff800001efb9e8 x24: ffff000008a10000
        x23: 0000000000000001 x22: 0000000000000001
        x21: ffff000008724000 x20: 000000000000001b
        x19: ffff0000080521e7 x18: 000000000000000d
        x17: 00000000000038ff x16: 0000000000000002
        x15: 0000000000000007 x14: 0000000000007fff
        x13: ffffff0000000000 x12: 0000000000000018
        x11: 000000001fffd200 x10: 00000000ffffff76
        x9 : 000000000000005f x8 : ffff000008725fa8
        x7 : ffff000008a8df70 x6 : ffff000008a8df70
        x5 : ffff000008a8d000 x4 : 0000000000000010
        x3 : 0000000000000010 x2 : 000000000000000c
        x1 : 0000000000000006 x0 : 0000000000000000
        ...
          acpi_ns_lookup+0x520/0x734
          acpi_ds_load1_begin_op+0x174/0x4fc
          acpi_ps_build_named_op+0xf8/0x220
          acpi_ps_create_op+0x208/0x33c
          acpi_ps_parse_loop+0x204/0x838
          acpi_ps_parse_aml+0x1bc/0x42c
          acpi_ns_one_complete_parse+0x1e8/0x22c
          acpi_ns_parse_table+0x8c/0x128
          acpi_ns_load_table+0xc0/0x1e8
          acpi_tb_load_namespace+0xf8/0x2e8
          acpi_load_tables+0x7c/0x110
          acpi_init+0x90/0x2c0
          do_one_initcall+0x38/0x12c
          kernel_init_freeable+0x148/0x1ec
          kernel_init+0x10/0xec
          ret_from_fork+0x10/0x40
        Code: b9009fbc 2a00037b 36380057 3219037b (b9400260)
        ---[ end trace 03381e5eb0a24de4 ]---
        Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
      
      With 'efi=debug', we can see those ACPI regions loaded by firmware on
      that board as:
      
        efi:   0x0083ff185000-0x0083ff1b4fff [Reserved           |   |  |  |  |  |  |  |   |WB|WT|WC|UC]*
        efi:   0x0083ff1b5000-0x0083ff1c2fff [ACPI Reclaim Memory|   |  |  |  |  |  |  |   |WB|WT|WC|UC]*
        efi:   0x0083ff223000-0x0083ff224fff [ACPI Memory NVS    |   |  |  |  |  |  |  |   |WB|WT|WC|UC]*
      
      Link: http://lkml.kernel.org/r/1468475036-5852-3-git-send-email-dennis.chen@arm.com
      Acked-by: Steve Capper <steve.capper@arm.com>
      Signed-off-by: Dennis Chen <dennis.chen@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Tang Chen <tangchen@cn.fujitsu.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Rafael J. Wysocki <rafael@kernel.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Matt Fleming <matt@codeblueprint.co.uk>
      Cc: Kaly Xin <kaly.xin@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
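
      The core of the change in arm64_memblock_init() is a swap of the
      clipping call (a sketch; surrounding code omitted):

        /* before: drops everything above the limit, NOMAP regions included */
        memblock_enforce_memory_limit(memory_limit);

        /* after: same clipping, but regions marked NOMAP (such as the ACPI
         * reclaim/NVS regions above) stay in the memblock memory map */
        memblock_mem_limit_remove_map(memory_limit);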
  7. 28 June 2016, 2 commits
  8. 21 June 2016, 1 commit
  9. 20 April 2016, 2 commits
  10. 16 April 2016, 1 commit
  11. 14 April 2016, 4 commits
    • arm64: mm: move vmemmap region right below the linear region · 3e1907d5
      Authored by Ard Biesheuvel
      This moves the vmemmap region right below PAGE_OFFSET, aka the start
      of the linear region, and redefines its size to be a power of two.
      Due to the placement of PAGE_OFFSET in the middle of the address space,
      whose size is a power of two as well, this guarantees that virt to
      page conversions and vice versa can be implemented efficiently, by
      masking and shifting rather than ordinary arithmetic.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
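
      The flavour of conversion this layout enables looks roughly like the
      following mask-and-or macros (paraphrased, not the exact kernel
      definitions):

        /* both regions are power-of-two sized and aligned, so the linear
         * offset and the vmemmap offset can be merged in with a plain OR */
        #define __virt_to_pgoff(va)     (((u64)(va) & ~PAGE_OFFSET) / \
                                         PAGE_SIZE * sizeof(struct page))
        #define __page_to_voff(pg)      (((u64)(pg) & ~VMEMMAP_START) * \
                                         PAGE_SIZE / sizeof(struct page))

        #define virt_to_page(va)  ((struct page *)(__virt_to_pgoff(va) | VMEMMAP_START))
        #define page_to_virt(pg)  ((void *)(__page_to_voff(pg) | PAGE_OFFSET))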
    • arm64: mm: free __init memory via the linear mapping · d386825c
      Authored by Ard Biesheuvel
      The implementation of free_initmem_default() expects __init_begin
      and __init_end to be covered by the linear mapping, which is no
      longer the case. So open code it instead, using addresses that are
      explicitly translated from kernel virtual to linear virtual.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
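
      The open-coded replacement is essentially (sketch):

        void free_initmem(void)
        {
                /* __init_begin/__init_end are kernel-image addresses;
                 * free their linear-map aliases instead */
                free_reserved_area(__va(__pa(__init_begin)),
                                   __va(__pa(__init_end)),
                                   0, "unused kernel");
        }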
    • arm64: add the initrd region to the linear mapping explicitly · 177e15f0
      Authored by Ard Biesheuvel
      Instead of going out of our way to relocate the initrd if it turns out
      to occupy memory that is not covered by the linear mapping, just add the
      initrd to the linear mapping. This puts the burden on the bootloader to
      pass initrd= and mem= options that are mutually consistent.
      
      Note that, since the placement of the linear region in the PA space is
      also dependent on the placement of the kernel Image, which may reside
      anywhere in memory, we may still end up with a situation where the initrd
      and the kernel Image are simply too far apart to be covered by the linear
      region.
      
      Since we now leave it up to the bootloader to pass the initrd in memory
      that is guaranteed to be accessible by the kernel, add a mention of this to
      the arm64 boot protocol specification as well.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
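
      Simplified, the approach is to add (and reserve) the bootloader-chosen
      physical range rather than move it; initrd_start/initrd_end hold
      physical addresses at this point (see commit a89dea58 below):

        if (initrd_start) {
                /* put the initrd range (back) into the linear map */
                memblock_add(initrd_start, initrd_end - initrd_start);
                memblock_reserve(initrd_start, initrd_end - initrd_start);
        }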
    • arm64/mm: ensure memstart_addr remains sufficiently aligned · 2958987f
      Authored by Ard Biesheuvel
      After choosing memstart_addr to be the highest multiple of
      ARM64_MEMSTART_ALIGN less than or equal to the first usable physical memory
      address, we clip the memblocks to the maximum size of the linear region.
      Since the kernel may be high up in memory, we take care not to clip the
      kernel itself, which means we have to clip some memory from the bottom if
      this occurs, to ensure that the distance between the first and the last
      usable physical memory address can be covered by the linear region.
      
      However, we fail to update memstart_addr if this clipping from the bottom
      occurs, which means that we may still end up with virtual addresses that
      wrap into the userland range. So increment memstart_addr as appropriate to
      prevent this from happening.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
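
      A paraphrased sketch of the fix in arm64_memblock_init():

        if (memstart_addr + linear_region_size < memblock_end_of_DRAM()) {
                /* clip from the bottom, keeping memstart_addr aligned to
                 * ARM64_MEMSTART_ALIGN while doing so */
                memstart_addr = round_up(memblock_end_of_DRAM() - linear_region_size,
                                         ARM64_MEMSTART_ALIGN);
                memblock_remove(0, memstart_addr);
        }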
  12. 21 March 2016, 1 commit
  13. 02 March 2016, 1 commit
  14. 01 March 2016, 2 commits
  15. 27 February 2016, 1 commit
    • arm64: vmemmap: use virtual projection of linear region · dfd55ad8
      Authored by Ard Biesheuvel
      Commit dd006da2 ("arm64: mm: increase VA range of identity map") made
      some changes to the memory mapping code to allow physical memory to reside
      at an offset that exceeds the size of the virtual mapping.
      
      However, since the size of the vmemmap area is proportional to the size of
      the VA area, but it is populated relative to the physical space, we may
      end up with the struct page array being mapped outside of the vmemmap
      region. For instance, on my Seattle A0 box, I can see the following output
      in the dmesg log.
      
         vmemmap : 0xffffffbdc0000000 - 0xffffffbfc0000000   (     8 GB maximum)
                   0xffffffbfc0000000 - 0xffffffbfd0000000   (   256 MB actual)
      
      We can fix this by deciding that the vmemmap region is not a projection of
      the physical space, but of the virtual space above PAGE_OFFSET, i.e., the
      linear region. This way, we are guaranteed that the vmemmap region is of
      sufficient size, and we can even reduce the size by half.
      
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
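
      The fix is essentially the following redefinition (simplified): bias
      the array base by the PFN of the start of RAM, so the vmemmap region
      is indexed by linear-map offset rather than by physical address:

        #define VMEMMAP_START   (VMALLOC_END + SZ_64K)
        #define vmemmap         ((struct page *)VMEMMAP_START - \
                                 (memstart_addr >> PAGE_SHIFT))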
  16. 26 February 2016, 2 commits
  17. 24 February 2016, 1 commit
    • arm64: kaslr: randomize the linear region · c031a421
      Authored by Ard Biesheuvel
      When KASLR is enabled (CONFIG_RANDOMIZE_BASE=y), and entropy has been
      provided by the bootloader, randomize the placement of RAM inside the
      linear region if sufficient space is available. For instance, on a 4KB
      granule/3 levels kernel, the linear region is 256 GB in size, and we can
      choose any 1 GB aligned offset that is far enough from the top of the
      address space to fit the distance between the start of the lowest memblock
      and the top of the highest memblock.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
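
      A sketch close to the committed logic in arm64_memblock_init();
      memstart_offset_seed is the 16-bit seed derived from the
      bootloader-supplied entropy:

        if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
                extern u16 memstart_offset_seed;
                u64 range = linear_region_size -
                            (memblock_end_of_DRAM() - memblock_start_of_DRAM());

                /* slide RAM inside the linear region by a seed-scaled,
                 * ARM64_MEMSTART_ALIGN aligned offset */
                if (memstart_offset_seed > 0 && range >= ARM64_MEMSTART_ALIGN) {
                        range /= ARM64_MEMSTART_ALIGN;
                        memstart_addr -= ARM64_MEMSTART_ALIGN *
                                         ((range * memstart_offset_seed) >> 16);
                }
        }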
  18. 19 February 2016, 3 commits
    • arm64: allow kernel Image to be loaded anywhere in physical memory · a7f8de16
      Authored by Ard Biesheuvel
      This relaxes the kernel Image placement requirements, so that it
      may be placed at any 2 MB aligned offset in physical memory.
      
      This is accomplished by ignoring PHYS_OFFSET when installing
      memblocks, and accounting for the apparent virtual offset of
      the kernel Image. As a result, virtual address references
      below PAGE_OFFSET are correctly mapped onto physical references
      into the kernel Image regardless of where it sits in memory.
      
      Special care needs to be taken for dealing with memory limits passed
      via mem=, since the generic implementation clips memory top down, which
      may clip the kernel image itself if it is loaded high up in memory. To
      deal with this case, we simply add back the memory covering the kernel
      image, which may result in more memory to be retained than was passed
      as a mem= parameter.
      
      Since mem= should not be considered a production feature, a panic notifier
      handler is installed that dumps the memory limit at panic time if one was
      set.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
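
      The mem= handling described above amounts to (sketch):

        if (memory_limit != (phys_addr_t)ULLONG_MAX) {
                memblock_enforce_memory_limit(memory_limit);
                /* add back the kernel image, which top-down clipping may
                 * just have removed */
                memblock_add(__pa(_text), (u64)(_end - _text));
        }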
    • arm64: defer __va translation of initrd_start and initrd_end · a89dea58
      Authored by Ard Biesheuvel
      Before deferring the assignment of memstart_addr (in a subsequent patch)
      to the moment where all memory has been discovered and possibly clipped,
      based on the size of the linear region and the presence of a mem= command
      line parameter, we need to ensure that memstart_addr is not used to
      perform __va translations before it is assigned.
      
      One such use is in the generic early DT discovery of the initrd location,
      which is recorded as a virtual address in the globals initrd_start and
      initrd_end. So wire up the generic support to declare the initrd
      addresses, implement it without __va() translations, and perform the
      translation only after memstart_addr has been assigned.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
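
      A sketch of the two halves of the change; __early_init_dt_declare_initrd()
      overrides the generic, __va()-based definition in drivers/of/fdt.c:

        /* asm/memory.h: record the physical addresses as-is for now */
        #define __early_init_dt_declare_initrd(__start, __end)          \
                do {                                                    \
                        initrd_start = (__start);                       \
                        initrd_end = (__end);                           \
                } while (0)

        /* later, in arm64_memblock_init(), once memstart_addr is known: */
        initrd_start = (unsigned long)__va(initrd_start);
        initrd_end = (unsigned long)__va(initrd_end);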
    • arm64: move kernel image to base of vmalloc area · f9040773
      Authored by Ard Biesheuvel
      This moves the module area to right before the vmalloc area, and moves
      the kernel image to the base of the vmalloc area. This is an intermediate
      step towards implementing KASLR, which allows the kernel image to be
      located anywhere in the vmalloc area.
      
      Since other subsystems such as hibernate may still need to refer to the
      kernel text or data segments via their linear addresses, both are mapped
      in the linear region as well. The linear alias of the text region is
      mapped read-only/non-executable to prevent inadvertent modification or
      execution.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  19. 11 December 2015, 1 commit
    • arm64: mm: fold alternatives into .init · 9aa4ec15
      Authored by Mark Rutland
      Currently we treat the alternatives separately from other data that's
      only used during initialisation, using separate .altinstructions and
      .altinstr_replacement linker sections. These are freed for general
      allocation separately from .init*. This is problematic as:
      
      * We do not remove execute permissions, as we do for .init, leaving the
        memory executable.
      
      * We pad between them, making the kernel Image binary up to PAGE_SIZE
        bytes larger than necessary.
      
      This patch moves the two sections into the contiguous region used for
      .init*. This saves some memory, ensures that we remove execute
      permissions, and allows us to remove some code made redundant by this
      reorganisation.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Andre Przywara <andre.przywara@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Jeremy Linton <jeremy.linton@arm.com>
      Cc: Laura Abbott <labbott@fedoraproject.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
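
      Schematically, the vmlinux.lds.S change pulls the two sections inside
      the [__init_begin, __init_end) window (a sketch; the other .init
      sections are omitted):

        __init_begin = .;
        /* ... .init.text, .init.data, ... */
        .altinstructions : {
                __alt_instructions = .;
                *(.altinstructions)
                __alt_instructions_end = .;
        }
        .altinstr_replacement : {
                *(.altinstr_replacement)
        }
        __init_end = .;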
  20. 10 December 2015, 1 commit
  21. 02 December 2015, 1 commit
  22. 30 October 2015, 1 commit
  23. 13 October 2015, 1 commit
    • ARM64: kasan: print memory assignment · ee7f881b
      Authored by Linus Walleij
      This prints out the virtual memory assigned to KASan in the
      boot crawl along with other memory assignments, if and only
      if KASan is activated.
      
      Example dmesg from the Juno Development board:
      
      Memory: 1691156K/2080768K available (5465K kernel code, 444K rwdata,
      2160K rodata, 340K init, 217K bss, 373228K reserved, 16384K cma-reserved)
      Virtual kernel memory layout:
          kasan   : 0xffffff8000000000 - 0xffffff9000000000   (    64 GB)
          vmalloc : 0xffffff9000000000 - 0xffffffbdbfff0000   (   182 GB)
          vmemmap : 0xffffffbdc0000000 - 0xffffffbfc0000000   (     8 GB maximum)
                    0xffffffbdc2000000 - 0xffffffbdc3fc0000   (    31 MB actual)
          fixed   : 0xffffffbffabfd000 - 0xffffffbffac00000   (    12 KB)
          PCI I/O : 0xffffffbffae00000 - 0xffffffbffbe00000   (    16 MB)
          modules : 0xffffffbffc000000 - 0xffffffc000000000   (    64 MB)
          memory  : 0xffffffc000000000 - 0xffffffc07f000000   (  2032 MB)
            .init : 0xffffffc0007f5000 - 0xffffffc00084a000   (   340 KB)
            .text : 0xffffffc000080000 - 0xffffffc0007f45b4   (  7634 KB)
            .data : 0xffffffc000850000 - 0xffffffc0008bf200   (   445 KB)
      Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
      Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
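
      Simplified, the addition is a guarded print of the shadow range, in
      the spirit of the following sketch (using the MLG() helper from
      mem_init()):

        #ifdef CONFIG_KASAN
                pr_notice("    kasan   : 0x%16lx - 0x%16lx   (%6ld GB)\n",
                          MLG(KASAN_SHADOW_START, KASAN_SHADOW_END));
        #endif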
  24. 28 July 2015, 1 commit
  25. 17 June 2015, 1 commit
    • arm64: mm: Fix freeing of the wrong memmap entries with !SPARSEMEM_VMEMMAP · b9bcc919
      Authored by Dave P Martin
      The memmap freeing code in free_unused_memmap() computes the end of
      each memblock by adding the memblock size onto the base.  However,
      if SPARSEMEM is enabled then the value (start) used for the base
      may already have been rounded downwards to work out which memmap
      entries to free after the previous memblock.
      
      This may cause memmap entries that are in use to get freed.
      
      In general, you're not likely to hit this problem unless there
      are at least 2 memblocks and one of them is not aligned to a
      sparsemem section boundary.  Note that carve-outs can increase
      the number of memblocks by splitting the regions listed in the
      device tree.
      
      This problem doesn't occur with SPARSEMEM_VMEMMAP, because the
      vmemmap code deals with freeing the unused regions of the memmap
      instead of requiring the arch code to do it.
      
      This patch gets the memblock base out of the memblock directly when
      computing the block end address to ensure the correct value is used.
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
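
      With the fix applied, the loop in free_unused_memmap() looks roughly
      like this (paraphrased):

        for_each_memblock(memory, reg) {
                start = __phys_to_pfn(reg->base);
        #ifdef CONFIG_SPARSEMEM
                /* may round start down into the previous section... */
                start = min(start, ALIGN(prev_end, PAGES_PER_SECTION));
        #endif
                if (prev_end && prev_end < start)
                        free_memmap(prev_end, start);

                /* ...so compute the end from the memblock itself, never
                 * from the possibly rounded start */
                prev_end = ALIGN(__phys_to_pfn(reg->base + reg->size),
                                 MAX_ORDER_NR_PAGES);
        }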
  26. 02 June 2015, 2 commits
  27. 15 April 2015, 1 commit
  28. 28 February 2015, 1 commit
  29. 23 January 2015, 1 commit
    • arm64: Fix overlapping VA allocations · aa03c428
      Authored by Mark Rutland
      PCI IO space was intended to be 16MiB, at 32MiB below MODULES_VADDR, but
      commit d1e6dc91 ("arm64: Add architectural support for PCI")
      extended this to cover the full 32MiB. The final 8KiB of this 32MiB is
      also allocated for the fixmap, allowing for potential clashes between
      the two.
      
      This change was masked by assumptions in mem_init and the page table
      dumping code, which assumed the I/O space to be 16MiB long through
      separate hard-coded definitions.
      
      This patch changes the definition of the PCI I/O space allocation to
      live in asm/memory.h, along with the other VA space allocations. As the
      fixmap allocation depends on the number of fixmap entries, it is moved
      below the PCI I/O space allocation. Both the fixmap and the PCI I/O
      space are guarded with 2MB of padding. Sites assuming the I/O space was
      16MiB are moved over to use the new PCI_IO_{START,END} definitions,
      which will stay in sync with the size of the I/O space (now restored to
      16MiB).
      
      As a useful side effect, the use of the new PCI_IO_{START,END}
      definitions prevents a build issue in the dumping code due to a (now
      redundant) missing include of io.h for PCI_IOBASE.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Laura Abbott <lauraa@codeaurora.org>
      Cc: Liviu Dudau <liviu.dudau@arm.com>
      Cc: Steve Capper <steve.capper@linaro.org>
      Cc: Will Deacon <will.deacon@arm.com>
      [catalin.marinas@arm.com: reorder FIXADDR and PCI_IO address_markers_idx enum]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
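
      The resulting layout definitions in asm/memory.h look roughly like
      this (sketch):

        #define PCI_IO_SIZE     SZ_16M
        /* 2MB guard bands separate the fixmap, the I/O space and the
         * modules area */
        #define PCI_IO_END      (MODULES_VADDR - SZ_2M)
        #define PCI_IO_START    (PCI_IO_END - PCI_IO_SIZE)
        #define FIXADDR_TOP     (PCI_IO_START - SZ_2M)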
  30. 22 January 2015, 1 commit