1. 21 Nov 2017, 2 commits
  2. 16 Nov 2017, 1 commit
  3. 06 Nov 2017, 1 commit
    • A
      ARM: 8719/1: NOMMU: work around maybe-uninitialized warning · fe9c0589
      Authored by Arnd Bergmann
      The reworked MPU code produces a new warning in some configurations,
      presumably because, after the code move, the compiler now makes
      different inlining decisions:
      
      arch/arm/mm/pmsa-v7.c: In function 'adjust_lowmem_bounds_mpu':
      arch/arm/mm/pmsa-v7.c:310:5: error: 'specified_mem_size' may be used uninitialized in this function [-Werror=maybe-uninitialized]
      
      This appears to be harmless, as we know that there is always at
      least one memblock, and the only way this could get triggered is
      if the for_each_memblock() loop was never entered.
      
      I could not come up with a better workaround than initializing
      the specified_mem_size to zero, but at least that is the value
      that the variable would have in the hypothetical case of no
      memblocks.
      
      Fixes: 877ec119 ("ARM: 8706/1: NOMMU: Move out MPU setup in separate module")
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Reviewed-by: Vladimir Murzin <vladimir.murzin@arm.com>
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
      fe9c0589
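      A minimal stand-alone illustration of the warning pattern and the zero-init
      workaround described in the entry above (this is not the kernel code; the
      function and variable names below, other than specified_mem_size, are
      invented for the example):

        #include <stdio.h>

        /* Mirrors the shape of the loop in adjust_lowmem_bounds_mpu(): the
         * compiler cannot prove the loop body ever runs, so without the "= 0"
         * initializer it may flag specified_mem_size as maybe-uninitialized
         * at the return statement. */
        static unsigned long last_region_size(const unsigned long *reg,
                                              unsigned int n)
        {
                unsigned long specified_mem_size = 0;   /* the workaround */
                unsigned int i;

                for (i = 0; i < n; i++)
                        specified_mem_size = reg[i];

                return specified_mem_size;
        }

        int main(void)
        {
                unsigned long regions[] = { 16, 32 };

                printf("%lu\n", last_region_size(regions, 2));
                return 0;
        }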
  4. 02 Nov 2017, 1 commit
    • G
      License cleanup: add SPDX GPL-2.0 license identifier to files with no license · b2441318
      Authored by Greg Kroah-Hartman
      Many source files in the tree are missing licensing information, which
      makes it harder for compliance tools to determine the correct license.
      
      By default all files without license information are under the default
      license of the kernel, which is GPL version 2.
      
      Update the files which contain no license information with the 'GPL-2.0'
      SPDX license identifier.  The SPDX identifier is a legally binding
      shorthand, which can be used instead of the full boilerplate text.
      
      This patch is based on work done by Thomas Gleixner and Kate Stewart and
      Philippe Ombredanne.
      
      How this work was done:
      
      Patches were generated and checked against linux-4.14-rc6 for a subset of
      the use cases:
       - file had no licensing information in it,
       - file was a */uapi/* one with no licensing information in it,
       - file was a */uapi/* one with existing licensing information,
      
      Further patches will be generated in subsequent months to fix up cases
      where non-standard license headers were used, and references to license
      had to be inferred by heuristics based on keywords.
      
      The analysis to determine which SPDX License Identifier should be applied
      to a file was done in a spreadsheet of side-by-side results from the
      output of two independent scanners (ScanCode & Windriver) producing SPDX
      tag:value files created by Philippe Ombredanne.  Philippe prepared the
      base worksheet, and did an initial spot review of a few thousand files.
      
      The 4.13 kernel was the starting point of the analysis with 60,537 files
      assessed.  Kate Stewart did a file by file comparison of the scanner
      results in the spreadsheet to determine which SPDX license identifier(s)
      should be applied to the file. She confirmed any determination that was not
      immediately clear with lawyers working with the Linux Foundation.
      
      The criteria used to select files for SPDX license identifier tagging were:
       - Files considered eligible had to be source code files.
       - Make and config files were included as candidates if they contained >5
         lines of source
       - File already had some variant of a license header in it (even if <5
         lines).
      
      All documentation files were explicitly excluded.
      
      The following heuristics were used to determine which SPDX license
      identifiers to apply.
      
       - when both scanners couldn't find any license traces, file was
         considered to have no license information in it, and the top level
         COPYING file license applied.
      
         For non */uapi/* files that summary was:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|-------
         GPL-2.0                                              11139
      
         and resulted in the first patch in this series.
      
         If that file was a */uapi/* path one, it was "GPL-2.0 WITH
         Linux-syscall-note" otherwise it was "GPL-2.0".  Results of that was:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|-------
         GPL-2.0 WITH Linux-syscall-note                        930
      
         and resulted in the second patch in this series.
      
       - if a file had some form of licensing information in it, and was one
         of the */uapi/* ones, it was denoted with the Linux-syscall-note if
         any GPL family license was found in the file or had no licensing in
         it (per prior point).  Results summary:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|------
         GPL-2.0 WITH Linux-syscall-note                       270
         GPL-2.0+ WITH Linux-syscall-note                      169
         ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)    21
         ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)    17
         LGPL-2.1+ WITH Linux-syscall-note                      15
         GPL-1.0+ WITH Linux-syscall-note                       14
         ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)    5
         LGPL-2.0+ WITH Linux-syscall-note                       4
         LGPL-2.1 WITH Linux-syscall-note                        3
         ((GPL-2.0 WITH Linux-syscall-note) OR MIT)              3
         ((GPL-2.0 WITH Linux-syscall-note) AND MIT)             1
      
         and that resulted in the third patch in this series.
      
       - when the two scanners agreed on the detected license(s), that became
         the concluded license(s).
      
       - when there was disagreement between the two scanners (one detected a
         license but the other didn't, or they both detected different
         licenses) a manual inspection of the file occurred.
      
       - In most cases a manual inspection of the information in the file
         resulted in a clear resolution of the license that should apply (and
         which scanner probably needed to revisit its heuristics).
      
       - When it was not immediately clear, the license identifier was
         confirmed with lawyers working with the Linux Foundation.
      
       - If there was any question as to the appropriate license identifier,
         the file was flagged for further research and to be revisited later
         in time.
      
      In total, over 70 hours of logged manual review was done on the
      spreadsheet to determine the SPDX license identifiers to apply to the
      source files by Kate, Philippe, Thomas and, in some cases, confirmation
      by lawyers working with the Linux Foundation.
      
      Kate also obtained a third independent scan of the 4.13 code base from
      FOSSology, and compared selected files where the other two scanners
      disagreed against that SPDX file, to see if there were new insights.  The
      Windriver scanner is based on an older version of FOSSology in part, so
      they are related.
      
      Thomas did random spot checks in about 500 files from the spreadsheets
      for the uapi headers and agreed with SPDX license identifier in the
      files he inspected. For the non-uapi files Thomas did random spot checks
      in about 15000 files.
      
      In the initial set of patches against 4.14-rc6, 3 files were found to have
      copy/paste license identifier errors, and have been fixed to reflect the
      correct identifier.
      
      Additionally Philippe spent 10 hours this week doing a detailed manual
      inspection and review of the 12,461 patched files from the initial patch
      version early this week with:
       - a full scancode scan run, collecting the matched texts, detected
         license ids and scores
       - reviewing anything where there was a license detected (about 500+
         files) to ensure that the applied SPDX license was correct
       - reviewing anything where there was no detection but the patch license
         was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
         SPDX license was correct
      
      This produced a worksheet with 20 files needing minor correction.  This
      worksheet was then exported into 3 different .csv files for the
      different types of files to be modified.
      
      These .csv files were then reviewed by Greg.  Thomas wrote a script to
      parse the csv files and add the proper SPDX tag to the file, in the
      format that the file expected.  This script was further refined by Greg
      based on the output to detect more types of files automatically and to
      distinguish between header and source .c files (which need different
      comment types).  Finally Greg ran the script using the .csv files to
      generate the patches.
      Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
      Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      b2441318
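      As a concrete illustration of what this series adds (a sketch, not a hunk
      from the patch; the file names are invented), the identifier goes on the
      first line of each file, using the comment style the file type expects:

        // SPDX-License-Identifier: GPL-2.0
        /* example.c -- C source files take the C++-style comment form */

        /* SPDX-License-Identifier: GPL-2.0 */
        /* example.h -- headers keep the block-comment form; uapi headers covered
         * by this series carry "GPL-2.0 WITH Linux-syscall-note" instead */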
  5. 23 Oct 2017, 6 commits
  6. 12 Oct 2017, 1 commit
    • N
      ARM: 8700/1: nommu: always reserve address 0 away · 195320fd
      Authored by Nicolas Pitre
      Some nommu systems have RAM at address 0. When vectors are not located
      there, the very beginning of memory remains available for dynamic
      allocations. The memblock allocator explicitly skips the first page
      but the standard page allocator does not, and while it correctly returns
      a non-null struct page pointer for that page, page_address() gives 0
      which gets confused with NULL (out of memory) by callers despite having
      plenty of free memory left.
      Signed-off-by: Nicolas Pitre <nico@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
      195320fd
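      A heavily simplified sketch of the idea (not the actual diff; the helper
      name is invented): keep physical page 0 out of the page allocator so that
      page_address() of an allocated page can never be 0 and be mistaken for an
      allocation failure.

        #include <linux/init.h>
        #include <linux/memblock.h>

        static void __init reserve_page_zero(void)      /* hypothetical helper */
        {
                /* Only relevant when RAM really starts at physical address 0. */
                if (memblock_start_of_DRAM() == 0)
                        memblock_reserve(0, PAGE_SIZE);
        }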
  7. 28 Sep 2017, 4 commits
  8. 29 Aug 2017, 2 commits
    • M
      ARM: 8692/1: mm: abort uaccess retries upon fatal signal · 746a272e
      Authored by Mark Rutland
      When there's a fatal signal pending, arm's do_page_fault()
      implementation returns 0. The intent is that we'll return to the
      faulting userspace instruction, delivering the signal on the way.
      
      However, if we take a fatal signal during fixing up a uaccess, this
      results in a return to the faulting kernel instruction, which will be
      instantly retried, resulting in the same fault being taken forever. As
      the task never reaches userspace, the signal is not delivered, and the
      task is left unkillable. While the task is stuck in this state, it can
      inhibit the forward progress of the system.
      
      To avoid this, we must ensure that when a fatal signal is pending, we
      apply any necessary fixup for a faulting kernel instruction. Thus we
      will return to an error path, and it is up to that code to make forward
      progress towards delivering the fatal signal.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Steve Capper <steve.capper@arm.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
      746a272e
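      A simplified sketch of the resulting control flow in arm's do_page_fault()
      (a fragment for illustration, not the literal diff; the surrounding code
      and the no_context fixup label belong to the existing fault handler):

        if ((fault & VM_FAULT_RETRY) && fatal_signal_pending(current)) {
                if (!user_mode(regs))
                        goto no_context;  /* kernel/uaccess fault: run the
                                             exception fixup instead of retrying
                                             the same instruction forever */
                return 0;                 /* userspace: the signal is delivered
                                             on the way back to the faulting
                                             instruction */
        }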
    • H
      ARM: 8690/1: lpae: build TTB control register value from scratch in v7_ttb_setup · f26fee5f
      Authored by Hoeun Ryu
      Reading TTBCR in the early boot stage might return the value left over
      from the previous kernel's configuration, especially in the kexec case.
      For example, the normal (first) kernel may have run with PHYS_OFFSET <=
      PAGE_OFFSET while the crash (second) kernel runs with PHYS_OFFSET >
      PAGE_OFFSET, which can happen because it depends on the area reserved
      for the crash kernel. Reading TTBCR and ORing other bit fields into that
      value is therefore risky, since nothing has reset TTBCR to a known value.
      Suggested-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Hoeun Ryu <hoeun.ryu@gmail.com>
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
      f26fee5f
  9. 14 Aug 2017, 1 commit
    • R
      ARM: align .data section · 1abd3502
      Authored by Russell King
      Robert Jarzmik reports that his PXA25x system fails to boot with 4.12,
      failing at __flush_whole_cache in arch/arm/mm/proc-xscale.S:215:
      
         0xc0019e20 <+0>:     ldr     r1, [pc, #788]
         0xc0019e24 <+4>:     ldr     r0, [r1]	<== here
      
      with r1 containing 0xc06f82cd, which is the address of "clean_addr".
      Examination of the System.map shows:
      
      c06f22c8 D user_pmd_table
      c06f22cc d __warned.19178
      c06f22cd d clean_addr
      
      indicating that a .data.unlikely section has appeared just before the
      .data section from proc-xscale.S.  According to objdump -h, it appears
      that our assembly files default their .data alignment to 2**0, which
      is bad news if the preceding .data section size is not power-of-2
      aligned at link time.
      
      Add the appropriate .align directives to all assembly files in arch/arm
      that are missing them where we require an appropriate alignment.
      Reported-by: Robert Jarzmik <robert.jarzmik@free.fr>
      Tested-by: Robert Jarzmik <robert.jarzmik@free.fr>
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
      1abd3502
  10. 20 Jul 2017, 2 commits
  11. 01 Jul 2017, 4 commits
  12. 30 Jun 2017, 1 commit
    • D
      ARM: 8685/1: ensure memblock-limit is pmd-aligned · 9e25ebfe
      Authored by Doug Berger
      The pmd containing memblock_limit is cleared by prepare_page_table()
      which creates the opportunity for early_alloc() to allocate unmapped
      memory if memblock_limit is not pmd aligned causing a boot-time hang.
      
      Commit 965278dc ("ARM: 8356/1: mm: handle non-pmd-aligned end of RAM")
      attempted to resolve this problem, but there is a path through the
      adjust_lowmem_bounds() routine where if all memory regions start and
      end on pmd-aligned addresses the memblock_limit will be set to
      arm_lowmem_limit.
      
      Since arm_lowmem_limit can be affected by the vmalloc early parameter,
      the value of arm_lowmem_limit may not be pmd-aligned. This commit
      corrects this oversight such that memblock_limit is always rounded
      down to pmd-alignment.
      
      Fixes: 965278dc ("ARM: 8356/1: mm: handle non-pmd-aligned end of RAM")
      Signed-off-by: Doug Berger <opendmb@gmail.com>
      Suggested-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
      9e25ebfe
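      A minimal sketch of the end state described above, at the tail of
      adjust_lowmem_bounds() (simplified, not the literal diff):

        if (!memblock_limit)
                memblock_limit = arm_lowmem_limit;

        /* Round down unconditionally, so the limit can never land in the
         * middle of a pmd and let early_alloc() hand out unmapped memory. */
        memblock_limit = round_down(memblock_limit, PMD_SIZE);

        memblock_set_current_limit(memblock_limit);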
  13. 28 Jun 2017, 2 commits
  14. 19 Jun 2017, 1 commit
    • H
      mm: larger stack guard gap, between vmas · 1be7107f
      Authored by Hugh Dickins
      The stack guard page is a useful feature to reduce the risk of the stack
      smashing into a different mapping. We have been using a single-page gap,
      which is sufficient to prevent the stack from sitting adjacent to a
      different mapping. But this seems to be insufficient in light of the
      stack usage in userspace. E.g. glibc uses alloca() allocations as large
      as 64kB in many commonly used functions. Others use constructs like
      gid_t buffer[NGROUPS_MAX], which is 256kB, or stack strings with
      MAX_ARG_STRLEN.
      
      This is especially dangerous for suid binaries with the default unlimited
      stack size limit, because such applications can be tricked into consuming
      a large portion of the stack, and a single glibc call could then jump
      over the guard page. These attacks are not theoretical, unfortunately.
      
      Make those attacks less probable by increasing the stack guard gap
      to 1MB (on systems with 4k pages; but make it depend on the page size
      because systems with larger base pages might cap stack allocations in
      the PAGE_SIZE units) which should cover larger alloca() and VLA stack
      allocations. It is obviously not a full fix because the problem is
      somewhat inherent, but it should reduce the attack space a lot.
      
      One could argue that the gap size should be configurable from userspace,
      but that can be done later when somebody finds that the new 1MB is wrong
      for some special case applications.  For now, add a kernel command line
      option (stack_guard_gap) to specify the stack gap size (in page units).
      
      Implementation wise, first delete all the old code for stack guard page:
      because although we could get away with accounting one extra page in a
      stack vma, accounting a larger gap can break userspace - case in point,
      a program run with "ulimit -S -v 20000" failed when the 1MB gap was
      counted for RLIMIT_AS; similar problems could come with RLIMIT_MLOCK
      and strict non-overcommit mode.
      
      Instead of keeping the gap inside the stack vma, maintain the stack guard
      gap as a gap between vmas: using vm_start_gap() in place of vm_start
      (or vm_end_gap() in place of vm_end if VM_GROWSUP) in just those few
      places which need to respect the gap - mainly arch_get_unmapped_area(),
      and the vma tree's subtree_gap support for that.
      Original-patch-by: Oleg Nesterov <oleg@redhat.com>
      Original-patch-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Tested-by: Helge Deller <deller@gmx.de> # parisc
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1be7107f
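      A condensed sketch of the mechanism described above (helper and variable
      names follow the commit text; the real patch touches many more places):

        /* Default gap: 256 pages, i.e. 1MB with 4k pages; tunable via the
         * stack_guard_gap= kernel command line option (value in pages). */
        unsigned long stack_guard_gap = 256UL << PAGE_SHIFT;

        static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
        {
                unsigned long vm_start = vma->vm_start;

                if (vma->vm_flags & VM_GROWSDOWN) {
                        vm_start -= stack_guard_gap;
                        if (vm_start > vma->vm_start)   /* guard against underflow */
                                vm_start = 0;
                }
                return vm_start;
        }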
  15. 30 May 2017, 3 commits
    • R
      ARM: make configuration of userspace Thumb support an expert option · 1515b186
      Authored by Russell King
      David Mosberger reports random segfaults and other problems when running
      his buildroot userspace.  It turns out that his kernel did not have
      support for Thumb userspace, nor did his application, but glibc itself
      made use of Thumb instructions.
      
      The kernel Thumb support option already recommends being enabled, and
      its default is biased that way, but clearly this is not enough of a
      recommendation.
      
      So, hide this option behind CONFIG_EXPERT as well, and include a note
      indicating the potential issues if it is turned off while userspace
      makes use of Thumb mode.
      Reported-by: David Mosberger <davidm@egauge.net>
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
      1515b186
    • S
      arm: dma-mapping: Reset the device's dma_ops · d3e01c51
      Authored by Sricharan R
      arch_teardown_dma_ops() being the inverse of arch_setup_dma_ops(),
      dma_ops should be cleared in the teardown path. Currently, only the
      device's iommu mapping structures are cleared in arch_teardown_dma_ops,
      but not the dma_ops. So on the next reprobe, the dma_ops left in place
      are stale from the first IOMMU setup, but the iommu mappings have been
      disposed of. This is a problem when the probe of the device is deferred
      and retried with IOMMU probe deferral.
      
      To fix this, slightly refactor by moving the code from
      __arm_iommu_detach_device to arm_iommu_detach_device and cleaning up
      the former. This takes care of resetting the dma_ops in the teardown
      path.
      
      Fixes: 09515ef5 ("of/acpi: Configure dma operations at probe time for platform/amba/pci bus devices")
      Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
      Signed-off-by: Sricharan R <sricharan@codeaurora.org>
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
      d3e01c51
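      A condensed sketch of the teardown path after this change (bodies trimmed
      to the relevant calls; not the exact diff):

        static void arm_teardown_iommu_dma_ops(struct device *dev)
        {
                if (to_dma_iommu_mapping(dev))
                        arm_iommu_detach_device(dev);
        }

        void arm_iommu_detach_device(struct device *dev)
        {
                /* ... detach from the IOMMU domain and release the mapping
                 * (code moved here from __arm_iommu_detach_device, condensed) ... */
                set_dma_ops(dev, NULL);         /* the key point: drop stale ops */
        }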
    • L
      ARM: dma-mapping: Don't tear down third-party mappings · a93a121a
      Authored by Laurent Pinchart
      arch_setup_dma_ops() is used in device probe code paths to create an
      IOMMU mapping and attach it to the device. The function assumes that the
      device is attached to a device-specific IOMMU instance (or at least a
      device-specific TLB in a shared IOMMU instance) and thus creates a
      separate mapping for every device.
      
      On several systems (Renesas R-Car Gen2 being one of them), that
      assumption is not true, and IOMMU mappings must be shared between
      multiple devices. In those cases the IOMMU driver knows better than the
      generic ARM dma-mapping layer and attaches mapping to devices manually
      with arm_iommu_attach_device(), which sets the DMA ops for the device.
      
      The arch_setup_dma_ops() function takes this into account and bails out
      immediately if the device already has DMA ops assigned. However, the
      corresponding arch_teardown_dma_ops() function, called from driver
      unbind code paths (including probe deferral), will tear the mapping down
      regardless of who created it. When the device is reprobed
      arch_setup_dma_ops() will be called again but won't perform any
      operation as the DMA ops will still be set.
      
      We need to reset the DMA ops in arch_teardown_dma_ops() to fix this.
      However, we can't do so unconditionally, as then a new mapping would be
      created by arch_setup_dma_ops() when the device is reprobed, regardless
      of whether the device needs to share a mapping or not. We must thus keep
      track of whether arch_setup_dma_ops() created the mapping, and only in
      that case tear it down in arch_teardown_dma_ops().
      
      Keep track of that information in the dev_archdata structure. As the
      structure is embedded in all instances of struct device let's not grow
      it, but turn the existing dma_coherent bool field into a bitfield that
      can be used for other purposes.
      
      Fixes: 09515ef5 ("of/acpi: Configure dma operations at probe time for platform/amba/pci bus devices")
      Reviewed-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
      a93a121a
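      A sketch of the bookkeeping described above (other dev_archdata members
      are omitted; the flag name is an assumption based on the commit text):

        struct dev_archdata {
                /* ... existing members ... */
                unsigned int dma_coherent:1;    /* was a plain bool before */
                unsigned int dma_ops_setup:1;   /* set only when arch_setup_dma_ops()
                                                   created the mapping itself */
        };

        void arch_teardown_dma_ops(struct device *dev)
        {
                if (dev->archdata.dma_ops_setup)         /* leave third-party   */
                        arm_teardown_iommu_dma_ops(dev); /* mappings untouched  */
        }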
  16. 09 May 2017, 1 commit
  17. 02 May 2017, 1 commit
    • S
      xen/arm,arm64: fix xen_dma_ops after 815dd187 "Consolidate get_dma_ops..." · e0586326
      Authored by Stefano Stabellini
      The following commit:
      
        commit 815dd187
        Author: Bart Van Assche <bart.vanassche@sandisk.com>
        Date:   Fri Jan 20 13:04:04 2017 -0800
      
            treewide: Consolidate get_dma_ops() implementations
      
      rearranges get_dma_ops in a way that xen_dma_ops is no longer returned
      when running on Xen; dev->dma_ops is returned instead (see
      arch/arm/include/asm/dma-mapping.h:get_arch_dma_ops and
      include/linux/dma-mapping.h:get_dma_ops).
      
      Fix the problem by storing dev->dma_ops in dev_archdata, and setting
      dev->dma_ops to xen_dma_ops. This way, xen_dma_ops is returned naturally
      by get_dma_ops. The Xen code can retrieve the original dev->dma_ops from
      dev_archdata when needed. It also allows us to remove __generic_dma_ops
      from common headers.
      Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
      Tested-by: Julien Grall <julien.grall@arm.com>
      Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: <stable@vger.kernel.org>        [4.11+]
      CC: linux@armlinux.org.uk
      CC: catalin.marinas@arm.com
      CC: will.deacon@arm.com
      CC: boris.ostrovsky@oracle.com
      CC: jgross@suse.com
      CC: Julien Grall <julien.grall@arm.com>
      e0586326
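      A minimal sketch of the fix as it would sit at the end of ARM's
      arch_setup_dma_ops() (simplified; the dev_dma_ops field name is an
      assumption here, the commit text only says the original ops are stored
      in dev_archdata):

        #ifdef CONFIG_XEN
                if (xen_initial_domain()) {
                        dev->archdata.dev_dma_ops = dev->dma_ops; /* stash original */
                        dev->dma_ops = xen_dma_ops;   /* get_dma_ops() now returns
                                                         the Xen ops naturally */
                }
        #endif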
  18. 29 Apr 2017, 1 commit
  19. 26 Apr 2017, 3 commits
    • G
      ARM: 8672/1: mm: remove tasklist locking from update_sections_early() · 11ce4b33
      Authored by Grygorii Strashko
      The below backtrace can be observed on -rt kernel with
      CONFIG_DEBUG_MODULE_RONX (4.9 kernel CONFIG_DEBUG_RODATA) option enabled:
      
       BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:993
       in_atomic(): 1, irqs_disabled(): 128, pid: 14, name: migration/0
       1 lock held by migration/0/14:
        #0:  (tasklist_lock){+.+...}, at: [<c01183e8>] update_sections_early+0x24/0xdc
       irq event stamp: 38
       hardirqs last  enabled at (37): [<c08f6f7c>] _raw_spin_unlock_irq+0x24/0x68
       hardirqs last disabled at (38): [<c01fdfe8>] multi_cpu_stop+0xd8/0x138
       softirqs last  enabled at (0): [<c01303ec>] copy_process.part.5+0x238/0x1b64
       softirqs last disabled at (0): [<  (null)>]   (null)
       Preemption disabled at: [<c01fe244>] cpu_stopper_thread+0x80/0x10c
       CPU: 0 PID: 14 Comm: migration/0 Not tainted 4.9.21-rt16-02220-g49e319c #15
       Hardware name: Generic DRA74X (Flattened Device Tree)
       [<c0112014>] (unwind_backtrace) from [<c010d370>] (show_stack+0x10/0x14)
       [<c010d370>] (show_stack) from [<c049beb8>] (dump_stack+0xa8/0xd4)
       [<c049beb8>] (dump_stack) from [<c01631a0>] (___might_sleep+0x1bc/0x2ac)
       [<c01631a0>] (___might_sleep) from [<c08f7244>] (__rt_spin_lock+0x1c/0x30)
       [<c08f7244>] (__rt_spin_lock) from [<c08f77a4>] (rt_read_lock+0x54/0x68)
       [<c08f77a4>] (rt_read_lock) from [<c01183e8>] (update_sections_early+0x24/0xdc)
       [<c01183e8>] (update_sections_early) from [<c01184b0>] (__fix_kernmem_perms+0x10/0x1c)
       [<c01184b0>] (__fix_kernmem_perms) from [<c01fe010>] (multi_cpu_stop+0x100/0x138)
       [<c01fe010>] (multi_cpu_stop) from [<c01fe24c>] (cpu_stopper_thread+0x88/0x10c)
       [<c01fe24c>] (cpu_stopper_thread) from [<c015edc4>] (smpboot_thread_fn+0x174/0x31c)
       [<c015edc4>] (smpboot_thread_fn) from [<c015a988>] (kthread+0xf0/0x108)
       [<c015a988>] (kthread) from [<c0108818>] (ret_from_fork+0x14/0x3c)
       Freeing unused kernel memory: 1024K (c0d00000 - c0e00000)
      
      The stop_machine() is called with cpus = NULL from fix_kernmem_perms() and
      mark_rodata_ro() which means only one CPU will execute
      update_sections_early() while all other CPUs will spin and wait. Hence,
      it's safe to remove tasklist locking from update_sections_early(). As part
      of this change also mark functions which are local to this module as
      static.
      Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
      Acked-by: Laura Abbott <labbott@redhat.com>
      Acked-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
      11ce4b33
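      A minimal sketch of the calling context that makes the lock removal safe
      (function names are taken from the backtrace above; illustration only):

        /* cpus == NULL: __fix_kernmem_perms() runs on a single CPU while every
         * other CPU spins in its stopper thread, so nothing can fork or exit
         * while update_sections_early() walks the task list -- taking
         * tasklist_lock there is unnecessary (and sleeps on -rt). */
        stop_machine(__fix_kernmem_perms, NULL, NULL);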
    • V
      ARM: 8671/1: V7M: Preserve registers across switch from Thread to Handler mode · b70cd406
      Authored by Vladimir Murzin
      According to the ARMv7 ARM, when an exception is taken the contents of
      r0-r3 and r12 are unknown (see the ExceptionTaken() pseudocode). Even
      though existing implementations keep these registers unchanged, preserve
      them to stay in line with the architecture.
      Reported-by: Dobromir Stefanov <dobromir.stefanov@arm.com>
      Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
      b70cd406
    • V
      ARM: 8670/1: V7M: Do not corrupt vector table around v7m_invalidate_l1 call · 6d805949
      Authored by Vladimir Murzin
      We save/restore registers around v7m_invalidate_l1 to the address pointed
      to by r12, which is the vector table, so the first eight entries are
      overwritten with garbage. We already have the stack set up at that stage,
      so use it to save/restore the registers instead.
      
      Fixes: 6a8146f4 ("ARM: 8609/1: V7M: Add support for the Cortex-M7 processor")
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
      6d805949
  20. 25 Apr 2017, 1 commit
    • L
      ARM: Implement pci_remap_cfgspace() interface · b9cdbe6e
      Authored by Lorenzo Pieralisi
      The PCI bus specification (rev 3.0, 3.2.5 "Transaction Ordering and
      Posting") defines rules for PCI configuration space transactions ordering
      and posting, that state that configuration writes have to be non-posted
      transactions.
      
      The current ioremap interface on ARM provides mapping functions with
      "bufferable" write transactions (ie ioremap uses the MT_DEVICE memory
      type), aka posted writes, so PCI host controller drivers have no arch
      interface to remap PCI configuration space with memory attributes that
      comply with the PCI specifications for configuration space.
      
      Implement an ARM-specific pci_remap_cfgspace() interface that allows
      mapping PCI config memory regions with the MT_UNCACHED memory type (ie
      strongly ordered - non-posted writes), providing a remap function that
      complies with the PCI specifications for config space transactions.
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Russell King <linux@armlinux.org.uk>
      b9cdbe6e
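      A condensed sketch of the added interface, modelled on the existing ARM
      ioremap() helpers (treat the internals as illustrative rather than a
      quote of the patch):

        void __iomem *pci_remap_cfgspace(resource_size_t res_cookie, size_t size)
        {
                /* MT_UNCACHED => strongly ordered, hence non-posted writes, as
                 * required for PCI configuration transactions. */
                return arch_ioremap_caller(res_cookie, size, MT_UNCACHED,
                                           __builtin_return_address(0));
        }
        EXPORT_SYMBOL_GPL(pci_remap_cfgspace);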
  21. 20 Apr 2017, 1 commit
    • J
      ARM: 8667/3: Fix memory attribute inconsistencies when using fixmap · b089c31c
      Authored by Jon Medhurst
      To cope with the variety in ARM architectures and configurations, the
      pagetable attributes for kernel memory are generated at runtime to match
      the system the kernel finds itself on. This calculated value is stored
      in pgprot_kernel.
      
      However, when early fixmap support was added for ARM (commit
      a5f4c561) the attributes used for mappings were hard coded because
      pgprot_kernel is not set up early enough. Unfortunately, when fixmap is
      used after early boot this means the memory being mapped can have
      different attributes to existing mappings, potentially leading to
      unpredictable behaviour. A specific problem also exists because the hard
      coded values do not include the 'shareable' attribute, which means on
      systems where this matters (e.g. those with multiple CPU clusters) the
      cache contents for a memory location can become inconsistent between
      CPUs.
      
      To resolve these issues we change fixmap to use the same memory
      attributes (from pgprot_kernel) that the rest of the kernel uses. To
      enable this we need to refactor the initialisation code so
      build_mem_type_table() is called early enough. Note that this relies on
      early param parsing for memory type overrides passed via the kernel
      command line, so we need to make sure this call still happens after
      parse_early_param().
      
      [ardb: keep early_fixmap_init() before param parsing, for earlycon]
      
      Fixes: a5f4c561 ("ARM: 8415/1: early fixmap support for earlycon")
      Cc: <stable@vger.kernel.org> # v4.3+
      Tested-by: afzal mohammed <afzal.mohd.ma@gmail.com>
      Signed-off-by: Jon Medhurst <tixy@linaro.org>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
      b089c31c
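      A schematic of the initialisation ordering the fix relies on (function
      names come from the commit message; the placement inside setup_arch() is
      simplified and not the actual call sites):

        void __init setup_arch(char **cmdline_p)
        {
                /* ... */
                early_fixmap_init();     /* kept before param parsing, for earlycon */
                /* ... */
                parse_early_param();     /* may override memory types on the cmdline */
                /* ... */
                build_mem_type_table();  /* computes pgprot_kernel at runtime; now
                                            early enough that later fixmap mappings
                                            share the same attributes */
                /* ... */
        }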