1. 25 May 2018, 1 commit
    • Revert "mm/cma: manage the memory of the CMA area by using the ZONE_MOVABLE" · d883c6cf
      Committed by Joonsoo Kim
      This reverts the following commits that change CMA design in MM.
      
       3d2054ad ("ARM: CMA: avoid double mapping to the CMA area if CONFIG_HIGHMEM=y")
      
       1d47a3ec ("mm/cma: remove ALLOC_CMA")
      
       bad8c6c0 ("mm/cma: manage the memory of the CMA area by using the ZONE_MOVABLE")
      
      Ville reported the following error on i386.
      
        Inode-cache hash table entries: 65536 (order: 6, 262144 bytes)
        microcode: microcode updated early to revision 0x4, date = 2013-06-28
        Initializing CPU#0
        Initializing HighMem for node 0 (000377fe:00118000)
        Initializing Movable for node 0 (00000001:00118000)
        BUG: Bad page state in process swapper  pfn:377fe
        page:f53effc0 count:0 mapcount:-127 mapping:00000000 index:0x0
        flags: 0x80000000()
        raw: 80000000 00000000 00000000 ffffff80 00000000 00000100 00000200 00000001
        page dumped because: nonzero mapcount
        Modules linked in:
        CPU: 0 PID: 0 Comm: swapper Not tainted 4.17.0-rc5-elk+ #145
        Hardware name: Dell Inc. Latitude E5410/03VXMC, BIOS A15 07/11/2013
        Call Trace:
         dump_stack+0x60/0x96
         bad_page+0x9a/0x100
         free_pages_check_bad+0x3f/0x60
         free_pcppages_bulk+0x29d/0x5b0
         free_unref_page_commit+0x84/0xb0
         free_unref_page+0x3e/0x70
         __free_pages+0x1d/0x20
         free_highmem_page+0x19/0x40
         add_highpages_with_active_regions+0xab/0xeb
         set_highmem_pages_init+0x66/0x73
         mem_init+0x1b/0x1d7
         start_kernel+0x17a/0x363
         i386_start_kernel+0x95/0x99
         startup_32_smp+0x164/0x168
      
      The reason for this error is that the span of ZONE_MOVABLE is extended
      to the whole node span for future CMA initialization, and normal memory
      is wrongly freed here.  I submitted a fix and it seemed to work, but
      another problem then appeared.
      
      It is too late in the cycle to fix that second problem, so I have
      decided to revert the series.
      Reported-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
      Acked-by: Laura Abbott <labbott@redhat.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  2. 24 May 2018, 1 commit
  3. 23 May 2018, 4 commits
    • alpha: io: reorder barriers to guarantee writeX() and iowriteX() ordering #2 · 92d7223a
      Committed by Sinan Kaya
      memory-barriers.txt has been updated with the following requirement.
      
      "When using writel(), a prior wmb() is not needed to guarantee that the
      cache coherent memory writes have completed before writing to the MMIO
      region."
      
      The current writeX() and iowriteX() implementations on alpha do not
      satisfy this requirement because the barrier is placed after the
      register write.
      
      Move the mb() in the writeX() and iowriteX() functions so that the
      hardware observes memory changes before it performs the register
      operations.
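      As a rough illustration only (a minimal sketch, not the actual alpha
      implementation; the helper name sketch_writel() is made up), the
      required ordering looks like this:
      
        #include <linux/io.h>
        
        /*
         * Sketch of the ordering requirement: the full barrier must come
         * *before* the MMIO store so that prior cache-coherent memory
         * writes are visible to the device by the time it observes the
         * register write.
         */
        static inline void sketch_writel(u32 value, volatile void __iomem *addr)
        {
                mb();                      /* order prior memory writes... */
                __raw_writel(value, addr); /* ...before the MMIO store     */
        }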
      Signed-off-by: Sinan Kaya <okaya@codeaurora.org>
      Reported-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Matt Turner <mattst88@gmail.com>
    • alpha: simplify get_arch_dma_ops · f5e82fa2
      Committed by Christoph Hellwig
      Remove the dma_ops indirection.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Matt Turner <mattst88@gmail.com>
    • alpha: use dma_direct_ops for jensen · 6db61543
      Committed by Christoph Hellwig
      The generic dma_direct implementation does the same thing as the alpha
      pci-noop implementation, just with more bells and whistles.  And unlike
      the current code it at least has a theoretical chance to actually compile.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Matt Turner <mattst88@gmail.com>
    • arm64: fault: Don't leak data in ESR context for user fault on kernel VA · cc198460
      Committed by Peter Maydell
      If userspace faults on a kernel address, handing them the raw ESR
      value on the sigframe as part of the delivered signal can leak data
      useful to attackers who are using information about the underlying hardware
      fault type (e.g. translation vs permission) as a mechanism to defeat KASLR.
      
      However there are also legitimate uses for the information provided
      in the ESR -- notably the GCC and LLVM sanitizers use this to report
      whether wild pointer accesses by the application are reads or writes
      (since a wild write is a more serious bug than a wild read), so we
      don't want to drop the ESR information entirely.
      
      For faulting addresses in the kernel, sanitize the ESR. We choose
      to present userspace with the illusion that there is nothing mapped
      in the kernel's part of the address space at all, by reporting all
      faults as level 0 translation faults taken to EL1.
      
      These fields are safe to pass through to userspace as they depend
      only on the instruction that userspace used to provoke the fault:
       EC, IL (always)
       ISV, CM, WNR (for all data aborts)
      All the other fields in the ESR except DFSC are architecturally RES0
      for an L0 translation fault taken to EL1, so they can be zeroed out
      without confusing userspace.
      
      The illusion is not entirely perfect, as there is a tiny wrinkle
      where we will report an alignment fault that was not due to the memory
      type (for instance a LDREX to an unaligned address) as a translation
      fault, whereas if you do this on real unmapped memory the alignment
      fault takes precedence. This is not likely to trip anybody up in
      practice, as the only users we know of for the ESR information who
      care about the behaviour for kernel addresses only really want to
      know about the WnR bit.
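      As a rough sketch of that sanitisation (the helper name is
      hypothetical, it assumes the ESR_ELx_* bit definitions from
      asm/esr.h, and it glosses over the data-abort check guarding the
      ISV/CM/WNR bits):
      
        #include <asm/esr.h>
        
        /* Hypothetical helper, not the literal patch. */
        static unsigned int sanitise_esr_for_user(unsigned int esr)
        {
                /* EC and IL are always safe; ISV/CM/WNR are safe for data aborts. */
                const unsigned int keep = ESR_ELx_EC_MASK | ESR_ELx_IL |
                                          ESR_ELx_ISV | ESR_ELx_CM | ESR_ELx_WNR;
        
                /* Everything else is zeroed and DFSC is forced to
                 * "translation fault, level 0". */
                return (esr & keep) | ESR_ELx_FSC_FAULT;
        }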
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  4. 22 May 2018, 3 commits
    • powerpc/64s: Add support for a store forwarding barrier at kernel entry/exit · a048a07d
      Committed by Nicholas Piggin
      On some CPUs we can prevent a vulnerability related to store-to-load
      forwarding by preventing store forwarding between privilege domains,
      by inserting a barrier in kernel entry and exit paths.
      
      This is known to be the case on at least Power7, Power8 and Power9
      powerpc CPUs.
      
      Barriers must generally be inserted before the first load after moving
      to a higher privilege level, and after the last store before moving to
      a lower privilege level.  Both HV and PR privilege transitions must be
      protected.
      
      Barriers are added as patch sections, with all kernel/hypervisor entry
      points patched, and the exit points to lower privilege levels patched
      similarly to the RFI flush patching.
      
      Firmware advertisement is not implemented yet, so CPU flush types
      are hard coded.
      
      Thanks to Michal Suchánek for bug fixes and review.
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Signed-off-by: Michal Suchánek <msuchanek@suse.de>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • arm64: export tishift functions to modules · 255845fc
      Committed by Jason A. Donenfeld
      Otherwise modules that use these arithmetic operations will fail to
      link.  We accomplish this with the usual EXPORT_SYMBOL, which on most
      architectures goes in the .S file, but the ARM64 maintainers prefer
      that it instead go into arm64ksyms.
      
      While we're at it, we also fix this up to use SPDX, and I personally
      choose to relicense this as GPL2||BSD so that these symbols don't need
      to be EXPORT_SYMBOL_GPL.  That way all modules can use the routines,
      since these are important general-purpose compiler-generated function
      calls.
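      A sketch of the export pattern (the prototypes here are only
      illustrative; __ashlti3/__ashrti3/__lshrti3 are the standard
      libgcc-style 128-bit shift helpers implemented in tishift.S):
      
        #include <linux/export.h>
        
        /* Illustrative declarations for the assembly helpers. */
        extern __int128 __ashlti3(__int128 a, int b);
        extern __int128 __ashrti3(__int128 a, int b);
        extern __int128 __lshrti3(__int128 a, int b);
        
        /* Plain EXPORT_SYMBOL (not _GPL), so any module can link against them. */
        EXPORT_SYMBOL(__ashlti3);
        EXPORT_SYMBOL(__ashrti3);
        EXPORT_SYMBOL(__lshrti3);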
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
      Reported-by: PaX Team <pageexec@freemail.hu>
      Cc: stable@vger.kernel.org
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: lse: Add early clobbers to some input/output asm operands · 32c3fa7c
      Committed by Will Deacon
      For LSE atomics that read and write a register operand, we need to
      ensure that these operands are annotated as "early clobber" if the
      register is written before all of the input operands have been consumed.
      Failure to do so can result in the compiler allocating the same register
      to both operands, leading to splats such as:
      
       Unable to handle kernel paging request at virtual address 11111122222221
       [...]
       x1 : 1111111122222222 x0 : 1111111122222221
       Process swapper/0 (pid: 1, stack limit = 0x000000008209f908)
       Call trace:
        test_atomic64+0x1360/0x155c
      
      where x0 has been allocated as both the value to be stored and also the
      atomic_t pointer.
      
      This patch adds the missing clobbers.
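      The hazard can be illustrated with a simplified, non-atomic sketch
      (deliberately not the kernel's LSE code; the helper name is made up).
      The first instruction writes an output register before the later
      instructions have consumed the pointer input, so the outputs must
      carry the "&" early-clobber marker:
      
        /* Without the "&" markers, the compiler may place %[old] or %[tmp]
         * in the same register as %[p] or %[add], corrupting them before
         * the later instructions read them. */
        static inline long fetch_and_add_sketch(long addend, long *p)
        {
                long old, tmp;
        
                asm volatile(
                "       ldr     %[old], [%[p]]\n"        /* writes %[old] early    */
                "       add     %[tmp], %[old], %[add]\n"
                "       str     %[tmp], [%[p]]\n"        /* %[p] still needed here */
                : [old] "=&r" (old), [tmp] "=&r" (tmp)   /* "&" = early clobber    */
                : [p] "r" (p), [add] "r" (addend)
                : "memory");
        
                return old;
        }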
      
      Cc: <stable@vger.kernel.org>
      Cc: Dave Martin <dave.martin@arm.com>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Reported-by: Mark Salter <msalter@redhat.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  5. 20 May 2018, 1 commit
  6. 19 May 2018, 11 commits
  7. 18 May 2018, 6 commits
  8. 17 May 2018, 13 commits