1. 02 Aug 2018, 3 commits
  2. 16 Jun 2018, 1 commit
  3. 15 Jun 2018, 1 commit
  4. 14 Jun 2018, 1 commit
    • Kbuild: rename CC_STACKPROTECTOR[_STRONG] config variables · 050e9baa
      Committed by Linus Torvalds
      The changes to automatically test for working stack protector compiler
      support in the Kconfig files removed the special STACKPROTECTOR_AUTO
      option that picked the strongest stack protector that the compiler
      supported.
      
      That was all a nice cleanup - it makes no sense to have the AUTO case
      now that the Kconfig phase can just determine the compiler support
      directly.
      
      HOWEVER.
      
      It also meant that doing "make oldconfig" would now _disable_ the strong
      stackprotector if you had AUTO enabled, because in a legacy config file,
      the sane stack protector configuration would look like
      
        CONFIG_HAVE_CC_STACKPROTECTOR=y
        # CONFIG_CC_STACKPROTECTOR_NONE is not set
        # CONFIG_CC_STACKPROTECTOR_REGULAR is not set
        # CONFIG_CC_STACKPROTECTOR_STRONG is not set
        CONFIG_CC_STACKPROTECTOR_AUTO=y
      
      and when you ran this through "make oldconfig" with the Kbuild changes,
      it would ask you about the regular CONFIG_CC_STACKPROTECTOR (that had
      been renamed from CONFIG_CC_STACKPROTECTOR_REGULAR to just
      CONFIG_CC_STACKPROTECTOR), but it would think that the STRONG version
      used to be disabled (because it was really enabled by AUTO), and would
      disable it in the new config, resulting in:
      
        CONFIG_HAVE_CC_STACKPROTECTOR=y
        CONFIG_CC_HAS_STACKPROTECTOR_NONE=y
        CONFIG_CC_STACKPROTECTOR=y
        # CONFIG_CC_STACKPROTECTOR_STRONG is not set
        CONFIG_CC_HAS_SANE_STACKPROTECTOR=y
      
      That's dangerously subtle - people could suddenly find themselves with
      the weaker stack protector setup without even realizing.
      
      The solution here is to rename not just the old REGULAR stack
      protector option, but also the strong one.  This does that by just
      removing the CC_ prefix entirely for the user choices, because it
      really is not about the compiler support (the compiler support now
      instead automatically impacts _visibility_ of the options to users).
      
      This results in "make oldconfig" actually asking the user for their
      choice, so that we don't have any silent subtle security model changes.
      The end result would generally look like this:
      
        CONFIG_HAVE_CC_STACKPROTECTOR=y
        CONFIG_CC_HAS_STACKPROTECTOR_NONE=y
        CONFIG_STACKPROTECTOR=y
        CONFIG_STACKPROTECTOR_STRONG=y
        CONFIG_CC_HAS_SANE_STACKPROTECTOR=y
      
      where the "CC_" versions really are about internal compiler
      infrastructure, not the user selections.
      Acked-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
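      A minimal sketch of how build-time code can key off the renamed
      options after this change (the STACKPROTECTOR_CFLAGS macro and this
      header are hypothetical illustrations, not from the commit):

        /* The user-facing choices lost the CC_ prefix; the CC_HAS_*
         * symbols remain internal compiler probes set at Kconfig time. */
        #ifdef CONFIG_STACKPROTECTOR_STRONG
        #define STACKPROTECTOR_CFLAGS "-fstack-protector-strong"
        #elif defined(CONFIG_STACKPROTECTOR)
        #define STACKPROTECTOR_CFLAGS "-fstack-protector"
        #else
        #define STACKPROTECTOR_CFLAGS ""
        #endif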
  5. 22 May 2018, 3 commits
  6. 16 May 2018, 1 commit
  7. 08 May 2018, 2 commits
  8. 07 May 2018, 1 commit
    • PCI: remove PCI_DMA_BUS_IS_PHYS · 325ef185
      Committed by Christoph Hellwig
      This was used by the ide, scsi and networking code in the past to
      determine if they should bounce payloads.  Now that the dma mapping
      code always has to support dma to all physical memory (thanks to
      swiotlb on non-iommu systems) there is no need for this crude hack
      any more.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Acked-by: Palmer Dabbelt <palmer@sifive.com> (for riscv)
      Reviewed-by: Jens Axboe <axboe@kernel.dk>
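      For context, a reconstructed sketch of the historical driver pattern
      this removes (assembled for illustration, not quoted from any one
      driver):

        #include <linux/blkdev.h>
        #include <linux/pci.h>

        static void foo_setup_bounce(struct request_queue *q)
        {
                /* Pre-4.17: bounce buffering was gated on whether the
                 * bus was limited to physical addressing. */
                u64 limit = PCI_DMA_BUS_IS_PHYS ? BLK_BOUNCE_HIGH
                                                : BLK_BOUNCE_ANY;
                blk_queue_bounce_limit(q, limit);
        }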
  9. 25 Apr 2018, 3 commits
    • signal/xtensa: Use force_sig_fault where appropriate · 91810105
      Committed by Eric W. Biederman
      Filling in struct siginfo before calling force_sig_info is a
      tedious and error prone process, where once in a great while the
      wrong fields are filled out, and siginfo has been inconsistently
      cleared.

      Simplify this process by using the helper force_sig_fault, which
      takes as parameters all of the information it needs, ensures all
      of the fiddly bits of filling in struct siginfo are done properly,
      and then calls force_sig_info.

      In short, about a 5 line reduction in code for every time
      force_sig_info is called, which makes the calling function clearer.
      
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: linux-xtensa@linux-xtensa.org
      Acked-by: Max Filippov <jcmvbkbc@gmail.com>
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
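      A sketch of the shape of such a conversion (fault_addr is a
      hypothetical local standing in for the architecture's fault
      address):

        /* Before: hand-rolled siginfo, easy to get subtly wrong. */
        struct siginfo info;
        clear_siginfo(&info);
        info.si_signo = SIGSEGV;
        info.si_errno = 0;
        info.si_code  = SEGV_MAPERR;
        info.si_addr  = (void __user *)fault_addr;
        force_sig_info(SIGSEGV, &info, current);

        /* After: the helper fills in the fiddly bits itself. */
        force_sig_fault(SIGSEGV, SEGV_MAPERR,
                        (void __user *)fault_addr, current);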
    • signal/xtensa: Consistently use SIGBUS in do_unaligned_user · 7de712cc
      Committed by Eric W. Biederman
      While working on changing this code to use force_sig_fault I
      discovered that do_unaligned_user sets si_signo to SIGBUS but
      passes SIGSEGV to force_sig_info.  Which is just b0rked.
      
      The code is reporting a SIGBUS error so replace the SIGSEGV with SIGBUS.
      
      Cc: Chris Zankel <chris@zankel.net>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: linux-xtensa@linux-xtensa.org
      Cc: stable@vger.kernel.org
      Acked-by: Max Filippov <jcmvbkbc@gmail.com>
      Fixes: 5a0015d6 ("[PATCH] xtensa: Architecture support for Tensilica Xtensa Part 3")
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
    • signal: Ensure every siginfo we send has all bits initialized · 3eb0f519
      Committed by Eric W. Biederman
      Call clear_siginfo to ensure every stack allocated siginfo is properly
      initialized before being passed to the signal sending functions.
      
      Note: It is not safe to depend on C initializers to initialize struct
      siginfo on the stack because C is allowed to skip holes when
      initializing a structure.
      
      The initialization of struct siginfo in tracehook_report_syscall_exit
      was moved from the helper user_single_step_siginfo into
      tracehook_report_syscall_exit itself, to make it clear that the local
      variable siginfo gets fully initialized.
      
      In a few cases the scope of struct siginfo has been reduced to make
      it clear that siginfo is not used on other paths in the function in
      which it is declared.

      Instances of using memset to initialize siginfo have been replaced
      with calls to clear_siginfo for clarity.
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
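      A minimal sketch of why the helper matters (SIGTRAP/TRAP_BRKPT
      chosen only for illustration):

        /* C initializers may leave padding bytes undefined, so a stack
         * siginfo initialized with "= { }" could still leak kernel
         * stack contents to userspace through the holes. */
        struct siginfo info;

        clear_siginfo(&info);      /* zeroes the padding as well */
        info.si_signo = SIGTRAP;
        info.si_code  = TRAP_BRKPT;
        force_sig_info(SIGTRAP, &info, current);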
  10. 20 Apr 2018, 1 commit
    • y2038: xtensa: Extend sysvipc data structures · b497ef57
      Committed by Arnd Bergmann
      xtensa uses a nonstandard variation of the generic sysvipc
      data structures, intended to have the padding moved around
      so it can deal with big-endian 32-bit user space that has
      64-bit time_t.

      xtensa tries hard to define the structures so they work
      in both big-endian and little-endian systems with padding
      on the right side.  However, this only succeeded for two of the
      three structures, and struct shmid64_ds ended up being defined in
      two identical copies, of which the big-endian one is wrong.

      This just takes the same approach here that we have for
      the asm-generic headers and adds separate 32-bit fields for the
      upper halves of the timestamps, to let libc deal with the mess
      in user space.
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
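      An abridged sketch of the resulting layout, modeled on the
      asm-generic sysvipc headers (trailing fields elided):

        struct shmid64_ds {
                struct ipc64_perm shm_perm;
                size_t            shm_segsz;
                unsigned long     shm_atime;       /* low 32 bits */
                unsigned long     shm_atime_high;  /* high 32 bits */
                unsigned long     shm_dtime;
                unsigned long     shm_dtime_high;
                unsigned long     shm_ctime;
                unsigned long     shm_ctime_high;
                /* ... pids, nattch and reserved fields ... */
        };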
  11. 19 Apr 2018, 1 commit
    • time: Add an asm-generic/compat.h file · 2b5a9a37
      Committed by Arnd Bergmann
      We have a couple of files that try to include asm/compat.h on
      architectures where this is available. Those should generally use the
      higher-level linux/compat.h file, but that in turn fails to include
      asm/compat.h when CONFIG_COMPAT is disabled, unless we can provide
      that header on all architectures.
      
      This adds an asm/compat.h for all remaining architectures to
      simplify the dependencies.
      
      Architectures that are getting removed in linux-4.17 are not changed
      here, to avoid needless conflicts with the removal patches. Those
      architectures are broken by this patch, but we have already shown
      that they have no users.
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
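      The upshot, sketched for a hypothetical architecture with no special
      compat needs (whether this is spelled as an explicit include or via
      the Kbuild generic-y mechanism is a detail):

        /* arch/foo/include/asm/compat.h */
        #include <asm-generic/compat.h>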
  12. 12 Apr 2018, 1 commit
    • mm: introduce MAP_FIXED_NOREPLACE · a4ff8e86
      Committed by Michal Hocko
      Patch series "mm: introduce MAP_FIXED_NOREPLACE", v2.
      
      This started as a follow-up discussion [3][4] about the runtime
      failure caused by the hardening patch [5], which removes MAP_FIXED
      from the elf loader because MAP_FIXED is inherently dangerous: it
      might silently clobber an existing underlying mapping (e.g.  the
      stack).  The reason for the failure is that some architectures
      enforce an alignment for the given address hint even without
      MAP_FIXED (e.g.  for shared or file backed mappings).

      One way around this would be excluding those archs which do
      alignment tricks from the hardening [6].  The patch is really
      trivial but it has been objected to, rightfully so, that this
      screams for a more generic solution.  We basically want a
      non-destructive MAP_FIXED.
      
      The first patch introduces MAP_FIXED_NOREPLACE, which enforces the
      given address but, unlike MAP_FIXED, fails with EEXIST if the given
      range conflicts with an existing one.  The flag is introduced as a
      completely new one rather than a MAP_FIXED extension because of
      backward compatibility.  We really want never-clobber semantics
      even on older kernels which do not recognize the flag.
      Unfortunately mmap sucks wrt flags evaluation because we do not
      EINVAL on unknown flags.  On those kernels we would simply fall
      back to the traditional hint based semantics, so the caller can
      still get a different address (which sucks) but at least not
      silently corrupt an existing mapping.  I do not see a good way
      around that, short of not exposing the new semantics to userspace
      at all.
      
      It seems there are users who would like to have something like
      that.  Jemalloc has been mentioned by Michael Ellerman [7].
      
      Florian Weimer has mentioned the following:
      : glibc ld.so currently maps DSOs without hints.  This means that the kernel
      : will map right next to each other, and the offsets between them a completely
      : predictable.  We would like to change that and supply a random address in a
      : window of the address space.  If there is a conflict, we do not want the
      : kernel to pick a non-random address. Instead, we would try again with a
      : random address.
      
      John Hubbard has mentioned a CUDA example:
      : a) Searches /proc/<pid>/maps for a "suitable" region of available
      : VA space.  "Suitable" generally means it has to have a base address
      : within a certain limited range (a particular device model might
      : have odd limitations, for example), it has to be large enough, and
      : alignment has to be large enough (again, various devices may have
      : constraints that lead us to do this).
      :
      : This is of course subject to races with other threads in the process.
      :
      : Let's say it finds a region starting at va.
      :
      : b) Next it does:
      :     p = mmap(va, ...)
      :
      : *without* setting MAP_FIXED, of course (so va is just a hint), to
      : attempt to safely reserve that region. If p != va, then in most cases,
      : this is a failure (almost certainly due to another thread getting a
      : mapping from that region before we did), and so this layer now has to
      : call munmap(), before returning a "failure: retry" to upper layers.
      :
      :     IMPROVEMENT: --> if instead, we could call this:
      :
      :             p = mmap(va, ... MAP_FIXED_NOREPLACE ...)
      :
      :         , then we could skip the munmap() call upon failure. This
      :         is a small thing, but it is useful here. (Thanks to Piotr
      :         Jaroszynski and Mark Hairgrove for helping me get that detail
      :         exactly right, btw.)
      :
      : c) After that, CUDA suballocates from p, via:
      :
      :      q = mmap(sub_region_start, ... MAP_FIXED ...)
      :
      : Interestingly enough, "freeing" is also done via MAP_FIXED, and
      : setting PROT_NONE to the subregion. Anyway, I just included (c) for
      : general interest.
      
      Atomic address range probing in the multithreaded programs in general
      sounds like an interesting thing to me.
      
      The second patch simply replaces the MAP_FIXED use in the elf
      loader with MAP_FIXED_NOREPLACE.  I believe other places which
      rely on MAP_FIXED should follow.  Actually, real MAP_FIXED usages
      should be documented properly and they should be more of an
      exception.
      
      [1] http://lkml.kernel.org/r/20171116101900.13621-1-mhocko@kernel.org
      [2] http://lkml.kernel.org/r/20171129144219.22867-1-mhocko@kernel.org
      [3] http://lkml.kernel.org/r/20171107162217.382cd754@canb.auug.org.au
      [4] http://lkml.kernel.org/r/1510048229.12079.7.camel@abdul.in.ibm.com
      [5] http://lkml.kernel.org/r/20171023082608.6167-1-mhocko@kernel.org
      [6] http://lkml.kernel.org/r/20171113094203.aofz2e7kueitk55y@dhcp22.suse.cz
      [7] http://lkml.kernel.org/r/87efp1w7vy.fsf@concordia.ellerman.id.au
      
      This patch (of 2):
      
      MAP_FIXED is used quite often to enforce mapping at a particular
      range.  The main problem of this flag is, however, that it is
      inherently dangerous because it unmaps existing mappings covered
      by the requested range.  This can cause silent memory corruptions,
      some of them even with serious security implications.  While the
      current semantics might be really desirable in many cases, there
      are others which would want to enforce the given range but rather
      see a failure than a silent memory corruption on a clashing range.
      Please note that there is no guarantee that a given range is
      obeyed by the mmap even when it is free - e.g.  arch specific code
      is allowed to apply an alignment.
      
      Introduce a new MAP_FIXED_NOREPLACE flag for mmap to achieve this
      behavior.  It has the same semantics as MAP_FIXED wrt.  the given
      address request, with a single exception: it fails with EEXIST if
      the requested address is already covered by an existing mapping.
      We still rely on get_unmapped_area to handle all the arch specific
      MAP_FIXED treatment and check for a conflicting vma after it
      returns.
      
      The flag is introduced as a completely new one rather than a
      MAP_FIXED extension because of backward compatibility.  We really
      want never-clobber semantics even on older kernels which do not
      recognize the flag.  Unfortunately mmap sucks wrt.  flags
      evaluation because we do not EINVAL on unknown flags.  On those
      kernels we would simply fall back to the traditional hint based
      semantics, so the caller can still get a different address (which
      sucks) but at least not silently corrupt an existing mapping.  I
      do not see a good way around that.
      
      [mpe@ellerman.id.au: fix whitespace]
      [fail on clashing range with EEXIST as per Florian Weimer]
      [set MAP_FIXED before round_hint_to_min as per Khalid Aziz]
      Link: http://lkml.kernel.org/r/20171213092550.2774-2-mhocko@kernel.org
      Reviewed-by: Khalid Aziz <khalid.aziz@oracle.com>
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Khalid Aziz <khalid.aziz@oracle.com>
      Cc: Russell King - ARM Linux <linux@armlinux.org.uk>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Florian Weimer <fweimer@redhat.com>
      Cc: John Hubbard <jhubbard@nvidia.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Abdul Haleem <abdhalee@linux.vnet.ibm.com>
      Cc: Joel Stanley <joel@jms.id.au>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Jason Evans <jasone@google.com>
      Cc: David Goldblatt <davidtgoldblatt@gmail.com>
      Cc: Edward Tomasz Napierała <trasz@FreeBSD.org>
      Cc: Anshuman Khandual <khandual@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
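      A userspace sketch of the atomic probing pattern described above
      (requires kernel and uapi headers >= 4.17; va and len are the
      caller's hypothetical candidates):

        #define _GNU_SOURCE
        #include <sys/mman.h>

        static void *try_reserve(void *va, size_t len)
        {
                void *p = mmap(va, len, PROT_NONE,
                               MAP_PRIVATE | MAP_ANONYMOUS |
                               MAP_FIXED_NOREPLACE, -1, 0);
                if (p == MAP_FAILED)
                        return NULL;    /* EEXIST: range already taken */
                if (p != va) {          /* old kernel ignored the flag */
                        munmap(p, len);
                        return NULL;
                }
                return p;
        }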
  13. 06 Apr 2018, 1 commit
    • mm: fix races between swapoff and flush dcache · cb9f753a
      Committed by Huang Ying
      Thanks to commit 4b3ef9da ("mm/swap: split swap cache into 64MB
      trunks"), after swapoff the address_space associated with the swap
      device will be freed.  So page_mapping() users which may touch the
      address_space need some kind of mechanism to prevent the
      address_space from being freed while they access it.
      
      The dcache flushing functions (flush_dcache_page(), etc) in
      architecture specific code may access the address_space of the swap
      device for anonymous pages in swap cache via the page_mapping()
      function.  But in some cases there are no mechanisms to prevent the
      swap device from being swapped off, for example,
      
        CPU1					CPU2
        __get_user_pages()			swapoff()
          flush_dcache_page()
            mapping = page_mapping()
              ...				  exit_swap_address_space()
              ...				    kvfree(spaces)
              mapping_mapped(mapping)
      
      The address space may be accessed after being freed.
      
      But per cachetlb.txt and Russell King, flush_dcache_page() only
      cares about file cache pages; for anonymous pages, flush_anon_page()
      should be used.  The implementation of flush_dcache_page() in all
      architectures follows this too.  They check whether page_mapping()
      is NULL and whether mapping_mapped() is true to determine whether
      to flush the dcache immediately, and they use the interval tree
      (mapping->i_mmap) to find all user space mappings.  But
      mapping_mapped() and mapping->i_mmap aren't used by anonymous pages
      in swap cache at all.
      
      So, to fix the race between swapoff and flush dcache, a new
      function, page_mapping_file(), is added to return the address_space
      for file cache pages and NULL otherwise.  All page_mapping() calls
      in the dcache flushing functions are replaced with
      page_mapping_file().
      
      [akpm@linux-foundation.org: simplify page_mapping_file(), per Mike]
      Link: http://lkml.kernel.org/r/20180305083634.15174-1-ying.huang@intel.com
      Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
      Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Chen Liqin <liqin.linux@gmail.com>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
      Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Ley Foon Tan <lftan@altera.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
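      The helper is essentially a swap-cache-aware filter in front of
      page_mapping() (a sketch consistent with the description above):

        static inline struct address_space *
        page_mapping_file(struct page *page)
        {
                /* Anonymous pages in swap cache have no file mapping to
                 * flush, and their address_space may be freed by a
                 * concurrent swapoff, so report NULL for them. */
                if (unlikely(PageSwapCache(page)))
                        return NULL;
                return page_mapping(page);
        }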
  14. 03 Apr 2018, 1 commit
  15. 18 Mar 2018, 1 commit
    • block: Move SECTOR_SIZE and SECTOR_SHIFT definitions into <linux/blkdev.h> · 233bde21
      Committed by Bart Van Assche
      It often happens while I'm preparing a patch for a block driver
      that I wonder: is a definition of SECTOR_SIZE and/or SECTOR_SHIFT
      available to this driver? Do I have to introduce definitions of
      these constants before I can use them? To avoid this confusion,
      move the existing definitions of SECTOR_SIZE and SECTOR_SHIFT into the
      <linux/blkdev.h> header file such that these become available for all
      block drivers.  Make the SECTOR_SIZE definition in the uapi
      msdos_fs.h header file conditional so that including that header
      file after <linux/blkdev.h> does not cause the compiler to complain
      about a SECTOR_SIZE redefinition.
      
      Note: the SECTOR_SIZE / SECTOR_SHIFT / SECTOR_BITS definitions have
      not been removed from uapi header files nor from NAND drivers in
      which these constants are used for another purpose than converting
      block layer offsets and sizes into a number of sectors.
      
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Mike Snitzer <snitzer@redhat.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Nitin Gupta <ngupta@vflare.org>
      Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
      Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
      Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
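      With the definitions centralized, any block driver can convert
      between bytes and 512-byte sectors directly (bytes_to_sectors is a
      hypothetical helper for illustration):

        #include <linux/blkdev.h>

        static inline sector_t bytes_to_sectors(u64 bytes)
        {
                return bytes >> SECTOR_SHIFT;   /* SECTOR_SHIFT == 9 */
        }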
  16. 01 Mar 2018, 1 commit
  17. 22 Feb 2018, 1 commit
  18. 17 Feb 2018, 1 commit
  19. 16 Feb 2018, 1 commit
    • xtensa: fix high memory/reserved memory collision · 6ac5a11d
      Committed by Max Filippov
      Xtensa memory initialization code frees high memory pages without
      checking whether they are in the reserved memory regions or not.
      That results in an invalid totalram_pages value and duplicate page
      usage by CMA and highmem.  It produces a bunch of BUGs at startup
      that look like this:
      
      BUG: Bad page state in process swapper  pfn:70800
      page:be60c000 count:0 mapcount:-127 mapping:  (null) index:0x1
      flags: 0x80000000()
      raw: 80000000 00000000 00000001 ffffff80 00000000 be60c014 be60c014 0000000a
      page dumped because: nonzero mapcount
      Modules linked in:
      CPU: 0 PID: 1 Comm: swapper Tainted: G    B            4.16.0-rc1-00015-g7928b2cb-dirty #23
      Stack:
       bd839d33 00000000 00000018 ba97b64c a106578c bd839d70 be60c000 00000000
       a1378054 bd86a000 00000003 ba97b64c a1066166 bd839da0 be60c000 ffe00000
       a1066b58 bd839dc0 be504000 00000000 000002f4 bd838000 00000000 0000001e
      Call Trace:
       [<a1065734>] bad_page+0xac/0xd0
       [<a106578c>] free_pages_check_bad+0x34/0x4c
       [<a1066166>] __free_pages_ok+0xae/0x14c
       [<a1066b58>] __free_pages+0x30/0x64
       [<a1365de5>] init_cma_reserved_pageblock+0x35/0x44
       [<a13682dc>] cma_init_reserved_areas+0xf4/0x148
       [<a10034b8>] do_one_initcall+0x80/0xf8
       [<a1361c16>] kernel_init_freeable+0xda/0x13c
       [<a125b59d>] kernel_init+0x9/0xd0
       [<a1004304>] ret_from_kernel_thread+0xc/0x18
      
      Only free high memory pages that are not reserved.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
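      A minimal sketch of the fix's idea (the actual code walks memblock
      regions; the pfn bounds here are illustrative):

        /* Only hand truly free high pages to the page allocator,
         * skipping anything memblock still has reserved (e.g. CMA). */
        for (pfn = highstart_pfn; pfn < highend_pfn; pfn++) {
                if (memblock_is_reserved(PFN_PHYS(pfn)))
                        continue;
                free_highmem_page(pfn_to_page(pfn));
        }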
  20. 12 Feb 2018, 2 commits
    • unify {de,}mangle_poll(), get rid of kernel-side POLL... · 7a163b21
      Committed by Al Viro
      except, again, POLLFREE and POLL_BUSY_LOOP.
      
      With this, we finally get to the promised end result:
      
       - POLL{IN,OUT,...} are plain integers and *not* in __poll_t, so any
         stray instances of ->poll() still using those will be caught by
         sparse.
      
       - eventpoll.c and select.c warning-free wrt __poll_t
      
       - no more kernel-side definitions of POLL... - userland ones are
         visible through the entire kernel (and used pretty much only for
         mangle/demangle)
      
       - same behavior as after the first series (i.e. sparc et al.
         epoll(2) working correctly).
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
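      A sketch of the boundary conversion this leaves in place (in-kernel
      code uses __poll_t/EPOLL* bits; the userland POLL* layout appears
      only when talking to userspace):

        __poll_t mask = EPOLLIN | EPOLLRDNORM;   /* in-kernel value  */
        u16 uevents   = mangle_poll(mask);       /* to userland bits */
        __poll_t back = demangle_poll(uevents);  /* and back again   */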
    • xtensa: fix build with KASAN · f8d0cbf2
      Committed by Max Filippov
      Commit 917538e2 ("kasan: clean up KASAN_SHADOW_SCALE_SHIFT usage")
      removed the KASAN_SHADOW_SCALE_SHIFT definition from
      include/linux/kasan.h and added it to architecture-specific
      headers, except for xtensa.  This broke the xtensa build with
      KASAN enabled.  Define KASAN_SHADOW_SCALE_SHIFT in
      arch/xtensa/include/asm/kasan.h.

      Reported-by: kbuild test robot <fengguang.wu@intel.com>
      Fixes: 917538e2 ("kasan: clean up KASAN_SHADOW_SCALE_SHIFT usage")
      Acked-by: Andrey Konovalov <andreyknvl@google.com>
      Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
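      Presumably the fix is the one-line definition matching the generic
      8-byte KASAN shadow granule used by other architectures:

        /* arch/xtensa/include/asm/kasan.h */
        #define KASAN_SHADOW_SCALE_SHIFT 3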
  21. 31 Jan 2018, 1 commit
  22. 24 Jan 2018, 1 commit
  23. 23 Jan 2018, 1 commit
    • signal/ptrace: Add force_sig_ptrace_errno_trap and use it where needed · f71dd7dc
      Committed by Eric W. Biederman
      There are so many places that build struct siginfo by hand that at
      least one of them is bound to get it wrong.  A handful of cases in
      the kernel arguably did just that when using the errno field of
      siginfo to pass non-errno values to userspace.  The usage is
      limited to a single si_code, so at least it does not mess up
      anything else.
      
      Encapsulate this questionable pattern in a helper function so
      that the userspace ABI is preserved.
      
      Update all of the places that use this pattern to use the new helper
      function.
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
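      The call site shape this yields (a sketch; code and address are
      hypothetical locals for whatever the tracer wants to report):

        /* Encapsulates the SIGTRAP-with-si_errno pattern so every
         * caller builds the exact same, ABI-compatible siginfo. */
        force_sig_ptrace_errno_trap(code, (void __user *)address);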
  24. 10 Jan 2018, 2 commits
  25. 06 Jan 2018, 1 commit
    • xtensa: fix futex_atomic_cmpxchg_inatomic · ca474809
      Committed by Max Filippov
      Return 0 if the operation was successful, not the userspace memory
      value. Check that userspace value equals passed oldval, not itself.
      Don't update *uval if the value wasn't read from userspace memory.
      
      This fixes process hang due to infinite loop in futex_lock_pi.
      It also fixes a bunch of glibc tests nptl/tst-mutexpi*.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
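      The contract after the fix, sketched without the atomic assembly
      (the real implementation performs the compare-exchange in one
      atomic user-memory sequence; this only shows the calling
      convention):

        static int futex_atomic_cmpxchg_inatomic(u32 *uval,
                        u32 __user *uaddr, u32 oldval, u32 newval)
        {
                u32 cur;

                if (get_user(cur, uaddr))       /* read failed:      */
                        return -EFAULT;         /* don't touch *uval */
                if (cur == oldval && put_user(newval, uaddr))
                        return -EFAULT;
                *uval = cur;    /* caller compares cur with oldval */
                return 0;       /* 0 == operation was successful   */
        }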
  26. 04 Jan 2018, 1 commit
  27. 02 Jan 2018, 1 commit
    • xtensa: shut up gcc-8 warnings · 32bb954d
      Committed by Arnd Bergmann
      Many uses of strncpy() on xtensa cause a warning like
      
      arch/xtensa/include/asm/string.h:56:42: warning: array subscript is above array bounds [-Warray-bounds]
         : "0" (__dest), "1" (__src), "r" (__src+__n)
      
      This avoids the warning by turning the pointer arithmetic into an
      integer operation that does not get checked the same way.
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
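      An illustration of the workaround pattern (end_ptr is a hypothetical
      helper, not the commit's actual diff): routing the arithmetic
      through an integer hides the pointer provenance from gcc-8's
      -Warray-bounds tracking.

        static inline const char *end_ptr(const char *src,
                                          unsigned long n)
        {
                /* instead of src + n, which gcc-8 flags */
                return (const char *)((unsigned long)src + n);
        }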
  28. 18 Dec 2017, 2 commits
  29. 17 Dec 2017, 2 commits
    • xtensa: use __memset in __xtensa_clear_user · e0baa014
      Committed by Max Filippov
      memset on xtensa is capable of accessing user memory, but KASAN
      checks whether memset is used that way and reports it as an error:
      
       ==================================================================
       BUG: KASAN: user-memory-access in padzero+0x4d/0x58
       Write of size 519 at addr 0049ddf9 by task init/1
      
       Call Trace:
        [<b0189978>] kasan_report+0x160/0x238
        [<b0188818>] check_memory_region+0xf8/0x100
        [<b018891c>] memset+0x20/0x34
        [<b0238b71>] padzero+0x4d/0x58
       ==================================================================
      
      Use __memset in __xtensa_clear_user to avoid that.
      Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
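      A minimal sketch of the substitution (not the exact xtensa code):
      __memset is the uninstrumented alias of memset, so KASAN does not
      flag its deliberate write to user memory here.

        static inline unsigned long
        __xtensa_clear_user(void __user *addr, unsigned long size)
        {
                __memset((void __force *)addr, 0, size);
                return 0;               /* 0 bytes left uncleared */
        }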
    • xtensa: add support for KASAN · c633544a
      Committed by Max Filippov
      Cover kernel addresses above 0x90000000 with the shadow map.
      Enable HAVE_ARCH_KASAN when the MMU is enabled.  Provide
      kasan_early_init, which fills the shadow map with writable copies
      of kasan_zero_page, and call it right after MMU initialization in
      setup_arch.  Provide kasan_init, which allocates proper shadow map
      pages from the memblock and puts these pages into the shadow map
      for addresses from the VMALLOC area to the end of KSEG, and call it
      right after memblock initialization.  Don't use KASAN for the boot
      code, MMU and KASAN initialization, or the page fault handler.
      Make the kernel stack size 4 times larger when KASAN is enabled to
      avoid stack overflows.
      GCC 7.3, 8 or newer is required to build the xtensa kernel with
      KASAN.
      Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
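      For reference, the generic address-to-shadow translation this wires
      up (KASAN_SHADOW_OFFSET stands for the xtensa-chosen constant):

        static inline void *kasan_mem_to_shadow(const void *addr)
        {
                /* one shadow byte tracks 1 << KASAN_SHADOW_SCALE_SHIFT
                 * (i.e. 8) bytes of real memory */
                return (void *)(((unsigned long)addr >>
                                 KASAN_SHADOW_SCALE_SHIFT)
                                + KASAN_SHADOW_OFFSET);
        }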