1. 29 September 2018, 1 commit
  2. 18 August 2018, 1 commit
    • mm: convert return type of handle_mm_fault() caller to vm_fault_t · 50a7ca3c
      Committed by Souptick Joarder
      Use new return type vm_fault_t for fault handler.  For now, this is just
      documenting that the function returns a VM_FAULT value rather than an
      errno.  Once all instances are converted, vm_fault_t will become a
      distinct type.
      
      Ref-> commit 1c8f4220 ("mm: change return type to vm_fault_t")
      
      In this patch all the callers of handle_mm_fault() are changed to return
      vm_fault_t type.
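      
      A minimal sketch of the caller-side change (the fault-handling function
      and its error handling below are illustrative, not taken from any one
      architecture):
      
        #include <linux/mm.h>
        
        /* The variable receiving handle_mm_fault()'s result becomes
         * vm_fault_t rather than int/unsigned int, and the function that
         * propagates it changes its return type to match. */
        static vm_fault_t example_handle_fault(struct vm_area_struct *vma,
                                               unsigned long address,
                                               unsigned int flags)
        {
                vm_fault_t fault;       /* was: unsigned int fault; */
        
                fault = handle_mm_fault(vma, address, flags);
                if (fault & VM_FAULT_ERROR) {
                        /* architecture-specific OOM/SIGBUS/SIGSEGV handling */
                }
                return fault;
        }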
      
      Link: http://lkml.kernel.org/r/20180617084810.GA6730@jordon-HP-15-Notebook-PC
      Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: Ley Foon Tan <lftan@altera.com>
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: James E.J. Bottomley <jejb@parisc-linux.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Palmer Dabbelt <palmer@sifive.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: "Levin, Alexander (Sasha Levin)" <alexander.levin@verizon.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  3. 11 August 2018, 2 commits
    • MIPS: Consistently declare TLB functions · 4bcb4ad6
      Committed by Paul Burton
      Since at least the beginning of the git era we've declared our TLB
      exception handling functions inconsistently. They're actually functions,
      but we declare them as arrays of u32 where each u32 is an encoded
      instruction. This has always been the case for arch/mips/mm/tlbex.c, and
      has also been true for arch/mips/kernel/traps.c since commit
      86a1708a ("MIPS: Make tlb exception handler definitions and
      declarations match.") which aimed for consistency but did so by
      consistently making our C code inconsistent with our assembly.
      
      This is all usually harmless, but when using GCC 7 or newer to build a
      kernel targeting microMIPS (ie. CONFIG_CPU_MICROMIPS=y) it becomes
      problematic. With microMIPS bit 0 of the program counter indicates the
      ISA mode. When bit 0 is zero instructions are decoded using the standard
      MIPS32 or MIPS64 ISA. When bit 0 is one instructions are decoded using
      microMIPS. This means that function pointers become odd - their least
      significant bit is one for microMIPS code. We work around this in cases
      where we need to access code using loads & stores with our
      msk_isa16_mode() macro which simply clears bit 0 of the value it is
      given:
      
        #define msk_isa16_mode(x) ((x) & ~0x1)
      
      For example we do this for our TLB load handler in
      build_r4000_tlb_load_handler():
      
        u32 *p = (u32 *)msk_isa16_mode((ulong)handle_tlbl);
      
      We then write code to p, expecting it to be suitably aligned (our LEAF
      macro aligns functions on 4 byte boundaries, so (ulong)handle_tlbl will
      give a value one greater than a multiple of 4 - ie. the start of a
      function on a 4 byte boundary, with the ISA mode bit 0 set).
      
      This worked fine up to GCC 6, but GCC 7 & onwards is smart enough to
      presume that handle_tlbl which we declared as an array of u32s must be
      aligned sufficiently that bit 0 of its address will never be set, and as
      a result optimize out msk_isa16_mode(). This leads to p having an
      address with bit 0 set, and when we go on to attempt to store code at
      that address we take an address error exception due to the unaligned
      memory access.
      
      This leads to an exception prior to the kernel having configured its own
      exception handlers, so we jump to whatever handlers the bootloader
      configured. In the case of QEMU this results in a silent hang, since it
      has no useful general exception vector.
      
      Fix this by consistently declaring our TLB-related functions as
      functions. For handle_tlbl(), handle_tlbs() & handle_tlbm() we do this
      in asm/tlbex.h & we make use of the existing declaration of
      tlbmiss_handler_setup_pgd() in asm/mmu_context.h. Our TLB handler
      generation code in arch/mips/mm/tlbex.c is adjusted to deal with these
      definitions, in most cases simply by casting the function pointers to
      u32 pointers.
      
      This allows us to include asm/mmu_context.h in arch/mips/mm/tlbex.c to
      get the definitions of tlbmiss_handler_setup_pgd & pgd_current, removing
      some needless duplication. Consistently using msk_isa16_mode() on
      function pointers means we no longer need the
      tlbmiss_handler_setup_pgd_start symbol so that is removed entirely.
      
      Now that we're declaring our functions as functions GCC stops optimizing
      out msk_isa16_mode() & a microMIPS kernel built with either GCC 7.3.0 or
      8.1.0 boots successfully.
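      
      A minimal sketch of the declaration change (the small helper wrapping
      the cast is illustrative):
      
        /* Previously: "extern u32 handle_tlbl[];", which lets GCC assume the
         * symbol's address is word aligned and fold away msk_isa16_mode().
         * Declared as a function (as in asm/tlbex.h) the ISA-mode bit in its
         * address survives: */
        void handle_tlbl(void);
        
        #define msk_isa16_mode(x) ((x) & ~0x1)
        
        static u32 *tlbl_write_ptr(void)
        {
                /* The generator code now casts the function pointer
                 * explicitly; bit 0 (the microMIPS ISA-mode bit) is reliably
                 * cleared before the handler is written. */
                return (u32 *)msk_isa16_mode((unsigned long)handle_tlbl);
        }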
      Signed-off-by: Paul Burton <paul.burton@mips.com>
    • MIPS: Export tlbmiss_handler_setup_pgd near its definition · b29fea36
      Committed by Paul Burton
      We export tlbmiss_handler_setup_pgd in arch/mips/mm/tlbex.c close to a
      declaration of it, rather than close to its definition as is standard.
      
      We've supported exporting symbols in assembly code since commit
      22823ab4 ("EXPORT_SYMBOL() for asm"), so move the export to follow
      the function's (stub) definition.
      Signed-off-by: Paul Burton <paul.burton@mips.com>
  4. 10 August 2018, 1 commit
  5. 07 August 2018, 1 commit
    • MIPS: Avoid using array as parameter to write_c0_kpgd() · b023a939
      Committed by Paul Burton
      Passing an array (swapper_pg_dir) as the argument to write_c0_kpgd() in
      setup_pw() will become problematic if we modify __write_64bit_c0_split()
      to cast its val argument to unsigned long long, because for 32-bit
      kernel builds the size of a pointer will differ from the size of an
      unsigned long long. This would fall foul of gcc's pointer-to-int-cast
      diagnostic.
      
      Cast the value to a long, which should be the same width as the pointer
      that we ultimately want & will be sign extended if required to the
      unsigned long long that __write_64bit_c0_split() ultimately needs.
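      
      A sketch of the resulting call in setup_pw() (a single line, shown out
      of context):
      
        /* Casting the array to long first keeps 32-bit builds quiet when the
         * value is later widened to unsigned long long. */
        write_c0_kpgd((long)swapper_pg_dir);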
      Signed-off-by: Paul Burton <paul.burton@mips.com>
  6. 31 July 2018, 1 commit
    • MIPS: Make (UN)CAC_ADDR() PHYS_OFFSET-agnostic · 0d0e1477
      Committed by Paul Burton
      Converting an address between cached & uncached (typically addresses in
      (c)kseg0 & (c)kseg1 or 2 xkphys regions) should not depend upon
      PHYS_OFFSET in any way - we're converting from a virtual address in one
      unmapped region to a virtual address in another unmapped region.
      
      For some reason our CAC_ADDR() & UNCAC_ADDR() macros make use of
      PAGE_OFFSET, which typically includes PHYS_OFFSET. This means that
      platforms with a non-zero PHYS_OFFSET typically have to workaround
      miscalculation by these 2 macros by also defining UNCAC_BASE to a value
      that isn't really correct.
      
      It appears that an attempt has previously been made to address this with
      commit 3f4579252aa1 ("MIPS: make CAC_ADDR and UNCAC_ADDR account for
      PHYS_OFFSET") which was later undone by commit ed3ce16c ("Revert
      "MIPS: make CAC_ADDR and UNCAC_ADDR account for PHYS_OFFSET"") which
      also introduced the ar7 workaround. That attempt at a fix was roughly
      equivalent, but essentially caused the CAC_ADDR() & UNCAC_ADDR() macros
      to cancel out PHYS_OFFSET by adding & then subtracting it again. In his
      revert Leonid is correct that using PHYS_OFFSET makes no sense in the
      context of these macros, but appears to have missed its inclusion via
      PAGE_OFFSET which means PHYS_OFFSET actually had an effect after the
      revert rather than before it.
      
      Here we fix this by modifying CAC_ADDR() & UNCAC_ADDR() to stop using
      PAGE_OFFSET (& thus PHYS_OFFSET), instead using __pa() & __va() along
      with UNCAC_BASE.
      
      For UNCAC_ADDR(), __pa() will convert a cached address to a physical
      address which we can simply use as an offset from UNCAC_BASE to obtain
      an address in the uncached region.
      
      For CAC_ADDR() we can undo the effect of UNCAC_ADDR() by subtracting
      UNCAC_BASE and using __va() on the result.
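      
      A sketch of the resulting macros (the exact casts in the real header
      may differ):
      
        /* cached virtual -> physical -> uncached window */
        #define UNCAC_ADDR(addr)        (UNCAC_BASE + __pa(addr))
        /* uncached window -> physical -> cached virtual */
        #define CAC_ADDR(addr)          ((unsigned long)__va((addr) - UNCAC_BASE))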
      
      With this change made, remove definitions of UNCAC_BASE from the ar7 &
      pic32 platforms which appear to have defined them only to workaround
      this problem.
      Signed-off-by: Paul Burton <paul.burton@mips.com>
      References: 3f4579252aa1 ("MIPS: make CAC_ADDR and UNCAC_ADDR account for PHYS_OFFSET")
      References: ed3ce16c ("Revert "MIPS: make CAC_ADDR and UNCAC_ADDR account for PHYS_OFFSET"")
      Patchwork: https://patchwork.linux-mips.org/patch/20046/
      Cc: Florian Fainelli <f.fainelli@gmail.com>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: Vladimir Kondratiev <vladimir.kondratiev@intel.com>
  7. 28 July 2018, 1 commit
    • MIPS: remove mips_swiotlb_ops · a999933d
      Committed by Christoph Hellwig
      mips_swiotlb_ops differs from the generic swiotlb_dma_ops only in that
      it contains an mb() barrier after each operation that maps or syncs
      dma memory to the device.
      
      The dma operations are defined to not be memory barriers, but instead
      the write* operations to kick the DMA off are supposed to contain them.
      
      For MIPS this is handled by war_io_reorder_wmb(), which evaluates to the
      stronger wmb() instead of the pure compiler barrier barrier() for
      just those platforms that use swiotlb, so I think we are covered
      properly.
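      
      Schematically (the config gate below is illustrative; the real header
      selects the stronger barrier only for the affected platforms), the
      barrier lives in the MMIO write accessor rather than in the DMA ops:
      
        #ifdef CONFIG_NEEDS_STRONG_IO_ORDERING          /* illustrative gate */
        # define war_io_reorder_wmb()   wmb()           /* real write barrier */
        #else
        # define war_io_reorder_wmb()   barrier()       /* compiler-only */
        #endif
        
        static inline void example_writel(u32 val, volatile void __iomem *addr)
        {
                war_io_reorder_wmb();   /* order prior DMA-visible stores */
                __raw_writel(val, addr);
        }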
      
      [paul.burton@mips.com:
        - Include linux/swiotlb.h to fix build failures for configs with
          CONFIG_SWIOTLB=y.]
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Paul Burton <paul.burton@mips.com>
      Patchwork: https://patchwork.linux-mips.org/patch/20038/
      Cc: David Daney <ddaney@caviumnetworks.com>
      Cc: Huacai Chen <chenhc@lemote.com>
      Cc: linux-mips@linux-mips.org
      Cc: iommu@lists.linux-foundation.org
      Cc: linux-kernel@vger.kernel.org
  8. 27 July 2018, 1 commit
  9. 06 July 2018, 1 commit
    • MIPS: Fix ioremap() RAM check · 523402fa
      Committed by Paul Burton
      We currently attempt to check whether a physical address range provided
      to __ioremap() may be in use by the page allocator by examining the
      value of PageReserved for each page in the region - lowmem pages not
      marked reserved are presumed to be in use by the page allocator, and
      requests to ioremap them fail.
      
      The way we check this has been broken since commit 92923ca3 ("mm:
      meminit: only set page reserved in the memblock region"), because
      memblock will typically not have any knowledge of non-RAM pages and
      therefore those pages will not have the PageReserved flag set. Thus when
      we attempt to ioremap a region outside of RAM we incorrectly fail
      believing that the region is RAM that may be in use.
      
      In most cases ioremap() on MIPS will take a fast-path to use the
      unmapped kseg1 or xkphys virtual address spaces and never hit this path,
      so the only way to hit it is for a MIPS32 system to attempt to ioremap()
      an address range in lowmem with flags other than _CACHE_UNCACHED.
      Perhaps the most straightforward way to do this is using
      ioremap_uncached_accelerated(), which is how the problem was discovered.
      
      Fix this by making use of walk_system_ram_range() to test the address
      range provided to __ioremap() against only RAM pages, rather than all
      lowmem pages. This means that if we have a lowmem I/O region, which is
      very common for MIPS systems, we're free to ioremap() address ranges
      within it. A nice bonus is that the test is no longer limited to lowmem.
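      
      A sketch of the check (helper names here are illustrative, not the
      exact ones in arch/mips/mm/ioremap.c):
      
        #include <linux/ioport.h>
        #include <linux/pfn.h>
        
        /* walk_system_ram_range() invokes the callback only for PFN ranges
         * that are actually System RAM, so any hit means the requested
         * region overlaps RAM and must not be remapped. */
        static int check_ram_cb(unsigned long start_pfn, unsigned long nr_pages,
                                void *arg)
        {
                return 1;       /* stop the walk: RAM found in the range */
        }
        
        static bool region_is_ram(phys_addr_t phys_addr, unsigned long size)
        {
                unsigned long pfn = PFN_DOWN(phys_addr);
                unsigned long last_pfn = PFN_DOWN(phys_addr + size - 1);
        
                return walk_system_ram_range(pfn, last_pfn - pfn + 1, NULL,
                                             check_ram_cb) == 1;
        }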
      
      The approach here matches the way x86 performed the same test after
      commit c81c8a1e ("x86, ioremap: Speed up check for RAM pages") until
      x86 moved towards a slightly more complicated check using walk_mem_res()
      for unrelated reasons with commit 0e4c12b4 ("x86/mm, resource: Use
      PAGE_KERNEL protection for ioremap of memory pages").
      Signed-off-by: Paul Burton <paul.burton@mips.com>
      Reported-by: Serge Semin <fancer.lancer@gmail.com>
      Tested-by: Serge Semin <fancer.lancer@gmail.com>
      Fixes: 92923ca3 ("mm: meminit: only set page reserved in the memblock region")
      Cc: James Hogan <jhogan@kernel.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: stable@vger.kernel.org # v4.2+
      Patchwork: https://patchwork.linux-mips.org/patch/19786/
  10. 25 June 2018, 9 commits
  11. 15 May 2018, 2 commits
    • MIPS: Re-use kstrtobool_from_user() · f83e4e1e
      Committed by Andy Shevchenko
      Re-use kstrtobool_from_user() instead of open coded variant.
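      
      For reference, a sketch of the pattern (the debugfs-style write handler
      shown is hypothetical):
      
        #include <linux/fs.h>
        #include <linux/kernel.h>
        
        static ssize_t example_write(struct file *file, const char __user *ubuf,
                                     size_t count, loff_t *ppos)
        {
                bool enable;
                int err;
        
                /* Copies the user string and parses y/n/1/0/on/off in one
                 * call, replacing an open-coded copy_from_user() + parse. */
                err = kstrtobool_from_user(ubuf, count, &enable);
                if (err)
                        return err;
        
                /* ... act on 'enable' ... */
                return count;
        }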
      Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Signed-off-by: James Hogan <jhogan@kernel.org>
    • MIPS: c-r4k: Fix data corruption related to cache coherence · 55a2aa08
      Committed by NeilBrown
      When DMA is to be performed to a MIPS32 1004K CPS, the L1-cache for the
      range needs to be flushed and invalidated first.
      The code currently takes one of two approaches.
      1/ If the range is less than the size of the dcache, then HIT type
         requests flush/invalidate cache lines for the particular addresses.
      HIT-type requests are globalised by the CPS, so this is safe on SMP.
      
      2/ If the range is larger than the size of dcache, then INDEX type
         requests flush/invalidate the whole cache. INDEX type requests affect
         the local cache only. CPS does not propagate them in any way. So this
         invalidation is not safe on SMP CPS systems.
      
      Data corruption due to '2' can quite easily be demonstrated by
      repeatedly "echo 3 > /proc/sys/vm/drop_caches" and then sha1sum a file
      that is several times the size of available memory. Dropping caches
      means that large contiguous extents (larger than dcache) are more likely.
      
      This was not a problem before Linux-4.8 because option 2 was never used
      if CONFIG_MIPS_CPS was defined. The commit which removed that apparently
      didn't appreciate the full consequence of the change.
      
      We could, in theory, globalize the INDEX based flush by sending an IPI
      to other cores. These cache invalidation routines can be called with
      interrupts disabled, and synchronous IPIs require interrupts to be
      enabled. Asynchronous IPI may not trigger writeback soon enough. So we
      cannot use IPI in practice.
      
      We can already test if IPI would be needed for an INDEX operation with
      r4k_op_needs_ipi(R4K_INDEX). If this is true then we mustn't try the
      INDEX approach as we cannot use IPI. If this is false (e.g. when there
      is only one core and hence one L1 cache) then it is safe to use the
      INDEX approach without IPI.
      
      This patch avoids option 2 if r4k_op_needs_ipi(R4K_INDEX), and so
      eliminates the corruption.
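      
      Schematically (helper names as used in arch/mips/mm/c-r4k.c, but the
      condition is shown in simplified form):
      
        if (size >= dcache_size && !r4k_op_needs_ipi(R4K_INDEX))
                r4k_blast_dcache();     /* option 2: INDEX-type, local cache only */
        else
                blast_dcache_range(addr, addr + size);  /* option 1: HIT-type */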
      
      Fixes: c00ab489 ("MIPS: Remove cpu_has_safe_index_cacheops")
      Signed-off-by: NeilBrown <neil@brown.name>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paul Burton <paul.burton@mips.com>
      Cc: linux-mips@linux-mips.org
      Cc: <stable@vger.kernel.org> # 4.8+
      Patchwork: https://patchwork.linux-mips.org/patch/19259/
      Signed-off-by: James Hogan <jhogan@kernel.org>
  12. 08 May 2018, 1 commit
  13. 25 April 2018, 2 commits
    • signal/mips: Use force_sig_fault where appropriate · f43a54a0
      Committed by Eric W. Biederman
      Filling in struct siginfo before calling force_sig_info is a tedious and
      error prone process, where once in a great while the wrong fields
      are filled out, and siginfo has been inconsistently cleared.
      
      Simplify this process by using the helper force_sig_fault, which
      takes as parameters all of the information it needs, ensures
      all of the fiddly bits of filling in struct siginfo are done properly,
      and then calls force_sig_info.
      
      In short, about a 5 line reduction in code for every time force_sig_info
      is called, which makes the calling function clearer.
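      
      A sketch of a typical MIPS call site after the conversion (the signal,
      code and address shown are illustrative):
      
        /* Replaces a block that declared a struct siginfo, cleared it, set
         * si_signo/si_code/si_addr by hand and then called force_sig_info(). */
        force_sig_fault(SIGSEGV, SEGV_MAPERR, (void __user *)address, current);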
      
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: linux-mips@linux-mips.org
      Signed-off-by: N"Eric W. Biederman" <ebiederm@xmission.com>
    • signal: Ensure every siginfo we send has all bits initialized · 3eb0f519
      Committed by Eric W. Biederman
      Call clear_siginfo to ensure every stack allocated siginfo is properly
      initialized before being passed to the signal sending functions.
      
      Note: It is not safe to depend on C initializers to initialize struct
      siginfo on the stack because C is allowed to skip holes when
      initializing a structure.
      
      The initialization of struct siginfo in tracehook_report_syscall_exit
      was moved from the helper user_single_step_siginfo into
      tracehook_report_syscall_exit itself, to make it clear that the local
      variable siginfo gets fully initialized.
      
      In a few cases the scope of struct siginfo has been reduced to make it
      clear that the siginfo is not used on other paths in the function
      in which it is declared.
      
      Instances of using memset to initialize siginfo have been replaced
      with calls to clear_siginfo for clarity.
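      
      A sketch of the resulting pattern (the signal, code and address used
      below are illustrative):
      
        #include <linux/sched/signal.h>
        
        static void example_send_trap(void __user *fault_addr)
        {
                struct siginfo info;
        
                clear_siginfo(&info);   /* zero everything, padding included */
                info.si_signo = SIGTRAP;
                info.si_errno = 0;
                info.si_code  = TRAP_BRKPT;
                info.si_addr  = fault_addr;
                force_sig_info(SIGTRAP, &info, current);
        }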
      Signed-off-by: N"Eric W. Biederman" <ebiederm@xmission.com>
  14. 14 April 2018, 1 commit
  15. 12 April 2018, 1 commit
    • exec: pass stack rlimit into mm layout functions · 8f2af155
      Committed by Kees Cook
      Patch series "exec: Pin stack limit during exec".
      
      Attempts to solve problems with the stack limit changing during exec
      continue to be frustrated[1][2].  In addition to the specific issues
      around the Stack Clash family of flaws, Andy Lutomirski pointed out[3]
      other places during exec where the stack limit is used and is assumed to
      be unchanging.  Given the many places it gets used and the fact that it
      can be manipulated/raced via setrlimit() and prlimit(), I think the only
      way to handle this is to move away from the "current" view of the stack
      limit and instead attach it to the bprm, and plumb this down into the
      functions that need to know the stack limits.  This series implements
      the approach.
      
      [1] 04e35f44 ("exec: avoid RLIMIT_STACK races with prlimit()")
      [2] 779f4e1c ("Revert "exec: avoid RLIMIT_STACK races with prlimit()"")
      [3] to security@kernel.org, "Subject: existing rlimit races?"
      
      This patch (of 3):
      
      Since it is possible that the stack rlimit can change externally during
      exec (either via another thread calling setrlimit() or another process
      calling prlimit()), provide a way to pass the rlimit down into the
      per-architecture mm layout functions so that the rlimit can stay in the
      bprm structure instead of sitting in the signal structure until exec is
      finalized.
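      
      A sketch of the resulting interface (shown schematically; the exact
      prototype follows the series):
      
        /* Previously only the mm was passed and the stack rlimit was read
         * from current->signal; now the exec path hands down the snapshot
         * saved in bprm, so a concurrent setrlimit()/prlimit() cannot race
         * with the layout decision. */
        void arch_pick_mmap_layout(struct mm_struct *mm,
                                   struct rlimit *rlim_stack);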
      
      Link: http://lkml.kernel.org/r/1518638796-20819-2-git-send-email-keescook@chromium.org
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Ben Hutchings <ben@decadent.org.uk>
      Cc: Willy Tarreau <w@1wt.eu>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: "Jason A. Donenfeld" <Jason@zx2c4.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Laura Abbott <labbott@redhat.com>
      Cc: Greg KH <greg@kroah.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ben Hutchings <ben.hutchings@codethink.co.uk>
      Cc: Brad Spengler <spender@grsecurity.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  16. 06 April 2018, 1 commit
    • mm: fix races between swapoff and flush dcache · cb9f753a
      Committed by Huang Ying
      Thanks to commit 4b3ef9da ("mm/swap: split swap cache into 64MB
      trunks"), after swapoff the address_space associated with the swap
      device will be freed.  So page_mapping() users which may touch the
      address_space need some kind of mechanism to prevent the address_space
      from being freed while it is being accessed.
      
      The dcache flushing functions (flush_dcache_page(), etc) in architecture
      specific code may access the address_space of swap device for anonymous
      pages in swap cache via page_mapping() function.  But in some cases
      there are no mechanisms to prevent the swap device from being swapoff,
      for example,
      
        CPU1					CPU2
        __get_user_pages()			swapoff()
          flush_dcache_page()
            mapping = page_mapping()
              ...				  exit_swap_address_space()
              ...				    kvfree(spaces)
              mapping_mapped(mapping)
      
      The address space may be accessed after being freed.
      
      But per cachetlb.txt and Russell King, flush_dcache_page() only cares
      about file cache pages; for anonymous pages, flush_anon_page() should be
      used.  The implementation of flush_dcache_page() in all architectures
      follows this too.  They will check whether page_mapping() is NULL and
      whether mapping_mapped() is true to determine whether to flush the
      dcache immediately.  And they will use the interval tree (mapping->i_mmap)
      to find all user space mappings, while mapping_mapped() and
      mapping->i_mmap aren't used by anonymous pages in swap cache at all.
      
      So, to fix the race between swapoff and flush dcache, __page_mapping()
      is added to return the address_space for file cache pages and NULL
      otherwise.  All page_mapping() invocations in flush dcache functions are
      replaced with page_mapping_file().
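      
      A sketch of the simplified helper (per the akpm note below; its exact
      placement in mm/ may differ):
      
        struct address_space *page_mapping_file(struct page *page)
        {
                /* Anonymous pages in swap cache must not expose the swap
                 * device's address_space, which swapoff() may free. */
                if (unlikely(PageSwapCache(page)))
                        return NULL;
                return page_mapping(page);
        }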
      
      [akpm@linux-foundation.org: simplify page_mapping_file(), per Mike]
      Link: http://lkml.kernel.org/r/20180305083634.15174-1-ying.huang@intel.com
      Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
      Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Chen Liqin <liqin.linux@gmail.com>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
      Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Ley Foon Tan <lftan@altera.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  17. 19 February 2018, 1 commit
  18. 19 January 2018, 1 commit
  19. 15 January 2018, 1 commit
  20. 10 January 2018, 2 commits
  21. 04 November 2017, 1 commit
    • Update MIPS email addresses · fb615d61
      Committed by Paul Burton
      MIPS will soon not be a part of Imagination Technologies, and as such
      many @imgtec.com email addresses will no longer be valid. This patch
      updates the addresses for those who:
      
       - Have 10 or more patches in mainline authored using an @imgtec.com
         email address, or any patches dated within the past year.
      
       - Are still with Imagination but leaving as part of the MIPS business
         unit, as determined from an internal email address list.
      
       - Haven't already updated their email address (ie. JamesH) or expressed
         a desire to be excluded (ie. Maciej).
      
       - Acked v2 or earlier of this patch, which leaves Deng-Cheng, Matt &
         myself.
      
      New addresses are of the form firstname.lastname@mips.com, and all
      verified against an internal email address list.  An entry is added to
      .mailmap for each person such that get_maintainer.pl will report the new
      addresses rather than @imgtec.com addresses which will soon be dead.
      
      Instances of the affected addresses throughout the tree are then
      mechanically replaced with the new @mips.com address.
      Signed-off-by: Paul Burton <paul.burton@mips.com>
      Cc: Deng-Cheng Zhu <dengcheng.zhu@imgtec.com>
      Cc: Deng-Cheng Zhu <dengcheng.zhu@mips.com>
      Acked-by: Dengcheng Zhu <dengcheng.zhu@mips.com>
      Cc: Matt Redfearn <matt.redfearn@imgtec.com>
      Cc: Matt Redfearn <matt.redfearn@mips.com>
      Acked-by: Matt Redfearn <matt.redfearn@mips.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: linux-kernel@vger.kernel.org
      Cc: linux-mips@linux-mips.org
      Cc: trivial@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  22. 02 November 2017, 1 commit
    • License cleanup: add SPDX GPL-2.0 license identifier to files with no license · b2441318
      Committed by Greg Kroah-Hartman
      Many source files in the tree are missing licensing information, which
      makes it harder for compliance tools to determine the correct license.
      
      By default all files without license information are under the default
      license of the kernel, which is GPL version 2.
      
      Update the files which contain no license information with the 'GPL-2.0'
      SPDX license identifier.  The SPDX identifier is a legally binding
      shorthand, which can be used instead of the full boiler plate text.
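      
      For example, the identifier added to a .c source file is a single
      comment line at the very top of the file:
      
        // SPDX-License-Identifier: GPL-2.0
      
      Header files instead carry it as "/* SPDX-License-Identifier: GPL-2.0 */",
      matching the different comment types mentioned at the end of this
      message.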
      
      This patch is based on work done by Thomas Gleixner and Kate Stewart and
      Philippe Ombredanne.
      
      How this work was done:
      
      Patches were generated and checked against linux-4.14-rc6 for a subset of
      the use cases:
       - file had no licensing information in it.
       - file was a */uapi/* one with no licensing information in it,
       - file was a */uapi/* one with existing licensing information,
      
      Further patches will be generated in subsequent months to fix up cases
      where non-standard license headers were used, and references to license
      had to be inferred by heuristics based on keywords.
      
      The analysis to determine which SPDX License Identifier to be applied to
      a file was done in a spreadsheet of side-by-side results from the
      output of two independent scanners (ScanCode & Windriver) producing SPDX
      tag:value files created by Philippe Ombredanne.  Philippe prepared the
      base worksheet, and did an initial spot review of a few 1000 files.
      
      The 4.13 kernel was the starting point of the analysis with 60,537 files
      assessed.  Kate Stewart did a file by file comparison of the scanner
      results in the spreadsheet to determine which SPDX license identifier(s)
      to be applied to the file. She confirmed any determination that was not
      immediately clear with lawyers working with the Linux Foundation.
      
      Criteria used to select files for SPDX license identifier tagging was:
       - Files considered eligible had to be source code files.
       - Make and config files were included as candidates if they contained >5
         lines of source
       - File already had some variant of a license header in it (even if <5
         lines).
      
      All documentation files were explicitly excluded.
      
      The following heuristics were used to determine which SPDX license
      identifiers to apply.
      
       - when both scanners couldn't find any license traces, file was
         considered to have no license information in it, and the top level
         COPYING file license applied.
      
         For non */uapi/* files that summary was:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|-------
         GPL-2.0                                              11139
      
         and resulted in the first patch in this series.
      
         If that file was a */uapi/* path one, it was "GPL-2.0 WITH
         Linux-syscall-note" otherwise it was "GPL-2.0".  Results of that was:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|-------
         GPL-2.0 WITH Linux-syscall-note                        930
      
         and resulted in the second patch in this series.
      
       - if a file had some form of licensing information in it, and was one
         of the */uapi/* ones, it was denoted with the Linux-syscall-note if
         any GPL family license was found in the file or had no licensing in
         it (per prior point).  Results summary:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|------
         GPL-2.0 WITH Linux-syscall-note                       270
         GPL-2.0+ WITH Linux-syscall-note                      169
         ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)    21
         ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)    17
         LGPL-2.1+ WITH Linux-syscall-note                      15
         GPL-1.0+ WITH Linux-syscall-note                       14
         ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)    5
         LGPL-2.0+ WITH Linux-syscall-note                       4
         LGPL-2.1 WITH Linux-syscall-note                        3
         ((GPL-2.0 WITH Linux-syscall-note) OR MIT)              3
         ((GPL-2.0 WITH Linux-syscall-note) AND MIT)             1
      
         and that resulted in the third patch in this series.
      
       - when the two scanners agreed on the detected license(s), that became
         the concluded license(s).
      
       - when there was disagreement between the two scanners (one detected a
         license but the other didn't, or they both detected different
         licenses) a manual inspection of the file occurred.
      
       - In most cases a manual inspection of the information in the file
         resulted in a clear resolution of the license that should apply (and
         which scanner probably needed to revisit its heuristics).
      
       - When it was not immediately clear, the license identifier was
         confirmed with lawyers working with the Linux Foundation.
      
       - If there was any question as to the appropriate license identifier,
         the file was flagged for further research and to be revisited later
         in time.
      
      In total, over 70 hours of logged manual review was done on the
      spreadsheet to determine the SPDX license identifiers to apply to the
      source files by Kate, Philippe, Thomas and, in some cases, confirmation
      by lawyers working with the Linux Foundation.
      
      Kate also obtained a third independent scan of the 4.13 code base from
      FOSSology, and compared selected files where the other two scanners
      disagreed against that SPDX file, to see if there were new insights.  The
      Windriver scanner is based on an older version of FOSSology in part, so
      they are related.
      
      Thomas did random spot checks in about 500 files from the spreadsheets
      for the uapi headers and agreed with SPDX license identifier in the
      files he inspected. For the non-uapi files Thomas did random spot checks
      in about 15000 files.
      
      In the initial set of patches against 4.14-rc6, 3 files were found to have
      copy/paste license identifier errors, and have been fixed to reflect the
      correct identifier.
      
      Additionally Philippe spent 10 hours this week doing a detailed manual
      inspection and review of the 12,461 patched files from the initial patch
      version early this week with:
       - a full scancode scan run, collecting the matched texts, detected
         license ids and scores
       - reviewing anything where there was a license detected (about 500+
         files) to ensure that the applied SPDX license was correct
       - reviewing anything where there was no detection but the patch license
         was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
         SPDX license was correct
      
      This produced a worksheet with 20 files needing minor correction.  This
      worksheet was then exported into 3 different .csv files for the
      different types of files to be modified.
      
      These .csv files were then reviewed by Greg.  Thomas wrote a script to
      parse the csv files and add the proper SPDX tag to the file, in the
      format that the file expected.  This script was further refined by Greg
      based on the output to detect more types of files automatically and to
      distinguish between header and source .c files (which need different
      comment types.)  Finally Greg ran the script using the .csv files to
      generate the patches.
      Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
      Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  23. 01 November 2017, 2 commits
  24. 19 October 2017, 1 commit
  25. 09 October 2017, 2 commits
  26. 06 September 2017, 1 commit