1. 02 March 2018, 1 commit
  2. 01 March 2018, 2 commits
  3. 28 February 2018, 9 commits
  4. 15 February 2018, 5 commits
  5. 12 February 2018, 3 commits
    • unify {de,}mangle_poll(), get rid of kernel-side POLL... · 7a163b21
      Al Viro committed
      except, again, POLLFREE and POLL_BUSY_LOOP.
      
      With this, we finally get to the promised end result:
      
       - POLL{IN,OUT,...} are plain integers and *not* in __poll_t, so any
         stray instances of ->poll() still using those will be caught by
         sparse.
      
       - eventpoll.c and select.c warning-free wrt __poll_t
      
       - no more kernel-side definitions of POLL... - userland ones are
         visible through the entire kernel (and used pretty much only for
         mangle/demangle)
      
       - same behavior as after the first series (i.e. sparc et.al. epoll(2)
         working correctly).
       Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
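       A minimal sketch (not from the commit) of what a driver's ->poll() looks
       like after this series: the method works purely in __poll_t/EPOLL* terms,
       and the POLL*<->EPOLL* translation stays in mangle_poll()/demangle_poll()
       at the userland boundary.  The wait queue, ready flag and includes here
       are hypothetical placeholders.

       ```
       #include <linux/fs.h>
       #include <linux/wait.h>
       #include <linux/poll.h>
       #include <linux/eventpoll.h>

       static DECLARE_WAIT_QUEUE_HEAD(demo_wq);   /* hypothetical device state */
       static bool demo_data_ready;

       static __poll_t demo_poll(struct file *file, poll_table *wait)
       {
               __poll_t mask = 0;

               poll_wait(file, &demo_wq, wait);
               if (demo_data_ready)
                       mask |= EPOLLIN | EPOLLRDNORM;
               return mask;
       }
       ```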
    • vfs: do bulk POLL* -> EPOLL* replacement · a9a08845
      Linus Torvalds committed
      This is the mindless scripted replacement of kernel use of POLL*
      variables as described by Al, done by this script:
      
          for V in IN OUT PRI ERR RDNORM RDBAND WRNORM WRBAND HUP RDHUP NVAL MSG; do
              L=`git grep -l -w POLL$V | grep -v '^t' | grep -v /um/ | grep -v '^sa' | grep -v '/poll.h$'|grep -v '^D'`
              for f in $L; do sed -i "-es/^\([^\"]*\)\(\<POLL$V\>\)/\\1E\\2/" $f; done
          done
      
      with de-mangling cleanups yet to come.
      
      NOTE! On almost all architectures, the EPOLL* constants have the same
       values as the POLL* constants do.  But the keyword here is "almost".
      For various bad reasons they aren't the same, and epoll() doesn't
      actually work quite correctly in some cases due to this on Sparc et al.
      
      The next patch from Al will sort out the final differences, and we
      should be all done.
       Scripted-by: Al Viro <viro@zeniv.linux.org.uk>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • xtensa: fix build with KASAN · f8d0cbf2
      Max Filippov committed
       Commit 917538e2 ("kasan: clean up KASAN_SHADOW_SCALE_SHIFT
       usage") removed the KASAN_SHADOW_SCALE_SHIFT definition from
       include/linux/kasan.h and added it to architecture-specific headers,
       except for xtensa. This broke the xtensa build with KASAN enabled.
       Define KASAN_SHADOW_SCALE_SHIFT in arch/xtensa/include/asm/kasan.h.

       Reported-by: kbuild test robot <fengguang.wu@intel.com>
       Fixes: 917538e2 ("kasan: clean up KASAN_SHADOW_SCALE_SHIFT usage")
       Acked-by: Andrey Konovalov <andreyknvl@google.com>
       Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
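       A minimal sketch of the fix described above, assuming xtensa follows the
       generic KASAN granularity of eight bytes of memory per shadow byte (the
       exact value should be taken from the real header, not from this sketch):

       ```
       /* arch/xtensa/include/asm/kasan.h (sketch) */
       #ifndef KASAN_SHADOW_SCALE_SHIFT
       #define KASAN_SHADOW_SCALE_SHIFT 3   /* one shadow byte covers 2^3 = 8 bytes */
       #endif
       ```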
  6. 11 February 2018, 3 commits
  7. 10 February 2018, 1 commit
  8. 09 February 2018, 4 commits
    • KVM: PPC: Book3S: Add MMIO emulation for VMX instructions · 09f98496
      Jose Ricardo Ziviani committed
      This patch provides the MMIO load/store vector indexed
      X-Form emulation.
      
      Instructions implemented:
      lvx: the quadword in storage addressed by the result of EA &
      0xffff_ffff_ffff_fff0 is loaded into VRT.
      
      stvx: the contents of VRS are stored into the quadword in storage
      addressed by the result of EA & 0xffff_ffff_ffff_fff0.
       Reported-by: Gopesh Kumar Chaudhary <gopchaud@in.ibm.com>
       Reported-by: Balamuruhan S <bala24@linux.vnet.ibm.com>
       Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
       Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
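       A standalone illustration (not kernel code) of the effective-address
       masking described above: both lvx and stvx force 16-byte alignment by
       clearing the low four bits of the EA.

       ```
       #include <stdio.h>
       #include <inttypes.h>

       int main(void)
       {
               uint64_t ea = 0x0000123456789abcULL;        /* arbitrary example EA */
               uint64_t quad = ea & 0xfffffffffffffff0ULL; /* EA & 0xffff_ffff_ffff_fff0 */

               printf("ea=0x%016" PRIx64 " -> quadword at 0x%016" PRIx64 "\n", ea, quad);
               return 0;
       }
       ```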
    • KVM: PPC: Book3S HV: Branch inside feature section · d20fe50a
      Alexander Graf committed
      We ended up with code that did a conditional branch inside a feature
      section to code outside of the feature section. Depending on how the
       object file gets organized, that might mean we exceed the 14-bit
       relocation limit for conditional branches:
      
        arch/powerpc/kvm/built-in.o:arch/powerpc/kvm/book3s_hv_rmhandlers.S:416:(__ftr_alt_97+0x8): relocation truncated to fit: R_PPC64_REL14 against `.text'+1ca4
      
       So instead of doing a conditional branch outside of the feature section,
       let's just jump to the end of the same section, making the branch very short.
       Signed-off-by: Alexander Graf <agraf@suse.de>
       Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
    • KVM: PPC: Book3S HV: Make HPT resizing work on POWER9 · 790a9df5
      David Gibson committed
      This adds code to enable the HPT resizing code to work on POWER9,
      which uses a slightly modified HPT entry format compared to POWER8.
      On POWER9, we convert HPTEs read from the HPT from the new format to
      the old format so that the rest of the HPT resizing code can work as
      before.  HPTEs written to the new HPT are converted to the new format
      as the last step before writing them into the new HPT.
      
      This takes out the checks added by commit bcd3bb63 ("KVM: PPC:
      Book3S HV: Disable HPT resizing on POWER9 for now", 2017-02-18),
      now that HPT resizing works on POWER9.
      
      On POWER9, when we pivot to the new HPT, we now call
      kvmppc_setup_partition_table() to update the partition table in order
      to make the hardware use the new HPT.
      
      [paulus@ozlabs.org - added kvmppc_setup_partition_table() call,
       wrote commit message.]
       Tested-by: Laurent Vivier <lvivier@redhat.com>
       Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
       Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
    • KVM: PPC: Book3S HV: Fix handling of secondary HPTEG in HPT resizing code · 05f2bb03
      Paul Mackerras committed
      This fixes the computation of the HPTE index to use when the HPT
      resizing code encounters a bolted HPTE which is stored in its
      secondary HPTE group.  The code inverts the HPTE group number, which
      is correct, but doesn't then mask it with new_hash_mask.  As a result,
      new_pteg will be effectively negative, resulting in new_hptep
      pointing before the new HPT, which will corrupt memory.
      
      In addition, this removes two BUG_ON statements.  The condition that
      the BUG_ONs were testing -- that we have computed the hash value
      incorrectly -- has never been observed in testing, and if it did
      occur, would only affect the guest, not the host.  Given that
      BUG_ON should only be used in conditions where the kernel (i.e.
      the host kernel, in this case) can't possibly continue execution,
      it is not appropriate here.
       Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
       Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
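       A standalone illustration of the indexing bug described above, with
       made-up values: inverting the hash for the secondary group without
       masking produces an index far outside the new HPT, while masking with
       new_hash_mask keeps it in range.

       ```
       #include <stdio.h>

       int main(void)
       {
               unsigned long new_hash_mask = (1UL << 18) - 1; /* hypothetical new HPT hash mask */
               unsigned long hash = 0x2345UL;

               unsigned long unmasked = ~hash;                  /* effectively negative: huge index */
               unsigned long masked   = ~hash & new_hash_mask;  /* stays inside the new HPT */

               printf("unmasked=0x%lx masked=0x%lx\n", unmasked, masked);
               return 0;
       }
       ```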
  9. 08 February 2018, 4 commits
  10. 07 February 2018, 8 commits
    • s390: introduce execute-trampolines for branches · f19fbd5e
      Martin Schwidefsky committed
       Add CONFIG_EXPOLINE to enable the use of the new -mindirect-branch= and
       -mfunction_return= compiler options to create a kernel fortified against
       the spectre v2 attack.
      
      With CONFIG_EXPOLINE=y all indirect branches will be issued with an
      execute type instruction. For z10 or newer the EXRL instruction will
      be used, for older machines the EX instruction. The typical indirect
      call
      
      	basr	%r14,%r1
      
      is replaced with a PC relative call to a new thunk
      
      	brasl	%r14,__s390x_indirect_jump_r1
      
      The thunk contains the EXRL/EX instruction to the indirect branch
      
      __s390x_indirect_jump_r1:
      	exrl	0,0f
      	j	.
      0:	br	%r1
      
      The detour via the execute type instruction has a performance impact.
       To get rid of the detour the new kernel parameters "nospectre_v2" and
      "spectre_v2=[on,off,auto]" can be used. If the parameter is specified
      the kernel and module code will be patched at runtime.
       Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
    • x86: hibernate: fix swsusp_arch_resume() prototype · 168b6511
      Arnd Bergmann committed
      The declaration for swsusp_arch_resume() marks it as 'asmlinkage',
      but the definition in x86-32 does not, and it fails to include
      the header with the declaration.  This leads to a warning when
      building with link-time-optimizations:
      
      kernel/power/power.h:108:23: error: type of 'swsusp_arch_resume' does not match original declaration [-Werror=lto-type-mismatch]
       extern asmlinkage int swsusp_arch_resume(void);
                             ^
      arch/x86/power/hibernate_32.c:148:0: note: 'swsusp_arch_resume' was previously declared here
       int swsusp_arch_resume(void)
      
      This moves the declaration into a globally visible header file
      and fixes up both x86 definitions to match it.
       Signed-off-by: Arnd Bergmann <arnd@arndb.de>
       Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
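       A sketch of the shape of the fix (not the exact hunk): the declaration
       carries asmlinkage in a globally visible header, and the x86-32
       definition uses the same annotation so the prototypes match under LTO.

       ```
       /* globally visible declaration (header location per the commit) */
       asmlinkage int swsusp_arch_resume(void);

       /* arch/x86/power/hibernate_32.c: definition now matches the declaration */
       asmlinkage int swsusp_arch_resume(void)
       {
               /* ... existing body unchanged ... */
               return 0;
       }
       ```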
    • ACPI: SPCR: Make SPCR available to x86 · 0231d000
      Prarit Bhargava committed
       SPCR is currently only enabled for ARM64, but x86 can also use SPCR to
       set up an early console.
      
       General fixes include updating Documentation & Kconfig (for x86),
       updating comments, changing parse_spcr() to acpi_parse_spcr(), and
       renaming earlycon_init_is_deferred to earlycon_acpi_spcr_enable to be
       more descriptive.
      
      On x86, many systems have a valid SPCR table but the table version is
      not 2 so the table version check must be a warning.
      
       On ARM64, when the kernel parameter earlycon is used, both the early console
       and the console are enabled.  On x86, only the earlycon should be enabled
       by default.  Modify acpi_parse_spcr() to allow options for initializing
       the early console and the console separately.
       Signed-off-by: Prarit Bhargava <prarit@redhat.com>
       Acked-by: Ingo Molnar <mingo@kernel.org>
       Reviewed-by: Mark Salter <msalter@redhat.com>
       Tested-by: Mark Salter <msalter@redhat.com>
       Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
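       A hedged sketch of the split described above; treat the exact prototype
       as an assumption reconstructed from the commit text, not a verified
       interface.

       ```
       #include <linux/types.h>

       /* acpi_parse_spcr() now takes the two concerns separately (assumed prototype) */
       int acpi_parse_spcr(bool enable_earlycon, bool enable_console);

       static void demo_callers(void)
       {
               acpi_parse_spcr(true, false);  /* x86: early console only, by default */
               acpi_parse_spcr(true, true);   /* arm64 with "earlycon": earlycon and console */
       }
       ```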
    • arch/score/kernel/setup.c: combine two seq_printf() calls into one call in show_cpuinfo() · b0f7e32c
      Markus Elfring committed
      Some data were printed into a sequence by two separate function calls.
      Print the same data by a single function call instead.
      
      This issue was detected by using the Coccinelle software.
      
       Link: http://lkml.kernel.org/r/ddcfff3a-9502-6ce0-b08a-365eb55ce958@users.sourceforge.net
       Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
       Cc: Chen Liqin <liqin.linux@gmail.com>
       Cc: Lennox Wu <lennox.wu@gmail.com>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
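       A small sketch of the transformation described above; the fields are
       hypothetical, the real hunk is in show_cpuinfo() in
       arch/score/kernel/setup.c.

       ```
       #include <linux/seq_file.h>

       /* Before, the same output needed two calls:
        *     seq_printf(m, "processor\t\t: %d\n", cpu_id);
        *     seq_printf(m, "cpu family\t\t: %s\n", cpu_family);
        */
       static void demo_show(struct seq_file *m, int cpu_id, const char *cpu_family)
       {
               seq_printf(m, "processor\t\t: %d\n"
                             "cpu family\t\t: %s\n",
                          cpu_id, cpu_family);
       }
       ```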
    • pids: introduce find_get_task_by_vpid() helper · 2ee08260
      Mike Rapoport committed
      There are several functions that do find_task_by_vpid() followed by
      get_task_struct().  We can use a helper function instead.
      
       Link: http://lkml.kernel.org/r/1509602027-11337-1-git-send-email-rppt@linux.vnet.ibm.com
       Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
       Acked-by: Oleg Nesterov <oleg@redhat.com>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
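       A minimal sketch of the helper and the pattern it replaces (the name and
       signature come from the commit title; the body is the usual find/get
       idiom and the includes are approximate, so treat it as a sketch rather
       than a verified copy of the upstream code):

       ```
       #include <linux/sched.h>
       #include <linux/sched/task.h>
       #include <linux/rcupdate.h>

       struct task_struct *find_get_task_by_vpid(pid_t nr)
       {
               struct task_struct *task;

               rcu_read_lock();
               task = find_task_by_vpid(nr);
               if (task)
                       get_task_struct(task);
               rcu_read_unlock();

               return task;
       }
       ```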
    • lib: optimize cpumask_next_and() · 0ade34c3
      Clement Courbet committed
      We've measured that we spend ~0.6% of sys cpu time in cpumask_next_and().
      It's essentially a joined iteration in search for a non-zero bit, which is
      currently implemented as a lookup join (find a nonzero bit on the lhs,
      lookup the rhs to see if it's set there).
      
       Implement a direct join (find a nonzero bit on the incrementally built
       join).  Also add generic bitmap benchmarks in the new `test_find_bit`
       module for the new function (see `find_next_and_bit` in [2] and [3] below).
      
      For cpumask_next_and, direct benchmarking shows that it's 1.17x to 14x
      faster with a geometric mean of 2.1 on 32 CPUs [1].  No impact on memory
      usage.  Note that on Arm, the new pure-C implementation still outperforms
      the old one that uses a mix of C and asm (`find_next_bit`) [3].
      
      [1] Approximate benchmark code:
      
      ```
        unsigned long src1p[nr_cpumask_longs] = {pattern1};
        unsigned long src2p[nr_cpumask_longs] = {pattern2};
        for (/*a bunch of repetitions*/) {
          for (int n = -1; n <= nr_cpu_ids; ++n) {
            asm volatile("" : "+rm"(src1p)); // prevent any optimization
            asm volatile("" : "+rm"(src2p));
            unsigned long result = cpumask_next_and(n, src1p, src2p);
            asm volatile("" : "+rm"(result));
          }
        }
      ```
      
      Results:
      pattern1    pattern2     time_before/time_after
      0x0000ffff  0x0000ffff   1.65
      0x0000ffff  0x00005555   2.24
      0x0000ffff  0x00001111   2.94
      0x0000ffff  0x00000000   14.0
      0x00005555  0x0000ffff   1.67
      0x00005555  0x00005555   1.71
      0x00005555  0x00001111   1.90
      0x00005555  0x00000000   6.58
      0x00001111  0x0000ffff   1.46
      0x00001111  0x00005555   1.49
      0x00001111  0x00001111   1.45
      0x00001111  0x00000000   3.10
      0x00000000  0x0000ffff   1.18
      0x00000000  0x00005555   1.18
      0x00000000  0x00001111   1.17
      0x00000000  0x00000000   1.25
      -----------------------------
                     geo.mean  2.06
      
      [2] test_find_next_bit, X86 (skylake)
      
       [ 3913.477422] Start testing find_bit() with random-filled bitmap
       [ 3913.477847] find_next_bit: 160868 cycles, 16484 iterations
       [ 3913.477933] find_next_zero_bit: 169542 cycles, 16285 iterations
       [ 3913.478036] find_last_bit: 201638 cycles, 16483 iterations
       [ 3913.480214] find_first_bit: 4353244 cycles, 16484 iterations
       [ 3913.480216] Start testing find_next_and_bit() with random-filled
       bitmap
       [ 3913.481074] find_next_and_bit: 89604 cycles, 8216 iterations
       [ 3913.481075] Start testing find_bit() with sparse bitmap
       [ 3913.481078] find_next_bit: 2536 cycles, 66 iterations
       [ 3913.481252] find_next_zero_bit: 344404 cycles, 32703 iterations
       [ 3913.481255] find_last_bit: 2006 cycles, 66 iterations
       [ 3913.481265] find_first_bit: 17488 cycles, 66 iterations
       [ 3913.481266] Start testing find_next_and_bit() with sparse bitmap
       [ 3913.481272] find_next_and_bit: 764 cycles, 1 iterations
      
      [3] test_find_next_bit, arm (v7 odroid XU3).
      
      [  267.206928] Start testing find_bit() with random-filled bitmap
      [  267.214752] find_next_bit: 4474 cycles, 16419 iterations
      [  267.221850] find_next_zero_bit: 5976 cycles, 16350 iterations
      [  267.229294] find_last_bit: 4209 cycles, 16419 iterations
      [  267.279131] find_first_bit: 1032991 cycles, 16420 iterations
      [  267.286265] Start testing find_next_and_bit() with random-filled
      bitmap
      [  267.302386] find_next_and_bit: 2290 cycles, 8140 iterations
      [  267.309422] Start testing find_bit() with sparse bitmap
      [  267.316054] find_next_bit: 191 cycles, 66 iterations
      [  267.322726] find_next_zero_bit: 8758 cycles, 32703 iterations
      [  267.329803] find_last_bit: 84 cycles, 66 iterations
      [  267.336169] find_first_bit: 4118 cycles, 66 iterations
      [  267.342627] Start testing find_next_and_bit() with sparse bitmap
      [  267.356919] find_next_and_bit: 91 cycles, 1 iterations
      
      [courbet@google.com: v6]
        Link: http://lkml.kernel.org/r/20171129095715.23430-1-courbet@google.com
      [geert@linux-m68k.org: m68k/bitops: always include <asm-generic/bitops/find.h>]
        Link: http://lkml.kernel.org/r/1512556816-28627-1-git-send-email-geert@linux-m68k.org
       Link: http://lkml.kernel.org/r/20171128131334.23491-1-courbet@google.com
       Signed-off-by: Clement Courbet <courbet@google.com>
       Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Yury Norov <ynorov@caviumnetworks.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
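       A toy user-space sketch of the "direct join" idea, assuming a GCC/Clang
       builtin for the bit scan; the kernel's find_next_and_bit is the real,
       optimized version of this and shares none of this code.

       ```
       #include <stdio.h>

       #define BITS_PER_LONG (8 * sizeof(unsigned long))

       /* AND the two bitmaps word by word and search the combined word, instead
        * of finding a set bit in the first bitmap and then probing the second. */
       static unsigned long find_next_and_bit_demo(const unsigned long *addr1,
                                                   const unsigned long *addr2,
                                                   unsigned long nbits,
                                                   unsigned long start)
       {
               for (unsigned long i = start / BITS_PER_LONG; i * BITS_PER_LONG < nbits; i++) {
                       unsigned long word = addr1[i] & addr2[i];

                       if (i == start / BITS_PER_LONG)          /* drop bits below 'start' */
                               word &= ~0UL << (start % BITS_PER_LONG);
                       if (word) {
                               unsigned long bit = i * BITS_PER_LONG + __builtin_ctzl(word);
                               return bit < nbits ? bit : nbits;
                       }
               }
               return nbits;
       }

       int main(void)
       {
               unsigned long a[1] = { 0x0000ffffUL };
               unsigned long b[1] = { 0x00005555UL };

               /* next common set bit at or after bit 1 is bit 2 */
               printf("%lu\n", find_next_and_bit_demo(a, b, BITS_PER_LONG, 1));
               return 0;
       }
       ```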
    • bitmap: replace bitmap_{from,to}_u32array · 3aa56885
      Yury Norov committed
       with bitmap_{from,to}_arr32 across the kernel. In addition:
       * __check_eq_bitmap() now takes a single nbits argument.
       * __check_eq_u32_array is not used in the new test but may be used in
         the future, so I don't remove it here, but annotate it as __used.
      
      Tested on arm64 and 32-bit BE mips.
      
      [arnd@arndb.de: perf: arm_dsu_pmu: convert to bitmap_from_arr32]
        Link: http://lkml.kernel.org/r/20180201172508.5739-2-ynorov@caviumnetworks.com
      [ynorov@caviumnetworks.com: fix net/core/ethtool.c]
        Link: http://lkml.kernel.org/r/20180205071747.4ekxtsbgxkj5b2fz@yury-thinkpad
       Link: http://lkml.kernel.org/r/20171228150019.27953-2-ynorov@caviumnetworks.com
       Signed-off-by: Yury Norov <ynorov@caviumnetworks.com>
       Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Cc: Ben Hutchings <ben@decadent.org.uk>
      Cc: David Decotigny <decot@googlers.com>,
      Cc: David S. Miller <davem@davemloft.net>,
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Matthew Wilcox <mawilcox@microsoft.com>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: Heiner Kallweit <hkallweit1@gmail.com>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
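       A hedged sketch of the call-site conversion this commit performs: the new
       helper names come from the commit, the old calls are shown in comments,
       and the surrounding function and sizes are hypothetical.

       ```
       #include <linux/bitmap.h>
       #include <linux/kernel.h>

       #define DEMO_NBITS 100

       static void demo_convert(unsigned long *bits, u32 *words)
       {
               /* old: bitmap_to_u32array(words, DIV_ROUND_UP(DEMO_NBITS, 32), bits, DEMO_NBITS); */
               bitmap_to_arr32(words, bits, DEMO_NBITS);

               /* old: bitmap_from_u32array(bits, DEMO_NBITS, words, DIV_ROUND_UP(DEMO_NBITS, 32)); */
               bitmap_from_arr32(bits, words, DEMO_NBITS);
       }
       ```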
    • Makefile: introduce CONFIG_CC_STACKPROTECTOR_AUTO · 44c6dc94
      Kees Cook committed
      Nearly all modern compilers support a stack-protector option, and nearly
      all modern distributions enable the kernel stack-protector, so enabling
      this by default in kernel builds would make sense.  However, Kconfig does
      not have knowledge of available compiler features, so it isn't safe to
      force on, as this would unconditionally break builds for the compilers or
      architectures that don't have support.  Instead, this introduces a new
      option, CONFIG_CC_STACKPROTECTOR_AUTO, which attempts to discover the best
      possible stack-protector available, and will allow builds to proceed even
      if the compiler doesn't support any stack-protector.
      
      This option is made the default so that kernels built with modern
      compilers will be protected-by-default against stack buffer overflows,
      avoiding things like the recent BlueBorne attack.  Selection of a specific
      stack-protector option remains available, including disabling it.
      
      Additionally, tiny.config is adjusted to use CC_STACKPROTECTOR_NONE, since
      that's the option with the least code size (and it used to be the default,
      so we have to explicitly choose it there now).
      
       Link: http://lkml.kernel.org/r/1510076320-69931-4-git-send-email-keescook@chromium.org
       Signed-off-by: Kees Cook <keescook@chromium.org>
       Tested-by: Laura Abbott <labbott@redhat.com>
      Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Josh Triplett <josh@joshtriplett.org>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
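       A standalone illustration of what the probed-for compiler option does
       (not part of the commit): building this with and without
       -fstack-protector-strong and comparing the generated assembly shows the
       canary store/check that the option inserts around the frame with the
       local buffer.

       ```
       #include <stdio.h>
       #include <string.h>

       void copy_name(const char *src)
       {
               char buf[16];                    /* local array: "strong" instruments this frame */

               strncpy(buf, src, sizeof(buf) - 1);
               buf[sizeof(buf) - 1] = '\0';
               printf("%s\n", buf);
       }

       int main(int argc, char **argv)
       {
               copy_name(argc > 1 ? argv[1] : "demo");
               return 0;
       }
       ```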