1. 23 November 2020, 1 commit
    • s390/mm: remove set_fs / rework address space handling · 87d59863
      Authored by Heiko Carstens
      Remove set_fs support from s390. While doing this, rework address space
      handling and simplify it. As a result, address spaces are now set up
      like this:
      
      CPU running in              | %cr1 ASCE | %cr7 ASCE | %cr13 ASCE
      ----------------------------|-----------|-----------|-----------
      user space                  |  user     |  user     |  kernel
      kernel, normal execution    |  kernel   |  user     |  kernel
      kernel, kvm guest execution |  gmap     |  user     |  kernel
      
      To achieve this, the getcpu vdso syscall is removed in order to avoid
      secondary address mode and a separate vdso address space for user
      space. The getcpu vdso syscall will be implemented differently in a
      subsequent patch.
      
      The kernel now always accesses user space via the secondary address space.
      This happens in different ways:
      - with mvcos in home space mode, reading from / writing to the secondary
        address space directly
      - with mvcs/mvcp in primary space mode, copying from primary space to
        secondary space or vice versa
      - with e.g. cs in secondary space mode, accessing the secondary space directly
      
      Switching translation modes happens with sacf before and after
      instructions which access user space, like before.
      
      Lazy handling of control register reloading is removed in the hope of
      making everything simpler, but at the cost of making kernel entry and
      exit a bit slower. That is: on kernel entry the primary asce is always
      changed to contain the kernel asce, and on kernel exit the primary
      asce is changed again so that it contains the user asce.
      
      In kernel mode there is only one exception to the primary asce: when
      kvm guests are executed, the primary asce contains the gmap asce (which
      describes the guest address space). The primary asce is reset to the
      kernel asce whenever kvm guest execution is interrupted, so that this
      doesn't have to be taken into account for any user space accesses.
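
      Roughly, the entry/exit switch corresponds to the following hedged C
      sketch. The actual commit does this in entry.S; the helper names here
      are illustrative only, while S390_lowcore.kernel_asce/user_asce and the
      __ctl_load() macro are assumed to be the existing arch/s390 facilities:

      	#include <asm/ctl_reg.h>
      	#include <asm/lowcore.h>

      	/* kernel entry: %cr1 (primary asce) now describes the kernel space */
      	static inline void load_kernel_asce_on_entry(void)
      	{
      		__ctl_load(S390_lowcore.kernel_asce, 1, 1);
      	}

      	/* kernel exit: %cr1 (primary asce) describes the user space again */
      	static inline void load_user_asce_on_exit(void)
      	{
      		__ctl_load(S390_lowcore.user_asce, 1, 1);
      	}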
      Reviewed-by: Sven Schnelle <svens@linux.ibm.com>
      Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
      87d59863
  2. 09 November 2020, 1 commit
    • s390: remove unused s390_base_ext_handler · f38b0a74
      Authored by Vasily Gorbik
      s390_base_ext_handler_fn hasn't been used since its introduction in
      commit ab14de6c ("[S390] Convert memory detection into C code.").
      
      s390_base_ext_handler itself currently stores 16 registers at
      __LC_SAVE_AREA_ASYNC, incorrectly overwriting several of the following
      lowcore values: cpu_flags, return_psw, return_mcck_psw, sync_enter_timer
      and async_enter_timer.
      
      Besides that, s390_base_ext_handler only potentially hides EXT
      interrupts which should not have happened in the first place. Any
      piece of code which requires EXT interrupts before the fully functional
      ext_int_handler is enabled has to handle them on its own, as is done
      by sclp_early_cmd(), which handles EXT interrupts synchronously
      in sclp_early_wait_irq().
      
      With s390_base_ext_handler removed, an unexpected EXT interrupt leads
      to a disabled wait with the address 0x1b0 (__LC_EXT_NEW_PSW), which is
      currently set up in the decompressor.
      Reviewed-by: Heiko Carstens <hca@linux.ibm.com>
      Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
      Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
      f38b0a74
  3. 28 May 2020, 1 commit
    • s390: remove critical section cleanup from entry.S · 0b0ed657
      Authored by Sven Schnelle
      The current code is rather complex and has caused a lot of subtle
      and hard-to-debug bugs in the past. Simplify the code by calling
      the system_call handler with interrupts disabled, saving the
      machine state, and re-enabling interrupts later.
      
      This requires significant changes to the machine check handling code
      as well. When a machine check interrupt arrives while running in kernel
      mode, the new code signals pending machine checks with a SIGP external
      call. When user space was interrupted, the handler switches to the
      kernel stack and directly executes s390_handle_mcck().
      Signed-off-by: Sven Schnelle <svens@linux.ibm.com>
      Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
      0b0ed657
  4. 28 March 2020, 2 commits
  5. 10 March 2020, 1 commit
  6. 19 February 2020, 1 commit
  7. 30 November 2019, 1 commit
  8. 01 November 2019, 1 commit
  9. 03 September 2019, 1 commit
  10. 17 July 2019, 1 commit
  11. 15 June 2019, 2 commits
    • processor: get rid of cpu_relax_yield · 4ecf0a43
      Authored by Heiko Carstens
      stop_machine is the only remaining user of cpu_relax_yield. Given that it
      now has special semantics which are tied to stop_machine, introduce a
      weak stop_machine_yield function which architectures can override, and
      get rid of the generic cpu_relax_yield implementation.
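
      The shape of the generic side, as a minimal hedged sketch (the weak
      hook lives in kernel/stop_machine.c and a plain cpu_relax() remains the
      fallback):

      	#include <linux/cpumask.h>
      	#include <linux/processor.h>

      	/* generic fallback: plain cpu_relax(); architectures may override this */
      	void __weak stop_machine_yield(const struct cpumask *cpumask)
      	{
      		cpu_relax();
      	}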
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      4ecf0a43
    • s390: improve wait logic of stop_machine · 38f2c691
      Authored by Martin Schwidefsky
      The stop_machine loop that advances the state machine and waits for all
      affected CPUs to check in calls cpu_relax_yield in a tight loop until
      the last missing CPUs have acknowledged the state transition.
      
      On a virtual system where not all logical CPUs are backed by real CPUs
      all the time, it can take a while for all CPUs to check in. With the
      current definition of cpu_relax_yield, a diagnose 0x44 is done, which
      tells the hypervisor to schedule *some* other CPU. That can be any
      CPU, and not necessarily one of the CPUs that need to run in order to
      advance the state machine. This can lead to a pretty bad diagnose 0x44
      storm until the last missing CPU finally checks in.
      
      Replace the undirected cpu_relax_yield based on diagnose 0x44 with a
      directed yield. Each CPU in the wait loop will pick up the next CPU
      in the cpumask of stop_machine. The diagnose 0x9c is used to tell the
      hypervisor to run this next CPU instead of the current one. If there
      is only a limited number of real CPUs backing the virtual CPUs, we
      end up with the real CPUs being passed around in a round-robin fashion.
      
      [heiko.carstens@de.ibm.com]:
          Use cpumask_next_wrap as suggested by Peter Zijlstra.
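
      A hedged sketch of the architecture-side override (details such as the
      retry threshold may differ from the actual patch; spin_retry and
      smp_yield_cpu(), which issues the diagnose 0x9c, are assumed to be the
      existing s390 facilities):

      	#include <linux/cpumask.h>
      	#include <linux/percpu.h>
      	#include <linux/smp.h>

      	extern int spin_retry;		/* existing s390 retry tunable */

      	static DEFINE_PER_CPU(int, cpu_relax_retry);

      	void stop_machine_yield(const struct cpumask *cpumask)
      	{
      		int cpu, this_cpu;

      		this_cpu = smp_processor_id();
      		if (__this_cpu_inc_return(cpu_relax_retry) < spin_retry)
      			return;
      		__this_cpu_write(cpu_relax_retry, 0);
      		/* pick the next CPU in stop_machine's cpumask ... */
      		cpu = cpumask_next_wrap(this_cpu, cpumask, this_cpu, false);
      		if (cpu >= nr_cpu_ids)
      			return;
      		/* ... and ask the hypervisor to run it instead of this one */
      		smp_yield_cpu(cpu);
      	}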
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      38f2c691
  12. 02 May 2019, 2 commits
    • s390: simplify disabled_wait · 98587c2d
      Authored by Martin Schwidefsky
      The disabled_wait() function uses its argument as the PSW address when
      it stops the CPU with a wait PSW that is disabled for interrupts.
      The different callers sometimes use a specific number like 0xdeadbeef
      to indicate a specific failure, the early boot code uses 0, and some
      other call sites use __builtin_return_address(0).
      
      At the time a dump is created, the current PSW and the registers of a
      CPU are written to lowcore to make them available to the dump analysis
      tool. For a CPU stopped with disabled_wait, the PSW and the registers
      do not really make sense together: the PSW address does not point to
      the function the registers belong to.
      
      Simplify disabled_wait() by using _THIS_IP_ for the PSW address and
      dropping the argument to the function.
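
      A minimal hedged sketch of the simplified helper, assuming the existing
      psw_t type, PSW_MASK_* bits and __load_psw() from arch/s390:

      	#include <linux/kernel.h>	/* _THIS_IP_ */
      	#include <asm/processor.h>	/* __load_psw() */
      	#include <asm/ptrace.h>		/* psw_t, PSW_MASK_* */

      	static inline void __noreturn disabled_wait(void)
      	{
      		psw_t psw;

      		/* disabled wait PSW; its address now simply points right here */
      		psw.mask = PSW_MASK_BASE | PSW_MASK_WAIT | PSW_MASK_BA | PSW_MASK_EA;
      		psw.addr = _THIS_IP_;
      		__load_psw(psw);
      		while (1);
      	}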
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      98587c2d
    • s390/unwind: introduce stack unwind API · 78c98f90
      Authored by Martin Schwidefsky
      Rework the dump_trace() stack unwinder interface to support different
      unwinding algorithms. The new interface looks like this:
      
      	struct unwind_state state;
      	unwind_for_each_frame(&state, task, regs, start_stack)
      		do_something(state.sp, state.ip, state.reliable);
      
      The unwind_bc.c file contains the implementation for the classic
      back-chain unwinder.
      
      One positive side effect of the new code is that it now handles ftraced
      functions gracefully. It prints the real name of the return function
      instead of 'return_to_handler'.
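
      A caller that dumps all frames could look roughly like this (hedged
      sketch; the printk format is illustrative):

      	#include <linux/kernel.h>
      	#include <linux/sched.h>
      	#include <asm/unwind.h>

      	static void dump_frames(struct task_struct *task, struct pt_regs *regs,
      				unsigned long start_stack)
      	{
      		struct unwind_state state;

      		unwind_for_each_frame(&state, task, regs, start_stack)
      			printk("%s sp=%016lx ip=%pS\n",
      			       state.reliable ? "" : "?", state.sp,
      			       (void *)state.ip);
      	}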
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      78c98f90
  13. 11 April 2019, 2 commits
  14. 06 November 2018, 1 commit
  15. 31 October 2018, 1 commit
  16. 09 October 2018, 5 commits
  17. 05 February 2018, 2 commits
  18. 14 November 2017, 2 commits
    • s390: correct some inline assembly constraints · 11776eaa
      Authored by Vasily Gorbik
      The inline assembly code changed in this patch should really use the "Q"
      constraint ("memory reference without index register and with short
      displacement"). Otherwise the kernel does not compile with KASAN support
      enabled (due to stack instrumentation).
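
      For illustration, a hedged example of an operand that satisfies the "Q"
      constraint; STCK takes a base-plus-short-displacement memory operand, so
      a plain local variable qualifies:

      	static inline unsigned long long stck_example(void)
      	{
      		unsigned long long clk;

      		/* "=Q": memory operand, no index register, short displacement */
      		asm volatile("stck %0" : "=Q" (clk) : : "cc");
      		return clk;
      	}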
      Signed-off-by: Vasily Gorbik <gor@linux.vnet.ibm.com>
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      11776eaa
    • s390: remove all code using the access register mode · 0aaba41b
      Authored by Martin Schwidefsky
      With the current code, the vdso code for the getcpu() and clock_gettime()
      calls uses the access register mode to access the per-CPU vdso data page.
      
      An alternative to the complicated AR mode is to use the secondary space
      mode. This makes the vdso faster and quite a bit simpler. The downside is
      that the uaccess code has to be changed quite a bit.
      
      Which instructions are used depends on the machine and what kind of uaccess
      operation is requested. The instruction dictates which ASCE value needs
      to be loaded into %cr1 and %cr7.
      
      The different cases:
      
      * User copy with MVCOS for z10 and newer machines
        The MVCOS instruction can copy between the primary space (aka user) and
        the home space (aka kernel) directly. For set_fs(KERNEL_DS) the kernel
        ASCE is loaded into %cr1. For set_fs(USER_DS) the user space ASCE is
        already loaded in %cr1.
      
      * User copy with MVCP/MVCS for older machines
        To be able to execute the MVCP/MVCS instructions the kernel needs to
        switch to primary mode. The control register %cr1 has to be set to the
        kernel ASCE and %cr7 to either the kernel ASCE or the user ASCE, depending
        on set_fs(KERNEL_DS) vs set_fs(USER_DS).
      
      * Data access in the user address space for strnlen / futex
        To use "normal" instruction with data from the user address space the
        secondary space mode is used. The kernel needs to switch to primary mode,
        %cr1 has to contain the kernel ASCE and %cr7 either the user ASCE or the
        kernel ASCE, dependent on set_fs.
      
      Loading a new value into %cr1 or %cr7 is an expensive operation, so the
      kernel tries to be lazy about it. E.g. for multiple user copies in a row
      with MVCP/MVCS, the replacement of the vdso ASCE in %cr7 with the user
      ASCE is done only once. On return to user space a CPU bit is checked
      that loads the vdso ASCE again.
      
      To enable and disable data access via the secondary space, two new
      functions are added: enable_sacf_uaccess and disable_sacf_uaccess. The fact
      that a context is in secondary space uaccess mode is stored in the
      mm_segment_t value for the task. The code of an interrupt may use set_fs
      as long as it restores the previous state it got with get_fs via another
      call to set_fs. The code in finish_arch_post_lock_switch simply has to do a
      set_fs with the current mm_segment_t value for the task.
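
      The usage pattern of the two new functions is a simple bracket; a hedged
      sketch follows (the real callers, e.g. the futex ops and strnlen_user,
      do the actual access with inline assembly in between, and the
      declarations are assumed to come with this patch's uaccess headers):

      	#include <linux/uaccess.h>

      	static void sacf_bracket_example(void)
      	{
      		mm_segment_t old_fs;

      		old_fs = enable_sacf_uaccess();
      		/* "normal" instructions now reach user memory via the secondary ASCE */
      		disable_sacf_uaccess(old_fs);
      	}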
      
      For CPUs with MVCOS:
      
      CPU running in                        | %cr1 ASCE | %cr7 ASCE |
      --------------------------------------|-----------|-----------|
      user space                            |  user     |  vdso     |
      kernel, USER_DS, normal-mode          |  user     |  vdso     |
      kernel, USER_DS, normal-mode, lazy    |  user     |  user     |
      kernel, USER_DS, sacf-mode            |  kernel   |  user     |
      kernel, KERNEL_DS, normal-mode        |  kernel   |  vdso     |
      kernel, KERNEL_DS, normal-mode, lazy  |  kernel   |  kernel   |
      kernel, KERNEL_DS, sacf-mode          |  kernel   |  kernel   |
      
      For CPUs without MVCOS:
      
      CPU running in                        | %cr1 ASCE | %cr7 ASCE |
      --------------------------------------|-----------|-----------|
      user space                            |  user     |  vdso     |
      kernel, USER_DS, normal-mode          |  user     |  vdso     |
      kernel, USER_DS, normal-mode, lazy    |  kernel   |  user     |
      kernel, USER_DS, sacf-mode            |  kernel   |  user     |
      kernel, KERNEL_DS, normal-mode        |  kernel   |  vdso     |
      kernel, KERNEL_DS, normal-mode, lazy  |  kernel   |  kernel   |
      kernel, KERNEL_DS, sacf-mode          |  kernel   |  kernel   |
      
      The lines with "lazy" refer to the state after a copy via the secondary
      space with a delayed reload of %cr1 and %cr7.
      
      There are three hardware address spaces that can cause a DAT exception:
      primary, secondary and home space. The exception can be related to
      four different fault types: user space fault, vdso fault, kernel fault,
      and gmap fault.
      
      Depending on the set_fs state and normal vs. sacf mode, there are a number
      of fault combinations:
      
      1) user address space fault via the primary ASCE
      2) gmap address space fault via the primary ASCE
      3) kernel address space fault via the primary ASCE for machines with
         MVCOS and set_fs(KERNEL_DS)
      4) vdso address space faults via the secondary ASCE with an invalid
         address while running in secondary space in problem state
      5) user address space fault via the secondary ASCE for user-copy
         based on the secondary space mode, e.g. futex_ops or strnlen_user
      6) kernel address space fault via the secondary ASCE for user-copy
         with secondary space mode with set_fs(KERNEL_DS)
      7) kernel address space fault via the primary ASCE for user-copy
         with secondary space mode with set_fs(USER_DS) on machines without
         MVCOS.
      8) kernel address space fault via the home space ASCE
      
      Replace user_space_fault() with a new function get_fault_type() that
      can distinguish all four different fault types.
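
      A hedged sketch of the resulting classification (the enum names follow
      the description above; the decision logic of get_fault_type() itself is
      omitted):

      	/* the four fault types get_fault_type() distinguishes */
      	enum fault_type {
      		KERNEL_FAULT,
      		USER_FAULT,
      		VDSO_FAULT,
      		GMAP_FAULT,
      	};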
      
      With these changes, the futex atomic ops from the kernel and
      strnlen_user will get a little bit slower, as will the old-style
      uaccess with MVCP/MVCS. All user accesses based on MVCOS will be as
      fast as before. On the positive side, the user space vdso code is a
      lot faster, and Linux ceases to use the complicated AR mode.
      Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      0aaba41b
  19. 02 November 2017, 1 commit
    • License cleanup: add SPDX GPL-2.0 license identifier to files with no license · b2441318
      Authored by Greg Kroah-Hartman
      Many source files in the tree are missing licensing information, which
      makes it harder for compliance tools to determine the correct license.
      
      By default all files without license information are under the default
      license of the kernel, which is GPL version 2.
      
      Update the files which contain no license information with the 'GPL-2.0'
      SPDX license identifier.  The SPDX identifier is a legally binding
      shorthand, which can be used instead of the full boilerplate text.
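
      In practice the identifier is a single comment line at the top of each
      file; .c sources use the C++-style comment and headers the classic
      block comment, for example:

      	// SPDX-License-Identifier: GPL-2.0
      	/* SPDX-License-Identifier: GPL-2.0 */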
      
      This patch is based on work done by Thomas Gleixner and Kate Stewart and
      Philippe Ombredanne.
      
      How this work was done:
      
      Patches were generated and checked against linux-4.14-rc6 for a subset of
      the use cases:
       - file had no licensing information in it.
       - file was a */uapi/* one with no licensing information in it,
       - file was a */uapi/* one with existing licensing information,
      
      Further patches will be generated in subsequent months to fix up cases
      where non-standard license headers were used, and references to license
      had to be inferred by heuristics based on keywords.
      
      The analysis to determine which SPDX License Identifier to apply to
      a file was done in a spreadsheet of side-by-side results from the
      output of two independent scanners (ScanCode & Windriver) producing SPDX
      tag:value files created by Philippe Ombredanne.  Philippe prepared the
      base worksheet, and did an initial spot review of a few 1000 files.
      
      The 4.13 kernel was the starting point of the analysis with 60,537 files
      assessed.  Kate Stewart did a file by file comparison of the scanner
      results in the spreadsheet to determine which SPDX license identifier(s)
      to be applied to the file. She confirmed any determination that was not
      immediately clear with lawyers working with the Linux Foundation.
      
      Criteria used to select files for SPDX license identifier tagging were:
       - Files considered eligible had to be source code files.
       - Make and config files were included as candidates if they contained >5
         lines of source
       - File already had some variant of a license header in it (even if <5
         lines).
      
      All documentation files were explicitly excluded.
      
      The following heuristics were used to determine which SPDX license
      identifiers to apply.
      
       - when both scanners couldn't find any license traces, file was
         considered to have no license information in it, and the top level
         COPYING file license applied.
      
         For non */uapi/* files that summary was:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|-------
         GPL-2.0                                              11139
      
         and resulted in the first patch in this series.
      
         If that file was a */uapi/* path one, it was "GPL-2.0 WITH
         Linux-syscall-note" otherwise it was "GPL-2.0".  Results of that was:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|-------
         GPL-2.0 WITH Linux-syscall-note                        930
      
         and resulted in the second patch in this series.
      
       - if a file had some form of licensing information in it, and was one
         of the */uapi/* ones, it was denoted with the Linux-syscall-note if
         any GPL family license was found in the file or had no licensing in
         it (per prior point).  Results summary:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|------
         GPL-2.0 WITH Linux-syscall-note                       270
         GPL-2.0+ WITH Linux-syscall-note                      169
         ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)    21
         ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)    17
         LGPL-2.1+ WITH Linux-syscall-note                      15
         GPL-1.0+ WITH Linux-syscall-note                       14
         ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)    5
         LGPL-2.0+ WITH Linux-syscall-note                       4
         LGPL-2.1 WITH Linux-syscall-note                        3
         ((GPL-2.0 WITH Linux-syscall-note) OR MIT)              3
         ((GPL-2.0 WITH Linux-syscall-note) AND MIT)             1
      
         and that resulted in the third patch in this series.
      
       - when the two scanners agreed on the detected license(s), that became
         the concluded license(s).
      
       - when there was disagreement between the two scanners (one detected a
         license but the other didn't, or they both detected different
         licenses) a manual inspection of the file occurred.
      
       - In most cases a manual inspection of the information in the file
         resulted in a clear resolution of the license that should apply (and
         which scanner probably needed to revisit its heuristics).
      
       - When it was not immediately clear, the license identifier was
         confirmed with lawyers working with the Linux Foundation.
      
       - If there was any question as to the appropriate license identifier,
         the file was flagged for further research and to be revisited later
         in time.
      
      In total, over 70 hours of logged manual review was done on the
      spreadsheet to determine the SPDX license identifiers to apply to the
      source files by Kate, Philippe, Thomas and, in some cases, confirmation
      by lawyers working with the Linux Foundation.
      
      Kate also obtained a third independent scan of the 4.13 code base from
      FOSSology, and compared selected files where the other two scanners
      disagreed against that SPDX file, to see if there were new insights.  The
      Windriver scanner is based on an older version of FOSSology in part, so
      they are related.
      
      Thomas did random spot checks in about 500 files from the spreadsheets
      for the uapi headers and agreed with SPDX license identifier in the
      files he inspected. For the non-uapi files Thomas did random spot checks
      in about 15000 files.
      
      In the initial set of patches against 4.14-rc6, 3 files were found to have
      copy/paste license identifier errors, and have been fixed to reflect the
      correct identifier.
      
      Additionally Philippe spent 10 hours this week doing a detailed manual
      inspection and review of the 12,461 patched files from the initial patch
      version early this week with:
       - a full scancode scan run, collecting the matched texts, detected
         license ids and scores
       - reviewing anything where there was a license detected (about 500+
         files) to ensure that the applied SPDX license was correct
       - reviewing anything where there was no detection but the patch license
         was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
         SPDX license was correct
      
      This produced a worksheet with 20 files needing minor correction.  This
      worksheet was then exported into 3 different .csv files for the
      different types of files to be modified.
      
      These .csv files were then reviewed by Greg.  Thomas wrote a script to
      parse the csv files and add the proper SPDX tag to the file, in the
      format that the file expected.  This script was further refined by Greg
      based on the output to detect more types of files automatically and to
      distinguish between header and source .c files (which need different
      comment types.)  Finally Greg ran the script using the .csv files to
      generate the patches.
      Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
      Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      b2441318
  20. 28 September 2017, 3 commits
  21. 29 June 2017, 1 commit
  22. 27 June 2017, 1 commit
  23. 12 June 2017, 1 commit
  24. 25 April 2017, 1 commit
  25. 22 March 2017, 1 commit
    • s390: add a system call for guarded storage · 916cda1a
      Authored by Martin Schwidefsky
      This adds a new system call to enable the use of guarded storage for
      user space processes. The system call takes two arguments: a command
      and a pointer to a guarded storage control block:
      
          s390_guarded_storage(int command, struct gs_cb *gs_cb);
      
      The second argument is relevant only for the GS_SET_BC_CB command.
      
      The commands in detail:
      
      0 - GS_ENABLE
          Enable the guarded storage facility for the current task. The
          initial content of the guarded storage control block will be
          all zeros. After the enablement, the user space code can use the
          load-guarded-storage-controls instruction (LGSC) to load an
          arbitrary control block. While a task is enabled, the kernel
          will save and restore the current content of the guarded
          storage registers on context switch.
      1 - GS_DISABLE
          Disables the use of the guarded storage facility for the current
          task. The kernel will cease to save and restore the content of
          the guarded storage registers; the task-specific content of
          these registers is lost.
      2 - GS_SET_BC_CB
          Set a broadcast guarded storage control block. This is called
          per thread and stores a specific guarded storage control block
          in the task struct of the current task. This control block will
          be used for the broadcast event GS_BROADCAST.
      3 - GS_CLEAR_BC_CB
          Clears the broadcast guarded storage control block. The guarded
          storage control block that was established by GS_SET_BC_CB is
          removed from the task struct.
      4 - GS_BROADCAST
          Sends a broadcast to all thread siblings of the current task.
          Every sibling that has established a broadcast guarded storage
          control block will load this control block and will be enabled
          for guarded storage. The broadcast guarded storage control block
          is used up; a second broadcast without a refresh of the stored
          control block with GS_SET_BC_CB will not have any effect.
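
      A hedged user-space sketch of driving the interface (it assumes the uapi
      header <asm/guarded_storage.h> added with this patch provides struct
      gs_cb and the GS_* command values, and that __NR_s390_guarded_storage is
      available via <sys/syscall.h>):

      	#include <stdio.h>
      	#include <string.h>
      	#include <unistd.h>
      	#include <sys/syscall.h>
      	#include <asm/guarded_storage.h>

      	static long guarded_storage(int command, struct gs_cb *gs_cb)
      	{
      		return syscall(__NR_s390_guarded_storage, command, gs_cb);
      	}

      	int main(void)
      	{
      		struct gs_cb bc_cb;

      		/* 0 - GS_ENABLE: the control block starts out all zeros */
      		if (guarded_storage(GS_ENABLE, NULL) != 0) {
      			perror("GS_ENABLE");
      			return 1;
      		}

      		/* 2 - GS_SET_BC_CB: register a broadcast control block */
      		memset(&bc_cb, 0, sizeof(bc_cb));
      		if (guarded_storage(GS_SET_BC_CB, &bc_cb) != 0)
      			perror("GS_SET_BC_CB");

      		/* 1 - GS_DISABLE: stop saving/restoring the GS registers */
      		guarded_storage(GS_DISABLE, NULL);
      		return 0;
      	}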
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      916cda1a
  26. 24 February 2017, 1 commit
  27. 23 February 2017, 2 commits