1. 28 Sep 2017, 3 commits
  2. 29 Jun 2017, 1 commit
  3. 27 Jun 2017, 1 commit
  4. 12 Jun 2017, 1 commit
  5. 25 Apr 2017, 1 commit
  6. 22 Mar 2017, 1 commit
    • s390: add a system call for guarded storage · 916cda1a
      Martin Schwidefsky authored
      This adds a new system call to enable the use of guarded storage for
      user space processes. The system call takes two arguments, a command
      and a pointer to a guarded storage control block:
      
          s390_guarded_storage(int command, struct gs_cb *gs_cb);
      
      The second argument is relevant only for the GS_SET_BC_CB command.
      
      The commands in detail:
      
      0 - GS_ENABLE
          Enable the guarded storage facility for the current task. The
          initial content of the guarded storage control block will be
          all zeros. After the enablement the user space code can use
          the load-guarded-storage-controls (LGSC) instruction to load an
          arbitrary control block. While a task is enabled the kernel
          will save and restore the current content of the guarded
          storage registers on context switch.
      1 - GS_DISABLE
          Disables the use of the guarded storage facility for the current
          task. The kernel will cease to save and restore the content
          of the guarded storage registers; the task-specific content
          of these registers is lost.
      2 - GS_SET_BC_CB
          Set a broadcast guarded storage control block. This is called
          per thread and stores a specific guarded storage control block
          in the task struct of the current task. This control block will
          be used for the broadcast event GS_BROADCAST.
      3 - GS_CLEAR_BC_CB
          Clears the broadcast guarded storage control block. The
          guarded storage control block established by GS_SET_BC_CB is
          removed from the task struct.
      4 - GS_BROADCAST
          Sends a broadcast to all thread siblings of the current task.
          Every sibling that has established a broadcast guarded storage
          control block will load this control block and will be enabled
          for guarded storage. The broadcast guarded storage control block
          is used up; a second broadcast without refreshing the stored
          control block via GS_SET_BC_CB will have no effect.
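The command semantics above can be sketched in portable C. This is an illustrative model only, not the kernel implementation: the real interface is the s390-only s390_guarded_storage() syscall, and struct gs_cb / struct task_gs here are simplified stand-ins.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* The five command codes described above. */
enum { GS_ENABLE = 0, GS_DISABLE = 1, GS_SET_BC_CB = 2,
       GS_CLEAR_BC_CB = 3, GS_BROADCAST = 4 };

/* Simplified stand-in for the guarded storage control block. */
struct gs_cb { unsigned long gsd, gssm, gs_epl_a, reserved; };

/* Minimal per-task state for the model. */
struct task_gs {
    int enabled;
    struct gs_cb cb;      /* saved/restored on context switch when enabled */
    struct gs_cb bc_cb;   /* broadcast control block set via GS_SET_BC_CB  */
    int has_bc_cb;
};

/* Command dispatch for a single task; GS_BROADCAST is shown from the
 * receiving sibling's point of view: it loads the stored bc_cb, becomes
 * enabled, and the bc_cb is used up. */
int gs_command(struct task_gs *t, int cmd, const struct gs_cb *arg)
{
    switch (cmd) {
    case GS_ENABLE:
        t->enabled = 1;
        memset(&t->cb, 0, sizeof(t->cb));  /* initial content is all zeros */
        return 0;
    case GS_DISABLE:
        t->enabled = 0;                    /* task-specific content is lost */
        return 0;
    case GS_SET_BC_CB:
        t->bc_cb = *arg;                   /* second argument used here only */
        t->has_bc_cb = 1;
        return 0;
    case GS_CLEAR_BC_CB:
        t->has_bc_cb = 0;
        return 0;
    case GS_BROADCAST:
        if (!t->has_bc_cb)
            return -1;                     /* no effect without a fresh bc_cb */
        t->cb = t->bc_cb;
        t->enabled = 1;
        t->has_bc_cb = 0;                  /* the bc_cb is used up */
        return 0;
    default:
        return -1;
    }
}
```

Note how a second GS_BROADCAST without a fresh GS_SET_BC_CB fails, matching the "used up" semantics described above.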
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      916cda1a
  7. 24 Feb 2017, 1 commit
  8. 23 Feb 2017, 2 commits
  9. 17 Feb 2017, 1 commit
  10. 14 Jan 2017, 1 commit
    • sched/cputime, s390: Implement delayed accounting of system time · b7394a5f
      Martin Schwidefsky authored
      The account_system_time() function is called with a cputime that
      occurred while running in the kernel. The function detects which
      context the CPU is currently running in and accounts the time to
      the correct bucket. This forces the arch code to account the
      cputime for hardirq and softirq immediately.
      
      Such an accounting function can be costly, and performs unwelcome
      divisions and multiplications, among other operations.
      
      The arch code can instead delay the accounting for system time. For
      s390 the accounting is done once per timer tick and on each task
      switch.
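The delayed-accounting idea can be sketched as follows (illustrative names and types, not the kernel's code): irq exits only accumulate raw cputime cheaply, and the costly accounting runs once per tick or task switch.

```c
#include <assert.h>

/* Toy accounting state: pending raw cputime plus the accounted bucket. */
struct acct {
    unsigned long long pending;  /* raw cputime not yet accounted */
    unsigned long long system;   /* accounted system time         */
};

/* Cheap path: called on every hardirq/softirq exit, just accumulates. */
void account_delayed(struct acct *a, unsigned long long delta)
{
    a->pending += delta;
}

/* Costly path: called once per timer tick or task switch; any scaling,
 * divisions and multiplications would happen only here. */
void flush_accounting(struct acct *a)
{
    a->system += a->pending;
    a->pending = 0;
}
```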
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      [ Rebase against latest linus tree and move account_system_index_scaled(). ]
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Stanislaw Gruszka <sgruszka@redhat.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Wanpeng Li <wanpeng.li@hotmail.com>
      Link: http://lkml.kernel.org/r/1483636310-6557-10-git-send-email-fweisbec@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      b7394a5f
  11. 17 Nov 2016, 1 commit
  12. 16 Nov 2016, 3 commits
  13. 15 Nov 2016, 1 commit
  14. 11 Nov 2016, 2 commits
  15. 17 Oct 2016, 1 commit
    • s390/dumpstack: restore reliable indicator for call traces · d0208639
      Heiko Carstens authored
      Before all the different stack tracers were merged, the printed
      call traces had an indicator of whether an entry can be considered
      reliable or not: unreliable entries were put in parentheses,
      reliable ones were not. Currently all lines contain these extra
      parentheses.
      
      This patch restores the old behaviour by adding an extra "reliable"
      parameter to the callback functions. Currently only show_trace
      makes use of it.
      
      Before:
      [    0.804751] Call Trace:
      [    0.804753] ([<000000000017d0e0>] try_to_wake_up+0x318/0x5e0)
      [    0.804756] ([<0000000000161d64>] create_worker+0x174/0x1c0)
      
      After:
      [    0.804751] Call Trace:
      [    0.804753] ([<000000000017d0e0>] try_to_wake_up+0x318/0x5e0)
      [    0.804756]  [<0000000000161d64>] create_worker+0x174/0x1c0
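The restored behaviour can be sketched like this (illustrative signature, not the exact kernel callback): the walker passes a "reliable" flag, and only unreliable entries get wrapped in parentheses.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Format one call-trace entry; unreliable entries are parenthesized,
 * reliable ones are not, matching the "After:" output above. */
void format_trace_entry(char *out, size_t n, unsigned long addr, int reliable)
{
    if (reliable)
        snprintf(out, n, " [<%016lx>]", addr);
    else
        snprintf(out, n, "([<%016lx>])", addr);
}
```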
      
      Fixes: 758d39eb ("s390/dumpstack: merge all four stack tracers")
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      d0208639
  16. 20 Jun 2016, 2 commits
    • s390/mm: remember the int code for the last gmap fault · 4a494439
      David Hildenbrand authored
      For nested virtualization, we want to know if we are handling a protection
      exception, because these can directly be forwarded to the guest without
      additional checks.
      Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
      4a494439
    • s390/mm: add shadow gmap support · 4be130a0
      Martin Schwidefsky authored
      For a nested KVM guest the outer KVM host needs to create shadow
      page tables for the nested guest. This patch adds the basic support
      to the guest address space (gmap) code.
      
      For each guest address space the inner KVM host creates, the first
      outer KVM host needs to create shadow page tables. The address space
      is identified by the ASCE loaded into the control register 1 at the
      time the inner SIE instruction for the second nested KVM guest is
      executed. The outer KVM host creates the shadow tables starting with
      the table identified by the ASCE on an on-demand basis. The outer KVM
      host will get repeated faults for all the shadow tables needed to
      run the second KVM guest.
      
      While a shadow page table for the second KVM guest is active the access
      to the origin region, segment and page tables needs to be restricted
      for the first KVM guest. For region, segment and page tables the
      first KVM guest may read the memory, but a write attempt has to
      lead to an unshadow. This is done using the page invalid and
      read-only bits in the page table of the first KVM guest. If the
      first guest re-accesses one of the origin pages of a shadow, it
      gets a fault and the affected parts of the shadow page table
      hierarchy need to be removed again.
      
      PGSTE tables don't have to be shadowed, as the interpretation
      assists can't deal with the invalid bits in the shadow pte being
      set differently than the original ones provided by the first KVM
      guest.
      
      Many bug fixes and improvements by David Hildenbrand.
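The write-protect-and-unshadow mechanism can be modeled with a toy sketch (purely illustrative; the real code manipulates pte bits in the guest-1 page tables): while an origin page is shadowed it is read-only for the first guest, and a write removes the affected shadow.

```c
#include <assert.h>

/* Toy state for one origin page of a shadowed table. */
struct origin_page {
    int shadowed;         /* a shadow table references this page   */
    int write_protected;  /* read-only/invalid bits set in guest-1 */
};

/* Build a shadow: the origin becomes read-only for the first guest. */
void shadow(struct origin_page *p)
{
    p->shadowed = 1;
    p->write_protected = 1;  /* reads still allowed, writes fault */
}

/* A write by the first guest; returns 1 if an unshadow was triggered. */
int guest1_write(struct origin_page *p)
{
    if (p->write_protected) {
        p->shadowed = 0;         /* remove the affected shadow parts */
        p->write_protected = 0;
        return 1;
    }
    return 0;                    /* unprotected page: plain write */
}
```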
      Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
      4be130a0
  17. 13 Jun 2016, 1 commit
    • s390/cpuinfo: show dynamic and static cpu mhz · 097a116c
      Heiko Carstens authored
      Show the dynamic and static cpu mhz of each cpu. Since these values
      are per cpu this requires a fundamental extension of the format of
      /proc/cpuinfo.
      
      Historically we had only a single line per cpu and a summary at the
      top of the file. This format is hardly extensible if we want to add
      more per cpu information.
      
      Therefore this patch adds per cpu blocks at the end of /proc/cpuinfo:
      
      cpu             : 0
      cpu Mhz dynamic : 5504
      cpu Mhz static  : 5504
      
      cpu             : 1
      cpu Mhz dynamic : 5504
      cpu Mhz static  : 5504
      
      cpu             : 2
      cpu Mhz dynamic : 5504
      cpu Mhz static  : 5504
      
      cpu             : 3
      cpu Mhz dynamic : 5504
      cpu Mhz static  : 5504
      
      Right now each block contains only the dynamic and static cpu mhz,
      but it can be easily extended like on every other architecture.
      
      This extension is supposed to be compatible with the old format.
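Emitting one such per-cpu block can be sketched as follows (buffer-based here for illustration; the kernel side would use seq_printf on the /proc/cpuinfo seq_file):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Format one per-cpu block in the extended /proc/cpuinfo layout shown
 * above; returns the number of characters written. */
int format_cpu_block(char *buf, size_t n, int cpu,
                     unsigned int mhz_dynamic, unsigned int mhz_static)
{
    return snprintf(buf, n,
                    "cpu             : %d\n"
                    "cpu Mhz dynamic : %u\n"
                    "cpu Mhz static  : %u\n",
                    cpu, mhz_dynamic, mhz_static);
}
```

Further per-cpu fields would simply become additional lines in the block, which is what makes this format easy to extend.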
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Acked-by: Sascha Silbe <silbe@linux.vnet.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      097a116c
  18. 21 Apr 2016, 2 commits
    • s390/fpu: allocate 'struct fpu' with the task_struct · 3f6813b9
      Martin Schwidefsky authored
      Analogous to git commit 0c8c0f03
      "x86/fpu, sched: Dynamically allocate 'struct fpu'"
      move the struct fpu to the end of the struct thread_struct,
      set CONFIG_ARCH_WANTS_DYNAMIC_TASK_STRUCT and add the
      setup_task_size() function to calculate the correct size
      of the task struct.
      
      For the performance_defconfig this increases the size of
      struct task_struct from 7424 bytes to 7936 bytes (MACHINE_HAS_VX==1)
      or 7552 bytes (MACHINE_HAS_VX==0). The dynamic allocation of the
      struct fpu is removed. The slab cache uses an 8KB block for the
      task struct in all cases, there is enough room for the struct fpu.
      For MACHINE_HAS_VX==1 each task now needs 512 bytes less memory.
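The arithmetic behind the quoted sizes can be sketched like this (illustrative constants derived from the numbers above, not the kernel's setup_task_size()): the reported task_struct size is the base size plus the room for struct fpu at its end.

```c
#include <assert.h>

/* Figures taken from the performance_defconfig numbers quoted above:
 * 7424 bytes base, +512 with MACHINE_HAS_VX==1, +128 without. */
#define BASE_TASK_STRUCT_SIZE 7424UL
#define FPU_VX_SIZE            512UL  /* vector register save area   */
#define FPU_NOVX_SIZE          128UL  /* scalar fp registers only    */

/* Sketch of the size calculation done once at boot. */
unsigned long setup_task_size_sketch(int machine_has_vx)
{
    return BASE_TASK_STRUCT_SIZE +
           (machine_has_vx ? FPU_VX_SIZE : FPU_NOVX_SIZE);
}
```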
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      3f6813b9
    • s390/mm: fix asce_bits handling with dynamic pagetable levels · 723cacbd
      Gerald Schaefer authored
      There is a race with multi-threaded applications between context switch and
      pagetable upgrade. In switch_mm() a new user_asce is built from mm->pgd and
      mm->context.asce_bits, w/o holding any locks. A concurrent mmap with a
      pagetable upgrade on another thread in crst_table_upgrade() could already
      have set new asce_bits, but not yet the new mm->pgd. This would result in a
      corrupt user_asce in switch_mm(), and eventually in a kernel panic from a
      translation exception.
      
      Fix this by storing the complete asce instead of just the asce_bits, which
      can then be read atomically from switch_mm(), so that it either sees the
      old value or the new value, but no mixture. Both cases are OK. Having the
      old value would result in a page fault on access to the higher level memory,
      but the fault handler would see the new mm->pgd, if it was a valid access
      after the mmap on the other thread has completed. So as worst-case scenario
      we would have a page fault loop for the racing thread until the next time
      slice.
      
      Also remove dead code and simplify the upgrade/downgrade path, there are no
      upgrades from 2 levels, and only downgrades from 3 levels for compat tasks.
      There are also no concurrent upgrades, because the mmap_sem is held with
      down_write() in do_mmap, so the flush and table checks during upgrade can
      be removed.
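The essence of the fix can be sketched with C11 atomics (illustrative types, not the kernel's mm_context_t): instead of combining mm->pgd with separately stored asce_bits in two reads that can interleave with an upgrade, the complete asce lives in one word that switch_mm() reads in a single atomic load.

```c
#include <assert.h>
#include <stdatomic.h>

/* Sketch of the per-mm context after the fix: one word holds the
 * complete asce (table origin | asce bits). */
struct mm_context_sketch {
    _Atomic unsigned long asce;
};

/* The pagetable upgrade publishes the new asce with one atomic store. */
void crst_table_upgrade_sketch(struct mm_context_sketch *ctx,
                               unsigned long new_asce)
{
    atomic_store(&ctx->asce, new_asce);
}

/* switch_mm() sees either the old or the new value, never a mixture. */
unsigned long switch_mm_load_asce(struct mm_context_sketch *ctx)
{
    return atomic_load(&ctx->asce);
}
```

The bit values below are arbitrary test inputs, not real asce encodings.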
      Reported-by: Michael Munday <munday@ca.ibm.com>
      Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      723cacbd
  19. 23 Feb 2016, 2 commits
    • s390/dumpstack: merge all four stack tracers · 758d39eb
      Heiko Carstens authored
      We have four different stack tracers, of which three had bugs. So
      it's time to merge them into a single stack tracer that allows
      specifying a callback function which will be called for each step.
      
      This patch changes behavior a bit:
      
      - the "nosched" and "in_sched_functions" check within
        save_stack_trace_tsk worked only for the last stack frame within
        a context. Now the check is applied to each stack frame, as it
        should be.
      
      - both the oprofile variant and the perf_events variant did save a
        return address twice if a zero back chain was detected, which
        indicates an interrupt frame. The new dump_trace function will call
        the oprofile and perf_events backends with the psw address that is
        contained within the corresponding pt_regs structure instead.
      
      - the original show_trace and save_context_stack functions already
        used the psw address of the pt_regs structure if a zero back chain
        was detected. However now we ignore the psw address if it is a user
        space address. After all we trace the kernel stack and not the user
        space stack. This way we also get rid of the garbage user space
        address in case of warnings and/or panic call traces.
      
      So this should make life easier since now there is only one stack
      tracer left which we can break.
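The single-walker-plus-callback design can be sketched as follows (illustrative names and a toy frame array; the real dump_trace() walks the actual back chain): one walker invokes a callback per frame, and oprofile, perf and show_trace become thin callbacks on top.

```c
#include <assert.h>
#include <stddef.h>

/* Callback invoked per frame; a nonzero return stops the walk. */
typedef int (*trace_fn)(void *data, unsigned long addr);

/* Toy frame: a back chain pointer value and a return address. */
struct frame { unsigned long back_chain; unsigned long return_addr; };

/* Walk a toy array of frames; a zero back chain ends the chain, which
 * is where an interrupt frame's pt_regs psw address would be used. */
void dump_trace_sketch(trace_fn fn, void *data,
                       const struct frame *frames, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (fn(data, frames[i].return_addr))
            break;                /* callback asked to stop */
        if (frames[i].back_chain == 0)
            break;                /* end of this context's chain */
    }
}

/* Example callback: count the entries visited. */
int count_entries(void *data, unsigned long addr)
{
    (void)addr;
    (*(int *)data)++;
    return 0;
}
```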
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      758d39eb
    • s390: add current_stack_pointer() helper function · 76737ce1
      Heiko Carstens authored
      Implement a current_stack_pointer() helper function and use it
      everywhere, instead of having several different inline assembly
      variants.
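The s390 helper reads the stack pointer from register 15 with inline assembly; as a portable, GCC/Clang-only approximation for illustration, the current frame address plays the same role of replacing scattered inline-asm variants with a single helper.

```c
#include <assert.h>

/* Portable stand-in for the s390 helper: on s390 this would be a one-line
 * inline asm reading r15; here we use the compiler builtin for the current
 * frame address, which is derived from the stack pointer. */
unsigned long current_stack_pointer_sketch(void)
{
    return (unsigned long)__builtin_frame_address(0);
}
```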
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Tested-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      76737ce1
  20. 19 Jan 2016, 1 commit
  21. 11 Jan 2016, 1 commit
  22. 27 Nov 2015, 1 commit
  23. 03 Nov 2015, 1 commit
  24. 27 Oct 2015, 2 commits
  25. 16 Oct 2015, 1 commit
  26. 14 Oct 2015, 4 commits
  27. 29 Jul 2015, 1 commit