1. 14 Dec 2016, 1 commit
  2. 13 Dec 2016, 1 commit
  3. 29 Nov 2016, 1 commit
  4. 11 Nov 2016, 1 commit
    • s390/preempt: move preempt_count to the lowcore · c360192b
      Martin Schwidefsky authored
      Convert s390 to use a field in the struct lowcore for the CPU
      preemption count. It is a bit cheaper to access a lowcore field
      compared to a thread_info variable, and it removes the dependency
      on a task-related structure.
      
      bloat-o-meter on the vmlinux image for the default configuration
      (CONFIG_PREEMPT_NONE=y) reports a small reduction in text size:
      
      add/remove: 0/0 grow/shrink: 18/578 up/down: 228/-5448 (-5220)
      
      A larger improvement is achieved with the default configuration
      but with CONFIG_PREEMPT=y and CONFIG_DEBUG_PREEMPT=n:
      
      add/remove: 2/6 grow/shrink: 59/4477 up/down: 1618/-228762 (-227144)
      Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
  5. 24 Oct 2016, 1 commit
    • s390/mm: fix zone calculation in arch_add_memory() · 4a654294
      Gerald Schaefer authored
      Standby (hotplug) memory should be added to ZONE_MOVABLE on s390. After
      commit 199071f1 "s390/mm: make arch_add_memory() NUMA aware",
      arch_add_memory() used memblock_end_of_DRAM() to find out the end of
      ZONE_NORMAL and the beginning of ZONE_MOVABLE. However, commit 7f36e3e5
      "memory-hotplug: add hot-added memory ranges to memblock before allocate
      node_data for a node." moved the call of memblock_add_node() before
      the call of arch_add_memory() in add_memory_resource(), and thus changed
      the return value of memblock_end_of_DRAM() when called in
      arch_add_memory(). As a result, arch_add_memory() will think that all
      memory blocks should be added to ZONE_NORMAL.
      
      Fix this by changing the logic in arch_add_memory() so that it will
      manually iterate over all zones of a given node to find out which zone
      a memory block should be added to.
      Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
  6. 19 Oct 2016, 1 commit
  7. 17 Oct 2016, 1 commit
  8. 20 Sep 2016, 2 commits
  9. 24 Aug 2016, 3 commits
  10. 10 Aug 2016, 1 commit
    • s390/pageattr: handle numpages parameter correctly · 4d81aaa5
      Heiko Carstens authored
      Both set_memory_ro() and set_memory_rw() will modify the page
      attributes of at least one page, even if the numpages parameter is
      zero.
      
      The author expected that these functions would never be called with
      numpages == zero. However, with the new ro_after_init support
      (commit 444d13ff, "modules: add ro_after_init support") this
      happens frequently.
      
      Therefore do the right thing and make these two functions return
      gracefully if nothing should be done.
      
      Fixes crashes on module load like this one:
      
      Unable to handle kernel pointer dereference in virtual kernel address space
      Failing address: 000003ff80008000 TEID: 000003ff80008407
      Fault in home space mode while using kernel ASCE.
      AS:0000000000d18007 R3:00000001e6aa4007 S:00000001e6a10800 P:00000001e34ee21d
      Oops: 0004 ilc:3 [#1] SMP
      Modules linked in: x_tables
      CPU: 10 PID: 1 Comm: systemd Not tainted 4.7.0-11895-g3fa9045 #4
      Hardware name: IBM              2964 N96              703              (LPAR)
      task: 00000001e9118000 task.stack: 00000001e9120000
      Krnl PSW : 0704e00180000000 00000000005677f8 (rb_erase+0xf0/0x4d0)
                 R:0 T:1 IO:1 EX:1 Key:0 M:1 W:0 P:0 AS:3 CC:2 PM:0 RI:0 EA:3
      Krnl GPRS: 000003ff80008b20 000003ff80008b20 000003ff80008b70 0000000000b9d608
                 000003ff80008b20 0000000000000000 00000001e9123e88 000003ff80008950
                 00000001e485ab40 000003ff00000000 000003ff80008b00 00000001e4858480
                 0000000100000000 000003ff80008b68 00000000001d5998 00000001e9123c28
      Krnl Code: 00000000005677e8: ec1801c3007c        cgij    %r1,0,8,567b6e
                 00000000005677ee: e32010100020        cg      %r2,16(%r1)
                #00000000005677f4: a78401c2            brc     8,567b78
                >00000000005677f8: e35010080024        stg     %r5,8(%r1)
                 00000000005677fe: ec5801af007c        cgij    %r5,0,8,567b5c
                 0000000000567804: e30050000024        stg     %r0,0(%r5)
                 000000000056780a: ebacf0680004        lmg     %r10,%r12,104(%r15)
                 0000000000567810: 07fe                bcr     15,%r14
      Call Trace:
      ([<000003ff80008900>] __this_module+0x0/0xffffffffffffd700 [x_tables])
      ([<0000000000264fd4>] do_init_module+0x12c/0x220)
      ([<00000000001da14a>] load_module+0x24e2/0x2b10)
      ([<00000000001da976>] SyS_finit_module+0xbe/0xd8)
      ([<0000000000803b26>] system_call+0xd6/0x264)
      Last Breaking-Event-Address:
       [<000000000056771a>] rb_erase+0x12/0x4d0
       Kernel panic - not syncing: Fatal exception: panic_on_oops
      Reported-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Reported-and-tested-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
      Fixes: e8a97e42 ("s390/pageattr: allow kernel page table splitting")
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
  11. 31 Jul 2016, 1 commit
    • s390/mm: clean up pte/pmd encoding · bc29b7ac
      Gerald Schaefer authored
      The hugetlbfs pte<->pmd conversion functions currently assume that the pmd
      bit layout is consistent with the pte layout, which is not really true.
      
      The SW read and write bits are encoded as the sequence "wr" in a pte, but
      in a pmd it is "rw". The hugetlbfs conversion assumes that the sequence
      is identical in both cases, which results in swapped read and write bits
      in the pmd. In practice this is not a problem, because those pmd bits are
      only relevant for THP pmds and not for hugetlbfs pmds. The hugetlbfs code
      works on (fake) ptes, and the converted pte bits are correct.
      
      There is another variation in pte/pmd encoding which affects dirty
      prot-none ptes/pmds. In this case, a pmd has both its HW read-only
      and invalid bits set, while a pte has only the invalid bit set.
      This also has no effect in practice, but the encodings should be
      consistent.
      
      This patch fixes both inconsistencies by changing the SW read/write bit
      layout for pmds as well as the PAGE_NONE encoding for ptes. It also makes
      the hugetlbfs conversion functions more robust by introducing a
      move_set_bit() macro that uses the pte/pmd bit #defines instead of
      constant shifts.
      Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
  12. 27 Jul 2016, 1 commit
  13. 13 Jul 2016, 1 commit
  14. 06 Jul 2016, 1 commit
  15. 28 Jun 2016, 1 commit
  16. 25 Jun 2016, 1 commit
  17. 20 Jun 2016, 18 commits
  18. 14 Jun 2016, 1 commit
    • s390/mm: fix compile for PAGE_DEFAULT_KEY != 0 · de3fa841
      Heiko Carstens authored
      The usual problem with code that is ifdef'ed out is that it no
      longer compiles after a while. That's also the case for the
      storage key initialisation code, if it were used (i.e. with
      PAGE_DEFAULT_KEY set to something non-zero):
      
      ./arch/s390/include/asm/page.h: In function 'storage_key_init_range':
      ./arch/s390/include/asm/page.h:36:2: error: implicit declaration of function '__storage_key_init_range'
      
      Since the code itself has been useful for debugging purposes
      several times, remove the ifdefs and make sure the code gets
      compile coverage. The cost for this is eight bytes.
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
  19. 13 Jun 2016, 2 commits
    • s390: avoid extable collisions · 6c22c986
      Heiko Carstens authored
      We have some inline assemblies where the extable entry points to a
      label at the end of an inline assembly which is not followed by an
      instruction.
      
      On the other hand, we also have inline assemblies where the extable
      entry points to the first instruction of the inline assembly.
      
      If an inline asm of the first type (extable points to an empty
      label at the end) is directly followed by one of the second type
      (extable points to the first instruction), we end up with two
      extable entries that point to the same instruction but have
      different target addresses.
      
      This can lead to quite random behaviour, depending on sorting order.
      
      I verified that we currently do not have such collisions within
      the kernel. However, to avoid such subtle bugs, add a couple of
      nop instructions to those inline assemblies whose extable points
      to an empty label.
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
    • s390: add proper __ro_after_init support · d07a980c
      Heiko Carstens authored
      On s390 __ro_after_init is currently mapped to __read_mostly which
      means that data marked as __ro_after_init will not be protected.
      
      The reason is that the common code __ro_after_init implementation
      is x86-centric: the ro_after_init data section was added to
      rodata, since x86 enables write protection of kernel text and
      rodata very late. On s390, write protection for these sections is
      already enabled with the initial page tables, so adding the
      ro_after_init data section to rodata does not work on s390.
      
      In order to make __ro_after_init work properly on s390, move the
      ro_after_init data right behind rodata. Unlike the rodata section
      it will be marked read-only later, after all init calls have
      happened.
      
      This s390-specific implementation adds new __start_ro_after_init
      and __end_ro_after_init labels. Everything in between will be
      marked read-only after the init calls have happened. In addition,
      move the exception table there as well, since from a practical
      point of view it fits the __ro_after_init requirements.
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>