1. 09 December 2021, 1 commit
  2. 02 December 2021, 1 commit
  3. 13 August 2021, 2 commits
  4. 26 July 2021, 1 commit
  5. 12 May 2021, 4 commits
  6. 08 April 2021, 2 commits
    • powerpc/pseries: remove unneeded semicolon · 01ed0510
      Authored by Yang Li
      Eliminate the following coccicheck warning:
      ./arch/powerpc/platforms/pseries/lpar.c:1633:2-3: Unneeded semicolon
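
      For illustration only (this is not the actual lpar.c hunk; the
      function below is a made-up example), the pattern flagged by the
      coccicheck "Unneeded semicolon" check looks like this, and the fix is
      simply deleting the stray ';':

        #include <errno.h>

        /* Hypothetical example: a stray ';' after the closing brace of a
         * switch statement is what the check reports. */
        static int example_status_to_errno(long rc)
        {
        	switch (rc) {
        	case 0:
        		return 0;
        	default:
        		return -EIO;
        	};	/* <-- the unneeded semicolon; the fix removes it */
        }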
      Reported-by: Abaci Robot <abaci@linux.alibaba.com>
      Signed-off-by: Yang Li <yang.lee@linux.alibaba.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Link: https://lore.kernel.org/r/1617672785-81372-1-git-send-email-yang.lee@linux.alibaba.com
      01ed0510
    • powerpc/pseries: Add key to flags in pSeries_lpar_hpte_updateboltedpp() · b56d55a5
      Authored by Michael Ellerman
      The flags argument to plpar_pte_protect() (aka H_PROTECT) includes
      the key in bits 9-13, but currently we always set those bits to zero.
      
      In the past that hasn't been a problem because we always used key 0
      for the kernel, and updateboltedpp() is only used for kernel mappings.
      
      However since commit d94b827e ("powerpc/book3s64/kuap: Use Key 3
      for kernel mapping with hash translation") we are now inadvertently
      changing the key (to zero) when we call plpar_pte_protect().
      
      That hasn't broken anything because updateboltedpp() is only used for
      STRICT_KERNEL_RWX, which is currently disabled on 64s due to other
      bugs.
      
      But we want to fix that, so first we need to pass the key correctly to
      plpar_pte_protect(). We can't pass our newpp value directly in; we
      have to convert it into the form expected by the hcall.
      
      The hcall we're using here is H_PROTECT, which is specified in section
      14.5.4.1.6 of LoPAPR v1.1.
      
      It takes a `flags` parameter, and the description for flags says:
      
       * flags: AVPN, pp0, pp1, pp2, key0-key4, n, and for the CMO
         option: CMO Option flags as defined in Table 189
      
      If you then go to the start of the parent section, 14.5.4.1, on page
      405, it says:
      
      Register Linkage (For hcall() tokens 0x04 - 0x18)
       * On Call
         * R3 function call token
         * R4 flags (see Table 178, "Page Frame Table Access flags field
           definition," on page 401)
      
      Then you have to go to section 14.5.3, and on page 394 there is a list
      of hcalls and their tokens (table 176), and there you can see that
      H_PROTECT == 0x18.
      
      Finally you can look at table 178, on page 401, where it specifies the
      layout of the bits for the key:
      
       Bit     Function
       -----------------
       50-54 | key0-key4
      
      Those are big-endian bit numbers; converting to normal bit numbers
      gives bits 9-13, or 0x3e00.
      
      In the kernel we have:
      
        #define HPTE_R_KEY_HI		ASM_CONST(0x3000000000000000)
        #define HPTE_R_KEY_LO		ASM_CONST(0x0000000000000e00)
      
      So the LO bits of newpp are already in the right place, and the HI
      bits need to be shifted down by 48.
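
      A minimal standalone sketch of that conversion (the helper name is
      made up and the constants are copied from the kernel defines quoted
      above; the actual patch may arrange the code differently):

        #define HPTE_R_KEY_HI	0x3000000000000000UL
        #define HPTE_R_KEY_LO	0x0000000000000e00UL

        /* Fold the storage key bits of newpp into the layout H_PROTECT
         * expects: the LO key bits are already at bits 9-11, and the HI
         * key bits (60-61) are shifted down by 48 into bits 12-13,
         * giving the 0x3e00 mask mentioned above. */
        static unsigned long hprotect_key_flags(unsigned long newpp)
        {
        	return ((newpp & HPTE_R_KEY_HI) >> 48) | (newpp & HPTE_R_KEY_LO);
        }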
      
      Fixes: d94b827e ("powerpc/book3s64/kuap: Use Key 3 for kernel mapping with hash translation")
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Link: https://lore.kernel.org/r/20210331003845.216246-2-mpe@ellerman.id.au
      b56d55a5
  7. 26 March 2021, 1 commit
    • powerpc/mm/book3s64: Use the correct storage key value when calling H_PROTECT · 53f1d317
      Authored by Aneesh Kumar K.V
      H_PROTECT expects the flag value to include flags:
        AVPN, pp0, pp1, pp2, key0-key4, Noexec, CMO Option flags
      
      This patch updates hpte_updatepp() to fetch the storage key value from
      the linux page table and use the same in H_PROTECT hcall.
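
      A sketch of the resulting flags construction in the pseries updatepp
      path (fragment only; HPTE_R_PP, HPTE_R_N, HPTE_R_KEY_* and H_AVPN are
      the existing kernel defines, and the exact hunk in the patch may
      differ):

        /* Keep the pp/no-exec bits and the storage key from newpp instead
         * of masking the key off, which silently downgraded the entry to
         * key 0. H_AVPN asks the hypervisor to also match the abbreviated
         * virtual page number. */
        flags  = (newpp & (HPTE_R_PP | HPTE_R_N)) | H_AVPN;
        flags |= ((newpp & HPTE_R_KEY_HI) >> 48) | (newpp & HPTE_R_KEY_LO);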
      
      native_hpte_updatepp() is not updated because the kernel doesn't clear
      the existing storage key value there. The kernel also doesn't use the
      hpte_updatepp() callback for updating storage keys.
      
      This fixes the below kernel crash observed with KUAP enabled.
      
        BUG: Unable to handle kernel data access on write at 0xc009fffffc440000
        Faulting instruction address: 0xc0000000000b7030
        Key fault AMR: 0xfcffffffffffffff IAMR: 0xc0000077bc498100
        Found HPTE: v = 0x40070adbb6fffc05 r = 0x1ffffffffff1194
        Oops: Kernel access of bad area, sig: 11 [#1]
        LE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries
        ...
        CFAR: c000000000010100 DAR: c009fffffc440000 DSISR: 02200000 IRQMASK: 0
        ...
        NIP memset+0x68/0x104
        LR  pcpu_alloc+0x54c/0xb50
        Call Trace:
          pcpu_alloc+0x55c/0xb50 (unreliable)
          blk_stat_alloc_callback+0x94/0x150
          blk_mq_init_allocated_queue+0x64/0x560
          blk_mq_init_queue+0x54/0xb0
          scsi_mq_alloc_queue+0x30/0xa0
          scsi_alloc_sdev+0x1cc/0x300
          scsi_probe_and_add_lun+0xb50/0x1020
          __scsi_scan_target+0x17c/0x790
          scsi_scan_channel+0x90/0xe0
          scsi_scan_host_selected+0x148/0x1f0
          do_scan_async+0x2c/0x2a0
          async_run_entry_fn+0x78/0x220
          process_one_work+0x264/0x540
          worker_thread+0xa8/0x600
          kthread+0x190/0x1a0
          ret_from_kernel_thread+0x5c/0x6c
      
      With KUAP enabled the kernel uses storage key 3 for all its
      translations. But as shown by the debug print, in this specific case we
      have the hash page table entry created with key value 0.
      
        Found HPTE: v = 0x40070adbb6fffc05 r = 0x1ffffffffff1194
      
      and DSISR indicates a key fault.
      
      This can happen due to parallel fault on the same EA by different CPUs:
      
        CPU 0					CPU 1
        fault on X
      
        H_PAGE_BUSY set
        					fault on X
      
        finish fault handling and
        clear H_PAGE_BUSY
        					check for H_PAGE_BUSY
        					continue with fault handling.
      
      This implies CPU 1 ends up calling hpte_updatepp() for address X, and
      the kernel updates the hash PTE entry with key 0.
      
      Fixes: d94b827e ("powerpc/book3s64/kuap: Use Key 3 for kernel mapping with hash translation")
      Reported-by: Murilo Opsfelder Araujo <muriloo@linux.ibm.com>
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Debugged-by: Michael Ellerman <mpe@ellerman.id.au>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Link: https://lore.kernel.org/r/20210326070755.304625-1-aneesh.kumar@linux.ibm.com
      53f1d317
  8. 18 September 2020, 1 commit
  9. 16 July 2020, 1 commit
  10. 10 July 2020, 1 commit
  11. 10 June 2020, 2 commits
    • mm: reorder includes after introduction of linux/pgtable.h · 65fddcfc
      Authored by Mike Rapoport
      The replacement of <asm/pgtable.h> with <linux/pgtable.h> left the include
      of the latter in the middle of the asm includes. Fix this up with the aid
      of the below script and manual adjustments here and there.
      
      	import sys
      	import re
      
      	if len(sys.argv) is not 3:
      	    print "USAGE: %s <file> <header>" % (sys.argv[0])
      	    sys.exit(1)
      
      	hdr_to_move="#include <linux/%s>" % sys.argv[2]
      	moved = False
      	in_hdrs = False
      
      	with open(sys.argv[1], "r") as f:
      	    lines = f.readlines()
      	    for _line in lines:
      		line = _line.rstrip('\n')
      		if line == hdr_to_move:
      		    continue
      		if line.startswith("#include <linux/"):
      		    in_hdrs = True
      		elif not moved and in_hdrs:
      		    moved = True
      		    print hdr_to_move
      		print line
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Cain <bcain@codeaurora.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Greg Ungerer <gerg@linux-m68k.org>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Guo Ren <guoren@kernel.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Ley Foon Tan <ley.foon.tan@intel.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Nick Hu <nickhu@andestech.com>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vincent Chen <deanbo422@gmail.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Link: http://lkml.kernel.org/r/20200514170327.31389-4-rppt@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      65fddcfc
    • mm: introduce include/linux/pgtable.h · ca5999fd
      Authored by Mike Rapoport
      The include/linux/pgtable.h is going to be the home of generic page table
      manipulation functions.
      
      Start with moving asm-generic/pgtable.h to include/linux/pgtable.h and
      make the latter include asm/pgtable.h.
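
      A sketch of the resulting inclusion relationship (the real header is
      of course much larger; only the structure is shown):

        /* include/linux/pgtable.h */
        #ifndef _LINUX_PGTABLE_H
        #define _LINUX_PGTABLE_H

        #include <asm/pgtable.h>	/* architecture-specific page table bits */

        /* ...generic page table helpers accumulate here over time... */

        #endif /* _LINUX_PGTABLE_H */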
      Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Cain <bcain@codeaurora.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Greentime Hu <green.hu@gmail.com>
      Cc: Greg Ungerer <gerg@linux-m68k.org>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Guo Ren <guoren@kernel.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Ley Foon Tan <ley.foon.tan@intel.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Nick Hu <nickhu@andestech.com>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Russell King <linux@armlinux.org.uk>
      Cc: Stafford Horne <shorne@gmail.com>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vincent Chen <deanbo422@gmail.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will@kernel.org>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Link: http://lkml.kernel.org/r/20200514170327.31389-3-rppt@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ca5999fd
  12. 25 March 2020, 1 commit
  13. 04 February 2020, 1 commit
  14. 13 November 2019, 1 commit
  15. 03 November 2019, 1 commit
  16. 28 October 2019, 2 commits
  17. 09 October 2019, 1 commit
  18. 24 September 2019, 2 commits
    • powerpc/pseries: Call H_BLOCK_REMOVE when supported · 59545ebe
      Authored by Laurent Dufour
      Depending on the hardware and the hypervisor, the hcall H_BLOCK_REMOVE
      may not be able to process all the page sizes for a segment base page
      size, as reported by the TLB Invalidate Characteristics.
      
      For each pair of base segment page size and actual page size, this
      characteristic tells us the size of the block the hcall supports.
      
      In the case where the hcall does not support a pair of base segment
      page size and actual page size, it returns H_PARAM, which leads to a
      panic like this:
      
        kernel BUG at /home/srikar/work/linux.git/arch/powerpc/platforms/pseries/lpar.c:466!
        Oops: Exception in kernel mode, sig: 5 [#1]
        BE PAGE_SIZE=64K MMU=Hash SMP NR_CPUS=2048 NUMA pSeries
        Modules linked in:
        CPU: 28 PID: 583 Comm: modprobe Not tainted 5.2.0-master #5
        NIP: c0000000000be8dc LR: c0000000000be880 CTR: 0000000000000000
        REGS: c0000007e77fb130 TRAP: 0700  Not tainted (5.2.0-master)
        MSR: 8000000000029032 <SF,EE,ME,IR,DR,RI> CR: 42224824 XER: 20000000
        CFAR: c0000000000be8fc IRQMASK: 0
        GPR00: 0000000022224828 c0000007e77fb3c0 c000000001434d00 0000000000000005
        GPR04: 9000000004fa8c00 0000000000000000 0000000000000003 0000000000000001
        GPR08: c0000007e77fb450 0000000000000000 0000000000000001 ffffffffffffffff
        GPR12: c0000007e77fb450 c00000000edfcb80 0000cd7d3ea30000 c0000000016022b0
        GPR16: 00000000000000b0 0000cd7d3ea30000 0000000000000001 c080001f04f00105
        GPR20: 0000000000000003 0000000000000004 c000000fbeb05f58 c000000001602200
        GPR24: 0000000000000000 0000000000000004 8800000000000000 c000000000c5d148
        GPR28: c000000000000000 8000000000000000 a000000000000000 c0000007e77fb580
        NIP [c0000000000be8dc] .call_block_remove+0x12c/0x220
        LR [c0000000000be880] .call_block_remove+0xd0/0x220
        Call Trace:
          0xc000000fb8c00240 (unreliable)
          .pSeries_lpar_flush_hash_range+0x578/0x670
          .flush_hash_range+0x44/0x100
          .__flush_tlb_pending+0x3c/0xc0
          .zap_pte_range+0x7ec/0x830
          .unmap_page_range+0x3f4/0x540
          .unmap_vmas+0x94/0x120
          .exit_mmap+0xac/0x1f0
          .mmput+0x9c/0x1f0
          .do_exit+0x388/0xd60
          .do_group_exit+0x54/0x100
          .__se_sys_exit_group+0x14/0x20
          system_call+0x5c/0x70
        Instruction dump:
        39400001 38a00000 4800003c 60000000 60420000 7fa9e800 38e00000 419e0014
        7d29d278 7d290074 7929d182 69270001 <0b070000> 7d495378 394a0001 7fa93040
      
      The call to H_BLOCK_REMOVE should only be made for the supported pair
      of base segment page size, actual page size and using the correct
      maximum block size.
      
      Due to the required complexity in do_block_remove() and
      call_block_remove(), and the fact that the hypervisor currently
      returns a block size of 8, we only support a block size of 8 for the
      H_BLOCK_REMOVE hcall.
      
      In order to identify this limitation easily in the code, a local
      define HBLKR_SUPPORTED_SIZE defining the currently supported block
      size, and a dedicated checking helper is_supported_hlbkr() are
      introduced.
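
      A sketch of that helper, using the names quoted above (the in-tree
      identifiers and the exact layout of the per-page-size table may
      differ; hblkr_size is the table filled from the TLB Invalidate
      Characteristics described in the companion commit below):

        #define HBLKR_SUPPORTED_SIZE	8

        /* Block size supported by H_BLOCK_REMOVE for each pair of
         * (base segment page size, actual page size), loaded at boot. */
        static int hblkr_size[MMU_PAGE_COUNT][MMU_PAGE_COUNT];

        static inline bool is_supported_hlbkr(int bpsize, int psize)
        {
        	return hblkr_size[bpsize][psize] == HBLKR_SUPPORTED_SIZE;
        }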
      
      For regular pages and hugetlb, the assumption is made that the page
      size is equal to the base page size. For THP the page size is assumed
      to be 16M.
      
      Fixes: ba2dd8a2 ("powerpc/pseries/mm: call H_BLOCK_REMOVE")
      Signed-off-by: Laurent Dufour <ldufour@linux.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Link: https://lore.kernel.org/r/20190920130523.20441-3-ldufour@linux.ibm.com
      59545ebe
    • powerpc/pseries: Read TLB Block Invalidate Characteristics · 1211ee61
      Authored by Laurent Dufour
      The PAPR document specifies the TLB Block Invalidate Characteristics,
      which tell, for each pair of segment base page size and actual page
      size, the size of the block the hcall H_BLOCK_REMOVE supports.
      
      These characteristics are loaded at boot time in a new table
      hblkr_size. The table is separate from the mmu_psize_def because this
      is specific to the pseries platform.
      
      A new init function, pseries_lpar_read_hblkrm_characteristics() is
      added to read the characteristics. It is called from
      pSeries_setup_arch().
      
      Fixes: ba2dd8a2 ("powerpc/pseries/mm: call H_BLOCK_REMOVE")
      Signed-off-by: Laurent Dufour <ldufour@linux.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Link: https://lore.kernel.org/r/20190920130523.20441-2-ldufour@linux.ibm.com
      1211ee61
  19. 05 September 2019, 2 commits
  20. 19 August 2019, 1 commit
    • powerpc/pseries: Fix cpu_hotplug_lock acquisition in resize_hpt() · c784be43
      Authored by Gautham R. Shenoy
      The calls to arch_add_memory()/arch_remove_memory() are always made
      with the read-side cpu_hotplug_lock acquired via memory_hotplug_begin().
      On pSeries, arch_add_memory()/arch_remove_memory() eventually call
      resize_hpt() which in turn calls stop_machine() which acquires the
      read-side cpu_hotplug_lock again, thereby resulting in the recursive
      acquisition of this lock.
      
      In the absence of CONFIG_PROVE_LOCKING, we hadn't observed a system
      lockup during a memory hotplug operation because cpus_read_lock() is a
      per-cpu rwsem read, which, in the fast-path (in the absence of the
      writer, which in our case is a CPU-hotplug operation) simply
      increments the read_count on the semaphore. Thus a recursive read in
      the fast-path doesn't cause any problems.
      
      However, we can hit this problem in practice if there is a concurrent
      CPU-hotplug operation in progress which is waiting to acquire the
      write-side of the lock. This causes the second recursive read to block
      until the writer finishes, while the writer itself is blocked because
      the first read still holds the lock. Thus both the reader and the
      writer fail to make any progress, blocking both CPU-hotplug and
      memory-hotplug operations.
      
      Memory-Hotplug				CPU-Hotplug
      CPU 0					CPU 1
      ------                                  ------
      
      1. down_read(cpu_hotplug_lock.rw_sem)
         [memory_hotplug_begin]
      					2. down_write(cpu_hotplug_lock.rw_sem)
      					[cpu_up/cpu_down]
      3. down_read(cpu_hotplug_lock.rw_sem)
         [stop_machine()]
      
      Lockdep complains as follows in these code-paths.
      
       swapper/0/1 is trying to acquire lock:
       (____ptrval____) (cpu_hotplug_lock.rw_sem){++++}, at: stop_machine+0x2c/0x60
      
      but task is already holding lock:
      (____ptrval____) (cpu_hotplug_lock.rw_sem){++++}, at: mem_hotplug_begin+0x20/0x50
      
       other info that might help us debug this:
        Possible unsafe locking scenario:
      
              CPU0
              ----
         lock(cpu_hotplug_lock.rw_sem);
         lock(cpu_hotplug_lock.rw_sem);
      
        *** DEADLOCK ***
      
        May be due to missing lock nesting notation
      
       3 locks held by swapper/0/1:
        #0: (____ptrval____) (&dev->mutex){....}, at: __driver_attach+0x12c/0x1b0
        #1: (____ptrval____) (cpu_hotplug_lock.rw_sem){++++}, at: mem_hotplug_begin+0x20/0x50
        #2: (____ptrval____) (mem_hotplug_lock.rw_sem){++++}, at: percpu_down_write+0x54/0x1a0
      
      stack backtrace:
       CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.0.0-rc5-58373-gbc99402235f3-dirty #166
       Call Trace:
         dump_stack+0xe8/0x164 (unreliable)
         __lock_acquire+0x1110/0x1c70
         lock_acquire+0x240/0x290
         cpus_read_lock+0x64/0xf0
         stop_machine+0x2c/0x60
         pseries_lpar_resize_hpt+0x19c/0x2c0
         resize_hpt_for_hotplug+0x70/0xd0
         arch_add_memory+0x58/0xfc
         devm_memremap_pages+0x5e8/0x8f0
         pmem_attach_disk+0x764/0x830
         nvdimm_bus_probe+0x118/0x240
         really_probe+0x230/0x4b0
         driver_probe_device+0x16c/0x1e0
         __driver_attach+0x148/0x1b0
         bus_for_each_dev+0x90/0x130
         driver_attach+0x34/0x50
         bus_add_driver+0x1a8/0x360
         driver_register+0x108/0x170
         __nd_driver_register+0xd0/0xf0
         nd_pmem_driver_init+0x34/0x48
         do_one_initcall+0x1e0/0x45c
         kernel_init_freeable+0x540/0x64c
         kernel_init+0x2c/0x160
         ret_from_kernel_thread+0x5c/0x68
      
      Fix this issue by
        1) Requiring all the calls to pseries_lpar_resize_hpt() be made
           with cpu_hotplug_lock held.
      
        2) In pseries_lpar_resize_hpt() invoke stop_machine_cpuslocked()
           as a consequence of 1)
      
        3) To satisfy 1), in hpt_order_set(), call mmu_hash_ops.resize_hpt()
           with cpu_hotplug_lock held.
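
      A minimal sketch of the pattern described in points 1)-3) (bodies are
      simplified, the NULL-op check and error handling are omitted, and the
      local variable names are illustrative):

        /* 3) debugfs handler: take the read-side lock around the resize. */
        static int hpt_order_set(void *data, u64 val)
        {
        	int ret;

        	cpus_read_lock();
        	ret = mmu_hash_ops.resize_hpt(val);
        	cpus_read_unlock();

        	return ret;
        }

        /* 2) inside pseries_lpar_resize_hpt(), which now runs with the
         * lock already held, use the *_cpuslocked variant: */
        rc = stop_machine_cpuslocked(pseries_lpar_resize_hpt_commit,
        			     &state, NULL);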
      
      Fixes: dbcf929c ("powerpc/pseries: Add support for hash table resizing")
      Cc: stable@vger.kernel.org # v4.11+
      Reported-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Link: https://lore.kernel.org/r/1557906352-29048-1-git-send-email-ego@linux.vnet.ibm.com
      c784be43
  21. 04 July 2019, 5 commits
  22. 31 May 2019, 1 commit
  23. 20 April 2019, 1 commit
  24. 06 January 2019, 1 commit
    • jump_label: move 'asm goto' support test to Kconfig · e9666d10
      Authored by Masahiro Yamada
      Currently, CONFIG_JUMP_LABEL just means "I _want_ to use jump label".
      
      The jump label is controlled by HAVE_JUMP_LABEL, which is defined
      like this:
      
        #if defined(CC_HAVE_ASM_GOTO) && defined(CONFIG_JUMP_LABEL)
        # define HAVE_JUMP_LABEL
        #endif
      
      We can improve this by testing 'asm goto' support in Kconfig and then
      making JUMP_LABEL depend on CC_HAS_ASM_GOTO.
      
      The ugly #ifdef HAVE_JUMP_LABEL will go away, and CONFIG_JUMP_LABEL
      will match the real kernel capability.
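
      An illustrative sketch of what that cleanup looks like at a use site
      (the function and key names here are made up; only the change of the
      guarding symbol is the point):

        #include <linux/jump_label.h>

        static DEFINE_STATIC_KEY_FALSE(example_key);

        static inline bool example_fast_path_enabled(void)
        {
        #ifdef CONFIG_JUMP_LABEL	/* was: #ifdef HAVE_JUMP_LABEL */
        	return static_branch_unlikely(&example_key);
        #else
        	return false;
        #endif
        }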
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
      Tested-by: Sedat Dilek <sedat.dilek@gmail.com>
      e9666d10
  25. 20 October 2018, 1 commit
  26. 17 September 2018, 2 commits