1. 18 Jul, 2022 2 commits
  2. 13 Jul, 2022 1 commit
  3. 06 Jul, 2022 4 commits
  4. 19 May, 2022 1 commit
  5. 10 May, 2022 2 commits
  6. 27 Apr, 2022 4 commits
  7. 19 Apr, 2022 1 commit
  8. 22 Feb, 2022 2 commits
  9. 29 Jan, 2022 2 commits
  10. 14 Jan, 2022 1 commit
      powerpc: Fix virt_addr_valid() check · 44634062
      Committed by Kefeng Wang
      hulk inclusion
      category: bugfix
      bugzilla: 186017 https://gitee.com/openeuler/kernel/issues/I4DDEL
      
      --------------------------------
      
      When running ethtool eth0, the following BUG occurred:
      
        usercopy: Kernel memory exposure attempt detected from SLUB object not in SLUB page?! (offset 0, size 1048)!
        kernel BUG at mm/usercopy.c:99
        ...
        usercopy_abort+0x64/0xa0 (unreliable)
        __check_heap_object+0x168/0x190
        __check_object_size+0x1a0/0x200
        dev_ethtool+0x2494/0x2b20
        dev_ioctl+0x5d0/0x770
        sock_do_ioctl+0xf0/0x1d0
        sock_ioctl+0x3ec/0x5a0
        __se_sys_ioctl+0xf0/0x160
        system_call_exception+0xfc/0x1f0
        system_call_common+0xf8/0x200
      
      The code is shown below:
      
        data = vzalloc(array_size(gstrings.len, ETH_GSTRING_LEN));
        copy_to_user(useraddr, data, gstrings.len * ETH_GSTRING_LEN);
      
      The data is allocated by vmalloc(), yet virt_addr_valid(ptr) wrongly
      returns true on PowerPC64, which leads to the panic.
      
      As commit 4dd7554a ("powerpc/64: Add VIRTUAL_BUG_ON checks for __va
      and __pa addresses") does, make sure the virtual address is above
      PAGE_OFFSET in virt_addr_valid().
      Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
      Signed-off-by: Yuanzheng Song <songyuanzheng@huawei.com>
      Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
  11. 06 Dec, 2021 6 commits
  12. 15 Nov, 2021 1 commit
      powerpc/numa: Update cpu_cpu_map on CPU online/offline · da40c308
      Committed by Srikar Dronamraju
      mainline inclusion
      from mainline-v5.15-rc1
      commit 9a245d0e
      category: bugfix
      bugzilla: 180732 https://gitee.com/openeuler/kernel/issues/I4DDEL
      
      Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=9a245d0e1f006bc7ccf0285d0d520ed304d00c4a
      
      ---------------------------
      
      cpu_cpu_map holds all the CPUs in the DIE. However, on PowerPC this
      mask doesn't get updated when CPUs are onlined/offlined, even though
      it is updated when CPUs are added/removed. So when online/offline
      and add/remove operations happen simultaneously, the cpumasks end
      up broken.
      
      WARNING: CPU: 13 PID: 1142 at kernel/sched/topology.c:898
      build_sched_domains+0xd48/0x1720
      Modules linked in: rpadlpar_io rpaphp mptcp_diag xsk_diag tcp_diag
      udp_diag raw_diag inet_diag unix_diag af_packet_diag netlink_diag
      bonding tls nft_fib_inet nft_fib_ipv4 nft_fib_ipv6 nft_fib
      nft_reject_inet nf_reject_ipv4 nf_reject_ipv6 nft_reject nft_ct
      nft_chain_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 ip_set
      rfkill nf_tables nfnetlink pseries_rng xts vmx_crypto uio_pdrv_genirq
      uio binfmt_misc ip_tables xfs libcrc32c dm_service_time sd_mod t10_pi sg
      ibmvfc scsi_transport_fc ibmveth dm_multipath dm_mirror dm_region_hash
      dm_log dm_mod fuse
      CPU: 13 PID: 1142 Comm: kworker/13:2 Not tainted 5.13.0-rc6+ #28
      Workqueue: events cpuset_hotplug_workfn
      NIP:  c0000000001caac8 LR: c0000000001caac4 CTR: 00000000007088ec
      REGS: c00000005596f220 TRAP: 0700   Not tainted  (5.13.0-rc6+)
      MSR:  8000000000029033 <SF,EE,ME,IR,DR,RI,LE>  CR: 48828222  XER:
      00000009
      CFAR: c0000000001ea698 IRQMASK: 0
      GPR00: c0000000001caac4 c00000005596f4c0 c000000001c4a400 0000000000000036
      GPR04: 00000000fffdffff c00000005596f1d0 0000000000000027 c0000018cfd07f90
      GPR08: 0000000000000023 0000000000000001 0000000000000027 c0000018fe68ffe8
      GPR12: 0000000000008000 c00000001e9d1880 c00000013a047200 0000000000000800
      GPR16: c000000001d3c7d0 0000000000000240 0000000000000048 c000000010aacd18
      GPR20: 0000000000000001 c000000010aacc18 c00000013a047c00 c000000139ec2400
      GPR24: 0000000000000280 c000000139ec2520 c000000136c1b400 c000000001c93060
      GPR28: c00000013a047c20 c000000001d3c6c0 c000000001c978a0 000000000000000d
      NIP [c0000000001caac8] build_sched_domains+0xd48/0x1720
      LR [c0000000001caac4] build_sched_domains+0xd44/0x1720
      Call Trace:
      [c00000005596f4c0] [c0000000001caac4] build_sched_domains+0xd44/0x1720 (unreliable)
      [c00000005596f670] [c0000000001cc5ec] partition_sched_domains_locked+0x3ac/0x4b0
      [c00000005596f710] [c0000000002804e4] rebuild_sched_domains_locked+0x404/0x9e0
      [c00000005596f810] [c000000000283e60] rebuild_sched_domains+0x40/0x70
      [c00000005596f840] [c000000000284124] cpuset_hotplug_workfn+0x294/0xf10
      [c00000005596fc60] [c000000000175040] process_one_work+0x290/0x590
      [c00000005596fd00] [c0000000001753c8] worker_thread+0x88/0x620
      [c00000005596fda0] [c000000000181704] kthread+0x194/0x1a0
      [c00000005596fe10] [c00000000000ccec] ret_from_kernel_thread+0x5c/0x70
      Instruction dump:
      485af049 60000000 2fa30800 409e0028 80fe0000 e89a00f8 e86100e8 38da0120
      7f88e378 7ce53b78 4801fb91 60000000 <0fe00000> 39000000 38e00000 38c00000
      
      Fix this by updating cpu_cpu_map aka cpumask_of_node() on every CPU
      online/offline.
      Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
      Link: https://lore.kernel.org/r/20210826100521.412639-5-srikar@linux.vnet.ibm.com
      Signed-off-by: Chen Lifu <chenlifu@huawei.com>
      Reviewed-by: Zhang Jianhua <chris.zjh@huawei.com>
      Reviewed-by: Linruizhe <linruizhe@huawei.com>
      Reviewed-by: He Ying <heying24@huawei.com>
      Reviewed-by: Yihang Xu <xuyihang@huawei.com>
      Signed-off-by: Chen Jun <chenjun102@huawei.com>
      Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
  13. 21 Oct, 2021 2 commits
  14. 15 Oct, 2021 2 commits
  15. 13 Oct, 2021 1 commit
  16. 14 Jul, 2021 8 commits