1. 05 Jul 2023, 8 commits
  2. 04 Jul 2023, 4 commits
  3. 30 Jun 2023, 1 commit
  4. 28 Jun 2023, 3 commits
  5. 27 Jun 2023, 2 commits
  6. 26 Jun 2023, 6 commits
    • config: enable setting the max iova mag size to 128 · 6eece591
      Zhang Zekun authored
      hulk inclusion
      category: feature
      bugzilla: https://gitee.com/openeuler/kernel/issues/I7ASVH
      CVE: NA
      
      ---------------------------------------
      
      Set the iova mag size (the iova_rcache magazine size) to 128 to
      support more concurrency in iova allocation. This also fixes the
      problem described in the bugzilla above.
      Signed-off-by: Zhang Zekun <zhangzekun11@huawei.com>
      6eece591
    • iommu/iova: increase the iova_rcache depot max size · 5bda0e7a
      Zhang Zekun authored
      hulk inclusion
      category: bugfix
      bugzilla: https://gitee.com/openeuler/kernel/issues/I7ASVH
      CVE: NA
      
      ---------------------------------------
      
      In an fio test with iodepth=256 and allowed CPUs set to 0-255, we
      observed a severe performance decrease. The cache hit rate statistics
      were relatively low. Here are some statistics for the iova_cpu_rcache
      of all CPUs:
      
      iova alloc order		0	1	2	3	4	5
      ----------------------------------------------------------------------
      average cpu_rcache hit rate	0.9941	0.7408	0.8109	0.8854	0.9082	0.8887
      
      Jobs: 12 (f=12): [R(12)][20.0%][r=1091MiB/s][r=279k IOPS][eta 00m:28s]
      Jobs: 12 (f=12): [R(12)][22.2%][r=1426MiB/s][r=365k IOPS][eta 00m:28s]
      Jobs: 12 (f=12): [R(12)][25.0%][r=1607MiB/s][r=411k IOPS][eta 00m:27s]
      Jobs: 12 (f=12): [R(12)][27.8%][r=1501MiB/s][r=384k IOPS][eta 00m:26s]
      Jobs: 12 (f=12): [R(12)][30.6%][r=1486MiB/s][r=380k IOPS][eta 00m:25s]
      Jobs: 12 (f=12): [R(12)][33.3%][r=1393MiB/s][r=357k IOPS][eta 00m:24s]
      Jobs: 12 (f=12): [R(12)][36.1%][r=1550MiB/s][r=397k IOPS][eta 00m:23s]
      Jobs: 12 (f=12): [R(12)][38.9%][r=1485MiB/s][r=380k IOPS][eta 00m:22s]
      
      The underlying hisi_sas driver has 16 thread irqs to free iovas, but
      these irq callback functions will only free iovas on 16 specific CPUs
      (cpu{0,16,32,...,240}). For example, the thread irq whose smp affinity
      is 0-15 will only free iovas on cpu 0. However, the driver allocates
      iovas on all CPUs (cpu{0-255}), so CPUs without free iovas in their
      local cpu_rcache need to get free iovas from iova_rcache->depot. The
      current max size of iova_rcache->depot is 32, which seems too small
      for 256 users (16 CPUs put iovas into iova_rcache->depot and 240 CPUs
      try to get iovas from it). Setting the iova_rcache->depot max size to
      128 fixes the performance issue, and performance returns to normal.
      
      iova alloc order		0	1	2	3	4	5
      ----------------------------------------------------------------------
      average cpu_rcache hit rate	0.9925	0.9736	0.9789	0.9867	0.9889	0.9906
      
      Jobs: 12 (f=12): [R(12)][12.9%][r=7526MiB/s][r=1927k IOPS][eta 04m:30s]
      Jobs: 12 (f=12): [R(12)][13.2%][r=7527MiB/s][r=1927k IOPS][eta 04m:29s]
      Jobs: 12 (f=12): [R(12)][13.5%][r=7529MiB/s][r=1927k IOPS][eta 04m:28s]
      Jobs: 12 (f=12): [R(12)][13.9%][r=7531MiB/s][r=1928k IOPS][eta 04m:27s]
      Jobs: 12 (f=12): [R(12)][14.2%][r=7529MiB/s][r=1928k IOPS][eta 04m:26s]
      Jobs: 12 (f=12): [R(12)][14.5%][r=7528MiB/s][r=1927k IOPS][eta 04m:25s]
      Jobs: 12 (f=12): [R(12)][14.8%][r=7527MiB/s][r=1927k IOPS][eta 04m:24s]
      Jobs: 12 (f=12): [R(12)][15.2%][r=7525MiB/s][r=1926k IOPS][eta 04m:23s]
      Signed-off-by: Zhang Zekun <zhangzekun11@huawei.com>
      5bda0e7a
    • relayfs: fix out-of-bounds access in relay_file_read · 4d3fb2ad
      Zhang Zhengming authored
      stable inclusion
      from stable-v5.10.180
      commit 1b0df44753bf9e45eaf5cee34f87597193f862e8
      category: bugfix
      bugzilla: https://gitee.com/src-openeuler/kernel/issues/I7E5C1
      CVE: CVE-2023-3268
      
      Reference: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=1b0df44753bf9e45eaf5cee34f87597193f862e8
      
      ----------------------------------------
      
      commit 43ec16f1 upstream.
      
      There is a crash in relay_file_read, as the variable from points
      to the end of the last subbuf.
      
      The oops looks something like:
      pc : __arch_copy_to_user+0x180/0x310
      lr : relay_file_read+0x20c/0x2c8
      Call trace:
       __arch_copy_to_user+0x180/0x310
       full_proxy_read+0x68/0x98
       vfs_read+0xb0/0x1d0
       ksys_read+0x6c/0xf0
       __arm64_sys_read+0x20/0x28
       el0_svc_common.constprop.3+0x84/0x108
       do_el0_svc+0x74/0x90
       el0_svc+0x1c/0x28
       el0_sync_handler+0x88/0xb0
       el0_sync+0x148/0x180
      
      We get the condition by analyzing the vmcore:
      
      1). The last produced byte and last consumed byte are
          both at the end of the last subbuf.

      2). A softirq calls a function (e.g. __blk_add_trace) that
          writes to the relay buffer while a program is calling
          relay_file_read_avail().
      
              relay_file_read
                      relay_file_read_avail
                              relay_file_read_consume(buf, 0, 0);
                              //interrupted by softirq who will write subbuf
                              ....
                              return 1;
                      //read_start point to the end of the last subbuf
                      read_start = relay_file_read_start_pos
                      //avail is equal to subsize
                      avail = relay_file_read_subbuf_avail
                      //from  points to an invalid memory address
                      from = buf->start + read_start
                      //system is crashed
                      copy_to_user(buffer, from, avail)
      
      Link: https://lkml.kernel.org/r/20230419040203.37676-1-zhang.zhengming@h3c.com
      Fixes: 8d62fdeb ("relay file read: start-pos fix")
      Signed-off-by: Zhang Zhengming <zhang.zhengming@h3c.com>
      Reviewed-by: Zhao Lei <zhao_lei1@hoperun.com>
      Reviewed-by: Zhou Kete <zhou.kete@h3c.com>
      Reviewed-by: Pengcheng Yang <yangpc@wangsu.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: GONG, Ruiqi <gongruiqi1@huawei.com>
      (cherry picked from commit 6b2322db)
      4d3fb2ad
    • config: Disable EFI_FAKE_MEMMAP support for arm64 by default · 31478980
      Ma Wupeng authored
      hulk inclusion
      category: bugfix
      bugzilla: https://gitee.com/openeuler/kernel/issues/I7F3NP
      CVE: NA
      
      --------------------------------
      
      EFI_FAKE_MEMMAP fakes a specific memory range by updating the
      original (firmware-provided) EFI memmap. It should only be used
      for debugging purposes, so disable it by default.
      Signed-off-by: Ma Wupeng <mawupeng1@huawei.com>
      (cherry picked from commit 13ecd6fc)
      31478980
    • efi: Fix UAF for arm64 when enabling efi_fake_mem · 505f1957
      Ma Wupeng authored
      hulk inclusion
      category: bugfix
      bugzilla: https://gitee.com/openeuler/kernel/issues/I7F3NP
      CVE: NA
      
      --------------------------------
      
      EFI fake mem support for arm64 was introduced for debugging
      purposes only. However, efi_memmap_init_late() in
      arm_enable_runtime_services() will free this memory, which leads
      to a UAF on efi.memmap.map.

      To solve this, clear efi.memmap.flags so the free is skipped.
      Since the efi memmap is never freed on arm64, this will not lead
      to a memory leak.
      Signed-off-by: Ma Wupeng <mawupeng1@huawei.com>
      (cherry picked from commit 6b455c10)
      505f1957
    • mm/memory_hotplug: extend offline_and_remove_memory() to handle more than one memory block · 705ec131
      David Hildenbrand authored
      mainline inclusion
      from mainline-v5.11-rc1
      commit 8dc4bb58
      category: bugfix
      bugzilla: https://gitee.com/openeuler/kernel/issues/I7F3HQ
      CVE: NA
      
      Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=8dc4bb58a146655eb057247d7c9d19e73928715b
      
      --------------------------------
      
      virtio-mem soon wants to use offline_and_remove_memory() on memory
      that exceeds a single Linux memory block (memory_block_size_bytes()).
      Let's remove that restriction.
      
      Let's remember the old state and try to restore that if anything goes
      wrong. While re-onlining can, in general, fail, it's highly unlikely to
      happen (usually only when a notifier fails to allocate memory, and these
      are rather rare).
      
      This will be used by virtio-mem to offline+remove memory ranges that are
      bigger than a single memory block - for example, with a device block
      size of 1 GiB (e.g., gigantic pages in the hypervisor) and a Linux memory
      block size of 128MB.
      
      While we could compress the state into 2 bits, using 8 bits is much
      easier.
      
      This handling is similar to, but different from, acpi_scan_try_to_offline():
      
      a) We don't try to offline twice. I am not sure if this CONFIG_MEMCG
      optimization is still relevant - it should only apply to ZONE_NORMAL
      (where we have no guarantees). If relevant, we can always add it.
      
      b) acpi_scan_try_to_offline() simply onlines all memory in case
      something goes wrong. It doesn't restore previous online type. Let's do
      that, so we won't overwrite what e.g., user space configured.
      Reviewed-by: Wei Yang <richard.weiyang@linux.alibaba.com>
      Cc: "Michael S. Tsirkin" <mst@redhat.com>
      Cc: Jason Wang <jasowang@redhat.com>
      Cc: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Oscar Salvador <osalvador@suse.de>
      Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Link: https://lore.kernel.org/r/20201112133815.13332-28-david@redhat.com
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Acked-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Ma Wupeng <mawupeng1@huawei.com>
      (cherry picked from commit 9b7206bc)
      705ec131
  7. 25 Jun 2023, 6 commits
  8. 24 Jun 2023, 1 commit
  9. 21 Jun 2023, 9 commits