1. 13 Aug 2021: 2 commits
    • bitmap: extend comment to bitmap_print_bitmask/list_to_buf · 3b35f2a6
      Authored by Yury Norov
      Extend comment to new function to warn potential users about caveats.
      Signed-off-by: Yury Norov <yury.norov@gmail.com>
      Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
      Link: https://lore.kernel.org/r/20210806110251.560-6-song.bao.hua@hisilicon.com
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • cpumask: introduce cpumap_print_list/bitmask_to_buf to support large bitmask and list · 1fae5629
      Authored by Tian Tao
      The existing cpumap_print_to_pagebuf() is used by the CPU topology
      code and other drivers to export a hexadecimal bitmask or a decimal
      list to userspace via the sysfs ABI.
      
      Right now, those drivers use a normal attribute for this kind of ABI.
      A normal attribute typically has a show entry like this:
      
      static ssize_t example_dev_show(struct device *dev,
                      struct device_attribute *attr, char *buf)
      {
      	...
      	return cpumap_print_to_pagebuf(true, buf, &pmu_mmdc->cpu);
      }

      The show entry of a normal attribute has no offset and count
      parameters, so the file is limited to a single page.
      
      The cpumap_print_to_pagebuf() API works well for this kind of normal
      attribute, which has a buf parameter but no offset or count:
      
      static inline ssize_t
      cpumap_print_to_pagebuf(bool list, char *buf, const struct cpumask *mask)
      {
      	return bitmap_print_to_pagebuf(list, buf, cpumask_bits(mask),
      				       nr_cpu_ids);
      }
      
      The problem is that once we have many CPUs, the bitmask or list can
      exceed one page. The list format in particular can be as complex as
      0,3,5,7,9,..., and there is no simple way to know its exact size in
      advance.
      
      It turns out that bin_attribute is a way to break this limit. Its
      show entry looks like this:
      static ssize_t
      example_bin_attribute_show(struct file *filp, struct kobject *kobj,
                   struct bin_attribute *attr, char *buf,
                   loff_t offset, size_t count)
      {
      	...
      }
      
      With the offset and count parameters, the sysfs ABI can support file
      sizes of more than one page. For example, offset could be >= 4096.
      
      This patch introduces cpumap_print_bitmask/list_to_buf() and their
      underlying bitmap infrastructure, bitmap_print_bitmask/list_to_buf(),
      so that those drivers can move to bin_attribute and support large
      bitmasks and lists. The corresponding parameters, such as offset and
      count, are passed from bin_attribute through to the new API.
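      The offset/count windowing that such a show entry performs can be
      sketched in plain user-space C. This is an illustration only, not the
      kernel API: window_read() is a hypothetical name, and the sketch
      assumes the full string has already been formatted somewhere.

```c
#include <assert.h>
#include <string.h>
#include <sys/types.h>

/* Hypothetical user-space sketch of what a bin_attribute ->show()
 * does with its offset/count parameters: expose one window of a
 * string that may be larger than a single page. */
static ssize_t window_read(char *dst, size_t count, off_t offset,
                           const char *full, size_t full_len)
{
	if (offset < 0 || (size_t)offset >= full_len)
		return 0;			/* read past EOF: nothing to copy */
	if (count > full_len - (size_t)offset)
		count = full_len - (size_t)offset;
	memcpy(dst, full + offset, count);	/* copy only the requested window */
	return (ssize_t)count;
}
```

      Userspace reads the file repeatedly at increasing offsets until a
      read returns 0, which is how file contents beyond one page become
      reachable.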
      
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Cc: Stefano Brivio <sbrivio@redhat.com>
      Cc: Alexander Gordeev <agordeev@linux.ibm.com>
      Cc: "Ma, Jianpeng" <jianpeng.ma@intel.com>
      Cc: Yury Norov <yury.norov@gmail.com>
      Cc: Valentin Schneider <valentin.schneider@arm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Daniel Bristot de Oliveira <bristot@redhat.com>
      Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
      Signed-off-by: Tian Tao <tiantao6@hisilicon.com>
      Signed-off-by: Barry Song <song.bao.hua@hisilicon.com>
      Link: https://lore.kernel.org/r/20210806110251.560-2-song.bao.hua@hisilicon.com
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  2. 12 May 2021: 1 commit
  3. 11 May 2021: 1 commit
  4. 05 May 2021: 2 commits
  5. 09 Mar 2021: 3 commits
  6. 17 Oct 2020: 1 commit
  7. 13 Aug 2020: 1 commit
  8. 11 Jun 2020: 1 commit
  9. 18 May 2020: 1 commit
  10. 21 Apr 2020: 1 commit
  11. 04 Feb 2020: 2 commits
    • lib: rework bitmap_parse() · 2d626158
      Authored by Yury Norov
      bitmap_parse() is inefficient and full of opaque variables and
      open-coded parts, which makes it hard to understand and use. This
      rework includes:
      
      - remove the bitmap_shift_left() call from the loop. That call makes
        the complexity of the algorithm O(nbits^2). In the suggested
        approach the input string is parsed in reverse, so no shifts are
        needed;
      
      - relax the requirement of a single comma and no whitespace between
        chunks. This is useful in scripting, and it aligns with
        bitmap_parselist();
      
      - split bitmap_parse() into small, readable helpers;
      
      - calculate the end of the input line explicitly at the beginning,
        so callers of bitmap_parse() don't have to.
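      The reverse-direction idea can be illustrated with a small
      user-space sketch. This is not the kernel implementation: the helper
      name and the 64-bit limit are assumptions made for the example.
      Walking the string from its last hex digit to its first places each
      nibble directly at its final bit position, so the accumulated value
      never has to be shifted, and commas or whitespace between chunks are
      simply skipped, as the relaxed grammar allows.

```c
#include <assert.h>
#include <ctype.h>
#include <stddef.h>
#include <string.h>

/* Illustrative reverse-direction hex parser (not the kernel's):
 * each nibble lands at its final position, no whole-value shifts. */
static int parse_hex_reverse(const char *s, size_t len,
                             unsigned long long *out)
{
	unsigned long long val = 0;
	unsigned int bit = 0;

	while (len--) {
		char c = s[len];	/* walk from the last character backwards */

		if (c == ',' || isspace((unsigned char)c))
			continue;	/* delimiters between chunks are tolerated */
		if (!isxdigit((unsigned char)c) || bit >= 64)
			return -1;	/* bad digit or overflow for this sketch */

		unsigned long long nibble =
			isdigit((unsigned char)c)
				? (unsigned long long)(c - '0')
				: (unsigned long long)(tolower((unsigned char)c) - 'a' + 10);
		val |= nibble << bit;	/* place directly, never shift old bits */
		bit += 4;
	}
	*out = val;
	return 0;
}
```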
      
      Link: http://lkml.kernel.org/r/20200102043031.30357-6-yury.norov@gmail.com
      Signed-off-by: Yury Norov <yury.norov@gmail.com>
      Cc: Amritha Nambiar <amritha.nambiar@intel.com>
      Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Miklos Szeredi <mszeredi@redhat.com>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: Steffen Klassert <steffen.klassert@secunet.com>
      Cc: "Tobin C . Harding" <tobin@kernel.org>
      Cc: Vineet Gupta <vineet.gupta1@synopsys.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Willem de Bruijn <willemb@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • lib: make bitmap_parse_user a wrapper on bitmap_parse · e66eda06
      Authored by Yury Norov
      Currently we parse user data byte by byte, which overcomplicates the
      parsing algorithm. There are no performance-critical users of
      bitmap_parse_user(), so we can copy the user data into a kernel
      buffer and simply call bitmap_parse(). This rework lets us unify and
      simplify bitmap_parse() and bitmap_parse_user(), which is done in
      the following patch.
      
      Link: http://lkml.kernel.org/r/20200102043031.30357-5-yury.norov@gmail.com
      Signed-off-by: Yury Norov <yury.norov@gmail.com>
      Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
      Cc: Amritha Nambiar <amritha.nambiar@intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Miklos Szeredi <mszeredi@redhat.com>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: Steffen Klassert <steffen.klassert@secunet.com>
      Cc: "Tobin C . Harding" <tobin@kernel.org>
      Cc: Vineet Gupta <vineet.gupta1@synopsys.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Willem de Bruijn <willemb@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  12. 27 Jan 2020: 1 commit
  13. 05 Dec 2019: 1 commit
  14. 25 Jul 2019: 1 commit
  15. 19 Jun 2019: 1 commit
  16. 15 May 2019: 4 commits
  17. 04 Jan 2019: 1 commit
    • Remove 'type' argument from access_ok() function · 96d4f267
      Authored by Linus Torvalds
      Nobody has actually used the type (VERIFY_READ vs VERIFY_WRITE) argument
      of the user address range verification function since we got rid of the
      old racy i386-only code to walk page tables by hand.
      
      It existed because the original 80386 would not honor the write protect
      bit when in kernel mode, so you had to do COW by hand before doing any
      user access.  But we haven't supported that in a long time, and these
      days the 'type' argument is a purely historical artifact.
      
      A discussion about extending 'user_access_begin()' to do the range
      checking resulted in this patch, because there is no way we're going
      to move the old VERIFY_xyz interface to that model.  And it's best
      done at the end of the merge window when I've done most of my merges,
      so let's just get this done once and for all.
      
      This patch was mostly done with a sed-script, with manual fix-ups for
      the cases that weren't of the trivial 'access_ok(VERIFY_xyz' form.
      
      There were a couple of notable cases:
      
       - csky still had the old "verify_area()" name as an alias.
      
       - the iter_iov code had magical hardcoded knowledge of the actual
         values of VERIFY_{READ,WRITE} (not that they mattered, since nothing
         really used it)
      
       - microblaze used the type argument for a debug printout
      
      but other than those oddities this should be a total no-op patch.
      
      I tried to fix up all architectures, did fairly extensive grepping for
      access_ok() uses, and the changes are trivial, but I may have missed
      something.  Any missed conversion should be trivially fixable, though.
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  18. 31 Oct 2018: 3 commits
  19. 23 Aug 2018: 1 commit
  20. 02 Aug 2018: 1 commit
  21. 08 Jun 2018: 1 commit
  22. 06 Apr 2018: 1 commit
  23. 07 Feb 2018: 2 commits
    • bitmap: replace bitmap_{from,to}_u32array · 3aa56885
      Authored by Yury Norov
      Replace bitmap_{from,to}_u32array with bitmap_{from,to}_arr32 across
      the kernel. In addition:
      * __check_eq_bitmap() now takes a single nbits argument.
      * __check_eq_u32_array is not used in the new test but may be used
        in the future, so it is not removed here, just annotated __used.
      
      Tested on arm64 and 32-bit BE mips.
      
      [arnd@arndb.de: perf: arm_dsu_pmu: convert to bitmap_from_arr32]
        Link: http://lkml.kernel.org/r/20180201172508.5739-2-ynorov@caviumnetworks.com
      [ynorov@caviumnetworks.com: fix net/core/ethtool.c]
        Link: http://lkml.kernel.org/r/20180205071747.4ekxtsbgxkj5b2fz@yury-thinkpad
      Link: http://lkml.kernel.org/r/20171228150019.27953-2-ynorov@caviumnetworks.com
      Signed-off-by: Yury Norov <ynorov@caviumnetworks.com>
      Signed-off-by: NArnd Bergmann <arnd@arndb.de>
      Cc: Ben Hutchings <ben@decadent.org.uk>
      Cc: David Decotigny <decot@googlers.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Matthew Wilcox <mawilcox@microsoft.com>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: Heiner Kallweit <hkallweit1@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • bitmap: new bitmap_copy_safe and bitmap_{from,to}_arr32 · c724f193
      Authored by Yury Norov
      This patchset replaces bitmap_{to,from}_u32array with simpler, more
      standard-looking copy-like functions.
      
      bitmap_from_u32array() takes 4 arguments (bitmap_to_u32array is
      similar):
       - unsigned long *bitmap, the destination;
       - unsigned int nbits, the length of the destination bitmap, in bits;
       - const u32 *buf, the source; and
       - unsigned int nwords, the length of the source buffer, in 32-bit
         words.
      
      The function's description details it as:
      * copy min(nbits, 32*nwords) bits from @buf to @bitmap, remaining
      * bits between nword and nbits in @bitmap (if any) are cleared.
      
      Having two size arguments looks unneeded and potentially dangerous.
      
      It is unneeded because the user of a copy-like function should
      normally take care of the destination's size and make it big enough
      to fit the source data.
      
      And it is dangerous because the function may hide an error when the
      user doesn't provide a big enough bitmap: data is silently dropped.
      
      That's why all copy-like functions take a single size argument, and
      I don't see any reason to make bitmap_from_u32array() different.
      
      One exception that comes to mind is strncpy(), which also takes the
      size of the destination, but that is strongly motivated by the
      possibility of broken strings in the source.  This is not the case
      for bitmap_{from,to}_u32array().
      
      There are not many real users of bitmap_{from,to}_u32array(), and
      they all clearly provide a destination size matched to the source
      size, so the additional functionality is not used in practice.
      For example:
      bitmap_from_u32array(to->link_modes.supported,
      		__ETHTOOL_LINK_MODE_MASK_NBITS,
      		link_usettings.link_modes.supported,
      		__ETHTOOL_LINK_MODE_MASK_NU32);
      Where:
      #define __ETHTOOL_LINK_MODE_MASK_NU32 \
      	DIV_ROUND_UP(__ETHTOOL_LINK_MODE_MASK_NBITS, 32)
      
      In this patch, bitmap_copy_safe and bitmap_{from,to}_arr32 are introduced.
      
      'Safe' in bitmap_copy_safe() stands for clearing the unused bits
      beyond the last bit, up to the end of the last word.  This is useful
      for hardening the API when the bitmap is exposed to userspace.
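      As a rough user-space sketch of that clear-tail behaviour (the
      helper name is hypothetical; bitmap_from_arr32() is the real kernel
      interface): copy the source 32-bit words into the destination words,
      then zero every bit past nbits so nothing stale survives in the last
      word.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical user-space model of a clear-tail copy from a u32
 * array into 64-bit words: bits beyond nbits are always zeroed. */
static void from_arr32_sketch(uint64_t *bitmap, const uint32_t *buf,
                              unsigned int nbits)
{
	unsigned int nwords64 = (nbits + 63) / 64;

	memset(bitmap, 0, nwords64 * sizeof(uint64_t));
	for (unsigned int i = 0; i < (nbits + 31) / 32; i++)
		bitmap[i / 2] |= (uint64_t)buf[i] << (32 * (i % 2));
	if (nbits % 64)		/* clear the tail of the last word */
		bitmap[nwords64 - 1] &= ~0ULL >> (64 - nbits % 64);
}
```

      Zeroing the tail is exactly what prevents stale word contents from
      leaking when the bitmap is later copied out wholesale.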
      
      The bitmap_{from,to}_arr32 functions are replacements for
      bitmap_{from,to}_u32array.  They don't take the unneeded nwords
      argument, and so are simpler to implement and understand.
      
      This patch also suggests an optimization for 32-bit systems:
      aliasing bitmap_{from,to}_arr32 to bitmap_copy_safe.
      
      Another possible optimization is aliasing the 64-bit LE
      bitmap_{from,to}_arr32 to a more generic function.  But I didn't end
      up with a function that would be useful by itself and could serve as
      the 64-bit LE alias, the way bitmap_copy_safe() does.  So I
      preferred to leave things as they are.
      
      The following patch switches kernel to new API and introduces test for it.
      
      Discussion is here: https://lkml.org/lkml/2017/11/15/592
      
      [ynorov@caviumnetworks.com: rename bitmap_copy_safe to bitmap_copy_clear_tail]
        Link: http://lkml.kernel.org/r/20180201172508.5739-3-ynorov@caviumnetworks.com
      Link: http://lkml.kernel.org/r/20171228150019.27953-1-ynorov@caviumnetworks.com
      Signed-off-by: Yury Norov <ynorov@caviumnetworks.com>
      Cc: Ben Hutchings <ben@decadent.org.uk>
      Cc: David Decotigny <decot@googlers.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Matthew Wilcox <mawilcox@microsoft.com>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  24. 20 Oct 2017: 1 commit
  25. 09 Sep 2017: 1 commit
    • lib/bitmap.c: make bitmap_parselist() thread-safe and much faster · 0a5ce083
      Authored by Yury Norov
      The current implementation of bitmap_parselist() uses a static
      variable to save local state while setting bits in the bitmap.  That
      is obviously wrong in a multiprocessor environment.  Fortunately,
      it's possible to rewrite this portion of code to avoid the static
      variable.
      
      It is also possible to set bits in the mask per range with
      bitmap_set(), rather than per bit with set_bit() as is done now,
      which is much faster.
      
      The important side effect of this change is that setting bits in
      this function is no longer per-bit atomic and is less
      memory-ordered.  This is because set_bit() guarantees the order of
      memory accesses, while bitmap_set() does not.  I think that is an
      advantage of the new approach, because bitmap_parselist() is
      intended to initialise bit arrays, and the user should protect the
      whole bitmap during initialisation if needed.  Protecting individual
      bits is expensive and useless here.  Also, other range-oriented
      functions in lib/bitmap.c don't worry much about atomicity.
      
      With all that, setting 2k bits in a map with a pattern like
      0-2047:128/256 becomes ~50 times faster after applying the patch in
      my test environment (arm64 hosted on qemu).
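      The reason per-range setting wins can be sketched in user-space C
      (the helper below is illustrative, not the kernel's bitmap_set()): a
      range setter fills up to 64 bits with a single OR of a word-sized
      mask, where a per-bit loop performs one read-modify-write per bit.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative per-range setter on a plain word array: one store
 * covers up to 64 bits, instead of one store per bit. */
static void set_range(uint64_t *map, unsigned int start, unsigned int len)
{
	while (len) {
		unsigned int word = start / 64, off = start % 64;
		unsigned int span = 64 - off;	/* bits left in this word */

		if (span > len)
			span = len;
		uint64_t mask = (span == 64) ? ~0ULL
					     : ((1ULL << span) - 1) << off;
		map[word] |= mask;	/* single store for the whole span */
		start += span;
		len -= span;
	}
}
```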
      
      The second patch of the series adds a test for bitmap_parselist().
      It's not intended to cover all tricky cases, just to make sure I
      didn't screw anything up during the rework.
      
      Link: http://lkml.kernel.org/r/20170807225438.16161-1-ynorov@caviumnetworks.com
      Signed-off-by: Yury Norov <ynorov@caviumnetworks.com>
      Cc: Noam Camus <noamca@mellanox.com>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: Matthew Wilcox <mawilcox@microsoft.com>
      Cc: Mauro Carvalho Chehab <mchehab@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  26. 11 Jul 2017: 1 commit
  27. 03 Apr 2017: 1 commit
  28. 12 Oct 2016: 1 commit
    • lib/bitmap.c: enhance bitmap syntax · 2d13e6ca
      Authored by Noam Camus
      Today there are platforms with many CPUs (up to 4K).  Trying to boot
      only part of the CPUs may result in a very long string.
      
      For example, take the NPS platform, part of arch/arc.  This platform
      is an SMP system with 256 cores, each with 16 HW threads (an SMT
      machine), where each HW thread appears as a CPU to the kernel.  That
      gives a total of 4K CPUs.  When one tries to boot only part of the
      HW threads from each core, the string representing the map may get
      long.  For example, if for the sake of performance we decided to
      boot only the first half of the HW threads of each core, the map
      would look like:
      0-7,16-23,32-39,...,4080-4087
      
      This patch introduces new syntax to accommodate such a use case.  It
      adds an optional suffix to a range of CPUs which selects, for a
      given modulus, the desired range of remainders, i.e.:

          <cpus range>:used_size/group_size
      
      For example, the above map can be described in the new syntax as:
      0-4095:8/16

      Note that this patch is backward compatible with the current syntax.
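      The selection rule behind the suffix can be sketched as a simple
      membership test (parsing is omitted and the helper name is purely
      illustrative): within the range, positions whose offset from the
      range start modulo group_size falls below used_size are selected.

```c
#include <assert.h>

/* Illustrative membership test for "first-last:used_size/group_size":
 * within each group_size-wide window of the range, only the first
 * used_size positions are selected.  E.g. 0-4095:8/16 selects CPUs
 * 0-7, 16-23, 32-39, ..., 4080-4087. */
static int in_mask(unsigned int cpu, unsigned int first, unsigned int last,
                   unsigned int used, unsigned int group)
{
	if (cpu < first || cpu > last)
		return 0;			/* outside the range entirely */
	return (cpu - first) % group < used;	/* first 'used' of each group */
}
```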
      
      [akpm@linux-foundation.org: rework documentation]
      Link: http://lkml.kernel.org/r/1473579629-4283-1-git-send-email-noamca@mellanox.com
      Signed-off-by: Noam Camus <noamca@mellanox.com>
      Cc: David Decotigny <decot@googlers.com>
      Cc: Ben Hutchings <ben@decadent.org.uk>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Pan Xinhui <xinhui@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  29. 15 Jul 2016: 1 commit