1. 17 February 2020, 4 commits
    • openvswitch: add TTL decrement action · 744676e7
      Matteo Croce authored
      New action to decrement the TTL instead of setting it to a fixed value.
      This action decrements the TTL and, if it expires, either drops the
      packet or executes an action passed via a nested attribute.
      The default action on TTL expiry is to drop the packet.
      
      Supports both IPv4 and IPv6 via the ttl and hop_limit fields, respectively.
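      
      Not part of this patch, but to make concrete what "decrement the TTL"
      involves on the wire: for IPv4 the header checksum has to be patched as
      well, roughly as the existing ip_decrease_ttl() helper in
      include/net/ip.h does (for IPv6, hop_limit is simply decremented since
      there is no header checksum). A minimal userspace-style sketch:
      
          #include <stdint.h>
          #include <arpa/inet.h>
          #include <netinet/ip.h>

          /* Decrement the IPv4 TTL and fix the header checksum incrementally
           * (RFC 1624) instead of recomputing it.  The caller applies the
           * "TTL expired" action once ttl reaches 0. */
          static void dec_ipv4_ttl(struct iphdr *iph)
          {
                  uint32_t check = iph->check;

                  check += htons(0x0100);   /* TTL is the high byte of its 16-bit word */
                  iph->check = check + (check >= 0xffff);
                  iph->ttl--;
          }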
      
      Tested with a corresponding change in the userspace:
      
          # ovs-dpctl dump-flows
          in_port(2),eth(),eth_type(0x0800), packets:0, bytes:0, used:never, actions:dec_ttl{ttl<=1 action:(drop)},1
          in_port(1),eth(),eth_type(0x0800), packets:0, bytes:0, used:never, actions:dec_ttl{ttl<=1 action:(drop)},2
          in_port(1),eth(),eth_type(0x0806), packets:0, bytes:0, used:never, actions:2
          in_port(2),eth(),eth_type(0x0806), packets:0, bytes:0, used:never, actions:1
      
          # ping -c1 192.168.0.2 -t 42
          IP (tos 0x0, ttl 41, id 61647, offset 0, flags [DF], proto ICMP (1), length 84)
              192.168.0.1 > 192.168.0.2: ICMP echo request, id 386, seq 1, length 64
          # ping -c1 192.168.0.2 -t 120
          IP (tos 0x0, ttl 119, id 62070, offset 0, flags [DF], proto ICMP (1), length 84)
              192.168.0.1 > 192.168.0.2: ICMP echo request, id 388, seq 1, length 64
          # ping -c1 192.168.0.2 -t 1
          #
      Co-developed-by: Bindiya Kurle <bindiyakurle@gmail.com>
      Signed-off-by: Bindiya Kurle <bindiyakurle@gmail.com>
      Signed-off-by: Matteo Croce <mcroce@redhat.com>
      Acked-by: Pravin B Shelar <pshelar@ovn.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tcp-zerocopy: Return sk_err (if set) along with tcp receive zerocopy. · 33946518
      Arjun Roy authored
      This patchset is intended to reduce the number of extra system calls
      imposed by TCP receive zerocopy. For ping-pong RPC style workloads,
      this patchset has demonstrated a system call reduction of about 30%
      when coupled with userspace changes.
      
      For applications using epoll, returning sk_err along with the result
      of tcp receive zerocopy could remove the need for a follow-up
      recvmsg() call (which would return -EAGAIN) after a spurious wakeup.
      
      Consider a multi-threaded application using epoll. A thread may awaken
      with EPOLLIN but another thread may already be reading. The
      spuriously-awoken thread does not necessarily know that another thread
      'won'; if there is no data, it may instead have been woken because an
      error is pending. A zerocopy read that returns 0 bytes therefore needs
      to be followed up by a recvmsg() call to tell the two cases apart.
      
      Instead, we return sk_err directly with zerocopy, so the application
      can avoid this extra system call.
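      
      As a concrete illustration (a hedged userspace sketch, not code from
      this patch: the mmap() setup of the receive region is omitted and the
      struct member is assumed to be named err; check the uapi header of a
      kernel carrying this change):
      
          #include <sys/types.h>
          #include <sys/socket.h>
          #include <netinet/in.h>
          #include <linux/tcp.h>   /* TCP_ZEROCOPY_RECEIVE, struct tcp_zerocopy_receive */

          /* One zerocopy receive attempt after an EPOLLIN wakeup.  addr/len
           * describe a region previously mmap()ed on the TCP socket fd.
           * Returns bytes mapped, 0 if another thread already drained the
           * queue, or -1 if the inline error report makes a confirming
           * recvmsg() call unnecessary. */
          static ssize_t zc_try_read(int fd, void *addr, unsigned int len)
          {
                  struct tcp_zerocopy_receive zc = {
                          .address = (__u64)(unsigned long)addr,
                          .length  = len,
                  };
                  socklen_t zc_len = sizeof(zc);

                  if (getsockopt(fd, IPPROTO_TCP, TCP_ZEROCOPY_RECEIVE,
                                 &zc, &zc_len) != 0)
                          return -1;

                  if (zc.length == 0 && zc.err != 0)
                          return -1;    /* error reported inline, skip recvmsg() */

                  return zc.length;
          }
      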
      Signed-off-by: Arjun Roy <arjunroy@google.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tcp-zerocopy: Return inq along with tcp receive zerocopy. · c8856c05
      Arjun Roy authored
      This patchset is intended to reduce the number of extra system calls
      imposed by TCP receive zerocopy. For ping-pong RPC style workloads,
      this patchset has demonstrated a system call reduction of about 30%
      when coupled with userspace changes.
      
      For applications using edge-triggered epoll, returning inq along with
      the result of tcp receive zerocopy could remove the need for a
      follow-up recvmsg() call (which would return -EAGAIN) after a
      successful zerocopy read. Since a recvmsg() call would normally be
      needed after every successful small RPC read via TCP receive zerocopy,
      returning inq can cut the number of system calls roughly in half.
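      
      A hedged sketch of the read loop this enables under edge-triggered
      epoll (not code from the patch; the inq member name follows the
      description above and the mmap() setup of the receive region is
      omitted):
      
          #include <sys/socket.h>
          #include <netinet/in.h>
          #include <linux/tcp.h>

          /* Drain a socket after an edge-triggered EPOLLIN event.  The inq
           * value returned with each zerocopy receive tells us whether more
           * data is queued, so no trailing recvmsg() probe (returning
           * -EAGAIN) is needed before going back to epoll_wait(). */
          static void zc_drain(int fd, void *addr, unsigned int len)
          {
                  for (;;) {
                          struct tcp_zerocopy_receive zc = {
                                  .address = (__u64)(unsigned long)addr,
                                  .length  = len,
                          };
                          socklen_t zc_len = sizeof(zc);

                          if (getsockopt(fd, IPPROTO_TCP, TCP_ZEROCOPY_RECEIVE,
                                         &zc, &zc_len) != 0 || zc.length == 0)
                                  break;

                          /* ... consume the zc.length bytes mapped at addr ... */

                          if (zc.inq == 0)
                                  break;  /* queue empty: wait for the next edge */
                  }
          }
      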
      Signed-off-by: Arjun Roy <arjunroy@google.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • ptp_qoriq: drop the code of alarm · d71151a3
      Yangbo Lu authored
      The alarm function has never been supported by the PTP clock driver.
      The recommended solution, PHC + phc2sys + nanosleep, provides the
      best performance, so drop the alarm code from the ptp_qoriq driver.
      Signed-off-by: Yangbo Lu <yangbo.lu@nxp.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  2. 14 February 2020, 6 commits
  3. 13 February 2020, 1 commit
  4. 12 February 2020, 1 commit
  5. 11 February 2020, 2 commits
    • ACPI: PM: s2idle: Avoid possible race related to the EC GPE · e3728b50
      Rafael J. Wysocki authored
      It is theoretically possible for the ACPI EC GPE to be set after the
      s2idle_ops->wake() called from s2idle_loop() has returned and before
      the subsequent pm_wakeup_pending() check is carried out.  If that
      happens, the resulting wakeup event will cause the system to resume
      even though it may be a spurious one.
      
      To avoid that race, first make the ->wake() callback in struct
      platform_s2idle_ops return a bool value indicating whether or not
      to let the system resume and rearrange s2idle_loop() to use that
      value instead of the direct pm_wakeup_pending() call if ->wake() is
      present.
      
      Next, rework acpi_s2idle_wake() to process EC events and check
      pm_wakeup_pending() before re-arming the SCI for system wakeup
      to prevent it from triggering prematurely and add comments to
      that function to explain the rationale for the new code flow.
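      
      The resulting check in s2idle_loop() is roughly the following (a sketch
      reconstructed from the description above, not the literal diff; the
      surrounding loop and s2idle_enter() handling are omitted):
      
          /* After returning from the low-power idle state: */
          if (s2idle_ops && s2idle_ops->wake) {
                  /* ->wake() now returns bool: true means a genuine wakeup
                   * condition was found, so leave suspend-to-idle. */
                  if (s2idle_ops->wake())
                          break;
          } else if (pm_wakeup_pending()) {
                  break;
          }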
      
      Fixes: 56b99184 ("PM: sleep: Simplify suspend-to-idle control flow")
      Cc: 5.4+ <stable@vger.kernel.org> # 5.4+
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    • tracing: Consolidate trace() functions · 7276531d
      Tom Zanussi authored
      Move the checking, buffer reserve and buffer commit code in
      synth_event_trace_start/end() into inline functions
      __synth_event_trace_start/end() so they can also be used by
      synth_event_trace() and synth_event_trace_array(), and then have all
      those functions use them.
      
      Also, change synth_event_trace_state.enabled to disabled so it only
      needs to be set if the event is disabled, which is not normally the
      case.
      
      Link: http://lkml.kernel.org/r/b1f3108d0f450e58192955a300e31d0405ab4149.1581374549.git.zanussi@kernel.org
      Signed-off-by: Tom Zanussi <zanussi@kernel.org>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
  6. 09 February 2020, 1 commit
    • pipe: use exclusive waits when reading or writing · 0ddad21d
      Linus Torvalds authored
      This makes the pipe code use separate wait-queues and exclusive waiting
      for readers and writers, avoiding a nasty thundering herd problem when
      there are lots of readers waiting for data on a pipe (or, less commonly,
      lots of writers waiting for a pipe to have space).
      
      While this isn't a common occurrence in the traditional "use a pipe as a
      data transport" case, where you typically only have a single reader and
      a single writer process, there is one common special case: using a pipe
      as a source of "locking tokens" rather than for data communication.
      
      In particular, the GNU make jobserver code ends up using a pipe as a way
      to limit parallelism, where each job consumes a token by reading a byte
      from the jobserver pipe, and releases the token by writing a byte back
      to the pipe.
      
      This pattern is fairly traditional on Unix, and works very well, but
      will waste a lot of time waking up a lot of processes when only a single
      reader needs to be woken up when a writer releases a new token.
      
      A simplified test-case of just this pipe interaction is to create 64
      processes, and then pass a single token around between them (this
      test-case also intentionally passes another token that gets ignored to
      test the "wake up next" logic too, in case anybody wonders about it):
      
          #include <unistd.h>
      
          int main(int argc, char **argv)
          {
              int fd[2], counters[2];
      
              pipe(fd);
              counters[0] = 0;
              counters[1] = -1;
              write(fd[1], counters, sizeof(counters));
      
              /* 64 processes */
              fork(); fork(); fork(); fork(); fork(); fork();
      
              do {
                      int i;
                      read(fd[0], &i, sizeof(i));
                      if (i < 0)
                              continue;
                      counters[0] = i+1;
                      write(fd[1], counters, (1+(i & 1)) *sizeof(int));
              } while (counters[0] < 1000000);
              return 0;
          }
      
      and in a perfect world, passing that token around should only cause one
      context switch per transfer, when the writer of a token causes a
      directed wakeup of just a single reader.
      
      But with the "writer wakes all readers" model we traditionally had, on
      my test box the above case causes more than an order of magnitude more
      scheduling: instead of the expected ~1M context switches, "perf stat"
      shows
      
              231,852.37 msec task-clock                #   15.857 CPUs utilized
              11,250,961      context-switches          #    0.049 M/sec
                 616,304      cpu-migrations            #    0.003 M/sec
                   1,648      page-faults               #    0.007 K/sec
       1,097,903,998,514      cycles                    #    4.735 GHz
         120,781,778,352      instructions              #    0.11  insn per cycle
          27,997,056,043      branches                  #  120.754 M/sec
             283,581,233      branch-misses             #    1.01% of all branches
      
            14.621273891 seconds time elapsed
      
             0.018243000 seconds user
             3.611468000 seconds sys
      
      before this commit.
      
      After this commit, I get
      
                5,229.55 msec task-clock                #    3.072 CPUs utilized
               1,212,233      context-switches          #    0.232 M/sec
                 103,951      cpu-migrations            #    0.020 M/sec
                   1,328      page-faults               #    0.254 K/sec
          21,307,456,166      cycles                    #    4.074 GHz
          12,947,819,999      instructions              #    0.61  insn per cycle
           2,881,985,678      branches                  #  551.096 M/sec
              64,267,015      branch-misses             #    2.23% of all branches
      
             1.702148350 seconds time elapsed
      
             0.004868000 seconds user
             0.110786000 seconds sys
      
      instead. Much better.
      
      [ Note! This kernel improvement seems to be very good at triggering a
        race condition in the make jobserver (in GNU make 4.2.1) for me. It's
        a long known bug that was fixed back in June 2017 by GNU make commit
        b552b0525198 ("[SV 51159] Use a non-blocking read with pselect to
        avoid hangs.").
      
        But there wasn't a new release of GNU make until 4.3 on Jan 19 2020,
        so a number of distributions may still have the buggy version. Some
        have backported the fix to their 4.2.1 release, though, and even
        without the fix it's quite timing-dependent whether the bug actually
        is hit. ]
      
      Josh Triplett says:
       "I've been hammering on your pipe fix patch (switching to exclusive
        wait queues) for a month or so, on several different systems, and I've
        run into no issues with it. The patch *substantially* improves
        parallel build times on large (~100 CPU) systems, both with parallel
        make and with other things that use make's pipe-based jobserver.
      
        All current distributions (including stable and long-term stable
        distributions) have versions of GNU make that no longer have the
        jobserver bug"
      Tested-by: Josh Triplett <josh@joshtriplett.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  7. 08 February 2020, 13 commits
  8. 07 February 2020, 11 commits
    • nl80211: add src and dst addr attributes for control port tx/rx · 8c3ed7aa
      Markus Theil authored
      When using control port over nl80211 in AP mode with
      pre-authentication, APs need to forward frames to other
      APs identified by their MAC address. Before this patch,
      pre-auth frames reaching user space over the nl80211 control
      port no longer carried any information about the destination,
      which is needed for forwarding the frame to a controller or for
      injecting it back onto an Ethernet interface over an AF_PACKET
      socket.
      Analogous problems exist when forwarding pre-auth frames from
      AP -> STA.
      
      This patch therefore adds the NL80211_ATTR_DST_MAC and
      NL80211_ATTR_SRC_MAC attributes to provide more context
      information when forwarding.
      The respective attributes are optional on tx and included on rx,
      so existing software that is unaware of them is not affected.
      
      Software which wants to detect this feature can do so
      by checking against:
        NL80211_EXT_FEATURE_CONTROL_PORT_OVER_NL80211_MAC_ADDRS
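      
      For illustration, a hedged libnl sketch of the receive side (not code
      from this patch; NL80211_ATTR_DST_MAC and NL80211_ATTR_SRC_MAC only
      exist in headers that carry this change, and the forwarding itself is
      left as a comment):
      
          #include <stdint.h>
          #include <netlink/attr.h>
          #include <netlink/genl/genl.h>
          #include <linux/nl80211.h>

          /* libnl callback: when a control-port frame event carries the new
           * MAC address attributes, they tell user space where to forward
           * the pre-auth frame. */
          static int ctrl_port_rx(struct nl_msg *msg, void *arg)
          {
                  struct genlmsghdr *gnlh = nlmsg_data(nlmsg_hdr(msg));
                  struct nlattr *tb[NL80211_ATTR_MAX + 1];

                  if (gnlh->cmd != NL80211_CMD_CONTROL_PORT_FRAME)
                          return NL_SKIP;

                  nla_parse(tb, NL80211_ATTR_MAX, genlmsg_attrdata(gnlh, 0),
                            genlmsg_attrlen(gnlh, 0), NULL);

                  if (tb[NL80211_ATTR_DST_MAC] && tb[NL80211_ATTR_SRC_MAC]) {
                          const uint8_t *dst = nla_data(tb[NL80211_ATTR_DST_MAC]);
                          const uint8_t *src = nla_data(tb[NL80211_ATTR_SRC_MAC]);

                          /* e.g. hand the frame to a controller, or re-inject
                           * it on an Ethernet interface via an AF_PACKET
                           * socket, addressed to dst. */
                          (void)dst; (void)src; (void)arg;
                  }
                  return NL_SKIP;
          }
      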
      Signed-off-by: Markus Theil <markus.theil@tu-ilmenau.de>
      Link: https://lore.kernel.org/r/20200115125522.3755-1-markus.theil@tu-ilmenau.de
      [split into separate cfg80211/mac80211 patches]
      Signed-off-by: Johannes Berg <johannes.berg@intel.com>
    • mac80211: parse also the RSNXE IE · c0058df7
      Shaul Triebitz authored
      Parse also the RSN Extension IE when parsing the rest of the IEs.
      It will be used in a later patch.
      Signed-off-by: Shaul Triebitz <shaul.triebitz@intel.com>
      Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
      Link: https://lore.kernel.org/r/20200131111300.891737-21-luca@coelho.fi
      Signed-off-by: Johannes Berg <johannes.berg@intel.com>
    • ieee80211: fix 'the' doubling in comments · f93d6b21
      Zvika Yehudai authored
      Remove redundant 'the' where 'the the' was written.
      Signed-off-by: Zvika Yehudai <zvikayeh@gmail.com>
      Link: https://lore.kernel.org/r/20200203080823.24949-1-zvikayeh@gmail.com
      Signed-off-by: Johannes Berg <johannes.berg@intel.com>
    • cfg80211: Enhance the AKM advertizement to support per interface. · d6039a34
      Veerendranath Jakkam authored
      Commit ab4dfa20 ("cfg80211: Allow drivers to advertise supported AKM
      suites") introduced support for advertising the supported AKMs to
      userspace.
      
      This needs an enhancement to advertise the AKM support per interface
      type, specifically for cfg80211-based drivers that implement SME and use
      different mechanisms to support the AKMs for each interface type (e.g.,
      support for the SAE and OWE AKMs takes different paths in such drivers
      in STA/AP mode).
      
      This commit does exactly that, enhancing the earlier mechanism of
      advertising the AKMs per wiphy. Add new nl80211 attributes and a data
      structure to provide the supported AKMs per interface type to userspace.
      
      The AKMs advertised in akm_suites are default capabilities if not
      advertised for a specific interface type in iftype_akm_suites.
      Signed-off-by: Veerendranath Jakkam <vjakkam@codeaurora.org>
      Link: https://lore.kernel.org/r/20200126203032.21934-1-vjakkam@codeaurora.org
      Signed-off-by: Johannes Berg <johannes.berg@intel.com>
    • cfg80211: add no HE indication to the channel flag · 1e61d82c
      Haim Dreyfuss authored
      The regulatory domain might forbid HE operation.  Certain regulatory
      domains may restrict it for specific channels whereas others may do it
      for the whole regulatory domain.
      
      Add an option to indicate it in the channel flag.
      Signed-off-by: Haim Dreyfuss <haim.dreyfuss@intel.com>
      Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
      Link: https://lore.kernel.org/r/20200121081213.733757-1-luca@coelho.fi
      Signed-off-by: Johannes Berg <johannes.berg@intel.com>
    • mac80211: use more bits for ack_frame_id · f2b18bac
      Johannes Berg authored
      It turns out that this wasn't a good idea, I hit a test failure in
      hwsim due to this. That particular failure was easily worked around,
      but it raised questions: if an AP needs to, for example, send action
      frames to each connected station, the current limit is nowhere near
      enough (especially if those stations are sleeping and the frames are
      queued for a while.)
      
      Shuffle around some bits to make more room for ack_frame_id to allow
      up to 8192 queued up frames, that's enough for queueing 4 frames to
      each connected station, even at the maximum of 2007 stations on a
      single AP.
      
      We take the bits from band (which currently needs only 2, but I leave
      3 in case we add another band) and from hw_queue, which needs only 4
      since it has a limit of 16 queues.
      
      Fixes: 6912daed ("mac80211: Shrink the size of ack_frame_id to make room for tx_time_est")
      Signed-off-by: Johannes Berg <johannes.berg@intel.com>
      Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
      Link: https://lore.kernel.org/r/20200115122549.b9a4ef9f4980.Ied52ed90150220b83a280009c590b65d125d087c@changeid
      Signed-off-by: Johannes Berg <johannes.berg@intel.com>
    • fs: New zonefs file system · 8dcc1a9d
      Damien Le Moal authored
      zonefs is a very simple file system exposing each zone of a zoned block
      device as a file. Unlike a regular file system with zoned block device
      support (e.g. f2fs), zonefs does not hide the sequential write
      constraint of zoned block devices to the user. Files representing
      sequential write zones of the device must be written sequentially
      starting from the end of the file (append only writes).
      
      As such, zonefs is in essence closer to a raw block device access
      interface than to a full featured POSIX file system. The goal of zonefs
      is to simplify the implementation of zoned block device support in
      applications by replacing raw block device file accesses with a richer
      file API, avoiding relying on direct block device file ioctls which may
      be more obscure to developers. One example of this approach is the
      implementation of LSM (log-structured merge) tree structures (such as
      used in RocksDB and LevelDB) on zoned block devices by allowing SSTables
      to be stored in a zone file similarly to a regular file system rather
      than as a range of sectors of a zoned device. The introduction of the
      higher level construct "one file is one zone" can help reduce the
      amount of change needed in the application as well as ease adding
      support for different application programming languages.
      
      Zonefs on-disk metadata is reduced to an immutable super block to
      persistently store a magic number and optional feature flags and
      values. On mount, zonefs uses blkdev_report_zones() to obtain the device
      zone configuration and populates the mount point with a static file tree
      solely based on this information. E.g. file sizes come from the device
      zone type and write pointer offset managed by the device itself.
      
      The zone files created on mount have the following characteristics.
      1) Files representing zones of the same type are grouped together
         under a common sub-directory:
           * For conventional zones, the sub-directory "cnv" is used.
           * For sequential write zones, the sub-directory "seq" is used.
        These two directories are the only directories that exist in zonefs.
        Users cannot create other directories and cannot rename nor delete
        the "cnv" and "seq" sub-directories.
      2) The name of zone files is the number of the file within the zone
         type sub-directory, in order of increasing zone start sector.
      3) The size of conventional zone files is fixed to the device zone size.
         Conventional zone files cannot be truncated.
      4) The size of sequential zone files represents the file's zone write
         pointer position relative to the zone start sector. Truncating these
         files is allowed only down to 0, in which case, the zone is reset to
         rewind the zone write pointer position to the start of the zone, or
         up to the zone size, in which case the file's zone is transitioned
         to the FULL state (finish zone operation).
      5) Read and write operations to files are not allowed beyond the
         file zone size. Any access exceeding the zone size fails with
         the -EFBIG error.
      6) Creating, deleting, renaming or modifying any attribute of files and
         sub-directories is not allowed.
      7) There are no restrictions on the type of read and write operations
         that can be issued to conventional zone files. Buffered, direct and
         mmap read & write operations are accepted. For sequential zone files,
         there are no restrictions on read operations, but all write
         operations must be direct IO append writes. mmap write of sequential
         files is not allowed.
      
      Several optional features of zonefs can be enabled at format time.
      * Conventional zone aggregation: ranges of contiguous conventional
        zones can be aggregated into a single larger file instead of the
        default one file per zone.
      * File ownership: The owner UID and GID of zone files is by default 0
        (root) but can be changed to any valid UID/GID.
      * File access permissions: the default 640 access permissions can be
        changed.
      
      The mkzonefs tool is used to format zoned block devices for use with
      zonefs. This tool is available on Github at:
      
      git@github.com:damien-lemoal/zonefs-tools.git.
      
      zonefs-tools also includes a test suite which can be run against any
      zoned block device, including null_blk block device created with zoned
      mode.
      
      Example: the following formats a 15TB host-managed SMR HDD with 256 MB
      zones with the conventional zones aggregation feature enabled.
      
      $ sudo mkzonefs -o aggr_cnv /dev/sdX
      $ sudo mount -t zonefs /dev/sdX /mnt
      $ ls -l /mnt/
      total 0
      dr-xr-xr-x 2 root root     1 Nov 25 13:23 cnv
      dr-xr-xr-x 2 root root 55356 Nov 25 13:23 seq
      
      The sizes of the zone file sub-directories indicate the number of files
      existing for each zone type. In this example, there is only one
      conventional zone file (all conventional zones are aggregated under a
      single file).
      
      $ ls -l /mnt/cnv
      total 137101312
      -rw-r----- 1 root root 140391743488 Nov 25 13:23 0
      
      This aggregated conventional zone file can be used as a regular file.
      
      $ sudo mkfs.ext4 /mnt/cnv/0
      $ sudo mount -o loop /mnt/cnv/0 /data
      
      The "seq" sub-directory grouping files for sequential write zones has
      in this example 55356 zones.
      
      $ ls -lv /mnt/seq
      total 14511243264
      -rw-r----- 1 root root 0 Nov 25 13:23 0
      -rw-r----- 1 root root 0 Nov 25 13:23 1
      -rw-r----- 1 root root 0 Nov 25 13:23 2
      ...
      -rw-r----- 1 root root 0 Nov 25 13:23 55354
      -rw-r----- 1 root root 0 Nov 25 13:23 55355
      
      For sequential write zone files, the file size changes as data is
      appended at the end of the file, similarly to any regular file system.
      
      $ dd if=/dev/zero of=/mnt/seq/0 bs=4K count=1 conv=notrunc oflag=direct
      1+0 records in
      1+0 records out
      4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452219 s, 9.1 MB/s
      
      $ ls -l /mnt/seq/0
      -rw-r----- 1 root root 4096 Nov 25 13:23 /mnt/seq/0
      
      The written file can be truncated to the zone size, preventing any
      further write operation.
      
      $ truncate -s 268435456 /mnt/seq/0
      $ ls -l /mnt/seq/0
      -rw-r----- 1 root root 268435456 Nov 25 13:49 /mnt/seq/0
      
      Truncating the file back to size 0 frees the file's zone storage space
      and allows append-writes to the file to restart.
      
      $ truncate -s 0 /mnt/seq/0
      $ ls -l /mnt/seq/0
      -rw-r----- 1 root root 0 Nov 25 13:49 /mnt/seq/0
      
      Since files are statically mapped to zones on the disk, the number of
      blocks of a file as reported by stat() and fstat() indicates the size
      of the file zone.
      
      $ stat /mnt/seq/0
        File: /mnt/seq/0
        Size: 0       Blocks: 524288     IO Block: 4096   regular empty file
      Device: 870h/2160d      Inode: 50431       Links: 1
      Access: (0640/-rw-r-----)  Uid: (    0/    root)   Gid: (    0/  root)
      Access: 2019-11-25 13:23:57.048971997 +0900
      Modify: 2019-11-25 13:52:25.553805765 +0900
      Change: 2019-11-25 13:52:25.553805765 +0900
       Birth: -
      
      The number of blocks of the file ("Blocks") in units of 512B blocks
      gives the maximum file size of 524288 * 512 B = 256 MB, corresponding
      to the device zone size in this example. Of note is that the "IO block"
      field always indicates the minimum IO size for writes and corresponds
      to the device physical sector size.
      
      This code contains contributions from:
      * Johannes Thumshirn <jthumshirn@suse.de>,
      * Darrick J. Wong <darrick.wong@oracle.com>,
      * Christoph Hellwig <hch@lst.de>,
      * Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com> and
      * Ting Yao <tingyao@hust.edu.cn>.
      Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
    • fold struct fs_parameter_enum into struct constant_table · 5eede625
      Al Viro authored
      no real difference now
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    • fs_parse: get rid of ->enums · 2710c957
      Al Viro authored
      Don't do a single array; attach them to fsparam_enum() entry
      instead.  And don't bother trying to embed the names into those -
      it actually loses memory, with no real speedup worth mentioning.
      
      Simplifies validation as well.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    • Pass consistent param->type to fs_parse() · 0f89589a
      Al Viro authored
      As it is, vfs_parse_fs_string() makes "foo" and "foo=" indistinguishable;
      both get fs_value_is_string for ->type and NULL for ->string.  To make
      it even more unpleasant, that combination is impossible to produce with
      fsconfig().
      
      Much saner rules would be
              "foo"           => fs_value_is_flag, NULL
      	"foo="          => fs_value_is_string, ""
      	"foo=bar"       => fs_value_is_string, "bar"
      All cases are distinguishable, all results are expressible by fsconfig(),
      ->has_value checks are much simpler that way (to the point of the field
      being useless) and quite a few regressions go away (gfs2 has no business
      accepting -o nodebug=, for example).
      
      Partially based upon patches from Miklos.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    • net/mlx5: Deprecate usage of generic TLS HW capability bit · 61c00cca
      Tariq Toukan authored
      Deprecate the generic TLS cap bit, use the new TX-specific
      TLS cap bit instead.
      
      Fixes: a12ff35e ("net/mlx5: Introduce TLS TX offload hardware bits and structures")
      Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
      Reviewed-by: Eran Ben Elisha <eranbe@mellanox.com>
      Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  9. 06 February 2020, 1 commit
    • skbuff: fix a data race in skb_queue_len() · 86b18aaa
      Qian Cai authored
      sk_buff_head.qlen can be accessed concurrently, as noticed by KCSAN:
      
       BUG: KCSAN: data-race in __skb_try_recv_from_queue / unix_dgram_sendmsg
      
       read to 0xffff8a1b1d8a81c0 of 4 bytes by task 5371 on cpu 96:
        unix_dgram_sendmsg+0x9a9/0xb70 include/linux/skbuff.h:1821
      				 net/unix/af_unix.c:1761
        ____sys_sendmsg+0x33e/0x370
        ___sys_sendmsg+0xa6/0xf0
        __sys_sendmsg+0x69/0xf0
        __x64_sys_sendmsg+0x51/0x70
        do_syscall_64+0x91/0xb47
        entry_SYSCALL_64_after_hwframe+0x49/0xbe
      
       write to 0xffff8a1b1d8a81c0 of 4 bytes by task 1 on cpu 99:
        __skb_try_recv_from_queue+0x327/0x410 include/linux/skbuff.h:2029
        __skb_try_recv_datagram+0xbe/0x220
        unix_dgram_recvmsg+0xee/0x850
        ____sys_recvmsg+0x1fb/0x210
        ___sys_recvmsg+0xa2/0xf0
        __sys_recvmsg+0x66/0xf0
        __x64_sys_recvmsg+0x51/0x70
        do_syscall_64+0x91/0xb47
        entry_SYSCALL_64_after_hwframe+0x49/0xbe
      
      Since only the read side operates locklessly, load tearing could
      introduce a logic bug in unix_recvq_full(). Fix it by adding lockless
      variants of skb_queue_len() and unix_recvq_full() in which READ_ONCE()
      is used on the read side while WRITE_ONCE() is used on the write side,
      similar to commit d7d16a89 ("net: add skb_queue_empty_lockless()").
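      
      The reader side of the fix is essentially a READ_ONCE() wrapper; a
      sketch based on the description above (the exact kernel code may
      differ slightly):
      
          /* Lockless variant: safe to call without holding the queue lock,
           * pairing with WRITE_ONCE() updates of qlen on the write side. */
          static inline __u32 skb_queue_len_lockless(const struct sk_buff_head *list)
          {
                  return READ_ONCE(list->qlen);
          }

          /* Lockless variant of the AF_UNIX receive-queue check. */
          static int unix_recvq_full_lockless(const struct sock *sk)
          {
                  return skb_queue_len_lockless(&sk->sk_receive_queue) >
                         sk->sk_max_ack_backlog;
          }
      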
      Signed-off-by: Qian Cai <cai@lca.pw>
      Signed-off-by: David S. Miller <davem@davemloft.net>