1. 21 Nov 2020, 1 commit
    • net/tls: missing received data after fast remote close · 20ffc7ad
      By Vadim Fedorenko
      If the TCP socket received a FIN after some data and the parser
      has not started before the caller reads, the caller will receive
      an empty buffer. This behavior differs from a plain TCP socket
      and leads to special treatment in user space.
      The flow that triggers the race is simple. The server sends a small
      amount of data right after the connection is configured to use TLS,
      then closes the connection. The receiver sees the TLS handshake
      data and configures the TLS socket right after the Change Cipher
      Spec record. While the configuration is in progress, the TCP socket
      receives a small Application Data record, an Encrypted Alert record
      and a FIN packet, so it sets RCV_SHUTDOWN in sk_shutdown and the
      SK_DONE bit in sk_flags. The received data is not parsed upon
      arrival and is never sent to user space.
      
      The patch unpauses the parser directly if there is unparsed data
      in the TCP receive queue.
      
      Fixes: fcf4793e ("tls: check RCV_SHUTDOWN in tls_wait_data")
      Signed-off-by: Vadim Fedorenko <vfedorenko@novek.ru>
      Link: https://lore.kernel.org/r/1605801588-12236-1-git-send-email-vfedorenko@novek.ru
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
  2. 17 Nov 2020, 1 commit
  3. 25 Sep 2020, 1 commit
    • net/tls: race causes kernel panic · 38f7e1c0
      By Rohit Maheshwari
      BUG: kernel NULL pointer dereference, address: 00000000000000b8
       #PF: supervisor read access in kernel mode
       #PF: error_code(0x0000) - not-present page
       PGD 80000008b6fef067 P4D 80000008b6fef067 PUD 8b6fe6067 PMD 0
       Oops: 0000 [#1] SMP PTI
       CPU: 12 PID: 23871 Comm: kworker/12:80 Kdump: loaded Tainted: G S
       5.9.0-rc3+ #1
       Hardware name: Supermicro X10SRA-F/X10SRA-F, BIOS 2.1 03/29/2018
       Workqueue: events tx_work_handler [tls]
       RIP: 0010:tx_work_handler+0x1b/0x70 [tls]
       Code: dc fe ff ff e8 16 d4 a3 f6 66 0f 1f 44 00 00 0f 1f 44 00 00 55 53 48 8b
       6f 58 48 8b bd a0 04 00 00 48 85 ff 74 1c 48 8b 47 28 <48> 8b 90 b8 00 00 00 83
       e2 02 75 0c f0 48 0f ba b0 b8 00 00 00 00
       RSP: 0018:ffffa44ace61fe88 EFLAGS: 00010286
       RAX: 0000000000000000 RBX: ffff91da9e45cc30 RCX: dead000000000122
       RDX: 0000000000000001 RSI: ffff91da9e45cc38 RDI: ffff91d95efac200
       RBP: ffff91da133fd780 R08: 0000000000000000 R09: 000073746e657665
       R10: 8080808080808080 R11: 0000000000000000 R12: ffff91dad7d30700
       R13: ffff91dab6561080 R14: 0ffff91dad7d3070 R15: ffff91da9e45cc38
       FS:  0000000000000000(0000) GS:ffff91dad7d00000(0000) knlGS:0000000000000000
       CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
       CR2: 00000000000000b8 CR3: 0000000906478003 CR4: 00000000003706e0
       DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
       DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
       Call Trace:
        process_one_work+0x1a7/0x370
        worker_thread+0x30/0x370
        ? process_one_work+0x370/0x370
        kthread+0x114/0x130
        ? kthread_park+0x80/0x80
        ret_from_fork+0x22/0x30
      
      tls_sw_release_resources_tx() waits for encrypt_pending, which
      can race, so we need changes similar to those in commit
      0cada332 here as well.
      
      Fixes: a42055e8 ("net/tls: Add support for async encryption of records for performance")
      Signed-off-by: Rohit Maheshwari <rohitm@chelsio.com>
      Acked-by: Jakub Kicinski <kuba@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  4. 08 Aug 2020, 1 commit
    • net/tls: allow MSG_CMSG_COMPAT in sendmsg · 1c3b63f1
      By Rouven Czerwinski
      Trying to use ktls on a system with a 32-bit userspace and a 64-bit
      kernel results in an EOPNOTSUPP error during sendmsg:
      
        setsockopt(3, SOL_TLS, TLS_TX, …, 40) = 0
        sendmsg(3, …, msg_flags=0}, 0) = -1 EOPNOTSUPP (Operation not supported)
      
      The tls_sw implementation does strict flag checking and does not allow
      the MSG_CMSG_COMPAT flag, which is set if the message comes in through
      the compat syscall.
      
      This patch adds MSG_CMSG_COMPAT to the flag check to allow the usage of
      the TLS SW implementation on systems using the compat syscall path.
      
      Note that the same check is present in the sendmsg path for the TLS
      device implementation, however the flag hasn't been added there for lack
      of testing hardware.
      Signed-off-by: Rouven Czerwinski <r.czerwinski@pengutronix.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  5. 17 Jul 2020, 1 commit
  6. 02 Jun 2020, 1 commit
    • bpf: Fix running sk_skb program types with ktls · e91de6af
      By John Fastabend
      KTLS uses a stream parser to collect TLS messages and send them to
      the upper layer tls receive handler. This ensures the tls receiver
      has a full TLS header to parse when it is run. However, when a
      socket has BPF_SK_SKB_STREAM_VERDICT program attached before KTLS
      is enabled we end up with two stream parsers running on the same
      socket.
      
      The result is that both try to run on the same socket. First the
      KTLS stream parser runs and calls read_sock(), which calls
      tcp_read_sock(), which in turn calls tcp_rcv_skb(). This dequeues
      the skb from the sk_receive_queue. When this is done, the KTLS
      code invokes the data_ready() callback which, because we stacked
      KTLS on top of the BPF stream verdict program, has been replaced
      with sk_psock_start_strp(). This in turn kicks the stream parser
      again and eventually does the same thing KTLS did above, calling
      into tcp_rcv_skb() and dequeuing an skb from the sk_receive_queue.
      
      At this point the data stream is broken. Part of the stream was
      handled by the KTLS side; other bytes may have been handled by
      the BPF side. Generally this results in either missing data or,
      more likely, a "Bad Message" complaint from the kTLS receive
      handler, as the BPF program steals some bytes meant to be in a
      TLS header and/or the TLS header length is no longer correct.
      
      We've already broken the idealized model where we can stack ULPs
      in any order with generic callbacks on the TX side to handle this.
      So in this patch we do the same thing, but for the RX side. We add
      a sk_psock_strp_enabled() helper so TLS can learn a BPF verdict
      program is running and add a tls_sw_has_ctx_rx() helper so BPF
      side can learn there is a TLS ULP on the socket.
      
      Then on the BPF side we omit calling our stream parser to avoid
      breaking the data stream for the KTLS receiver. On the KTLS side
      we call BPF_SK_SKB_STREAM_VERDICT once the KTLS receiver is done
      with the packet but before it posts the msg to userspace. This
      gives us symmetry between the TX and RX halves and IMO makes it
      usable again. On the TX side we process packets in the order
      BPF -> TLS -> TCP, and on the receive side in the reverse order
      TCP -> TLS -> BPF.
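
      The ownership rule above can be sketched as a tiny model (names
      are illustrative, loosely inspired by sk_psock_strp_enabled() and
      tls_sw_has_ctx_rx(); not the kernel's actual API):

      ```c
      /* Hedged sketch of the post-fix arrangement: when the TLS ULP is
       * on the socket, the BPF side omits its stream parser and only
       * the KTLS parser dequeues from sk_receive_queue; the BPF verdict
       * then runs on the decrypted msg. */
      #include <stdbool.h>

      enum rx_parser { RX_PARSER_BPF, RX_PARSER_TLS };

      /* Which component may run a stream parser on this socket's
       * sk_receive_queue. */
      static enum rx_parser rx_parser_owner(bool tls_rx_ctx)
      {
          return tls_rx_ctx ? RX_PARSER_TLS : RX_PARSER_BPF;
      }

      /* The BPF verdict still runs, but only after KTLS has decrypted
       * the record and before the msg is posted to user space. */
      static bool bpf_verdict_after_decrypt(bool strp_enabled, bool tls_rx_ctx)
      {
          return strp_enabled && tls_rx_ctx;
      }
      ```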
      
      Discovered while testing OpenSSL 3.0 Alpha2.0 release.
      
      Fixes: d829e9c4 ("tls: convert to generic sk_msg interface")
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/159079361946.5745.605854335665044485.stgit@john-Precision-5820-Tower
  7. 26 May 2020, 1 commit
    • net/tls: fix race condition causing kernel panic · 0cada332
      By Vinay Kumar Yadav
      tls_sw_recvmsg() and tls_decrypt_done() can be run concurrently.
      // tls_sw_recvmsg()
      	if (atomic_read(&ctx->decrypt_pending))
      		crypto_wait_req(-EINPROGRESS, &ctx->async_wait);
      	else
      		reinit_completion(&ctx->async_wait.completion);
      
      //tls_decrypt_done()
        	pending = atomic_dec_return(&ctx->decrypt_pending);
      
        	if (!pending && READ_ONCE(ctx->async_notify))
        		complete(&ctx->async_wait.completion);
      
      Consider the scenario tls_decrypt_done() is about to run complete()
      
      	if (!pending && READ_ONCE(ctx->async_notify))
      
      and tls_sw_recvmsg() reads decrypt_pending == 0, does
      reinit_completion(), then tls_decrypt_done() runs complete(). This
      sequence of execution results in a wrong completion. Consequently,
      the next decrypt request will not wait for completion; eventually,
      on connection close, the crypto resources are freed and there is
      no way to handle the pending decrypt response.
      
      This race condition can be avoided by making atomic_read()
      mutually exclusive with atomic_dec_return()/complete(). Introduce
      a spinlock to ensure the mutual exclusion.
      
      Addressed similar problem in tx direction.
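
      A minimal userspace model of the fixed synchronization, assuming a
      pthread spinlock stands in for the added kernel spinlock (all
      names are illustrative):

      ```c
      /* The added lock makes the pending-check/reinit in recvmsg
       * mutually exclusive with the dec/complete in the decrypt
       * callback, so a concurrent complete() can no longer be lost. */
      #include <pthread.h>
      #include <stdbool.h>

      struct decrypt_ctx {
          pthread_spinlock_t lock;  /* models the added spinlock */
          int  pending;             /* in-flight async decrypt requests */
          bool completed;           /* models the completion being signalled */
      };

      static void decrypt_ctx_init(struct decrypt_ctx *ctx)
      {
          pthread_spin_init(&ctx->lock, PTHREAD_PROCESS_PRIVATE);
          ctx->pending = 0;
          ctx->completed = false;
      }

      /* tls_decrypt_done()-like path: dec and complete under the lock. */
      static void decrypt_done_model(struct decrypt_ctx *ctx)
      {
          pthread_spin_lock(&ctx->lock);
          if (--ctx->pending == 0)
              ctx->completed = true;        /* complete() */
          pthread_spin_unlock(&ctx->lock);
      }

      /* tls_sw_recvmsg()-like path: the read and the reinit are now one
       * atomic step. Returns whether the caller must wait. */
      static bool recvmsg_check_model(struct decrypt_ctx *ctx)
      {
          bool must_wait;

          pthread_spin_lock(&ctx->lock);
          must_wait = ctx->pending > 0;
          if (!must_wait)
              ctx->completed = false;       /* reinit_completion() */
          pthread_spin_unlock(&ctx->lock);
          return must_wait;
      }
      ```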
      
      v1->v2:
      - More readable commit message.
      - Corrected the lock to fix new race scenario.
      - Removed barrier which is not needed now.
      
      Fixes: a42055e8 ("net/tls: Add support for async encryption of records for performance")
      Signed-off-by: Vinay Kumar Yadav <vinay.yadav@chelsio.com>
      Reviewed-by: Jakub Kicinski <kuba@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  8. 22 May 2020, 2 commits
  9. 28 Apr 2020, 2 commits
    • net/tls: Fix sk_psock refcnt leak when in tls_data_ready() · 62b4011f
      By Xiyu Yang
      tls_data_ready() invokes sk_psock_get(), which returns a reference of
      the specified sk_psock object to "psock" with increased refcnt.
      
      When tls_data_ready() returns, local variable "psock" becomes invalid,
      so the refcount should be decreased to keep refcount balanced.
      
      The reference counting issue happens in one exception handling path of
      tls_data_ready(). When "psock->ingress_msg" is empty but "psock" is not
      NULL, the function forgets to decrease the refcnt increased by
      sk_psock_get(), causing a refcnt leak.
      
      Fix this issue by calling sk_psock_put() on all paths when "psock" is
      not NULL.
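
      The balanced get/put pattern can be sketched as a userspace model
      (struct and function names are illustrative):

      ```c
      /* Every successful psock_get() must be balanced by a psock_put()
       * on every exit path, including the one where the ingress queue
       * is empty - the path that previously leaked. */
      #include <stdatomic.h>
      #include <stdbool.h>
      #include <stddef.h>

      struct psock_model {
          atomic_int refcnt;
          bool ingress_empty;
      };

      static struct psock_model *psock_get(struct psock_model *p)
      {
          if (p)
              atomic_fetch_add(&p->refcnt, 1);
          return p;
      }

      static void psock_put(struct psock_model *p)
      {
          atomic_fetch_sub(&p->refcnt, 1);
      }

      /* Fixed tls_data_ready()-like path. */
      static void data_ready_model(struct psock_model *candidate)
      {
          struct psock_model *psock = psock_get(candidate);

          if (psock != NULL) {
              if (!psock->ingress_empty) {
                  /* hand the queued data to the psock ... */
              }
              psock_put(psock);  /* the fix: put on this path too */
          }
      }
      ```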
      Signed-off-by: Xiyu Yang <xiyuyang19@fudan.edu.cn>
      Signed-off-by: Xin Tan <tanxin.ctf@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net/tls: Fix sk_psock refcnt leak in bpf_exec_tx_verdict() · 095f5614
      By Xiyu Yang
      bpf_exec_tx_verdict() invokes sk_psock_get(), which returns a reference
      of the specified sk_psock object to "psock" with increased refcnt.
      
      When bpf_exec_tx_verdict() returns, local variable "psock" becomes
      invalid, so the refcount should be decreased to keep refcount balanced.
      
      The reference counting issue happens in one exception handling path
      of bpf_exec_tx_verdict(). When "policy" is NULL but "psock" is not
      NULL, the function forgets to decrease the refcnt increased by
      sk_psock_get(), causing a refcnt leak.
      
      Fix this issue by calling sk_psock_put() on this error path before
      bpf_exec_tx_verdict() returns.
      Signed-off-by: Xiyu Yang <xiyuyang19@fudan.edu.cn>
      Signed-off-by: Xin Tan <tanxin.ctf@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  10. 16 Jan 2020, 3 commits
    • bpf: Sockmap/tls, fix pop data with SK_DROP return code · 7361d448
      By John Fastabend
      When the user returns SK_DROP we need to reset the number of copied
      bytes to indicate to the user that the bytes were dropped, not
      sent. If we don't reset the copied arg, sendmsg will return as if
      those bytes were copied, giving the user a positive return value.
      
      This works as expected today except in the case where the user also
      pops bytes. In the pop case the sg.size is reduced but we don't correctly
      account for this when copied bytes is reset. The popped bytes are not
      accounted for and we return a small positive value potentially confusing
      the user.
      
      The reason this happens is a typo: we do the wrong comparison when
      accounting for popped bytes. In this fix notice that the if/else
      is not needed, and that we have a similar problem if we push data,
      except it's not visible to the user because if delta is larger
      than sg.size we return a negative value, so it appears as an error
      regardless.
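
      The corrected accounting can be modelled in one line (variable
      names are illustrative, not the kernel's):

      ```c
      /* Hedged model: 'copied' bytes were taken from the user, the
       * verdict program popped 'popped' of them, and 'staged' remain in
       * the sg ring. On SK_DROP everything staged and popped is
       * discarded, so the value reported to the user must shrink by
       * both - not just by the staged bytes. */
      static int copied_after_drop(int copied, int staged, int popped)
      {
          return copied - (staged + popped);
      }
      ```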
      
      Fixes: 7246d8ed ("bpf: helper to pop data from messages")
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Jonathan Lemon <jonathan.lemon@gmail.com>
      Cc: stable@vger.kernel.org
      Link: https://lore.kernel.org/bpf/20200111061206.8028-9-john.fastabend@gmail.com
    • bpf: Sockmap/tls, skmsg can have wrapped skmsg that needs extra chaining · 9aaaa568
      By John Fastabend
      It's possible, through a set of push, pop and apply helper calls,
      to construct an skmsg (which is just a ring of scatterlist
      elements) with the start value larger than the end value. For
      example,

            end       start
        |_0_|_1_| ... |_n_|_n+1_|

      where end points at 1 and start points at n, so that the valid
      elements are the set {n, n+1, 0, 1}.

      Currently, because we don't build the correct chain, only {n, n+1}
      will be sent. This adds a check and an sg_chain call to correctly
      submit the above to the crypto and tls send path.
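
      The wrap detection can be sketched as follows, assuming a ring of
      NR_SLOTS slots where elements run from start up to (but not
      including) end; constants and names are illustrative:

      ```c
      /* Illustrative model of a wrapped scatterlist ring. */
      #include <stdbool.h>

      #define NR_SLOTS 8

      /* A wrapped ring (start numerically past end) needs an sg_chain
       * so both runs - [start, NR_SLOTS) and [0, end) - are submitted. */
      static bool needs_chain(int start, int end)
      {
          return start > end;
      }

      static int ring_elements(int start, int end)
      {
          return needs_chain(start, end) ? (NR_SLOTS - start) + end
                                         : end - start;
      }
      ```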
      
      Fixes: d3b18ad3 ("tls: add bpf support to sk_msg handling")
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Jonathan Lemon <jonathan.lemon@gmail.com>
      Cc: stable@vger.kernel.org
      Link: https://lore.kernel.org/bpf/20200111061206.8028-8-john.fastabend@gmail.com
    • bpf: Sockmap/tls, tls_sw can create a plaintext buf > encrypt buf · d468e477
      By John Fastabend
      It is possible to build a plaintext buffer using push helper that is larger
      than the allocated encrypt buffer. When this record is pushed to crypto
      layers this can result in a NULL pointer dereference because the crypto
      API expects the encrypt buffer is large enough to fit the plaintext
      buffer. Kernel splat below.
      
      To resolve this, catch the cases where this can happen and split
      the buffer into two records to send individually. Unfortunately,
      there is still one case to handle, where the split creates a
      zero-sized buffer. In this case we merge the buffers and unmark
      the split. This happens when apply is zero and the user pushed
      data beyond the encrypt buffer. This fixes the original case as
      well, because the split allocated an encrypt buffer larger than
      the plaintext buffer and the merge simply moves the pointers
      around, so we now have a reference to the new (larger) encrypt
      buffer.

      Perhaps it's not ideal, but it seems the best solution for a fixes
      branch and avoids handling these two cases, (a) apply that needs a
      split and (b) the non-apply case. These are edge cases anyway, so
      optimizing them seems unnecessary unless someone wants to later in
      next branches.
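
      The split/merge decision can be sketched as a small model; the
      struct, field names and the split-at-apply assumption are
      hypothetical simplifications of the commit's logic:

      ```c
      /* A record whose plaintext outgrew the encrypt buffer is split at
       * 'apply' bytes into two records; if a half would be zero-sized
       * (e.g. apply == 0), the split is unmade and the whole record is
       * sent as one, with a re-sized encrypt buffer. */
      struct record_plan {
          int first;    /* bytes in the first record */
          int second;   /* bytes in the second record (0 if merged) */
          int merged;   /* split was unmade */
      };

      static struct record_plan plan_oversized(int plaintext, int apply)
      {
          struct record_plan p = { apply, plaintext - apply, 0 };

          if (p.first == 0 || p.second == 0) {
              p.first = plaintext;  /* merge: one record, larger buffer */
              p.second = 0;
              p.merged = 1;
          }
          return p;
      }
      ```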
      
      [  306.719107] BUG: kernel NULL pointer dereference, address: 0000000000000008
      [...]
      [  306.747260] RIP: 0010:scatterwalk_copychunks+0x12f/0x1b0
      [...]
      [  306.770350] Call Trace:
      [  306.770956]  scatterwalk_map_and_copy+0x6c/0x80
      [  306.772026]  gcm_enc_copy_hash+0x4b/0x50
      [  306.772925]  gcm_hash_crypt_remain_continue+0xef/0x110
      [  306.774138]  gcm_hash_crypt_continue+0xa1/0xb0
      [  306.775103]  ? gcm_hash_crypt_continue+0xa1/0xb0
      [  306.776103]  gcm_hash_assoc_remain_continue+0x94/0xa0
      [  306.777170]  gcm_hash_assoc_continue+0x9d/0xb0
      [  306.778239]  gcm_hash_init_continue+0x8f/0xa0
      [  306.779121]  gcm_hash+0x73/0x80
      [  306.779762]  gcm_encrypt_continue+0x6d/0x80
      [  306.780582]  crypto_gcm_encrypt+0xcb/0xe0
      [  306.781474]  crypto_aead_encrypt+0x1f/0x30
      [  306.782353]  tls_push_record+0x3b9/0xb20 [tls]
      [  306.783314]  ? sk_psock_msg_verdict+0x199/0x300
      [  306.784287]  bpf_exec_tx_verdict+0x3f2/0x680 [tls]
      [  306.785357]  tls_sw_sendmsg+0x4a3/0x6a0 [tls]
      
      test_sockmap test signature to trigger bug,
      
      [TEST]: (1, 1, 1, sendmsg, pass,redir,start 1,end 2,pop (1,2),ktls,):
      
      Fixes: d3b18ad3 ("tls: add bpf support to sk_msg handling")
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Jonathan Lemon <jonathan.lemon@gmail.com>
      Cc: stable@vger.kernel.org
      Link: https://lore.kernel.org/bpf/20200111061206.8028-7-john.fastabend@gmail.com
  11. 11 Jan 2020, 2 commits
  12. 07 Dec 2019, 1 commit
  13. 29 Nov 2019, 4 commits
  14. 20 Nov 2019, 1 commit
  15. 07 Nov 2019, 2 commits
    • net/tls: add a TX lock · 79ffe608
      By Jakub Kicinski
      TLS TX needs to release and re-acquire the socket lock if send buffer
      fills up.
      
      TLS SW TX path currently depends on only allowing one thread to enter
      the function by the abuse of sk_write_pending. If another writer is
      already waiting for memory no new ones are allowed in.
      
      This has two problems:
       - writers don't wake other threads up when they leave the kernel;
         meaning that this scheme works for single extra thread (second
         application thread or delayed work) because memory becoming
         available will send a wake up request, but as Mallesham and
         Pooja report with larger number of threads it leads to threads
         being put to sleep indefinitely;
       - the delayed work does not get _scheduled_ but it may _run_ when
         other writers are present leading to crashes as writers don't
         expect state to change under their feet (same records get pushed
         and freed multiple times); it's hard to reliably bail from the
         work, however, because the mere presence of a writer does not
         guarantee that the writer will push pending records before exiting.
      
      Ensuring wakeups always happen will make the code basically open
      code a mutex. Just use a mutex.
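
      A userspace sketch of the resulting scheme, with a pthread mutex
      standing in for the added per-context lock (names illustrative):

      ```c
      /* A plain mutex serializes entry to the TX path. Unlocking wakes
       * a waiter, so no writer sleeps forever, and the delayed work can
       * never run concurrently with a writer. */
      #include <pthread.h>

      struct tls_tx_model {
          pthread_mutex_t lock;   /* models the added tx lock */
          int records_pushed;
      };

      static void tx_work_model(struct tls_tx_model *tx)
      {
          pthread_mutex_lock(&tx->lock);
          tx->records_pushed++;   /* push pending records; state is stable */
          pthread_mutex_unlock(&tx->lock);
      }
      ```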
      
      The TLS HW TX path does not have any locking (not even the
      sk_write_pending hack), yet it uses a per-socket sg_tx_data
      array to push records.
      
      Fixes: a42055e8 ("net/tls: Add support for async encryption of records for performance")
      Reported-by: Mallesham Jatharakonda <mallesh537@gmail.com>
      Reported-by: Pooja Trivedi <poojatrivedi@gmail.com>
      Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Reviewed-by: Simon Horman <simon.horman@netronome.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net/tls: don't pay attention to sk_write_pending when pushing partial records · 02b1fa07
      By Jakub Kicinski
      sk_write_pending being non-zero does not guarantee that a partial
      record will be pushed. If the thread waiting for memory times out,
      the pending record may get stuck.
      
      In the case of tls_device there is no path where a partial record
      is set and a writer is present in the first place. A partial
      record is set only in tls_push_sg(), and tls_push_sg() will return
      an error immediately. All tls_device callers of tls_push_sg()
      will return (and not wait for memory) if it failed.
      
      Fixes: a42055e8 ("net/tls: Add support for async encryption of records for performance")
      Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Reviewed-by: Simon Horman <simon.horman@netronome.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  16. 07 Oct 2019, 3 commits
  17. 06 Oct 2019, 1 commit
  18. 05 Sep 2019, 1 commit
  19. 22 Jul 2019, 3 commits
  20. 08 Jul 2019, 1 commit
  21. 13 Jun 2019, 1 commit
    • net: tls, correctly account for copied bytes with multiple sk_msgs · 648ee6ce
      By John Fastabend
      tls_sw_do_sendpage needs to return the total number of bytes sent
      regardless of how many sk_msgs are allocated. Unfortunately, copied
      (the value we return up the stack) is zeroed before each new sk_msg
      is allocated, so we only return the copied size of the last sk_msg
      used.
      
      The caller (splice, etc.) of sendpage will then believe only part
      of its data was sent and send the missing chunks again. However,
      because the data actually was sent the receiver will get multiple
      copies of the same data.
      
      To reproduce this do multiple sendfile calls with a length close to
      the max record size. This will in turn call splice/sendpage, sendpage
      may use multiple sk_msg in this case and then returns the incorrect
      number of bytes. This will cause splice to resend creating duplicate
      data on the receiver. Andre created a C program that can easily
      generate this case so we will push a similar selftest for this to
      bpf-next shortly.
      
      The fix is to _not_ zero the copied field so that the total sent
      bytes is returned.
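
      The accumulation can be modelled in a few lines (the per-msg cap
      and names are illustrative):

      ```c
      /* Model of the fix: 'copied' accumulates across however many
       * sk_msgs the send needs; the bug re-zeroed it per sk_msg and so
       * reported only the final chunk. */
      static int sendpage_total_model(int total, int per_msg_cap)
      {
          int copied = 0;    /* initialized once, never re-zeroed */

          while (total > 0) {
              int chunk = total < per_msg_cap ? total : per_msg_cap;

              copied += chunk;
              total -= chunk;
          }
          return copied;
      }
      ```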
      Reported-by: Steinar H. Gunderson <steinar+kernel@gunderson.no>
      Reported-by: Andre Tomt <andre@tomt.net>
      Tested-by: Andre Tomt <andre@tomt.net>
      Fixes: d829e9c4 ("tls: convert to generic sk_msg interface")
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  22. 12 Jun 2019, 3 commits
    • net/tls: add kernel-driven TLS RX resync · f953d33b
      By Jakub Kicinski
      TLS offload device may lose sync with the TCP stream if packets
      arrive out of order.  Drivers can currently request a resync at
      a specific TCP sequence number.  When a record is found starting
      at that sequence number kernel will inform the device of the
      corresponding record number.
      
      This requires the device to constantly scan the stream for a
      known pattern (constant bytes of the header) after sync is lost.
      
      This patch adds an alternative approach which is entirely under
      the control of the kernel.  The kernel tracks records it had to
      fully decrypt, even though the TLS socket is in TLS_HW mode.  If
      multiple records did not have any decrypted parts, it's a pretty
      strong indication that the device is out of sync.
      
      We choose the min number of fully encrypted records to be 2,
      which should hopefully be more than will get retransmitted at
      a time.
      
      After kernel decides the device is out of sync it schedules a
      resync request.  If the TCP socket is empty the resync gets
      performed immediately.  If socket is not empty we leave the
      record parser to resync when next record comes.
      
      Before resync in message parser we peek at the TCP socket and
      don't attempt the sync if the socket already has some of the
      next record queued.
      
      On resync failure (encrypted data continues to flow in) we retry
      with exponential backoff, up to once every 128 records (with a
      16k record that's at most once every 2M of data).
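
      The retry schedule can be sketched as follows; the constants come
      from the commit message, while the function itself is an
      illustrative reconstruction, not the kernel's code:

      ```c
      /* After each failed resync the retry interval (in records)
       * doubles, capped at once every 128 records (~2 MB of data at
       * 16 kB records). */
      static int resync_interval(int failures)
      {
          int interval = 1;

          while (failures-- > 0 && interval < 128)
              interval *= 2;
          return interval;
      }
      ```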
      Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net/tls: rename handle_device_resync() · fe58a5a0
      By Jakub Kicinski
      The name handle_device_resync() doesn't describe the function very
      well.  The function checks whether a resync should be issued upon
      parsing of a new record.
      Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net/tls: pass record number as a byte array · 89fec474
      By Jakub Kicinski
      The TLS offload code casts the record number to a u64.  The buffer
      should be aligned to 8 bytes, but it's actually a __be64, and the
      rest of the TLS code treats it as a big int.  Make the offload
      callbacks take a byte array; drivers can make the choice to do the
      ugly cast if they want to.
      
      Prepare for copying the record number onto the stack by
      defining a constant for max size of the byte array.
      Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
      Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  23. 05 Jun 2019, 2 commits
  24. 27 May 2019, 1 commit