1. 29 Apr 2016 (30 commits)
    • IB/hfi1: Remove module presence check outside pre-LNI checks · 623bba2d
      Easwar Hariharan committed
      The pre-LNI SerDes and channel tuning algorithm already checks for
      the module presence assertion for the relevant port types. The
      extraneous check removed by this patch was blocking link up for port
      types for which the module presence assertion is not relevant.
      Reviewed-by: Dean Luick <dean.luick@intel.com>
      Signed-off-by: Easwar Hariharan <easwar.hariharan@intel.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      623bba2d
    • IB/hfi1: Always turn on CDRs for low power QSFP modules · 145dd2b3
      Easwar Hariharan committed
      Clock and data recovery mechanisms (CDRs) in active QSFP modules
      can be turned on or off to improve the bit error rate observed on
      the channel. Signal integrity and bit error rate requirements require
      us to always turn on any CDRs present in low power cables (power
      dissipation 2.5W or lower). However, we adhere to the platform
      designer's settings (provided in the platform configuration) for
      higher power cables (dissipation 3.5W or higher) if the platform
      designer has determined that the platform requires the CDRs to be
      turned on (or off) and is capable of supplying and cooling the higher
      power modules.
      
      This patch also introduces the get_qsfp_power_class function to
      centralize the bit twiddling required to determine the QSFP power class
      across the code. Reusing this function improves the readability of code
      that depends on knowing the power class of the cable, such as the
      active and optical channel tuning algorithm.
      Reviewed-by: Dean Luick <dean.luick@intel.com>
      Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
      Signed-off-by: Easwar Hariharan <easwar.hariharan@intel.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      145dd2b3
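      A minimal sketch of the decision described above, assuming the power
      class has already been decoded from the module's memory map; the
      helper name and the class-to-wattage grouping follow SFF-8636
      conventions but are illustrative, not the driver's actual code:

          #include <stdbool.h>

          /*
           * Power classes 1-3 dissipate at most 2.5 W; class 4 and above
           * dissipate 3.5 W or more.  Low power modules always get their
           * CDRs enabled; for higher power modules the platform designer's
           * configured preference is honored.
           */
          static bool qsfp_cdr_should_be_on(int power_class,
                                            bool platform_wants_cdr)
          {
                  if (power_class <= 3)
                          return true;            /* <= 2.5 W: always on */
                  return platform_wants_cdr;      /* >= 3.5 W: follow config */
          }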
    • IB/hfi1: Check P_KEY for all sent packets from user mode · e38d1e4f
      Sebastian Sanchez committed
      Add a P_KEY check to the user-context mechanism for
      both PIO and SDMA. For PIO,
      SendCtxtCheckEnable.DisallowKDETHPackets is set by
      default and is cleared once the P_KEY is set.
      For SDMA, a software check is included. This change
      requires user processes to set the P_KEY before sending
      any packets; otherwise the sent packets will fail. The
      original submission didn't have this check, but it is
      required.
      Reviewed-by: Dean Luick <dean.luick@intel.com>
      Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
      Reviewed-by: Mitko Haralanov <mitko.haralanov@intel.com>
      Signed-off-by: Sebastian Sanchez <sebastian.sanchez@intel.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      e38d1e4f
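      A minimal sketch of the behavior described above, using a hypothetical
      per-context structure rather than the hfi1 register API
      (SendCtxtCheckEnable.DisallowKDETHPackets is modeled as a plain flag):

          #include <stdbool.h>
          #include <stdint.h>

          /* Hypothetical per-context send state. */
          struct user_send_ctxt {
                  uint16_t pkey;          /* 0 means "not set yet" */
                  bool disallow_kdeth;    /* PIO hardware gate, set by default */
          };

          /* Programming the P_KEY lifts the hardware gate for PIO sends. */
          static void set_ctxt_pkey(struct user_send_ctxt *c, uint16_t pkey)
          {
                  c->pkey = pkey;
                  c->disallow_kdeth = false;
          }

          /* SDMA path: software check mirroring what the hardware does for PIO. */
          static int check_sdma_pkey(const struct user_send_ctxt *c,
                                     uint16_t hdr_pkey)
          {
                  return (c->pkey != 0 && c->pkey == hdr_pkey) ? 0 : -1;
          }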
    • IB/hfi1: Adjust default MTU to be 10KB · ef699e84
      Sebastian Sanchez committed
      Increasing the default MTU size to 10KB improves performance
      for PSM. Change the default MTU to 10KB but constrain the
      Verbs MTU to 8KB. Also update the default MTU module parameter
      description to use HFI1_DEFAULT_MAX_MTU.
      Reviewed-by: Dean Luick <dean.luick@intel.com>
      Reviewed-by: Mitko Haralanov <mitko.haralanov@intel.com>
      Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
      Reviewed-by: Jubin John <jubin.john@intel.com>
      Signed-off-by: Sebastian Sanchez <sebastian.sanchez@intel.com>
      Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      ef699e84
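      A minimal sketch of the constraint described above; the constant names
      and values mirror the commit text (10 KB default, verbs capped at 8 KB)
      but are placeholders, not the driver's definitions:

          enum {
                  DEFAULT_MAX_MTU = 10240,   /* new driver default */
                  MAX_VERBS_MTU   = 8192     /* largest MTU verbs may use */
          };

          static int clamp_verbs_mtu(int requested)
          {
                  return requested > MAX_VERBS_MTU ? MAX_VERBS_MTU : requested;
          }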
    • IB/hfi1: Simplify init_qpmap_table() · 60d585ad
      Dean Luick committed
      Make init_qpmap_table() easier to understand by simplifying
      the loop indexing and writing each register when it is "full",
      removing the need for a follow-on register write.
      Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
      Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
      Signed-off-by: Dean Luick <dean.luick@intel.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      60d585ad
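      A minimal sketch of the "write each register when it is full" pattern
      described above; write_reg() and the table geometry are assumptions,
      not the hfi1 register layout (the real table's size is a multiple of
      eight, so no trailing partial write is needed):

          #include <stdint.h>
          #include <stdio.h>

          static void write_reg(unsigned int regno, uint64_t val)
          {
                  printf("reg %u <- 0x%016llx\n", regno, (unsigned long long)val);
          }

          /* Pack 8 one-byte QP-map entries per 64-bit register and write the
           * register exactly once, when its last byte has been filled. */
          static void init_qpmap_table(const uint8_t *map, unsigned int n)
          {
                  uint64_t reg = 0;
                  unsigned int i;

                  for (i = 0; i < n; i++) {
                          reg |= (uint64_t)map[i] << (8 * (i % 8));
                          if (i % 8 == 7) {       /* register is full */
                                  write_reg(i / 8, reg);
                                  reg = 0;
                          }
                  }
          }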
    • IB/hfi1: Correctly obtain the full service class · de882ff5
      Dean Luick committed
      The function hdr2sc was using an unshifted mask to obtain
      the 5th bit of the service class.  Correct the issue by using
      the shifted mask.
      Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
      Signed-off-by: Dean Luick <dean.luick@intel.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      de882ff5
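      An illustration of the bug class described above; the bit positions are
      made up, not the hfi1 header layout. To use a single status bit as
      bit 4 of the 5-bit service class, the masked value must be shifted
      down before being placed:

          #include <stdint.h>

          #define SC4_SMASK (UINT64_C(1) << 20)   /* assumed position of SC4 */
          #define SC4_SHIFT 20

          static uint8_t hdr2sc_example(uint8_t lrh_sc4bits, uint64_t status)
          {
                  uint8_t sc4 = (status & SC4_SMASK) >> SC4_SHIFT; /* 0 or 1 */

                  return (uint8_t)((sc4 << 4) | (lrh_sc4bits & 0xf));
          }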
    • IB/hfi1: Fix QOS rule mappings · 33a9eb52
      Dean Luick committed
      The QOS RSM rule mappings are off by one, referencing a kernel receive
      context that does not exist.
      
      Correctly start the QOS RSM map entries at FIRST_KERNEL_CONTEXT rather
      than MIN_KERNEL_KCTXTS.  Remove the cruft that hid this.
      
      Change the QP map table so all traffic not caught by QOS RSM goes to
      the control context rather than the first QOS context.
      
      Correct comments to match the actual code operation and intent.
      Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
      Signed-off-by: Dean Luick <dean.luick@intel.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      33a9eb52
    • IB/hfi1: Remove invalid QOS check · 35969d9b
      Dean Luick committed
      Remove an invalid compare of the number of QOS RSM map table entries
      against the number of physical receive contexts.  The RSM map table
      has its own size and has no relation to the number of physical receive
      contexts.
      Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
      Signed-off-by: Dean Luick <dean.luick@intel.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      35969d9b
    • IB/hfi1: Fix QOS num_vl bit width · 153d58cd
      Dean Luick committed
      The bit width for num_vls, n, needs to be calculated based on
      the number of VLs rounded up to the next power of two. Otherwise,
      num_vls of 3, 5, 6, and 7 will have misplaced QOS RSM map entries.
      Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
      Signed-off-by: Dean Luick <dean.luick@intel.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      153d58cd
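      A minimal sketch of the width calculation described above; in the
      kernel this would be ilog2() of roundup_pow_of_two(), and the plain-C
      helpers below are stand-ins so the example is self-contained:

          /* Smallest power of two >= x, for x >= 1. */
          static unsigned int roundup_pow2(unsigned int x)
          {
                  unsigned int p = 1;

                  while (p < x)
                          p <<= 1;
                  return p;
          }

          /* Bit width of the per-VL index field.  num_vls = 5 -> 3 bits;
           * using a narrower field is what misplaces the RSM map entries. */
          static unsigned int qos_vl_index_bits(unsigned int num_vls)
          {
                  unsigned int p = roundup_pow2(num_vls), n = 0;

                  while ((1u << n) < p)
                          n++;
                  return n;
          }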
    • IB/hfi1: Fix i2c resource reservation checks · f9c82a0b
      Dean Luick committed
      The i2c and qsfp read/write routines should check for the resource
      reservation of the incoming argument target rather than the implicit
      target of the hardware HFI.
      Reviewed-by: Easwar Hariharan <easwar.hariharan@intel.com>
      Signed-off-by: Dean Luick <dean.luick@intel.com>
      Signed-off-by: Jubin John <jubin.john@intel.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      f9c82a0b
    • IB/hfi1: Fix sysfs file offset usage · 4ee15859
      Dean Luick committed
      Two sysfs files do not pay attention to the file offset when
      reading data. Fix that.
      Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
      Signed-off-by: Dean Luick <dean.luick@intel.com>
      Signed-off-by: Jubin John <jubin.john@intel.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      4ee15859
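      An illustration of the bug class described above, written as a plain
      helper rather than the sysfs read callback itself: a read handler must
      honor the file offset instead of always copying from the start of the
      source buffer.

          #include <stddef.h>
          #include <string.h>

          static size_t read_at_offset(char *dst, size_t count, long long off,
                                       const char *src, size_t src_len)
          {
                  if (off < 0 || (size_t)off >= src_len)
                          return 0;               /* nothing left to read */
                  if (count > src_len - (size_t)off)
                          count = src_len - (size_t)off;
                  memcpy(dst, src + (size_t)off, count);  /* start at the offset */
                  return count;
          }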
    • IB/rdmavt,hfi1,qib: Fix memory leak · ea0e4ce3
      Jubin John committed
      rdi->ports has memory allocated in rvt_alloc_device(), but does not get
      freed because the hfi1 and qib drivers call ib_dealloc_device()
      directly instead of going through rdmavt. Add a rvt_dealloc_device()
      that frees rdi->ports and then calls ib_dealloc_device(). Switch hfi1
      and qib drivers to calling rvt_dealloc_device() instead of
      ib_dealloc_device() directly.
      Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
      Reviewed-by: Brian Welty <brian.welty@intel.com>
      Signed-off-by: Jubin John <jubin.john@intel.com>
      Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      ea0e4ce3
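      A sketch consistent with the commit text, not claimed to be the exact
      implementation: the layer that allocated rdi->ports is the one that
      frees it, and the drivers call the rdmavt wrapper instead of
      ib_dealloc_device() directly.

          /* In rdmavt (sketch): pairs with rvt_alloc_device(). */
          void rvt_dealloc_device(struct rvt_dev_info *rdi)
          {
                  kfree(rdi->ports);              /* allocated in rvt_alloc_device() */
                  ib_dealloc_device(&rdi->ibdev);
          }

          /* In the hfi1/qib teardown path (sketch):                        */
          /*   before: ib_dealloc_device(&rdi->ibdev);  - leaks rdi->ports  */
          /*   after:  rvt_dealloc_device(rdi);                             */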
    • IB/hfi1: Fix buffer cache races which may cause corruption · e88c9271
      Mitko Haralanov committed
      There are two possible causes for node/memory corruption, both
      of which are related to the cache eviction algorithm. One cause
      is the asynchronous nature of the MMU invalidation combined with
      the locking used when invalidating a node.
      
      The MMU invalidation routine would temporarily release the
      RB tree lock to avoid a deadlock. However, this would allow
      the eviction function to take the lock resulting in the removal
      of cache nodes.
      
      If the node being removed by the eviction code is the same as
      the node being invalidated, the result is use after free.
      
      The same is true in the other direction due to the temporary
      release of the eviction list lock in the eviction loop.
      
      Another corner case exists when dealing with the SDMA buffer
      cache that could cause memory corruption of kernel memory.
      The most common way in which this corruption exhibits itself
      is linked-list node corruption. In that case, the kernel will
      complain that a node with poisoned pointers is being removed.
      The fact that the pointers are already poisoned means that the
      node has already been removed from the list.
      
      The root cause of this corruption was a mishandling of the
      eviction list maintained by the driver. In order for this
      to happen, four conditions need to be satisfied:
      
         1. A node describing a user buffer already exists in the
            interval RB tree,
         2. The beginning of the current user buffer matches that
            node but is bigger. This will cause the node to be
            extended.
         3. The amount of cached buffers is close to or at the limit
            of the buffer cache size.
         4. The node has dropped close to the end of the eviction
            list. This will cause the node to be considered for
            eviction.
      
      If all of the above conditions have been satisfied, it is
      possible for the eviction algorithm to evict the current node,
      which will free the node without the driver knowing.
      
      To solve both issues described above:
         - the locking around the MMU invalidation loop and cache
           eviction loop has been improved so locks are not released in
           the loop body,
         - a new RB function is introduced which will "atomically" find
           and remove the matching node from the RB tree, preventing the
           MMU invalidation loop from touching it, and
         - the node being extended by the pin_vector_pages() function is
           removed from the eviction list prior to calling the eviction
           function.
      Reviewed-by: Dean Luick <dean.luick@intel.com>
      Signed-off-by: Mitko Haralanov <mitko.haralanov@intel.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      e88c9271
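      A minimal sketch of the third fix listed above, with hypothetical cache
      and node types (this shows the pattern, not the driver's code): the
      node being extended is taken off the eviction list before eviction
      runs, so the eviction pass cannot free the node the caller is still
      using.

          /* Assumed fields: cache->evict_lock, cache->evict_list,
           * node->evict_entry (a struct list_head). */
          static void extend_cached_node(struct sdma_cache *cache,
                                         struct cache_node *node,
                                         unsigned long extra_pages)
          {
                  spin_lock(&cache->evict_lock);
                  list_del_init(&node->evict_entry);      /* invisible to eviction */
                  spin_unlock(&cache->evict_lock);

                  if (cache_over_limit(cache, extra_pages))
                          evict_nodes(cache, extra_pages); /* cannot touch 'node' */

                  /* ... pin the additional pages for 'node' here ... */

                  spin_lock(&cache->evict_lock);
                  list_add_tail(&node->evict_entry, &cache->evict_list);
                  spin_unlock(&cache->evict_lock);
          }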
    • IB/hfi1: Extract and reinsert MMU RB node on lookup · f53af85e
      Mitko Haralanov committed
      The page pinning function, which also maintains the pin cache,
      behaves one of two ways when an exact buffer match is not found:
        1. If no node is found (a buffer with the same starting address
           is not found in the cache), a new node is created, the buffer
           pages are pinned, and the node is inserted into the RB tree, or
        2. If a node is found but the buffer in that node is a subset of
           the new user buffer, the node is extended with the new buffer
           pages.
      
      Both modes of operation require (re-)insertion into the interval RB
      tree.
      
      When the node being inserted is a new node, the operations are pretty
      simple. However, when the node is already existing and is being
      extended, special care must be taken.
      
      First, we want to guard against an asynchronous attempt to
      delete the node by the MMU invalidation notifier. The simplest way to
      do this is to remove the node from the RB tree, preventing the search
      algorithm from finding it.
      
      Second, the node needs to be re-inserted so it lands in the proper place
      in the tree and the tree is correctly re-balanced. This also requires
      the node to be removed from the RB tree.
      
      This commit adds the hfi1_mmu_rb_extract() function, which will search
      for a node in the interval RB tree matching an address and length and
      remove it from the RB tree if found. This allows both of the above
      special cases to be handled in a single step.
      Reviewed-by: Dean Luick <dean.luick@intel.com>
      Signed-off-by: Mitko Haralanov <mitko.haralanov@intel.com>
      Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      f53af85e
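      A minimal sketch of the extract idea described above, with hypothetical
      tree and lock names (not the hfi1_mmu_rb_extract() implementation):
      the lookup and the removal happen under one hold of the tree lock, so
      the MMU invalidation path can no longer find the node in between.

          static struct cache_node *mmu_rb_extract(struct cache *cache,
                                                   unsigned long addr,
                                                   unsigned long len)
          {
                  struct cache_node *node;

                  spin_lock(&cache->tree_lock);
                  node = rb_search(cache, addr, len);     /* assumed helper */
                  if (node)
                          rb_remove_node(cache, node);    /* still under the lock */
                  spin_unlock(&cache->tree_lock);

                  return node;    /* caller owns it; re-insert after extending */
          }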
    • IB/hfi1: Correctly compute node interval · de79093b
      Mitko Haralanov committed
      The computation of the interval of an interval RB node
      was incorrect, leading to data corruption due to the RB
      search algorithm not properly finding all of the RB nodes
      in an MMU invalidation interval.
      
      The problem stemmed from the fact that the beginning
      address of the node's range was being aligned to a page
      boundary. For certain buffer sizes, this would lead to
      an end address calculation that was off by one page.
      
      An important aspect of keeping the RB tree consistent is
      also updating the node's range when it is being extended.
      Reviewed-by: Dean Luick <dean.luick@intel.com>
      Signed-off-by: Mitko Haralanov <mitko.haralanov@intel.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      de79093b
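      A minimal, self-contained illustration of the off-by-one-page issue
      described above (4 KB pages assumed): when the start address is
      rounded down to a page boundary, the stored length must grow by the
      same amount, or the computed end of the range lands one page short.

          #define PG_SIZE 4096UL
          #define PG_MASK (~(PG_SIZE - 1))

          struct rb_range {
                  unsigned long addr;
                  unsigned long len;
          };

          static struct rb_range node_range(unsigned long vaddr, unsigned long len)
          {
                  struct rb_range r;

                  r.addr = vaddr & PG_MASK;           /* page-align the start... */
                  r.len  = len + (vaddr - r.addr);    /* ...and widen the length */
                  return r;                           /* end = r.addr + r.len    */
          }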
    • IB/hfi1: Protect the interval RB tree when cleaning up · 782f6697
      Mitko Haralanov committed
      The current implementation of the clean up function for
      the interval RB trees has two flaws which may cause
      problems when the function and the MMU notifier execute
      concurrently.
      
      The flaws were due to the fact that deregistration of the
      MMU callbacks was done after the tree was emptied and,
      furthermore, the tree was not being locked.
      
      This commit fixes both of these flaws by, first, switching
      the order of operations and, second, locking the tree while
      traversing it to prevent any other operations.
      Reviewed-by: Dean Luick <dean.luick@intel.com>
      Signed-off-by: Mitko Haralanov <mitko.haralanov@intel.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      782f6697
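      A minimal sketch of the corrected teardown order, with hypothetical
      helper names: deregister the MMU callbacks before emptying the tree,
      and hold the tree lock while traversing it.

          static void mmu_rb_teardown(struct cache *cache)
          {
                  unregister_mmu_callbacks(cache);  /* 1: no new async entrants */

                  spin_lock(&cache->tree_lock);     /* 2: exclude anything in flight */
                  remove_all_nodes_locked(cache);
                  spin_unlock(&cache->tree_lock);
          }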
    • IB/hfi1: Fix memory leak in user ExpRcv and SDMA · 0ad2d3d0
      Mitko Haralanov committed
      The driver had two memory leaks - one in the user
      expected receive code and one in SDMA buffer cache.
      
      The leak in the expected receive code only showed up
      when the user/admin had set ulimit sufficiently low
      and the driver did not have enough room in the cache
      before hitting the limit of allowed cachable memory.
      
      When this condition occurred, the driver returned
      early signaling userland that it needed to free some
      buffers to free up room in the cache.
      
      The bug was that the driver was not cleaning up
      allocated memory prior to returning early.
      
      The leak in the SDMA buffer cache could occur (even
      though it never did), when the insertion of a buffer
      node in the interval RB tree failed. In this case, the
      driver failed to unpin the pages of the node, instead
      erroneously returning success.
      Reviewed-by: Dean Luick <dean.luick@intel.com>
      Signed-off-by: Mitko Haralanov <mitko.haralanov@intel.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      0ad2d3d0
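      A minimal sketch of the SDMA-cache error path described above, with
      hypothetical helper names: if inserting the node into the interval RB
      tree fails, the pages it pinned have to be released before the error
      is propagated.

          static int cache_add_node(struct cache *cache, struct cache_node *node)
          {
                  int ret = rb_insert_node(cache, node);  /* assumed helper */

                  if (ret) {
                          unpin_node_pages(node);         /* previously leaked here */
                          free_node(node);
                          return ret;                     /* don't report success */
                  }
                  return 0;
          }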
    • IB/hfi1: Don't remove list entries if they are not in a list · 4787bc5e
      Mitko Haralanov committed
      The SDMA cache logic maintains an eviction list which is ordered
      by most recently used user buffers. Upon errors or buffer freeing,
      the list nodes were unconditionally being deleted. This would lead
      to list corruption warnings if the nodes were never inserted in the
      eviction list to begin with.
      
      This commit prevents this by checking that the nodes are already
      part of the eviction list.
      Reviewed-by: Dean Luick <dean.luick@intel.com>
      Signed-off-by: Mitko Haralanov <mitko.haralanov@intel.com>
      Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      4787bc5e
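      A minimal sketch of the guard described above, using the kernel's
      standard list primitives (the surrounding node type is hypothetical).
      It relies on the entry being initialized with INIT_LIST_HEAD() and
      unlinked with list_del_init(), so list_empty() on the entry itself is
      a valid "am I on a list?" test:

          #include <linux/list.h>

          static void node_remove_from_evict_list(struct list_head *evict_entry)
          {
                  if (!list_empty(evict_entry))
                          list_del_init(evict_entry);
          }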
    • IB/qib, IB/hfi1: Fix up UD loopback use of irq flags · 747f4d7a
      Mike Marciniszyn committed
      The dual lock patch moved locking around and missed an issue
      with handling irq flags when processing UD loopback
      packets.  This issue was revealed by smatch.
      
      Fix for both qib and hfi1 to pass the saved flags to the UD request
      builder and handle the changes correctly.
      
      Fixes: 46a80d62 ("IB/qib, staging/rdma/hfi1: add s_hlock for use in post send")
      Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
      Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
      Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      747f4d7a
    • IB/rdmavt: Fix adaptive pio hang · f39cc34d
      Mike Marciniszyn committed
      The RVT_S_WAIT_PIO_DRAIN flag was missing from
      the set of flags indicating a qp is waiting
      on a resource.
      
      This caused the sleep/wakeup for adaptive pio
      drain to lose a wakeup, "hanging" a QP.
      Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
      Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      f39cc34d
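      An illustration of the bug class described above; the flag names and
      values are made up, not rdmavt's definitions. If a wait flag is
      missing from the aggregate "any wait" mask, the wakeup path never sees
      that the QP is sleeping on it, and the QP hangs.

          #define S_WAIT_DMA        0x1u
          #define S_WAIT_PIO        0x2u
          #define S_WAIT_PIO_DRAIN  0x4u

          /* Before the fix, the drain bit was missing from this mask. */
          #define S_ANY_WAIT_IO (S_WAIT_DMA | S_WAIT_PIO | S_WAIT_PIO_DRAIN)

          static int qp_is_waiting_on_io(unsigned int s_flags)
          {
                  return (s_flags & S_ANY_WAIT_IO) != 0;
          }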
    • Merge branch 'k.o/for-4.6-rc' into testing/4.6 · e29bff46
      Doug Ledford committed
      e29bff46
    • Merge branch 'master' of... · d53e181c
      Doug Ledford committed
      Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux into testing/4.6
      d53e181c
    • IB/security: Restrict use of the write() interface · e6bd18f5
      Jason Gunthorpe committed
      The drivers/infiniband stack uses write() as a replacement for
      bi-directional ioctl().  This is not safe. There are ways to
      trigger write calls that result in the return structure that
      is normally written to user space being shunted off to user
      specified kernel memory instead.
      
      For the immediate repair, detect and deny suspicious accesses to
      the write API.
      
      For long term, update the user space libraries and the kernel API
      to something that doesn't present the same security vulnerabilities
      (likely a structured ioctl() interface).
      
      The impacted uAPI interfaces are generally only available if
      hardware from drivers/infiniband is installed in the system.
      Reported-by: Jann Horn <jann@thejh.net>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
      [ Expanded check to all known write() entry points ]
      Cc: stable@vger.kernel.org
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      e6bd18f5
    • IB/hfi1: Use kernel default llseek for ui device · 7723d8c2
      Dean Luick committed
      The ui device llseek had a mistake with SEEK_END and did
      not fully follow seek semantics.  Correct all this by
      using a kernel supplied function for fixed size devices.
      
      Cc: Al Viro <viro@ZenIV.linux.org.uk>
      Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
      Signed-off-by: Dean Luick <dean.luick@intel.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      7723d8c2
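      A minimal sketch of the approach described above: for a fixed-size
      device node, seeking is delegated to the kernel's fixed_size_llseek()
      helper instead of being open-coded. DEVICE_SIZE and the function name
      are placeholders, not the driver's symbols.

          #include <linux/fs.h>

          #define DEVICE_SIZE 4096        /* placeholder size of the device */

          static loff_t ui_device_llseek(struct file *filp, loff_t offset,
                                         int whence)
          {
                  return fixed_size_llseek(filp, offset, whence, DEVICE_SIZE);
          }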
    • IB/hfi1: Don't attempt to free resources if initialization failed · 94158442
      Mitko Haralanov committed
      Attempting to free resources which have not been allocated and
      initialized properly led to the following kernel backtrace:
      
          BUG: unable to handle kernel NULL pointer dereference at           (null)
          IP: [<ffffffffa09658fe>] unlock_exp_tids.isra.8+0x2e/0x120 [hfi1]
          PGD 852a43067 PUD 85d4a6067 PMD 0
          Oops: 0000 [#1] SMP
          CPU: 0 PID: 2831 Comm: osu_bw Tainted: G          IO 3.12.18-wfr+ #1
          task: ffff88085b15b540 ti: ffff8808588fe000 task.ti: ffff8808588fe000
          RIP: 0010:[<ffffffffa09658fe>]  [<ffffffffa09658fe>] unlock_exp_tids.isra.8+0x2e/0x120 [hfi1]
          RSP: 0018:ffff8808588ffde0  EFLAGS: 00010282
          RAX: 0000000000000000 RBX: ffff880858a31800 RCX: 0000000000000000
          RDX: ffff88085d971bc0 RSI: ffff880858a318f8 RDI: ffff880858a318c0
          RBP: ffff8808588ffe20 R08: 0000000000000000 R09: 0000000000000000
          R10: ffff88087ffd6f40 R11: 0000000001100348 R12: ffff880852900000
          R13: ffff880858a318c0 R14: 0000000000000000 R15: ffff88085d971be8
          FS:  00007f4674e83740(0000) GS:ffff88087f400000(0000) knlGS:0000000000000000
          CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
          CR2: 0000000000000000 CR3: 000000085c377000 CR4: 00000000001407f0
          Stack:
           ffffffffa0941a71 ffff880858a318f8 ffff88085d971bc0 ffff880858a31800
           ffff880852900000 ffff880858a31800 00000000003ffff7 ffff88085d971bc0
           ffff8808588ffe60 ffffffffa09663fc ffff8808588ffe60 ffff880858a31800
          Call Trace:
           [<ffffffffa0941a71>] ? find_mmu_handler+0x51/0x70 [hfi1]
           [<ffffffffa09663fc>] hfi1_user_exp_rcv_free+0x6c/0x120 [hfi1]
           [<ffffffffa0932809>] hfi1_file_close+0x1a9/0x340 [hfi1]
           [<ffffffff8116c189>] __fput+0xe9/0x270
           [<ffffffff8116c35e>] ____fput+0xe/0x10
           [<ffffffff81065707>] task_work_run+0xa7/0xe0
           [<ffffffff81002969>] do_notify_resume+0x59/0x80
           [<ffffffff814ffc1a>] int_signal+0x12/0x17
      
      This commit re-arranges the context initialization code in a way that
      would allow for context event flags to be used to determine whether
      the context has been successfully initialized.
      
      In turn, this can be used to skip the resource de-allocation if they
      were never allocated in the first place.
      
      Fixes: 3abb33ac ("staging/hfi1: Add TID cache receive init and free funcs")
      Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
      Signed-off-by: Mitko Haralanov <mitko.haralanov@intel.com>
      Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      94158442
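      A minimal sketch of the pattern described above, with a hypothetical
      flag and helpers: successful initialization is recorded in the
      context's event flags, and teardown only frees what was actually set
      up.

          #define CTXT_SETUP_DONE 0x1u    /* assumed event flag */

          static void ctxt_cleanup(struct user_ctxt *uctxt)
          {
                  if (!(uctxt->event_flags & CTXT_SETUP_DONE))
                          return;         /* nothing was allocated; skip freeing */

                  unlock_and_free_exp_tids(uctxt);
                  free_ctxt_resources(uctxt);
          }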
    • IB/hfi1: Fix missing lock/unlock in verbs drain callback · b9b06cb6
      Mike Marciniszyn committed
      The iowait_sdma_drained() callback lacked locking to
      protect the qp s_flags field.
      
      This causes the s_flags to be out of sync
      on multiple CPUs, potentially corrupting the s_flags.
      
      Fixes: a545f530 ("staging/rdma/hfi: fix CQ completion order issue")
      Reviewed-by: Sebastian Sanchez <sebastian.sanchez@intel.com>
      Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
      Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      b9b06cb6
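      A minimal sketch of the fix described above, with hypothetical QP
      field and helper names: s_flags is shared between CPUs, so the drain
      callback must take the QP's send lock before testing and clearing
      wait bits.

          static void iowait_sdma_drained_cb(struct my_qp *qp)
          {
                  unsigned long flags;

                  spin_lock_irqsave(&qp->s_lock, flags);
                  if (qp->s_flags & S_WAIT_DMA_DESC) {    /* assumed wait bit */
                          qp->s_flags &= ~S_WAIT_DMA_DESC;
                          wake_up_send_engine(qp);        /* assumed helper */
                  }
                  spin_unlock_irqrestore(&qp->s_lock, flags);
          }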
    • IB/rdmavt: Fix send scheduling · e6d2e017
      Jubin John committed
      call_send is used to determine whether to send immediately or schedule
      a send for later. The current logic in rdmavt is inverted and has a
      negative impact on the latency of the hfi1 and qib drivers. Fix this
      regression by correctly calling send immediately when call_send is set.
      Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
      Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
      Signed-off-by: Jubin John <jubin.john@intel.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      e6d2e017
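      A minimal sketch of the corrected dispatch, with hypothetical helper
      names: call_send means "send from the caller's context now"; the
      inverted test scheduled the send instead, adding latency.

          static void post_send_dispatch(struct my_qp *qp, int call_send)
          {
                  if (call_send)
                          do_send(qp);            /* send immediately */
                  else
                          schedule_send(qp);      /* defer to the send engine */
          }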
    • IB/hfi1: Prevent unpinning of wrong pages · 849e3e93
      Mitko Haralanov committed
      The routine used by the SDMA cache to handle already
      cached nodes can extend an already existing node.
      
      In its error handling code, the routine will unpin pages
      when not all pages of the buffer extension were pinned.
      
      There was a bug in that part of the routine, which would
      mistakenly unpin pages from the original set rather than
      the newly pinned pages.
      
      This commit fixes that bug by offsetting the page array
      to the proper place pointing at the beginning of the newly
      pinned pages.
      Reviewed-by: Dean Luick <dean.luick@intel.com>
      Signed-off-by: Mitko Haralanov <mitko.haralanov@intel.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      849e3e93
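      A minimal sketch of the corrected error path, with hypothetical names:
      when extending a cached node fails partway through, only the newly
      pinned tail of the page array is unpinned, not the original pages.

          static void undo_partial_extension(struct page **pages,
                                             unsigned int npages_old,
                                             unsigned int npinned_new)
          {
                  /* Offset past the original pages; unpin only the new ones. */
                  unpin_pages(pages + npages_old, npinned_new);
          }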
    • IB/hfi1: Fix deadlock caused by locking with wrong scope · de82bdff
      Mitko Haralanov committed
      The locking around the interval RB tree is designed to prevent
      access to the tree while it's being modified. The locking in its
      current form is too overzealous, which is causing a deadlock in
      certain cases with the following backtrace:
      
          Kernel panic - not syncing: Watchdog detected hard LOCKUP on cpu 0
          CPU: 0 PID: 5836 Comm: IMB-MPI1 Tainted: G           O 3.12.18-wfr+ #1
           0000000000000000 ffff88087f206c50 ffffffff814f1caa ffffffff817b53f0
           ffff88087f206cc8 ffffffff814ecd56 0000000000000010 ffff88087f206cd8
           ffff88087f206c78 0000000000000000 0000000000000000 0000000000001662
          Call Trace:
           <NMI>  [<ffffffff814f1caa>] dump_stack+0x45/0x56
           [<ffffffff814ecd56>] panic+0xc2/0x1cb
           [<ffffffff810d4370>] ? restart_watchdog_hrtimer+0x50/0x50
           [<ffffffff810d4432>] watchdog_overflow_callback+0xc2/0xd0
           [<ffffffff81109b4e>] __perf_event_overflow+0x8e/0x2b0
           [<ffffffff8110a714>] perf_event_overflow+0x14/0x20
           [<ffffffff8101c906>] intel_pmu_handle_irq+0x1b6/0x390
           [<ffffffff814f927b>] perf_event_nmi_handler+0x2b/0x50
           [<ffffffff814f8ad8>] nmi_handle.isra.3+0x88/0x180
           [<ffffffff814f8d39>] do_nmi+0x169/0x310
           [<ffffffff814f8177>] end_repeat_nmi+0x1e/0x2e
           [<ffffffff81272600>] ? unmap_single+0x30/0x30
           [<ffffffff814f780d>] ? _raw_spin_lock_irqsave+0x2d/0x40
           [<ffffffff814f780d>] ? _raw_spin_lock_irqsave+0x2d/0x40
           [<ffffffff814f780d>] ? _raw_spin_lock_irqsave+0x2d/0x40
           <<EOE>>  <IRQ>  [<ffffffffa056c4a8>] hfi1_mmu_rb_search+0x38/0x70 [hfi1]
           [<ffffffffa05919cb>] user_sdma_free_request+0xcb/0x120 [hfi1]
           [<ffffffffa0593393>] user_sdma_txreq_cb+0x263/0x350 [hfi1]
           [<ffffffffa057fad7>] ? sdma_txclean+0x27/0x1c0 [hfi1]
           [<ffffffffa0593130>] ? user_sdma_send_pkts+0x1710/0x1710 [hfi1]
           [<ffffffffa057fdd6>] sdma_make_progress+0x166/0x480 [hfi1]
           [<ffffffff810762c9>] ? ttwu_do_wakeup+0x19/0xd0
           [<ffffffffa0581c7e>] sdma_engine_interrupt+0x8e/0x100 [hfi1]
           [<ffffffffa0546bdd>] sdma_interrupt+0x5d/0xa0 [hfi1]
           [<ffffffff81097e57>] handle_irq_event_percpu+0x47/0x1d0
           [<ffffffff81098017>] handle_irq_event+0x37/0x60
           [<ffffffff8109aa5f>] handle_edge_irq+0x6f/0x120
           [<ffffffff810044af>] handle_irq+0xbf/0x150
           [<ffffffff8104c9b7>] ? irq_enter+0x17/0x80
           [<ffffffff8150168d>] do_IRQ+0x4d/0xc0
           [<ffffffff814f7c6a>] common_interrupt+0x6a/0x6a
           <EOI>  [<ffffffff81073524>] ? finish_task_switch+0x54/0xe0
           [<ffffffff814f56c6>] __schedule+0x3b6/0x7e0
           [<ffffffff810763a6>] __cond_resched+0x26/0x30
           [<ffffffff814f5eda>] _cond_resched+0x3a/0x50
           [<ffffffff814f4f82>] down_write+0x12/0x30
           [<ffffffffa0591619>] hfi1_release_user_pages+0x69/0x90 [hfi1]
           [<ffffffffa059173a>] sdma_rb_remove+0x9a/0xc0 [hfi1]
           [<ffffffffa056c00d>] __mmu_rb_remove.isra.5+0x5d/0x70 [hfi1]
           [<ffffffffa056c536>] hfi1_mmu_rb_remove+0x56/0x70 [hfi1]
           [<ffffffffa059427b>] hfi1_user_sdma_process_request+0x74b/0x1160 [hfi1]
           [<ffffffffa055c763>] hfi1_aio_write+0xc3/0x100 [hfi1]
           [<ffffffff8116a14c>] do_sync_readv_writev+0x4c/0x80
           [<ffffffff8116b58b>] do_readv_writev+0xbb/0x230
           [<ffffffff811a9da1>] ? fsnotify+0x241/0x320
           [<ffffffff81073524>] ? finish_task_switch+0x54/0xe0
           [<ffffffff8116b795>] vfs_writev+0x35/0x60
           [<ffffffff8116b8c9>] SyS_writev+0x49/0xc0
           [<ffffffff810cd876>] ? __audit_syscall_exit+0x1f6/0x2a0
           [<ffffffff814ff992>] system_call_fastpath+0x16/0x1b
      
      As evident from the backtrace above, the process was being put to sleep
      while holding the lock.
      
      Limiting the scope of the lock to only the RB tree operation fixes
      the above error, allowing for proper locking while still letting the
      process be put to sleep when needed.
      Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
      Reviewed-by: Dean Luick <dean.luick@intel.com>
      Signed-off-by: Mitko Haralanov <mitko.haralanov@intel.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      de82bdff
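      A minimal sketch of the narrowed locking, with hypothetical names: the
      spinlock is held only around the RB operation itself, and the page
      release, which may sleep, runs outside of it.

          static void remove_and_release(struct cache *cache, struct cache_node *node)
          {
                  spin_lock(&cache->tree_lock);
                  rb_remove_node(cache, node);    /* short, non-sleeping section */
                  spin_unlock(&cache->tree_lock);

                  release_user_pages(node);       /* may sleep: must run unlocked */
          }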
    • IB/hfi1: Prevent NULL pointer dereferences in caching code · f19bd643
      Mitko Haralanov committed
      There is a potential kernel crash when the MMU notifier calls the
      invalidation routines in the hfi1 pinned page caching code for sdma.
      
      The invalidation routine could call the remove callback
      for the node, which in turn ends up dereferencing the
      current task_struct to get a pointer to the mm_struct.
      However, the mm_struct pointer could be NULL resulting in
      the following backtrace:
      
          BUG: unable to handle kernel NULL pointer dereference at 00000000000000a8
          IP: [<ffffffffa041f75a>] sdma_rb_remove+0xaa/0x100 [hfi1]
          15
          task: ffff88085e66e080 ti: ffff88085c244000 task.ti: ffff88085c244000
          RIP: 0010:[<ffffffffa041f75a>]  [<ffffffffa041f75a>] sdma_rb_remove+0xaa/0x100 [hfi1]
          RSP: 0000:ffff88085c245878  EFLAGS: 00010002
          RAX: 0000000000000000 RBX: ffff88105b9bbd40 RCX: ffffea003931a830
          RDX: 0000000000000004 RSI: ffff88105754a9c0 RDI: ffff88105754a9c0
          RBP: ffff88085c245890 R08: ffff88105b9bbd70 R09: 00000000fffffffb
          R10: ffff88105b9bbd58 R11: 0000000000000013 R12: ffff88105754a9c0
          R13: 0000000000000001 R14: 0000000000000001 R15: ffff88105b9bbd40
          FS:  0000000000000000(0000) GS:ffff88107ef40000(0000) knlGS:0000000000000000
          CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
          CR2: 00000000000000a8 CR3: 0000000001a0b000 CR4: 00000000001407e0
          Stack:
           ffff88105b9bbd40 ffff88080ec481a8 ffff88080ec481b8 ffff88085c2458c0
           ffffffffa03fa00e ffff88080ec48190 ffff88080ed9cd00 0000000001024000
           0000000000000000 ffff88085c245920 ffffffffa03fa0e7 0000000000000282
          Call Trace:
           [<ffffffffa03fa00e>] __mmu_rb_remove.isra.5+0x5e/0x70 [hfi1]
           [<ffffffffa03fa0e7>] mmu_notifier_mem_invalidate+0xc7/0xf0 [hfi1]
           [<ffffffffa03fa143>] mmu_notifier_page+0x13/0x20 [hfi1]
           [<ffffffff81156dd0>] __mmu_notifier_invalidate_page+0x50/0x70
           [<ffffffff81140bbb>] try_to_unmap_one+0x20b/0x470
           [<ffffffff81141ee7>] try_to_unmap_anon+0xa7/0x120
           [<ffffffff81141fad>] try_to_unmap+0x4d/0x60
           [<ffffffff8111fd7b>] shrink_page_list+0x2eb/0x9d0
           [<ffffffff81120ab3>] shrink_inactive_list+0x243/0x490
           [<ffffffff81121491>] shrink_lruvec+0x4c1/0x640
           [<ffffffff81121641>] shrink_zone+0x31/0x100
           [<ffffffff81121b0f>] kswapd_shrink_zone.constprop.62+0xef/0x1c0
           [<ffffffff811229e3>] kswapd+0x403/0x7e0
           [<ffffffff811225e0>] ? shrink_all_memory+0xf0/0xf0
           [<ffffffff81068ac0>] kthread+0xc0/0xd0
           [<ffffffff81068a00>] ? insert_kthread_work+0x40/0x40
           [<ffffffff814ff8ec>] ret_from_fork+0x7c/0xb0
           [<ffffffff81068a00>] ? insert_kthread_work+0x40/0x40
      
      To correct this, the mm_struct passed to us by the MMU notifier is
      used (which is what should have been done to begin with). This avoids
      the broken dereferences and ensures that the correct mm_struct is used.
      Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
      Reviewed-by: Dean Luick <dean.luick@intel.com>
      Signed-off-by: Mitko Haralanov <mitko.haralanov@intel.com>
      Signed-off-by: Doug Ledford <dledford@redhat.com>
      f19bd643
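      A minimal sketch of the fix, with a hypothetical cache type (the
      callback shape matches the MMU-notifier API of that kernel era): the
      remove path uses the mm_struct handed in by the notifier, because in
      reclaim context (e.g. kswapd) current->mm may be NULL.

          static void cache_invalidate_page(struct mmu_notifier *mn,
                                            struct mm_struct *mm,
                                            unsigned long address)
          {
                  struct cache *cache = container_of(mn, struct cache, mn);

                  /* Pass the notifier's mm through; do not touch current->mm. */
                  cache_remove_range(cache, mm, address, address + PAGE_SIZE);
          }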
  2. 28 Apr 2016 (7 commits)
  3. 27 Apr 2016 (3 commits)