- 29 August 2017, 1 commit
-
-
Submitted by Michael J. Ruhl
Performance analysis shows that the cache callback function sdma_kmem_cache_ctor() accounts for half of the kmem_cache_alloc() time. Since all of the fields in the allocated data structure are initialized on the code path anyway, remove the _ctor function.
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
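Since the point is simply that the constructor callback can be dropped when every field is written on the allocation path, a minimal sketch of the resulting cache creation looks like this (the cache name and structure are illustrative, not the driver's exact code):

```c
#include <linux/slab.h>

struct kmem_cache *txreq_cache;

/* With no constructor, pass NULL as the ctor argument; each field is
 * then initialized explicitly where the object is allocated and used.
 */
txreq_cache = kmem_cache_create("hfi1_sdma_txreq",
				sizeof(struct sdma_txreq),
				0, SLAB_HWCACHE_ALIGN,
				NULL /* was: sdma_kmem_cache_ctor */);
```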
-
- 23 August 2017, 1 commit
-
-
Submitted by Don Hiatt
The PIO/SDMA send logic now uses the hdr_type field to determine the type of packet that has been constructed. Based on the hdr_type, items such as the PBC flags, the padding count, and the LT extra trailing bytes are determined.
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Dasaratharaman Chandramouli <dasaratharaman.chandramouli@intel.com>
Signed-off-by: Don Hiatt <don.hiatt@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
- 01 August 2017, 3 commits
-
-
Submitted by Michael J. Ruhl
The allocate_ctxt() function adds the context to the fd data structure. Since the context is not completely initialized at that point, this can cause confusion about whether the context is valid or not. Move the fd reference from allocate_ctxt() to setup_base_ctxt() and update the necessary functions to be aware of this move.
Reviewed-by: Sebastian Sanchez <sebastian.sanchez@intel.com>
Signed-off-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Michael J. Ruhl
Several data members of the user context have become unused over time. Clean them up.
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Kaike Wan
When an egress resource (SDMA descriptors, PIO credits) is not available, a sending thread is put on the resource's wait queue. When the resource becomes available again, up to a fixed number of sending threads can be awakened sequentially and removed from the wait queue, depending on the number of waiting threads and the number of free resources. Since each awakened sending thread sends as many packets as possible, it is highly likely that the first sending thread will consume all the egress resources and will then be put back at the end of the wait queue.

Depending on when the later sending threads wake up, they may not be able to send any packets and are again put back at the end of the wait queue, right behind the first sending thread. This starvation cycle continues until some sending threads exceed their retry limit and consequently fail.

This patch fixes the issue with two simple changes: (1) a starved sending thread is put at the head of the wait queue, while a served sending thread is put at the tail; (2) the most starved sending thread is served first.
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Kaike Wan <kaike.wan@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
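A hedged sketch of the queuing idea; the list helpers are the kernel's, but the wait structure and its fields are illustrative stand-ins for the driver's actual wait objects:

```c
#include <linux/list.h>

struct tx_waiter {		/* illustrative stand-in */
	struct list_head list;
	bool starved;
};

/*
 * Re-queue a waiter after a wakeup pass: a thread that made no progress
 * goes back to the HEAD of the wait list, while a served thread goes to
 * the TAIL, so the most starved waiter is dequeued first the next time
 * the resource frees up.
 */
static void requeue_waiter(struct tx_waiter *w, struct list_head *dmawait)
{
	if (w->starved)
		list_add(&w->list, dmawait);		/* head */
	else
		list_add_tail(&w->list, dmawait);	/* tail */
}
```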
-
- 28 June 2017, 6 commits
-
-
Submitted by Mike Marciniszyn
Declarations and code common to verbs and PSM are now moved to exp_rcv.[ch].
Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
Reviewed-by: Mitko Haralanov <mitko.haralanov@intel.com>
Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Sebastian Sanchez
Atomic bit tests are used to signal errors and the completion of request submissions. These operations don't need to be atomic and show up as expensive in the profile. Replace each atomic bit operation with a bool type and a READ_ONCE/WRITE_ONCE pairing.
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Sebastian Sanchez <sebastian.sanchez@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
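The pattern being applied, shown as an illustrative before/after fragment (the flag and field names are examples, not the driver's exact identifiers):

```c
#include <linux/bitops.h>
#include <linux/compiler.h>

/* before: atomic read-modify-write bit operations on a shared flags word */
set_bit(SDMA_REQ_HAS_ERROR, &req->flags);
if (test_bit(SDMA_REQ_HAS_ERROR, &req->flags))
	return -EFAULT;

/* after: a plain bool; READ_ONCE/WRITE_ONCE document that the value is
 * read and written concurrently but needs no atomic RMW
 */
WRITE_ONCE(req->has_error, true);
if (READ_ONCE(req->has_error))
	return -EFAULT;
```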
-
Submitted by Sebastian Sanchez
The atomic SDMA_REQ_SEND_DONE bit is set by the process-level code, and the same process-level code then uses the bit to test that all packets have been submitted, incurring a costly atomic read. Use a bool type with a READ_ONCE/WRITE_ONCE pairing for this bit, and use the same condition that is used to set the bit to test that all packets have been submitted.
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Sebastian Sanchez <sebastian.sanchez@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Sebastian Sanchez
The current user SDMA request structure layout has holes. The number of cachelines it touches can be reduced to improve cacheline usage. Separate the fields into the following categories: mostly read, writable, and shared with the interrupt handler.
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Sebastian Sanchez <sebastian.sanchez@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
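A sketch of the grouping idea only; the members shown are illustrative, and the real request structure is larger with a different exact split:

```c
#include <linux/cache.h>
#include <linux/types.h>

struct user_sdma_request_sketch {
	/* read-mostly: set up once when the request arrives */
	void *pq;
	void *sde;
	u32 data_len;

	/* written only by the submitting process */
	u16 tididx;
	bool has_error;

	/* shared with the interrupt/completion path: isolate it on its
	 * own cacheline so completions do not bounce the fields above
	 */
	u16 seqcomp ____cacheline_aligned_in_smp;
	u16 seqsubmitted;
};
```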
-
Submitted by Sebastian Sanchez
An RB tree is used for the SDMA pinning cache. Cache entries are extracted from and reinserted into the tree when the address range for an entry changes. However, if the address range for the entry doesn't change, deleting the entry from the RB tree is not necessary. This affects performance, since the tree needs to be rebalanced for each insertion, and this happens in the hot path. Optimize the RB search by not removing entries when it's not needed.
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Reviewed-by: Mitko Haralanov <mitko.haralanov@intel.com>
Signed-off-by: Sebastian Sanchez <sebastian.sanchez@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
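An illustrative sketch of the optimization; the search helper and node fields are hypothetical stand-ins for the driver's pinning-cache code:

```c
#include <linux/rbtree.h>

/*
 * Look up the cached pinning that covers [addr, addr + len).  If the
 * cached range already covers the request, return it untouched; only
 * erase (and later re-insert) when the range really has to grow, so
 * the tree is rebalanced only when necessary.
 */
node = pin_cache_search(root, addr, len);	/* hypothetical helper */
if (node && addr >= node->addr &&
    addr + len <= node->addr + node->len)
	return node;				/* hit: no rb_erase() */

if (node) {
	rb_erase(&node->rb_node, root);		/* range changed */
	/* extend node->addr/node->len, re-pin pages, re-insert below */
}
```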
-
Submitted by Sebastian Sanchez
The tx request is unnecessarily initialized with memset() in the hot code path; there is no need for this, since most fields are initialized later on. This initialization shows up as costly in the profile. Remove the unnecessary initialization from the tx request and make sure all variables are initialized properly.
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Sebastian Sanchez <sebastian.sanchez@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
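A minimal sketch of the change (the field names are illustrative):

```c
#include <linux/list.h>
#include <linux/string.h>

/* before: clears the whole object in the hot path */
memset(tx, 0, sizeof(*tx));

/* after: initialize only what this path actually relies on; every
 * other field is written later, before it is read
 */
tx->flags = 0;
tx->num_desc = 0;
tx->req = req;
INIT_LIST_HEAD(&tx->list);
```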
-
- 05 May 2017, 5 commits
-
-
Submitted by Michael J. Ruhl
The error path for context initialization is not consistent. Clean up all resources on failure. Remove the unused variable user_event_mask. Add the _BASE_FAILED bit to the event flags so that a base context can notify waiting sub-contexts that they cannot continue. Running out of sub-contexts is an EBUSY result, not EINVAL.
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Sebastian Sanchez
The AHG index is only accessed in the request call from user space, so there's no need for atomic semantics. Replace the atomic operations on the SDMA_REQ_HAVE_AHG bit with a test of the AHG index.
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Sebastian Sanchez <sebastian.sanchez@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
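A sketch of the replacement test; the helper and field names are illustrative, and the idea is simply that a valid index makes the separate flag redundant:

```c
/* before: a dedicated atomic flag tracks "AHG allocated" */
if (test_bit(SDMA_REQ_HAVE_AHG, &req->flags))
	use_ahg_header(req);		/* hypothetical helper */

/* after: a non-negative AHG index already encodes the same fact */
if (req->ahg_idx >= 0)
	use_ahg_header(req);
```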
-
Submitted by Michael J. Ruhl
Since almost all functions that use the hfi1_filedata get the pointer from the file pointer, simplify by passing only the hfi1_filedata pointer.
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Michael J. Ruhl
To improve the readability of function prototypes, give the parameters names.
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Sebastian Sanchez
Div instructions show up as costly in profiles when the tx request header is set. Using a right shift instead of a divide operation reduces the cycles spent in the function that sets the tx request header, as shown in the profile. Use a right shift operation instead.

Profile before the change:
  43.24%  009
          |
          |--23.41%-- user_sdma_send_pkts
          |           |
          |           |--99.90%-- hfi1_user_sdma_process_request

Profile after the change:
  45.75%  009
          |
          |--14.81%-- user_sdma_send_pkts
          |           |
          |           |--99.95%-- hfi1_user_sdma_process_request

Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Sebastian Sanchez <sebastian.sanchez@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
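The kind of substitution described, as a hedged sketch (the actual divisor used in the driver may differ; a shift only replaces a divide when the divisor is a power of two):

```c
/* before: integer divide in the header-setup hot path */
pkt_dwords = payload_bytes / 4;

/* after: equivalent right shift, cheaper on this path */
pkt_dwords = payload_bytes >> 2;
```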
-
- 21 April 2017, 3 commits
-
-
Submitted by Markus Elfring
Replace the specification of a data structure by a reference to the desired member as the parameter for the operator "sizeof" to make the corresponding size determination a bit safer according to the Linux coding style convention.
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Signed-off-by: Doug Ledford <dledford@redhat.com>
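The convention being applied, shown on a made-up structure:

```c
struct foo *p;

/* preferred: the size always tracks the type of "p" */
p = kzalloc(sizeof(*p), GFP_KERNEL);

/* instead of naming the type explicitly */
p = kzalloc(sizeof(struct foo), GFP_KERNEL);
```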
-
Submitted by Markus Elfring
* Pass the product directly in the call to the function "vmalloc_user" instead of storing it in an intermediate variable.
* Delete the local variable "memsize", which became unnecessary with this refactoring.
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Signed-off-by: Doug Ledford <dledford@redhat.com>
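A hedged sketch of the refactoring (the variable and member names are illustrative, not necessarily the driver's):

```c
/* before */
memsize = PAGE_ALIGN(sizeof(*cq->comps) * nentries);
cq->comps = vmalloc_user(memsize);

/* after: no intermediate "memsize" variable */
cq->comps = vmalloc_user(PAGE_ALIGN(sizeof(*cq->comps) * nentries));
```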
-
由 Markus Elfring 提交于
* Multiplications for the size determination of memory allocations indicated that array data structures should be processed. Thus reuse the corresponding function "kcalloc". This issue was detected by using the Coccinelle software. * Replace the specification of a data type by a pointer dereference to make the corresponding size determination a bit safer according to the Linux coding style convention. Signed-off-by: NMarkus Elfring <elfring@users.sourceforge.net> Signed-off-by: NDoug Ledford <dledford@redhat.com>
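Both points combined in one generic example (the array name is made up):

```c
/* before: open-coded multiplication and an explicit type name */
reqs = kzalloc(num_reqs * sizeof(struct user_sdma_request), GFP_KERNEL);

/* after: kcalloc() for array allocations (with overflow checking)
 * plus the sizeof(*ptr) style
 */
reqs = kcalloc(num_reqs, sizeof(*reqs), GFP_KERNEL);
```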
-
- 06 April 2017, 1 commit
-
-
Submitted by Michael J. Ruhl
Set the errcode before the state and add the smp_wmb() to avoid a potential race condition with the user.
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Michael Ruhl <michael.j.ruhl@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
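The ordering being described, as a sketch (the structure and field names are illustrative):

```c
/*
 * Publish the error code before the state change: user space polls the
 * status field and must observe a valid errcode once the new state is
 * visible, hence the write barrier between the two stores.
 */
cq->comps[idx].errcode = errcode;
smp_wmb();
cq->comps[idx].status = state;
```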
-
- 19 February 2017, 1 commit
-
-
Submitted by Michael J. Ruhl
Update several usages of kmalloc/user_copy to memdup_copy and memdup_copy_nul.
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
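Assuming the helpers referred to are the kernel's memdup_user()/memdup_user_nul(), the typical conversion looks like this sketch (the buffer and length names are illustrative):

```c
#include <linux/string.h>
#include <linux/uaccess.h>

/* before: open-coded allocate-and-copy from user space */
buf = kmalloc(len, GFP_KERNEL);
if (!buf)
	return -ENOMEM;
if (copy_from_user(buf, udata, len)) {
	kfree(buf);
	return -EFAULT;
}

/* after: one helper with the same semantics; returns ERR_PTR on failure */
buf = memdup_user(udata, len);
if (IS_ERR(buf))
	return PTR_ERR(buf);
```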
-
- 12 December 2016, 1 commit
-
-
Submitted by Jakub Pawlak
For received packets with a payload of 8 DWs or less, RxDmaDataFifoRdUncErr is not reported; RHF.EccErr is set if the header is not suppressed. When such a packet is detected on the send side, the header suppression mechanism is disabled by clearing the SH bit in the packet header.
Reviewed-by: Mitko Haralanov <mitko.haralanov@intel.com>
Signed-off-by: Jakub Pawlak <jakub.pawlak@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
- 16 November 2016, 1 commit
-
-
Submitted by Dennis Dalessandro
Remove the IS_ERR check from the caching code, as the function being called does not actually return error pointers.
Fixes: f19bd643 ("IB/hfi1: Prevent NULL pointer deferences in caching code")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Dean Luick <dean.luick@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
- 02 October 2016, 2 commits
-
-
Submitted by Tadeusz Struk
Some users want more control over which CPU cores are being used by the driver. For example, users might want to restrict the driver to some specified subset of the cores so that they can appropriately partition processes, irq handlers, and work threads.

To allow the user to fine-tune system affinity settings, new sysfs attributes are introduced per SDMA engine. This patch adds a new attribute type for the SDMA engine and a new cpu_list attribute. When the user writes a CPU range to the cpu_list attribute, the driver creates an internal cpu->sdma map, which is used later as a look-up table to choose an optimal engine for user requests.
Reviewed-by: Dean Luick <dean.luick@intel.com>
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Reviewed-by: Sebastian Sanchez <sebastian.sanchez@intel.com>
Reviewed-by: Jianxin Xiong <jianxin.xiong@intel.com>
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
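A hedged sketch of how such a cpu_list store handler could parse the user-supplied range before rebuilding the lookup map; the function name is hypothetical, and locking and the map rebuild itself are omitted:

```c
#include <linux/cpumask.h>

static ssize_t sdma_set_cpu_list(struct sdma_engine *sde,
				 const char *buf, size_t count)
{
	cpumask_var_t new_mask;

	if (!zalloc_cpumask_var(&new_mask, GFP_KERNEL))
		return -ENOMEM;

	/* accept ranges such as "0-7,16" written to the sysfs file */
	if (cpulist_parse(buf, new_mask)) {
		free_cpumask_var(new_mask);
		return -EINVAL;
	}

	/* rebuild the internal cpu -> sdma engine lookup table here */

	free_cpumask_var(new_mask);
	return count;
}
```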
-
Submitted by Harish Chegondi
Each user SDMA request coming into the driver may contain multiple packets, and each user packet may use multiple SDMA descriptors to fill the send buffer. The field seqsubmitted in struct user_sdma_request counts the number of user packets submitted to an SDMA engine. Sometimes the intermediate count may not be updated properly; however, once all the packets' descriptors are successfully submitted to the SDMA engine, the final count is updated correctly. But if only some of the packets are submitted to the engine due to an error, the intermediate count doesn't reflect the partial number of packets submitted to the SDMA engine. This can cause a hang later in the code, as the count of packets submitted to the SDMA engine doesn't match the count of packets processed by the SDMA engine.
Reviewed-by: Dean Luick <dean.luick@intel.com>
Signed-off-by: Harish Chegondi <harish.chegondi@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
- 03 September 2016, 1 commit
-
-
Submitted by Jubin John
In set_txreq_header_ahg(), the KDETH Intr bit is obtained from the header in the user SDMA request using a KDETH_GET shift-and-mask macro. This value is then further right-shifted by 16, causing the value to be lost (it is shifted to zero), leading to the following smatch warning:

  drivers/infiniband/hw/hfi1/user_sdma.c:1482 set_txreq_header_ahg()
  warn: mask and shift to zero

The Intr bit should be left-shifted into its correct position in the KDETH header before the AHG update.
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Mitko Haralanov <mitko.haralanov@intel.com>
Reviewed-by: Harish Chegondi <harish.chegondi@intel.com>
Signed-off-by: Jubin John <jubin.john@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
- 03 August 2016, 14 commits
-
-
Submitted by Dean Luick
The reworked mmu_rb interface allows the unused mm argument to be removed.
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Dean Luick <dean.luick@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Dean Luick
The ops->remove() callback was called by hfi1_mmu_unregister() with a NULL mm argument while holding a spinlock. In the case of sdma_rb_remove(), this caused it to pass current->mm to hfi1_release_user_pages(). This had two problems: first, it would attempt to acquire the mmap_sem under a spinlock; second, the use of current->mm is not always guaranteed to be the proper mm when the fd is being closed.

Rather than depend on this implicit behavior, move all calls to ops->remove() outside of the spinlock. This also allows the correct mm to be used in the remove callback without fear of deadlock. Because the MMU notifier is not guaranteed to hold mm->mmap_sem, but usually does, all remove callbacks must be delayed until out of the notifier, when the callbacks can take the mmap_sem if they need to.

Code comments were added to clarify what the expectations are for the users of the mmu rb tree.
Suggested-by: Jim Foraker <foraker1@llnl.gov>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Dean Luick <dean.luick@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
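A sketch of the deferral pattern only; the handler, node, and predicate names are illustrative rather than the driver's actual types:

```c
#include <linux/list.h>
#include <linux/spinlock.h>

/*
 * Collect the victims on a private list while holding the lock, then
 * invoke ops->remove() after the lock is dropped, so the callback is
 * free to take mmap_sem (or sleep) without deadlocking.
 */
LIST_HEAD(del_list);
unsigned long flags;
struct rb_cache_node *node, *tmp;	/* hypothetical node type */

spin_lock_irqsave(&handler->lock, flags);
list_for_each_entry_safe(node, tmp, &handler->lru_list, list)
	if (should_evict(node))		/* hypothetical predicate */
		list_move(&node->list, &del_list);
spin_unlock_irqrestore(&handler->lock, flags);

list_for_each_entry_safe(node, tmp, &del_list, list)
	handler->ops->remove(handler->ops_arg, node);
```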
-
Submitted by Dean Luick
Use the new cache evict operation in the SDMA code. This allows the cache to properly coordinate evicts and removes, preventing any race. With this change, the separate list, lock, and race flag are not needed.
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Dean Luick <dean.luick@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Dean Luick
The objects which use cache handling should reference their own handler object, not the internal data structure it uses to track the nodes. Have the "users" of the mmu notifier code pass opaque objects, which can then be properly used in the mmu callbacks depending on the owner's needs. This patch has the additional benefit that operations no longer require a list lookup to find the handlers.
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Dean Luick <dean.luick@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Ira Weiny
The hfi1 driver registers a mmu_notifier callback when /dev/hfi1_* is opened, and unregisters it when the device is closed. The driver incorrectly assumes that the close will always happen from the same context as the open. In particular, closes due to SIGKILL or OOM-killer activity may happen from a different context. In these cases, the wrong mm is passed to mmu_notifier_unregister(), which causes improper reference counting for the victim mm and eventual memory corruption.

Preserve the mm for all open file descriptors and use this mm, rather than current->mm, for memory operations for the lifetime of that fd. Note: this patch leaves one use of current->mm in place; that use is removed in a follow-on patch because other functional changes are required before it can go.

If registration fails, there is no reason to keep the handler object around. Free the handler object rather than adding it to the list, to prevent any mmu_notifier operations, including unregister, when registration fails.
Suggested-by: Jim Foraker <foraker1@llnl.gov>
Reviewed-by: Dean Luick <dean.luick@intel.com>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Dean Luick
The user SDMA in-use claim bit is in the structure that gets zeroed out once the claim is made. Move the request in-use flag into its own bit array and use that for atomic claims. This cleans up the claim code and removes any race possibility.
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Dean Luick <dean.luick@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
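A sketch of a claim using a dedicated bit array (the field names are illustrative); because the bitmap lives outside the request structure, zeroing or reinitializing the request can no longer clobber the claim:

```c
#include <linux/bitops.h>

/* atomically claim slot info.comp_idx before touching the request */
if (test_and_set_bit(info.comp_idx, pq->req_in_use))
	return -EBUSY;			/* slot already claimed */

req = &pq->reqs[info.comp_idx];		/* safe to (re)initialize now */
```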
-
Submitted by Dean Luick
If input validation fails, properly free the request before returning.
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Dean Luick <dean.luick@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Dean Luick
If unable to insert the node into the RB tree cache, the node will be freed before returning from the function. Null out the iovec's pointer to the node so the iovec does not try to free it later.
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Dean Luick <dean.luick@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Dean Luick
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Dean Luick <dean.luick@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Dean Luick
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Dean Luick <dean.luick@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Ira Weiny
If a context has not been assigned or the assignment failed, pq may be NULL. Move the unregister call within the protection of the NULL check.
Reviewed-by: Dean Luick <dean.luick@intel.com>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Dean Luick
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Dean Luick <dean.luick@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Ira Weiny
For bool parameters, "false" should be used.
Reviewed-by: Dean Luick <dean.luick@intel.com>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
-
Submitted by Ira Weiny
Brackets should be on the next line of a function.
Reviewed-by: Dean Luick <dean.luick@intel.com>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
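The coding-style rule being enforced, shown on a made-up function:

```c
/* preferred: the opening brace of a function goes on its own line */
static int example_fn(void)
{
	return 0;
}

/* rather than: static int example_fn(void) { ... } with the brace on
 * the definition line
 */
```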
-