- 10 December 2015, 1 commit
-
-
By Jan Stancek
We encountered a panic on boot in ipmi_si on a Dell PER320, due to an uninitialized timer, as follows:

static int smi_start_processing(void *send_info, ipmi_smi_t intf)
{
	/* Try to claim any interrupts. */
	if (new_smi->irq_setup)
		new_smi->irq_setup(new_smi);

--> IRQ arrives here and the IRQ handler tries to modify the uninitialized timer, which triggers BUG_ON(!timer->function) in __mod_timer().

    Call Trace:
    <IRQ>
    [<ffffffffa0532617>] start_new_msg+0x47/0x80 [ipmi_si]
    [<ffffffffa053269e>] start_check_enables+0x4e/0x60 [ipmi_si]
    [<ffffffffa0532bd8>] smi_event_handler+0x1e8/0x640 [ipmi_si]
    [<ffffffff810f5584>] ? __rcu_process_callbacks+0x54/0x350
    [<ffffffffa053327c>] si_irq_handler+0x3c/0x60 [ipmi_si]
    [<ffffffff810efaf0>] handle_IRQ_event+0x60/0x170
    [<ffffffff810f245e>] handle_edge_irq+0xde/0x180
    [<ffffffff8100fc59>] handle_irq+0x49/0xa0
    [<ffffffff8154643c>] do_IRQ+0x6c/0xf0
    [<ffffffff8100ba53>] ret_from_intr+0x0/0x11

	/* Set up the timer that drives the interface. */
	setup_timer(&new_smi->si_timer, smi_timeout, (long)new_smi);

The following patch fixes the problem. To: Openipmi-developer@lists.sourceforge.net To: Corey Minyard <minyard@acm.org> CC: linux-kernel@vger.kernel.org Signed-off-by: Jan Stancek <jstancek@redhat.com> Signed-off-by: Tony Camuso <tcamuso@redhat.com> Signed-off-by: Corey Minyard <cminyard@mvista.com> Cc: stable@vger.kernel.org # Applies cleanly to 3.10-, needs small rework before
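A minimal sketch of the reordering the fix describes, using the struct and field names from the quoted snippet (not the complete driver code): initialize the timer before claiming the interrupt, so an IRQ that fires immediately cannot touch an uninitialized timer.

#include <linux/timer.h>

static void smi_timeout(unsigned long data);

static int smi_start_processing_fixed(struct smi_info *new_smi)
{
	/* Set up the timer that drives the interface first ... */
	setup_timer(&new_smi->si_timer, smi_timeout, (long)new_smi);

	/* ... and only then try to claim any interrupts. */
	if (new_smi->irq_setup)
		new_smi->irq_setup(new_smi);

	return 0;
}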
-
- 09 December 2015, 8 commits
-
-
By Carlo Caione
of_irq_find_parent was made static since it had no users outside of of_irq.c. Export it again since we are going to use it again. Signed-off-by: Carlo Caione <carlo@endlessm.com> [robh: move of_irq_find_parent to correct ifdef section] Signed-off-by: Rob Herring <robh@kernel.org>
-
By Leon Romanovsky
The remove_keys() logic is performed as a garbage collection task. Such a task is intended to be run when no other active processes are running. The need_resched() will return TRUE if there are user tasks to be activated in the near future. In such a case, we don't execute remove_keys() and postpone the garbage collection work to try to run in the next cycle, in order to free CPU resources for other tasks. Possible pseudo-code to trigger such a scenario:
1. Allocate a lot of MRs to fill the cache above the limit.
2. Wait a small amount of time "to calm" the system.
3. Start CPU-intensive operations on a multi-node cluster.
4. Expect performance degradation during the MR cache shrink operation.
Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Eli Cohen <eli@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
By Wengang Wang
There have been several reports of WR buffer allocation (kmalloc) failing at order-3 and/or order-4 contiguous page allocations. At the same time there is actually 100MB+ of free memory, but it is badly fragmented. So try vmalloc when kmalloc fails. Signed-off-by: Wengang Wang <wen.gang.wang@oracle.com> Acked-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
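A minimal sketch of the fallback pattern the commit describes, with a hypothetical helper name: try the physically contiguous kmalloc() first and fall back to vmalloc() when higher-order allocation fails due to fragmentation; kvfree() handles freeing either kind of buffer.

#include <linux/slab.h>
#include <linux/vmalloc.h>

static void *alloc_wr_buf(size_t size)
{
	/* fast path: physically contiguous allocation */
	void *buf = kmalloc(size, GFP_KERNEL | __GFP_NOWARN);

	if (!buf)
		buf = vmalloc(size);	/* memory fragmented: fall back */
	return buf;			/* release with kvfree() */
}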
-
By Wengang Wang
There is a mis-ordering in an mlx4 log statement. Fix it. Signed-off-by: Wengang Wang <wen.gang.wang@oracle.com> Acked-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
By Sagi Grimberg
The driver now exposes sufficient limits so we can avoid having mlx4 specific work-around. Signed-off-by: Sagi Grimberg <sagig@mellanox.com> Reviewed-by: Steve Wise <swise@opengridcomputing.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
By Sagi Grimberg
mlx4 devices (ConnectX-2, ConnectX-3) have a limitation where RDMA READ work queue entries cannot exceed 512 bytes. An RDMA READ WQE needs to fit in 512 bytes:
- WQE control segment (16 bytes)
- RDMA segment (16 bytes)
- scatter elements (16 bytes each)
So max_sge_rd should be: (512 - 16 - 16) / 16 = 30. Signed-off-by: Sagi Grimberg <sagig@mellanox.com> Reviewed-by: Steve Wise <swise@opengridcomputing.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
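The arithmetic above, spelled out as a small stand-alone C program (constants taken from the commit text):

#include <stdio.h>

int main(void)
{
	const int wqe_size = 512;	/* max RDMA READ WQE on ConnectX-2/3 */
	const int ctrl_seg = 16;	/* WQE control segment */
	const int rdma_seg = 16;	/* RDMA segment */
	const int sge_size = 16;	/* one scatter element */

	/* prints "max_sge_rd = 30" */
	printf("max_sge_rd = %d\n", (wqe_size - ctrl_seg - rdma_seg) / sge_size);
	return 0;
}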
-
By Hal Rosenstock
Receipt of a CM MAD with a method other than Send, for an attribute other than ClassPortInfo, is invalid: CM attributes other than ClassPortInfo only use the Send method. The SRP initiator does not maintain a timeout policy for CM connect requests but relies on the CM layer to do that. The result was that the SRP initiator hung, as the connect request never completed. A new SRP target has been observed to respond to a Send of CM REQ with a GetResp of CM REQ with bad status. This is non-conformant with the IBA spec, but it exposes a vulnerability in the current MAD/CM code, which will respond to the incoming GetResp of CM REQ as if it were a valid incoming Send of CM REQ rather than tossing it on the floor. It also causes the MAD layer not to retransmit the original REQ even though it has not received a REP. Reviewed-by: Sagi Grimberg <sagig@mellanox.com> Signed-off-by: Hal Rosenstock <hal@mellanox.com> Reviewed-by: Ira Weiny <ira.weiny@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
By Bart Van Assche
Ensure that validate_ipv4_net_dev() calls rcu_read_unlock() if fib_lookup() fails. Detected by sparse. Compile-tested only. Fixes: "IB/cma: Validate routing of incoming requests" (commit f887f2ac). Cc: Haggai Eran <haggaie@mellanox.com> Cc: stable <stable@vger.kernel.org> Reviewed-by: Sagi Grimberg <sagig@mellanox.com> Reviewed-by: Haggai Eran <haggaie@mellanox.com> Reviewed-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
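A minimal sketch of the error-path rule being enforced (function and variable names are illustrative, not the exact cma.c code): every return taken after rcu_read_lock(), including the fib_lookup() failure path, must go through rcu_read_unlock().

#include <linux/rcupdate.h>
#include <net/ip_fib.h>

static int validate_route_sketch(struct net *net, struct flowi4 *fl4)
{
	struct fib_result res;
	int err;

	rcu_read_lock();
	err = fib_lookup(net, fl4, &res, 0);
	if (err)
		goto out;	/* previously this path returned without unlocking */
	/* ... validate that the route uses the expected net_device ... */
out:
	rcu_read_unlock();
	return err;
}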
-
- 08 December 2015, 13 commits
-
-
By Guenter Roeck
__unflatten_device_tree() calls unflatten_dt_node(), which declares a static variable. It is therefore not reentrant. One of the callers of __unflatten_device_tree(), unflatten_device_tree(), is only called once during early initialization and does not need to be protected. The other caller, of_fdt_unflatten_tree(), can be called at any time, possibly multiple times in parallel. This can happen, for example, if multiple devicetree overlays have to be loaded and installed. Without this protection, errors such as the following may be seen.

kernel: End of tree marker overwritten: e6a3a458
kernel: find_target_node: Failed to find target-indirect node at /fragment@0
kernel: __of_overlay_create: of_build_overlay_info() failed for tree@/

Add a mutex to of_fdt_unflatten_tree() to make the call reentrant. Cc: Pantelis Antoniou <pantelis.antoniou@konsulko.com> Signed-off-by: Guenter Roeck <linux@roeck-us.net> Cc: stable@vger.kernel.org # v4.1+ Signed-off-by: Rob Herring <robh@kernel.org>
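A minimal sketch of the serialization being added, assuming a mutex named along the lines of the upstream change (the unflatten body itself is elided):

#include <linux/mutex.h>
#include <linux/of.h>

static DEFINE_MUTEX(of_fdt_unflatten_mutex);

void of_fdt_unflatten_tree_sketch(const unsigned long *blob,
				  struct device_node **mynodes)
{
	/* serialize callers: the underlying unflatten code is not reentrant */
	mutex_lock(&of_fdt_unflatten_mutex);
	/* ... the non-reentrant __unflatten_device_tree() work ... */
	mutex_unlock(&of_fdt_unflatten_mutex);
}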
-
By Bart Van Assche
On 12/03/2015 01:18 AM, Christoph Hellwig wrote:
> The patch looks good to me, but while we touch this area, how about
> throwing in a few cosmetic fixes as well?

How about the patch below? In that version of the ib_sg_to_pages() fix these concerns have been addressed, and additionally two more bugs have been fixed.

------------

[PATCH] IB core: Fix ib_sg_to_pages()

Fix the code for detecting gaps. A gap occurs not only if the second or later scatterlist element is not aligned, but also if any scatterlist element other than the last does not end at a page boundary. In the code for coalescing contiguous elements, ensure that mr->length is correct and that last_page_addr is up to date. Ensure that this function returns a negative error code instead of zero if the first set_page() call fails. Fixes: commit 4c67e2bf ("IB/core: Introduce new fast registration API") Reported-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Sagi Grimberg <sagig@mellanox.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Doug Ledford <dledford@redhat.com>
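An illustrative reading of the gap rule only (not the full ib_sg_to_pages() implementation): a scatterlist element creates a gap if it is not the first element and does not start on a page boundary, or if it is not the last element and does not end on one.

#include <linux/mm.h>
#include <linux/types.h>

static bool sg_elem_creates_gap(u64 dma_addr, unsigned int dma_len,
				bool is_first, bool is_last)
{
	if (!is_first && (dma_addr & ~PAGE_MASK))
		return true;	/* later element does not start page aligned */
	if (!is_last && ((dma_addr + dma_len) & ~PAGE_MASK))
		return true;	/* non-final element does not end on a page boundary */
	return false;
}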
-
By Bart Van Assche
After dma_map_sg() has been called, the return value of that function must be used as the number of elements in the scatterlist instead of scsi_sg_count(). Fixes: commit f7f7aab1 ("IB/srp: Convert to new registration API") Reported-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com> Cc: stable <stable@vger.kernel.org> # v4.4+ Cc: Sagi Grimberg <sagig@mellanox.com> Cc: Sebastian Parschauer <sebastian.riemer@profitbricks.com> Reviewed-by: Sagi Grimberg <sagig@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
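A minimal sketch of the rule the fix enforces (hypothetical function name): after dma_map_sg(), iterate over the returned count rather than the original scsi_sg_count(), because the IOMMU may have coalesced entries.

#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/scatterlist.h>

static int map_data_sketch(struct device *dev, struct scatterlist *sg, int nents)
{
	int count = dma_map_sg(dev, sg, nents, DMA_TO_DEVICE);

	if (!count)
		return -EIO;
	/* register/post 'count' mapped entries from here on, not 'nents' */
	return count;
}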
-
By Bart Van Assche
Detected by sparse. Fixes: commit 330179f2 ("IB/srp: Register the indirect data buffer descriptor") Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com> Cc: stable <stable@vger.kernel.org> # v4.3+ Cc: Sagi Grimberg <sagig@mellanox.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Sebastian Parschauer <sebastian.riemer@profitbricks.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
By Christoph Hellwig
Without this, sg_dma_len will return 0 on architectures that have the dma_length field. Fixes: commit f7f7aab1 ("IB/srp: Convert to new registration API") Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Sagi Grimberg <sagig@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
By Sagi Grimberg
When using work request based memory registration (fast_reg) we must reserve SQ entries for registration and invalidation in addition to send operations. Each IO consumes 3 SQ entries (registration, send, invalidation) so we need to allocate a 3x larger send-queue instead of 2x. Signed-off-by: Sagi Grimberg <sagig@mellanox.com> CC: Stable <stable@vger.kernel.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
By Bart Van Assche
If srp_connect_ch() returns a positive value then that is considered by its caller as a connection failure, but this does not result in a scsi_host_put() call and additionally causes the srp_create_target() function to return a positive value while it should return a negative value. Avoid all this confusion and additionally fix a memory leak by ensuring that srp_connect_ch() always returns a value that is <= 0. This patch avoids that a rejected login triggers the following memory leak:

unreferenced object 0xffff88021b24a220 (size 8):
  comm "srp_daemon", pid 56421, jiffies 4295006762 (age 4240.750s)
  hex dump (first 8 bytes):
    68 6f 73 74 35 38 00 a5    host58..
  backtrace:
    [<ffffffff8151014a>] kmemleak_alloc+0x7a/0xc0
    [<ffffffff81165c1e>] __kmalloc_track_caller+0xfe/0x160
    [<ffffffff81260d2b>] kvasprintf+0x5b/0x90
    [<ffffffff81260e2d>] kvasprintf_const+0x8d/0xb0
    [<ffffffff81254b0c>] kobject_set_name_vargs+0x3c/0xa0
    [<ffffffff81337e3c>] dev_set_name+0x3c/0x40
    [<ffffffff81355757>] scsi_host_alloc+0x327/0x4b0
    [<ffffffffa03edc8e>] srp_create_target+0x4e/0x8a0 [ib_srp]
    [<ffffffff8133778b>] dev_attr_store+0x1b/0x20
    [<ffffffff811f27fa>] sysfs_kf_write+0x4a/0x60
    [<ffffffff811f1e8e>] kernfs_fop_write+0x14e/0x180
    [<ffffffff81176eef>] __vfs_write+0x2f/0xf0
    [<ffffffff811771e4>] vfs_write+0xa4/0x100
    [<ffffffff81177c64>] SyS_write+0x54/0xc0
    [<ffffffff8151b257>] entry_SYSCALL_64_fastpath+0x12/0x6f

Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Sagi Grimberg <sagig@mellanox.com> Cc: Sebastian Parschauer <sebastian.riemer@profitbricks.com> Cc: stable <stable@vger.kernel.org> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
By Kaike Wan
It was found by Saurabh Sengar that the netlink code tried to allocate memory with GFP_KERNEL while holding a spinlock. While it is possible to fix the issue by replacing GFP_KERNEL with GFP_ATOMIC, it is better to get rid of the spinlock while sending the packet. However, in order to protect against a race condition where a quick response may be received before the request is put on the request list, we need to put the request on the list first. Signed-off-by: Kaike Wan <kaike.wan@intel.com> Reviewed-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com> Reviewed-by: Ira Weiny <ira.weiny@intel.com> Reported-by: Saurabh Sengar <saurabh.truth@gmail.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
By Arnd Bergmann
do_div is the wrong way to divide a sector_t, as it is less efficient when sector_t is 32-bit wide. With the upcoming do_div optimizations, the kernel starts warning about this:

drivers/infiniband/ulp/iser/iser_verbs.c:1296:4: note: in expansion of macro 'do_div'
include/asm-generic/div64.h:224:22: warning: passing argument 1 of '__div64_32' from incompatible pointer type

This changes the code to use sector_div instead, which always produces optimal code. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Sagi Grimberg <sagig@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
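A minimal sketch of the replacement (hypothetical wrapper name): sector_div() divides a sector_t in place, returns the remainder, and picks the right implementation whether sector_t is 32- or 64-bit, whereas do_div() assumes a 64-bit dividend.

#include <linux/blkdev.h>	/* sector_t, sector_div() */

static sector_t scale_sector_sketch(sector_t sector, unsigned int factor)
{
	sector_div(sector, factor);	/* 'sector' now holds the quotient */
	return sector;
}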
-
By Mike Marciniszyn
The current implementation takes a spin_lock, and at any scale with qib and hfi1 post send, the lock contention grows exponentially with the number of QPs. idr_find() is RCU compatible, so reads don't need the lock. Change to use rcu_read_lock() and rcu_read_unlock() in __idr_get_uobj(). kfree_rcu() is used to ensure a grace period between the idr removal and the actual free. Reviewed-by: Ira Weiny <ira.weiny@intel.com> Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Reviewed-By: Jason Gunthorpe <jgunthorpe@obsidianresearch.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
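A minimal sketch of the locking change (names simplified from the uverbs code): idr_find() may be called under rcu_read_lock() instead of a spinlock, provided removals defer the actual free with kfree_rcu().

#include <linux/idr.h>
#include <linux/rcupdate.h>
#include <rdma/ib_verbs.h>

static struct ib_uobject *idr_get_uobj_sketch(struct idr *idr, int id)
{
	struct ib_uobject *uobj;

	rcu_read_lock();
	uobj = idr_find(idr, id);
	/* take a reference on uobj here, while still inside the RCU section */
	rcu_read_unlock();

	return uobj;
}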
-
By Easwar Hariharan
Minor errors found via code inspection during future development. SFF 8636 defines bit position 2 to hold the status indication of QSFP memory paging. The mask used to test for the value was incorrect and is fixed in this patch. Additionally, the dump function had a mismatch between the field being printed out and the field used to source the data, which was fixed. Reviewed-by: Mitko Haralanov <mitko.haralanov@intel.com> Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Reported-by: Easwar Hariharan <easwar.hariharan@intel.com> Signed-off-by: Easwar Hariharan <easwar.hariharan@intel.com> Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
By Mike Marciniszyn
Commit e622f2f4 ("IB: split struct ib_send_wr") introduced a regression for HCAs whose user mode post sends go through ib_uverbs_post_send(). The code didn't account for the fact that the first sge is offset by an operation dependent length. The allocation did, but the pointer to the destination sge list is computed without that knowledge. The sge list copy_from_user() then corrupts fields in the work request. Store the operation dependent length in a local variable and compute the sge list copy_from_user() destination using that length. Reviewed-by: Ira Weiny <ira.weiny@intel.com> Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
By Ira Weiny
struct qib_mr requires the mr member be the last because struct qib_mregion contains a dynamic array at the end. The additions of members should have been placed before this structure as the comment noted. Failure to do so was causing random memory corruption. Reproducing this bug was easy to do by running the client and server of ib_write_bw -s 8 -n 5 on the same node. This BUG() was tripped in a slab debug kernel: kernel BUG at mm/slab.c:2572! Fixes: 38071a46 ("IB/qib: Support the new memory registration API") Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Ira Weiny <ira.weiny@intel.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
-
- 07 December 2015, 9 commits
-
-
By Venkatesh Srinivas
Improves cacheline transfer flow of available ring header. Virtqueues are implemented as a pair of rings, one producer->consumer avail ring and one consumer->producer used ring; preceding the avail ring in memory are two contiguous u16 fields -- avail->flags and avail->idx. A producer posts work by writing to avail->idx and a consumer reads avail->idx. The flags and idx fields only need to be written by a producer CPU and only read by a consumer CPU; when the producer and consumer are running on different CPUs and the virtio_ring code is structured to only have source writes/sink reads, we can continuously transfer the avail header cacheline between 'M' states between cores. This flow optimizes core -> core bandwidth on certain CPUs. (see: "Software Optimization Guide for AMD Family 15h Processors", Section 11.6; similar language appears in the 10h guide and should apply to CPUs w/ exclusive caches, using LLC as a transfer cache) Unfortunately the existing virtio_ring code issued reads to the avail->idx and read-modify-writes to avail->flags on the producer. This change shadows the flags and index fields in producer memory; the vring code now reads from the shadows and only ever writes to avail->flags and avail->idx, allowing the cacheline to transfer core -> core optimally. In a concurrent version of vring_bench, the time required for 10,000,000 buffer checkout/returns was reduced by ~2% (average across many runs) on an AMD Piledriver (15h) CPU:

(w/o shadowing):
Performance counter stats for './vring_bench':
5,451,082,016 L1-dcache-loads
...
2.221477739 seconds time elapsed

(w/ shadowing):
Performance counter stats for './vring_bench':
5,405,701,361 L1-dcache-loads
...
2.168405376 seconds time elapsed

The further away (in a NUMA sense) virtio producers and consumers are from each other, the more we expect to benefit. Physical implementations of virtio devices and implementations of virtio where the consumer polls vring avail indexes (vhost) should also benefit. Signed-off-by: Venkatesh Srinivas <venkateshs@google.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
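A simplified illustration of the shadowing idea with a made-up ring structure (barriers and the rest of the vring logic omitted): the producer updates private copies and only ever stores to the shared avail fields, never loading them back.

#include <linux/types.h>

struct avail_hdr {
	u16 flags;
	u16 idx;
};

struct producer_state {
	struct avail_hdr *avail;	/* shared with the consumer */
	u16 shadow_flags;		/* producer-private copies   */
	u16 shadow_idx;
};

static void producer_publish(struct producer_state *p)
{
	p->shadow_idx++;
	/* write-only access to the shared header; a wmb() would precede this */
	p->avail->idx = p->shadow_idx;
}

static void producer_set_flags(struct producer_state *p, u16 flags)
{
	if (p->shadow_flags == flags)
		return;			/* skip a redundant store */
	p->shadow_flags = flags;
	p->avail->flags = flags;
}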
-
By Michal Hocko
b92b1b89 ("virtio: force vring descriptors to be allocated from lowmem") tried to exclude highmem pages for descriptors, so it cleared __GFP_HIGHMEM from a given gfp mask. The patch also cleared __GFP_HIGH, which doesn't make much sense for this fix because __GFP_HIGH only controls access to memory reserves and it doesn't have any influence on the zone selection. Some of the call paths use GFP_ATOMIC, and dropping __GFP_HIGH will reduce their chances of success because of the lack of access to memory reserves. Signed-off-by: Michal Hocko <mhocko@suse.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Will Deacon <will.deacon@arm.com> Reviewed-by: Mel Gorman <mgorman@techsingularity.net>
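A minimal sketch of the resulting flag handling (hypothetical helper name): only __GFP_HIGHMEM is masked out to keep descriptors in lowmem, while __GFP_HIGH (access to memory reserves) is preserved for GFP_ATOMIC callers.

#include <linux/gfp.h>

static inline gfp_t vring_desc_gfp(gfp_t gfp)
{
	/* was: gfp & ~(__GFP_HIGHMEM | __GFP_HIGH) */
	return gfp & ~__GFP_HIGHMEM;
}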
-
By Michael S. Tsirkin
We know vring num is a power of 2, so use & to mask the high bits. Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
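The identity being relied on, as a small stand-alone C function: for a power-of-two ring size, masking with (num - 1) gives the same result as a modulo.

#include <assert.h>

static unsigned int ring_slot(unsigned int idx, unsigned int num)
{
	assert((num & (num - 1)) == 0);	/* num must be a power of two */
	return idx & (num - 1);		/* equivalent to idx % num, but cheaper */
}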
-
By Suman Anna
The virtio core uses a static ida named virtio_index_ida for assigning index numbers to virtio devices during registration. The ida core may allocate some internal idr cache layers and an ida bitmap upon any ida allocation, and all these layers are truly freed only upon the ida destruction. The virtio_index_ida is not destroyed at present, leading to a memory leak when using the virtio core as a module and at least one virtio device is registered and unregistered. Fix this by invoking ida_destroy() in the virtio core module exit. Cc: stable@vger.kernel.org Signed-off-by: Suman Anna <s-anna@ti.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
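A minimal sketch of the cleanup pattern, with hypothetical module names: a static ida used for device indexes is torn down in the module exit path so its internal idr layers and bitmap do not leak on unload.

#include <linux/idr.h>
#include <linux/module.h>

static DEFINE_IDA(example_index_ida);

static void __exit example_core_exit(void)
{
	/* unregister the bus/devices first, then drop the ida's cached layers */
	ida_destroy(&example_index_ida);
}
module_exit(example_core_exit);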
-
By Michael S. Tsirkin
commit 5d9a07b0 ("vhost: relax used address alignment") fixed the alignment for the used virtual address, but not for the physical address used for logging. That's a mistake: alignment should clearly be the same for virtual and physical addresses. Cc: stable@vger.kernel.org Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
By Andreas Werner
Every attempt to issue a read log page command locks up the controller. The command is currently sent if the SATA device includes the devslp feature, to read out the timing data. This attempt to read the data locks up the controller, and the device is not recognized correctly (failed to set xfermode) and cannot be accessed. This was found on Freescale P1013/P1022 and T4240 CPUs using an ATP IG mSATA 4GB with the devslp feature.

fsl-sata ff718000.sata: Sata FSL Platform/CSB Driver init
[ 1.254195] scsi0 : sata_fsl
[ 1.256004] ata1: SATA max UDMA/133 irq 74
[ 1.370666] fsl-gianfar ethernet.3: enabled errata workarounds, flags: 0x4
[ 1.470671] fsl-gianfar ethernet.4: enabled errata workarounds, flags: 0x4
[ 1.775584] ata1: Signature Update detected @ 504 msecs
[ 1.947594] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[ 1.948366] ata1.00: ATA-8: ATP IG mSATA, 20150311, max UDMA/133
[ 1.948371] ata1.00: 7732368 sectors, multi 0: LBA
[ 1.948843] ata1.00: failed to get Identify Device Data, Emask 0x1
[ 1.948857] ata1.00: failed to set xfermode (err_mask=0x40)
[ 7.467557] ata1: Signature Update detected @ 504 msecs
[ 7.639560] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[ 7.651320] ata1.00: failed to get Identify Device Data, Emask 0x1
[ 7.651360] ata1.00: failed to set xfermode (err_mask=0x40)
[ 7.655628] ata1: limiting SATA link speed to 1.5 Gbps
[ 7.659458] ata1.00: limiting speed to UDMA/133:PIO3
[ 13.163554] ata1: Signature Update detected @ 504 msecs
[ 13.335558] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[ 13.347298] ata1.00: failed to get Identify Device Data, Emask 0x1
[ 13.347334] ata1.00: failed to set xfermode (err_mask=0x40)
[ 13.351601] ata1.00: disabled
[ 13.353278] ata1: exception Emask 0x50 SAct 0x0 SErr 0x800 action 0x6 frozen t4
[ 13.359281] ata1: SError: { HostInt }
[ 13.361644] ata1: hard resetting link

Signed-off-by: Andreas Werner <andreas.werner@men.de> Signed-off-by: Tejun Heo <tj@kernel.org>
-
By Andreas Werner
Some controllers lock up on an ata_read_log_page. Add a new ATA port flag, ATA_FLAG_NO_LOG_PAGE, which can be used to blacklist a controller. If this flag is set, any attempt to read a log page returns an error without actually issuing the command. Signed-off-by: Andreas Werner <andreas.werner@men.de> Signed-off-by: Tejun Heo <tj@kernel.org>
-
By Michael S. Tsirkin
Once virtio starts using the DMA API, we won't be able to safely DMA from the stack. virtio-net does a couple of config DMA requests from small stack buffers -- switch to using dynamically-allocated memory. This should have no effect on any performance-critical code paths. Reported-by: Andy Lutomirski <luto@kernel.org> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Tested-by: Andy Lutomirski <luto@kernel.org>
-
By James Simmons
The ioctl IOC_LIBCFS_PING_TEST has not been used in ages. The recent nidstring changes moved all the nidstring operations from libcfs to the LNet layer, but this ioctl code was still using a nidstring operation, which was causing a circular dependency loop between libcfs and LNet. Signed-off-by: James Simmons <jsimmons@infradead.org> Signed-off-by: Oleg Drokin <green@linuxhacker.ru> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 05 December 2015, 9 commits
-
-
By Ley Foon Tan
PCI interrupt lines start at 1, not at 0. So create one additional interrupt when registering the IRQ domain. Error seen when PCIe devices have 4 INTx lines:

WARNING: CPU: 1 PID: 1 at kernel/irq/irqdomain.c:280 irq_domain_associate+0x17c/0x1cc()
error: hwirq 0x4 is too large for dummy

Tested on an Ethernet adapter card with multiple functions. Signed-off-by: Ley Foon Tan <lftan@altera.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
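A minimal sketch of the sizing fix, with hypothetical names: PCI INTx hwirq numbers are 1-based (INTA == 1), so the linear domain needs one extra slot or hwirq 4 is rejected as out of range.

#include <linux/irqdomain.h>
#include <linux/of.h>

#define PCI_NUM_INTX	4	/* INTA..INTD */

static struct irq_domain *setup_intx_domain(struct device_node *node,
					     const struct irq_domain_ops *ops,
					     void *host_data)
{
	/* size is PCI_NUM_INTX + 1 because hwirq 0 is never used */
	return irq_domain_add_linear(node, PCI_NUM_INTX + 1, ops, host_data);
}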
-
By Ley Foon Tan
Check the TLP packet completion status. This fixes an issue when accessing multi-function devices during the enumeration process: the TLP will return an error when a non-existent function number is accessed. Return PCI error codes instead of generic errno values. Tested on an Ethernet adapter card with multiple functions. [bhelgaas: simplify completion status checking code] Signed-off-by: Ley Foon Tan <lftan@altera.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
-
By Ley Foon Tan
The Requester ID should use the Root Port devfn and it should always be 0. Previously we constructed the Requester ID using the *Completer* devfn, i.e., the devfn of the Function we expect to respond to the config access. This causes issues when accessing configuration space for devices other than the Root Port. Build the Requester ID using the Root Port devfn. Tested on an Ethernet adapter card with multiple functions. Signed-off-by: Ley Foon Tan <lftan@altera.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
-
By Dan Carpenter
TLP_LOOP is 500 and the "loop" variable was a u8, so "loop < TLP_LOOP" is always true. We only need this condition to work if there is a problem, so it would have been easy to miss this in testing. Make it a normal for loop with "int i" instead of over-thinking things and making it complicated. Fixes: 6bb4dd154ae8 ("PCI: altera: Add Altera PCIe host controller driver") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Acked-by: Ley Foon Tan <lftan@altera.com>
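A stand-alone illustration of the bug class and the fix (hypothetical helper): with TLP_LOOP at 500, an 8-bit counter wraps before ever reaching the bound, so the loop condition is always true and the timeout can never fire.

#define TLP_LOOP 500

static int wait_for_tlp_sketch(int (*done)(void))
{
	int i;	/* was: u8 loop -- 500 does not fit in 8 bits */

	for (i = 0; i < TLP_LOOP; i++) {
		if (done())
			return 0;
	}
	return -1;	/* timed out */
}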
-
By Alex Deucher
commit 4dfd6486 "drm: Use vblank timestamps to guesstimate how many vblanks were missed" introduced in Linux 4.4-rc1 makes the drm core more fragile to drivers which don't update hw vblank counters and vblank timestamps in sync with firing of the vblank irq and essentially at leading edge of vblank. This exposed a problem with radeon-kms/amdgpu-kms which do not satisfy above requirements: The vblank irq fires a few scanlines before start of vblank, but programmed pageflips complete at start of vblank and vblank timestamps update at start of vblank, whereas the hw vblank counter increments only later, at start of vsync. This leads to problems like off by one errors for vblank counter updates, vblank counters apparently going backwards or vblank timestamps apparently having time going backwards. The net result is stuttering of graphics in games, or little hangs, as well as total failure of timing sensitive applications. See bug #93147 for an example of the regression on Linux 4.4-rc: https://bugs.freedesktop.org/show_bug.cgi?id=93147 This patch tries to align all above events better from the viewpoint of the drm core / of external callers to fix the problem: 1. The apparent start of vblank is shifted a few scanlines earlier, so the vblank irq now always happens after start of this extended vblank interval and thereby drm_update_vblank_count() always samples the updated vblank count and timestamp of the new vblank interval. To achieve this, the reporting of scanout positions by radeon_get_crtc_scanoutpos() now operates as if the vblank starts radeon_crtc->lb_vblank_lead_lines before the real start of the hw vblank interval. This means that the vblank timestamps which are based on these scanout positions will now update at this earlier start of vblank. 2. The driver->get_vblank_counter() function will bump the returned vblank count as read from the hw by +1 if the query happens after the shifted earlier start of the vblank, but before the real hw increment at start of vsync, so the counter appears to increment at start of vblank in sync with the timestamp update. 3. Calls from vblank irq-context and regular non-irq calls are now treated identical, always simulating the shifted vblank start, to avoid inconsistent results for queries happening from vblank irq vs. happening from drm_vblank_enable() or vblank_disable_fn(). 4. The radeon_flip_work_func will delay mmio programming a pageflip until the start of the real vblank iff it happens to execute inside the shifted earlier start of the vblank, so pageflips now also appear to execute at start of the shifted vblank, in sync with vblank counter and timestamp updates. This to avoid some races between updates of vblank count and timestamps that are used for swap scheduling and pageflip execution which could cause pageflips to execute before the scheduled target vblank. The lb_vblank_lead_lines "fudge" value is calculated as the size of the display controllers line buffer in scanlines for the given video mode: Vblank irq's are triggered by the line buffer logic when the line buffer refill for a video frame ends, ie. when the line buffer source read position enters the hw vblank. This means that a vblank irq could fire at most as many scanlines before the current reported scanout position of the crtc timing generator as the number of scanlines the line buffer can maximally hold for a given video mode. 
This patch has been successfully tested on a RV730 card with DCE-3 display engine and on a evergreen card with DCE-4 display engine, in single-display and dual-display configuration, with different video modes. A similar patch is needed for amdgpu-kms to fix the same problem. Limitations: - Maybe replace the udelay() in the flip_work_func() by a suitable usleep_range() for a bit better efficiency? Will try that. - Line buffer sizes in pixels are hard-coded on < DCE-4 to a value i just guessed to be high enough to work ok, lacking info on the true sizes atm. Probably fixes: fdo#93147 Port of Mario's radeon fix to amdgpu. Signed-off-by: NAlex Deucher <alexander.deucher@amd.com> (v1) Reviewed-by: Mario Kleiner <mario.kleiner.de@gmail.com> (v2) Refine amdgpu_flip_work_func() for better efficiency. In amdgpu_flip_work_func, replace the busy waiting udelay(5) with event lock held by a more performance and energy efficient usleep_range() until at least predicted true start of hw vblank, with some slack for scheduler happiness. Release the event lock during waits to not delay other outputs in doing their stuff, as the waiting can last up to 200 usecs in some cases. Also small fix to code comment and formatting in that function. (v2) Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com> (v3) Fix crash in crtc disabled case
-
By Mario Kleiner
commit 4dfd6486 "drm: Use vblank timestamps to guesstimate how many vblanks were missed" introduced in Linux 4.4-rc1 makes the drm core more fragile to drivers which don't update hw vblank counters and vblank timestamps in sync with firing of the vblank irq and essentially at leading edge of vblank. This exposed a problem with radeon-kms/amdgpu-kms which do not satisfy above requirements: The vblank irq fires a few scanlines before start of vblank, but programmed pageflips complete at start of vblank and vblank timestamps update at start of vblank, whereas the hw vblank counter increments only later, at start of vsync. This leads to problems like off by one errors for vblank counter updates, vblank counters apparently going backwards or vblank timestamps apparently having time going backwards. The net result is stuttering of graphics in games, or little hangs, as well as total failure of timing sensitive applications. See bug #93147 for an example of the regression on Linux 4.4-rc: https://bugs.freedesktop.org/show_bug.cgi?id=93147 This patch tries to align all above events better from the viewpoint of the drm core / of external callers to fix the problem: 1. The apparent start of vblank is shifted a few scanlines earlier, so the vblank irq now always happens after start of this extended vblank interval and thereby drm_update_vblank_count() always samples the updated vblank count and timestamp of the new vblank interval. To achieve this, the reporting of scanout positions by radeon_get_crtc_scanoutpos() now operates as if the vblank starts radeon_crtc->lb_vblank_lead_lines before the real start of the hw vblank interval. This means that the vblank timestamps which are based on these scanout positions will now update at this earlier start of vblank. 2. The driver->get_vblank_counter() function will bump the returned vblank count as read from the hw by +1 if the query happens after the shifted earlier start of the vblank, but before the real hw increment at start of vsync, so the counter appears to increment at start of vblank in sync with the timestamp update. 3. Calls from vblank irq-context and regular non-irq calls are now treated identical, always simulating the shifted vblank start, to avoid inconsistent results for queries happening from vblank irq vs. happening from drm_vblank_enable() or vblank_disable_fn(). 4. The radeon_flip_work_func will delay mmio programming a pageflip until the start of the real vblank iff it happens to execute inside the shifted earlier start of the vblank, so pageflips now also appear to execute at start of the shifted vblank, in sync with vblank counter and timestamp updates. This to avoid some races between updates of vblank count and timestamps that are used for swap scheduling and pageflip execution which could cause pageflips to execute before the scheduled target vblank. The lb_vblank_lead_lines "fudge" value is calculated as the size of the display controllers line buffer in scanlines for the given video mode: Vblank irq's are triggered by the line buffer logic when the line buffer refill for a video frame ends, ie. when the line buffer source read position enters the hw vblank. This means that a vblank irq could fire at most as many scanlines before the current reported scanout position of the crtc timing generator as the number of scanlines the line buffer can maximally hold for a given video mode. 
This patch has been successfully tested on a RV730 card with DCE-3 display engine and on a evergreen card with DCE-4 display engine, in single-display and dual-display configuration, with different video modes. A similar patch is needed for amdgpu-kms to fix the same problem. Limitations: - Line buffer sizes in pixels are hard-coded on < DCE-4 to a value i just guessed to be high enough to work ok, lacking info on the true sizes atm. Fixes: fdo#93147 Signed-off-by: NMario Kleiner <mario.kleiner.de@gmail.com> Cc: Alex Deucher <alexander.deucher@amd.com> Cc: Michel Dänzer <michel.daenzer@amd.com> Cc: Harry Wentland <Harry.Wentland@amd.com> Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> (v1) Tested-by: Dave Witbrodt <dawitbro@sbcglobal.net> (v2) Refine radeon_flip_work_func() for better efficiency: In radeon_flip_work_func, replace the busy waiting udelay(5) with event lock held by a more performance and energy efficient usleep_range() until at least predicted true start of hw vblank, with some slack for scheduler happiness. Release the event lock during waits to not delay other outputs in doing their stuff, as the waiting can last up to 200 usecs in some cases. Retested on DCE-3 and DCE-4 to verify it still works nicely. (v2) Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com> Signed-off-by: NAlex Deucher <alexander.deucher@amd.com>
-
By Lyude
HPD signals on DVI ports can be fired off before the pins required for DDC probing actually make contact, due to the pins for HPD making contact first. This results in a HPD signal being asserted but DDC probing failing, resulting in hotplugging occasionally failing. This is somewhat rare on most cards (depending on what angle you plug the DVI connector in), but on some cards it happens constantly. The Radeon R5 on the machine used for testing this patch, for instance, runs into this issue just about every time I try to hotplug a DVI monitor, and as a result hotplugging almost never works. Rescheduling the hotplug work for a second when we run into an HPD signal with a failing DDC probe usually gives enough time for the rest of the connector's pins to make contact, and fixes this issue. Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Lyude <cpaul@redhat.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
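A minimal sketch of the retry idea, assuming the hotplug handler runs from a delayed_work and using hypothetical probe helpers: when HPD says connected but the DDC probe fails, requeue the work about a second later instead of giving up.

#include <linux/workqueue.h>
#include <linux/jiffies.h>

extern bool hpd_asserted(void);		/* hypothetical HPD line state  */
extern bool ddc_probe_ok(void);		/* hypothetical DDC/EDID probe  */

static void hotplug_work_sketch(struct work_struct *work)
{
	struct delayed_work *dwork = to_delayed_work(work);

	if (hpd_asserted() && !ddc_probe_ok()) {
		/* pins may not have made contact yet; retry in ~1 second */
		schedule_delayed_work(dwork, msecs_to_jiffies(1000));
		return;
	}
	/* ... normal connector re-detection ... */
}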
-
By jimqu
There is a protection fault involving the freed list during OCL testing. Add a spin lock to protect it. v2: drop changes in vm_fini. Signed-off-by: JimQu <jim.qu@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com>
-
By Christian König
The gtt_end is already inclusive, we don't need to subtract one here. v2 (chk): keep the fix for the VM code, cause here it really applies. Signed-off-by: Christian König <christian.koenig@amd.com> Signed-off-by: Anatoli Antonovitch <anatoli.antonovitch@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
-