- 29 January 2015, 2 commits
-
-
By Akinobu Mita

The owner module reference of pata_of_platform's scsi_host is initialized to pata_platform's, because the pata_of_platform driver uses a scsi_host_template defined in pata_platform. So this driver can be unloaded even while a SCSI device is being accessed. Fix this by propagating the scsi_host_template to the pata_of_platform driver. The scsi_host_template is passed through a new argument of __pata_platform_probe().

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Hans de Goede <hdegoede@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: "James E.J. Bottomley" <JBottomley@parallels.com>
Cc: linux-ide@vger.kernel.org
Cc: linux-scsi@vger.kernel.org
-
By Akinobu Mita

The owner module reference of the AHCI platform drivers' scsi_host is initialized to libahci_platform's, because these drivers use a scsi_host_template defined in libahci_platform. So these drivers can be unloaded even while a SCSI device is being accessed. Fix this by pushing the scsi_host_template from libahci_platform down to all leaf drivers. The scsi_host_template is passed through a new argument of ahci_platform_init_host().

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Hans de Goede <hdegoede@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: "James E.J. Bottomley" <JBottomley@parallels.com>
Cc: linux-ide@vger.kernel.org
Cc: linux-scsi@vger.kernel.org
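For illustration, a minimal sketch of what a leaf AHCI platform driver looks like after this change. The driver name, port_info values and probe function are hypothetical; only the ahci_platform_get_resources()/ahci_platform_init_host() calling convention follows the description above:

```c
#include <linux/ahci_platform.h>
#include <linux/platform_device.h>
#include <scsi/scsi_host.h>
#include "ahci.h"

#define DRV_NAME "ahci_example"    /* hypothetical leaf driver name */

/* The template now lives in the leaf driver, so .module (filled in by
 * the AHCI_SHT() helper via THIS_MODULE) points at this module rather
 * than at libahci_platform. */
static struct scsi_host_template ahci_example_sht = {
    AHCI_SHT(DRV_NAME),
};

static const struct ata_port_info ahci_example_port_info = {
    .flags     = AHCI_FLAG_COMMON,
    .pio_mask  = ATA_PIO4,
    .udma_mask = ATA_UDMA6,
    .port_ops  = &ahci_platform_ops,
};

static int ahci_example_probe(struct platform_device *pdev)
{
    struct ahci_host_priv *hpriv;
    int rc;

    hpriv = ahci_platform_get_resources(pdev);
    if (IS_ERR(hpriv))
        return PTR_ERR(hpriv);

    rc = ahci_platform_enable_resources(hpriv);
    if (rc)
        return rc;

    /* the scsi_host_template is now an explicit argument */
    return ahci_platform_init_host(pdev, hpriv, &ahci_example_port_info,
                                   &ahci_example_sht);
}
```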
-
- 19 January 2015, 2 commits
-
-
By Gregory CLEMENT

The current implementation of libahci allows using multiple PHYs but not multiple regulators. This patch adds support for multiple regulators. Until now it was mandatory to have a PHY under a subnode; now a port subnode can contain either a regulator or a PHY (or both). In order to be able to associate a port with a regulator, the ports are now platform devices in the device tree case. Only one driver directly used the regulator field of the ahci_host_priv structure; to preserve bisectability, the change to the ahci_imx driver is done in the same patch.

Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Acked-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
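A hypothetical sketch of the per-port regulator lookup this enables. The "target" supply name, the helper name and the target_pwrs[] array are assumptions for illustration, not the exact libahci_platform code:

```c
#include <linux/regulator/consumer.h>
#include <linux/err.h>

/* port_dev is the per-port platform device created for the DT subnode;
 * target_pwrs[] is a caller-owned array of one regulator per port. */
static int example_get_port_regulator(struct device *port_dev,
                                      struct regulator **target_pwrs,
                                      int port)
{
    struct regulator *pwr;

    pwr = regulator_get_optional(port_dev, "target");
    if (IS_ERR(pwr)) {
        if (PTR_ERR(pwr) == -EPROBE_DEFER)
            return -EPROBE_DEFER;
        pwr = NULL;    /* the regulator really is optional */
    }

    target_pwrs[port] = pwr;
    return 0;
}
```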
-
By Dan Williams
Ronny reports: https://bugzilla.kernel.org/show_bug.cgi?id=87101

"Since commit 8a4aeec8 "libata/ahci: accommodate tag ordered controllers" the access to the harddisk on the first SATA-port is failing on its first access. The access to the harddisk on the second port is working normal. When reverting the above commit, access to both harddisks is working fine again."

Maintain tag ordered submission as the default, but allow sata_sil24 to continue with the old behavior.

Cc: <stable@vger.kernel.org>
Cc: Tejun Heo <tj@kernel.org>
Reported-by: Ronny Hegewald <Ronny.Hegewald@online.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
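A hedged sketch of the resulting tag selection logic, modeled on libata's ata_qc_new() with the internal-tag and frozen-port checks omitted; treat it as illustrative rather than verbatim:

```c
#include <linux/libata.h>
#include <linux/bitops.h>

static struct ata_queued_cmd *ata_qc_new(struct ata_port *ap)
{
    struct ata_queued_cmd *qc = NULL;
    unsigned int max_queue = ap->host->n_tags;
    unsigned int i, tag;

    for (i = 0, tag = ap->last_tag + 1; i < max_queue; i++, tag++) {
        if (ap->flags & ATA_FLAG_LOWTAG)
            tag = i;                          /* old behaviour: lowest free tag */
        else
            tag = tag < max_queue ? tag : 0;  /* default: round-robin from last_tag */

        if (test_and_set_bit(tag, &ap->qc_allocated))
            continue;                         /* tag busy, try the next one */

        qc = __ata_qc_from_tag(ap, tag);
        qc->tag = tag;
        ap->last_tag = tag;
        break;
    }

    return qc;
}
```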
-
- 08 January 2015, 1 commit
-
-
By Martin K. Petersen

As defined, the DRAT (Deterministic Read After Trim) and RZAT (Return Zero After Trim) flags in the ATA Command Set are unreliable in the sense that they only define what happens if the device successfully executed the DSM TRIM command. TRIM is only advisory, however, and the device is free to silently ignore all or parts of the request. In practice this renders the DRAT and RZAT flags completely useless, and because the results are unpredictable we decided to disable discard in MD for 3.18 to avoid the risk of data corruption.

Hardware vendors in the real world obviously need better guarantees than what the standards bodies provide. Unfortunately those guarantees are encoded in product requirements documents rather than somewhere we can key off of them programmatically. So we are compelled to disable discard_zeroes_data for all devices unless we explicitly have data to support whitelisting them.

This patch whitelists SSDs from a few of the main vendors. None of the whitelist entries are based on written guarantees. They are purely based on empirical evidence collected from internal and external users that have tested or qualified these drives in RAID deployments. The whitelist is only meant as a starting point and is by no means comprehensive:

- All Intel SSD models except for 510
- Micron M5?0/M600
- Samsung SSDs
- Seagate SSDs

Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Tejun Heo <tj@kernel.org>
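For illustration, this is roughly how such a whitelist can be expressed in libata's ata_device_blacklist[] table. The glob patterns below are a sketch derived from the list above, not the authoritative entries:

```c
#include <linux/libata.h>

/* Mirrors libata-core.c's internal ata_blacklist_entry layout. */
struct ata_blacklist_entry {
    const char    *model_num;
    const char    *model_rev;
    unsigned long horkage;
};

static const struct ata_blacklist_entry ata_device_blacklist[] = {
    /* devices trusted to actually return zeroes after a successful
     * DSM TRIM, based on vendor qualification rather than DRAT/RZAT */
    { "INTEL*SSD*",        NULL, ATA_HORKAGE_ZERO_AFTER_TRIM },
    { "Micron_M5?0*",      NULL, ATA_HORKAGE_NO_NCQ_TRIM |
                                 ATA_HORKAGE_ZERO_AFTER_TRIM },
    { "Micron_M600*",      NULL, ATA_HORKAGE_ZERO_AFTER_TRIM },
    { "Samsung*SSD*",      NULL, ATA_HORKAGE_ZERO_AFTER_TRIM },
    { "Seagate*SSD*",      NULL, ATA_HORKAGE_ZERO_AFTER_TRIM },

    { }    /* terminating entry */
};
```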
-
- 26 December 2014, 1 commit
-
-
By Nicholas Krause

Fix the spelling typo "removeable" to "removable" where ata_id_removeable is defined in ata.h and where it is called in libata-scsi.c.

Signed-off-by: Nicholas Krause <xerofoify@gmail.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
-
- 20 December 2014, 1 commit
-
-
By Rafael J. Wysocki
Having switched over all of the users of CONFIG_PM_RUNTIME to use CONFIG_PM directly, turn the latter into a user-selectable option and drop the former entirely from the tree.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Kevin Hilman <khilman@linaro.org>
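A small, hypothetical driver-side example of what the conversion implies; the foo_* function names are placeholders:

```c
#include <linux/device.h>
#include <linux/pm.h>

#ifdef CONFIG_PM    /* was: #ifdef CONFIG_PM_RUNTIME */
static int foo_runtime_suspend(struct device *dev)
{
    /* ... put the device into a low-power state ... */
    return 0;
}

static int foo_runtime_resume(struct device *dev)
{
    /* ... bring the device back up ... */
    return 0;
}
#endif

static const struct dev_pm_ops foo_pm_ops = {
    SET_RUNTIME_PM_OPS(foo_runtime_suspend, foo_runtime_resume, NULL)
};
```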
-
- 19 December 2014, 1 commit
-
-
By Pintu Kumar

When the system boots up, the dmesg log shows the memory statistics along with the total reserved memory, as below:

Memory: 458840k/458840k available, 65448k reserved, 0K highmem

When CMA is enabled, the total reserved memory still remains the same, even though the CMA memory should not really be considered reserved: when we look at /proc/meminfo, the CMA memory is counted as part of free memory. This creates confusion. This patch corrects the problem by properly subtracting the CMA reserved memory from the total reserved memory in the dmesg log and reporting it separately. Below is a dmesg snapshot from an ARM-based device with 512MB RAM and a single 12MB CMA region; the 12288k CMA region is now reported on its own, so the reserved figure drops from 65448k to 53160k (65448k - 12288k = 53160k).

Before this change:
Memory: 458840k/458840k available, 65448k reserved, 0K highmem

After this change:
Memory: 458840k/458840k available, 53160k reserved, 12288k cma-reserved, 0K highmem

Signed-off-by: Pintu Kumar <pintu.k@samsung.com>
Signed-off-by: Vishnu Pratap Singh <vishnu.ps@samsung.com>
Acked-by: Michal Nazarewicz <mina86@mina86.com>
Cc: Rafael Aquini <aquini@redhat.com>
Cc: Jerome Marchand <jmarchan@redhat.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 18 December 2014, 12 commits
-
-
By Christian Borntraeger

ACCESS_ONCE does not work reliably on non-scalar types. For example, gcc 4.6 and 4.7 might remove the volatile tag for such accesses during the SRA (scalar replacement of aggregates) step (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145). Let's provide READ_ONCE/ASSIGN_ONCE that will do all accesses via scalar types, as suggested by Linus Torvalds. Accesses larger than the machine's word size cannot be guaranteed to be atomic; these macros will use memcpy and emit a build warning.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
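For context, a simplified sketch of the READ_ONCE() idea described above; the actual implementation lives in include/linux/compiler.h and is slightly more elaborate:

```c
#include <linux/compiler.h>
#include <linux/types.h>

/* Scalar sizes are read through a volatile cast; anything larger falls
 * back to memcpy and is not guaranteed to be atomic. */
static __always_inline void __read_once_size(const volatile void *p,
                                             void *res, int size)
{
    switch (size) {
    case 1: *(u8 *)res  = *(volatile u8 *)p;  break;
    case 2: *(u16 *)res = *(volatile u16 *)p; break;
    case 4: *(u32 *)res = *(volatile u32 *)p; break;
    case 8: *(u64 *)res = *(volatile u64 *)p; break;
    default:
        barrier();
        __builtin_memcpy(res, (const void *)p, size);
        barrier();
    }
}

#define READ_ONCE(x)                                          \
    ({ union { typeof(x) __val; char __c[1]; } __u;           \
       __read_once_size(&(x), __u.__c, sizeof(x));            \
       __u.__val; })
```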
-
By Paolo Bonzini

They are not used anymore by IA64; move them away.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
By Ilya Dryomov

pagelist.h needs to include linux/types.h and asm/byteorder.h and not rely on other headers pulling in yet another set of headers.

Signed-off-by: Ilya Dryomov <idryomov@redhat.com>
-
By Yan, Zheng

Add a new parameter 'locked_page' to ceph_do_getattr(). If provided, inline data in the getattr reply will be copied to that page.

Signed-off-by: Yan, Zheng <zyan@redhat.com>
-
By Yan, Zheng

Request replies and cap messages can contain inline data. Add the inline data to the page cache if there is an Fc cap.

Signed-off-by: Yan, Zheng <zyan@redhat.com>
-
By Yan, Zheng

Allow specifying the position of an extent operation in a multi-operation OSD request. This is required for cephfs to convert inline data to normal data (compare xattr, then write object).

Signed-off-by: Yan, Zheng <zyan@redhat.com>
Reviewed-by: Ilya Dryomov <idryomov@redhat.com>
-
By Yan, Zheng
Signed-off-by: Yan, Zheng <zyan@redhat.com>
Reviewed-by: Ilya Dryomov <idryomov@redhat.com>
-
By Yan, Zheng
Signed-off-by: Yan, Zheng <zyan@redhat.com>
Reviewed-by: Ilya Dryomov <idryomov@redhat.com>
-
By John Spray

Two bytes of what was reserved space are now used by userspace for the compat_version field.

Signed-off-by: John Spray <john.spray@redhat.com>
Reviewed-by: Sage Weil <sage@redhat.com>
-
By Yan, Zheng
Signed-off-by: Yan, Zheng <zyan@redhat.com>
-
By Ilya Dryomov
Use kvfree() from linux/mm.h instead, which is identical. Also fix the ceph_buffer comment: we will allocate with kmalloc() up to 32k - the value of PAGE_ALLOC_COSTLY_ORDER, but that really is just an implementation detail so don't mention it at all.

Signed-off-by: Ilya Dryomov <idryomov@redhat.com>
-
By Yan, Zheng

When a lock operation is interrupted, the current code sends an unlock request to the MDS to undo the lock operation. This method does not work as expected because the unlock request can drop locks that have already been acquired. The fix is to use the newly introduced CEPH_LOCK_FCNTL_INTR/CEPH_LOCK_FLOCK_INTR requests to interrupt a blocked file lock request. These requests do not drop locks that have already been acquired; they only interrupt the blocked file lock request.

Signed-off-by: Yan, Zheng <zyan@redhat.com>
-
- 17 December 2014, 3 commits
-
-
By Al Viro

The only instance this method has ever grown was the one in kernfs - one that calls the ->migrate() of another vm_ops if it exists.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
By Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
By David S. Miller
Otherwise we get things like:

warning: (NET_DSA_BCM_SF2 && BCMGENET && SYSTEMPORT) selects FIXED_PHY which has unmet direct dependencies (NETDEVICES && PHYLIB=y)

In order to make this work we have to rename fixed.c to fixed_phy.c because the regulator drivers already have a module named "fixed.o".

Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 16 December 2014, 13 commits
-
-
By Jiang Liu

Introduce acpi_ioapic_registered() to check whether an IOAPIC has already been registered; it will be used when enabling IOAPIC hotplug.

Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
Acked-by: Pavel Machek <pavel@ucw.cz>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Len Brown <len.brown@intel.com>
Cc: Grant Likely <grant.likely@linaro.org>
Cc: Prarit Bhargava <prarit@redhat.com>
Link: http://lkml.kernel.org/r/1414387308-27148-18-git-send-email-jiang.liu@linux.intel.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
By Jiang Liu

To keep the IOAPIC pin reference count balanced, we need to protect pirq_enable_irq(), acpi_pci_irq_enable() and intel_mid_pci_irq_enable() from reentrance. There are two cases which cause reentrance.

The first case is caused by suspend/hibernation. If pcibios_disable_irq is called during suspending/hibernating, we don't release the assigned IRQ number, otherwise it may break suspend/hibernation. So later, when pcibios_enable_irq is called during resume, we shouldn't allocate an IRQ number again.

The second case is that acpi_pci_irq_enable() may be called twice for PCI devices present at boot time, as below:
1) pci_acpi_init() --> acpi_pci_irq_enable() if pci_routeirq is true
2) pci_enable_device() --> pcibios_enable_device() --> acpi_pci_irq_enable()

We can't kill the kernel parameter pci_routeirq yet because it's still needed for debugging purposes. So the flag irq_managed is introduced to track whether the IRQ number was assigned by the OS and to protect pirq_enable_irq(), acpi_pci_irq_enable() and intel_mid_pci_irq_enable() from reentrance, as in the sketch below.

Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Len Brown <lenb@kernel.org>
Link: http://lkml.kernel.org/r/1414387308-27148-13-git-send-email-jiang.liu@linux.intel.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
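A hedged sketch of the irq_managed guard described above, modeled on acpi_pci_irq_enable(); the IRQ lookup helper named below is hypothetical:

```c
#include <linux/pci.h>

/* hypothetical helper standing in for the real ACPI PRT lookup */
static int example_lookup_irq(struct pci_dev *dev);

int acpi_pci_irq_enable(struct pci_dev *dev)
{
    int irq;

    /* Already routed by the OS (e.g. via pci_routeirq at boot, or kept
     * across suspend/resume): don't allocate the IRQ a second time. */
    if (dev->irq_managed && dev->irq > 0)
        return 0;

    irq = example_lookup_irq(dev);
    if (irq < 0)
        return irq;

    dev->irq = irq;
    dev->irq_managed = 1;    /* remember that the OS owns this IRQ now */
    return 0;
}
```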
-
By Haggai Eran

This patch implements a page fault handler (leaving the pages pinned for the time being). The page fault handler handles initiator and responder page faults for UD/RC transports, for send/receive operations, as well as RDMA read/write initiator support.

Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Shachar Raindel <raindel@mellanox.com>
Signed-off-by: Haggai Eran <haggaie@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
-
By Haggai Eran
* Refactor MR registration and cleanup, and fix reg_pages accounting.
* Create a work queue to handle page fault events in a kthread context.
* Register a fault handler to get events from the core for each QP. The registered fault handler is empty in this patch, and only a later patch implements it.

Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Shachar Raindel <raindel@mellanox.com>
Signed-off-by: Haggai Eran <haggaie@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
-
By Haggai Eran

The new function allows updating the page tables of a memory region after it was created. This can be used to handle page faults and page invalidations. Since mlx5_ib_update_mtt will need to work from within page invalidation, it must not block on memory allocation. It employs an atomic memory allocation mechanism that is used as a fallback when kmalloc(GFP_ATOMIC) fails. In order to reuse code from mlx5_ib_populate_pas, the patch splits this function and adds the needed parameters.

Signed-off-by: Haggai Eran <haggaie@mellanox.com>
Signed-off-by: Shachar Raindel <raindel@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
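The atomic-allocation fallback is not spelled out in the message; a generic, hypothetical sketch of the pattern (not the mlx5 code itself) might look like this:

```c
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/types.h>

/* A single preallocated buffer, handed out under a spinlock when
 * kmalloc(GFP_ATOMIC) fails.  Names and layout are illustrative. */
struct atomic_pool {
    spinlock_t lock;
    void       *buf;      /* preallocated at init time */
    bool       in_use;
    size_t     size;
};

static void *atomic_alloc(struct atomic_pool *pool, size_t size)
{
    void *p = kmalloc(size, GFP_ATOMIC);

    if (p)
        return p;

    /* fall back to the emergency buffer if it is free and big enough */
    spin_lock(&pool->lock);
    if (!pool->in_use && size <= pool->size) {
        pool->in_use = true;
        p = pool->buf;
    }
    spin_unlock(&pool->lock);
    return p;
}
```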
-
By Haggai Eran
This patch wraps together several changes needed for on-demand paging support in the mlx5_ib_populate_pas function, and when registering memory regions.

* Instead of accepting a UMR bit telling the function to enable all access flags, the function now accepts the access flags themselves.
* For on-demand paging memory regions, fill the memory tables from the correct list, and enable/disable the access flags per-page according to whether the page is present.
* A new bit is set to enable writing of access flags when using the firmware create_mkey command.
* Disable contig pages when on-demand paging is enabled.

In addition the patch changes the UMR code to use PTR_ALIGN instead of our own macro.

Signed-off-by: Haggai Eran <haggaie@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
-
By Haggai Eran
* Add a handler function pointer in the mlx5_core_qp struct for page fault events. Handle page fault events by calling the handler function, if not NULL.
* Add on-demand paging capability query command.
* Export command for resuming QPs after page faults.
* Add various constants related to paging support.

Signed-off-by: Sagi Grimberg <sagig@mellanox.com>
Signed-off-by: Shachar Raindel <raindel@mellanox.com>
Signed-off-by: Haggai Eran <haggaie@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
-
By Roland Dreier
In commit 0c7aac85 ("net/mlx5_core: Remove unused dev cap enum fields"), the flag MLX5_DEV_CAP_FLAG_ON_DMND_PG was removed. Unfortunately the on-demand paging changes actually use it, so re-add the missing flag.

Signed-off-by: Roland Dreier <roland@purestorage.com>
-
By Haggai Eran
Add a helper function mlx5_ib_read_user_wqe to read information from user-space owned work queues. The function will be used in a later patch by the page-fault handling code in mlx5_ib.

Signed-off-by: Haggai Eran <haggaie@mellanox.com>
[ Add stub for ib_umem_copy_from() for CONFIG_INFINIBAND_USER_MEM=n - Roland ]
Signed-off-by: Roland Dreier <roland@purestorage.com>
-
By Haggai Eran

The current UMR interface doesn't allow partial updates to a memory region's page tables. This patch changes the interface to allow that. It also changes the way the UMR operation validates the memory region's state. When set, IB_SEND_UMR_FAIL_IF_FREE will cause the UMR operation to fail if the MKEY is in the free state. When it is not set, the operation will instead check that the MKEY isn't in the free state.

Signed-off-by: Haggai Eran <haggaie@mellanox.com>
Signed-off-by: Shachar Raindel <raindel@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
-
By Tero Kristo
While the change for determine_rate clock operation was merged, the OMAP counterpart using these calls was overlooked for some reason, and caused boot failures on at least OMAP4 platforms. Fixed by updating the DPLL API calls to use the new parameters.

Signed-off-by: Tero Kristo <t-kristo@ti.com>
Fixes: 646cafc6 ("clk: Change clk_ops->determine_rate")
Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Paul Walmsley <paul@pwsan.com>
Tested-by: Kevin Hilman <khilman@linaro.org>
Reported-by: Kevin Hilman <khilman@linaro.org>
Signed-off-by: Michael Turquette <mturquette@linaro.org>
-
By Michael S. Tsirkin

When switching everything over to the virtio 1.0 memory access APIs, I missed converting vringh. Fortunately, it's straightforward.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
By Michael S. Tsirkin
Pass u64 everywhere.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
- 15 December 2014, 3 commits
-
-
By Steven Rostedt (Red Hat)

Add the kernel command line option tp_printk, which makes active tracepoints send their output to printk() as well as to the trace buffer. Passing "tp_printk" activates this. To turn it off, '0' can be echoed into the sysctl /proc/sys/kernel/tracepoint_printk. Note, this only works if the cmdline option is used; echoing 1 into the sysctl file without the cmdline option will have no effect.

Note, this is a dangerous option. Having high frequency tracepoints send their data to printk() can possibly cause a livelock. This is another reason why it is only active if the command line option is used.

Link: http://lkml.kernel.org/r/alpine.DEB.2.11.1412121539300.16494@nanos
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
By Steven Rostedt (Red Hat)

Enabling tracepoints at boot up can be very useful. The tracepoints can be initialized right after RCU has been initialized; there is no need to wait for the early_initcall() to be called, which is too late for some things that can use tracepoints for debugging. Move the logic to enable tracepoints out of the initcalls and into init/main.c, right after rcu_init(). This also allows trace_printk() to be used early too.

Link: http://lkml.kernel.org/r/alpine.DEB.2.11.1412121539300.16494@nanos
Link: http://lkml.kernel.org/r/20141214164104.307127356@goodmis.org
Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
By Michael S. Tsirkin
virtio 1.0 spec says:

  Drivers MUST NOT assume reads from fields greater than 32 bits wide are atomic, nor are reads from multiple fields: drivers SHOULD read device configuration space fields like so:

    u32 before, after;
    do {
        before = get_config_generation(device);
        // read config entry/entries.
        after = get_config_generation(device);
    } while (after != before);

Do exactly this, for transports that support it.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
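In kernel terms, a minimal sketch of such a generation-checked read expressed against virtio_config_ops; the helper name, offsets and error handling are illustrative:

```c
#include <linux/virtio.h>
#include <linux/virtio_config.h>

static void example_read_config_stable(struct virtio_device *vdev,
                                       unsigned int offset,
                                       void *buf, unsigned int len)
{
    u32 before, after;

    do {
        /* transports without a generation counter report 0 both times */
        before = vdev->config->generation ?
                 vdev->config->generation(vdev) : 0;

        vdev->config->get(vdev, offset, buf, len);

        after = vdev->config->generation ?
                vdev->config->generation(vdev) : 0;
    } while (after != before);    /* retry if the device changed config meanwhile */
}
```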
-
- 14 December 2014, 1 commit
-
-
By Pavel Emelyanov

There are actually two issues this patch addresses. Let me start with the one I tried to solve in the beginning.

In the checkpoint-restore project (criu) we try to dump a task's state and restore it back exactly as it was. One piece of a task's state is the rings set up with the io_setup() call. There is (almost) no problem in dumping them; the problem is restoring them - if I dump a task with an aio ring originally mapped at address A, I want to restore it back at exactly the same address A. Unfortunately, io_setup() does not allow for that - it mmaps the ring at whatever place mm finds appropriate (it calls do_mmap_pgoff() with zero address and without the MAP_FIXED flag).

To make restore possible I'm going to mremap() the freshly created ring to the address A (under which it was seen before the dump). The problem is that the ring's virtual address is passed back to user space as the context ID, and this ID is then used as the search key by all the other io_foo() calls. Reworking this ID to be just some integer doesn't seem to work, as this value is already used by libaio as a pointer through which the library accesses memory for aio meta-data.

So, to make restore work we need to make sure that a) the ring is mapped at the desired virtual address and b) kioctx->user_id matches this value. Having said that, the patch makes mremap() on the aio region update the kioctx's user_id and mmap_base values.

Here appears the second issue mentioned in the beginning. If (regardless of the C/R dances I do) someone creates an io context with io_setup(), then mremap()-s the ring and then destroys the context, the kill_ioctx() routine will call munmap() on the wrong (old) address. This will result in a) the aio ring remaining in memory and b) some other vma getting unexpectedly unmapped.

Signed-off-by: Pavel Emelyanov <xemul@parallels.com>
Acked-by: Dmitry Monakhov <dmonakhov@openvz.org>
Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>
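A hedged sketch of the first part of the fix: a helper (the name here is illustrative) invoked from the aio ring VMA's mremap path that updates the kioctx lookup key; locking and RCU annotations of fs/aio.c are simplified:

```c
#include <linux/mm.h>
#include <linux/aio.h>

static void aio_ring_moved(struct vm_area_struct *vma)
{
    struct file *file = vma->vm_file;    /* anon file backing the ring */
    struct mm_struct *mm = vma->vm_mm;
    struct kioctx_table *table;
    int i;

    spin_lock(&mm->ioctx_lock);
    table = rcu_dereference_protected(mm->ioctx_table,
                                      lockdep_is_held(&mm->ioctx_lock));
    for (i = 0; i < table->nr; i++) {
        struct kioctx *ctx = table->table[i];

        if (ctx && ctx->aio_ring_file == file) {
            /* the context ID handed back to userspace is the ring's
             * mapping address, so it must follow the move */
            ctx->user_id = ctx->mmap_base = vma->vm_start;
            break;
        }
    }
    spin_unlock(&mm->ioctx_lock);
}
```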
-