- 17 March 2011, 9 commits
-
-
By Andy Gospodarek
Only slaves that are up should transmit netpoll frames, so there is no need to check whether a slave is up before enabling netpoll on it. This resolves a reported failure on active-backup bonds where a slave interface was down when netpoll was enabled. Signed-off-by: Andy Gospodarek <andy@greyhouse.net> Tested-by: WANG Cong <amwang@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Jiri Pirko
This patch allows rx_handlers to better signal to their caller what to do next. That makes skb->deliver_no_wcard no longer needed. The kernel-doc for rx_handler_result is taken from Nicolas' patch. Signed-off-by: Jiri Pirko <jpirko@redhat.com> Reviewed-by: Nicolas de Pesloüan <nicolas.2p.debian@free.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
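In mainline, the result enum introduced here ended up with values along the lines of RX_HANDLER_CONSUMED, RX_HANDLER_ANOTHER, RX_HANDLER_EXACT and RX_HANDLER_PASS. The userspace sketch below only models the idea of a handler returning a result code that the caller branches on, instead of a flag carried in the skb; the struct and dispatch logic are simplified stand-ins, not the kernel code.

/* Illustrative userspace model of an rx_handler returning a result code
 * instead of setting a flag on the skb; enum names mirror the kernel's,
 * everything else is a simplified stand-in. */
#include <stdio.h>

enum rx_handler_result {
    RX_HANDLER_CONSUMED,    /* skb was consumed, stop processing        */
    RX_HANDLER_ANOTHER,     /* skb->dev changed, run another round      */
    RX_HANDLER_EXACT,       /* deliver only to exact-match ptypes       */
    RX_HANDLER_PASS,        /* continue normal delivery                 */
};

struct skb { const char *dev; int for_master; };

/* A bonding-style handler: steal the frame for the master device. */
static enum rx_handler_result bond_like_handler(struct skb *skb)
{
    if (skb->for_master) {
        skb->dev = "bond0";            /* redirect to the master  */
        return RX_HANDLER_ANOTHER;     /* ask the caller to rerun */
    }
    return RX_HANDLER_PASS;
}

int main(void)
{
    struct skb skb = { .dev = "eth0", .for_master = 1 };

    switch (bond_like_handler(&skb)) {
    case RX_HANDLER_ANOTHER:
        printf("reprocess frame on %s\n", skb.dev);
        break;
    case RX_HANDLER_PASS:
        printf("deliver normally on %s\n", skb.dev);
        break;
    case RX_HANDLER_CONSUMED:
    case RX_HANDLER_EXACT:
        printf("handled (%s)\n", skb.dev);
        break;
    }
    return 0;
}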
-
By Jiri Pirko
Since bond-related code was moved from net/core/dev.c into bonding, IFF_SLAVE_INACTIVE is no longer needed. Replace it with an "inactive" flag stored in the slave structure. Signed-off-by: Jiri Pirko <jpirko@redhat.com> Reviewed-by: Nicolas de Pesloüan <nicolas.2p.debian@free.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Jiri Pirko
Transfer slave->state into slave->backup, which is going to become a bitfield, and introduce wrapper inlines to do the work with it. Signed-off-by: Jiri Pirko <jpirko@redhat.com> Reviewed-by: Nicolas de Pesloüan <nicolas.2p.debian@free.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
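A minimal sketch of the idea: store the active/backup role as a one-bit field and hide it behind inline helpers, so callers never touch the bit directly and the storage can change later without touching them. The helper names here are illustrative, not necessarily the ones the bonding driver ended up with.

/* Userspace sketch: a slave's role kept in a one-bit field behind
 * inline wrappers. */
#include <stdio.h>
#include <stdbool.h>

struct slave {
    const char *name;
    unsigned int backup:1;    /* 0 = active, 1 = backup */
};

static inline void slave_set_backup(struct slave *s, bool backup)
{
    s->backup = backup;
}

static inline bool slave_is_active(const struct slave *s)
{
    return !s->backup;
}

int main(void)
{
    struct slave s = { .name = "eth0" };

    slave_set_backup(&s, true);
    printf("%s is %s\n", s.name, slave_is_active(&s) ? "active" : "backup");
    return 0;
}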
-
By Jiri Pirko
Now that bond-related code has been moved from net/core/dev.c into the bonding driver, multiple priv_flags are not needed anymore. So let them rot. Signed-off-by: Jiri Pirko <jpirko@redhat.com> Reviewed-by: Nicolas de Pesloüan <nicolas.2p.debian@free.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Jiri Pirko
Register the slave pointer as rx_handler data. That will eventually remove the need to loop over slave devices to find the right slave. Use synchronize_net() to ensure that bond_handle_frame() does not have the slave structure freed underneath it while working with it. Signed-off-by: Jiri Pirko <jpirko@redhat.com> Reviewed-by: Nicolas de Pesloüan <nicolas.2p.debian@free.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
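The ordering that matters here is: publish the slave pointer as the handler's private data, and on teardown unregister first, wait for in-flight handlers to finish (synchronize_net() in the kernel, a no-op stub below), and only then free the slave. A compressed, single-threaded userspace model of that sequence follows; the function names are stand-ins for netdev_rx_handler_register()/unregister().

/* Userspace model of the "unregister, wait, then free" ordering. */
#include <stdio.h>
#include <stdlib.h>

struct slave { const char *name; };

static void *rx_handler_data;      /* what the handler dereferences */

static void handler(void)
{
    struct slave *s = rx_handler_data;
    if (s)
        printf("frame handled for slave %s\n", s->name);
}

static void synchronize_net_stub(void)
{
    /* In the kernel this waits until no CPU can still be running a
     * handler that loaded the old rx_handler_data pointer. */
}

int main(void)
{
    struct slave *s = malloc(sizeof(*s));

    s->name = "eth0";
    rx_handler_data = s;          /* register */
    handler();                    /* frames arrive */

    rx_handler_data = NULL;       /* unregister */
    synchronize_net_stub();       /* wait for in-flight handlers */
    free(s);                      /* only now is it safe to free */
    return 0;
}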
-
By Ajit Khaparde
Signed-off-by: Ajit Khaparde <ajit.khaparde@emulex.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Ajit Khaparde
Signed-off-by: Ajit Khaparde <ajit.khaparde@emulex.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Eric Dumazet
ERROR: "crc32_le" [drivers/net/e1000e/e1000e.ko] undefined! Reported-by: Frank Peters <frank.peters@comcast.net> Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Cc: Bruce Allan <bruce.w.allan@intel.com> Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 16 March 2011, 11 commits
-
-
By Ian Campbell
netback is the host-side counterpart to the frontend driver in drivers/net/xen-netfront.c. The PV protocol is also implemented by frontend drivers in other OSes, such as the BSDs and even Windows. The patch is based on the driver from the xen.git pvops kernel tree but has been put through the checkpatch.pl wringer plus several manual cleanup passes and review iterations. The driver has been moved from drivers/xen/netback to drivers/net/xen-netback. One major change from xen.git is that the guest transmit path (i.e. what looks like receive to netback) has been significantly reworked to remove the dependency on the out-of-tree PageForeign page flag (a core kernel patch which enables a per-page destructor callback on the final put_page). This page flag was used in order to implement a grant-map based transmit path (where guest pages are mapped directly into SKB frags). Instead this version of netback uses grant copy operations into regular memory belonging to the backend domain. Reinstating the grant map functionality is something which I would like to revisit in the future. Note that this driver depends on 2e820f58 "xen/irq: implement bind_interdomain_evtchn_to_irqhandler for backend drivers" which is in linux-next via the "xen-two" tree and is intended for the 2.6.39 merge window: git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git stable/backends This branch has only that single commit since 2.6.38-rc2 and is safe for cross-merging into the net branch. Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Reviewed-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Phil Oester
When the bonding module is loaded, it creates bond0 by default. Then, when attempting to create bond0, the following messages are printed to syslog: kernel: bonding: bond0 is being created... kernel: bonding: Bond creation failed. This seems to indicate a problem, when in reality there is none. Since the actual error code is passed down from bond_create, make use of it to print a less ominous message: kernel: bonding: bond0 is being created... kernel: bond0 already exists. Signed-off-by: Phil Oester <kernel@linuxace.com> Signed-off-by: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Phil Oester
Bringing up a bond interface with all network cables disconnected does not properly set the interface as DOWN because the call to netif_carrier_off occurs too early in bond_init. The call needs to occur after register_netdevice has set dev->reg_state to NETREG_REGISTERED, so that netif_carrier_off will trigger the call to linkwatch_fire_event. Signed-off-by: Phil Oester <kernel@linuxace.com> Signed-off-by: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: David S. Miller <davem@davemloft.net>
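A toy model of why the ordering matters: a carrier change only generates a link event once the device has reached the registered state, so a carrier-off issued before registration is silently lost. The names and states below are simplified stand-ins for register_netdevice()/netif_carrier_off(), not the real implementation.

/* Toy model: carrier changes only fire a linkwatch event once the
 * device is registered, so the carrier-off must come after that. */
#include <stdio.h>

enum reg_state { NETREG_UNINITIALIZED, NETREG_REGISTERED };

struct net_device {
    enum reg_state reg_state;
    int carrier_ok;
};

static void carrier_off(struct net_device *dev)
{
    dev->carrier_ok = 0;
    if (dev->reg_state == NETREG_REGISTERED)
        printf("linkwatch: interface reported DOWN\n");
    else
        printf("linkwatch: event lost (device not registered yet)\n");
}

int main(void)
{
    struct net_device bond = { .carrier_ok = 1 };

    /* Buggy order: nothing is reported to userspace. */
    carrier_off(&bond);

    /* Fixed order: register first, then mark the carrier off. */
    bond.reg_state = NETREG_REGISTERED;
    bond.carrier_ok = 1;
    carrier_off(&bond);
    return 0;
}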
-
By Phil Oester
When packets come in from a device with >= 16 receive queues headed out a bonding interface, syslog gets filled with this: kernel: bond0 selects TX queue 16, but real number of TX queues is 16 because queue_mapping is offset by 1. Adjust the return value to account for the offset. This is a revision of my earlier patch (which did not use the skb_rx_queue_* helpers - thanks to Ben for the suggestion). Andy submitted a similar patch which emits a pr_warning on invalid queue selection, but I believe the log spew is not useful. We can revisit that question in the future, but in the interim I believe fixing the core problem is worthwhile. Signed-off-by: Phil Oester <kernel@linuxace.com> Signed-off-by: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: David S. Miller <davem@davemloft.net>
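The arithmetic behind the warning: the recorded receive queue is stored as queue + 1 so that zero can mean "not recorded", so the transmit side has to undo that offset and fall back when the result is still out of range. A hedged sketch of that mapping with made-up helper names (the real code uses the skb_rx_queue_* helpers mentioned above):

/* Sketch of the off-by-one and its fix. */
#include <stdio.h>

static unsigned int recorded_queue_mapping(unsigned int rx_queue)
{
    return rx_queue + 1;                /* 0 is reserved for "none" */
}

static unsigned int select_tx_queue(unsigned int queue_mapping,
                                    unsigned int real_num_tx_queues)
{
    unsigned int txq = 0;

    if (queue_mapping)                  /* was an RX queue recorded? */
        txq = queue_mapping - 1;        /* undo the +1 offset        */

    if (txq >= real_num_tx_queues)      /* still out of range?       */
        txq = 0;                        /* fall back to queue 0      */
    return txq;
}

int main(void)
{
    unsigned int mapping = recorded_queue_mapping(15); /* device queue 15 */

    /* Without the -1 this would pick queue 16 on a 16-queue bond. */
    printf("selected TX queue %u\n", select_tx_queue(mapping, 16));
    return 0;
}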
-
By Padmanabh Ratnakar
The status of UDP packet detection was not getting populated in the RX completion structure. This is required by the csum_passed() routine. Signed-off-by: Padmanabh Ratnakar <padmanabh.ratnakar@emulex.com> Signed-off-by: Sathya Perla <sathya.perla@emulex.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Sony Chacko
o Enable setting speed and auto-negotiation parameters for GbE ports. o The hardware does not currently support the half-duplex setting. David Miller: Amit, please update your patch to silently reject link setting attempts that are unsupported by the device. Signed-off-by: Sony Chacko <sony.chacko@qlogic.com> Signed-off-by: Amit Kumar Salecha <amit.salecha@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Jeongtae Park
This patch fixes a build error when SMSC_TRACE() is used. Signed-off-by: Jeongtae Park <jtp.park@samsung.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Sean Hefty
rdma_destroy_id currently uses the global rdma cm 'lock' to test if an rdma_cm_id has been bound to a device. This prevents an active address resolution callback handler from assigning a device to the rdma_cm_id after rdma_destroy_id checks for one. Instead, we can replace the use of the global lock around the check of the rdma_cm_id device pointer by setting the id state to destroying, then flushing all active callbacks. The latter is accomplished by acquiring and releasing the handler_mutex. Any active handler will complete first, and any newly scheduled handlers will find the rdma_cm_id in an invalid state. In addition to optimizing the current locking scheme, the use of the rdma_cm_id mutex is a more intuitive synchronization mechanism than that of the global lock. These changes are based on feedback from Doug Ledford <dledford@redhat.com> while he was trying to debug a crash in the rdma cm destroy path. Signed-off-by: Sean Hefty <sean.hefty@intel.com> Signed-off-by: Roland Dreier <roland@purestorage.com>
-
By Sean Hefty
This problem was reported by Moni Shoua <monis@mellanox.com> and Amir Vadai <amirv@mellanox.com>: when destroying a cm_id from the context of a work queue and the lap_state of this cm_id is IB_CM_LAP_SENT, we need to release the reference on this id that was taken when the LAP message was sent. Otherwise, if the expected APR message gets lost, it is only after a long time that the reference will be released, and during that time the work handler thread is not available to process other things. It turns out that we need to cancel any pending LAP messages whenever we transition out of the IB_CM_ESTABLISHED state. This occurs when disconnecting - either sending or receiving a DREQ. It can also happen in a corner case where we receive a REJ message after sending an RTU, followed by a LAP. Add checks and cancel any outstanding LAP messages in these three cases. Canceling the LAP when sending a DREQ fixes the destroy problem reported by Moni. When a cm_id is destroyed in the IB_CM_ESTABLISHED state, it sends a DREQ to the remote side to notify the peer that the connection is going away. Signed-off-by: Sean Hefty <sean.hefty@intel.com> Signed-off-by: Roland Dreier <roland@purestorage.com>
-
By Sean Hefty
When processing a SIDR REQ, the ib_cm allocates a new cm_id. The refcount of the cm_id is initialized to 1. However, cm_process_work will decrement the refcount after invoking all callbacks. The result is that the cm_id will end up with its refcount set to 0 by the end of the SIDR REQ handler. If a user tries to destroy the cm_id, the destruction will proceed under the incorrect assumption that no other threads are referencing the cm_id. This can lead to a crash when the cm callback thread tries to access the cm_id. This problem was noticed as part of a larger investigation of kernel crashes in the rdma_cm when running on a real-time OS. Signed-off-by: Sean Hefty <sean.hefty@intel.com> Acked-by: Doug Ledford <dledford@redhat.com> Cc: <stable@kernel.org> Signed-off-by: Roland Dreier <roland@purestorage.com>
-
By Sean Hefty
Doug Ledford and Red Hat reported a crash when running the rdma_cm on a real-time OS. The crash has the following call trace: cm_process_work cma_req_handler cma_disable_callback rdma_create_id kzalloc init_completion cma_get_net_info cma_save_net_info cma_any_addr cma_zero_addr rdma_translate_ip rdma_copy_addr cma_acquire_dev rdma_addr_get_sgid ib_find_cached_gid cma_attach_to_dev ucma_event_handler kzalloc ib_copy_ah_attr_to_user cma_comp [ preempted ] cma_write copy_from_user ucma_destroy_id copy_from_user _ucma_find_context ucma_put_ctx ucma_free_ctx rdma_destroy_id cma_exch cma_cancel_operation rdma_node_get_transport rt_mutex_slowunlock bad_area_nosemaphore oops_enter They were able to reproduce the crash multiple times with the following details: the crash seems to always happen on the mutex_unlock(&conn_id->handler_mutex), as conn_id looks to have been freed during this code path. An examination of the code shows that a race exists in the request handlers. When a new connection request is received, the rdma_cm allocates a new connection identifier. This identifier has a single reference count on it. If a user calls rdma_destroy_id() from another thread after receiving a callback, rdma_destroy_id will proceed to destroy the id and free the associated memory. However, the request handlers may still be in the process of running. When control returns to the request handlers, they can attempt to access the newly created identifiers. Fix this by holding a reference on the newly created rdma_cm_id until the request handler is through accessing it. Signed-off-by: Sean Hefty <sean.hefty@intel.com> Acked-by: Doug Ledford <dledford@redhat.com> Cc: <stable@kernel.org> Signed-off-by: Roland Dreier <roland@purestorage.com>
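The fix pattern here (and in the SIDR REQ case above) is the classic one: the handler takes its own reference on the newly created id before invoking callbacks and drops it only after it is done, so a concurrent destroy can only bring the count to zero once the handler has finished. A single-threaded sketch of that get/put discipline, with simplified names and no claim to match the rdma_cm's actual structures:

/* Sketch of the get/put discipline: the request handler pins the newly
 * created id so a concurrent destroy cannot free it underneath it. */
#include <stdio.h>
#include <stdlib.h>

struct cm_id {
    int refcount;
    const char *name;
};

static struct cm_id *cm_id_create(void)
{
    struct cm_id *id = malloc(sizeof(*id));

    id->refcount = 1;        /* reference owned by the creator/user */
    id->name = "conn_id";
    return id;
}

static void cm_id_put(struct cm_id *id)
{
    if (--id->refcount == 0) {
        printf("%s freed\n", id->name);
        free(id);
    }
}

static void request_handler(struct cm_id *id)
{
    id->refcount++;          /* pin the id for the handler's lifetime */

    /* ... run user callbacks; the user may destroy the id here ... */
    cm_id_put(id);           /* models the user's destroy dropping its ref */

    printf("handler still safely touching %s\n", id->name);
    cm_id_put(id);           /* drop the pin; only now can it be freed */
}

int main(void)
{
    request_handler(cm_id_create());
    return 0;
}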
-
- 15 March 2011, 20 commits
-
-
By Tejun Heo
1b4b:91a3 seems to be another PCI ID for Marvell AHCI. Add it. Reported and tested in the following thread: http://thread.gmane.org/gmane.linux.kernel/1068354 Signed-off-by: Tejun Heo <tj@kernel.org> Reported-by: Borislav Petkov <bp@alien8.de> Reported-by: Alessandro Tagliapietra <tagliapietra.alessandro@gmail.com> Signed-off-by: Jeff Garzik <jgarzik@pobox.com>
-
By Carolyn Wyborny
This feature adds messaging to the link status change to notify the user if the device returned from a downshift or power-off event due to the Thermal Sensor feature in i350 parts. The feature is only available on internal copper ports. Signed-off-by: Carolyn Wyborny <carolyn.wyborny@intel.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
By Anders Berggren
Hardware timestamping for the Intel 82580 didn't work in either 2.6.36 or 2.6.37. Comparing it to Intel's igb-2.4.12 I found that the timecounter_init clock/counter initialization was done too early. Signed-off-by: Anders Berggren <andfers@halon.se> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
By Eric Dumazet
commit e9a799ea (xen: netfront: ethtool stats fields should be unsigned long) made rx_gso_checksum_fixup an unsigned long. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Daniel Lezcano
When the lower device has offloading capabilities, the packet checksums are not computed. That causes any macvlan port in bridge mode to stop working, because the packets are dropped due to a bad checksum. If the macvlan is in bridge mode, the packet is forwarded to another macvlan port and reaches the network stack, which looks for a checksum that was never computed because of the lower device's offloading. In this case, we have to set the packet to CHECKSUM_UNNECESSARY when it is forwarded to a bridged port and restore the previous value of ip_summed when the packet goes to the lowerdev. Signed-off-by: Daniel Lezcano <daniel.lezcano@free.fr> Cc: Patrick McHardy <kaber@trash.net> Cc: Andrian Nord <nightnord@gmail.com> Acked-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
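A compressed sketch of the two operations the description names: mark port-to-port deliveries as already checksum-verified (nothing ever hit a wire), and restore the original ip_summed hint before the frame leaves through the lower device. The constants mirror the kernel names; where the driver actually stashes the saved value is not shown here, the saved_ip_summed field is a stand-in.

/* Sketch: CHECKSUM_UNNECESSARY for looped-back frames, restore the
 * original hint for frames going out the lower device. */
#include <stdio.h>

enum { CHECKSUM_NONE, CHECKSUM_UNNECESSARY, CHECKSUM_PARTIAL };

struct skb {
    int ip_summed;
    int saved_ip_summed;    /* stand-in for wherever the driver keeps it */
};

/* Frame looped to another macvlan port on the same lower device:
 * no hardware ever computed a checksum, so tell the stack to skip it. */
static void deliver_to_bridged_port(struct skb *skb)
{
    skb->saved_ip_summed = skb->ip_summed;
    skb->ip_summed = CHECKSUM_UNNECESSARY;
}

/* Frame going out through the lower device: restore the original hint
 * so checksum offload still behaves as the sender intended. */
static void xmit_to_lowerdev(struct skb *skb)
{
    skb->ip_summed = skb->saved_ip_summed;
}

int main(void)
{
    struct skb skb = { .ip_summed = CHECKSUM_PARTIAL };

    deliver_to_bridged_port(&skb);
    printf("bridged delivery: ip_summed=%d\n", skb.ip_summed);

    xmit_to_lowerdev(&skb);
    printf("lowerdev xmit:    ip_summed=%d\n", skb.ip_summed);
    return 0;
}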
-
By Jiri Pirko
Check for a bonding master and refuse to use it. Signed-off-by: Jiri Pirko <jpirko@redhat.com> Acked-by: Robert Love <robert.w.love@intel.com> Acked-by: James Bottomley <James.Bottomley@suse.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Domenico Andreoli
QQ2440 is just another non-ISA board using the CS89x0. This patch adds the minimum bits required to make QQ2440 work with CS89x0. Signed-off-by: Domenico Andreoli <cavokz@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Domenico Andreoli
CS89x0_NONISA_IRQ is selected by all those non-ISA boards which use the CS89x0. This patch only cleans up the last bits left after its introduction. Signed-off-by: Domenico Andreoli <cavokz@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
By Rafael J. Wysocki
Some subsystems need to carry out suspend/resume and shutdown operations with one CPU online and interrupts disabled. The only way to register such operations is to define a sysdev class and a sysdev specifically for this purpose, which is cumbersome and inefficient. Moreover, the arguments taken by sysdev suspend, resume and shutdown callbacks are practically never necessary. For this reason, introduce a simpler interface allowing subsystems to register operations to be executed very late during system suspend and shutdown and very early during resume, in the form of struct syscore_ops objects. Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl> Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
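The interface described is essentially a struct of three argument-free callbacks plus a registration call (register_syscore_ops() in the kernel; the real struct also carries a list node). The userspace model below only mirrors that shape and the point in time at which each callback would run; the registration and driver code are stand-ins.

/* Userspace model of the syscore_ops idea: a small ops struct whose
 * callbacks run very late in suspend/shutdown and very early in resume. */
#include <stdio.h>

struct syscore_ops_model {
    int  (*suspend)(void);
    void (*resume)(void);
    void (*shutdown)(void);
};

static struct syscore_ops_model *registered;    /* one slot, for brevity */

static void register_syscore_ops_model(struct syscore_ops_model *ops)
{
    registered = ops;
}

static int my_suspend(void)   { printf("late suspend hook\n"); return 0; }
static void my_resume(void)   { printf("early resume hook\n"); }
static void my_shutdown(void) { printf("shutdown hook\n"); }

static struct syscore_ops_model my_ops = {
    .suspend  = my_suspend,
    .resume   = my_resume,
    .shutdown = my_shutdown,
};

int main(void)
{
    register_syscore_ops_model(&my_ops);

    /* What the core would do with one CPU online, interrupts off: */
    if (registered->suspend() == 0)
        registered->resume();
    registered->shutdown();
    return 0;
}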
-
By Nishanth Menon
The opp_find_freq_exact() documentation has is_available instead of available. This also fixes a warning from kernel-doc: scripts/kernel-doc drivers/base/power/opp.c >/dev/null Warning(drivers/base/power/opp.c:246): No description found for parameter 'available' Warning(drivers/base/power/opp.c:246): Excess function parameter 'is_available' description in 'opp_find_freq_exact' Signed-off-by: Nishanth Menon <nm@ti.com> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
By Rafael J. Wysocki
The code handling system-wide power transitions (e.g. suspend-to-RAM) can in theory execute callbacks provided by the device's bus type, device type and class in each phase of the power transition. In turn, the runtime PM core code only calls one of those callbacks at a time, preferring bus type callbacks to device type or class callbacks and device type callbacks to class callbacks. It seems reasonable to make them both behave in the same way in that respect. Moreover, even though a device may belong to two subsystems (e.g. bus type and device class) simultaneously, in practice power management callbacks for system-wide power transitions are always provided by only one of them (i.e. if the bus type callbacks are defined, the device class ones are not and vice versa). Thus it is possible to modify the code handling system-wide power transitions so that it follows the core runtime PM code (i.e. treats the subsystem callbacks as mutually exclusive). On the other hand, the core runtime PM code will choose to execute, for example, a runtime suspend callback provided by the device type even if the bus type's struct dev_pm_ops object exists, but the runtime_suspend pointer in it happens to be NULL. This is confusing, because it may lead to the execution of callbacks from different subsystems during different operations (e.g. the bus type suspend callback may be executed during runtime suspend of the device, while the device type callback will be executed during system suspend). Make all of the power management code treat subsystem callbacks in a consistent way, such that: (1) If the device's type is defined (e.g. dev->type is not NULL) and its pm pointer is not NULL, the callbacks from dev->type->pm will be used. (2) If dev->type is NULL or dev->type->pm is NULL, but the device's class is defined (e.g. dev->class is not NULL) and its pm pointer is not NULL, the callbacks from dev->class->pm will be used. (3) If dev->type is NULL or dev->type->pm is NULL and dev->class is NULL or dev->class->pm is NULL, the callbacks from dev->bus->pm will be used provided that both dev->bus and dev->bus->pm are not NULL. Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl> Acked-by: Kevin Hilman <khilman@ti.com> Reasoning-sounds-sane-to: Grant Likely <grant.likely@secretlab.ca> Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
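The three rules reduce to a single lookup that both the system-sleep and runtime paths can share. A minimal sketch of that selection follows, with simplified stand-in types rather than the real struct device / dev_pm_ops definitions:

/* Minimal sketch of the (1) type, (2) class, (3) bus selection order. */
#include <stdio.h>
#include <stddef.h>

struct dev_pm_ops_model { const char *owner; };

struct subsys { const struct dev_pm_ops_model *pm; };

struct device_model {
    const struct subsys *type;
    const struct subsys *class;
    const struct subsys *bus;
};

static const struct dev_pm_ops_model *pick_pm_ops(const struct device_model *dev)
{
    if (dev->type && dev->type->pm)
        return dev->type->pm;        /* rule (1) */
    if (dev->class && dev->class->pm)
        return dev->class->pm;       /* rule (2) */
    if (dev->bus && dev->bus->pm)
        return dev->bus->pm;         /* rule (3) */
    return NULL;
}

int main(void)
{
    static const struct dev_pm_ops_model bus_pm = { "bus" };
    static const struct subsys bus = { &bus_pm };
    static const struct subsys class_without_pm = { NULL };
    struct device_model dev = { NULL, &class_without_pm, &bus };

    const struct dev_pm_ops_model *pm = pick_pm_ops(&dev);

    printf("callbacks come from: %s\n", pm ? pm->owner : "none");
    return 0;
}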
-
By Rafael J. Wysocki
The platform bus type is often used to handle Systems-on-a-Chip (SoC) where all devices are represented by objects of type struct platform_device. In those cases the same "platform" device driver may be used with multiple different system configurations, but the actions needed to put the devices it handles into a low-power state and back into the full-power state may depend on the design of the given SoC. The driver, however, cannot possibly include all the information necessary for the power management of its device on all the systems it is used with. Moreover, the device hierarchy in its current form also is not suitable for representing this kind of information. The patch below attempts to address this problem by introducing objects of type struct dev_power_domain that can be used for representing power domains within a SoC. Every struct dev_power_domain object provides a set of device power management callbacks that can be used to perform what's needed for device power management in addition to the operations carried out by the device's driver and subsystem. Namely, if a struct dev_power_domain object is pointed to by the pwr_domain field in a struct device, the callbacks provided by its ops member will be executed in addition to the corresponding callbacks provided by the device's subsystem and driver during all power transitions. Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl> Tested-and-acked-by: Kevin Hilman <khilman@ti.com>
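The key difference from the subsystem selection above is that a power domain's callbacks run in addition to, not instead of, the subsystem's. A compressed sketch of that "run both" shape follows; the type and field names are abbreviated from the description, and the ordering of the two calls is an assumption of this sketch rather than a statement about the real core.

/* Sketch: if the device points at a power domain, run the domain's
 * callback as well as the subsystem callback during a transition. */
#include <stdio.h>
#include <stddef.h>

struct device_model;

struct pm_ops_model {
    int (*suspend)(struct device_model *dev);
};

struct dev_power_domain_model {
    struct pm_ops_model ops;    /* SoC-specific extra work */
};

struct device_model {
    const char *name;
    const struct pm_ops_model *subsys_pm;            /* bus/type/class */
    const struct dev_power_domain_model *pwr_domain; /* may be NULL    */
};

static int device_suspend(struct device_model *dev)
{
    int ret = 0;

    if (dev->subsys_pm && dev->subsys_pm->suspend)
        ret = dev->subsys_pm->suspend(dev);
    if (!ret && dev->pwr_domain && dev->pwr_domain->ops.suspend)
        ret = dev->pwr_domain->ops.suspend(dev);  /* domain runs too */
    return ret;
}

static int drv_suspend(struct device_model *dev)
{
    printf("%s: driver/subsystem suspend\n", dev->name);
    return 0;
}

static int domain_suspend(struct device_model *dev)
{
    printf("%s: power domain suspend (gate clocks, cut power)\n", dev->name);
    return 0;
}

int main(void)
{
    static const struct pm_ops_model subsys = { drv_suspend };
    static const struct dev_power_domain_model dom = { { domain_suspend } };
    struct device_model dev = { "mmc0", &subsys, &dom };

    return device_suspend(&dev);
}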
-
By Rafael J. Wysocki
The variable pm_flags is used to prevent APM from being enabled along with ACPI, which would lead to problems. However, acpi_init() is always called before apm_init() and after acpi_init() has returned, it is known whether or not ACPI will be used. Namely, if acpi_disabled is not set after acpi_init() has returned, this means that ACPI is enabled. Thus, it is sufficient to check acpi_disabled in apm_init() to prevent APM from being enabled in parallel with ACPI. Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl> Acked-by: Len Brown <len.brown@intel.com>
-
By Rafael J. Wysocki
The dpm_prepare() function increments the runtime PM reference counters of all devices to prevent pm_runtime_suspend() from executing subsystem-level callbacks. However, this was supposed to guard against a specific race condition that cannot happen, because the power management workqueue is freezable, so pm_runtime_suspend() can only be called synchronously during system suspend, and we can rely on subsystems and device drivers to avoid doing that unnecessarily. Make dpm_prepare() drop the runtime PM reference to each device after making sure that runtime resume is not pending for it. Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl> Acked-by: Kevin Hilman <khilman@ti.com>
-
By Rafael J. Wysocki
After redefining CONFIG_PM to depend on (CONFIG_PM_SLEEP || CONFIG_PM_RUNTIME), the CONFIG_PM_OPS option is redundant and can be replaced with CONFIG_PM. Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
By Rafael J. Wysocki
If direct references to pm_flags are removed from drivers/acpi/bus.c, CONFIG_ACPI will not need to depend on CONFIG_PM any more. Make that happen. Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl> Acked-by: Len Brown <len.brown@intel.com>
-
By Rafael J. Wysocki
Currently, wakeup sysfs attributes are created for all devices, regardless of whether or not they are wakeup-capable. This is excessive and complicates wakeup device identification from user space (i.e. to identify wakeup-capable devices user space has to read /sys/devices/.../power/wakeup for all devices and see if they are not empty). Fix this issue by not creating wakeup sysfs files for devices that cannot wake up the system from sleep states (i.e. whose power.can_wakeup flags are unset during registration) and modify device_set_wakeup_capable() so that it adds (or removes) the relevant sysfs attributes if a device's wakeup capability status is changed. Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
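The behavioural change reduces to: create the attributes only when the device is wakeup-capable, and have device_set_wakeup_capable() add or remove them when that capability flips later. A toy sketch of that state machine, with the sysfs calls replaced by prints and all names other than the flag semantics invented for illustration:

/* Toy sketch: wakeup sysfs attributes exist only while the device is
 * wakeup-capable; flipping the capability adds or removes them. */
#include <stdio.h>
#include <stdbool.h>

struct device_model {
    const char *name;
    bool can_wakeup;
    bool has_wakeup_attrs;
};

static void wakeup_sysfs_add(struct device_model *dev)
{
    dev->has_wakeup_attrs = true;
    printf("%s: created power/wakeup attributes\n", dev->name);
}

static void wakeup_sysfs_remove(struct device_model *dev)
{
    dev->has_wakeup_attrs = false;
    printf("%s: removed power/wakeup attributes\n", dev->name);
}

static void set_wakeup_capable(struct device_model *dev, bool capable)
{
    if (capable == dev->can_wakeup)
        return;                     /* no change, nothing to do */
    dev->can_wakeup = capable;
    if (capable)
        wakeup_sysfs_add(dev);
    else
        wakeup_sysfs_remove(dev);
}

int main(void)
{
    struct device_model dev = { .name = "usb1-port2" };

    /* Registered as not wakeup-capable: no attributes are created. */
    set_wakeup_capable(&dev, true);     /* attributes appear  */
    set_wakeup_capable(&dev, false);    /* attributes go away */
    return 0;
}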
-
By Rafael J. Wysocki
A subsequent patch will modify device_set_wakeup_capable() in such a way that it will call functions which may sleep, and therefore it shouldn't be called under spinlocks. In preparation for that, modify usb_set_device_state() to avoid calling device_set_wakeup_capable() under device_state_lock. Tested-by: Minchan Kim <minchan.kim@gmail.com> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl> Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
-
By Mandeep Singh Baines
printk()s without a priority level default to KERN_WARNING. To reduce noise at KERN_WARNING, this patch sets the priority level appropriately for unleveled printk()s. This should be useful to folks that look at dmesg warnings closely. Changed these messages to pr_info(). Signed-off-by: Mandeep Singh Baines <msb@chromium.org> Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
By Rafael J. Wysocki
Since pm_save_wakeup_count() has just been changed to clear events_check_enabled unconditionally before checking whether there are any new wakeup events registered since the last read from /sys/power/wakeup_count, the detection of wakeup events during suspend may be disabled, after it has been enabled, by writing a "wrong" value back to /sys/power/wakeup_count. For this reason, it is not necessary to update events_check_enabled in pm_get_wakeup_count() any more. Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-