Commit f13d493d authored by Neil Horman, committed by David S. Miller

netpoll: Revert napi_poll fix for bonding driver

In an earlier patch I modified napi_poll so that devices with IFF_MASTER polled
the per_cpu list instead of the device list for napi.  I did this because the
bonding driver has no napi instances to poll; it instead expects to check the
slave devices' napi instances, which napi_poll was unaware of.  Looking at this
more closely, however, I now see this isn't strictly needed.  The bond driver's
poll_controller calls the slaves' poll_controller via netpoll_poll_dev, which
recursively calls poll_napi on each slave, allowing those napi instances to get
serviced.  The earlier patch isn't at all harmful, it's just not needed, so let's
revert it to make the code cleaner.  Sorry for the noise.
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Reviewed-by: WANG Cong <amwang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Parent 9ff76c95
@@ -156,15 +156,8 @@ static void poll_napi(struct net_device *dev)
 {
 	struct napi_struct *napi;
 	int budget = 16;
-	struct softnet_data *sd = &__get_cpu_var(softnet_data);
-	struct list_head *nlist;
-
-	if (dev->flags & IFF_MASTER)
-		nlist = &sd->poll_list;
-	else
-		nlist = &dev->napi_list;
 
-	list_for_each_entry(napi, nlist, dev_list) {
+	list_for_each_entry(napi, &dev->napi_list, dev_list) {
 		if (napi->poll_owner != smp_processor_id() &&
 		    spin_trylock(&napi->poll_lock)) {
 			budget = poll_one_napi(dev->npinfo, napi, budget);