- 12 Jul 2007: 1 commit
-
-
Committed by Auke Kok:
Instead of all drivers reading PCI config space to get the revision ID, they can now use the pci_device->revision member. This exposes some issues where drivers were reading a word or a dword for the revision number, and adding useless error handling around the read. Some drivers even just read it for no purpose at all. In devices where the revision ID is copied over and used in what appears to be a hot path, I have left the copy code and the cached copy so as not to influence the driver's performance. Compile tested with make all{yes,mod}config on x86_64 and i386.
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Acked-by: Dave Jones <davej@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
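For context, a minimal before/after sketch of the pattern this change enables; foo_get_revision_* are illustrative helpers, not the actual e1000 diff:

```c
#include <linux/pci.h>

/* Before: an extra config-space read, often wrapped in needless error handling. */
static u8 foo_get_revision_old(struct pci_dev *pdev)
{
        u8 rev;

        pci_read_config_byte(pdev, PCI_REVISION_ID, &rev);
        return rev;
}

/* After: the PCI core cached the revision ID at enumeration time. */
static u8 foo_get_revision_new(struct pci_dev *pdev)
{
        return pdev->revision;
}
```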
-
- 03 Jun 2007: 1 commit
-
-
Committed by Auke Kok:
To ensure the symmetry of poll enable/disable in up/down, we should initialize the netdevice to be poll-disabled at load time. Doing this after register_netdevice leaves us open to another race, so let's move all the netif_* calls above register_netdevice so the stack starts out how we expect it to be.
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Doug Chapman <doug.chapman@hp.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
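A rough ordering sketch of the idea above, written against the pre-2.6.24 netif_poll_* API and with placeholder names (not the actual e1000 probe path):

```c
#include <linux/errno.h>
#include <linux/etherdevice.h>
#include <linux/netdevice.h>

static int foo_probe(void)
{
        struct net_device *netdev = alloc_etherdev(0);
        int err;

        if (!netdev)
                return -ENOMEM;

        /* Quiesce the interface before the stack can see it... */
        netif_carrier_off(netdev);
        netif_stop_queue(netdev);
        netif_poll_disable(netdev);

        /* ...and only then publish it. */
        err = register_netdev(netdev);
        if (err)
                free_netdev(netdev);
        return err;
}
```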
-
- 30 May 2007: 1 commit
-
-
Committed by Herbert Xu:
This restores the previously removed netif_poll_enable call in e1000_open. It's needed on all but the first call to e1000_open for a NIC, as e1000_close always calls netif_poll_disable. netif_poll_enable can only be called safely if no polls have been scheduled. This should be the case as long as we don't enter our IRQ handler. In order to guarantee this we explicitly disable IRQs as early as possible when we're probing the NIC.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "Kok, Auke" <auke-jan.h.kok@intel.com>
Cc: Jeff Garzik <jeff@garzik.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
-
- 22 May 2007: 1 commit
-
-
Committed by Auke Kok:
Herbert Xu wrote: "netif_poll_enable can only be called if you've previously called netif_poll_disable. Otherwise a poll might already be in action and you may get a crash like this." Removing the call to netif_poll_enable in e1000_open should fix this issue; the only other call to netif_poll_enable is in e1000_up(), which is only reached after a device reset or resume.
Bugzilla: http://bugzilla.kernel.org/show_bug.cgi?id=8455
https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=240339
Tested by Doug Chapman <doug.chapman@hp.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
-
- 18 May 2007: 1 commit
-
-
Committed by Auke Kok:
pci_enable_msi failure is a normal event, so we should not print any error. Going over the code I spotted a leak from a missing pci_disable_msi() when irq allocation fails. The whole code also needed a cleanup, so I combined the two different calls to pci_request_irq into a single call, making this look a lot better. All #ifdef CONFIG_PCI_MSI's have been removed. Compile tested with both CONFIG_PCI_MSI enabled and disabled.
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
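A hedged sketch of the try-MSI-then-fall-back pattern the message describes; foo_intr and foo_request_irq are placeholders, not the e1000 code:

```c
#include <linux/interrupt.h>
#include <linux/netdevice.h>
#include <linux/pci.h>

static irqreturn_t foo_intr(int irq, void *data)
{
        return IRQ_HANDLED;
}

static int foo_request_irq(struct pci_dev *pdev, struct net_device *netdev)
{
        unsigned long irq_flags = IRQF_SHARED;
        bool have_msi = false;
        int err;

        if (!pci_enable_msi(pdev)) {    /* failure here is not an error */
                have_msi = true;
                irq_flags = 0;          /* MSI lines are never shared */
        }

        /* One request_irq call covers both the MSI and the legacy case. */
        err = request_irq(pdev->irq, foo_intr, irq_flags, netdev->name, netdev);
        if (err && have_msi)
                pci_disable_msi(pdev);  /* don't leak the MSI vector */

        return err;
}
```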
-
- 10 May 2007: 2 commits
-
-
Committed by Oleg Nesterov:
flush_work(wq, work) doesn't need the first parameter, we can use cwq->wq (this was possible from the very beginning, I missed this). So we can unify flush_work_keventd and flush_work. Also, rename flush_work() to cancel_work_sync() and fix all callers. Perhaps this is not the best name, but "flush_work" is really bad. (akpm: this is why the earlier patches bypassed maintainers)
Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
Cc: Jeff Garzik <jeff@garzik.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Jens Axboe <jens.axboe@oracle.com>
Cc: Tejun Heo <htejun@gmail.com>
Cc: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
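A small usage sketch of the renamed API under assumed names (foo_reset_task is a placeholder work item, not a kernel symbol):

```c
#include <linux/workqueue.h>

static void foo_reset_task(struct work_struct *work)
{
        /* ... long-running driver work ... */
}

static DECLARE_WORK(foo_reset_work, foo_reset_task);

static void foo_down(void)
{
        /*
         * cancel_work_sync() returns only once the work item is neither
         * pending nor still executing on keventd.
         */
        cancel_work_sync(&foo_reset_work);
}
```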
-
Committed by Andrew Morton:
Switch e1000 over to flush_work_keventd(). This probably fixes a netdev-close versus linkwatch rtnl_lock() deadlock which nobody knew about. (akpm: bypassed maintainers, sorry. There are other patches which depend on this)
Cc: "Maciej W. Rozycki" <macro@linux-mips.org>
Cc: David Howells <dhowells@redhat.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Jeff Garzik <jeff@garzik.org>
Acked-by: Auke Kok <auke-jan.h.kok@intel.com>
Cc: Oleg Nesterov <oleg@tv-sign.ru>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 28 Apr 2007: 3 commits
-
-
Committed by Milind Arun Choudhary:
E1000_ROUNDUP macro cleanup: use ALIGN.
Signed-off-by: Milind Arun Choudhary <milindchoudhary@gmail.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
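Illustrative only: a driver-private round-up macro of this general shape can be replaced by the generic ALIGN() helper (FOO_ROUNDUP and foo_ring_size are assumptions, not the e1000 definitions):

```c
#include <linux/kernel.h>       /* ALIGN() */

#define FOO_ROUNDUP(size, unit) (((size) + (unit) - 1) & ~((unit) - 1))

static unsigned int foo_ring_size(unsigned int count)
{
        /* Old style: FOO_ROUNDUP(count * 16, 4096). New style: */
        return ALIGN(count * 16, 4096);
}
```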
-
Committed by Yan Burman:
Replace kmalloc+memset with kzalloc throughout the driver. Slightly modified by Auke Kok.
Signed-off-by: Yan Burman <burman.yan@gmail.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
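The conversion in its generic form, using a placeholder structure rather than an actual e1000 type:

```c
#include <linux/slab.h>
#include <linux/string.h>

struct foo_ring {
        void *desc;
        unsigned int count;
};

/* Before: allocate, then zero by hand. */
static struct foo_ring *foo_alloc_ring_old(void)
{
        struct foo_ring *ring = kmalloc(sizeof(*ring), GFP_KERNEL);

        if (ring)
                memset(ring, 0, sizeof(*ring));
        return ring;
}

/* After: kzalloc() allocates and zeroes in one call. */
static struct foo_ring *foo_alloc_ring_new(void)
{
        return kzalloc(sizeof(struct foo_ring), GFP_KERNEL);
}
```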
-
Committed by Arjan van de Ven:
Use the round_jiffies() function in e1000. These timers all were of the "about once a second" or "about once every X seconds" variety and several showed up in the "what wakes the cpu up" profiles that the tickless patches provide. Some timers are highly dynamic based on network load; but even on low activity systems they still show up, so the rounding is done only in cases of low activity, allowing higher frequency timers in the high activity case. The various hardware watchdogs are an obvious case; they run every 2 seconds but aren't otherwise specific about exactly when they need to run.
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Acked-by: Auke Kok <auke-jan.h.kok@intel.com>
Cc: Jeff Garzik <jeff@garzik.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
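A minimal sketch of the rounding idea for a roughly 2-second watchdog (placeholder timer, not the e1000 one):

```c
#include <linux/jiffies.h>
#include <linux/timer.h>

static struct timer_list foo_watchdog_timer;

static void foo_rearm_watchdog(void)
{
        /*
         * round_jiffies() nudges the expiry onto a whole second so that
         * many such timers fire together and an idle CPU wakes up less often.
         */
        mod_timer(&foo_watchdog_timer, round_jiffies(jiffies + 2 * HZ));
}
```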
-
- 26 Apr 2007: 12 commits
-
-
Committed by Mark Huth:
The current e1000_xmit_frame spews raw interrupt-disabled nag messages when used with RT kernel patches. This patch uses spin_trylock_irqsave, which allows RT patches to properly manage the irq semantics.
Signed-off-by: Mark Huth <mhuth@mvista.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
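A sketch of the locking pattern in a hard_start_xmit handler, under assumed names; combining the irq-save and the trylock in one call lets RT kernels apply their own lock semantics instead of a raw local_irq_save():

```c
#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(foo_tx_lock);

static int foo_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
{
        unsigned long flags;

        if (!spin_trylock_irqsave(&foo_tx_lock, flags))
                return NETDEV_TX_LOCKED;  /* ask the stack to requeue */

        /* ... queue descriptors ... */

        spin_unlock_irqrestore(&foo_tx_lock, flags);
        return NETDEV_TX_OK;
}
```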
-
Committed by Bruce Allan:
Upon code inspection it was spotted that the firmware handover bit get/set were mismatched, which may have resulted in management issues on PCI-E adapters. Setting them correctly may fix some management issues such as ARP routing etc.
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
-
Committed by Auke Kok:
DEBUG_SHIRQ code exposed that e1000 was not ready for incoming interrupts after having called pci_request_irq. This obviously requires us to finish our software setup, which assigns the irq handler, before we request the irq.
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
-
Committed by Herbert Xu:
When csum_offset was introduced we did a conversion from csum to csum_offset where applicable. A couple of drivers were missed in this process. It was harmless to begin with since the two fields coincided. Now that we've made them different with the addition of csum_start, the missed drivers must be converted or they can't send out any packets that require checksum offload.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
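A hedged sketch of what a checksum-offload TX path reads from the skb after the csum_start/csum_offset split; the descriptor struct is a placeholder, not e1000's context descriptor layout:

```c
#include <linux/skbuff.h>
#include <linux/types.h>

struct foo_csum_desc {
        u8 css;         /* where the hardware starts summing */
        u8 cso;         /* where the hardware writes the result */
};

static void foo_fill_csum_desc(struct sk_buff *skb, struct foo_csum_desc *desc)
{
        u8 css = skb_transport_offset(skb);

        desc->css = css;
        /* csum_offset locates the checksum field relative to the start. */
        desc->cso = css + skb->csum_offset;
}
```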
-
Committed by Arnaldo Carvalho de Melo:
To clearly state the intent of copying to linear sk_buffs, _offset being an overly long variant but interesting for the sake of saving some bytes.
Signed-off-by: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
-
Committed by Arnaldo Carvalho de Melo:
So that it is also an offset from skb->head, reduces its size from 8 to 4 bytes on 64bit architectures, allowing us to combine the 4 bytes hole left by the layer headers conversion, reducing struct sk_buff size to 256 bytes, i.e. 4 64-byte cachelines, and since the sk_buff slab cache is SLAB_HWCACHE_ALIGN... :-) Many calculations that previously required that skb->{transport,network,mac}_header be first converted to a pointer now can be done directly, being meaningful as offsets or pointers.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Arnaldo Carvalho de Melo:
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Arnaldo Carvalho de Melo:
The ip_hdrlen() buddy, created to reduce the number of skb->h.th-> uses and to avoid the longer, open-coded equivalent. Ditched a no-op in bnx2 in the process. I wonder if we should have a BUG_ON(skb->h.th->doff < 5) in tcp_optlen()...
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
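A hedged sketch of how a TSO-capable driver can use these helpers instead of open-coding the ihl/doff arithmetic (foo_tso_header_len is an assumption, not a kernel function):

```c
#include <linux/skbuff.h>
#include <net/ip.h>
#include <net/tcp.h>

static unsigned int foo_tso_header_len(struct sk_buff *skb)
{
        /* ip_hdrlen():  IP header length in bytes (ihl * 4)  */
        /* tcp_optlen(): TCP options length in bytes          */
        return skb_network_offset(skb) + ip_hdrlen(skb) +
               sizeof(struct tcphdr) + tcp_optlen(skb);
}
```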
-
Committed by Arnaldo Carvalho de Melo:
For the quite common 'skb->h.raw - skb->data' sequence.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Arnaldo Carvalho de Melo:
Now the skb->nh union has just one member, .raw, i.e. it is just like the skb->mac union, strange, no? I'm just leaving it like that till the transport layer is done with, when we'll rename skb->mac.raw to skb->mac_header (or ->mac_header_offset?), ditto for ->{h,nh}.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Arnaldo Carvalho de Melo:
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Arnaldo Carvalho de Melo:
For the quite common 'skb->nh.raw - skb->data' sequence.
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
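Illustrative equivalences for the two offset helpers introduced in this series, written against the post-conversion API (foo_show_offsets is a placeholder):

```c
#include <linux/kernel.h>
#include <linux/skbuff.h>

static void foo_show_offsets(struct sk_buff *skb)
{
        /* Formerly open-coded as skb->nh.raw - skb->data */
        int net_off = skb_network_offset(skb);
        /* Formerly open-coded as skb->h.raw - skb->data */
        int trans_off = skb_transport_offset(skb);

        pr_debug("network header at %d, transport header at %d\n",
                 net_off, trans_off);
}
```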
-
- 20 Apr 2007: 1 commit
-
-
Committed by Linus Torvalds:
This reverts commit 60cba200. It's been linked to lockups of the e1000 hardware, see for example https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=229603 but it's likely that the commit itself is not really introducing the bug, but just allowing an unrelated problem to rear its ugly head (i.e. one current working theory is that the code exposes us to a hardware race condition by decreasing the amount of time we spend in each NAPI poll cycle). We'll revert it until the root cause is known. Intel has a repeatable reproduction on two different machines and bus traces of the hardware doing something bad.
Acked-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Cc: Jeff Garzik <jeff@garzik.org>
Cc: David S. Miller <davem@davemloft.net>
Cc: Greg KH <gregkh@suse.de>
Cc: Dave Jones <davej@redhat.com>
Cc: Auke Kok <auke-jan.h.kok@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 03 Mar 2007: 1 commit
-
-
Committed by Dan Aloni:
This patch splits the vlan_group struct into a multi-allocated struct. On x86_64, the size of the original struct is a little more than 32KB, causing a 4-order allocation, which is prone to problems caused by buddy-system external fragmentation conditions. I couldn't just use vmalloc() because vfree() cannot be called in the softirq context of the RCU callback.
Signed-off-by: Dan Aloni <da-x@monatomic.org>
Acked-by: Jeff Garzik <jeff@garzik.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
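A generic sketch of the "split one huge array into page-sized chunks" technique described above; the names and sizes are illustrative, not the actual vlan_group layout:

```c
#include <linux/errno.h>
#include <linux/slab.h>

#define FOO_TOTAL_ENTRIES       4096
#define FOO_SPLIT_PARTS         8
#define FOO_PART_LEN            (FOO_TOTAL_ENTRIES / FOO_SPLIT_PARTS)

struct foo_group {
        /* Small top-level table of pointers instead of one 4-order array. */
        void **parts[FOO_SPLIT_PARTS];
};

static void *foo_group_get(struct foo_group *grp, unsigned int idx)
{
        void **part = grp->parts[idx / FOO_PART_LEN];

        return part ? part[idx % FOO_PART_LEN] : NULL;
}

static int foo_group_prealloc(struct foo_group *grp, unsigned int idx)
{
        unsigned int p = idx / FOO_PART_LEN;

        if (!grp->parts[p]) {
                grp->parts[p] = kzalloc(FOO_PART_LEN * sizeof(void *),
                                        GFP_KERNEL);
                if (!grp->parts[p])
                        return -ENOMEM;
        }
        return 0;
}
```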
-
- 22 Feb 2007: 1 commit
-
-
Committed by Linus Torvalds:
This reverts commit d2ed1635. As Thomas Gleixner reports: "e1000 is not working anymore. ifup fails permanently. ADDRCONF(NETDEV_UP): eth0: link is not ready nothing else" The broken commit was identified with "git bisect". Auke Kok says: "I think we need to drop this now. The report that says that this *fixes* something might have been on regular interrupts only. I currently suspect that it breaks all MSI interrupts, which would make sense if I look at the code. Very bad indeed."
Cc: Jesse Brandeburg <jesse.brandeburg@intel.com>
Acked-by: Auke Kok <auke-jan.h.kok@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Jeff Garzik <jeff@garzik.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 18 Feb 2007: 2 commits
-
-
Committed by Kok, Auke:
Now that 2.6.19 provides a proper implementation that saves MSI and PCI-E config space, we can have the e1000 driver use those instead of its custom implementation.
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
-
Committed by Kok, Auke:
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
-
- 08 Feb 2007: 1 commit
-
-
Committed by Linas Vepstas:
Use the newly minted routine to access the PCI channel state.
Signed-off-by: Linas Vepstas <linas@linas.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
- 06 Feb 2007: 7 commits
-
-
Committed by Arjan van de Ven:
Remove the NETIF_F_TSO #ifdef-ery in drivers/net; this was for old-old-2.4 compat (even current 2.4 has NETIF_F_TSO) but it's time to get rid of it by now.
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
-
Committed by Auke Kok:
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
-
Committed by Jesse Brandeburg:
The driver was still mis-calculating the number of bytes sent during transmit; now the driver computes what appears to be exactly 100% correct byte counts (not including CRC) when figuring out how many bytes and frames were sent during the current transmit packet.
-
Committed by Bruce Allan:
Since the driver sets the IP checksum insertion bit (IXSM in the Status field) in transmit context descriptors, it should clear the IP checksum bits of any garbage so as not to confuse the hardware.
Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
-
Committed by Auke Kok:
Print the RX/TX flow control setting at link-up time to display the actual link FC properties instead of the advertised values.
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
-
Committed by Jesse Brandeburg:
This fix attempts to solve a customer (IBM) reported issue with NAPI-enabled e1000 having bad performance when transmitting simultaneously on four ports. The issue comes down to an interaction between NAPI, hardware interrupt balancing, and the driver rescheduling poll on the same processor. Try to fix this by allowing the driver to re-enable interrupts sooner instead of polling one more time, when all the work was recently completed in cleanup.
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
-
Committed by Jesse Brandeburg:
Unfortunately the read-free MSI interrupt handler needs to flush writes to the ICR register, and thus we can't be read-free. Our MSI irq routine thus becomes a lot simpler since we don't need to track link state anymore.
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
-
- 08 Jan 2007: 1 commit
-
-
Committed by Jeff Garzik:
This reverts commit 72f3ab74, which was superseded by commit 683a2aa3 ("e1000: Do not truncate TSO TCP header with 82544 workaround"), which fixed the real problem.
Signed-off-by: Jeff Garzik <jeff@garzik.org>
-
- 27 Dec 2006: 4 commits
-
-
Committed by Herbert Xu:
The e1000 driver has a workaround for 82544 on PCI-X where, if the terminating byte of a buffer is at addresses 0-3 mod 8, then 4 bytes are shaved off it and deferred to a new segment. This is due to an erratum that could otherwise cause TX hangs. Unfortunately this breaks TSO because it may cause the TCP header to be split over two segments, which itself causes TX hangs. The solution is to pull 4 bytes of data up from the next segment rather than pushing 4 bytes off. This ensures the TCP header remains in one piece and works around the PCI-X hang. This patch is based on one from Jesse Brandeburg. This bug has been triggered by both CONFIG_DEBUG_SLAB as well as Xen. Note that the only reason we don't see this normally is because the TCP stack starts writing from the end, i.e., it writes the TCP header first then slaps on the IP header, etc. So the end of the TCP header (skb->tail - 1 here) is always aligned correctly. Had we made the start of the IP header (e.g., IPv6) 8-byte aligned instead, this would happen for normal TCP traffic as well.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
-
Committed by Jesse Brandeburg:
Currently after an interface up, the link state is detected 2 seconds later when the first watchdog timer runs. This patch changes that by triggering the hardware to generate a link-change interrupt from the up() function instead. This has the result that the link state gets detected immediately and without races. This has the potential to speed up booting since a normal distribution boot process waits for a link before DHCP is attempted.
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
-
Committed by Jeff Garzik:
Add 3 extra packet redirect counters for tracking purposes, to make sure we can test that all packets arrive properly. Originally from Jesse Brandeburg <jesse.brandeburg@intel.com>, rewritten to use feature flags by me.
Signed-off-by: Jeff Garzik <jeff@garzik.org>
-
Committed by Jesse Brandeburg:
Allow the user to vary the size at which copybreak works. Currently copybreak is enabled for packets < 256 bytes, but various tests indicate that this should be configurable for specific use cases. In addition, this parameter allows us to force never/always during testing to get full and predictable coverage of both code paths.
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
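A hedged sketch of a tunable copybreak threshold; the default value, description text, and foo_should_copybreak helper are illustrative, not necessarily what e1000 ships:

```c
#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/types.h>

static unsigned int copybreak __read_mostly = 256;
module_param(copybreak, uint, 0644);
MODULE_PARM_DESC(copybreak,
        "Maximum size of packet that is copied to a new buffer on receive");

/* In the RX path: frames smaller than the threshold get copied into a
 * fresh, tightly sized skb instead of handing up the full RX buffer. */
static bool foo_should_copybreak(unsigned int frame_len)
{
        return frame_len < copybreak;
}
```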
-