Commit d0b952a9 authored by Linus Torvalds

Merge master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6

* master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6: (109 commits)
  [ETHTOOL]: Fix UFO typo
  [SCTP]: Fix persistent slowdown in sctp when a gap ack consumes rx buffer.
  [SCTP]: Send only 1 window update SACK per message.
  [SCTP]: Don't do CRC32C checksum over loopback.
  [SCTP] Reset rtt_in_progress for the chunk when processing its sack.
  [SCTP]: Reject sctp packets with broadcast addresses.
  [SCTP]: Limit association max_retrans setting in setsockopt.
  [PFKEYV2]: Fix inconsistent typing in struct sadb_x_kmprivate.
  [IPV6]: Sum real space for RTAs.
  [IRDA]: Use put_unaligned() in irlmp_do_discovery().
  [BRIDGE]: Add support for NETIF_F_HW_CSUM devices
  [NET]: Add NETIF_F_GEN_CSUM and NETIF_F_ALL_CSUM
  [TG3]: Convert to non-LLTX
  [TG3]: Remove unnecessary tx_lock
  [TCP]: Add tcp_slow_start_after_idle sysctl.
  [BNX2]: Update version and reldate
  [BNX2]: Use CPU native page size
  [BNX2]: Use compressed firmware
  [BNX2]: Add firmware decompression
  [BNX2]: Allow WoL settings on new 5708 chips
  ...

Manual fixup for conflict in drivers/net/tulip/winbond-840.c
......@@ -1402,6 +1402,15 @@ running once the system is up.
If enabled at boot time, /selinux/disable can be used
later to disable prior to initial policy load.
selinux_compat_net =
[SELINUX] Set initial selinux_compat_net flag value.
Format: { "0" | "1" }
0 -- use new secmark-based packet controls
1 -- use legacy packet controls
Default value is 0 (preferred).
Value can be changed at runtime via
/selinux/compat_net.
serialnumber [BUGS=IA-32]
sg_def_reserved_size= [SCSI]
......
......@@ -362,6 +362,13 @@ tcp_workaround_signed_windows - BOOLEAN
not receive a window scaling option from them.
Default: 0
tcp_slow_start_after_idle - BOOLEAN
If set, provide RFC2861 behavior and time out the congestion
window after an idle period. An idle period is defined as
the current RTO. If unset, the congestion window will not
be timed out after an idle period.
Default: 1
IP Variables:
ip_local_port_range - 2 INTEGERS
......
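The new tcp_slow_start_after_idle knob above is exposed under /proc/sys/net/ipv4/. A minimal userspace sketch of disabling it (illustrative only, not part of this commit):

#include <stdio.h>

int main(void)
{
	/* 0 disables the RFC2861 idle timeout; 1 (the default) keeps it */
	FILE *f = fopen("/proc/sys/net/ipv4/tcp_slow_start_after_idle", "w");

	if (!f)
		return 1;
	fputs("0\n", f);
	return fclose(f) ? 1 : 0;
}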
......@@ -42,9 +42,9 @@ dev->get_stats:
Context: nominally process, but don't sleep inside an rwlock
dev->hard_start_xmit:
Synchronization: dev->xmit_lock spinlock.
Synchronization: netif_tx_lock spinlock.
When the driver sets NETIF_F_LLTX in dev->features this will be
called without holding xmit_lock. In this case the driver
called without holding netif_tx_lock. In this case the driver
has to lock by itself when needed. It is recommended to use a try lock
for this and return -1 when the spin lock fails.
The locking there should also properly protect against
......@@ -62,12 +62,12 @@ dev->hard_start_xmit:
Only valid when NETIF_F_LLTX is set.
dev->tx_timeout:
Synchronization: dev->xmit_lock spinlock.
Synchronization: netif_tx_lock spinlock.
Context: BHs disabled
Notes: netif_queue_stopped() is guaranteed true
dev->set_multicast_list:
Synchronization: dev->xmit_lock spinlock.
Synchronization: netif_tx_lock spinlock.
Context: BHs disabled
dev->poll:
......
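For drivers that set NETIF_F_LLTX, the try-lock convention described above looks roughly like this sketch (foo_priv and its tx_lock are hypothetical names):

struct foo_priv {
	spinlock_t tx_lock;	/* driver-private TX serialization */
};

static int foo_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct foo_priv *priv = netdev_priv(dev);

	/* LLTX: netif_tx_lock is NOT held here, so serialize ourselves */
	if (!spin_trylock(&priv->tx_lock))
		return -1;	/* tell the core to requeue and retry */

	/* ... hand skb to the hardware ... */

	spin_unlock(&priv->tx_lock);
	return 0;
}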
......@@ -72,4 +72,6 @@ source "drivers/edac/Kconfig"
source "drivers/rtc/Kconfig"
source "drivers/dma/Kconfig"
endmenu
......@@ -74,3 +74,4 @@ obj-$(CONFIG_SGI_SN) += sn/
obj-y += firmware/
obj-$(CONFIG_CRYPTO) += crypto/
obj-$(CONFIG_SUPERH) += sh/
obj-$(CONFIG_DMA_ENGINE) += dma/
......@@ -116,8 +116,7 @@ aoenet_rcv(struct sk_buff *skb, struct net_device *ifp, struct packet_type *pt,
skb = skb_share_check(skb, GFP_ATOMIC);
if (skb == NULL)
return 0;
if (skb_is_nonlinear(skb))
if (skb_linearize(skb, GFP_ATOMIC) < 0)
if (skb_linearize(skb))
goto exit;
if (!is_aoe_netif(ifp))
goto exit;
......
......@@ -127,7 +127,7 @@ void cn_queue_del_callback(struct cn_queue_dev *dev, struct cb_id *id)
if (found) {
cn_queue_free_callback(cbq);
atomic_dec_and_test(&dev->refcnt);
atomic_dec(&dev->refcnt);
}
}
......
#
# DMA engine configuration
#
menu "DMA Engine support"
config DMA_ENGINE
bool "Support for DMA engines"
---help---
DMA engines offload copy operations from the CPU to dedicated
hardware, allowing the copies to happen asynchronously.
comment "DMA Clients"
config NET_DMA
bool "Network: TCP receive copy offload"
depends on DMA_ENGINE && NET
default y
---help---
This enables the use of DMA engines in the network stack to
offload receive copy-to-user operations, freeing CPU cycles.
Since this is the main user of the DMA engine, it should be enabled;
say Y here.
comment "DMA Devices"
config INTEL_IOATDMA
tristate "Intel I/OAT DMA support"
depends on DMA_ENGINE && PCI
default m
---help---
Enable support for the Intel(R) I/OAT DMA engine.
endmenu
obj-$(CONFIG_DMA_ENGINE) += dmaengine.o
obj-$(CONFIG_NET_DMA) += iovlock.o
obj-$(CONFIG_INTEL_IOATDMA) += ioatdma.o
/*
* Copyright(c) 2004 - 2006 Intel Corporation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the Free
* Software Foundation; either version 2 of the License, or (at your option)
* any later version.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program; if not, write to the Free Software Foundation, Inc., 59
* Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*
* The full GNU General Public License is included in this distribution in the
* file called COPYING.
*/
/*
* This code implements the DMA subsystem. It provides a HW-neutral interface
* for other kernel code to use asynchronous memory copy capabilities,
* if present, and allows different HW DMA drivers to register as providing
* this capability.
*
 * Because we are accelerating what is already a relatively fast
 * operation, the code goes to great lengths to avoid additional overhead,
 * such as locking.
*
* LOCKING:
*
* The subsystem keeps two global lists, dma_device_list and dma_client_list.
* Both of these are protected by a mutex, dma_list_mutex.
*
* Each device has a channels list, which runs unlocked but is never modified
* once the device is registered, it's just setup by the driver.
*
* Each client has a channels list, it's only modified under the client->lock
* and in an RCU callback, so it's safe to read under rcu_read_lock().
*
* Each device has a kref, which is initialized to 1 when the device is
* registered. A kref_put is done for each class_device registered. When the
 * class_device is released, the corresponding kref_put is done in the release
* method. Every time one of the device's channels is allocated to a client,
 * a kref_get occurs. When the channel is freed, the corresponding kref_put
* happens. The device's release function does a completion, so
* unregister_device does a remove event, class_device_unregister, a kref_put
* for the first reference, then waits on the completion for all other
* references to finish.
*
* Each channel has an open-coded implementation of Rusty Russell's "bigref,"
 * with a kref and a per_cpu local_t. A single reference is set on an
 * ADDED event, and removed with a REMOVE event. The net DMA client takes an
 * extra reference per outstanding transaction. The release function does a
* kref_put on the device. -ChrisL
*/
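/*
 * In short: readers take fast references via rcu_read_lock() plus a
 * per-cpu local_inc(); only channel teardown (dma_client_chan_free()
 * below) sets slow_ref and folds the per-cpu counts back into the kref.
 */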
#include <linux/init.h>
#include <linux/module.h>
#include <linux/device.h>
#include <linux/dmaengine.h>
#include <linux/hardirq.h>
#include <linux/spinlock.h>
#include <linux/percpu.h>
#include <linux/rcupdate.h>
#include <linux/mutex.h>
static DEFINE_MUTEX(dma_list_mutex);
static LIST_HEAD(dma_device_list);
static LIST_HEAD(dma_client_list);
/* --- sysfs implementation --- */
static ssize_t show_memcpy_count(struct class_device *cd, char *buf)
{
struct dma_chan *chan = container_of(cd, struct dma_chan, class_dev);
unsigned long count = 0;
int i;
for_each_possible_cpu(i)
count += per_cpu_ptr(chan->local, i)->memcpy_count;
return sprintf(buf, "%lu\n", count);
}
static ssize_t show_bytes_transferred(struct class_device *cd, char *buf)
{
struct dma_chan *chan = container_of(cd, struct dma_chan, class_dev);
unsigned long count = 0;
int i;
for_each_possible_cpu(i)
count += per_cpu_ptr(chan->local, i)->bytes_transferred;
return sprintf(buf, "%lu\n", count);
}
static ssize_t show_in_use(struct class_device *cd, char *buf)
{
struct dma_chan *chan = container_of(cd, struct dma_chan, class_dev);
return sprintf(buf, "%d\n", (chan->client ? 1 : 0));
}
static struct class_device_attribute dma_class_attrs[] = {
__ATTR(memcpy_count, S_IRUGO, show_memcpy_count, NULL),
__ATTR(bytes_transferred, S_IRUGO, show_bytes_transferred, NULL),
__ATTR(in_use, S_IRUGO, show_in_use, NULL),
__ATTR_NULL
};
static void dma_async_device_cleanup(struct kref *kref);
static void dma_class_dev_release(struct class_device *cd)
{
struct dma_chan *chan = container_of(cd, struct dma_chan, class_dev);
kref_put(&chan->device->refcount, dma_async_device_cleanup);
}
static struct class dma_devclass = {
.name = "dma",
.class_dev_attrs = dma_class_attrs,
.release = dma_class_dev_release,
};
/* --- client and device registration --- */
/**
* dma_client_chan_alloc - try to allocate a channel to a client
* @client: &dma_client
*
* Called with dma_list_mutex held.
*/
static struct dma_chan *dma_client_chan_alloc(struct dma_client *client)
{
struct dma_device *device;
struct dma_chan *chan;
unsigned long flags;
int desc; /* allocated descriptor count */
/* Find a channel, any DMA engine will do */
list_for_each_entry(device, &dma_device_list, global_node) {
list_for_each_entry(chan, &device->channels, device_node) {
if (chan->client)
continue;
desc = chan->device->device_alloc_chan_resources(chan);
if (desc >= 0) {
kref_get(&device->refcount);
kref_init(&chan->refcount);
chan->slow_ref = 0;
INIT_RCU_HEAD(&chan->rcu);
chan->client = client;
spin_lock_irqsave(&client->lock, flags);
list_add_tail_rcu(&chan->client_node,
&client->channels);
spin_unlock_irqrestore(&client->lock, flags);
return chan;
}
}
}
return NULL;
}
/**
 * dma_chan_cleanup - release a DMA channel's resources
 * @kref: kernel reference structure
 */
void dma_chan_cleanup(struct kref *kref)
{
struct dma_chan *chan = container_of(kref, struct dma_chan, refcount);
chan->device->device_free_chan_resources(chan);
chan->client = NULL;
kref_put(&chan->device->refcount, dma_async_device_cleanup);
}
static void dma_chan_free_rcu(struct rcu_head *rcu)
{
struct dma_chan *chan = container_of(rcu, struct dma_chan, rcu);
int bias = 0x7FFFFFFF;
int i;
for_each_possible_cpu(i)
bias -= local_read(&per_cpu_ptr(chan->local, i)->refcount);
atomic_sub(bias, &chan->refcount.refcount);
kref_put(&chan->refcount, dma_chan_cleanup);
}
static void dma_client_chan_free(struct dma_chan *chan)
{
atomic_add(0x7FFFFFFF, &chan->refcount.refcount);
chan->slow_ref = 1;
call_rcu(&chan->rcu, dma_chan_free_rcu);
}
/**
* dma_chans_rebalance - reallocate channels to clients
*
 * When the number of DMA channels in the system changes,
 * channels need to be rebalanced among clients.
*/
static void dma_chans_rebalance(void)
{
struct dma_client *client;
struct dma_chan *chan;
unsigned long flags;
mutex_lock(&dma_list_mutex);
list_for_each_entry(client, &dma_client_list, global_node) {
while (client->chans_desired > client->chan_count) {
chan = dma_client_chan_alloc(client);
if (!chan)
break;
client->chan_count++;
client->event_callback(client,
chan,
DMA_RESOURCE_ADDED);
}
while (client->chans_desired < client->chan_count) {
spin_lock_irqsave(&client->lock, flags);
chan = list_entry(client->channels.next,
struct dma_chan,
client_node);
list_del_rcu(&chan->client_node);
spin_unlock_irqrestore(&client->lock, flags);
client->chan_count--;
client->event_callback(client,
chan,
DMA_RESOURCE_REMOVED);
dma_client_chan_free(chan);
}
}
mutex_unlock(&dma_list_mutex);
}
/**
* dma_async_client_register - allocate and register a &dma_client
* @event_callback: callback for notification of channel addition/removal
*/
struct dma_client *dma_async_client_register(dma_event_callback event_callback)
{
struct dma_client *client;
client = kzalloc(sizeof(*client), GFP_KERNEL);
if (!client)
return NULL;
INIT_LIST_HEAD(&client->channels);
spin_lock_init(&client->lock);
client->chans_desired = 0;
client->chan_count = 0;
client->event_callback = event_callback;
mutex_lock(&dma_list_mutex);
list_add_tail(&client->global_node, &dma_client_list);
mutex_unlock(&dma_list_mutex);
return client;
}
/**
 * dma_async_client_unregister - unregister a client and free the &dma_client
 * @client: &dma_client to unregister
 *
 * Forcibly frees any allocated DMA channels and frees the &dma_client memory
 */
void dma_async_client_unregister(struct dma_client *client)
{
struct dma_chan *chan;
if (!client)
return;
rcu_read_lock();
list_for_each_entry_rcu(chan, &client->channels, client_node)
dma_client_chan_free(chan);
rcu_read_unlock();
mutex_lock(&dma_list_mutex);
list_del(&client->global_node);
mutex_unlock(&dma_list_mutex);
kfree(client);
dma_chans_rebalance();
}
/**
* dma_async_client_chan_request - request DMA channels
* @client: &dma_client
* @number: count of DMA channels requested
*
* Clients call dma_async_client_chan_request() to specify how many
* DMA channels they need, 0 to free all currently allocated.
* The resulting allocations/frees are indicated to the client via the
* event callback.
*/
void dma_async_client_chan_request(struct dma_client *client,
unsigned int number)
{
client->chans_desired = number;
dma_chans_rebalance();
}
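/*
 * Editor's sketch of a minimal client tying the calls above together.
 * The dma_event enum and callback typedef are defined in the collapsed
 * dmaengine.h diff, so the exact prototype is an assumption.
 */
static void sketch_event(struct dma_client *client, struct dma_chan *chan,
	enum dma_event event)
{
	switch (event) {
	case DMA_RESOURCE_ADDED:
		/* channel granted: stash it for later memcpy calls */
		break;
	case DMA_RESOURCE_REMOVED:
		/* stop using the channel before returning */
		break;
	default:
		break;
	}
}

static int __init sketch_client_init(void)
{
	struct dma_client *client = dma_async_client_register(sketch_event);

	if (!client)
		return -ENOMEM;
	dma_async_client_chan_request(client, 1);	/* ask for one channel */
	return 0;
}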
/**
 * dma_async_device_register - register a &dma_device and its channels
 * @device: &dma_device
 */
int dma_async_device_register(struct dma_device *device)
{
static int id;
int chancnt = 0;
struct dma_chan* chan;
if (!device)
return -ENODEV;
init_completion(&device->done);
kref_init(&device->refcount);
device->dev_id = id++;
/* represent channels in sysfs. Probably want devs too */
list_for_each_entry(chan, &device->channels, device_node) {
chan->local = alloc_percpu(typeof(*chan->local));
if (chan->local == NULL)
continue;
chan->chan_id = chancnt++;
chan->class_dev.class = &dma_devclass;
chan->class_dev.dev = NULL;
snprintf(chan->class_dev.class_id, BUS_ID_SIZE, "dma%dchan%d",
device->dev_id, chan->chan_id);
kref_get(&device->refcount);
class_device_register(&chan->class_dev);
}
mutex_lock(&dma_list_mutex);
list_add_tail(&device->global_node, &dma_device_list);
mutex_unlock(&dma_list_mutex);
dma_chans_rebalance();
return 0;
}
/**
 * dma_async_device_cleanup - function called when all references are released
 * @kref: kernel reference structure
 */
static void dma_async_device_cleanup(struct kref *kref)
{
struct dma_device *device;
device = container_of(kref, struct dma_device, refcount);
complete(&device->done);
}
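/**
 * dma_async_device_unregister - unregister a &dma_device and free its channels
 * @device: &dma_device
 */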
void dma_async_device_unregister(struct dma_device* device)
{
struct dma_chan *chan;
unsigned long flags;
mutex_lock(&dma_list_mutex);
list_del(&device->global_node);
mutex_unlock(&dma_list_mutex);
list_for_each_entry(chan, &device->channels, device_node) {
if (chan->client) {
spin_lock_irqsave(&chan->client->lock, flags);
list_del(&chan->client_node);
chan->client->chan_count--;
spin_unlock_irqrestore(&chan->client->lock, flags);
chan->client->event_callback(chan->client,
chan,
DMA_RESOURCE_REMOVED);
dma_client_chan_free(chan);
}
class_device_unregister(&chan->class_dev);
}
dma_chans_rebalance();
kref_put(&device->refcount, dma_async_device_cleanup);
wait_for_completion(&device->done);
}
static int __init dma_bus_init(void)
{
mutex_init(&dma_list_mutex);
return class_register(&dma_devclass);
}
subsys_initcall(dma_bus_init);
EXPORT_SYMBOL(dma_async_client_register);
EXPORT_SYMBOL(dma_async_client_unregister);
EXPORT_SYMBOL(dma_async_client_chan_request);
EXPORT_SYMBOL(dma_async_memcpy_buf_to_buf);
EXPORT_SYMBOL(dma_async_memcpy_buf_to_pg);
EXPORT_SYMBOL(dma_async_memcpy_pg_to_pg);
EXPORT_SYMBOL(dma_async_memcpy_complete);
EXPORT_SYMBOL(dma_async_memcpy_issue_pending);
EXPORT_SYMBOL(dma_async_device_register);
EXPORT_SYMBOL(dma_async_device_unregister);
EXPORT_SYMBOL(dma_chan_cleanup);
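/*
 * Editor's sketch of driving the exported memcpy API over a granted
 * channel; the enum dma_status values come from the collapsed
 * dmaengine.h diff, so the polling details are assumptions.
 */
static int sketch_sync_memcpy(struct dma_chan *chan, void *dest, void *src,
	size_t len)
{
	dma_cookie_t cookie = dma_async_memcpy_buf_to_buf(chan, dest, src, len);

	dma_async_memcpy_issue_pending(chan);	/* kick the hardware */
	while (dma_async_memcpy_complete(chan, cookie, NULL, NULL) ==
			DMA_IN_PROGRESS)
		cpu_relax();
	return 0;
}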
(This diff is collapsed.)
/*
* Copyright(c) 2004 - 2006 Intel Corporation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the Free
* Software Foundation; either version 2 of the License, or (at your option)
* any later version.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program; if not, write to the Free Software Foundation, Inc., 59
* Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*
* The full GNU General Public License is included in this distribution in the
* file called COPYING.
*/
#ifndef IOATDMA_H
#define IOATDMA_H
#include <linux/dmaengine.h>
#include "ioatdma_hw.h"
#include <linux/init.h>
#include <linux/dmapool.h>
#include <linux/cache.h>
#include <linux/pci_ids.h>
#define IOAT_LOW_COMPLETION_MASK 0xffffffc0
extern struct list_head dma_device_list;
extern struct list_head dma_client_list;
/**
 * struct ioat_device - internal representation of an IOAT device
 * @pdev: PCI-Express device
 * @reg_base: MMIO register space base address
 * @dma_pool: for allocating DMA descriptors
 * @completion_pool: for allocating completion writeback space
 * @common: embedded struct dma_device
 * @msi: Message Signaled Interrupt number
 */
struct ioat_device {
struct pci_dev *pdev;
void *reg_base;
struct pci_pool *dma_pool;
struct pci_pool *completion_pool;
struct dma_device common;
u8 msi;
};
/**
 * struct ioat_dma_chan - internal representation of a DMA channel
 * @reg_base: per-channel MMIO register space base address
 * @completed_cookie: last cookie seen completed on cleanup
 * @last_completion: completion address last observed during cleanup
 * @xfercap: XFERCAP register value expanded out
 * @cleanup_lock: protects the completion/cleanup path
 * @desc_lock: protects the descriptor lists
 * @free_desc: list of unused hardware descriptors
 * @used_desc: list of descriptors in flight
 * @pending: count of appended descriptors not yet issued to hardware
 * @device: parent &struct ioat_device
 * @common: embedded struct dma_chan
 * @completion_addr: DMA address of the completion writeback area
 * @completion_virt: kernel view of the hardware completion writeback
 */
struct ioat_dma_chan {
void *reg_base;
dma_cookie_t completed_cookie;
unsigned long last_completion;
u32 xfercap; /* XFERCAP register value expanded out */
spinlock_t cleanup_lock;
spinlock_t desc_lock;
struct list_head free_desc;
struct list_head used_desc;
int pending;
struct ioat_device *device;
struct dma_chan common;
dma_addr_t completion_addr;
union {
u64 full; /* HW completion writeback */
struct {
u32 low;
u32 high;
};
} *completion_virt;
};
/**
 * struct ioat_desc_sw - wrapper around hardware descriptor format plus
 * additional software fields
 * @hw: hardware DMA descriptor
 * @node: list node for the channel's free/used descriptor lists
 * @cookie: cookie assigned when the descriptor was handed to a client
 * @phys: DMA address of the hardware descriptor
 */
struct ioat_desc_sw {
struct ioat_dma_descriptor *hw;
struct list_head node;
dma_cookie_t cookie;
dma_addr_t phys;
DECLARE_PCI_UNMAP_ADDR(src)
DECLARE_PCI_UNMAP_LEN(src_len)
DECLARE_PCI_UNMAP_ADDR(dst)
DECLARE_PCI_UNMAP_LEN(dst_len)
};
#endif /* IOATDMA_H */
/*
* Copyright(c) 2004 - 2006 Intel Corporation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the Free
* Software Foundation; either version 2 of the License, or (at your option)
* any later version.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program; if not, write to the Free Software Foundation, Inc., 59
* Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*
* The full GNU General Public License is included in this distribution in the
* file called COPYING.
*/
#ifndef _IOAT_HW_H_
#define _IOAT_HW_H_
/* PCI Configuration Space Values */
#define IOAT_PCI_VID 0x8086
#define IOAT_PCI_DID 0x1A38
#define IOAT_PCI_RID 0x00
#define IOAT_PCI_SVID 0x8086
#define IOAT_PCI_SID 0x8086
#define IOAT_VER 0x12 /* Version 1.2 */
struct ioat_dma_descriptor {
uint32_t size;
uint32_t ctl;
uint64_t src_addr;
uint64_t dst_addr;
uint64_t next;
uint64_t rsv1;
uint64_t rsv2;
uint64_t user1;
uint64_t user2;
};
#define IOAT_DMA_DESCRIPTOR_CTL_INT_GN 0x00000001
#define IOAT_DMA_DESCRIPTOR_CTL_SRC_SN 0x00000002
#define IOAT_DMA_DESCRIPTOR_CTL_DST_SN 0x00000004
#define IOAT_DMA_DESCRIPTOR_CTL_CP_STS 0x00000008
#define IOAT_DMA_DESCRIPTOR_CTL_FRAME 0x00000010
#define IOAT_DMA_DESCRIPTOR_NUL 0x00000020
#define IOAT_DMA_DESCRIPTOR_OPCODE 0xFF000000
#endif
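/*
 * Editor's sketch of filling one hardware descriptor from the layout
 * above; the ctl choice (completion-status writeback) is an assumption,
 * and the real chaining logic lives in the collapsed ioatdma.c diff.
 */
static void sketch_fill_desc(struct ioat_dma_descriptor *hw, uint32_t len,
	uint64_t src, uint64_t dst, uint64_t next)
{
	hw->size = len;
	hw->ctl = IOAT_DMA_DESCRIPTOR_CTL_CP_STS;
	hw->src_addr = src;
	hw->dst_addr = dst;
	hw->next = next;	/* descriptors form a hardware-walked chain */
}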
/*
* Copyright(c) 2004 - 2006 Intel Corporation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the Free
* Software Foundation; either version 2 of the License, or (at your option)
* any later version.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program; if not, write to the Free Software Foundation, Inc., 59
* Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*
* The full GNU General Public License is included in this distribution in the
* file called COPYING.
*/
#ifndef IOATDMA_IO_H
#define IOATDMA_IO_H
#include <asm/io.h>
/*
* device and per-channel MMIO register read and write functions
 * this is a lot of annoying inline functions, but it's typesafe
*/
static inline u8 ioatdma_read8(struct ioat_device *device,
unsigned int offset)
{
return readb(device->reg_base + offset);
}
static inline u16 ioatdma_read16(struct ioat_device *device,
unsigned int offset)
{
return readw(device->reg_base + offset);
}
static inline u32 ioatdma_read32(struct ioat_device *device,
unsigned int offset)
{
return readl(device->reg_base + offset);
}
static inline void ioatdma_write8(struct ioat_device *device,
unsigned int offset, u8 value)
{
writeb(value, device->reg_base + offset);
}
static inline void ioatdma_write16(struct ioat_device *device,
unsigned int offset, u16 value)
{
writew(value, device->reg_base + offset);
}
static inline void ioatdma_write32(struct ioat_device *device,
unsigned int offset, u32 value)
{
writel(value, device->reg_base + offset);
}
static inline u8 ioatdma_chan_read8(struct ioat_dma_chan *chan,
unsigned int offset)
{
return readb(chan->reg_base + offset);
}
static inline u16 ioatdma_chan_read16(struct ioat_dma_chan *chan,
unsigned int offset)
{
return readw(chan->reg_base + offset);
}
static inline u32 ioatdma_chan_read32(struct ioat_dma_chan *chan,
unsigned int offset)
{
return readl(chan->reg_base + offset);
}
static inline void ioatdma_chan_write8(struct ioat_dma_chan *chan,
unsigned int offset, u8 value)
{
writeb(value, chan->reg_base + offset);
}
static inline void ioatdma_chan_write16(struct ioat_dma_chan *chan,
unsigned int offset, u16 value)
{
writew(value, chan->reg_base + offset);
}
static inline void ioatdma_chan_write32(struct ioat_dma_chan *chan,
unsigned int offset, u32 value)
{
writel(value, chan->reg_base + offset);
}
#if (BITS_PER_LONG == 64)
static inline u64 ioatdma_chan_read64(struct ioat_dma_chan *chan,
unsigned int offset)
{
return readq(chan->reg_base + offset);
}
static inline void ioatdma_chan_write64(struct ioat_dma_chan *chan,
unsigned int offset, u64 value)
{
writeq(value, chan->reg_base + offset);
}
#endif
#endif /* IOATDMA_IO_H */
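/*
 * Editor's sketch of how these accessors pair with the register layout
 * in the next header, e.g. resetting a channel (constants defined below):
 */
static void sketch_reset_chan(struct ioat_dma_chan *ioat_chan)
{
	ioatdma_chan_write8(ioat_chan, IOAT_CHANCMD_OFFSET, IOAT_CHANCMD_RESET);
}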
/*
* Copyright(c) 2004 - 2006 Intel Corporation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the Free
* Software Foundation; either version 2 of the License, or (at your option)
* any later version.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program; if not, write to the Free Software Foundation, Inc., 59
* Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*
* The full GNU General Public License is included in this distribution in the
* file called COPYING.
*/
#ifndef _IOAT_REGISTERS_H_
#define _IOAT_REGISTERS_H_
/* MMIO Device Registers */
#define IOAT_CHANCNT_OFFSET 0x00 /* 8-bit */
#define IOAT_XFERCAP_OFFSET 0x01 /* 8-bit */
#define IOAT_XFERCAP_4KB 12
#define IOAT_XFERCAP_8KB 13
#define IOAT_XFERCAP_16KB 14
#define IOAT_XFERCAP_32KB 15
#define IOAT_XFERCAP_32GB 0
#define IOAT_GENCTRL_OFFSET 0x02 /* 8-bit */
#define IOAT_GENCTRL_DEBUG_EN 0x01
#define IOAT_INTRCTRL_OFFSET 0x03 /* 8-bit */
#define IOAT_INTRCTRL_MASTER_INT_EN 0x01 /* Master Interrupt Enable */
#define IOAT_INTRCTRL_INT_STATUS 0x02 /* ATTNSTATUS -or- Channel Int */
#define IOAT_INTRCTRL_INT 0x04 /* INT_STATUS -and- MASTER_INT_EN */
#define IOAT_ATTNSTATUS_OFFSET 0x04 /* Each bit is a channel */
#define IOAT_VER_OFFSET 0x08 /* 8-bit */
#define IOAT_VER_MAJOR_MASK 0xF0
#define IOAT_VER_MINOR_MASK 0x0F
#define GET_IOAT_VER_MAJOR(x) ((x) & IOAT_VER_MAJOR_MASK)
#define GET_IOAT_VER_MINOR(x) ((x) & IOAT_VER_MINOR_MASK)
#define IOAT_PERPORTOFFSET_OFFSET 0x0A /* 16-bit */
#define IOAT_INTRDELAY_OFFSET 0x0C /* 16-bit */
#define IOAT_INTRDELAY_INT_DELAY_MASK 0x3FFF /* Interrupt Delay Time */
#define IOAT_INTRDELAY_COALESE_SUPPORT 0x8000 /* Interrupt Coalescing Supported */
#define IOAT_DEVICE_STATUS_OFFSET 0x0E /* 16-bit */
#define IOAT_DEVICE_STATUS_DEGRADED_MODE 0x0001
#define IOAT_CHANNEL_MMIO_SIZE 0x80 /* Each Channel MMIO space is this size */
/* DMA Channel Registers */
#define IOAT_CHANCTRL_OFFSET 0x00 /* 16-bit Channel Control Register */
#define IOAT_CHANCTRL_CHANNEL_PRIORITY_MASK 0xF000
#define IOAT_CHANCTRL_CHANNEL_IN_USE 0x0100
#define IOAT_CHANCTRL_DESCRIPTOR_ADDR_SNOOP_CONTROL 0x0020
#define IOAT_CHANCTRL_ERR_INT_EN 0x0010
#define IOAT_CHANCTRL_ANY_ERR_ABORT_EN 0x0008
#define IOAT_CHANCTRL_ERR_COMPLETION_EN 0x0004
#define IOAT_CHANCTRL_INT_DISABLE 0x0001
#define IOAT_DMA_COMP_OFFSET 0x02 /* 16-bit DMA channel compatibility */
#define IOAT_DMA_COMP_V1 0x0001 /* Compatibility with DMA version 1 */
#define IOAT_CHANSTS_OFFSET 0x04 /* 64-bit Channel Status Register */
#define IOAT_CHANSTS_OFFSET_LOW 0x04
#define IOAT_CHANSTS_OFFSET_HIGH 0x08
#define IOAT_CHANSTS_COMPLETED_DESCRIPTOR_ADDR 0xFFFFFFFFFFFFFFC0
#define IOAT_CHANSTS_SOFT_ERR 0x0000000000000010
#define IOAT_CHANSTS_DMA_TRANSFER_STATUS 0x0000000000000007
#define IOAT_CHANSTS_DMA_TRANSFER_STATUS_ACTIVE 0x0
#define IOAT_CHANSTS_DMA_TRANSFER_STATUS_DONE 0x1
#define IOAT_CHANSTS_DMA_TRANSFER_STATUS_SUSPENDED 0x2
#define IOAT_CHANSTS_DMA_TRANSFER_STATUS_HALTED 0x3
#define IOAT_CHAINADDR_OFFSET 0x0C /* 64-bit Descriptor Chain Address Register */
#define IOAT_CHAINADDR_OFFSET_LOW 0x0C
#define IOAT_CHAINADDR_OFFSET_HIGH 0x10
#define IOAT_CHANCMD_OFFSET 0x14 /* 8-bit DMA Channel Command Register */
#define IOAT_CHANCMD_RESET 0x20
#define IOAT_CHANCMD_RESUME 0x10
#define IOAT_CHANCMD_ABORT 0x08
#define IOAT_CHANCMD_SUSPEND 0x04
#define IOAT_CHANCMD_APPEND 0x02
#define IOAT_CHANCMD_START 0x01
#define IOAT_CHANCMP_OFFSET 0x18 /* 64-bit Channel Completion Address Register */
#define IOAT_CHANCMP_OFFSET_LOW 0x18
#define IOAT_CHANCMP_OFFSET_HIGH 0x1C
#define IOAT_CDAR_OFFSET 0x20 /* 64-bit Current Descriptor Address Register */
#define IOAT_CDAR_OFFSET_LOW 0x20
#define IOAT_CDAR_OFFSET_HIGH 0x24
#define IOAT_CHANERR_OFFSET 0x28 /* 32-bit Channel Error Register */
#define IOAT_CHANERR_DMA_TRANSFER_SRC_ADDR_ERR 0x0001
#define IOAT_CHANERR_DMA_TRANSFER_DEST_ADDR_ERR 0x0002
#define IOAT_CHANERR_NEXT_DESCRIPTOR_ADDR_ERR 0x0004
#define IOAT_CHANERR_NEXT_DESCRIPTOR_ALIGNMENT_ERR 0x0008
#define IOAT_CHANERR_CHAIN_ADDR_VALUE_ERR 0x0010
#define IOAT_CHANERR_CHANCMD_ERR 0x0020
#define IOAT_CHANERR_CHIPSET_UNCORRECTABLE_DATA_INTEGRITY_ERR 0x0040
#define IOAT_CHANERR_DMA_UNCORRECTABLE_DATA_INTEGRITY_ERR 0x0080
#define IOAT_CHANERR_READ_DATA_ERR 0x0100
#define IOAT_CHANERR_WRITE_DATA_ERR 0x0200
#define IOAT_CHANERR_DESCRIPTOR_CONTROL_ERR 0x0400
#define IOAT_CHANERR_DESCRIPTOR_LENGTH_ERR 0x0800
#define IOAT_CHANERR_COMPLETION_ADDR_ERR 0x1000
#define IOAT_CHANERR_INT_CONFIGURATION_ERR 0x2000
#define IOAT_CHANERR_SOFT_ERR 0x4000
#define IOAT_CHANERR_MASK_OFFSET 0x2C /* 32-bit Channel Error Register */
#endif /* _IOAT_REGISTERS_H_ */
/*
* Copyright(c) 2004 - 2006 Intel Corporation. All rights reserved.
* Portions based on net/core/datagram.c and copyrighted by their authors.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the Free
* Software Foundation; either version 2 of the License, or (at your option)
* any later version.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program; if not, write to the Free Software Foundation, Inc., 59
* Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*
* The full GNU General Public License is included in this distribution in the
* file called COPYING.
*/
/*
* This code allows the net stack to make use of a DMA engine for
* skb to iovec copies.
*/
#include <linux/dmaengine.h>
#include <linux/pagemap.h>
#include <net/tcp.h> /* for memcpy_toiovec */
#include <asm/io.h>
#include <asm/uaccess.h>
int num_pages_spanned(struct iovec *iov)
{
return
((PAGE_ALIGN((unsigned long)iov->iov_base + iov->iov_len) -
((unsigned long)iov->iov_base & PAGE_MASK)) >> PAGE_SHIFT);
}
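/*
 * Worked example: with 4 KiB pages, iov_base = 0x1ffc and iov_len = 8
 * give PAGE_ALIGN(0x2004) = 0x3000 and (base & PAGE_MASK) = 0x1000, so
 * (0x3000 - 0x1000) >> 12 = 2 pages are spanned even though len is tiny.
 */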
/*
* Pin down all the iovec pages needed for len bytes.
* Return a struct dma_pinned_list to keep track of pages pinned down.
*
 * We are allocating a single chunk of memory and then carving it up into
 * three sections; the sizes of the latter two depend on the number of iovecs
 * and the total number of pages, respectively.
*/
struct dma_pinned_list *dma_pin_iovec_pages(struct iovec *iov, size_t len)
{
struct dma_pinned_list *local_list;
struct page **pages;
int i;
int ret;
int nr_iovecs = 0;
int iovec_len_used = 0;
int iovec_pages_used = 0;
long err;
/* don't pin down non-user-based iovecs */
if (segment_eq(get_fs(), KERNEL_DS))
return NULL;
/* determine how many iovecs/pages there are, up front */
do {
iovec_len_used += iov[nr_iovecs].iov_len;
iovec_pages_used += num_pages_spanned(&iov[nr_iovecs]);
nr_iovecs++;
} while (iovec_len_used < len);
/* single kmalloc for pinned list, page_list[], and the page arrays */
local_list = kmalloc(sizeof(*local_list)
+ (nr_iovecs * sizeof (struct dma_page_list))
+ (iovec_pages_used * sizeof (struct page*)), GFP_KERNEL);
if (!local_list) {
err = -ENOMEM;
goto out;
}
/* list of pages starts right after the page list array */
pages = (struct page **) &local_list->page_list[nr_iovecs];
for (i = 0; i < nr_iovecs; i++) {
struct dma_page_list *page_list = &local_list->page_list[i];
len -= iov[i].iov_len;
if (!access_ok(VERIFY_WRITE, iov[i].iov_base, iov[i].iov_len)) {
err = -EFAULT;
goto unpin;
}
page_list->nr_pages = num_pages_spanned(&iov[i]);
page_list->base_address = iov[i].iov_base;
page_list->pages = pages;
pages += page_list->nr_pages;
/* pin pages down */
down_read(&current->mm->mmap_sem);
ret = get_user_pages(
current,
current->mm,
(unsigned long) iov[i].iov_base,
page_list->nr_pages,
1, /* write */
0, /* force */
page_list->pages,
NULL);
up_read(&current->mm->mmap_sem);
if (ret != page_list->nr_pages) {
err = -ENOMEM;
goto unpin;
}
local_list->nr_iovecs = i + 1;
}
return local_list;
unpin:
dma_unpin_iovec_pages(local_list);
out:
return ERR_PTR(err);
}
void dma_unpin_iovec_pages(struct dma_pinned_list *pinned_list)
{
int i, j;
if (!pinned_list)
return;
for (i = 0; i < pinned_list->nr_iovecs; i++) {
struct dma_page_list *page_list = &pinned_list->page_list[i];
for (j = 0; j < page_list->nr_pages; j++) {
set_page_dirty_lock(page_list->pages[j]);
page_cache_release(page_list->pages[j]);
}
}
kfree(pinned_list);
}
static dma_cookie_t dma_memcpy_to_kernel_iovec(struct dma_chan *chan, struct
iovec *iov, unsigned char *kdata, size_t len)
{
dma_cookie_t dma_cookie = 0;
while (len > 0) {
if (iov->iov_len) {
int copy = min_t(unsigned int, iov->iov_len, len);
dma_cookie = dma_async_memcpy_buf_to_buf(
chan,
iov->iov_base,
kdata,
copy);
kdata += copy;
len -= copy;
iov->iov_len -= copy;
iov->iov_base += copy;
}
iov++;
}
return dma_cookie;
}
/*
 * We have already pinned down the pages we will be using in the iovecs.
 * Each entry in the iov array has a corresponding entry in
 * pinned_list->page_list; array indexing keeps iov[] and page_list[] in sync.
 * Leading iov entries will have iov_len == 0 if a previous call already
 * copied into them, and the remaining iov length is guaranteed to exceed len.
 */
dma_cookie_t dma_memcpy_to_iovec(struct dma_chan *chan, struct iovec *iov,
struct dma_pinned_list *pinned_list, unsigned char *kdata, size_t len)
{
int iov_byte_offset;
int copy;
dma_cookie_t dma_cookie = 0;
int iovec_idx;
int page_idx;
if (!chan)
return memcpy_toiovec(iov, kdata, len);
/* -> kernel copies (e.g. smbfs) */
if (!pinned_list)
return dma_memcpy_to_kernel_iovec(chan, iov, kdata, len);
iovec_idx = 0;
while (iovec_idx < pinned_list->nr_iovecs) {
struct dma_page_list *page_list;
/* skip already used-up iovecs */
while (!iov[iovec_idx].iov_len)
iovec_idx++;
page_list = &pinned_list->page_list[iovec_idx];
iov_byte_offset = ((unsigned long)iov[iovec_idx].iov_base & ~PAGE_MASK);
page_idx = (((unsigned long)iov[iovec_idx].iov_base & PAGE_MASK)
- ((unsigned long)page_list->base_address & PAGE_MASK)) >> PAGE_SHIFT;
/* break up copies to not cross page boundary */
while (iov[iovec_idx].iov_len) {
copy = min_t(int, PAGE_SIZE - iov_byte_offset, len);
copy = min_t(int, copy, iov[iovec_idx].iov_len);
dma_cookie = dma_async_memcpy_buf_to_pg(chan,
page_list->pages[page_idx],
iov_byte_offset,
kdata,
copy);
len -= copy;
iov[iovec_idx].iov_len -= copy;
iov[iovec_idx].iov_base += copy;
if (!len)
return dma_cookie;
kdata += copy;
iov_byte_offset = 0;
page_idx++;
}
iovec_idx++;
}
/* really bad if we ever run out of iovecs */
BUG();
return -EFAULT;
}
dma_cookie_t dma_memcpy_pg_to_iovec(struct dma_chan *chan, struct iovec *iov,
struct dma_pinned_list *pinned_list, struct page *page,
unsigned int offset, size_t len)
{
int iov_byte_offset;
int copy;
dma_cookie_t dma_cookie = 0;
int iovec_idx;
int page_idx;
int err;
/* this needs as-yet-unimplemented buf-to-buf, so punt. */
/* TODO: use dma for this */
if (!chan || !pinned_list) {
u8 *vaddr = kmap(page);
err = memcpy_toiovec(iov, vaddr + offset, len);
kunmap(page);
return err;
}
iovec_idx = 0;
while (iovec_idx < pinned_list->nr_iovecs) {
struct dma_page_list *page_list;
/* skip already used-up iovecs */
while (!iov[iovec_idx].iov_len)
iovec_idx++;
page_list = &pinned_list->page_list[iovec_idx];
iov_byte_offset = ((unsigned long)iov[iovec_idx].iov_base & ~PAGE_MASK);
page_idx = (((unsigned long)iov[iovec_idx].iov_base & PAGE_MASK)
- ((unsigned long)page_list->base_address & PAGE_MASK)) >> PAGE_SHIFT;
/* break up copies to not cross page boundary */
while (iov[iovec_idx].iov_len) {
copy = min_t(int, PAGE_SIZE - iov_byte_offset, len);
copy = min_t(int, copy, iov[iovec_idx].iov_len);
dma_cookie = dma_async_memcpy_pg_to_pg(chan,
page_list->pages[page_idx],
iov_byte_offset,
page,
offset,
copy);
len -= copy;
iov[iovec_idx].iov_len -= copy;
iov[iovec_idx].iov_base += copy;
if (!len)
return dma_cookie;
offset += copy;
iov_byte_offset = 0;
page_idx++;
}
iovec_idx++;
}
/* really bad if we ever run out of iovecs */
BUG();
return -EFAULT;
}
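/*
 * Editor's sketch of the usage shape of this file's API; the real
 * caller is the tcp receive path elsewhere in this commit, and real
 * callers only unpin after the copies have completed. IS_ERR()/PTR_ERR()
 * come from <linux/err.h>.
 */
static dma_cookie_t sketch_recv_copy(struct dma_chan *chan, struct iovec *iov,
	unsigned char *kdata, size_t len)
{
	struct dma_pinned_list *pinned;
	dma_cookie_t cookie;

	pinned = dma_pin_iovec_pages(iov, len);	/* NULL for kernel iovecs */
	if (IS_ERR(pinned))
		return PTR_ERR(pinned);
	cookie = dma_memcpy_to_iovec(chan, iov, pinned, kdata, len);
	dma_unpin_iovec_pages(pinned);
	return cookie;
}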
......@@ -821,7 +821,8 @@ void ipoib_mcast_restart_task(void *dev_ptr)
ipoib_mcast_stop_thread(dev, 0);
spin_lock_irqsave(&dev->xmit_lock, flags);
local_irq_save(flags);
netif_tx_lock(dev);
spin_lock(&priv->lock);
/*
......@@ -896,7 +897,8 @@ void ipoib_mcast_restart_task(void *dev_ptr)
}
spin_unlock(&priv->lock);
spin_unlock_irqrestore(&dev->xmit_lock, flags);
netif_tx_unlock(dev);
local_irq_restore(flags);
/* We have to cancel outside of the spinlock */
list_for_each_entry_safe(mcast, tmcast, &remove_list, list) {
......
......@@ -1052,7 +1052,7 @@ static void wq_set_multicast_list (void *data)
dvb_net_feed_stop(dev);
priv->rx_mode = RX_MODE_UNI;
spin_lock_bh(&dev->xmit_lock);
netif_tx_lock_bh(dev);
if (dev->flags & IFF_PROMISC) {
dprintk("%s: promiscuous mode\n", dev->name);
......@@ -1077,7 +1077,7 @@ static void wq_set_multicast_list (void *data)
}
}
spin_unlock_bh(&dev->xmit_lock);
netif_tx_unlock_bh(dev);
dvb_net_feed_start(dev);
}
......
......@@ -2180,6 +2180,8 @@ config TIGON3
config BNX2
tristate "Broadcom NetXtremeII support"
depends on PCI
select CRC32
select ZLIB_INFLATE
help
This driver supports Broadcom NetXtremeII gigabit Ethernet cards.
......
......@@ -32,6 +32,7 @@
#include <asm/irq.h>
#include <linux/delay.h>
#include <asm/byteorder.h>
#include <asm/page.h>
#include <linux/time.h>
#include <linux/ethtool.h>
#include <linux/mii.h>
......@@ -49,14 +50,15 @@
#include <linux/crc32.h>
#include <linux/prefetch.h>
#include <linux/cache.h>
#include <linux/zlib.h>
#include "bnx2.h"
#include "bnx2_fw.h"
#define DRV_MODULE_NAME "bnx2"
#define PFX DRV_MODULE_NAME ": "
#define DRV_MODULE_VERSION "1.4.40"
#define DRV_MODULE_RELDATE "May 22, 2006"
#define DRV_MODULE_VERSION "1.4.42"
#define DRV_MODULE_RELDATE "June 12, 2006"
#define RUN_AT(x) (jiffies + (x))
......@@ -1820,7 +1822,7 @@ bnx2_rx_int(struct bnx2 *bp, int budget)
skb->protocol = eth_type_trans(skb, bp->dev);
if ((len > (bp->dev->mtu + ETH_HLEN)) &&
(htons(skb->protocol) != 0x8100)) {
(ntohs(skb->protocol) != 0x8100)) {
dev_kfree_skb_irq(skb);
goto next_rx;
......@@ -2009,7 +2011,7 @@ bnx2_poll(struct net_device *dev, int *budget)
return 1;
}
/* Called with rtnl_lock from vlan functions and also dev->xmit_lock
/* Called with rtnl_lock from vlan functions and also netif_tx_lock
* from set_multicast.
*/
static void
......@@ -2083,6 +2085,92 @@ bnx2_set_rx_mode(struct net_device *dev)
spin_unlock_bh(&bp->phy_lock);
}
#define FW_BUF_SIZE 0x8000
static int
bnx2_gunzip_init(struct bnx2 *bp)
{
if ((bp->gunzip_buf = vmalloc(FW_BUF_SIZE)) == NULL)
goto gunzip_nomem1;
if ((bp->strm = kmalloc(sizeof(*bp->strm), GFP_KERNEL)) == NULL)
goto gunzip_nomem2;
bp->strm->workspace = kmalloc(zlib_inflate_workspacesize(), GFP_KERNEL);
if (bp->strm->workspace == NULL)
goto gunzip_nomem3;
return 0;
gunzip_nomem3:
kfree(bp->strm);
bp->strm = NULL;
gunzip_nomem2:
vfree(bp->gunzip_buf);
bp->gunzip_buf = NULL;
gunzip_nomem1:
printk(KERN_ERR PFX "%s: Cannot allocate firmware buffer for "
"uncompression.\n", bp->dev->name);
return -ENOMEM;
}
static void
bnx2_gunzip_end(struct bnx2 *bp)
{
kfree(bp->strm->workspace);
kfree(bp->strm);
bp->strm = NULL;
if (bp->gunzip_buf) {
vfree(bp->gunzip_buf);
bp->gunzip_buf = NULL;
}
}
static int
bnx2_gunzip(struct bnx2 *bp, u8 *zbuf, int len, void **outbuf, int *outlen)
{
int n, rc;
/* check gzip header */
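	/* Per RFC 1952 the fixed header is 10 bytes: magic 0x1f 0x8b,
	 * method (8 = deflate), flags, 4-byte mtime, XFL and OS; when the
	 * FNAME flag is set, a NUL-terminated file name follows, which the
	 * loop below skips before handing the raw deflate stream to zlib.
	 */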
if ((zbuf[0] != 0x1f) || (zbuf[1] != 0x8b) || (zbuf[2] != Z_DEFLATED))
return -EINVAL;
n = 10;
#define FNAME 0x8
if (zbuf[3] & FNAME)
while ((zbuf[n++] != 0) && (n < len));
bp->strm->next_in = zbuf + n;
bp->strm->avail_in = len - n;
bp->strm->next_out = bp->gunzip_buf;
bp->strm->avail_out = FW_BUF_SIZE;
rc = zlib_inflateInit2(bp->strm, -MAX_WBITS);
if (rc != Z_OK)
return rc;
rc = zlib_inflate(bp->strm, Z_FINISH);
*outlen = FW_BUF_SIZE - bp->strm->avail_out;
*outbuf = bp->gunzip_buf;
if ((rc != Z_OK) && (rc != Z_STREAM_END))
printk(KERN_ERR PFX "%s: Firmware decompression error: %s\n",
bp->dev->name, bp->strm->msg);
zlib_inflateEnd(bp->strm);
if (rc == Z_STREAM_END)
return 0;
return rc;
}
static void
load_rv2p_fw(struct bnx2 *bp, u32 *rv2p_code, u32 rv2p_code_len,
u32 rv2p_proc)
......@@ -2092,9 +2180,9 @@ load_rv2p_fw(struct bnx2 *bp, u32 *rv2p_code, u32 rv2p_code_len,
for (i = 0; i < rv2p_code_len; i += 8) {
REG_WR(bp, BNX2_RV2P_INSTR_HIGH, *rv2p_code);
REG_WR(bp, BNX2_RV2P_INSTR_HIGH, cpu_to_le32(*rv2p_code));
rv2p_code++;
REG_WR(bp, BNX2_RV2P_INSTR_LOW, *rv2p_code);
REG_WR(bp, BNX2_RV2P_INSTR_LOW, cpu_to_le32(*rv2p_code));
rv2p_code++;
if (rv2p_proc == RV2P_PROC1) {
......@@ -2134,7 +2222,7 @@ load_cpu_fw(struct bnx2 *bp, struct cpu_reg *cpu_reg, struct fw_info *fw)
int j;
for (j = 0; j < (fw->text_len / 4); j++, offset += 4) {
REG_WR_IND(bp, offset, fw->text[j]);
REG_WR_IND(bp, offset, cpu_to_le32(fw->text[j]));
}
}
......@@ -2190,15 +2278,32 @@ load_cpu_fw(struct bnx2 *bp, struct cpu_reg *cpu_reg, struct fw_info *fw)
REG_WR_IND(bp, cpu_reg->mode, val);
}
static void
static int
bnx2_init_cpus(struct bnx2 *bp)
{
struct cpu_reg cpu_reg;
struct fw_info fw;
int rc = 0;
void *text;
u32 text_len;
if ((rc = bnx2_gunzip_init(bp)) != 0)
return rc;
/* Initialize the RV2P processor. */
load_rv2p_fw(bp, bnx2_rv2p_proc1, sizeof(bnx2_rv2p_proc1), RV2P_PROC1);
load_rv2p_fw(bp, bnx2_rv2p_proc2, sizeof(bnx2_rv2p_proc2), RV2P_PROC2);
rc = bnx2_gunzip(bp, bnx2_rv2p_proc1, sizeof(bnx2_rv2p_proc1), &text,
&text_len);
if (rc)
goto init_cpu_err;
load_rv2p_fw(bp, text, text_len, RV2P_PROC1);
rc = bnx2_gunzip(bp, bnx2_rv2p_proc2, sizeof(bnx2_rv2p_proc2), &text,
&text_len);
if (rc)
goto init_cpu_err;
load_rv2p_fw(bp, text, text_len, RV2P_PROC2);
/* Initialize the RX Processor. */
cpu_reg.mode = BNX2_RXP_CPU_MODE;
......@@ -2222,7 +2327,13 @@ bnx2_init_cpus(struct bnx2 *bp)
fw.text_addr = bnx2_RXP_b06FwTextAddr;
fw.text_len = bnx2_RXP_b06FwTextLen;
fw.text_index = 0;
fw.text = bnx2_RXP_b06FwText;
rc = bnx2_gunzip(bp, bnx2_RXP_b06FwText, sizeof(bnx2_RXP_b06FwText),
&text, &text_len);
if (rc)
goto init_cpu_err;
fw.text = text;
fw.data_addr = bnx2_RXP_b06FwDataAddr;
fw.data_len = bnx2_RXP_b06FwDataLen;
......@@ -2268,7 +2379,13 @@ bnx2_init_cpus(struct bnx2 *bp)
fw.text_addr = bnx2_TXP_b06FwTextAddr;
fw.text_len = bnx2_TXP_b06FwTextLen;
fw.text_index = 0;
fw.text = bnx2_TXP_b06FwText;
rc = bnx2_gunzip(bp, bnx2_TXP_b06FwText, sizeof(bnx2_TXP_b06FwText),
&text, &text_len);
if (rc)
goto init_cpu_err;
fw.text = text;
fw.data_addr = bnx2_TXP_b06FwDataAddr;
fw.data_len = bnx2_TXP_b06FwDataLen;
......@@ -2314,7 +2431,13 @@ bnx2_init_cpus(struct bnx2 *bp)
fw.text_addr = bnx2_TPAT_b06FwTextAddr;
fw.text_len = bnx2_TPAT_b06FwTextLen;
fw.text_index = 0;
fw.text = bnx2_TPAT_b06FwText;
rc = bnx2_gunzip(bp, bnx2_TPAT_b06FwText, sizeof(bnx2_TPAT_b06FwText),
&text, &text_len);
if (rc)
goto init_cpu_err;
fw.text = text;
fw.data_addr = bnx2_TPAT_b06FwDataAddr;
fw.data_len = bnx2_TPAT_b06FwDataLen;
......@@ -2360,7 +2483,13 @@ bnx2_init_cpus(struct bnx2 *bp)
fw.text_addr = bnx2_COM_b06FwTextAddr;
fw.text_len = bnx2_COM_b06FwTextLen;
fw.text_index = 0;
fw.text = bnx2_COM_b06FwText;
rc = bnx2_gunzip(bp, bnx2_COM_b06FwText, sizeof(bnx2_COM_b06FwText),
&text, &text_len);
if (rc)
goto init_cpu_err;
fw.text = text;
fw.data_addr = bnx2_COM_b06FwDataAddr;
fw.data_len = bnx2_COM_b06FwDataLen;
......@@ -2384,6 +2513,9 @@ bnx2_init_cpus(struct bnx2 *bp)
load_cpu_fw(bp, &cpu_reg, &fw);
init_cpu_err:
bnx2_gunzip_end(bp);
return rc;
}
static int
......@@ -3256,7 +3388,9 @@ bnx2_init_chip(struct bnx2 *bp)
* context block must have already been enabled. */
bnx2_init_context(bp);
bnx2_init_cpus(bp);
if ((rc = bnx2_init_cpus(bp)) != 0)
return rc;
bnx2_init_nvram(bp);
bnx2_set_mac_addr(bp);
......@@ -3556,7 +3690,9 @@ bnx2_reset_nic(struct bnx2 *bp, u32 reset_code)
if (rc)
return rc;
bnx2_init_chip(bp);
if ((rc = bnx2_init_chip(bp)) != 0)
return rc;
bnx2_init_tx_ring(bp);
bnx2_init_rx_ring(bp);
return 0;
......@@ -4034,6 +4170,8 @@ bnx2_timer(unsigned long data)
msg = (u32) ++bp->fw_drv_pulse_wr_seq;
REG_WR_IND(bp, bp->shmem_base + BNX2_DRV_PULSE_MB, msg);
bp->stats_blk->stat_FwRxDrop = REG_RD_IND(bp, BNX2_FW_RX_DROP_COUNT);
if ((bp->phy_flags & PHY_SERDES_FLAG) &&
(CHIP_NUM(bp) == CHIP_NUM_5706)) {
......@@ -4252,7 +4390,7 @@ bnx2_vlan_rx_kill_vid(struct net_device *dev, uint16_t vid)
}
#endif
/* Called with dev->xmit_lock.
/* Called with netif_tx_lock.
* hard_start_xmit is pseudo-lockless - a lock is only required when
* the tx queue is full. This way, we get the benefit of lockless
* operations most of the time without the complexities to handle
......@@ -4310,7 +4448,7 @@ bnx2_start_xmit(struct sk_buff *skb, struct net_device *dev)
ip_tcp_len = (skb->nh.iph->ihl << 2) + sizeof(struct tcphdr);
skb->nh.iph->check = 0;
skb->nh.iph->tot_len = ntohs(mss + ip_tcp_len + tcp_opt_len);
skb->nh.iph->tot_len = htons(mss + ip_tcp_len + tcp_opt_len);
skb->h.th->check =
~csum_tcpudp_magic(skb->nh.iph->saddr,
skb->nh.iph->daddr,
......@@ -4504,6 +4642,10 @@ bnx2_get_stats(struct net_device *dev)
net_stats->tx_aborted_errors +
net_stats->tx_carrier_errors;
net_stats->rx_missed_errors =
(unsigned long) (stats_blk->stat_IfInMBUFDiscards +
stats_blk->stat_FwRxDrop);
return net_stats;
}
......@@ -4986,7 +5128,7 @@ bnx2_set_rx_csum(struct net_device *dev, u32 data)
return 0;
}
#define BNX2_NUM_STATS 45
#define BNX2_NUM_STATS 46
static struct {
char string[ETH_GSTRING_LEN];
......@@ -5036,6 +5178,7 @@ static struct {
{ "rx_mac_ctrl_frames" },
{ "rx_filtered_packets" },
{ "rx_discards" },
{ "rx_fw_discards" },
};
#define STATS_OFFSET32(offset_name) (offsetof(struct statistics_block, offset_name) / 4)
......@@ -5086,6 +5229,7 @@ static const unsigned long bnx2_stats_offset_arr[BNX2_NUM_STATS] = {
STATS_OFFSET32(stat_MacControlFramesReceived),
STATS_OFFSET32(stat_IfInFramesL2FilterDiscards),
STATS_OFFSET32(stat_IfInMBUFDiscards),
STATS_OFFSET32(stat_FwRxDrop),
};
/* stat_IfHCInBadOctets and stat_Dot3StatsCarrierSenseErrors are
......@@ -5096,7 +5240,7 @@ static u8 bnx2_5706_stats_len_arr[BNX2_NUM_STATS] = {
4,0,4,4,4,4,4,4,4,4,
4,4,4,4,4,4,4,4,4,4,
4,4,4,4,4,4,4,4,4,4,
4,4,4,4,4,
4,4,4,4,4,4,
};
static u8 bnx2_5708_stats_len_arr[BNX2_NUM_STATS] = {
......@@ -5104,7 +5248,7 @@ static u8 bnx2_5708_stats_len_arr[BNX2_NUM_STATS] = {
4,4,4,4,4,4,4,4,4,4,
4,4,4,4,4,4,4,4,4,4,
4,4,4,4,4,4,4,4,4,4,
4,4,4,4,4,
4,4,4,4,4,4,
};
#define BNX2_NUM_TESTS 6
......@@ -5634,7 +5778,9 @@ bnx2_init_board(struct pci_dev *pdev, struct net_device *dev)
}
}
if (CHIP_NUM(bp) == CHIP_NUM_5708)
if ((CHIP_ID(bp) == CHIP_ID_5708_A0) ||
(CHIP_ID(bp) == CHIP_ID_5708_B0) ||
(CHIP_ID(bp) == CHIP_ID_5708_B1))
bp->flags |= NO_WOL_FLAG;
if (CHIP_ID(bp) == CHIP_ID_5706_A0) {
......
......@@ -231,6 +231,7 @@ struct statistics_block {
u32 stat_GenStat13;
u32 stat_GenStat14;
u32 stat_GenStat15;
u32 stat_FwRxDrop;
};
......@@ -3481,6 +3482,8 @@ struct l2_fhdr {
#define BNX2_COM_SCRATCH 0x00120000
#define BNX2_FW_RX_DROP_COUNT 0x00120084
/*
* cp_reg definition
......@@ -3747,7 +3750,12 @@ struct l2_fhdr {
#define DMA_READ_CHANS 5
#define DMA_WRITE_CHANS 3
#define BCM_PAGE_BITS 12
/* Use CPU native page size up to 16K for the ring sizes. */
#if (PAGE_SHIFT > 14)
#define BCM_PAGE_BITS 14
#else
#define BCM_PAGE_BITS PAGE_SHIFT
#endif
#define BCM_PAGE_SIZE (1 << BCM_PAGE_BITS)
#define TX_DESC_CNT (BCM_PAGE_SIZE / sizeof(struct tx_bd))
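/* Example: on a 64 KiB-page machine (PAGE_SHIFT = 16) BCM_PAGE_BITS is
 * capped at 14, so each ring chunk stays 16 KiB; with the 16-byte
 * descriptors implied by the RX_RING() shift below, that is 1024
 * descriptors per ring page.
 */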
......@@ -3770,7 +3778,7 @@ struct l2_fhdr {
#define RX_RING_IDX(x) ((x) & bp->rx_max_ring_idx)
#define RX_RING(x) (((x) & ~MAX_RX_DESC_CNT) >> 8)
#define RX_RING(x) (((x) & ~MAX_RX_DESC_CNT) >> (BCM_PAGE_BITS - 4))
#define RX_IDX(x) ((x) & MAX_RX_DESC_CNT)
/* Context size. */
......@@ -4048,6 +4056,9 @@ struct bnx2 {
u32 flash_size;
int status_stats_size;
struct z_stream_s *strm;
void *gunzip_buf;
};
static u32 bnx2_reg_rd_ind(struct bnx2 *bp, u32 offset);
......
(This diff is collapsed.)
......@@ -1199,8 +1199,7 @@ int bond_sethwaddr(struct net_device *bond_dev, struct net_device *slave_dev)
}
#define BOND_INTERSECT_FEATURES \
(NETIF_F_SG|NETIF_F_IP_CSUM|NETIF_F_NO_CSUM|NETIF_F_HW_CSUM|\
NETIF_F_TSO|NETIF_F_UFO)
(NETIF_F_SG | NETIF_F_ALL_CSUM | NETIF_F_TSO | NETIF_F_UFO)
/*
* Compute the common dev->feature set available to all slaves. Some
......@@ -1218,9 +1217,7 @@ static int bond_compute_features(struct bonding *bond)
features &= (slave->dev->features & BOND_INTERSECT_FEATURES);
if ((features & NETIF_F_SG) &&
!(features & (NETIF_F_IP_CSUM |
NETIF_F_NO_CSUM |
NETIF_F_HW_CSUM)))
!(features & NETIF_F_ALL_CSUM))
features &= ~NETIF_F_SG;
/*
......@@ -4191,7 +4188,7 @@ static int bond_init(struct net_device *bond_dev, struct bond_params *params)
*/
bond_dev->features |= NETIF_F_VLAN_CHALLENGED;
/* don't acquire bond device's xmit_lock when
/* don't acquire bond device's netif_tx_lock when
* transmitting */
bond_dev->features |= NETIF_F_LLTX;
......
......@@ -669,9 +669,9 @@ static const struct register_test nv_registers_test[] = {
* critical parts:
* - rx is (pseudo-) lockless: it relies on the single-threading provided
* by the arch code for interrupts.
* - tx setup is lockless: it relies on dev->xmit_lock. Actual submission
* - tx setup is lockless: it relies on netif_tx_lock. Actual submission
* needs dev->priv->lock :-(
* - set_multicast_list: preparation lockless, relies on dev->xmit_lock.
* - set_multicast_list: preparation lockless, relies on netif_tx_lock.
*/
/* in dev: base, irq */
......@@ -1405,7 +1405,7 @@ static void drain_ring(struct net_device *dev)
/*
* nv_start_xmit: dev->hard_start_xmit function
* Called with dev->xmit_lock held.
* Called with netif_tx_lock held.
*/
static int nv_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
......@@ -1599,7 +1599,7 @@ static void nv_tx_done(struct net_device *dev)
/*
* nv_tx_timeout: dev->tx_timeout function
* Called with dev->xmit_lock held.
* Called with netif_tx_lock held.
*/
static void nv_tx_timeout(struct net_device *dev)
{
......@@ -1930,7 +1930,7 @@ static int nv_change_mtu(struct net_device *dev, int new_mtu)
* Changing the MTU is a rare event, it shouldn't matter.
*/
nv_disable_irq(dev);
spin_lock_bh(&dev->xmit_lock);
netif_tx_lock_bh(dev);
spin_lock(&np->lock);
/* stop engines */
nv_stop_rx(dev);
......@@ -1958,7 +1958,7 @@ static int nv_change_mtu(struct net_device *dev, int new_mtu)
nv_start_rx(dev);
nv_start_tx(dev);
spin_unlock(&np->lock);
spin_unlock_bh(&dev->xmit_lock);
netif_tx_unlock_bh(dev);
nv_enable_irq(dev);
}
return 0;
......@@ -1993,7 +1993,7 @@ static int nv_set_mac_address(struct net_device *dev, void *addr)
memcpy(dev->dev_addr, macaddr->sa_data, ETH_ALEN);
if (netif_running(dev)) {
spin_lock_bh(&dev->xmit_lock);
netif_tx_lock_bh(dev);
spin_lock_irq(&np->lock);
/* stop rx engine */
......@@ -2005,7 +2005,7 @@ static int nv_set_mac_address(struct net_device *dev, void *addr)
/* restart rx engine */
nv_start_rx(dev);
spin_unlock_irq(&np->lock);
spin_unlock_bh(&dev->xmit_lock);
netif_tx_unlock_bh(dev);
} else {
nv_copy_mac_to_hw(dev);
}
......@@ -2014,7 +2014,7 @@ static int nv_set_mac_address(struct net_device *dev, void *addr)
/*
* nv_set_multicast: dev->set_multicast function
* Called with dev->xmit_lock held.
* Called with netif_tx_lock held.
*/
static void nv_set_multicast(struct net_device *dev)
{
......
......@@ -308,9 +308,9 @@ static int sp_set_mac_address(struct net_device *dev, void *addr)
{
struct sockaddr_ax25 *sa = addr;
spin_lock_irq(&dev->xmit_lock);
netif_tx_lock_bh(dev);
memcpy(dev->dev_addr, &sa->sax25_call, AX25_ADDR_LEN);
spin_unlock_irq(&dev->xmit_lock);
netif_tx_unlock_bh(dev);
return 0;
}
......@@ -767,9 +767,9 @@ static int sixpack_ioctl(struct tty_struct *tty, struct file *file,
break;
}
spin_lock_irq(&dev->xmit_lock);
netif_tx_lock_bh(dev);
memcpy(dev->dev_addr, &addr, AX25_ADDR_LEN);
spin_unlock_irq(&dev->xmit_lock);
netif_tx_unlock_bh(dev);
err = 0;
break;
......
......@@ -357,9 +357,9 @@ static int ax_set_mac_address(struct net_device *dev, void *addr)
{
struct sockaddr_ax25 *sa = addr;
spin_lock_irq(&dev->xmit_lock);
netif_tx_lock_bh(dev);
memcpy(dev->dev_addr, &sa->sax25_call, AX25_ADDR_LEN);
spin_unlock_irq(&dev->xmit_lock);
netif_tx_unlock_bh(dev);
return 0;
}
......@@ -886,9 +886,9 @@ static int mkiss_ioctl(struct tty_struct *tty, struct file *file,
break;
}
spin_lock_irq(&dev->xmit_lock);
netif_tx_lock_bh(dev);
memcpy(dev->dev_addr, addr, AX25_ADDR_LEN);
spin_unlock_irq(&dev->xmit_lock);
netif_tx_unlock_bh(dev);
err = 0;
break;
......
......@@ -76,13 +76,13 @@ static void ri_tasklet(unsigned long dev)
dp->st_task_enter++;
if ((skb = skb_peek(&dp->tq)) == NULL) {
dp->st_txq_refl_try++;
if (spin_trylock(&_dev->xmit_lock)) {
if (netif_tx_trylock(_dev)) {
dp->st_rxq_enter++;
while ((skb = skb_dequeue(&dp->rq)) != NULL) {
skb_queue_tail(&dp->tq, skb);
dp->st_rx2tx_tran++;
}
spin_unlock(&_dev->xmit_lock);
netif_tx_unlock(_dev);
} else {
/* reschedule */
dp->st_rxq_notenter++;
......@@ -110,7 +110,7 @@ static void ri_tasklet(unsigned long dev)
}
}
if (spin_trylock(&_dev->xmit_lock)) {
if (netif_tx_trylock(_dev)) {
dp->st_rxq_check++;
if ((skb = skb_peek(&dp->rq)) == NULL) {
dp->tasklet_pending = 0;
......@@ -118,10 +118,10 @@ static void ri_tasklet(unsigned long dev)
netif_wake_queue(_dev);
} else {
dp->st_rxq_rsch++;
spin_unlock(&_dev->xmit_lock);
netif_tx_unlock(_dev);
goto resched;
}
spin_unlock(&_dev->xmit_lock);
netif_tx_unlock(_dev);
} else {
resched:
dp->tasklet_pending = 1;
......
......@@ -417,5 +417,20 @@ config PXA_FICP
available capabilities may vary from one PXA2xx target to
another.
config MCS_FIR
tristate "MosChip MCS7780 IrDA-USB dongle"
depends on IRDA && USB && EXPERIMENTAL
help
Say Y or M here if you want to build support for the MosChip
MCS7780 IrDA-USB bridge device driver.
USB bridges based on the MosChip MCS7780 do not conform to the
IrDA-USB device class specification, and therefore need their
own specific driver. These dongles support SIR and FIR (4 Mbps)
speeds.
To compile it as a module, choose M here: the module will be called
mcs7780.
endmenu
......@@ -19,6 +19,7 @@ obj-$(CONFIG_ALI_FIR) += ali-ircc.o
obj-$(CONFIG_VLSI_FIR) += vlsi_ir.o
obj-$(CONFIG_VIA_FIR) += via-ircc.o
obj-$(CONFIG_PXA_FICP) += pxaficp_ir.o
obj-$(CONFIG_MCS_FIR) += mcs7780.o
# Old dongle drivers for old SIR drivers
obj-$(CONFIG_ESI_DONGLE_OLD) += esi.o
obj-$(CONFIG_TEKRAM_DONGLE_OLD) += tekram.o
......
......@@ -34,14 +34,12 @@
#include <linux/rtnetlink.h>
#include <linux/serial_reg.h>
#include <linux/dma-mapping.h>
#include <linux/platform_device.h>
#include <asm/io.h>
#include <asm/dma.h>
#include <asm/byteorder.h>
#include <linux/pm.h>
#include <linux/pm_legacy.h>
#include <net/irda/wrapper.h>
#include <net/irda/irda.h>
#include <net/irda/irda_device.h>
......@@ -51,7 +49,19 @@
#define CHIP_IO_EXTENT 8
#define BROKEN_DONGLE_ID
static char *driver_name = "ali-ircc";
#define ALI_IRCC_DRIVER_NAME "ali-ircc"
/* Power Management */
static int ali_ircc_suspend(struct platform_device *dev, pm_message_t state);
static int ali_ircc_resume(struct platform_device *dev);
static struct platform_driver ali_ircc_driver = {
.suspend = ali_ircc_suspend,
.resume = ali_ircc_resume,
.driver = {
.name = ALI_IRCC_DRIVER_NAME,
},
};
/* Module parameters */
static int qos_mtt_bits = 0x07; /* 1 ms or more */
......@@ -97,10 +107,7 @@ static int ali_ircc_is_receiving(struct ali_ircc_cb *self);
static int ali_ircc_net_open(struct net_device *dev);
static int ali_ircc_net_close(struct net_device *dev);
static int ali_ircc_net_ioctl(struct net_device *dev, struct ifreq *rq, int cmd);
static int ali_ircc_pmproc(struct pm_dev *dev, pm_request_t rqst, void *data);
static void ali_ircc_change_speed(struct ali_ircc_cb *self, __u32 baud);
static void ali_ircc_suspend(struct ali_ircc_cb *self);
static void ali_ircc_wakeup(struct ali_ircc_cb *self);
static struct net_device_stats *ali_ircc_net_get_stats(struct net_device *dev);
/* SIR function */
......@@ -145,6 +152,14 @@ static int __init ali_ircc_init(void)
int i = 0;
IRDA_DEBUG(2, "%s(), ---------------- Start ----------------\n", __FUNCTION__);
ret = platform_driver_register(&ali_ircc_driver);
if (ret) {
IRDA_ERROR("%s, Can't register driver!\n",
ALI_IRCC_DRIVER_NAME);
return ret;
}
/* Probe for all the ALi chipsets we know about */
for (chip= chips; chip->name; chip++, i++)
......@@ -214,6 +229,10 @@ static int __init ali_ircc_init(void)
}
IRDA_DEBUG(2, "%s(), ----------------- End -----------------\n", __FUNCTION__);
if (ret)
platform_driver_unregister(&ali_ircc_driver);
return ret;
}
......@@ -228,14 +247,14 @@ static void __exit ali_ircc_cleanup(void)
int i;
IRDA_DEBUG(2, "%s(), ---------------- Start ----------------\n", __FUNCTION__);
pm_unregister_all(ali_ircc_pmproc);
for (i=0; i < 4; i++) {
if (dev_self[i])
ali_ircc_close(dev_self[i]);
}
platform_driver_unregister(&ali_ircc_driver);
IRDA_DEBUG(2, "%s(), ----------------- End -----------------\n", __FUNCTION__);
}
......@@ -249,7 +268,6 @@ static int ali_ircc_open(int i, chipio_t *info)
{
struct net_device *dev;
struct ali_ircc_cb *self;
struct pm_dev *pmdev;
int dongle_id;
int err;
......@@ -284,7 +302,8 @@ static int ali_ircc_open(int i, chipio_t *info)
self->io.fifo_size = 16; /* SIR: 16, FIR: 32 Benjamin 2000/11/1 */
/* Reserve the ioports that we need */
if (!request_region(self->io.fir_base, self->io.fir_ext, driver_name)) {
if (!request_region(self->io.fir_base, self->io.fir_ext,
ALI_IRCC_DRIVER_NAME)) {
IRDA_WARNING("%s(), can't get iobase of 0x%03x\n", __FUNCTION__,
self->io.fir_base);
err = -ENODEV;
......@@ -354,13 +373,10 @@ static int ali_ircc_open(int i, chipio_t *info)
/* Check dongle id */
dongle_id = ali_ircc_read_dongle_id(i, info);
IRDA_MESSAGE("%s(), %s, Found dongle: %s\n", __FUNCTION__, driver_name, dongle_types[dongle_id]);
IRDA_MESSAGE("%s(), %s, Found dongle: %s\n", __FUNCTION__,
ALI_IRCC_DRIVER_NAME, dongle_types[dongle_id]);
self->io.dongle_id = dongle_id;
pmdev = pm_register(PM_SYS_DEV, PM_SYS_IRDA, ali_ircc_pmproc);
if (pmdev)
pmdev->data = self;
IRDA_DEBUG(2, "%s(), ----------------- End -----------------\n", __FUNCTION__);
......@@ -548,12 +564,11 @@ static int ali_ircc_setup(chipio_t *info)
/* Should be 0x00 in the M1535/M1535D */
if(version != 0x00)
{
IRDA_ERROR("%s, Wrong chip version %02x\n", driver_name, version);
IRDA_ERROR("%s, Wrong chip version %02x\n",
ALI_IRCC_DRIVER_NAME, version);
return -1;
}
// IRDA_MESSAGE("%s, Found chip at base=0x%03x\n", driver_name, info->cfg_base);
/* Set FIR FIFO Threshold Register */
switch_bank(iobase, BANK1);
outb(RX_FIFO_Threshold, iobase+FIR_FIFO_TR);
......@@ -583,7 +598,8 @@ static int ali_ircc_setup(chipio_t *info)
/* Switch to SIR space */
FIR2SIR(iobase);
IRDA_MESSAGE("%s, driver loaded (Benjamin Kong)\n", driver_name);
IRDA_MESSAGE("%s, driver loaded (Benjamin Kong)\n",
ALI_IRCC_DRIVER_NAME);
/* Enable receive interrupts */
// outb(UART_IER_RDI, iobase+UART_IER); //benjamin 2000/11/23 01:25PM
......@@ -647,7 +663,8 @@ static irqreturn_t ali_ircc_interrupt(int irq, void *dev_id,
IRDA_DEBUG(2, "%s(), ---------------- Start ----------------\n", __FUNCTION__);
if (!dev) {
IRDA_WARNING("%s: irq %d for unknown device.\n", driver_name, irq);
IRDA_WARNING("%s: irq %d for unknown device.\n",
ALI_IRCC_DRIVER_NAME, irq);
return IRQ_NONE;
}
......@@ -1328,7 +1345,8 @@ static int ali_ircc_net_open(struct net_device *dev)
/* Request IRQ and install Interrupt Handler */
if (request_irq(self->io.irq, ali_ircc_interrupt, 0, dev->name, dev))
{
IRDA_WARNING("%s, unable to allocate irq=%d\n", driver_name,
IRDA_WARNING("%s, unable to allocate irq=%d\n",
ALI_IRCC_DRIVER_NAME,
self->io.irq);
return -EAGAIN;
}
......@@ -1338,7 +1356,8 @@ static int ali_ircc_net_open(struct net_device *dev)
* failure.
*/
if (request_dma(self->io.dma, dev->name)) {
IRDA_WARNING("%s, unable to allocate dma=%d\n", driver_name,
IRDA_WARNING("%s, unable to allocate dma=%d\n",
ALI_IRCC_DRIVER_NAME,
self->io.dma);
free_irq(self->io.irq, self);
return -EAGAIN;
......@@ -2108,61 +2127,38 @@ static struct net_device_stats *ali_ircc_net_get_stats(struct net_device *dev)
return &self->stats;
}
static void ali_ircc_suspend(struct ali_ircc_cb *self)
static int ali_ircc_suspend(struct platform_device *dev, pm_message_t state)
{
IRDA_DEBUG(2, "%s(), ---------------- Start ----------------\n", __FUNCTION__ );
struct ali_ircc_cb *self = platform_get_drvdata(dev);
IRDA_MESSAGE("%s, Suspending\n", driver_name);
IRDA_MESSAGE("%s, Suspending\n", ALI_IRCC_DRIVER_NAME);
if (self->io.suspended)
return;
return 0;
ali_ircc_net_close(self->netdev);
self->io.suspended = 1;
IRDA_DEBUG(2, "%s(), ----------------- End ------------------\n", __FUNCTION__ );
return 0;
}
static void ali_ircc_wakeup(struct ali_ircc_cb *self)
static int ali_ircc_resume(struct platform_device *dev)
{
IRDA_DEBUG(2, "%s(), ---------------- Start ----------------\n", __FUNCTION__ );
struct ali_ircc_cb *self = platform_get_drvdata(dev);
if (!self->io.suspended)
return;
return 0;
ali_ircc_net_open(self->netdev);
IRDA_MESSAGE("%s, Waking up\n", driver_name);
IRDA_MESSAGE("%s, Waking up\n", ALI_IRCC_DRIVER_NAME);
self->io.suspended = 0;
IRDA_DEBUG(2, "%s(), ----------------- End ------------------\n", __FUNCTION__ );
}
static int ali_ircc_pmproc(struct pm_dev *dev, pm_request_t rqst, void *data)
{
struct ali_ircc_cb *self = (struct ali_ircc_cb*) dev->data;
IRDA_DEBUG(2, "%s(), ---------------- Start ----------------\n", __FUNCTION__ );
if (self) {
switch (rqst) {
case PM_SUSPEND:
ali_ircc_suspend(self);
break;
case PM_RESUME:
ali_ircc_wakeup(self);
break;
}
}
IRDA_DEBUG(2, "%s(), ----------------- End ------------------\n", __FUNCTION__ );
return 0;
}
/* ALi Chip Function */
static void SetCOMInterrupts(struct ali_ircc_cb *self , unsigned char enable)
......
......@@ -83,9 +83,9 @@ static struct usb_device_id dongles[] = {
/* Extended Systems, Inc., XTNDAccess IrDA USB (ESI-9685) */
{ USB_DEVICE(0x8e9, 0x100), .driver_info = IUC_SPEED_BUG | IUC_NO_WINDOW },
/* SigmaTel STIR4210/4220/4116 USB IrDA (VFIR) Bridge */
{ USB_DEVICE(0x66f, 0x4210), .driver_info = IUC_STIR_4210 | IUC_SPEED_BUG },
{ USB_DEVICE(0x66f, 0x4220), .driver_info = IUC_STIR_4210 | IUC_SPEED_BUG },
{ USB_DEVICE(0x66f, 0x4116), .driver_info = IUC_STIR_4210 | IUC_SPEED_BUG },
{ USB_DEVICE(0x66f, 0x4210), .driver_info = IUC_STIR421X | IUC_SPEED_BUG },
{ USB_DEVICE(0x66f, 0x4220), .driver_info = IUC_STIR421X | IUC_SPEED_BUG },
{ USB_DEVICE(0x66f, 0x4116), .driver_info = IUC_STIR421X | IUC_SPEED_BUG },
{ .match_flags = USB_DEVICE_ID_MATCH_INT_CLASS |
USB_DEVICE_ID_MATCH_INT_SUBCLASS,
.bInterfaceClass = USB_CLASS_APP_SPEC,
......@@ -154,7 +154,7 @@ static void irda_usb_build_header(struct irda_usb_cb *self,
* and if either speed or xbofs (or both) needs
* to be changed.
*/
if (self->capability & IUC_STIR_4210 &&
if (self->capability & IUC_STIR421X &&
((self->new_speed != -1) || (self->new_xbofs != -1))) {
/* With STIR421x, speed and xBOFs must be set at the same
......@@ -318,7 +318,7 @@ static void irda_usb_change_speed_xbofs(struct irda_usb_cb *self)
/* Set the new speed and xbofs in this fake frame */
irda_usb_build_header(self, frame, 1);
if ( self->capability & IUC_STIR_4210 ) {
if (self->capability & IUC_STIR421X) {
if (frame[0] == 0) return ; // do nothing if no change
frame[1] = 0; // other parameters don't change here
frame[2] = 0;
......@@ -455,7 +455,7 @@ static int irda_usb_hard_xmit(struct sk_buff *skb, struct net_device *netdev)
/* Change setting for next frame */
if ( self->capability & IUC_STIR_4210 ) {
if (self->capability & IUC_STIR421X) {
__u8 turnaround_time;
__u8* frame;
turnaround_time = get_turnaround_time( skb );
......@@ -897,10 +897,13 @@ static void irda_usb_receive(struct urb *urb, struct pt_regs *regs)
docopy = (urb->actual_length < IRDA_RX_COPY_THRESHOLD);
/* Allocate a new skb */
if ( self->capability & IUC_STIR_4210 )
newskb = dev_alloc_skb(docopy ? urb->actual_length : IRDA_SKB_MAX_MTU + USB_IRDA_SIGMATEL_HEADER);
if (self->capability & IUC_STIR421X)
newskb = dev_alloc_skb(docopy ? urb->actual_length :
IRDA_SKB_MAX_MTU +
USB_IRDA_STIR421X_HEADER);
else
newskb = dev_alloc_skb(docopy ? urb->actual_length : IRDA_SKB_MAX_MTU);
newskb = dev_alloc_skb(docopy ? urb->actual_length :
IRDA_SKB_MAX_MTU);
if (!newskb) {
self->stats.rx_dropped++;
......@@ -1022,188 +1025,140 @@ static int irda_usb_is_receiving(struct irda_usb_cb *self)
return 0; /* For now */
}
#define STIR421X_PATCH_PRODUCT_VERSION_STR "Product Version: "
#define STIR421X_PATCH_COMPONENT_VERSION_STR "Component Version: "
#define STIR421X_PATCH_DATA_TAG_STR "STMP"
#define STIR421X_PATCH_FILE_VERSION_MAX_OFFSET 512 /* version info is before here */
#define STIR421X_PATCH_FILE_IMAGE_MAX_OFFSET 512 /* patch image starts before here */
#define STIR421X_PATCH_FILE_END_OF_HEADER_TAG 0x1A /* marks end of patch file header (PC DOS text file EOF character) */
#define STIR421X_PATCH_PRODUCT_VER "Product Version: "
#define STIR421X_PATCH_STMP_TAG "STMP"
#define STIR421X_PATCH_CODE_OFFSET 512 /* patch image starts before here */
/* marks end of patch file header (PC DOS text file EOF character) */
#define STIR421X_PATCH_END_OF_HDR_TAG 0x1A
#define STIR421X_PATCH_BLOCK_SIZE 1023
/*
* Known firmware patches for STIR421x dongles
* Function stir421x_fw_upload (struct irda_usb_cb *self,
* unsigned char *patch,
* const unsigned int patch_len)
*
* Upload firmware code to SigmaTel 421X IRDA-USB dongle
*/
static char * stir421x_patches[] = {
"42101001.sb",
"42101002.sb",
};
static int stir421x_get_patch_version(unsigned char * patch, const unsigned long patch_len)
static int stir421x_fw_upload(struct irda_usb_cb *self,
unsigned char *patch,
const unsigned int patch_len)
{
unsigned int version_offset;
unsigned long version_major, version_minor, version_build;
unsigned char * version_start;
int version_found = 0;
for (version_offset = 0;
version_offset < STIR421X_PATCH_FILE_END_OF_HEADER_TAG;
version_offset++) {
if (!memcmp(patch + version_offset,
STIR421X_PATCH_PRODUCT_VERSION_STR,
sizeof(STIR421X_PATCH_PRODUCT_VERSION_STR) - 1)) {
version_found = 1;
version_start = patch +
version_offset +
sizeof(STIR421X_PATCH_PRODUCT_VERSION_STR) - 1;
break;
}
int ret = -ENOMEM;
int actual_len = 0;
unsigned int i;
unsigned int block_size = 0;
unsigned char *patch_block;
patch_block = kzalloc(STIR421X_PATCH_BLOCK_SIZE, GFP_KERNEL);
if (patch_block == NULL)
return -ENOMEM;
/* break up patch into 1023-byte sections */
for (i = 0; i < patch_len; i += block_size) {
block_size = patch_len - i;
if (block_size > STIR421X_PATCH_BLOCK_SIZE)
block_size = STIR421X_PATCH_BLOCK_SIZE;
/* upload the patch section */
memcpy(patch_block, patch + i, block_size);
ret = usb_bulk_msg(self->usbdev,
usb_sndbulkpipe(self->usbdev,
self->bulk_out_ep),
patch_block, block_size,
&actual_len, msecs_to_jiffies(500));
IRDA_DEBUG(3,"%s(): Bulk send %u bytes, ret=%d\n",
__FUNCTION__, actual_len, ret);
if (ret < 0)
break;
}
/* We couldn't find a product version on this patch */
if (!version_found)
return -EINVAL;
/* Let's check if the product version is dotted */
if (version_start[3] != '.' ||
version_start[7] != '.')
return -EINVAL;
version_major = simple_strtoul(version_start, NULL, 10);
version_minor = simple_strtoul(version_start + 4, NULL, 10);
version_build = simple_strtoul(version_start + 8, NULL, 10);
IRDA_DEBUG(2, "%s(), Major: %ld Minor: %ld Build: %ld\n",
__FUNCTION__,
version_major, version_minor, version_build);
return (((version_major) << 12) +
((version_minor) << 8) +
((version_build / 10) << 4) +
(version_build % 10));
}
static int stir421x_upload_patch (struct irda_usb_cb *self,
unsigned char * patch,
const unsigned int patch_len)
{
int retval = 0;
int actual_len;
unsigned int i = 0, download_amount = 0;
unsigned char * patch_chunk;
IRDA_DEBUG (2, "%s(), Uploading STIR421x Patch\n", __FUNCTION__);
patch_chunk = kzalloc(STIR421X_MAX_PATCH_DOWNLOAD_SIZE, GFP_KERNEL);
if (patch_chunk == NULL)
return -ENOMEM;
/* break up patch into 1023-byte sections */
for (i = 0; retval >= 0 && i < patch_len; i += download_amount) {
download_amount = patch_len - i;
if (download_amount > STIR421X_MAX_PATCH_DOWNLOAD_SIZE)
download_amount = STIR421X_MAX_PATCH_DOWNLOAD_SIZE;
/* download the patch section */
memcpy(patch_chunk, patch + i, download_amount);
retval = usb_bulk_msg (self->usbdev,
usb_sndbulkpipe (self->usbdev,
self->bulk_out_ep),
patch_chunk, download_amount,
&actual_len, msecs_to_jiffies (500));
IRDA_DEBUG (2, "%s(), Sent %u bytes\n", __FUNCTION__,
actual_len);
if (retval == 0)
mdelay(10);
}
kfree(patch_chunk);
if (i != patch_len) {
IRDA_ERROR ("%s(), Pushed %d bytes (!= patch_len (%d))\n",
__FUNCTION__, i, patch_len);
retval = -EIO;
}
if (retval < 0)
/* todo - mark device as not ready */
IRDA_ERROR ("%s(), STIR421x patch upload failed (%d)\n",
__FUNCTION__, retval);
return retval;
}
kfree(patch_block);
return ret;
}
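/* Worked example of the chunking above (illustrative numbers only):
 * a 2600-byte image goes out as three bulk transfers of 1023, 1023
 * and 554 bytes, since STIR421X_PATCH_BLOCK_SIZE is 1023 and the
 * final block is patch_len - i = 2600 - 2046 = 554. */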
/*
* Function stir421x_patch_device(struct irda_usb_cb *self)
*
* Get firmware from userspace using the hotplug request_firmware() call
*/
static int stir421x_patch_device(struct irda_usb_cb *self)
{
unsigned int i, patch_found = 0, data_found = 0, data_offset;
int patch_version, ret = 0;
const struct firmware *fw_entry;
for (i = 0; i < ARRAY_SIZE(stir421x_patches); i++) {
if(request_firmware(&fw_entry, stir421x_patches[i], &self->usbdev->dev) != 0) {
IRDA_ERROR( "%s(), Patch %s is not available\n", __FUNCTION__, stir421x_patches[i]);
continue;
}
/* We found a patch from userspace */
patch_version = stir421x_get_patch_version (fw_entry->data, fw_entry->size);
if (patch_version < 0) {
/* Couldn't fetch a version, let's move on to the next file */
IRDA_ERROR("%s(), version parsing failed\n", __FUNCTION__);
ret = patch_version;
release_firmware(fw_entry);
continue;
}
if (patch_version != self->usbdev->descriptor.bcdDevice) {
/* Patch version and device don't match */
IRDA_ERROR ("%s(), wrong patch version (%d <-> %d)\n",
__FUNCTION__,
patch_version, self->usbdev->descriptor.bcdDevice);
ret = -EINVAL;
release_firmware(fw_entry);
continue;
}
/* If we're here, we've found a correct patch */
patch_found = 1;
break;
}
/* We couldn't find a valid firmware, let's leave */
if (!patch_found)
return ret;
/* The actual image starts after the "STMP" keyword */
for (data_offset = 0; data_offset < STIR421X_PATCH_FILE_IMAGE_MAX_OFFSET; data_offset++) {
if (!memcmp(fw_entry->data + data_offset,
STIR421X_PATCH_DATA_TAG_STR,
sizeof(STIR421X_PATCH_FILE_IMAGE_MAX_OFFSET))) {
IRDA_DEBUG(2, "%s(), found patch data for STIR421x at offset %d\n",
__FUNCTION__, data_offset);
data_found = 1;
break;
}
}
/* We couldn't find "STMP" from the header */
if (!data_found)
return -EINVAL;
/* Let's upload the patch to the target */
ret = stir421x_upload_patch(self,
&fw_entry->data[data_offset + sizeof(STIR421X_PATCH_FILE_IMAGE_MAX_OFFSET)],
fw_entry->size - (data_offset + sizeof(STIR421X_PATCH_FILE_IMAGE_MAX_OFFSET)));
release_firmware(fw_entry);
return ret;
unsigned int i;
int ret;
char stir421x_fw_name[12];	/* "4210XXXX.sb" is 11 chars + NUL */
const struct firmware *fw;
unsigned char *fw_version_ptr; /* pointer to version string */
unsigned long fw_version = 0;
/*
* Known firmware patch file names for STIR421x dongles
* are "42101001.sb" or "42101002.sb"
*/
sprintf(stir421x_fw_name, "4210%4X.sb",
self->usbdev->descriptor.bcdDevice);
ret = request_firmware(&fw, stir421x_fw_name, &self->usbdev->dev);
if (ret < 0)
return ret;
/* We get a patch from userspace */
IRDA_MESSAGE("%s(): Received firmware %s (%u bytes)\n",
__FUNCTION__, stir421x_fw_name, fw->size);
ret = -EINVAL;
/* Get the bcd product version */
if (!memcmp(fw->data, STIR421X_PATCH_PRODUCT_VER,
sizeof(STIR421X_PATCH_PRODUCT_VER) - 1)) {
fw_version_ptr = fw->data +
sizeof(STIR421X_PATCH_PRODUCT_VER) - 1;
/* Let's check if the product version is dotted */
if (fw_version_ptr[3] == '.' &&
fw_version_ptr[7] == '.') {
unsigned long major, minor, build;
major = simple_strtoul(fw_version_ptr, NULL, 10);
minor = simple_strtoul(fw_version_ptr + 4, NULL, 10);
build = simple_strtoul(fw_version_ptr + 8, NULL, 10);
fw_version = (major << 12)
+ (minor << 8)
+ ((build / 10) << 4)
+ (build % 10);
IRDA_DEBUG(3, "%s(): Firmware Product version %ld\n",
__FUNCTION__, fw_version);
}
}
if (self->usbdev->descriptor.bcdDevice == fw_version) {
/*
* If we're here, we've found a correct patch
* The actual image starts after the "STMP" keyword
* so forward to the firmware header tag
*/
for (i = 0; (fw->data[i] != STIR421X_PATCH_END_OF_HDR_TAG)
&& (i < fw->size); i++) ;
/* here we check for the out of buffer case */
if ((STIR421X_PATCH_END_OF_HDR_TAG == fw->data[i])
&& (i < STIR421X_PATCH_CODE_OFFSET)) {
if (!memcmp(fw->data + i + 1, STIR421X_PATCH_STMP_TAG,
sizeof(STIR421X_PATCH_STMP_TAG) - 1)) {
/* We can upload the patch to the target */
i += sizeof(STIR421X_PATCH_STMP_TAG);
ret = stir421x_fw_upload(self, &fw->data[i],
fw->size - i);
}
}
}
release_firmware(fw);
return ret;
}
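/* Worked example of the version packing above (illustrative values,
 * assuming the dotted three-field header format the checks imply):
 * a header line "Product Version: 001.000.002" gives major = 1,
 * minor = 0, build = 2, so
 *
 *     fw_version = (1 << 12) + (0 << 8) + ((2 / 10) << 4) + (2 % 10)
 *                = 0x1002,
 *
 * which matches a device reporting bcdDevice == 0x1002 -- the same
 * value that generated the firmware file name "42101002.sb". */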
......@@ -1702,12 +1657,12 @@ static int irda_usb_probe(struct usb_interface *intf,
init_timer(&self->rx_defer_timer);
self->capability = id->driver_info;
self->needspatch = ((self->capability & IUC_STIR_4210) != 0) ;
self->needspatch = ((self->capability & IUC_STIR421X) != 0);
/* Create all of the needed urbs */
if (self->capability & IUC_STIR_4210) {
if (self->capability & IUC_STIR421X) {
self->max_rx_urb = IU_SIGMATEL_MAX_RX_URBS;
self->header_length = USB_IRDA_SIGMATEL_HEADER;
self->header_length = USB_IRDA_STIR421X_HEADER;
} else {
self->max_rx_urb = IU_MAX_RX_URBS;
self->header_length = USB_IRDA_HEADER;
......@@ -1813,8 +1768,8 @@ static int irda_usb_probe(struct usb_interface *intf,
/* Now we fetch and upload the firmware patch */
ret = stir421x_patch_device(self);
self->needspatch = (ret < 0);
if (ret < 0) {
printk("patch_device failed\n");
if (self->needspatch) {
IRDA_ERROR("STIR421X: Couldn't upload patch\n");
goto err_out_5;
}
......
......@@ -34,9 +34,6 @@
#include <net/irda/irda.h>
#include <net/irda/irda_device.h> /* struct irlap_cb */
#define PATCH_FILE_SIZE_MAX 65536
#define PATCH_FILE_SIZE_MIN 80
#define RX_COPY_THRESHOLD 200
#define IRDA_USB_MAX_MTU 2051
#define IRDA_USB_SPEED_MTU 64 /* Weird, but work like this */
......@@ -107,14 +104,15 @@
#define IUC_SMALL_PKT 0x10 /* Device doesn't behave with big Rx packets */
#define IUC_MAX_WINDOW 0x20 /* Device underestimate the Rx window */
#define IUC_MAX_XBOFS 0x40 /* Device need more xbofs than advertised */
#define IUC_STIR_4210 0x80 /* SigmaTel 4210/4220/4116 VFIR */
#define IUC_STIR421X 0x80 /* SigmaTel 4210/4220/4116 VFIR */
/* USB class definitions */
#define USB_IRDA_HEADER 0x01
#define USB_CLASS_IRDA 0x02 /* USB_CLASS_APP_SPEC subclass */
#define USB_DT_IRDA 0x21
#define USB_IRDA_SIGMATEL_HEADER 0x03
#define IU_SIGMATEL_MAX_RX_URBS (IU_MAX_ACTIVE_RX_URBS + USB_IRDA_SIGMATEL_HEADER)
#define USB_IRDA_STIR421X_HEADER 0x03
#define IU_SIGMATEL_MAX_RX_URBS (IU_MAX_ACTIVE_RX_URBS + \
USB_IRDA_STIR421X_HEADER)
struct irda_class_desc {
__u8 bLength;
......
(This diff has been collapsed.)
/*****************************************************************************
*
* Filename: mcs7780.h
* Version: 0.2-alpha
* Description: Irda MosChip USB Dongle
* Status: Experimental
* Authors: Lukasz Stelmach <stlman@poczta.fm>
* Brian Pugh <bpugh@cs.pdx.edu>
*
* Copyright (C) 2005, Lukasz Stelmach <stlman@poczta.fm>
* Copyright (C) 2005, Brian Pugh <bpugh@cs.pdx.edu>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*
*****************************************************************************/
#ifndef _MCS7780_H
#define _MCS7780_H
#define MCS_MODE_SIR 0
#define MCS_MODE_MIR 1
#define MCS_MODE_FIR 2
#define MCS_CTRL_TIMEOUT 500
#define MCS_XMIT_TIMEOUT 500
/* Possible transceiver types */
#define MCS_TSC_VISHAY 0 /* Vishay TFD, default choice */
#define MCS_TSC_AGILENT 1 /* Agilent 3602/3600 */
#define MCS_TSC_SHARP 2 /* Sharp GP2W1000YP */
/* Requests */
#define MCS_RD_RTYPE 0xC0
#define MCS_WR_RTYPE 0x40
#define MCS_RDREQ 0x0F
#define MCS_WRREQ 0x0E
/* Register 0x00 */
#define MCS_MODE_REG 0
#define MCS_FIR ((__u16)0x0001)
#define MCS_SIR16US ((__u16)0x0002)
#define MCS_BBTG ((__u16)0x0004)
#define MCS_ASK ((__u16)0x0008)
#define MCS_PARITY ((__u16)0x0010)
/* SIR/MIR speed constants */
#define MCS_SPEED_SHIFT 5
#define MCS_SPEED_MASK ((__u16)0x00E0)
#define MCS_SPEED(x) ((x & MCS_SPEED_MASK) >> MCS_SPEED_SHIFT)
#define MCS_SPEED_2400 ((0 << MCS_SPEED_SHIFT) & MCS_SPEED_MASK)
#define MCS_SPEED_9600 ((1 << MCS_SPEED_SHIFT) & MCS_SPEED_MASK)
#define MCS_SPEED_19200 ((2 << MCS_SPEED_SHIFT) & MCS_SPEED_MASK)
#define MCS_SPEED_38400 ((3 << MCS_SPEED_SHIFT) & MCS_SPEED_MASK)
#define MCS_SPEED_57600 ((4 << MCS_SPEED_SHIFT) & MCS_SPEED_MASK)
#define MCS_SPEED_115200 ((5 << MCS_SPEED_SHIFT) & MCS_SPEED_MASK)
#define MCS_SPEED_576000 ((6 << MCS_SPEED_SHIFT) & MCS_SPEED_MASK)
#define MCS_SPEED_1152000 ((7 << MCS_SPEED_SHIFT) & MCS_SPEED_MASK)
#define MCS_PLLPWDN ((__u16)0x0100)
#define MCS_DRIVER ((__u16)0x0200)
#define MCS_DTD ((__u16)0x0400)
#define MCS_DIR ((__u16)0x0800)
#define MCS_SIPEN ((__u16)0x1000)
#define MCS_SENDSIP ((__u16)0x2000)
#define MCS_CHGDIR ((__u16)0x4000)
#define MCS_RESET ((__u16)0x8000)
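/* Sketch (illustrative; assumes mcs_get_reg()/mcs_set_reg() declared
 * below follow the usual negative-on-error convention): program
 * 115200 baud in the mode register while preserving the other bits.
 *
 *     __u16 mode;
 *
 *     if (mcs_get_reg(mcs, MCS_MODE_REG, &mode) >= 0) {
 *             mode = (mode & ~MCS_SPEED_MASK) | MCS_SPEED_115200;
 *             mcs_set_reg(mcs, MCS_MODE_REG, mode);
 *     }
 */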
/* Register 0x02 */
#define MCS_XCVR_REG 2
#define MCS_MODE0 ((__u16)0x0001)
#define MCS_STFIR ((__u16)0x0002)
#define MCS_XCVR_CONF ((__u16)0x0004)
#define MCS_RXFAST ((__u16)0x0008)
/* TXCUR [6:4] */
#define MCS_TXCUR_SHIFT 4
#define MCS_TXCUR_MASK ((__u16)0x0070)
#define MCS_TXCUR(x) ((x & MCS_TXCUR_MASK) >> MCS_TXCUR_SHIFT)
#define MCS_SETTXCUR(x,y) \
((x & ~MCS_TXCUR_MASK) | (y << MCS_TXCUR_SHIFT) & MCS_TXCUR_MASK)
#define MCS_MODE1 ((__u16)0x0080)
#define MCS_SMODE0 ((__u16)0x0100)
#define MCS_SMODE1 ((__u16)0x0200)
#define MCS_INVTX ((__u16)0x0400)
#define MCS_INVRX ((__u16)0x0800)
#define MCS_MINRXPW_REG 4
#define MCS_RESV_REG 7
#define MCS_IRINTX ((__u16)0x0001)
#define MCS_IRINRX ((__u16)0x0002)
struct mcs_cb {
struct usb_device *usbdev; /* init: probe_irda */
struct net_device *netdev; /* network layer */
struct irlap_cb *irlap; /* The link layer we are bound to */
struct net_device_stats stats; /* network statistics */
struct qos_info qos;
unsigned int speed; /* Current speed */
unsigned int new_speed; /* new speed */
struct work_struct work; /* Change speed work */
struct sk_buff *tx_pending;
char in_buf[4096]; /* transmit/receive buffer */
char out_buf[4096]; /* transmit/receive buffer */
__u8 *fifo_status;
iobuff_t rx_buff; /* receive unwrap state machine */
struct timeval rx_time;
spinlock_t lock;
int receiving;
__u8 ep_in;
__u8 ep_out;
struct urb *rx_urb;
struct urb *tx_urb;
int transceiver_type;
int sir_tweak;
int receive_mode;
};
static int mcs_set_reg(struct mcs_cb *mcs, __u16 reg, __u16 val);
static int mcs_get_reg(struct mcs_cb *mcs, __u16 reg, __u16 * val);
static inline int mcs_setup_transceiver_vishay(struct mcs_cb *mcs);
static inline int mcs_setup_transceiver_agilent(struct mcs_cb *mcs);
static inline int mcs_setup_transceiver_sharp(struct mcs_cb *mcs);
static inline int mcs_setup_transceiver(struct mcs_cb *mcs);
static inline int mcs_wrap_sir_skb(struct sk_buff *skb, __u8 * buf);
static unsigned mcs_wrap_fir_skb(const struct sk_buff *skb, __u8 *buf);
static unsigned mcs_wrap_mir_skb(const struct sk_buff *skb, __u8 *buf);
static void mcs_unwrap_mir(struct mcs_cb *mcs, __u8 *buf, int len);
static void mcs_unwrap_fir(struct mcs_cb *mcs, __u8 *buf, int len);
static inline int mcs_setup_urbs(struct mcs_cb *mcs);
static inline int mcs_receive_start(struct mcs_cb *mcs);
static inline int mcs_find_endpoints(struct mcs_cb *mcs,
struct usb_host_endpoint *ep, int epnum);
static int mcs_speed_change(struct mcs_cb *mcs);
static int mcs_net_ioctl(struct net_device *netdev, struct ifreq *rq, int cmd);
static int mcs_net_close(struct net_device *netdev);
static int mcs_net_open(struct net_device *netdev);
static struct net_device_stats *mcs_net_get_stats(struct net_device *netdev);
static void mcs_receive_irq(struct urb *urb, struct pt_regs *regs);
static void mcs_send_irq(struct urb *urb, struct pt_regs *regs);
static int mcs_hard_xmit(struct sk_buff *skb, struct net_device *netdev);
static int mcs_probe(struct usb_interface *intf,
const struct usb_device_id *id);
static void mcs_disconnect(struct usb_interface *intf);
#endif /* _MCS7780_H */
......@@ -50,6 +50,7 @@
#include <linux/delay.h>
#include <linux/usb.h>
#include <linux/crc32.h>
#include <linux/kthread.h>
#include <net/irda/irda.h>
#include <net/irda/irlap.h>
#include <net/irda/irda_device.h>
......@@ -173,9 +174,7 @@ struct stir_cb {
struct qos_info qos;
unsigned speed; /* Current speed */
wait_queue_head_t thr_wait; /* transmit thread wakeup */
struct completion thr_exited;
pid_t thr_pid;
struct task_struct *thread; /* transmit thread */
struct sk_buff *tx_pending;
void *io_buf; /* transmit/receive buffer */
......@@ -577,7 +576,7 @@ static int stir_hard_xmit(struct sk_buff *skb, struct net_device *netdev)
SKB_LINEAR_ASSERT(skb);
skb = xchg(&stir->tx_pending, skb);
wake_up(&stir->thr_wait);
wake_up_process(stir->thread);
/* this should never happen unless stop/wakeup problem */
if (unlikely(skb)) {
......@@ -753,13 +752,7 @@ static int stir_transmit_thread(void *arg)
struct net_device *dev = stir->netdev;
struct sk_buff *skb;
daemonize("%s", dev->name);
allow_signal(SIGTERM);
while (netif_running(dev)
&& netif_device_present(dev)
&& !signal_pending(current))
{
while (!kthread_should_stop()) {
#ifdef CONFIG_PM
/* if suspending, then power off and wait */
if (unlikely(freezing(current))) {
......@@ -813,10 +806,11 @@ static int stir_transmit_thread(void *arg)
}
/* sleep if nothing to send */
wait_event_interruptible(stir->thr_wait, stir->tx_pending);
}
set_current_state(TASK_INTERRUPTIBLE);
schedule();
complete_and_exit (&stir->thr_exited, 0);
}
return 0;
}
......@@ -859,7 +853,7 @@ static void stir_rcv_irq(struct urb *urb, struct pt_regs *regs)
warn("%s: usb receive submit error: %d",
stir->netdev->name, err);
stir->receiving = 0;
wake_up(&stir->thr_wait);
wake_up_process(stir->thread);
}
}
......@@ -928,10 +922,10 @@ static int stir_net_open(struct net_device *netdev)
}
/** Start kernel thread for transmit. */
stir->thr_pid = kernel_thread(stir_transmit_thread, stir,
CLONE_FS|CLONE_FILES);
if (stir->thr_pid < 0) {
err = stir->thr_pid;
stir->thread = kthread_run(stir_transmit_thread, stir,
"%s", stir->netdev->name);
if (IS_ERR(stir->thread)) {
err = PTR_ERR(stir->thread);
err("stir4200: unable to start kernel thread");
goto err_out6;
}
......@@ -968,8 +962,7 @@ static int stir_net_close(struct net_device *netdev)
netif_stop_queue(netdev);
/* Kill transmit thread */
kill_proc(stir->thr_pid, SIGTERM, 1);
wait_for_completion(&stir->thr_exited);
kthread_stop(stir->thread);
kfree(stir->fifo_status);
/* Mop up receive urb's */
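/* Sketch (not part of this patch): the generic lifecycle the
 * conversion adopts.  kthread_stop() sets the should-stop flag,
 * wakes the thread and waits for it to exit, which is what made
 * the completion/SIGTERM machinery removable.
 *
 *     struct task_struct *t;
 *
 *     t = kthread_run(thread_fn, data, "name");  // spawn and wake
 *     if (IS_ERR(t))
 *             return PTR_ERR(t);
 *     ...
 *     kthread_stop(t);                           // request exit, then join
 *
 * with thread_fn looping on !kthread_should_stop().
 */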
......@@ -1084,9 +1077,6 @@ static int stir_probe(struct usb_interface *intf,
stir->qos.min_turn_time.bits &= qos_mtt_bits;
irda_qos_bits_to_value(&stir->qos);
init_completion (&stir->thr_exited);
init_waitqueue_head (&stir->thr_wait);
/* Override the network functions we need to use */
net->hard_start_xmit = stir_hard_xmit;
net->open = stir_net_open;
......
......@@ -959,7 +959,7 @@ static int vlsi_hard_start_xmit(struct sk_buff *skb, struct net_device *ndev)
|| (now.tv_sec==ready.tv_sec && now.tv_usec>=ready.tv_usec))
break;
udelay(100);
/* must not sleep here - we are called under xmit_lock! */
/* must not sleep here - called under netif_tx_lock! */
}
}
......
......@@ -1200,7 +1200,7 @@ static int mv643xx_eth_start_xmit(struct sk_buff *skb, struct net_device *dev)
}
if (has_tiny_unaligned_frags(skb)) {
if ((skb_linearize(skb, GFP_ATOMIC) != 0)) {
if (__skb_linearize(skb)) {
stats->tx_dropped++;
printk(KERN_DEBUG "%s: failed to linearize tiny "
"unaligned fragment\n", dev->name);
......
......@@ -318,12 +318,12 @@ performance critical codepaths:
The rx process only runs in the interrupt handler. Access from outside
the interrupt handler is only permitted after disable_irq().
The rx process usually runs under the dev->xmit_lock. If np->intr_tx_reap
The rx process usually runs under the netif_tx_lock. If np->intr_tx_reap
is set, then access is permitted under spin_lock_irq(&np->lock).
Thus configuration functions that want to access everything must call
disable_irq(dev->irq);
spin_lock_bh(dev->xmit_lock);
netif_tx_lock_bh(dev);
spin_lock_irq(&np->lock);
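A minimal sketch of the full bracket implied above (releases in
reverse order; the reconfiguration body itself is driver-specific):

	disable_irq(dev->irq);
	netif_tx_lock_bh(dev);
	spin_lock_irq(&np->lock);
	/* ... touch rx and tx state ... */
	spin_unlock_irq(&np->lock);
	netif_tx_unlock_bh(dev);
	enable_irq(dev->irq);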
IV. Notes
......
......@@ -1609,8 +1609,6 @@ ppp_receive_nonmp_frame(struct ppp *ppp, struct sk_buff *skb)
kfree_skb(skb);
skb = ns;
}
else if (!pskb_may_pull(skb, skb->len))
goto err;
else
skb->ip_summed = CHECKSUM_NONE;
......
......@@ -69,8 +69,8 @@
#define DRV_MODULE_NAME "tg3"
#define PFX DRV_MODULE_NAME ": "
#define DRV_MODULE_VERSION "3.59"
#define DRV_MODULE_RELDATE "June 8, 2006"
#define DRV_MODULE_VERSION "3.60"
#define DRV_MODULE_RELDATE "June 17, 2006"
#define TG3_DEF_MAC_MODE 0
#define TG3_DEF_RX_MODE 0
......@@ -229,6 +229,8 @@ static struct pci_device_id tg3_pci_tbl[] = {
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5755M,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5786,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5787,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5787M,
......@@ -2965,6 +2967,27 @@ static int tg3_setup_phy(struct tg3 *tp, int force_reset)
return err;
}
/* This is called whenever we suspect that the system chipset is re-
* ordering the sequence of MMIO to the tx send mailbox. The symptom
* is bogus tx completions. We try to recover by setting the
* TG3_FLAG_MBOX_WRITE_REORDER flag and resetting the chip later
* in the workqueue.
*/
static void tg3_tx_recover(struct tg3 *tp)
{
BUG_ON((tp->tg3_flags & TG3_FLAG_MBOX_WRITE_REORDER) ||
tp->write32_tx_mbox == tg3_write_indirect_mbox);
printk(KERN_WARNING PFX "%s: The system may be re-ordering memory-"
"mapped I/O cycles to the network device, attempting to "
"recover. Please report the problem to the driver maintainer "
"and include system chipset information.\n", tp->dev->name);
spin_lock(&tp->lock);
tp->tg3_flags |= TG3_FLAG_TX_RECOVERY_PENDING;
spin_unlock(&tp->lock);
}
/* Tigon3 never reports partial packet sends. So we do not
* need special logic to handle SKBs that have not had all
* of their frags sent yet, like SunGEM does.
......@@ -2977,9 +3000,13 @@ static void tg3_tx(struct tg3 *tp)
while (sw_idx != hw_idx) {
struct tx_ring_info *ri = &tp->tx_buffers[sw_idx];
struct sk_buff *skb = ri->skb;
int i;
int i, tx_bug = 0;
if (unlikely(skb == NULL)) {
tg3_tx_recover(tp);
return;
}
BUG_ON(skb == NULL);
pci_unmap_single(tp->pdev,
pci_unmap_addr(ri, mapping),
skb_headlen(skb),
......@@ -2990,10 +3017,9 @@ static void tg3_tx(struct tg3 *tp)
sw_idx = NEXT_TX(sw_idx);
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
BUG_ON(sw_idx == hw_idx);
ri = &tp->tx_buffers[sw_idx];
BUG_ON(ri->skb != NULL);
if (unlikely(ri->skb != NULL || sw_idx == hw_idx))
tx_bug = 1;
pci_unmap_page(tp->pdev,
pci_unmap_addr(ri, mapping),
......@@ -3004,6 +3030,11 @@ static void tg3_tx(struct tg3 *tp)
}
dev_kfree_skb(skb);
if (unlikely(tx_bug)) {
tg3_tx_recover(tp);
return;
}
}
tp->tx_cons = sw_idx;
......@@ -3331,6 +3362,11 @@ static int tg3_poll(struct net_device *netdev, int *budget)
/* run TX completion thread */
if (sblk->idx[0].tx_consumer != tp->tx_cons) {
tg3_tx(tp);
if (unlikely(tp->tg3_flags & TG3_FLAG_TX_RECOVERY_PENDING)) {
netif_rx_complete(netdev);
schedule_work(&tp->reset_task);
return 0;
}
}
/* run RX thread, within the bounds set by NAPI.
......@@ -3391,12 +3427,10 @@ static inline void tg3_full_lock(struct tg3 *tp, int irq_sync)
if (irq_sync)
tg3_irq_quiesce(tp);
spin_lock_bh(&tp->lock);
spin_lock(&tp->tx_lock);
}
static inline void tg3_full_unlock(struct tg3 *tp)
{
spin_unlock(&tp->tx_lock);
spin_unlock_bh(&tp->lock);
}
......@@ -3579,6 +3613,13 @@ static void tg3_reset_task(void *_data)
restart_timer = tp->tg3_flags2 & TG3_FLG2_RESTART_TIMER;
tp->tg3_flags2 &= ~TG3_FLG2_RESTART_TIMER;
if (tp->tg3_flags & TG3_FLAG_TX_RECOVERY_PENDING) {
tp->write32_tx_mbox = tg3_write32_tx_mbox;
tp->write32_rx_mbox = tg3_write_flush_reg32;
tp->tg3_flags |= TG3_FLAG_MBOX_WRITE_REORDER;
tp->tg3_flags &= ~TG3_FLAG_TX_RECOVERY_PENDING;
}
tg3_halt(tp, RESET_KIND_SHUTDOWN, 0);
tg3_init_hw(tp, 1);
......@@ -3718,14 +3759,11 @@ static int tg3_start_xmit(struct sk_buff *skb, struct net_device *dev)
len = skb_headlen(skb);
/* No BH disabling for tx_lock here. We are running in BH disabled
* context and TX reclaim runs via tp->poll inside of a software
/* We are running in BH disabled context with netif_tx_lock
* and TX reclaim runs via tp->poll inside of a software
* interrupt. Furthermore, IRQ processing runs lockless so we have
* no IRQ context deadlocks to worry about either. Rejoice!
*/
if (!spin_trylock(&tp->tx_lock))
return NETDEV_TX_LOCKED;
if (unlikely(TX_BUFFS_AVAIL(tp) <= (skb_shinfo(skb)->nr_frags + 1))) {
if (!netif_queue_stopped(dev)) {
netif_stop_queue(dev);
......@@ -3734,7 +3772,6 @@ static int tg3_start_xmit(struct sk_buff *skb, struct net_device *dev)
printk(KERN_ERR PFX "%s: BUG! Tx Ring full when "
"queue awake!\n", dev->name);
}
spin_unlock(&tp->tx_lock);
return NETDEV_TX_BUSY;
}
......@@ -3817,15 +3854,16 @@ static int tg3_start_xmit(struct sk_buff *skb, struct net_device *dev)
tw32_tx_mbox((MAILBOX_SNDHOST_PROD_IDX_0 + TG3_64BIT_REG_LOW), entry);
tp->tx_prod = entry;
if (TX_BUFFS_AVAIL(tp) <= (MAX_SKB_FRAGS + 1)) {
if (unlikely(TX_BUFFS_AVAIL(tp) <= (MAX_SKB_FRAGS + 1))) {
spin_lock(&tp->tx_lock);
netif_stop_queue(dev);
if (TX_BUFFS_AVAIL(tp) > TG3_TX_WAKEUP_THRESH)
netif_wake_queue(tp->dev);
spin_unlock(&tp->tx_lock);
}
out_unlock:
mmiowb();
spin_unlock(&tp->tx_lock);
dev->trans_start = jiffies;
......@@ -3844,14 +3882,11 @@ static int tg3_start_xmit_dma_bug(struct sk_buff *skb, struct net_device *dev)
len = skb_headlen(skb);
/* No BH disabling for tx_lock here. We are running in BH disabled
* context and TX reclaim runs via tp->poll inside of a software
/* We are running in BH disabled context with netif_tx_lock
* and TX reclaim runs via tp->poll inside of a software
* interrupt. Furthermore, IRQ processing runs lockless so we have
* no IRQ context deadlocks to worry about either. Rejoice!
*/
if (!spin_trylock(&tp->tx_lock))
return NETDEV_TX_LOCKED;
if (unlikely(TX_BUFFS_AVAIL(tp) <= (skb_shinfo(skb)->nr_frags + 1))) {
if (!netif_queue_stopped(dev)) {
netif_stop_queue(dev);
......@@ -3860,7 +3895,6 @@ static int tg3_start_xmit_dma_bug(struct sk_buff *skb, struct net_device *dev)
printk(KERN_ERR PFX "%s: BUG! Tx Ring full when "
"queue awake!\n", dev->name);
}
spin_unlock(&tp->tx_lock);
return NETDEV_TX_BUSY;
}
......@@ -3998,15 +4032,16 @@ static int tg3_start_xmit_dma_bug(struct sk_buff *skb, struct net_device *dev)
tw32_tx_mbox((MAILBOX_SNDHOST_PROD_IDX_0 + TG3_64BIT_REG_LOW), entry);
tp->tx_prod = entry;
if (TX_BUFFS_AVAIL(tp) <= (MAX_SKB_FRAGS + 1)) {
if (unlikely(TX_BUFFS_AVAIL(tp) <= (MAX_SKB_FRAGS + 1))) {
spin_lock(&tp->tx_lock);
netif_stop_queue(dev);
if (TX_BUFFS_AVAIL(tp) > TG3_TX_WAKEUP_THRESH)
netif_wake_queue(tp->dev);
spin_unlock(&tp->tx_lock);
}
out_unlock:
mmiowb();
spin_unlock(&tp->tx_lock);
dev->trans_start = jiffies;
......@@ -11243,7 +11278,6 @@ static int __devinit tg3_init_one(struct pci_dev *pdev,
SET_MODULE_OWNER(dev);
SET_NETDEV_DEV(dev, &pdev->dev);
dev->features |= NETIF_F_LLTX;
#if TG3_VLAN_TAG_USED
dev->features |= NETIF_F_HW_VLAN_TX | NETIF_F_HW_VLAN_RX;
dev->vlan_rx_register = tg3_vlan_rx_register;
......
......@@ -2074,12 +2074,22 @@ struct tg3 {
/* SMP locking strategy:
*
* lock: Held during all operations except TX packet
* processing.
* lock: Held during reset, PHY access, timer, and when
* updating tg3_flags and tg3_flags2.
*
* tx_lock: Held during tg3_start_xmit and tg3_tx
* tx_lock: Held during tg3_start_xmit and tg3_tx only
* when calling netif_[start|stop]_queue.
* tg3_start_xmit is protected by netif_tx_lock.
*
* Both of these locks are to be held with BH safety.
*
* Because the IRQ handler, tg3_poll, and tg3_start_xmit
* are running lockless, it is necessary to completely
* quiesce the chip with tg3_netif_stop and tg3_full_lock
* before reconfiguring the device.
*
* indirect_lock: Held when accessing registers indirectly
* with IRQ disabling.
*/
spinlock_t lock;
spinlock_t indirect_lock;
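/* Sketch (assumed from the strategy above; not a line of this patch):
 * the quiesce/reconfigure bracket used by slow-path operations.
 *
 *     tg3_netif_stop(tp);
 *     tg3_full_lock(tp, 1);        // irq_sync: wait out the handler
 *     ... reprogram the chip ...
 *     tg3_full_unlock(tp);
 *     tg3_netif_start(tp);
 */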
......@@ -2155,11 +2165,7 @@ struct tg3 {
#define TG3_FLAG_ENABLE_ASF 0x00000020
#define TG3_FLAG_5701_REG_WRITE_BUG 0x00000040
#define TG3_FLAG_POLL_SERDES 0x00000080
#if defined(CONFIG_X86)
#define TG3_FLAG_MBOX_WRITE_REORDER 0x00000100
#else
#define TG3_FLAG_MBOX_WRITE_REORDER 0 /* disables code too */
#endif
#define TG3_FLAG_PCIX_TARGET_HWBUG 0x00000200
#define TG3_FLAG_WOL_SPEED_100MB 0x00000400
#define TG3_FLAG_WOL_ENABLE 0x00000800
......@@ -2172,6 +2178,7 @@ struct tg3 {
#define TG3_FLAG_PCI_HIGH_SPEED 0x00040000
#define TG3_FLAG_PCI_32BIT 0x00080000
#define TG3_FLAG_SRAM_USE_CONFIG 0x00100000
#define TG3_FLAG_TX_RECOVERY_PENDING 0x00200000
#define TG3_FLAG_SERDES_WOL_CAP 0x00400000
#define TG3_FLAG_JUMBO_RING_ENABLE 0x00800000
#define TG3_FLAG_10_100_ONLY 0x01000000
......
......@@ -1605,11 +1605,11 @@ static void __devexit w840_remove1 (struct pci_dev *pdev)
* - get_stats:
* spin_lock_irq(np->lock), doesn't touch hw if not present
* - hard_start_xmit:
* netif_stop_queue + spin_unlock_wait(&dev->xmit_lock);
* synchronize_irq + netif_tx_disable;
* - tx_timeout:
* netif_device_detach + spin_unlock_wait(&dev->xmit_lock);
* netif_device_detach + netif_tx_disable;
* - set_multicast_list
* netif_device_detach + spin_unlock_wait(&dev->xmit_lock);
* netif_device_detach + netif_tx_disable;
* - interrupt handler
* doesn't touch hw if not present, synchronize_irq waits for
* running instances of the interrupt handler.
......@@ -1635,12 +1635,11 @@ static int w840_suspend (struct pci_dev *pdev, pm_message_t state)
netif_device_detach(dev);
update_csr6(dev, 0);
iowrite32(0, ioaddr + IntrEnable);
netif_stop_queue(dev);
spin_unlock_irq(&np->lock);
spin_unlock_wait(&dev->xmit_lock);
synchronize_irq(dev->irq);
netif_tx_disable(dev);
np->stats.rx_missed_errors += ioread32(ioaddr + RxMissed) & 0xffff;
/* no more hardware accesses behind this line. */
......
......@@ -1899,6 +1899,13 @@ static int velocity_xmit(struct sk_buff *skb, struct net_device *dev)
int pktlen = skb->len;
#ifdef VELOCITY_ZERO_COPY_SUPPORT
if (skb_shinfo(skb)->nr_frags > 6 && __skb_linearize(skb)) {
kfree_skb(skb);
return 0;
}
#endif
spin_lock_irqsave(&vptr->lock, flags);
index = vptr->td_curr[qnum];
......@@ -1914,8 +1921,6 @@ static int velocity_xmit(struct sk_buff *skb, struct net_device *dev)
*/
if (pktlen < ETH_ZLEN) {
/* Cannot occur until ZC support */
if(skb_linearize(skb, GFP_ATOMIC))
return 0;
pktlen = ETH_ZLEN;
memcpy(tdinfo->buf, skb->data, skb->len);
memset(tdinfo->buf + skb->len, 0, ETH_ZLEN - skb->len);
......@@ -1933,7 +1938,6 @@ static int velocity_xmit(struct sk_buff *skb, struct net_device *dev)
int nfrags = skb_shinfo(skb)->nr_frags;
tdinfo->skb = skb;
if (nfrags > 6) {
skb_linearize(skb, GFP_ATOMIC);
memcpy(tdinfo->buf, skb->data, skb->len);
tdinfo->skb_dma[0] = tdinfo->buf_dma;
td_ptr->tdesc0.pktsize =
......
(A run of collapsed file diffs omitted.)
......@@ -147,7 +147,6 @@ void ip_send_reply(struct sock *sk, struct sk_buff *skb, struct ip_reply_arg *ar
struct ipv4_config
{
int log_martians;
int autoconfig;
int no_pmtu_disc;
};
......
(Remaining collapsed file diffs omitted.)