Commit 82268da1, authored by Ingo Molnar

Merge branch 'linus' into percpu-cpumask-x86-for-linus-2

Conflicts:
	arch/sparc/kernel/time_64.c
	drivers/gpu/drm/drm_proc.c

Manual merge to resolve build warning due to phys_addr_t type change
on x86:

	drivers/gpu/drm/drm_info.c
Signed-off-by: Ingo Molnar <mingo@elte.hu>
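For context on the warning class mentioned above, here is a minimal, hypothetical illustration of the usual pattern (this is not the actual drm_info.c change): phys_addr_t is 32 or 64 bits wide depending on architecture and configuration, so format strings that hard-code %x or %lx start to warn when the type changes, and casting to unsigned long long with %llx works either way.

/*
 * Editor's sketch, not part of the commit.
 */
#include <linux/kernel.h>
#include <linux/types.h>

static void print_phys_addr(const char *name, phys_addr_t addr)
{
	/* Cast so the format is correct whether phys_addr_t is 32 or 64 bit. */
	printk(KERN_INFO "%s: 0x%llx\n", name, (unsigned long long)addr);
}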
......@@ -227,6 +227,12 @@ usage should require reading the full document.
!Pinclude/net/mac80211.h Powersave support
</chapter>
<chapter id="beacon-filter">
<title>Beacon filter support</title>
!Pinclude/net/mac80211.h Beacon filter support
!Finclude/net/mac80211.h ieee80211_beacon_loss
</chapter>
<chapter id="qos">
<title>Multiple queues and QoS support</title>
<para>TBD</para>
......
......@@ -6,20 +6,47 @@ be removed from this file.
---------------------------
What: old static regulatory information and ieee80211_regdom module parameter
When: 2.6.29
What: The ieee80211_regdom module parameter
When: March 2010 / desktop catchup
Why: This was inherited by the CONFIG_WIRELESS_OLD_REGULATORY code,
and currently serves as an option for users to define an
ISO / IEC 3166 alpha2 code for the country they are currently
present in. Although there are userspace API replacements for this
through nl80211, distributions haven't yet caught up with implementing
decent alternatives through standard GUIs. Although available as an
option through iw or wpa_supplicant, it's just a matter of time before
distributions pick up good GUI options for this. The ideal solution
would actually consist of intelligent designs which would do this for
the user automatically even when travelling through different countries.
Until then we leave this module parameter as a compromise.
When userspace improves with reasonable, widely available alternatives for
this, we will no longer need this module parameter. This entry hopes that
by the super-futuristic-looking date of "March 2010" we will have
such replacements widely available.
Who: Luis R. Rodriguez <lrodriguez@atheros.com>
---------------------------
What: CONFIG_WIRELESS_OLD_REGULATORY - old static regulatory information
When: March 2010 / desktop catchup
Why: The old regulatory infrastructure has been replaced with a new one
which does not require statically defined regulatory domains. We do
not want to keep static regulatory domains in the kernel due to the
dynamic nature of regulatory law and localization. We kept around
the old static definitions for the regulatory domains of:
* US
* JP
* EU
and used by default the US when CONFIG_WIRELESS_OLD_REGULATORY was
set. We also kept around the ieee80211_regdom module parameter in case
some applications were relying on it. Changing regulatory domains
can now be done instead by using nl80211, as is done with iw.
set. We will remove this option once the standard Linux desktop catches
up with the new userspace APIs we have implemented.
Who: Luis R. Rodriguez <lrodriguez@atheros.com>
---------------------------
......
......@@ -765,6 +765,14 @@ L: linux-wireless@vger.kernel.org
L: ath9k-devel@lists.ath9k.org
S: Supported
ATHEROS AR9170 WIRELESS DRIVER
P: Christian Lamparter
M: chunkeey@web.de
L: linux-wireless@vger.kernel.org
W: http://wireless.kernel.org/en/users/Drivers/ar9170
S: Maintained
F: drivers/net/wireless/ar9170/
ATI_REMOTE2 DRIVER
P: Ville Syrjala
M: syrjala@sci.fi
......@@ -3602,7 +3610,7 @@ S: Maintained
RALINK RT2X00 WIRELESS LAN DRIVER
P: rt2x00 project
L: linux-wireless@vger.kernel.org
L: rt2400-devel@lists.sourceforge.net
L: users@rt2x00.serialmonkey.com
W: http://rt2x00.serialmonkey.com/
S: Maintained
T: git kernel.org:/pub/scm/linux/kernel/git/ivd/rt2x00.git
......
......@@ -903,8 +903,9 @@ sys_alpha_pipe:
stq $26, 0($sp)
.prologue 0
mov $31, $17
lda $16, 8($sp)
jsr $26, do_pipe
jsr $26, do_pipe_flags
ldq $26, 0($sp)
bne $0, 1f
......
......@@ -46,8 +46,6 @@
#include <asm/hwrpb.h>
#include <asm/processor.h>
extern int do_pipe(int *);
/*
* Brk needs to return an error. Still support Linux's brk(0) query idiom,
* which OSF programs just shouldn't be doing. We're still not quite
......
......@@ -240,7 +240,7 @@ ia32_syscall_table:
data8 sys_ni_syscall
data8 sys_umask /* 60 */
data8 sys_chroot
data8 sys_ustat
data8 compat_sys_ustat
data8 sys_dup2
data8 sys_getppid
data8 sys_getpgrp /* 65 */
......
......@@ -2196,7 +2196,7 @@ pfmfs_delete_dentry(struct dentry *dentry)
return 1;
}
static struct dentry_operations pfmfs_dentry_operations = {
static const struct dentry_operations pfmfs_dentry_operations = {
.d_delete = pfmfs_delete_dentry,
};
......
......@@ -355,40 +355,6 @@ SYSCALL_DEFINE1(32_personality, unsigned long, personality)
return ret;
}
/* ustat compatibility */
struct ustat32 {
compat_daddr_t f_tfree;
compat_ino_t f_tinode;
char f_fname[6];
char f_fpack[6];
};
extern asmlinkage long sys_ustat(dev_t dev, struct ustat __user * ubuf);
SYSCALL_DEFINE2(32_ustat, dev_t, dev, struct ustat32 __user *, ubuf32)
{
int err;
struct ustat tmp;
struct ustat32 tmp32;
mm_segment_t old_fs = get_fs();
set_fs(KERNEL_DS);
err = sys_ustat(dev, (struct ustat __user *)&tmp);
set_fs(old_fs);
if (err)
goto out;
memset(&tmp32, 0, sizeof(struct ustat32));
tmp32.f_tfree = tmp.f_tfree;
tmp32.f_tinode = tmp.f_tinode;
err = copy_to_user(ubuf32, &tmp32, sizeof(struct ustat32)) ? -EFAULT : 0;
out:
return err;
}
SYSCALL_DEFINE4(32_sendfile, long, out_fd, long, in_fd,
compat_off_t __user *, offset, s32, count)
{
......
......@@ -253,7 +253,7 @@ EXPORT(sysn32_call_table)
PTR compat_sys_utime /* 6130 */
PTR sys_mknod
PTR sys_32_personality
PTR sys_32_ustat
PTR compat_sys_ustat
PTR compat_sys_statfs
PTR compat_sys_fstatfs /* 6135 */
PTR sys_sysfs
......
......@@ -265,7 +265,7 @@ sys_call_table:
PTR sys_olduname
PTR sys_umask /* 4060 */
PTR sys_chroot
PTR sys_32_ustat
PTR compat_sys_ustat
PTR sys_dup2
PTR sys_getppid
PTR sys_getpgrp /* 4065 */
......
......@@ -130,7 +130,7 @@
ENTRY_OURS(newuname)
ENTRY_SAME(umask) /* 60 */
ENTRY_SAME(chroot)
ENTRY_SAME(ustat)
ENTRY_COMP(ustat)
ENTRY_SAME(dup2)
ENTRY_SAME(getppid)
ENTRY_SAME(getpgrp) /* 65 */
......
......@@ -65,7 +65,7 @@ SYSCALL(ni_syscall)
SYSX(sys_ni_syscall,sys_olduname, sys_olduname)
COMPAT_SYS_SPU(umask)
SYSCALL_SPU(chroot)
SYSCALL(ustat)
COMPAT_SYS(ustat)
SYSCALL_SPU(dup2)
SYSCALL_SPU(getppid)
SYSCALL_SPU(getpgrp)
......
......@@ -252,7 +252,7 @@ sys32_chroot_wrapper:
sys32_ustat_wrapper:
llgfr %r2,%r2 # dev_t
llgtr %r3,%r3 # struct ustat *
jg sys_ustat
jg compat_sys_ustat
.globl sys32_dup2_wrapper
sys32_dup2_wrapper:
......
......@@ -1031,7 +1031,7 @@ void smp_fetch_global_regs(void)
* If the address space is non-shared (ie. mm->count == 1) we avoid
* cross calls when we want to flush the currently running process's
* tlb state. This is done by clearing all cpu bits except the current
* processor's in current->active_mm->cpu_vm_mask and performing the
* processor's in current->mm->cpu_vm_mask and performing the
* flush locally only. This will force any subsequent cpus which run
* this task to flush the context from the local tlb if the process
* migrates to another cpu (again).
......@@ -1074,7 +1074,7 @@ void smp_flush_tlb_pending(struct mm_struct *mm, unsigned long nr, unsigned long
u32 ctx = CTX_HWBITS(mm->context);
int cpu = get_cpu();
if (mm == current->active_mm && atomic_read(&mm->mm_users) == 1)
if (mm == current->mm && atomic_read(&mm->mm_users) == 1)
mm->cpu_vm_mask = cpumask_of_cpu(cpu);
else
smp_cross_call_masked(&xcall_flush_tlb_pending,
......
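As a reading aid, here is a minimal editor's sketch of the single-user optimisation the comment above describes; flush_tlb_locally() and cross_call_flush() are placeholder names, and only the cpu_vm_mask / mm_users handling mirrors the code in the hunk above.

#include <linux/sched.h>
#include <linux/cpumask.h>

/* Placeholder stand-ins for the real local flush and cross-call paths. */
static inline void flush_tlb_locally(struct mm_struct *mm) { }
static inline void cross_call_flush(struct mm_struct *mm) { }

static void flush_tlb_pending_sketch(struct mm_struct *mm, int cpu)
{
	if (mm == current->mm && atomic_read(&mm->mm_users) == 1) {
		/*
		 * Only the current task uses this mm: shrink the mask to
		 * this CPU so a later migration forces a local flush, and
		 * skip the expensive cross call entirely.
		 */
		mm->cpu_vm_mask = cpumask_of_cpu(cpu);
		flush_tlb_locally(mm);
	} else {
		cross_call_flush(mm);	/* flush on every CPU in the mask */
	}
}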
......@@ -51,7 +51,7 @@ sys_call_table32:
/*150*/ .word sys_nis_syscall, sys_inotify_init, sys_inotify_add_watch, sys_poll, sys_getdents64
.word compat_sys_fcntl64, sys_inotify_rm_watch, compat_sys_statfs, compat_sys_fstatfs, sys_oldumount
/*160*/ .word compat_sys_sched_setaffinity, compat_sys_sched_getaffinity, sys32_getdomainname, sys32_setdomainname, sys_nis_syscall
.word sys_quotactl, sys_set_tid_address, compat_sys_mount, sys_ustat, sys32_setxattr
.word sys_quotactl, sys_set_tid_address, compat_sys_mount, compat_sys_ustat, sys32_setxattr
/*170*/ .word sys32_lsetxattr, sys32_fsetxattr, sys_getxattr, sys_lgetxattr, compat_sys_getdents
.word sys_setsid, sys_fchdir, sys32_fgetxattr, sys_listxattr, sys_llistxattr
/*180*/ .word sys32_flistxattr, sys_removexattr, sys_lremovexattr, compat_sys_sigpending, sys_ni_syscall
......
......@@ -36,10 +36,10 @@
#include <linux/clocksource.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
#include <linux/irq.h>
#include <asm/oplib.h>
#include <asm/timer.h>
#include <asm/irq.h>
#include <asm/io.h>
#include <asm/prom.h>
#include <asm/starfire.h>
......
......@@ -86,7 +86,7 @@ static int uml_net_rx(struct net_device *dev)
drop_skb->dev = dev;
/* Read a packet into drop_skb and don't do anything with it. */
(*lp->read)(lp->fd, drop_skb, lp);
lp->stats.rx_dropped++;
dev->stats.rx_dropped++;
return 0;
}
......@@ -99,8 +99,8 @@ static int uml_net_rx(struct net_device *dev)
skb_trim(skb, pkt_len);
skb->protocol = (*lp->protocol)(skb);
lp->stats.rx_bytes += skb->len;
lp->stats.rx_packets++;
dev->stats.rx_bytes += skb->len;
dev->stats.rx_packets++;
netif_rx(skb);
return pkt_len;
}
......@@ -224,8 +224,8 @@ static int uml_net_start_xmit(struct sk_buff *skb, struct net_device *dev)
len = (*lp->write)(lp->fd, skb, lp);
if (len == skb->len) {
lp->stats.tx_packets++;
lp->stats.tx_bytes += skb->len;
dev->stats.tx_packets++;
dev->stats.tx_bytes += skb->len;
dev->trans_start = jiffies;
netif_start_queue(dev);
......@@ -234,7 +234,7 @@ static int uml_net_start_xmit(struct sk_buff *skb, struct net_device *dev)
}
else if (len == 0) {
netif_start_queue(dev);
lp->stats.tx_dropped++;
dev->stats.tx_dropped++;
}
else {
netif_start_queue(dev);
......@@ -248,12 +248,6 @@ static int uml_net_start_xmit(struct sk_buff *skb, struct net_device *dev)
return 0;
}
static struct net_device_stats *uml_net_get_stats(struct net_device *dev)
{
struct uml_net_private *lp = netdev_priv(dev);
return &lp->stats;
}
static void uml_net_set_multicast_list(struct net_device *dev)
{
return;
......@@ -377,6 +371,18 @@ static void net_device_release(struct device *dev)
free_netdev(netdev);
}
static const struct net_device_ops uml_netdev_ops = {
.ndo_open = uml_net_open,
.ndo_stop = uml_net_close,
.ndo_start_xmit = uml_net_start_xmit,
.ndo_set_multicast_list = uml_net_set_multicast_list,
.ndo_tx_timeout = uml_net_tx_timeout,
.ndo_set_mac_address = uml_net_set_mac,
.ndo_change_mtu = uml_net_change_mtu,
.ndo_set_mac_address = eth_mac_addr,
.ndo_validate_addr = eth_validate_addr,
};
/*
* Ensures that platform_driver_register is called only once by
* eth_configure. Will be set in an initcall.
......@@ -473,14 +479,7 @@ static void eth_configure(int n, void *init, char *mac,
set_ether_mac(dev, device->mac);
dev->mtu = transport->user->mtu;
dev->open = uml_net_open;
dev->hard_start_xmit = uml_net_start_xmit;
dev->stop = uml_net_close;
dev->get_stats = uml_net_get_stats;
dev->set_multicast_list = uml_net_set_multicast_list;
dev->tx_timeout = uml_net_tx_timeout;
dev->set_mac_address = uml_net_set_mac;
dev->change_mtu = uml_net_change_mtu;
dev->netdev_ops = &uml_netdev_ops;
dev->ethtool_ops = &uml_net_ethtool_ops;
dev->watchdog_timeo = (HZ >> 1);
dev->irq = UM_ETH_IRQ;
......
......@@ -26,7 +26,7 @@ struct uml_net_private {
spinlock_t lock;
struct net_device *dev;
struct timer_list tl;
struct net_device_stats stats;
struct work_struct work;
int fd;
unsigned char mac[ETH_ALEN];
......
......@@ -557,7 +557,7 @@ ia32_sys_call_table:
.quad sys32_olduname
.quad sys_umask /* 60 */
.quad sys_chroot
.quad sys32_ustat
.quad compat_sys_ustat
.quad sys_dup2
.quad sys_getppid
.quad sys_getpgrp /* 65 */
......
......@@ -638,28 +638,6 @@ long sys32_uname(struct old_utsname __user *name)
return err ? -EFAULT : 0;
}
long sys32_ustat(unsigned dev, struct ustat32 __user *u32p)
{
struct ustat u;
mm_segment_t seg;
int ret;
seg = get_fs();
set_fs(KERNEL_DS);
ret = sys_ustat(dev, (struct ustat __user *)&u);
set_fs(seg);
if (ret < 0)
return ret;
if (!access_ok(VERIFY_WRITE, u32p, sizeof(struct ustat32)) ||
__put_user((__u32) u.f_tfree, &u32p->f_tfree) ||
__put_user((__u32) u.f_tinode, &u32p->f_tfree) ||
__copy_to_user(&u32p->f_fname, u.f_fname, sizeof(u.f_fname)) ||
__copy_to_user(&u32p->f_fpack, u.f_fpack, sizeof(u.f_fpack)))
ret = -EFAULT;
return ret;
}
asmlinkage long sys32_execve(char __user *name, compat_uptr_t __user *argv,
compat_uptr_t __user *envp, struct pt_regs *regs)
{
......
......@@ -129,13 +129,6 @@ typedef struct compat_siginfo {
} _sifields;
} compat_siginfo_t;
struct ustat32 {
__u32 f_tfree;
compat_ino_t f_tinode;
char f_fname[6];
char f_fpack[6];
};
#define IA32_STACK_TOP IA32_PAGE_OFFSET
#ifdef __KERNEL__
......
......@@ -70,8 +70,6 @@ struct old_utsname;
asmlinkage long sys32_olduname(struct oldold_utsname __user *);
long sys32_uname(struct old_utsname __user *);
long sys32_ustat(unsigned, struct ustat32 __user *);
asmlinkage long sys32_execve(char __user *, compat_uptr_t __user *,
compat_uptr_t __user *, struct pt_regs *);
asmlinkage long sys32_clone(unsigned int, unsigned int, struct pt_regs *);
......
......@@ -26,6 +26,10 @@
#define PCI_DEVICE_ID_INTEL_82965GME_IG 0x2A12
#define PCI_DEVICE_ID_INTEL_82945GME_HB 0x27AC
#define PCI_DEVICE_ID_INTEL_82945GME_IG 0x27AE
#define PCI_DEVICE_ID_INTEL_IGDGM_HB 0xA010
#define PCI_DEVICE_ID_INTEL_IGDGM_IG 0xA011
#define PCI_DEVICE_ID_INTEL_IGDG_HB 0xA000
#define PCI_DEVICE_ID_INTEL_IGDG_IG 0xA001
#define PCI_DEVICE_ID_INTEL_G33_HB 0x29C0
#define PCI_DEVICE_ID_INTEL_G33_IG 0x29C2
#define PCI_DEVICE_ID_INTEL_Q35_HB 0x29B0
......@@ -60,7 +64,12 @@
#define IS_G33 (agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_G33_HB || \
agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_Q35_HB || \
agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_Q33_HB)
agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_Q33_HB || \
agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_IGDGM_HB || \
agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_IGDG_HB)
#define IS_IGD (agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_IGDGM_HB || \
agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_IGDG_HB)
#define IS_G4X (agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_IGD_E_HB || \
agp_bridge->dev->device == PCI_DEVICE_ID_INTEL_Q45_HB || \
......@@ -510,7 +519,7 @@ static void intel_i830_init_gtt_entries(void)
size = 512;
}
size += 4; /* add in BIOS popup space */
} else if (IS_G33) {
} else if (IS_G33 && !IS_IGD) {
/* G33's GTT size defined in gmch_ctrl */
switch (gmch_ctrl & G33_PGETBL_SIZE_MASK) {
case G33_PGETBL_SIZE_1M:
......@@ -526,7 +535,7 @@ static void intel_i830_init_gtt_entries(void)
size = 512;
}
size += 4;
} else if (IS_G4X) {
} else if (IS_G4X || IS_IGD) {
/* On 4 series hardware, GTT stolen is separate from graphics
* stolen, ignore it in stolen gtt entries counting. However,
* 4KB of the stolen memory doesn't get mapped to the GTT.
......@@ -2161,6 +2170,10 @@ static const struct intel_driver_description {
NULL, &intel_g33_driver },
{ PCI_DEVICE_ID_INTEL_Q33_HB, PCI_DEVICE_ID_INTEL_Q33_IG, 0, "Q33",
NULL, &intel_g33_driver },
{ PCI_DEVICE_ID_INTEL_IGDGM_HB, PCI_DEVICE_ID_INTEL_IGDGM_IG, 0, "IGD",
NULL, &intel_g33_driver },
{ PCI_DEVICE_ID_INTEL_IGDG_HB, PCI_DEVICE_ID_INTEL_IGDG_IG, 0, "IGD",
NULL, &intel_g33_driver },
{ PCI_DEVICE_ID_INTEL_GM45_HB, PCI_DEVICE_ID_INTEL_GM45_IG, 0,
"Mobile Intel® GM45 Express", NULL, &intel_i965_driver },
{ PCI_DEVICE_ID_INTEL_IGD_E_HB, PCI_DEVICE_ID_INTEL_IGD_E_IG, 0,
......@@ -2355,6 +2368,8 @@ static struct pci_device_id agp_intel_pci_table[] = {
ID(PCI_DEVICE_ID_INTEL_82945G_HB),
ID(PCI_DEVICE_ID_INTEL_82945GM_HB),
ID(PCI_DEVICE_ID_INTEL_82945GME_HB),
ID(PCI_DEVICE_ID_INTEL_IGDGM_HB),
ID(PCI_DEVICE_ID_INTEL_IGDG_HB),
ID(PCI_DEVICE_ID_INTEL_82946GZ_HB),
ID(PCI_DEVICE_ID_INTEL_82G35_HB),
ID(PCI_DEVICE_ID_INTEL_82965Q_HB),
......
......@@ -63,8 +63,7 @@ static int descriptor_count;
#define BIB_CMC ((1) << 30)
#define BIB_IMC ((1) << 31)
static u32 *
generate_config_rom(struct fw_card *card, size_t *config_rom_length)
static u32 *generate_config_rom(struct fw_card *card, size_t *config_rom_length)
{
struct fw_descriptor *desc;
static u32 config_rom[256];
......@@ -128,8 +127,7 @@ generate_config_rom(struct fw_card *card, size_t *config_rom_length)
return config_rom;
}
static void
update_config_roms(void)
static void update_config_roms(void)
{
struct fw_card *card;
u32 *config_rom;
......@@ -141,8 +139,7 @@ update_config_roms(void)
}
}
int
fw_core_add_descriptor(struct fw_descriptor *desc)
int fw_core_add_descriptor(struct fw_descriptor *desc)
{
size_t i;
......@@ -171,8 +168,7 @@ fw_core_add_descriptor(struct fw_descriptor *desc)
return 0;
}
void
fw_core_remove_descriptor(struct fw_descriptor *desc)
void fw_core_remove_descriptor(struct fw_descriptor *desc)
{
mutex_lock(&card_mutex);
......@@ -185,12 +181,30 @@ fw_core_remove_descriptor(struct fw_descriptor *desc)
mutex_unlock(&card_mutex);
}
static int set_broadcast_channel(struct device *dev, void *data)
{
fw_device_set_broadcast_channel(fw_device(dev), (long)data);
return 0;
}
static void allocate_broadcast_channel(struct fw_card *card, int generation)
{
int channel, bandwidth = 0;
fw_iso_resource_manage(card, generation, 1ULL << 31,
&channel, &bandwidth, true);
if (channel == 31) {
card->broadcast_channel_allocated = true;
device_for_each_child(card->device, (void *)(long)generation,
set_broadcast_channel);
}
}
static const char gap_count_table[] = {
63, 5, 7, 8, 10, 13, 16, 18, 21, 24, 26, 29, 32, 35, 37, 40
};
void
fw_schedule_bm_work(struct fw_card *card, unsigned long delay)
void fw_schedule_bm_work(struct fw_card *card, unsigned long delay)
{
int scheduled;
......@@ -200,37 +214,38 @@ fw_schedule_bm_work(struct fw_card *card, unsigned long delay)
fw_card_put(card);
}
static void
fw_card_bm_work(struct work_struct *work)
static void fw_card_bm_work(struct work_struct *work)
{
struct fw_card *card = container_of(work, struct fw_card, work.work);
struct fw_device *root_device;
struct fw_node *root_node, *local_node;
struct fw_node *root_node;
unsigned long flags;
int root_id, new_root_id, irm_id, gap_count, generation, grace, rcode;
int root_id, new_root_id, irm_id, local_id;
int gap_count, generation, grace, rcode;
bool do_reset = false;
bool root_device_is_running;
bool root_device_is_cmc;
__be32 lock_data[2];
spin_lock_irqsave(&card->lock, flags);
local_node = card->local_node;
root_node = card->root_node;
if (local_node == NULL) {
if (card->local_node == NULL) {
spin_unlock_irqrestore(&card->lock, flags);
goto out_put_card;
}
fw_node_get(local_node);
fw_node_get(root_node);
generation = card->generation;
root_node = card->root_node;
fw_node_get(root_node);
root_device = root_node->data;
root_device_is_running = root_device &&
atomic_read(&root_device->state) == FW_DEVICE_RUNNING;
root_device_is_cmc = root_device && root_device->cmc;
root_id = root_node->node_id;
grace = time_after(jiffies, card->reset_jiffies + DIV_ROUND_UP(HZ, 10));
root_id = root_node->node_id;
irm_id = card->irm_node->node_id;
local_id = card->local_node->node_id;
grace = time_after(jiffies, card->reset_jiffies + DIV_ROUND_UP(HZ, 8));
if (is_next_generation(generation, card->bm_generation) ||
(card->bm_generation != generation && grace)) {
......@@ -246,16 +261,15 @@ fw_card_bm_work(struct work_struct *work)
* next generation.
*/
irm_id = card->irm_node->node_id;
if (!card->irm_node->link_on) {
new_root_id = local_node->node_id;
new_root_id = local_id;
fw_notify("IRM has link off, making local node (%02x) root.\n",
new_root_id);
goto pick_me;
}
lock_data[0] = cpu_to_be32(0x3f);
lock_data[1] = cpu_to_be32(local_node->node_id);
lock_data[1] = cpu_to_be32(local_id);
spin_unlock_irqrestore(&card->lock, flags);
......@@ -269,9 +283,14 @@ fw_card_bm_work(struct work_struct *work)
goto out;
if (rcode == RCODE_COMPLETE &&
lock_data[0] != cpu_to_be32(0x3f))
/* Somebody else is BM, let them do the work. */
lock_data[0] != cpu_to_be32(0x3f)) {
/* Somebody else is BM. Only act as IRM. */
if (local_id == irm_id)
allocate_broadcast_channel(card, generation);
goto out;
}
spin_lock_irqsave(&card->lock, flags);
......@@ -282,19 +301,18 @@ fw_card_bm_work(struct work_struct *work)
* do a bus reset and pick the local node as
* root, and thus, IRM.
*/
new_root_id = local_node->node_id;
new_root_id = local_id;
fw_notify("BM lock failed, making local node (%02x) root.\n",
new_root_id);
goto pick_me;
}
} else if (card->bm_generation != generation) {
/*
* OK, we weren't BM in the last generation, and it's
* less than 100ms since last bus reset. Reschedule
* this task 100ms from now.
* We weren't BM in the last generation, and the last
* bus reset is less than 125ms ago. Reschedule this job.
*/
spin_unlock_irqrestore(&card->lock, flags);
fw_schedule_bm_work(card, DIV_ROUND_UP(HZ, 10));
fw_schedule_bm_work(card, DIV_ROUND_UP(HZ, 8));
goto out;
}
......@@ -310,7 +328,7 @@ fw_card_bm_work(struct work_struct *work)
* Either link_on is false, or we failed to read the
* config rom. In either case, pick another root.
*/
new_root_id = local_node->node_id;
new_root_id = local_id;
} else if (!root_device_is_running) {
/*
* If we haven't probed this device yet, bail out now
......@@ -332,7 +350,7 @@ fw_card_bm_work(struct work_struct *work)
* successfully read the config rom, but it's not
* cycle master capable.
*/
new_root_id = local_node->node_id;
new_root_id = local_id;
}
pick_me:
......@@ -363,25 +381,28 @@ fw_card_bm_work(struct work_struct *work)
card->index, new_root_id, gap_count);
fw_send_phy_config(card, new_root_id, generation, gap_count);
fw_core_initiate_bus_reset(card, 1);
/* Will allocate broadcast channel after the reset. */
} else {
if (local_id == irm_id)
allocate_broadcast_channel(card, generation);
}
out:
fw_node_put(root_node);
fw_node_put(local_node);
out_put_card:
fw_card_put(card);
}
static void
flush_timer_callback(unsigned long data)
static void flush_timer_callback(unsigned long data)
{
struct fw_card *card = (struct fw_card *)data;
fw_flush_transactions(card);
}
void
fw_card_initialize(struct fw_card *card, const struct fw_card_driver *driver,
struct device *device)
void fw_card_initialize(struct fw_card *card,
const struct fw_card_driver *driver,
struct device *device)
{
static atomic_t index = ATOMIC_INIT(-1);
......@@ -406,13 +427,12 @@ fw_card_initialize(struct fw_card *card, const struct fw_card_driver *driver,
}
EXPORT_SYMBOL(fw_card_initialize);
int
fw_card_add(struct fw_card *card,
u32 max_receive, u32 link_speed, u64 guid)
int fw_card_add(struct fw_card *card,
u32 max_receive, u32 link_speed, u64 guid)
{
u32 *config_rom;
size_t length;
int err;
int ret;
card->max_receive = max_receive;
card->link_speed = link_speed;
......@@ -423,13 +443,14 @@ fw_card_add(struct fw_card *card,
list_add_tail(&card->link, &card_list);
mutex_unlock(&card_mutex);
err = card->driver->enable(card, config_rom, length);
if (err < 0) {
ret = card->driver->enable(card, config_rom, length);
if (ret < 0) {
mutex_lock(&card_mutex);
list_del(&card->link);
mutex_unlock(&card_mutex);
}
return err;
return ret;
}
EXPORT_SYMBOL(fw_card_add);
......@@ -442,23 +463,20 @@ EXPORT_SYMBOL(fw_card_add);
* dummy driver just fails all IO.
*/
static int
dummy_enable(struct fw_card *card, u32 *config_rom, size_t length)
static int dummy_enable(struct fw_card *card, u32 *config_rom, size_t length)
{
BUG();
return -1;
}
static int
dummy_update_phy_reg(struct fw_card *card, int address,
int clear_bits, int set_bits)
static int dummy_update_phy_reg(struct fw_card *card, int address,
int clear_bits, int set_bits)
{
return -ENODEV;
}
static int
dummy_set_config_rom(struct fw_card *card,
u32 *config_rom, size_t length)
static int dummy_set_config_rom(struct fw_card *card,
u32 *config_rom, size_t length)
{
/*
* We take the card out of card_list before setting the dummy
......@@ -468,27 +486,23 @@ dummy_set_config_rom(struct fw_card *card,
return -1;
}
static void
dummy_send_request(struct fw_card *card, struct fw_packet *packet)
static void dummy_send_request(struct fw_card *card, struct fw_packet *packet)
{
packet->callback(packet, card, -ENODEV);
}
static void
dummy_send_response(struct fw_card *card, struct fw_packet *packet)
static void dummy_send_response(struct fw_card *card, struct fw_packet *packet)
{
packet->callback(packet, card, -ENODEV);
}
static int
dummy_cancel_packet(struct fw_card *card, struct fw_packet *packet)
static int dummy_cancel_packet(struct fw_card *card, struct fw_packet *packet)
{
return -ENOENT;
}
static int
dummy_enable_phys_dma(struct fw_card *card,
int node_id, int generation)
static int dummy_enable_phys_dma(struct fw_card *card,
int node_id, int generation)
{
return -ENODEV;
}
......@@ -503,16 +517,14 @@ static struct fw_card_driver dummy_driver = {
.enable_phys_dma = dummy_enable_phys_dma,
};
void
fw_card_release(struct kref *kref)
void fw_card_release(struct kref *kref)
{
struct fw_card *card = container_of(kref, struct fw_card, kref);
complete(&card->done);
}
void
fw_core_remove_card(struct fw_card *card)
void fw_core_remove_card(struct fw_card *card)
{
card->driver->update_phy_reg(card, 4,
PHY_LINK_ACTIVE | PHY_CONTENDER, 0);
......@@ -536,8 +548,7 @@ fw_core_remove_card(struct fw_card *card)
}
EXPORT_SYMBOL(fw_core_remove_card);
int
fw_core_initiate_bus_reset(struct fw_card *card, int short_reset)
int fw_core_initiate_bus_reset(struct fw_card *card, int short_reset)
{
int reg = short_reset ? 5 : 1;
int bit = short_reset ? PHY_BUS_SHORT_RESET : PHY_BUS_RESET;
......
This diff is collapsed.
......@@ -18,22 +18,26 @@
* Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/
#include <linux/module.h>
#include <linux/wait.h>
#include <linux/errno.h>
#include <linux/kthread.h>
#include <linux/device.h>
#include <linux/ctype.h>
#include <linux/delay.h>
#include <linux/device.h>
#include <linux/errno.h>
#include <linux/idr.h>
#include <linux/jiffies.h>
#include <linux/string.h>
#include <linux/kobject.h>
#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/rwsem.h>
#include <linux/semaphore.h>
#include <linux/spinlock.h>
#include <linux/string.h>
#include <linux/workqueue.h>
#include <asm/system.h>
#include <linux/ctype.h>
#include "fw-transaction.h"
#include "fw-topology.h"
#include "fw-device.h"
#include "fw-topology.h"
#include "fw-transaction.h"
void fw_csr_iterator_init(struct fw_csr_iterator *ci, u32 * p)
{
......@@ -132,8 +136,7 @@ static int get_modalias(struct fw_unit *unit, char *buffer, size_t buffer_size)
vendor, model, specifier_id, version);
}
static int
fw_unit_uevent(struct device *dev, struct kobj_uevent_env *env)
static int fw_unit_uevent(struct device *dev, struct kobj_uevent_env *env)
{
struct fw_unit *unit = fw_unit(dev);
char modalias[64];
......@@ -152,27 +155,6 @@ struct bus_type fw_bus_type = {
};
EXPORT_SYMBOL(fw_bus_type);
static void fw_device_release(struct device *dev)
{
struct fw_device *device = fw_device(dev);
struct fw_card *card = device->card;
unsigned long flags;
/*
* Take the card lock so we don't set this to NULL while a
* FW_NODE_UPDATED callback is being handled or while the
* bus manager work looks at this node.
*/
spin_lock_irqsave(&card->lock, flags);
device->node->data = NULL;
spin_unlock_irqrestore(&card->lock, flags);
fw_node_put(device->node);
kfree(device->config_rom);
kfree(device);
fw_card_put(card);
}
int fw_device_enable_phys_dma(struct fw_device *device)
{
int generation = device->generation;
......@@ -191,8 +173,8 @@ struct config_rom_attribute {
u32 key;
};
static ssize_t
show_immediate(struct device *dev, struct device_attribute *dattr, char *buf)
static ssize_t show_immediate(struct device *dev,
struct device_attribute *dattr, char *buf)
{
struct config_rom_attribute *attr =
container_of(dattr, struct config_rom_attribute, attr);
......@@ -223,8 +205,8 @@ show_immediate(struct device *dev, struct device_attribute *dattr, char *buf)
#define IMMEDIATE_ATTR(name, key) \
{ __ATTR(name, S_IRUGO, show_immediate, NULL), key }
static ssize_t
show_text_leaf(struct device *dev, struct device_attribute *dattr, char *buf)
static ssize_t show_text_leaf(struct device *dev,
struct device_attribute *dattr, char *buf)
{
struct config_rom_attribute *attr =
container_of(dattr, struct config_rom_attribute, attr);
......@@ -293,10 +275,9 @@ static struct config_rom_attribute config_rom_attributes[] = {
TEXT_LEAF_ATTR(hardware_version_name, CSR_HARDWARE_VERSION),
};
static void
init_fw_attribute_group(struct device *dev,
struct device_attribute *attrs,
struct fw_attribute_group *group)
static void init_fw_attribute_group(struct device *dev,
struct device_attribute *attrs,
struct fw_attribute_group *group)
{
struct device_attribute *attr;
int i, j;
......@@ -319,9 +300,8 @@ init_fw_attribute_group(struct device *dev,
dev->groups = group->groups;
}
static ssize_t
modalias_show(struct device *dev,
struct device_attribute *attr, char *buf)
static ssize_t modalias_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct fw_unit *unit = fw_unit(dev);
int length;
......@@ -332,9 +312,8 @@ modalias_show(struct device *dev,
return length + 1;
}
static ssize_t
rom_index_show(struct device *dev,
struct device_attribute *attr, char *buf)
static ssize_t rom_index_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct fw_device *device = fw_device(dev->parent);
struct fw_unit *unit = fw_unit(dev);
......@@ -349,8 +328,8 @@ static struct device_attribute fw_unit_attributes[] = {
__ATTR_NULL,
};
static ssize_t
config_rom_show(struct device *dev, struct device_attribute *attr, char *buf)
static ssize_t config_rom_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct fw_device *device = fw_device(dev);
size_t length;
......@@ -363,8 +342,8 @@ config_rom_show(struct device *dev, struct device_attribute *attr, char *buf)
return length;
}
static ssize_t
guid_show(struct device *dev, struct device_attribute *attr, char *buf)
static ssize_t guid_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct fw_device *device = fw_device(dev);
int ret;
......@@ -383,8 +362,8 @@ static struct device_attribute fw_device_attributes[] = {
__ATTR_NULL,
};
static int
read_rom(struct fw_device *device, int generation, int index, u32 *data)
static int read_rom(struct fw_device *device,
int generation, int index, u32 *data)
{
int rcode;
......@@ -539,7 +518,7 @@ static int read_bus_info_block(struct fw_device *device, int generation)
kfree(old_rom);
ret = 0;
device->cmc = rom[2] & 1 << 30;
device->cmc = rom[2] >> 30 & 1;
out:
kfree(rom);
......@@ -679,11 +658,53 @@ static void fw_device_shutdown(struct work_struct *work)
fw_device_put(device);
}
static void fw_device_release(struct device *dev)
{
struct fw_device *device = fw_device(dev);
struct fw_card *card = device->card;
unsigned long flags;
/*
* Take the card lock so we don't set this to NULL while a
* FW_NODE_UPDATED callback is being handled or while the
* bus manager work looks at this node.
*/
spin_lock_irqsave(&card->lock, flags);
device->node->data = NULL;
spin_unlock_irqrestore(&card->lock, flags);
fw_node_put(device->node);
kfree(device->config_rom);
kfree(device);
fw_card_put(card);
}
static struct device_type fw_device_type = {
.release = fw_device_release,
.release = fw_device_release,
};
static void fw_device_update(struct work_struct *work);
static int update_unit(struct device *dev, void *data)
{
struct fw_unit *unit = fw_unit(dev);
struct fw_driver *driver = (struct fw_driver *)dev->driver;
if (is_fw_unit(dev) && driver != NULL && driver->update != NULL) {
down(&dev->sem);
driver->update(unit);
up(&dev->sem);
}
return 0;
}
static void fw_device_update(struct work_struct *work)
{
struct fw_device *device =
container_of(work, struct fw_device, work.work);
fw_device_cdev_update(device);
device_for_each_child(&device->device, NULL, update_unit);
}
/*
* If a device was pending for deletion because its node went away but its
......@@ -735,12 +756,50 @@ static int lookup_existing_device(struct device *dev, void *data)
return match;
}
enum { BC_UNKNOWN = 0, BC_UNIMPLEMENTED, BC_IMPLEMENTED, };
void fw_device_set_broadcast_channel(struct fw_device *device, int generation)
{
struct fw_card *card = device->card;
__be32 data;
int rcode;
if (!card->broadcast_channel_allocated)
return;
if (device->bc_implemented == BC_UNKNOWN) {
rcode = fw_run_transaction(card, TCODE_READ_QUADLET_REQUEST,
device->node_id, generation, device->max_speed,
CSR_REGISTER_BASE + CSR_BROADCAST_CHANNEL,
&data, 4);
switch (rcode) {
case RCODE_COMPLETE:
if (data & cpu_to_be32(1 << 31)) {
device->bc_implemented = BC_IMPLEMENTED;
break;
}
/* else fall through to case address error */
case RCODE_ADDRESS_ERROR:
device->bc_implemented = BC_UNIMPLEMENTED;
}
}
if (device->bc_implemented == BC_IMPLEMENTED) {
data = cpu_to_be32(BROADCAST_CHANNEL_INITIAL |
BROADCAST_CHANNEL_VALID);
fw_run_transaction(card, TCODE_WRITE_QUADLET_REQUEST,
device->node_id, generation, device->max_speed,
CSR_REGISTER_BASE + CSR_BROADCAST_CHANNEL,
&data, 4);
}
}
static void fw_device_init(struct work_struct *work)
{
struct fw_device *device =
container_of(work, struct fw_device, work.work);
struct device *revived_dev;
int minor, err;
int minor, ret;
/*
* All failure paths here set node->data to NULL, so that we
......@@ -776,12 +835,12 @@ static void fw_device_init(struct work_struct *work)
fw_device_get(device);
down_write(&fw_device_rwsem);
err = idr_pre_get(&fw_device_idr, GFP_KERNEL) ?
ret = idr_pre_get(&fw_device_idr, GFP_KERNEL) ?
idr_get_new(&fw_device_idr, device, &minor) :
-ENOMEM;
up_write(&fw_device_rwsem);
if (err < 0)
if (ret < 0)
goto error;
device->device.bus = &fw_bus_type;
......@@ -828,6 +887,8 @@ static void fw_device_init(struct work_struct *work)
device->config_rom[3], device->config_rom[4],
1 << device->max_speed);
device->config_rom_retries = 0;
fw_device_set_broadcast_channel(device, device->generation);
}
/*
......@@ -851,29 +912,6 @@ static void fw_device_init(struct work_struct *work)
put_device(&device->device); /* our reference */
}
static int update_unit(struct device *dev, void *data)
{
struct fw_unit *unit = fw_unit(dev);
struct fw_driver *driver = (struct fw_driver *)dev->driver;
if (is_fw_unit(dev) && driver != NULL && driver->update != NULL) {
down(&dev->sem);
driver->update(unit);
up(&dev->sem);
}
return 0;
}
static void fw_device_update(struct work_struct *work)
{
struct fw_device *device =
container_of(work, struct fw_device, work.work);
fw_device_cdev_update(device);
device_for_each_child(&device->device, NULL, update_unit);
}
enum {
REREAD_BIB_ERROR,
REREAD_BIB_GONE,
......@@ -894,7 +932,7 @@ static int reread_bus_info_block(struct fw_device *device, int generation)
if (i == 0 && q == 0)
return REREAD_BIB_GONE;
if (i > device->config_rom_length || q != device->config_rom[i])
if (q != device->config_rom[i])
return REREAD_BIB_CHANGED;
}
......@@ -1004,6 +1042,7 @@ void fw_node_event(struct fw_card *card, struct fw_node *node, int event)
device->node = fw_node_get(node);
device->node_id = node->node_id;
device->generation = card->generation;
mutex_init(&device->client_list_mutex);
INIT_LIST_HEAD(&device->client_list);
/*
......
......@@ -19,10 +19,17 @@
#ifndef __fw_device_h
#define __fw_device_h
#include <linux/device.h>
#include <linux/fs.h>
#include <linux/cdev.h>
#include <linux/idr.h>
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/rwsem.h>
#include <linux/sysfs.h>
#include <linux/types.h>
#include <linux/workqueue.h>
#include <asm/atomic.h>
enum fw_device_state {
......@@ -38,6 +45,9 @@ struct fw_attribute_group {
struct attribute *attrs[11];
};
struct fw_node;
struct fw_card;
/*
* Note, fw_device.generation always has to be read before fw_device.node_id.
* Use SMP memory barriers to ensure this. Otherwise requests will be sent
......@@ -61,13 +71,18 @@ struct fw_device {
int node_id;
int generation;
unsigned max_speed;
bool cmc;
struct fw_card *card;
struct device device;
struct mutex client_list_mutex;
struct list_head client_list;
u32 *config_rom;
size_t config_rom_length;
int config_rom_retries;
unsigned cmc:1;
unsigned bc_implemented:2;
struct delayed_work work;
struct fw_attribute_group attribute_group;
};
......@@ -96,6 +111,7 @@ static inline void fw_device_put(struct fw_device *device)
struct fw_device *fw_device_get_by_devt(dev_t devt);
int fw_device_enable_phys_dma(struct fw_device *device);
void fw_device_set_broadcast_channel(struct fw_device *device, int generation);
void fw_device_cdev_update(struct fw_device *device);
void fw_device_cdev_remove(struct fw_device *device);
......@@ -176,8 +192,7 @@ struct fw_driver {
const struct fw_device_id *id_table;
};
static inline struct fw_driver *
fw_driver(struct device_driver *drv)
static inline struct fw_driver *fw_driver(struct device_driver *drv)
{
return container_of(drv, struct fw_driver, driver);
}
......
/*
* Isochronous IO functionality
* Isochronous I/O functionality:
* - Isochronous DMA context management
* - Isochronous bus resource management (channels, bandwidth), client side
*
* Copyright (C) 2006 Kristian Hoegsberg <krh@bitplanet.net>
*
......@@ -18,21 +20,25 @@
* Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/dma-mapping.h>
#include <linux/vmalloc.h>
#include <linux/errno.h>
#include <linux/firewire-constants.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/spinlock.h>
#include <linux/vmalloc.h>
#include "fw-transaction.h"
#include "fw-topology.h"
#include "fw-device.h"
#include "fw-transaction.h"
int
fw_iso_buffer_init(struct fw_iso_buffer *buffer, struct fw_card *card,
int page_count, enum dma_data_direction direction)
/*
* Isochronous DMA context management
*/
int fw_iso_buffer_init(struct fw_iso_buffer *buffer, struct fw_card *card,
int page_count, enum dma_data_direction direction)
{
int i, j, retval = -ENOMEM;
int i, j;
dma_addr_t address;
buffer->page_count = page_count;
......@@ -69,19 +75,21 @@ fw_iso_buffer_init(struct fw_iso_buffer *buffer, struct fw_card *card,
kfree(buffer->pages);
out:
buffer->pages = NULL;
return retval;
return -ENOMEM;
}
int fw_iso_buffer_map(struct fw_iso_buffer *buffer, struct vm_area_struct *vma)
{
unsigned long uaddr;
int i, retval;
int i, err;
uaddr = vma->vm_start;
for (i = 0; i < buffer->page_count; i++) {
retval = vm_insert_page(vma, uaddr, buffer->pages[i]);
if (retval)
return retval;
err = vm_insert_page(vma, uaddr, buffer->pages[i]);
if (err)
return err;
uaddr += PAGE_SIZE;
}
......@@ -105,14 +113,14 @@ void fw_iso_buffer_destroy(struct fw_iso_buffer *buffer,
buffer->pages = NULL;
}
struct fw_iso_context *
fw_iso_context_create(struct fw_card *card, int type,
int channel, int speed, size_t header_size,
fw_iso_callback_t callback, void *callback_data)
struct fw_iso_context *fw_iso_context_create(struct fw_card *card,
int type, int channel, int speed, size_t header_size,
fw_iso_callback_t callback, void *callback_data)
{
struct fw_iso_context *ctx;
ctx = card->driver->allocate_iso_context(card, type, header_size);
ctx = card->driver->allocate_iso_context(card,
type, channel, header_size);
if (IS_ERR(ctx))
return ctx;
......@@ -134,25 +142,186 @@ void fw_iso_context_destroy(struct fw_iso_context *ctx)
card->driver->free_iso_context(ctx);
}
int
fw_iso_context_start(struct fw_iso_context *ctx, int cycle, int sync, int tags)
int fw_iso_context_start(struct fw_iso_context *ctx,
int cycle, int sync, int tags)
{
return ctx->card->driver->start_iso(ctx, cycle, sync, tags);
}
int
fw_iso_context_queue(struct fw_iso_context *ctx,
struct fw_iso_packet *packet,
struct fw_iso_buffer *buffer,
unsigned long payload)
int fw_iso_context_queue(struct fw_iso_context *ctx,
struct fw_iso_packet *packet,
struct fw_iso_buffer *buffer,
unsigned long payload)
{
struct fw_card *card = ctx->card;
return card->driver->queue_iso(ctx, packet, buffer, payload);
}
int
fw_iso_context_stop(struct fw_iso_context *ctx)
int fw_iso_context_stop(struct fw_iso_context *ctx)
{
return ctx->card->driver->stop_iso(ctx);
}
/*
* Isochronous bus resource management (channels, bandwidth), client side
*/
static int manage_bandwidth(struct fw_card *card, int irm_id, int generation,
int bandwidth, bool allocate)
{
__be32 data[2];
int try, new, old = allocate ? BANDWIDTH_AVAILABLE_INITIAL : 0;
/*
* On a 1394a IRM with low contention, try < 1 is enough.
* On a 1394-1995 IRM, we need at least try < 2.
* Let's just do try < 5.
*/
for (try = 0; try < 5; try++) {
new = allocate ? old - bandwidth : old + bandwidth;
if (new < 0 || new > BANDWIDTH_AVAILABLE_INITIAL)
break;
data[0] = cpu_to_be32(old);
data[1] = cpu_to_be32(new);
switch (fw_run_transaction(card, TCODE_LOCK_COMPARE_SWAP,
irm_id, generation, SCODE_100,
CSR_REGISTER_BASE + CSR_BANDWIDTH_AVAILABLE,
data, sizeof(data))) {
case RCODE_GENERATION:
/* A generation change frees all bandwidth. */
return allocate ? -EAGAIN : bandwidth;
case RCODE_COMPLETE:
if (be32_to_cpup(data) == old)
return bandwidth;
old = be32_to_cpup(data);
/* Fall through. */
}
}
return -EIO;
}
static int manage_channel(struct fw_card *card, int irm_id, int generation,
u32 channels_mask, u64 offset, bool allocate)
{
__be32 data[2], c, all, old;
int i, retry = 5;
old = all = allocate ? cpu_to_be32(~0) : 0;
for (i = 0; i < 32; i++) {
if (!(channels_mask & 1 << i))
continue;
c = cpu_to_be32(1 << (31 - i));
if ((old & c) != (all & c))
continue;
data[0] = old;
data[1] = old ^ c;
switch (fw_run_transaction(card, TCODE_LOCK_COMPARE_SWAP,
irm_id, generation, SCODE_100,
offset, data, sizeof(data))) {
case RCODE_GENERATION:
/* A generation change frees all channels. */
return allocate ? -EAGAIN : i;
case RCODE_COMPLETE:
if (data[0] == old)
return i;
old = data[0];
/* Is the IRM 1394a-2000 compliant? */
if ((data[0] & c) == (data[1] & c))
continue;
/* 1394-1995 IRM, fall through to retry. */
default:
if (retry--)
i--;
}
}
return -EIO;
}
static void deallocate_channel(struct fw_card *card, int irm_id,
int generation, int channel)
{
u32 mask;
u64 offset;
mask = channel < 32 ? 1 << channel : 1 << (channel - 32);
offset = channel < 32 ? CSR_REGISTER_BASE + CSR_CHANNELS_AVAILABLE_HI :
CSR_REGISTER_BASE + CSR_CHANNELS_AVAILABLE_LO;
manage_channel(card, irm_id, generation, mask, offset, false);
}
/**
* fw_iso_resource_manage - Allocate or deallocate a channel and/or bandwidth
*
* In parameters: card, generation, channels_mask, bandwidth, allocate
* Out parameters: channel, bandwidth
* This function blocks (sleeps) during communication with the IRM.
*
* Allocates or deallocates at most one channel out of channels_mask.
* channels_mask is a bitfield with MSB for channel 63 and LSB for channel 0.
* (Note, the IRM's CHANNELS_AVAILABLE is a big-endian bitfield with MSB for
* channel 0 and LSB for channel 63.)
* Allocates or deallocates as many bandwidth allocation units as specified.
*
* Returns channel < 0 if no channel was allocated or deallocated.
* Returns bandwidth = 0 if no bandwidth was allocated or deallocated.
*
* If generation is stale, deallocations succeed but allocations fail with
* channel = -EAGAIN.
*
* If channel allocation fails, no bandwidth will be allocated either.
* If bandwidth allocation fails, no channel will be allocated either.
* But deallocations of channel and bandwidth are tried independently
* of each other's success.
*/
void fw_iso_resource_manage(struct fw_card *card, int generation,
u64 channels_mask, int *channel, int *bandwidth,
bool allocate)
{
u32 channels_hi = channels_mask; /* channels 31...0 */
u32 channels_lo = channels_mask >> 32; /* channels 63...32 */
int irm_id, ret, c = -EINVAL;
spin_lock_irq(&card->lock);
irm_id = card->irm_node->node_id;
spin_unlock_irq(&card->lock);
if (channels_hi)
c = manage_channel(card, irm_id, generation, channels_hi,
CSR_REGISTER_BASE + CSR_CHANNELS_AVAILABLE_HI, allocate);
if (channels_lo && c < 0) {
c = manage_channel(card, irm_id, generation, channels_lo,
CSR_REGISTER_BASE + CSR_CHANNELS_AVAILABLE_LO, allocate);
if (c >= 0)
c += 32;
}
*channel = c;
if (allocate && channels_mask != 0 && c < 0)
*bandwidth = 0;
if (*bandwidth == 0)
return;
ret = manage_bandwidth(card, irm_id, generation, *bandwidth, allocate);
if (ret < 0)
*bandwidth = 0;
if (allocate && ret < 0 && c >= 0) {
deallocate_channel(card, irm_id, generation, c);
*channel = ret;
}
}
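A short, hypothetical usage sketch of fw_iso_resource_manage() based on the kernel-doc above and on allocate_broadcast_channel() earlier in this diff; the channel mask and the bandwidth figure are arbitrary illustrative values.

/*
 * Editor's sketch, not part of the commit: allocate any one channel out of
 * 0..31 plus some bandwidth units, then release both on teardown.
 */
#include "fw-device.h"
#include "fw-topology.h"
#include "fw-transaction.h"

static void iso_resource_sketch(struct fw_card *card, int generation)
{
	int channel, bandwidth = 2048;	/* illustrative allocation unit count */

	fw_iso_resource_manage(card, generation, 0xffffffffULL,
			       &channel, &bandwidth, true);
	if (channel < 0 || bandwidth == 0)
		return;		/* nothing (or only part) was allocated */

	/* ... use the channel, then give everything back on teardown: */
	fw_iso_resource_manage(card, generation, 1ULL << channel,
			       &channel, &bandwidth, false);
}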
This diff is collapsed.
......@@ -392,20 +392,18 @@ static const struct {
}
};
static void
free_orb(struct kref *kref)
static void free_orb(struct kref *kref)
{
struct sbp2_orb *orb = container_of(kref, struct sbp2_orb, kref);
kfree(orb);
}
static void
sbp2_status_write(struct fw_card *card, struct fw_request *request,
int tcode, int destination, int source,
int generation, int speed,
unsigned long long offset,
void *payload, size_t length, void *callback_data)
static void sbp2_status_write(struct fw_card *card, struct fw_request *request,
int tcode, int destination, int source,
int generation, int speed,
unsigned long long offset,
void *payload, size_t length, void *callback_data)
{
struct sbp2_logical_unit *lu = callback_data;
struct sbp2_orb *orb;
......@@ -451,9 +449,8 @@ sbp2_status_write(struct fw_card *card, struct fw_request *request,
fw_send_response(card, request, RCODE_COMPLETE);
}
static void
complete_transaction(struct fw_card *card, int rcode,
void *payload, size_t length, void *data)
static void complete_transaction(struct fw_card *card, int rcode,
void *payload, size_t length, void *data)
{
struct sbp2_orb *orb = data;
unsigned long flags;
......@@ -482,9 +479,8 @@ complete_transaction(struct fw_card *card, int rcode,
kref_put(&orb->kref, free_orb);
}
static void
sbp2_send_orb(struct sbp2_orb *orb, struct sbp2_logical_unit *lu,
int node_id, int generation, u64 offset)
static void sbp2_send_orb(struct sbp2_orb *orb, struct sbp2_logical_unit *lu,
int node_id, int generation, u64 offset)
{
struct fw_device *device = fw_device(lu->tgt->unit->device.parent);
unsigned long flags;
......@@ -531,8 +527,8 @@ static int sbp2_cancel_orbs(struct sbp2_logical_unit *lu)
return retval;
}
static void
complete_management_orb(struct sbp2_orb *base_orb, struct sbp2_status *status)
static void complete_management_orb(struct sbp2_orb *base_orb,
struct sbp2_status *status)
{
struct sbp2_management_orb *orb =
container_of(base_orb, struct sbp2_management_orb, base);
......@@ -542,10 +538,9 @@ complete_management_orb(struct sbp2_orb *base_orb, struct sbp2_status *status)
complete(&orb->done);
}
static int
sbp2_send_management_orb(struct sbp2_logical_unit *lu, int node_id,
int generation, int function, int lun_or_login_id,
void *response)
static int sbp2_send_management_orb(struct sbp2_logical_unit *lu, int node_id,
int generation, int function,
int lun_or_login_id, void *response)
{
struct fw_device *device = fw_device(lu->tgt->unit->device.parent);
struct sbp2_management_orb *orb;
......@@ -652,9 +647,8 @@ static void sbp2_agent_reset(struct sbp2_logical_unit *lu)
&d, sizeof(d));
}
static void
complete_agent_reset_write_no_wait(struct fw_card *card, int rcode,
void *payload, size_t length, void *data)
static void complete_agent_reset_write_no_wait(struct fw_card *card,
int rcode, void *payload, size_t length, void *data)
{
kfree(data);
}
......@@ -1299,8 +1293,7 @@ static void sbp2_unmap_scatterlist(struct device *card_device,
sizeof(orb->page_table), DMA_TO_DEVICE);
}
static unsigned int
sbp2_status_to_sense_data(u8 *sbp2_status, u8 *sense_data)
static unsigned int sbp2_status_to_sense_data(u8 *sbp2_status, u8 *sense_data)
{
int sam_status;
......@@ -1337,8 +1330,8 @@ sbp2_status_to_sense_data(u8 *sbp2_status, u8 *sense_data)
}
}
static void
complete_command_orb(struct sbp2_orb *base_orb, struct sbp2_status *status)
static void complete_command_orb(struct sbp2_orb *base_orb,
struct sbp2_status *status)
{
struct sbp2_command_orb *orb =
container_of(base_orb, struct sbp2_command_orb, base);
......@@ -1384,9 +1377,8 @@ complete_command_orb(struct sbp2_orb *base_orb, struct sbp2_status *status)
orb->done(orb->cmd);
}
static int
sbp2_map_scatterlist(struct sbp2_command_orb *orb, struct fw_device *device,
struct sbp2_logical_unit *lu)
static int sbp2_map_scatterlist(struct sbp2_command_orb *orb,
struct fw_device *device, struct sbp2_logical_unit *lu)
{
struct scatterlist *sg = scsi_sglist(orb->cmd);
int i, n;
......@@ -1584,9 +1576,8 @@ static int sbp2_scsi_abort(struct scsi_cmnd *cmd)
* This is the concatenation of target port identifier and logical unit
* identifier as per SAM-2...SAM-4 annex A.
*/
static ssize_t
sbp2_sysfs_ieee1394_id_show(struct device *dev, struct device_attribute *attr,
char *buf)
static ssize_t sbp2_sysfs_ieee1394_id_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct scsi_device *sdev = to_scsi_device(dev);
struct sbp2_logical_unit *lu;
......
......@@ -314,9 +314,8 @@ typedef void (*fw_node_callback_t)(struct fw_card * card,
struct fw_node * node,
struct fw_node * parent);
static void
for_each_fw_node(struct fw_card *card, struct fw_node *root,
fw_node_callback_t callback)
static void for_each_fw_node(struct fw_card *card, struct fw_node *root,
fw_node_callback_t callback)
{
struct list_head list;
struct fw_node *node, *next, *child, *parent;
......@@ -349,9 +348,8 @@ for_each_fw_node(struct fw_card *card, struct fw_node *root,
fw_node_put(node);
}
static void
report_lost_node(struct fw_card *card,
struct fw_node *node, struct fw_node *parent)
static void report_lost_node(struct fw_card *card,
struct fw_node *node, struct fw_node *parent)
{
fw_node_event(card, node, FW_NODE_DESTROYED);
fw_node_put(node);
......@@ -360,9 +358,8 @@ report_lost_node(struct fw_card *card,
card->bm_retries = 0;
}
static void
report_found_node(struct fw_card *card,
struct fw_node *node, struct fw_node *parent)
static void report_found_node(struct fw_card *card,
struct fw_node *node, struct fw_node *parent)
{
int b_path = (node->phy_speed == SCODE_BETA);
......@@ -415,8 +412,7 @@ static void move_tree(struct fw_node *node0, struct fw_node *node1, int port)
* found, lost or updated. Update the nodes in the card topology tree
* as we go.
*/
static void
update_tree(struct fw_card *card, struct fw_node *root)
static void update_tree(struct fw_card *card, struct fw_node *root)
{
struct list_head list0, list1;
struct fw_node *node0, *node1, *next1;
......@@ -497,8 +493,8 @@ update_tree(struct fw_card *card, struct fw_node *root)
}
}
static void
update_topology_map(struct fw_card *card, u32 *self_ids, int self_id_count)
static void update_topology_map(struct fw_card *card,
u32 *self_ids, int self_id_count)
{
int node_count;
......@@ -510,10 +506,8 @@ update_topology_map(struct fw_card *card, u32 *self_ids, int self_id_count)
fw_compute_block_crc(card->topology_map);
}
void
fw_core_handle_bus_reset(struct fw_card *card,
int node_id, int generation,
int self_id_count, u32 * self_ids)
void fw_core_handle_bus_reset(struct fw_card *card, int node_id, int generation,
int self_id_count, u32 *self_ids)
{
struct fw_node *local_node;
unsigned long flags;
......@@ -532,6 +526,7 @@ fw_core_handle_bus_reset(struct fw_card *card,
spin_lock_irqsave(&card->lock, flags);
card->broadcast_channel_allocated = false;
card->node_id = node_id;
/*
* Update node_id before generation to prevent anybody from using
......
......@@ -19,6 +19,11 @@
#ifndef __fw_topology_h
#define __fw_topology_h
#include <linux/list.h>
#include <linux/slab.h>
#include <asm/atomic.h>
enum {
FW_NODE_CREATED,
FW_NODE_UPDATED,
......@@ -51,26 +56,22 @@ struct fw_node {
struct fw_node *ports[0];
};
static inline struct fw_node *
fw_node_get(struct fw_node *node)
static inline struct fw_node *fw_node_get(struct fw_node *node)
{
atomic_inc(&node->ref_count);
return node;
}
static inline void
fw_node_put(struct fw_node *node)
static inline void fw_node_put(struct fw_node *node)
{
if (atomic_dec_and_test(&node->ref_count))
kfree(node);
}
void
fw_destroy_nodes(struct fw_card *card);
int
fw_compute_block_crc(u32 *block);
struct fw_card;
void fw_destroy_nodes(struct fw_card *card);
int fw_compute_block_crc(u32 *block);
#endif /* __fw_topology_h */
......@@ -64,10 +64,8 @@
#define PHY_CONFIG_ROOT_ID(node_id) ((((node_id) & 0x3f) << 24) | (1 << 23))
#define PHY_IDENTIFIER(id) ((id) << 30)
static int
close_transaction(struct fw_transaction *transaction,
struct fw_card *card, int rcode,
u32 *payload, size_t length)
static int close_transaction(struct fw_transaction *transaction,
struct fw_card *card, int rcode)
{
struct fw_transaction *t;
unsigned long flags;
......@@ -83,7 +81,7 @@ close_transaction(struct fw_transaction *transaction,
spin_unlock_irqrestore(&card->lock, flags);
if (&t->link != &card->transaction_list) {
t->callback(card, rcode, payload, length, t->callback_data);
t->callback(card, rcode, NULL, 0, t->callback_data);
return 0;
}
......@@ -94,9 +92,8 @@ close_transaction(struct fw_transaction *transaction,
* Only valid for transactions that are potentially pending (ie have
* been sent).
*/
int
fw_cancel_transaction(struct fw_card *card,
struct fw_transaction *transaction)
int fw_cancel_transaction(struct fw_card *card,
struct fw_transaction *transaction)
{
/*
* Cancel the packet transmission if it's still queued. That
......@@ -112,20 +109,19 @@ fw_cancel_transaction(struct fw_card *card,
* if the transaction is still pending and remove it in that case.
*/
return close_transaction(transaction, card, RCODE_CANCELLED, NULL, 0);
return close_transaction(transaction, card, RCODE_CANCELLED);
}
EXPORT_SYMBOL(fw_cancel_transaction);
static void
transmit_complete_callback(struct fw_packet *packet,
struct fw_card *card, int status)
static void transmit_complete_callback(struct fw_packet *packet,
struct fw_card *card, int status)
{
struct fw_transaction *t =
container_of(packet, struct fw_transaction, packet);
switch (status) {
case ACK_COMPLETE:
close_transaction(t, card, RCODE_COMPLETE, NULL, 0);
close_transaction(t, card, RCODE_COMPLETE);
break;
case ACK_PENDING:
t->timestamp = packet->timestamp;
......@@ -133,31 +129,42 @@ transmit_complete_callback(struct fw_packet *packet,
case ACK_BUSY_X:
case ACK_BUSY_A:
case ACK_BUSY_B:
close_transaction(t, card, RCODE_BUSY, NULL, 0);
close_transaction(t, card, RCODE_BUSY);
break;
case ACK_DATA_ERROR:
close_transaction(t, card, RCODE_DATA_ERROR, NULL, 0);
close_transaction(t, card, RCODE_DATA_ERROR);
break;
case ACK_TYPE_ERROR:
close_transaction(t, card, RCODE_TYPE_ERROR, NULL, 0);
close_transaction(t, card, RCODE_TYPE_ERROR);
break;
default:
/*
* In this case the ack is really a juju specific
* rcode, so just forward that to the callback.
*/
close_transaction(t, card, status, NULL, 0);
close_transaction(t, card, status);
break;
}
}
static void
fw_fill_request(struct fw_packet *packet, int tcode, int tlabel,
static void fw_fill_request(struct fw_packet *packet, int tcode, int tlabel,
int destination_id, int source_id, int generation, int speed,
unsigned long long offset, void *payload, size_t length)
{
int ext_tcode;
if (tcode == TCODE_STREAM_DATA) {
packet->header[0] =
HEADER_DATA_LENGTH(length) |
destination_id |
HEADER_TCODE(TCODE_STREAM_DATA);
packet->header_length = 4;
packet->payload = payload;
packet->payload_length = length;
goto common;
}
if (tcode > 0x10) {
ext_tcode = tcode & ~0x10;
tcode = TCODE_LOCK_REQUEST;
......@@ -204,7 +211,7 @@ fw_fill_request(struct fw_packet *packet, int tcode, int tlabel,
packet->payload_length = 0;
break;
}
common:
packet->speed = speed;
packet->generation = generation;
packet->ack = 0;
......@@ -246,13 +253,14 @@ fw_fill_request(struct fw_packet *packet, int tcode, int tlabel,
* @param callback function to be called when the transaction is completed
* @param callback_data pointer to arbitrary data, which will be
* passed to the callback
*
* In case of asynchronous stream packets i.e. TCODE_STREAM_DATA, the caller
* needs to synthesize @destination_id with fw_stream_packet_destination_id().
*/
void
fw_send_request(struct fw_card *card, struct fw_transaction *t,
int tcode, int destination_id, int generation, int speed,
unsigned long long offset,
void *payload, size_t length,
fw_transaction_callback_t callback, void *callback_data)
void fw_send_request(struct fw_card *card, struct fw_transaction *t, int tcode,
int destination_id, int generation, int speed,
unsigned long long offset, void *payload, size_t length,
fw_transaction_callback_t callback, void *callback_data)
{
unsigned long flags;
int tlabel;
......@@ -322,16 +330,16 @@ static void transaction_callback(struct fw_card *card, int rcode,
* Returns the RCODE.
*/
int fw_run_transaction(struct fw_card *card, int tcode, int destination_id,
int generation, int speed, unsigned long long offset,
void *data, size_t length)
int generation, int speed, unsigned long long offset,
void *payload, size_t length)
{
struct transaction_callback_data d;
struct fw_transaction t;
init_completion(&d.done);
d.payload = data;
d.payload = payload;
fw_send_request(card, &t, tcode, destination_id, generation, speed,
offset, data, length, transaction_callback, &d);
offset, payload, length, transaction_callback, &d);
wait_for_completion(&d.done);
return d.rcode;
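For the blocking path, a hypothetical helper built on fw_run_transaction() could look like the sketch below; destination_id, generation, speed and offset are assumed to come from the caller (for example from a struct fw_device), the returned quadlet is converted from bus order, and the call sleeps, so it must not be made from atomic context.

static int example_read_quadlet(struct fw_card *card, int destination_id,
				int generation, int speed,
				unsigned long long offset, u32 *value)
{
	__be32 data;
	int rcode;

	rcode = fw_run_transaction(card, TCODE_READ_QUADLET_REQUEST,
				   destination_id, generation, speed,
				   offset, &data, sizeof(data));
	if (rcode != RCODE_COMPLETE)
		return -EIO;

	*value = be32_to_cpu(data);
	return 0;
}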
......@@ -399,9 +407,8 @@ void fw_flush_transactions(struct fw_card *card)
}
}
static struct fw_address_handler *
lookup_overlapping_address_handler(struct list_head *list,
unsigned long long offset, size_t length)
static struct fw_address_handler *lookup_overlapping_address_handler(
struct list_head *list, unsigned long long offset, size_t length)
{
struct fw_address_handler *handler;
......@@ -414,9 +421,8 @@ lookup_overlapping_address_handler(struct list_head *list,
return NULL;
}
static struct fw_address_handler *
lookup_enclosing_address_handler(struct list_head *list,
unsigned long long offset, size_t length)
static struct fw_address_handler *lookup_enclosing_address_handler(
struct list_head *list, unsigned long long offset, size_t length)
{
struct fw_address_handler *handler;
......@@ -449,36 +455,44 @@ const struct fw_address_region fw_unit_space_region =
#endif /* 0 */
/**
* Allocate a range of addresses in the node space of the OHCI
* controller. When a request is received that falls within the
* specified address range, the specified callback is invoked. The
* parameters passed to the callback give the details of the
* particular request.
* fw_core_add_address_handler - register for incoming requests
* @handler: callback
* @region: region in the IEEE 1212 node space address range
*
* region->start, ->end, and handler->length have to be quadlet-aligned.
*
* When a request is received that falls within the specified address range,
* the specified callback is invoked. The parameters passed to the callback
* give the details of the particular request.
*
* Return value: 0 on success, non-zero otherwise.
* The start offset of the handler's address region is determined by
* fw_core_add_address_handler() and is returned in handler->offset.
* The offset is quadlet-aligned.
*/
int
fw_core_add_address_handler(struct fw_address_handler *handler,
const struct fw_address_region *region)
int fw_core_add_address_handler(struct fw_address_handler *handler,
const struct fw_address_region *region)
{
struct fw_address_handler *other;
unsigned long flags;
int ret = -EBUSY;
if (region->start & 0xffff000000000003ULL ||
region->end & 0xffff000000000003ULL ||
region->start >= region->end ||
handler->length & 3 ||
handler->length == 0)
return -EINVAL;
spin_lock_irqsave(&address_handler_lock, flags);
handler->offset = roundup(region->start, 4);
handler->offset = region->start;
while (handler->offset + handler->length <= region->end) {
other =
lookup_overlapping_address_handler(&address_handler_list,
handler->offset,
handler->length);
if (other != NULL) {
handler->offset =
roundup(other->offset + other->length, 4);
handler->offset += other->length;
} else {
list_add_tail(&handler->link, &address_handler_list);
ret = 0;
......@@ -493,12 +507,7 @@ fw_core_add_address_handler(struct fw_address_handler *handler,
EXPORT_SYMBOL(fw_core_add_address_handler);
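To illustrate the receiving side, the sketch below registers a handler that answers quadlet reads in a made-up address region (all names and the region bounds are invented for the example); the callback writes the response data into the payload buffer and replies with fw_send_response().

static void example_address_callback(struct fw_card *card,
				     struct fw_request *request,
				     int tcode, int destination, int source,
				     int generation, int speed,
				     unsigned long long offset,
				     void *payload, size_t length,
				     void *callback_data)
{
	__be32 *reg = callback_data;

	if (tcode == TCODE_READ_QUADLET_REQUEST && length == 4) {
		memcpy(payload, reg, 4);	/* response data for the read */
		fw_send_response(card, request, RCODE_COMPLETE);
	} else {
		fw_send_response(card, request, RCODE_TYPE_ERROR);
	}
}

static __be32 example_register;

static struct fw_address_handler example_handler = {
	.length           = 4,
	.address_callback = example_address_callback,
	.callback_data    = &example_register,
};

/* Made-up region; start/end and handler->length must be quadlet aligned. */
static const struct fw_address_region example_region = {
	.start = 0xffffe0000000ULL,
	.end   = 0xffffe0000100ULL,
};

/* fw_core_add_address_handler(&example_handler, &example_region) then
 * reports the chosen start address in example_handler.offset. */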
/**
* Deallocate a range of addresses allocated with fw_allocate. This
* will call the associated callback one last time with a the special
* tcode TCODE_DEALLOCATE, to let the client destroy the registered
* callback data. For convenience, the callback parameters offset and
* length are set to the start and the length respectively for the
* deallocated region, payload is set to NULL.
* fw_core_remove_address_handler - unregister an address handler
*/
void fw_core_remove_address_handler(struct fw_address_handler *handler)
{
......@@ -518,9 +527,8 @@ struct fw_request {
u32 data[0];
};
static void
free_response_callback(struct fw_packet *packet,
struct fw_card *card, int status)
static void free_response_callback(struct fw_packet *packet,
struct fw_card *card, int status)
{
struct fw_request *request;
......@@ -528,9 +536,8 @@ free_response_callback(struct fw_packet *packet,
kfree(request);
}
void
fw_fill_response(struct fw_packet *response, u32 *request_header,
int rcode, void *payload, size_t length)
void fw_fill_response(struct fw_packet *response, u32 *request_header,
int rcode, void *payload, size_t length)
{
int tcode, tlabel, extended_tcode, source, destination;
......@@ -588,8 +595,7 @@ fw_fill_response(struct fw_packet *response, u32 *request_header,
}
EXPORT_SYMBOL(fw_fill_response);
static struct fw_request *
allocate_request(struct fw_packet *p)
static struct fw_request *allocate_request(struct fw_packet *p)
{
struct fw_request *request;
u32 *data, length;
......@@ -649,8 +655,8 @@ allocate_request(struct fw_packet *p)
return request;
}
void
fw_send_response(struct fw_card *card, struct fw_request *request, int rcode)
void fw_send_response(struct fw_card *card,
struct fw_request *request, int rcode)
{
/* unified transaction or broadcast transaction: don't respond */
if (request->ack != ACK_PENDING ||
......@@ -670,8 +676,7 @@ fw_send_response(struct fw_card *card, struct fw_request *request, int rcode)
}
EXPORT_SYMBOL(fw_send_response);
void
fw_core_handle_request(struct fw_card *card, struct fw_packet *p)
void fw_core_handle_request(struct fw_card *card, struct fw_packet *p)
{
struct fw_address_handler *handler;
struct fw_request *request;
......@@ -719,8 +724,7 @@ fw_core_handle_request(struct fw_card *card, struct fw_packet *p)
}
EXPORT_SYMBOL(fw_core_handle_request);
void
fw_core_handle_response(struct fw_card *card, struct fw_packet *p)
void fw_core_handle_response(struct fw_card *card, struct fw_packet *p)
{
struct fw_transaction *t;
unsigned long flags;
......@@ -793,12 +797,10 @@ static const struct fw_address_region topology_map_region =
{ .start = CSR_REGISTER_BASE | CSR_TOPOLOGY_MAP,
.end = CSR_REGISTER_BASE | CSR_TOPOLOGY_MAP_END, };
static void
handle_topology_map(struct fw_card *card, struct fw_request *request,
int tcode, int destination, int source,
int generation, int speed,
unsigned long long offset,
void *payload, size_t length, void *callback_data)
static void handle_topology_map(struct fw_card *card, struct fw_request *request,
int tcode, int destination, int source, int generation,
int speed, unsigned long long offset,
void *payload, size_t length, void *callback_data)
{
int i, start, end;
__be32 *map;
......@@ -832,12 +834,10 @@ static const struct fw_address_region registers_region =
{ .start = CSR_REGISTER_BASE,
.end = CSR_REGISTER_BASE | CSR_CONFIG_ROM, };
static void
handle_registers(struct fw_card *card, struct fw_request *request,
int tcode, int destination, int source,
int generation, int speed,
unsigned long long offset,
void *payload, size_t length, void *callback_data)
static void handle_registers(struct fw_card *card, struct fw_request *request,
int tcode, int destination, int source, int generation,
int speed, unsigned long long offset,
void *payload, size_t length, void *callback_data)
{
int reg = offset & ~CSR_REGISTER_BASE;
unsigned long long bus_time;
......@@ -939,11 +939,11 @@ static struct fw_descriptor model_id_descriptor = {
static int __init fw_core_init(void)
{
int retval;
int ret;
retval = bus_register(&fw_bus_type);
if (retval < 0)
return retval;
ret = bus_register(&fw_bus_type);
if (ret < 0)
return ret;
fw_cdev_major = register_chrdev(0, "firewire", &fw_device_ops);
if (fw_cdev_major < 0) {
......@@ -951,19 +951,10 @@ static int __init fw_core_init(void)
return fw_cdev_major;
}
retval = fw_core_add_address_handler(&topology_map,
&topology_map_region);
BUG_ON(retval < 0);
retval = fw_core_add_address_handler(&registers,
&registers_region);
BUG_ON(retval < 0);
/* Add the vendor textual descriptor. */
retval = fw_core_add_descriptor(&vendor_id_descriptor);
BUG_ON(retval < 0);
retval = fw_core_add_descriptor(&model_id_descriptor);
BUG_ON(retval < 0);
fw_core_add_address_handler(&topology_map, &topology_map_region);
fw_core_add_address_handler(&registers, &registers_region);
fw_core_add_descriptor(&vendor_id_descriptor);
fw_core_add_descriptor(&model_id_descriptor);
return 0;
}
......
......@@ -82,14 +82,14 @@
#define CSR_SPEED_MAP 0x2000
#define CSR_SPEED_MAP_END 0x3000
#define BANDWIDTH_AVAILABLE_INITIAL 4915
#define BROADCAST_CHANNEL_INITIAL (1 << 31 | 31)
#define BROADCAST_CHANNEL_VALID (1 << 30)
#define fw_notify(s, args...) printk(KERN_NOTICE KBUILD_MODNAME ": " s, ## args)
#define fw_error(s, args...) printk(KERN_ERR KBUILD_MODNAME ": " s, ## args)
static inline void
fw_memcpy_from_be32(void *_dst, void *_src, size_t size)
static inline void fw_memcpy_from_be32(void *_dst, void *_src, size_t size)
{
u32 *dst = _dst;
__be32 *src = _src;
......@@ -99,8 +99,7 @@ fw_memcpy_from_be32(void *_dst, void *_src, size_t size)
dst[i] = be32_to_cpu(src[i]);
}
static inline void
fw_memcpy_to_be32(void *_dst, void *_src, size_t size)
static inline void fw_memcpy_to_be32(void *_dst, void *_src, size_t size)
{
fw_memcpy_from_be32(_dst, _src, size);
}
......@@ -125,8 +124,7 @@ typedef void (*fw_packet_callback_t)(struct fw_packet *packet,
struct fw_card *card, int status);
typedef void (*fw_transaction_callback_t)(struct fw_card *card, int rcode,
void *data,
size_t length,
void *data, size_t length,
void *callback_data);
/*
......@@ -141,12 +139,6 @@ typedef void (*fw_address_callback_t)(struct fw_card *card,
void *data, size_t length,
void *callback_data);
typedef void (*fw_bus_reset_callback_t)(struct fw_card *handle,
int node_id, int generation,
u32 *self_ids,
int self_id_count,
void *callback_data);
struct fw_packet {
int speed;
int generation;
......@@ -187,12 +179,6 @@ struct fw_transaction {
void *callback_data;
};
static inline struct fw_packet *
fw_packet(struct list_head *l)
{
return list_entry(l, struct fw_packet, link);
}
struct fw_address_handler {
u64 offset;
size_t length;
......@@ -201,7 +187,6 @@ struct fw_address_handler {
struct list_head link;
};
struct fw_address_region {
u64 start;
u64 end;
......@@ -255,6 +240,7 @@ struct fw_card {
int bm_retries;
int bm_generation;
bool broadcast_channel_allocated;
u32 broadcast_channel;
u32 topology_map[(CSR_TOPOLOGY_MAP_END - CSR_TOPOLOGY_MAP) / 4];
};
......@@ -315,10 +301,8 @@ struct fw_iso_packet {
struct fw_iso_context;
typedef void (*fw_iso_callback_t)(struct fw_iso_context *context,
u32 cycle,
size_t header_length,
void *header,
void *data);
u32 cycle, size_t header_length,
void *header, void *data);
/*
* An iso buffer is just a set of pages mapped for DMA in the
......@@ -344,36 +328,25 @@ struct fw_iso_context {
void *callback_data;
};
int
fw_iso_buffer_init(struct fw_iso_buffer *buffer,
struct fw_card *card,
int page_count,
enum dma_data_direction direction);
int
fw_iso_buffer_map(struct fw_iso_buffer *buffer, struct vm_area_struct *vma);
void
fw_iso_buffer_destroy(struct fw_iso_buffer *buffer, struct fw_card *card);
struct fw_iso_context *
fw_iso_context_create(struct fw_card *card, int type,
int channel, int speed, size_t header_size,
fw_iso_callback_t callback, void *callback_data);
void
fw_iso_context_destroy(struct fw_iso_context *ctx);
int
fw_iso_context_queue(struct fw_iso_context *ctx,
struct fw_iso_packet *packet,
struct fw_iso_buffer *buffer,
unsigned long payload);
int
fw_iso_context_start(struct fw_iso_context *ctx,
int cycle, int sync, int tags);
int
fw_iso_context_stop(struct fw_iso_context *ctx);
int fw_iso_buffer_init(struct fw_iso_buffer *buffer, struct fw_card *card,
int page_count, enum dma_data_direction direction);
int fw_iso_buffer_map(struct fw_iso_buffer *buffer, struct vm_area_struct *vma);
void fw_iso_buffer_destroy(struct fw_iso_buffer *buffer, struct fw_card *card);
struct fw_iso_context *fw_iso_context_create(struct fw_card *card,
int type, int channel, int speed, size_t header_size,
fw_iso_callback_t callback, void *callback_data);
int fw_iso_context_queue(struct fw_iso_context *ctx,
struct fw_iso_packet *packet,
struct fw_iso_buffer *buffer,
unsigned long payload);
int fw_iso_context_start(struct fw_iso_context *ctx,
int cycle, int sync, int tags);
int fw_iso_context_stop(struct fw_iso_context *ctx);
void fw_iso_context_destroy(struct fw_iso_context *ctx);
void fw_iso_resource_manage(struct fw_card *card, int generation,
u64 channels_mask, int *channel, int *bandwidth, bool allocate);
struct fw_card_driver {
/*
......@@ -415,7 +388,7 @@ struct fw_card_driver {
struct fw_iso_context *
(*allocate_iso_context)(struct fw_card *card,
int type, size_t header_size);
int type, int channel, size_t header_size);
void (*free_iso_context)(struct fw_iso_context *ctx);
int (*start_iso)(struct fw_iso_context *ctx,
......@@ -429,54 +402,45 @@ struct fw_card_driver {
int (*stop_iso)(struct fw_iso_context *ctx);
};
int
fw_core_initiate_bus_reset(struct fw_card *card, int short_reset);
int fw_core_initiate_bus_reset(struct fw_card *card, int short_reset);
void
fw_send_request(struct fw_card *card, struct fw_transaction *t,
void fw_send_request(struct fw_card *card, struct fw_transaction *t,
int tcode, int destination_id, int generation, int speed,
unsigned long long offset, void *data, size_t length,
unsigned long long offset, void *payload, size_t length,
fw_transaction_callback_t callback, void *callback_data);
int fw_run_transaction(struct fw_card *card, int tcode, int destination_id,
int generation, int speed, unsigned long long offset,
void *data, size_t length);
int fw_cancel_transaction(struct fw_card *card,
struct fw_transaction *transaction);
void fw_flush_transactions(struct fw_card *card);
int fw_run_transaction(struct fw_card *card, int tcode, int destination_id,
int generation, int speed, unsigned long long offset,
void *payload, size_t length);
void fw_send_phy_config(struct fw_card *card,
int node_id, int generation, int gap_count);
static inline int fw_stream_packet_destination_id(int tag, int channel, int sy)
{
return tag << 14 | channel << 8 | sy;
}
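A small illustration of the helper above, with arbitrary tag/channel/sy values and an invented wrapper name: for TCODE_STREAM_DATA the destination_id argument of fw_send_request() carries this packed triple rather than a node ID, and as fw_fill_request() shows, the offset argument is not used for stream packets.

static void example_send_stream(struct fw_card *card, struct fw_transaction *t,
				int generation, int speed,
				void *payload, size_t length,
				fw_transaction_callback_t callback, void *data)
{
	/* tag 3, channel 13, sy 0: arbitrary values for the sketch */
	int dest = fw_stream_packet_destination_id(3, 13, 0);

	/* The offset is ignored for stream packets; pass 0. */
	fw_send_request(card, t, TCODE_STREAM_DATA, dest, generation, speed,
			0, payload, length, callback, data);
}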
/*
* Called by the topology code to inform the device code of node
* activity; found, lost, or updated nodes.
*/
void
fw_node_event(struct fw_card *card, struct fw_node *node, int event);
void fw_node_event(struct fw_card *card, struct fw_node *node, int event);
/* API used by card level drivers */
void
fw_card_initialize(struct fw_card *card, const struct fw_card_driver *driver,
struct device *device);
int
fw_card_add(struct fw_card *card,
u32 max_receive, u32 link_speed, u64 guid);
void
fw_core_remove_card(struct fw_card *card);
void
fw_core_handle_bus_reset(struct fw_card *card,
int node_id, int generation,
int self_id_count, u32 *self_ids);
void
fw_core_handle_request(struct fw_card *card, struct fw_packet *request);
void
fw_core_handle_response(struct fw_card *card, struct fw_packet *packet);
void fw_card_initialize(struct fw_card *card,
const struct fw_card_driver *driver, struct device *device);
int fw_card_add(struct fw_card *card,
u32 max_receive, u32 link_speed, u64 guid);
void fw_core_remove_card(struct fw_card *card);
void fw_core_handle_bus_reset(struct fw_card *card, int node_id,
int generation, int self_id_count, u32 *self_ids);
void fw_core_handle_request(struct fw_card *card, struct fw_packet *request);
void fw_core_handle_response(struct fw_card *card, struct fw_packet *packet);
extern int fw_irm_set_broadcast_channel_register(struct device *dev,
void *data);
#endif /* __fw_transaction_h */
......@@ -10,7 +10,8 @@ drm-y := drm_auth.o drm_bufs.o drm_cache.o \
drm_lock.o drm_memory.o drm_proc.o drm_stub.o drm_vm.o \
drm_agpsupport.o drm_scatter.o ati_pcigart.o drm_pci.o \
drm_sysfs.o drm_hashtab.o drm_sman.o drm_mm.o \
drm_crtc.o drm_crtc_helper.o drm_modes.o drm_edid.o
drm_crtc.o drm_crtc_helper.o drm_modes.o drm_edid.o \
drm_info.o drm_debugfs.o
drm-$(CONFIG_COMPAT) += drm_ioc32.o
......
/**
* \file drm_debugfs.c
* debugfs support for DRM
*
* \author Ben Gamari <bgamari@gmail.com>
*/
/*
* Created: Sun Dec 21 13:08:50 2008 by bgamari@gmail.com
*
* Copyright 2008 Ben Gamari <bgamari@gmail.com>
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next
* paragraph) shall be included in all copies or substantial portions of the
* Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
#include <linux/debugfs.h>
#include <linux/seq_file.h>
#include "drmP.h"
#if defined(CONFIG_DEBUG_FS)
/***************************************************
* Initialization, etc.
**************************************************/
static struct drm_info_list drm_debugfs_list[] = {
{"name", drm_name_info, 0},
{"vm", drm_vm_info, 0},
{"clients", drm_clients_info, 0},
{"queues", drm_queues_info, 0},
{"bufs", drm_bufs_info, 0},
{"gem_names", drm_gem_name_info, DRIVER_GEM},
{"gem_objects", drm_gem_object_info, DRIVER_GEM},
#if DRM_DEBUG_CODE
{"vma", drm_vma_info, 0},
#endif
};
#define DRM_DEBUGFS_ENTRIES ARRAY_SIZE(drm_debugfs_list)
static int drm_debugfs_open(struct inode *inode, struct file *file)
{
struct drm_info_node *node = inode->i_private;
return single_open(file, node->info_ent->show, node);
}
static const struct file_operations drm_debugfs_fops = {
.owner = THIS_MODULE,
.open = drm_debugfs_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
/**
* Initialize a given set of debugfs files for a device
*
* \param files The array of files to create
* \param count The number of files given
* \param root DRI debugfs dir entry.
* \param minor device minor number
* \return Zero on success, non-zero on failure
*
* Create a given set of debugfs files represented by an array of
* drm_info_list entries in the given root directory.
*/
int drm_debugfs_create_files(struct drm_info_list *files, int count,
struct dentry *root, struct drm_minor *minor)
{
struct drm_device *dev = minor->dev;
struct dentry *ent;
struct drm_info_node *tmp;
char name[64];
int i, ret;
for (i = 0; i < count; i++) {
u32 features = files[i].driver_features;
if (features != 0 &&
(dev->driver->driver_features & features) != features)
continue;
tmp = drm_alloc(sizeof(struct drm_info_node),
_DRM_DRIVER);
ent = debugfs_create_file(files[i].name, S_IFREG | S_IRUGO,
root, tmp, &drm_debugfs_fops);
if (!ent) {
DRM_ERROR("Cannot create /debugfs/dri/%s/%s\n",
name, files[i].name);
drm_free(tmp, sizeof(struct drm_info_node),
_DRM_DRIVER);
ret = -1;
goto fail;
}
tmp->minor = minor;
tmp->dent = ent;
tmp->info_ent = &files[i];
list_add(&(tmp->list), &(minor->debugfs_nodes.list));
}
return 0;
fail:
drm_debugfs_remove_files(files, count, minor);
return ret;
}
EXPORT_SYMBOL(drm_debugfs_create_files);
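As a usage sketch (the file name, show function and list below are invented rather than taken from an in-tree driver), a driver typically declares its own drm_info_list and hands it to drm_debugfs_create_files() from a debugfs_init hook, mirroring the core drm_debugfs_list above; teardown goes through drm_debugfs_remove_files().

static int example_status_info(struct seq_file *m, void *data)
{
	struct drm_info_node *node = (struct drm_info_node *) m->private;
	struct drm_device *dev = node->minor->dev;

	seq_printf(m, "irq_enabled: %d\n", dev->irq_enabled);
	return 0;
}

static struct drm_info_list example_debugfs_list[] = {
	{"example_status", example_status_info, 0},
};

static int example_debugfs_init(struct drm_minor *minor)
{
	return drm_debugfs_create_files(example_debugfs_list,
					ARRAY_SIZE(example_debugfs_list),
					minor->debugfs_root, minor);
}

static void example_debugfs_cleanup(struct drm_minor *minor)
{
	drm_debugfs_remove_files(example_debugfs_list,
				 ARRAY_SIZE(example_debugfs_list), minor);
}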
/**
* Initialize the DRI debugfs filesystem for a device
*
* \param dev DRM device
* \param minor device minor number
* \param root DRI debugfs dir entry.
*
* Create the DRI debugfs root entry "/debugfs/dri", the device debugfs root entry
* "/debugfs/dri/%minor%/", and each entry in debugfs_list as
* "/debugfs/dri/%minor%/%name%".
*/
int drm_debugfs_init(struct drm_minor *minor, int minor_id,
struct dentry *root)
{
struct drm_device *dev = minor->dev;
char name[64];
int ret;
INIT_LIST_HEAD(&minor->debugfs_nodes.list);
sprintf(name, "%d", minor_id);
minor->debugfs_root = debugfs_create_dir(name, root);
if (!minor->debugfs_root) {
DRM_ERROR("Cannot create /debugfs/dri/%s\n", name);
return -1;
}
ret = drm_debugfs_create_files(drm_debugfs_list, DRM_DEBUGFS_ENTRIES,
minor->debugfs_root, minor);
if (ret) {
debugfs_remove(minor->debugfs_root);
minor->debugfs_root = NULL;
DRM_ERROR("Failed to create core drm debugfs files\n");
return ret;
}
if (dev->driver->debugfs_init) {
ret = dev->driver->debugfs_init(minor);
if (ret) {
DRM_ERROR("DRM: Driver failed to initialize "
"/debugfs/dri.\n");
return ret;
}
}
return 0;
}
/**
* Remove a list of debugfs files
*
* \param files The list of files
* \param count The number of files
* \param minor The minor of which we should remove the files
* \return always zero.
*
* Remove all debugfs entries created by debugfs_init().
*/
int drm_debugfs_remove_files(struct drm_info_list *files, int count,
struct drm_minor *minor)
{
struct list_head *pos, *q;
struct drm_info_node *tmp;
int i;
for (i = 0; i < count; i++) {
list_for_each_safe(pos, q, &minor->debugfs_nodes.list) {
tmp = list_entry(pos, struct drm_info_node, list);
if (tmp->info_ent == &files[i]) {
debugfs_remove(tmp->dent);
list_del(pos);
drm_free(tmp, sizeof(struct drm_info_node),
_DRM_DRIVER);
}
}
}
return 0;
}
EXPORT_SYMBOL(drm_debugfs_remove_files);
/**
* Cleanup the debugfs filesystem resources.
*
* \param minor device minor number.
* \return always zero.
*
* Remove all debugfs entries created by debugfs_init().
*/
int drm_debugfs_cleanup(struct drm_minor *minor)
{
struct drm_device *dev = minor->dev;
if (!minor->debugfs_root)
return 0;
if (dev->driver->debugfs_cleanup)
dev->driver->debugfs_cleanup(minor);
drm_debugfs_remove_files(drm_debugfs_list, DRM_DEBUGFS_ENTRIES, minor);
debugfs_remove(minor->debugfs_root);
minor->debugfs_root = NULL;
return 0;
}
#endif /* CONFIG_DEBUG_FS */
......@@ -46,9 +46,11 @@
* OTHER DEALINGS IN THE SOFTWARE.
*/
#include <linux/debugfs.h>
#include "drmP.h"
#include "drm_core.h"
static int drm_version(struct drm_device *dev, void *data,
struct drm_file *file_priv);
......@@ -178,7 +180,7 @@ int drm_lastclose(struct drm_device * dev)
/* Clear AGP information */
if (drm_core_has_AGP(dev) && dev->agp &&
!drm_core_check_feature(dev, DRIVER_MODESET)) {
!drm_core_check_feature(dev, DRIVER_MODESET)) {
struct drm_agp_mem *entry, *tempe;
/* Remove AGP resources, but leave dev->agp
......@@ -382,6 +384,13 @@ static int __init drm_core_init(void)
goto err_p3;
}
drm_debugfs_root = debugfs_create_dir("dri", NULL);
if (!drm_debugfs_root) {
DRM_ERROR("Cannot create /debugfs/dri\n");
ret = -1;
goto err_p3;
}
drm_mem_init();
DRM_INFO("Initialized %s %d.%d.%d %s\n",
......@@ -400,6 +409,7 @@ static int __init drm_core_init(void)
static void __exit drm_core_exit(void)
{
remove_proc_entry("dri", NULL);
debugfs_remove(drm_debugfs_root);
drm_sysfs_destroy();
unregister_chrdev(DRM_MAJOR, "drm");
......
/**
* \file drm_info.c
* DRM info file implementations
*
* \author Ben Gamari <bgamari@gmail.com>
*/
/*
* Created: Sun Dec 21 13:09:50 2008 by bgamari@gmail.com
*
* Copyright 1999 Precision Insight, Inc., Cedar Park, Texas.
* Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
* Copyright 2008 Ben Gamari <bgamari@gmail.com>
* All Rights Reserved.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next
* paragraph) shall be included in all copies or substantial portions of the
* Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
#include <linux/seq_file.h>
#include "drmP.h"
/**
* Called when "/proc/dri/.../name" is read.
*
* Prints the device name together with the bus id if available.
*/
int drm_name_info(struct seq_file *m, void *data)
{
struct drm_info_node *node = (struct drm_info_node *) m->private;
struct drm_minor *minor = node->minor;
struct drm_device *dev = minor->dev;
struct drm_master *master = minor->master;
if (!master)
return 0;
if (master->unique) {
seq_printf(m, "%s %s %s\n",
dev->driver->pci_driver.name,
pci_name(dev->pdev), master->unique);
} else {
seq_printf(m, "%s %s\n", dev->driver->pci_driver.name,
pci_name(dev->pdev));
}
return 0;
}
/**
* Called when "/proc/dri/.../vm" is read.
*
* Prints information about all mappings in drm_device::maplist.
*/
int drm_vm_info(struct seq_file *m, void *data)
{
struct drm_info_node *node = (struct drm_info_node *) m->private;
struct drm_device *dev = node->minor->dev;
struct drm_map *map;
struct drm_map_list *r_list;
/* Hardcoded from _DRM_FRAME_BUFFER,
_DRM_REGISTERS, _DRM_SHM, _DRM_AGP, and
_DRM_SCATTER_GATHER and _DRM_CONSISTENT */
const char *types[] = { "FB", "REG", "SHM", "AGP", "SG", "PCI" };
const char *type;
int i;
mutex_lock(&dev->struct_mutex);
seq_printf(m, "slot offset size type flags address mtrr\n\n");
i = 0;
list_for_each_entry(r_list, &dev->maplist, head) {
map = r_list->map;
if (!map)
continue;
if (map->type < 0 || map->type > 5)
type = "??";
else
type = types[map->type];
seq_printf(m, "%4d 0x%08lx 0x%08lx %4.4s 0x%02x 0x%08lx ",
i,
map->offset,
map->size, type, map->flags,
(unsigned long) r_list->user_token);
if (map->mtrr < 0)
seq_printf(m, "none\n");
else
seq_printf(m, "%4d\n", map->mtrr);
i++;
}
mutex_unlock(&dev->struct_mutex);
return 0;
}
/**
* Called when "/proc/dri/.../queues" is read.
*/
int drm_queues_info(struct seq_file *m, void *data)
{
struct drm_info_node *node = (struct drm_info_node *) m->private;
struct drm_device *dev = node->minor->dev;
int i;
struct drm_queue *q;
mutex_lock(&dev->struct_mutex);
seq_printf(m, " ctx/flags use fin"
" blk/rw/rwf wait flushed queued"
" locks\n\n");
for (i = 0; i < dev->queue_count; i++) {
q = dev->queuelist[i];
atomic_inc(&q->use_count);
seq_printf(m, "%5d/0x%03x %5d %5d"
" %5d/%c%c/%c%c%c %5Zd\n",
i,
q->flags,
atomic_read(&q->use_count),
atomic_read(&q->finalization),
atomic_read(&q->block_count),
atomic_read(&q->block_read) ? 'r' : '-',
atomic_read(&q->block_write) ? 'w' : '-',
waitqueue_active(&q->read_queue) ? 'r' : '-',
waitqueue_active(&q->write_queue) ? 'w' : '-',
waitqueue_active(&q->flush_queue) ? 'f' : '-',
DRM_BUFCOUNT(&q->waitlist));
atomic_dec(&q->use_count);
}
mutex_unlock(&dev->struct_mutex);
return 0;
}
/**
* Called when "/proc/dri/.../bufs" is read.
*/
int drm_bufs_info(struct seq_file *m, void *data)
{
struct drm_info_node *node = (struct drm_info_node *) m->private;
struct drm_device *dev = node->minor->dev;
struct drm_device_dma *dma;
int i, seg_pages;
mutex_lock(&dev->struct_mutex);
dma = dev->dma;
if (!dma) {
mutex_unlock(&dev->struct_mutex);
return 0;
}
seq_printf(m, " o size count free segs pages kB\n\n");
for (i = 0; i <= DRM_MAX_ORDER; i++) {
if (dma->bufs[i].buf_count) {
seg_pages = dma->bufs[i].seg_count * (1 << dma->bufs[i].page_order);
seq_printf(m, "%2d %8d %5d %5d %5d %5d %5ld\n",
i,
dma->bufs[i].buf_size,
dma->bufs[i].buf_count,
atomic_read(&dma->bufs[i].freelist.count),
dma->bufs[i].seg_count,
seg_pages,
seg_pages * PAGE_SIZE / 1024);
}
}
seq_printf(m, "\n");
for (i = 0; i < dma->buf_count; i++) {
if (i && !(i % 32))
seq_printf(m, "\n");
seq_printf(m, " %d", dma->buflist[i]->list);
}
seq_printf(m, "\n");
mutex_unlock(&dev->struct_mutex);
return 0;
}
/**
* Called when "/proc/dri/.../vblank" is read.
*/
int drm_vblank_info(struct seq_file *m, void *data)
{
struct drm_info_node *node = (struct drm_info_node *) m->private;
struct drm_device *dev = node->minor->dev;
int crtc;
mutex_lock(&dev->struct_mutex);
for (crtc = 0; crtc < dev->num_crtcs; crtc++) {
seq_printf(m, "CRTC %d enable: %d\n",
crtc, atomic_read(&dev->vblank_refcount[crtc]));
seq_printf(m, "CRTC %d counter: %d\n",
crtc, drm_vblank_count(dev, crtc));
seq_printf(m, "CRTC %d last wait: %d\n",
crtc, dev->last_vblank_wait[crtc]);
seq_printf(m, "CRTC %d in modeset: %d\n",
crtc, dev->vblank_inmodeset[crtc]);
}
mutex_unlock(&dev->struct_mutex);
return 0;
}
/**
* Called when "/proc/dri/.../clients" is read.
*
*/
int drm_clients_info(struct seq_file *m, void *data)
{
struct drm_info_node *node = (struct drm_info_node *) m->private;
struct drm_device *dev = node->minor->dev;
struct drm_file *priv;
mutex_lock(&dev->struct_mutex);
seq_printf(m, "a dev pid uid magic ioctls\n\n");
list_for_each_entry(priv, &dev->filelist, lhead) {
seq_printf(m, "%c %3d %5d %5d %10u %10lu\n",
priv->authenticated ? 'y' : 'n',
priv->minor->index,
priv->pid,
priv->uid, priv->magic, priv->ioctl_count);
}
mutex_unlock(&dev->struct_mutex);
return 0;
}
int drm_gem_one_name_info(int id, void *ptr, void *data)
{
struct drm_gem_object *obj = ptr;
struct seq_file *m = data;
seq_printf(m, "name %d size %zd\n", obj->name, obj->size);
seq_printf(m, "%6d %8zd %7d %8d\n",
obj->name, obj->size,
atomic_read(&obj->handlecount.refcount),
atomic_read(&obj->refcount.refcount));
return 0;
}
int drm_gem_name_info(struct seq_file *m, void *data)
{
struct drm_info_node *node = (struct drm_info_node *) m->private;
struct drm_device *dev = node->minor->dev;
seq_printf(m, " name size handles refcount\n");
idr_for_each(&dev->object_name_idr, drm_gem_one_name_info, m);
return 0;
}
int drm_gem_object_info(struct seq_file *m, void* data)
{
struct drm_info_node *node = (struct drm_info_node *) m->private;
struct drm_device *dev = node->minor->dev;
seq_printf(m, "%d objects\n", atomic_read(&dev->object_count));
seq_printf(m, "%d object bytes\n", atomic_read(&dev->object_memory));
seq_printf(m, "%d pinned\n", atomic_read(&dev->pin_count));
seq_printf(m, "%d pin bytes\n", atomic_read(&dev->pin_memory));
seq_printf(m, "%d gtt bytes\n", atomic_read(&dev->gtt_memory));
seq_printf(m, "%d gtt total\n", dev->gtt_total);
return 0;
}
#if DRM_DEBUG_CODE
int drm_vma_info(struct seq_file *m, void *data)
{
struct drm_info_node *node = (struct drm_info_node *) m->private;
struct drm_device *dev = node->minor->dev;
struct drm_vma_entry *pt;
struct vm_area_struct *vma;
#if defined(__i386__)
unsigned int pgprot;
#endif
mutex_lock(&dev->struct_mutex);
seq_printf(m, "vma use count: %d, high_memory = %p, 0x%08llx\n",
atomic_read(&dev->vma_count),
high_memory, (u64)virt_to_phys(high_memory));
list_for_each_entry(pt, &dev->vmalist, head) {
vma = pt->vma;
if (!vma)
continue;
seq_printf(m,
"\n%5d 0x%08lx-0x%08lx %c%c%c%c%c%c 0x%08lx000",
pt->pid, vma->vm_start, vma->vm_end,
vma->vm_flags & VM_READ ? 'r' : '-',
vma->vm_flags & VM_WRITE ? 'w' : '-',
vma->vm_flags & VM_EXEC ? 'x' : '-',
vma->vm_flags & VM_MAYSHARE ? 's' : 'p',
vma->vm_flags & VM_LOCKED ? 'l' : '-',
vma->vm_flags & VM_IO ? 'i' : '-',
vma->vm_pgoff);
#if defined(__i386__)
pgprot = pgprot_val(vma->vm_page_prot);
seq_printf(m, " %c%c%c%c%c%c%c%c%c",
pgprot & _PAGE_PRESENT ? 'p' : '-',
pgprot & _PAGE_RW ? 'w' : 'r',
pgprot & _PAGE_USER ? 'u' : 's',
pgprot & _PAGE_PWT ? 't' : 'b',
pgprot & _PAGE_PCD ? 'u' : 'c',
pgprot & _PAGE_ACCESSED ? 'a' : '-',
pgprot & _PAGE_DIRTY ? 'd' : '-',
pgprot & _PAGE_PSE ? 'm' : 'k',
pgprot & _PAGE_GLOBAL ? 'g' : 'l');
#endif
seq_printf(m, "\n");
}
mutex_unlock(&dev->struct_mutex);
return 0;
}
#endif
......@@ -50,6 +50,7 @@ struct idr drm_minors_idr;
struct class *drm_class;
struct proc_dir_entry *drm_proc_root;
struct dentry *drm_debugfs_root;
static int drm_minor_get_id(struct drm_device *dev, int type)
{
......@@ -313,7 +314,15 @@ static int drm_get_minor(struct drm_device *dev, struct drm_minor **minor, int t
goto err_mem;
}
} else
new_minor->dev_root = NULL;
new_minor->proc_root = NULL;
#if defined(CONFIG_DEBUG_FS)
ret = drm_debugfs_init(new_minor, minor_id, drm_debugfs_root);
if (ret) {
DRM_ERROR("DRM: Failed to initialize /debugfs/dri.\n");
goto err_g2;
}
#endif
ret = drm_sysfs_device_add(new_minor);
if (ret) {
......@@ -451,6 +460,10 @@ int drm_put_minor(struct drm_minor **minor_p)
if (minor->type == DRM_MINOR_LEGACY)
drm_proc_cleanup(minor, drm_proc_root);
#if defined(CONFIG_DEBUG_FS)
drm_debugfs_cleanup(minor);
#endif
drm_sysfs_device_remove(minor);
idr_remove(&drm_minors_idr, minor->index);
......
......@@ -7,7 +7,7 @@ i915-y := i915_drv.o i915_dma.o i915_irq.o i915_mem.o \
i915_suspend.o \
i915_gem.o \
i915_gem_debug.o \
i915_gem_proc.o \
i915_gem_debugfs.o \
i915_gem_tiling.o \
intel_display.o \
intel_crt.o \
......
......@@ -150,8 +150,10 @@ static struct drm_driver driver = {
.get_reg_ofs = drm_core_get_reg_ofs,
.master_create = i915_master_create,
.master_destroy = i915_master_destroy,
.proc_init = i915_gem_proc_init,
.proc_cleanup = i915_gem_proc_cleanup,
#if defined(CONFIG_DEBUG_FS)
.debugfs_init = i915_gem_debugfs_init,
.debugfs_cleanup = i915_gem_debugfs_cleanup,
#endif
.gem_init_object = i915_gem_init_object,
.gem_free_object = i915_gem_free_object,
.gem_vm_ops = &i915_gem_vm_ops,
......
......@@ -162,13 +162,13 @@ struct bdb_lvds_options {
u8 panel_type;
u8 rsvd1;
/* LVDS capabilities, stored in a dword */
u8 rsvd2:1;
u8 lvds_edid:1;
u8 pixel_dither:1;
u8 pfit_ratio_auto:1;
u8 pfit_gfx_mode_enhanced:1;
u8 pfit_text_mode_enhanced:1;
u8 pfit_mode:2;
u8 pfit_text_mode_enhanced:1;
u8 pfit_gfx_mode_enhanced:1;
u8 pfit_ratio_auto:1;
u8 pixel_dither:1;
u8 lvds_edid:1;
u8 rsvd2:1;
u8 rsvd4;
} __attribute__((packed));
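The hunk above reverses the declaration order of the LVDS capability bits. As a generic C illustration of why the order matters (not i915-specific): GCC on the little-endian targets involved allocates bit-fields starting at the least significant bit, so the first member declared maps to bit 0 of the byte read from the VBT.

#include <stdint.h>
#include <stdio.h>

struct example_bits {
	uint8_t low:2;	/* occupies bits 1:0 of the byte */
	uint8_t mid:3;	/* bits 4:2 */
	uint8_t high:3;	/* bits 7:5 */
};

int main(void)
{
	union { struct example_bits b; uint8_t raw; } u = { .b = { .low = 3 } };

	/* Prints 0x03 on little-endian GCC targets: "low" landed in bits 1:0. */
	printf("0x%02x\n", u.raw);
	return 0;
}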
......
......@@ -265,7 +265,7 @@ static void intel_lvds_mode_set(struct drm_encoder *encoder,
pfit_control = 0;
if (!IS_I965G(dev)) {
if (dev_priv->panel_wants_dither)
if (dev_priv->panel_wants_dither || dev_priv->lvds_dither)
pfit_control |= PANEL_8TO6_DITHER_ENABLE;
}
else
......
......@@ -2171,7 +2171,7 @@ static const struct file_operations dv1394_fops=
* Export information about protocols/devices supported by this driver.
*/
#ifdef MODULE
static struct ieee1394_device_id dv1394_id_table[] = {
static const struct ieee1394_device_id dv1394_id_table[] = {
{
.match_flags = IEEE1394_MATCH_SPECIFIER_ID | IEEE1394_MATCH_VERSION,
.specifier_id = AVC_UNIT_SPEC_ID_ENTRY & 0xffffff,
......
......@@ -230,6 +230,7 @@ obj-$(CONFIG_PASEMI_MAC) += pasemi_mac_driver.o
pasemi_mac_driver-objs := pasemi_mac.o pasemi_mac_ethtool.o
obj-$(CONFIG_MLX4_CORE) += mlx4/
obj-$(CONFIG_ENC28J60) += enc28j60.o
obj-$(CONFIG_ETHOC) += ethoc.o
obj-$(CONFIG_XTENSA_XT2000_SONIC) += xtsonic.o
......