openeuler / Kernel
Commit a78b3371

Authored Aug 29, 2005 by Linus Torvalds

    Merge HEAD from master.kernel.org:/pub/scm/linux/kernel/git/roland/infiniband.git

Parents: 97c169a2, a4d61e84

Showing 69 changed files with 2732 additions and 1424 deletions (+2732 −1424)
Changed files (69):

drivers/infiniband/core/Makefile                  +0    -2
drivers/infiniband/core/agent.c                   +7    -6
drivers/infiniband/core/agent_priv.h              +5    -5
drivers/infiniband/core/cache.c                   +4    -2
drivers/infiniband/core/cm.c                      +64   -61
drivers/infiniband/core/cm_msgs.h                 +96   -98
drivers/infiniband/core/core_priv.h               +1    -1
drivers/infiniband/core/device.c                  +1    -0
drivers/infiniband/core/fmr_pool.c                +7    -1
drivers/infiniband/core/mad.c                     +8    -7
drivers/infiniband/core/mad_priv.h                +5    -5
drivers/infiniband/core/mad_rmpp.c                +245  -66
drivers/infiniband/core/packer.c                  +2    -1
drivers/infiniband/core/sa_query.c                +3    -3
drivers/infiniband/core/smi.c                     +7    -6
drivers/infiniband/core/sysfs.c                   +21   -19
drivers/infiniband/core/ucm.c                     +157  -307
drivers/infiniband/core/ucm.h                     +5    -8
drivers/infiniband/core/ud_header.c               +6    -5
drivers/infiniband/core/user_mad.c                +5    -5
drivers/infiniband/core/uverbs.h                  +9    -2
drivers/infiniband/core/uverbs_cmd.c              +180  -2
drivers/infiniband/core/uverbs_main.c             +21   -1
drivers/infiniband/core/uverbs_mem.c              +1    -0
drivers/infiniband/core/verbs.c                   +63   -2
drivers/infiniband/hw/mthca/Makefile              +1    -3
drivers/infiniband/hw/mthca/mthca_allocator.c     +116  -0
drivers/infiniband/hw/mthca/mthca_av.c            +14   -14
drivers/infiniband/hw/mthca/mthca_cmd.c           +81   -25
drivers/infiniband/hw/mthca/mthca_cmd.h           +13   -7
drivers/infiniband/hw/mthca/mthca_config_reg.h    +1    -0
drivers/infiniband/hw/mthca/mthca_cq.c            +81   -175
drivers/infiniband/hw/mthca/mthca_dev.h           +43   -9
drivers/infiniband/hw/mthca/mthca_doorbell.h      +7    -6
drivers/infiniband/hw/mthca/mthca_eq.c            +32   -31
drivers/infiniband/hw/mthca/mthca_mad.c           +6    -4
drivers/infiniband/hw/mthca/mthca_main.c          +106  -73
drivers/infiniband/hw/mthca/mthca_mcg.c           +20   -16
drivers/infiniband/hw/mthca/mthca_memfree.c       +9    -3
drivers/infiniband/hw/mthca/mthca_memfree.h       +3    -2
drivers/infiniband/hw/mthca/mthca_mr.c            +18   -17
drivers/infiniband/hw/mthca/mthca_pd.c            +1    -0
drivers/infiniband/hw/mthca/mthca_profile.c       +2    -0
drivers/infiniband/hw/mthca/mthca_profile.h       +2    -0
drivers/infiniband/hw/mthca/mthca_provider.c      +105  -10
drivers/infiniband/hw/mthca/mthca_provider.h      +41   -13
drivers/infiniband/hw/mthca/mthca_qp.c            +105  -257
drivers/infiniband/hw/mthca/mthca_srq.c           +591  -0
drivers/infiniband/hw/mthca/mthca_user.h          +11   -0
drivers/infiniband/hw/mthca/mthca_wqe.h           +114  -0
drivers/infiniband/ulp/ipoib/Makefile             +0    -2
drivers/infiniband/ulp/ipoib/ipoib.h              +7    -5
drivers/infiniband/ulp/ipoib/ipoib_fs.c           +1    -1
drivers/infiniband/ulp/ipoib/ipoib_ib.c           +4    -1
drivers/infiniband/ulp/ipoib/ipoib_main.c         +21   -12
drivers/infiniband/ulp/ipoib/ipoib_multicast.c    +5    -3
drivers/infiniband/ulp/ipoib/ipoib_verbs.c        +2    -1
drivers/infiniband/ulp/ipoib/ipoib_vlan.c         +0    -1
include/rdma/ib_cache.h                           +3    -1
include/rdma/ib_cm.h                              +46   -47
include/rdma/ib_fmr_pool.h                        +1    -1
include/rdma/ib_mad.h                             +14   -12
include/rdma/ib_pack.h                            +1    -1
include/rdma/ib_sa.h                              +11   -11
include/rdma/ib_smi.h                             +9    -11
include/rdma/ib_user_cm.h                         +14   -14
include/rdma/ib_user_mad.h                        +4    -6
include/rdma/ib_user_verbs.h                      +36   -3
include/rdma/ib_verbs.h                           +107  -11
drivers/infiniband/core/Makefile

-EXTRA_CFLAGS += -Idrivers/infiniband/include
 obj-$(CONFIG_INFINIBAND) +=		ib_core.o ib_mad.o ib_sa.o \
					ib_cm.o ib_umad.o ib_ucm.o
 obj-$(CONFIG_INFINIBAND_USER_VERBS) +=	ib_uverbs.o
...
drivers/infiniband/core/agent.c

 /*
- * Copyright (c) 2004 Mellanox Technologies Ltd.  All rights reserved.
- * Copyright (c) 2004 Infinicon Corporation.  All rights reserved.
- * Copyright (c) 2004 Intel Corporation.  All rights reserved.
- * Copyright (c) 2004 Topspin Corporation.  All rights reserved.
- * Copyright (c) 2004 Voltaire Corporation.  All rights reserved.
+ * Copyright (c) 2004, 2005 Mellanox Technologies Ltd.  All rights reserved.
+ * Copyright (c) 2004, 2005 Infinicon Corporation.  All rights reserved.
+ * Copyright (c) 2004, 2005 Intel Corporation.  All rights reserved.
+ * Copyright (c) 2004, 2005 Topspin Corporation.  All rights reserved.
+ * Copyright (c) 2004, 2005 Voltaire Corporation.  All rights reserved.
+ * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
  *
  * This software is available to you under a choice of one of two
  * licenses.  You may choose to be licensed under the terms of the GNU
...
@@ -40,7 +41,7 @@
 #include <asm/bug.h>

-#include <ib_smi.h>
+#include <rdma/ib_smi.h>

 #include "smi.h"
 #include "agent_priv.h"
...
drivers/infiniband/core/agent_priv.h

 /*
- * Copyright (c) 2004 Mellanox Technologies Ltd.  All rights reserved.
- * Copyright (c) 2004 Infinicon Corporation.  All rights reserved.
- * Copyright (c) 2004 Intel Corporation.  All rights reserved.
- * Copyright (c) 2004 Topspin Corporation.  All rights reserved.
- * Copyright (c) 2004 Voltaire Corporation.  All rights reserved.
+ * Copyright (c) 2004, 2005 Mellanox Technologies Ltd.  All rights reserved.
+ * Copyright (c) 2004, 2005 Infinicon Corporation.  All rights reserved.
+ * Copyright (c) 2004, 2005 Intel Corporation.  All rights reserved.
+ * Copyright (c) 2004, 2005 Topspin Corporation.  All rights reserved.
+ * Copyright (c) 2004, 2005 Voltaire Corporation.  All rights reserved.
  *
  * This software is available to you under a choice of one of two
  * licenses.  You may choose to be licensed under the terms of the GNU
...
drivers/infiniband/core/cache.c

 /*
  * Copyright (c) 2004 Topspin Communications.  All rights reserved.
+ * Copyright (c) 2005 Intel Corporation. All rights reserved.
+ * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
+ * Copyright (c) 2005 Voltaire, Inc. All rights reserved.
  *
  * This software is available to you under a choice of one of two
  * licenses.  You may choose to be licensed under the terms of the GNU
...
@@ -32,12 +35,11 @@
  * $Id: cache.c 1349 2004-12-16 21:09:43Z roland $
  */

-#include <linux/version.h>
 #include <linux/module.h>
 #include <linux/errno.h>
 #include <linux/slab.h>

-#include <ib_cache.h>
+#include <rdma/ib_cache.h>

 #include "core_priv.h"
...
drivers/infiniband/core/cm.c

...
@@ -43,8 +43,8 @@
 #include <linux/spinlock.h>
 #include <linux/workqueue.h>

-#include <ib_cache.h>
-#include <ib_cm.h>
+#include <rdma/ib_cache.h>
+#include <rdma/ib_cm.h>
 #include "cm_msgs.h"

 MODULE_AUTHOR("Sean Hefty");
...
@@ -83,7 +83,7 @@ struct cm_port {
 struct cm_device {
 	struct list_head list;
 	struct ib_device *device;
-	u64 ca_guid;
+	__be64 ca_guid;
 	struct cm_port port[0];
 };
...
@@ -100,8 +100,8 @@ struct cm_work {
 	struct list_head list;
 	struct cm_port *port;
 	struct ib_mad_recv_wc *mad_recv_wc;	/* Received MADs */
-	u32 local_id;				/* Established / timewait */
-	u32 remote_id;
+	__be32 local_id;			/* Established / timewait */
+	__be32 remote_id;
 	struct ib_cm_event cm_event;
 	struct ib_sa_path_rec path[0];
 };
...
@@ -110,8 +110,8 @@ struct cm_timewait_info {
 	struct cm_work work;			/* Must be first. */
 	struct rb_node remote_qp_node;
 	struct rb_node remote_id_node;
-	u64 remote_ca_guid;
-	u32 remote_qpn;
+	__be64 remote_ca_guid;
+	__be32 remote_qpn;
 	u8 inserted_remote_qp;
 	u8 inserted_remote_id;
 };
...
@@ -132,11 +132,11 @@ struct cm_id_private {
 	struct cm_av alt_av;

 	void *private_data;
-	u64 tid;
-	u32 local_qpn;
-	u32 remote_qpn;
-	u32 sq_psn;
-	u32 rq_psn;
+	__be64 tid;
+	__be32 local_qpn;
+	__be32 remote_qpn;
+	__be32 sq_psn;
+	__be32 rq_psn;
 	int timeout_ms;
 	enum ib_mtu path_mtu;
 	u8 private_data_len;
...
@@ -253,7 +253,7 @@ static void cm_set_ah_attr(struct ib_ah_attr *ah_attr, u8 port_num,
 			   u16 dlid, u8 sl, u16 src_path_bits)
 {
 	memset(ah_attr, 0, sizeof ah_attr);
-	ah_attr->dlid = be16_to_cpu(dlid);
+	ah_attr->dlid = dlid;
 	ah_attr->sl = sl;
 	ah_attr->src_path_bits = src_path_bits;
 	ah_attr->port_num = port_num;
...
@@ -264,7 +264,7 @@ static void cm_init_av_for_response(struct cm_port *port,
 {
 	av->port = port;
 	av->pkey_index = wc->pkey_index;
-	cm_set_ah_attr(&av->ah_attr, port->port_num, cpu_to_be16(wc->slid),
+	cm_set_ah_attr(&av->ah_attr, port->port_num, wc->slid,
 		       wc->sl, wc->dlid_path_bits);
 }
...
@@ -295,8 +295,9 @@ static int cm_init_av_by_path(struct ib_sa_path_rec *path, struct cm_av *av)
 		return ret;

 	av->port = port;
-	cm_set_ah_attr(&av->ah_attr, av->port->port_num, path->dlid,
-		       path->sl, path->slid & 0x7F);
+	cm_set_ah_attr(&av->ah_attr, av->port->port_num,
+		       be16_to_cpu(path->dlid), path->sl,
+		       be16_to_cpu(path->slid) & 0x7F);
 	av->packet_life_time = path->packet_life_time;
 	return 0;
 }
...
@@ -309,26 +310,26 @@ static int cm_alloc_id(struct cm_id_private *cm_id_priv)
 	do {
 		spin_lock_irqsave(&cm.lock, flags);
 		ret = idr_get_new_above(&cm.local_id_table, cm_id_priv, 1,
-					(int *) &cm_id_priv->id.local_id);
+					(__force int *) &cm_id_priv->id.local_id);
 		spin_unlock_irqrestore(&cm.lock, flags);
 	} while( (ret == -EAGAIN) && idr_pre_get(&cm.local_id_table, GFP_KERNEL) );
 	return ret;
 }

-static void cm_free_id(u32 local_id)
+static void cm_free_id(__be32 local_id)
 {
 	unsigned long flags;

 	spin_lock_irqsave(&cm.lock, flags);
-	idr_remove(&cm.local_id_table, (int) local_id);
+	idr_remove(&cm.local_id_table, (__force int) local_id);
 	spin_unlock_irqrestore(&cm.lock, flags);
 }

-static struct cm_id_private * cm_get_id(u32 local_id, u32 remote_id)
+static struct cm_id_private * cm_get_id(__be32 local_id, __be32 remote_id)
 {
 	struct cm_id_private *cm_id_priv;

-	cm_id_priv = idr_find(&cm.local_id_table, (int) local_id);
+	cm_id_priv = idr_find(&cm.local_id_table, (__force int) local_id);
 	if (cm_id_priv) {
 		if (cm_id_priv->id.remote_id == remote_id)
 			atomic_inc(&cm_id_priv->refcount);
...
@@ -339,7 +340,7 @@ static struct cm_id_private * cm_get_id(u32 local_id, u32 remote_id)
 	return cm_id_priv;
 }

-static struct cm_id_private * cm_acquire_id(u32 local_id, u32 remote_id)
+static struct cm_id_private * cm_acquire_id(__be32 local_id, __be32 remote_id)
 {
 	struct cm_id_private *cm_id_priv;
 	unsigned long flags;
...
@@ -356,8 +357,8 @@ static struct cm_id_private * cm_insert_listen(struct cm_id_private *cm_id_priv)
 	struct rb_node **link = &cm.listen_service_table.rb_node;
 	struct rb_node *parent = NULL;
 	struct cm_id_private *cur_cm_id_priv;
-	u64 service_id = cm_id_priv->id.service_id;
-	u64 service_mask = cm_id_priv->id.service_mask;
+	__be64 service_id = cm_id_priv->id.service_id;
+	__be64 service_mask = cm_id_priv->id.service_mask;

 	while (*link) {
 		parent = *link;
...
@@ -376,7 +377,7 @@ static struct cm_id_private * cm_insert_listen(struct cm_id_private *cm_id_priv)
 	return NULL;
 }

-static struct cm_id_private * cm_find_listen(u64 service_id)
+static struct cm_id_private * cm_find_listen(__be64 service_id)
 {
 	struct rb_node *node = cm.listen_service_table.rb_node;
 	struct cm_id_private *cm_id_priv;
...
@@ -400,8 +401,8 @@ static struct cm_timewait_info * cm_insert_remote_id(struct cm_timewait_info
 	struct rb_node **link = &cm.remote_id_table.rb_node;
 	struct rb_node *parent = NULL;
 	struct cm_timewait_info *cur_timewait_info;
-	u64 remote_ca_guid = timewait_info->remote_ca_guid;
-	u32 remote_id = timewait_info->work.remote_id;
+	__be64 remote_ca_guid = timewait_info->remote_ca_guid;
+	__be32 remote_id = timewait_info->work.remote_id;

 	while (*link) {
 		parent = *link;
...
@@ -424,8 +425,8 @@ static struct cm_timewait_info * cm_insert_remote_id(struct cm_timewait_info
 	return NULL;
 }

-static struct cm_timewait_info * cm_find_remote_id(u64 remote_ca_guid,
-						   u32 remote_id)
+static struct cm_timewait_info * cm_find_remote_id(__be64 remote_ca_guid,
+						   __be32 remote_id)
 {
 	struct rb_node *node = cm.remote_id_table.rb_node;
 	struct cm_timewait_info *timewait_info;
...
@@ -453,8 +454,8 @@ static struct cm_timewait_info * cm_insert_remote_qpn(struct cm_timewait_info
 	struct rb_node **link = &cm.remote_qp_table.rb_node;
 	struct rb_node *parent = NULL;
 	struct cm_timewait_info *cur_timewait_info;
-	u64 remote_ca_guid = timewait_info->remote_ca_guid;
-	u32 remote_qpn = timewait_info->remote_qpn;
+	__be64 remote_ca_guid = timewait_info->remote_ca_guid;
+	__be32 remote_qpn = timewait_info->remote_qpn;

 	while (*link) {
 		parent = *link;
...
@@ -484,7 +485,7 @@ static struct cm_id_private * cm_insert_remote_sidr(struct cm_id_private
 	struct rb_node *parent = NULL;
 	struct cm_id_private *cur_cm_id_priv;
 	union ib_gid *port_gid = &cm_id_priv->av.dgid;
-	u32 remote_id = cm_id_priv->id.remote_id;
+	__be32 remote_id = cm_id_priv->id.remote_id;

 	while (*link) {
 		parent = *link;
...
@@ -598,7 +599,7 @@ static void cm_cleanup_timewait(struct cm_timewait_info *timewait_info)
 	spin_unlock_irqrestore(&cm.lock, flags);
 }

-static struct cm_timewait_info * cm_create_timewait_info(u32 local_id)
+static struct cm_timewait_info * cm_create_timewait_info(__be32 local_id)
 {
 	struct cm_timewait_info *timewait_info;
...
@@ -715,14 +716,15 @@ void ib_destroy_cm_id(struct ib_cm_id *cm_id)
 EXPORT_SYMBOL(ib_destroy_cm_id);

 int ib_cm_listen(struct ib_cm_id *cm_id,
-		 u64 service_id,
-		 u64 service_mask)
+		 __be64 service_id,
+		 __be64 service_mask)
 {
 	struct cm_id_private *cm_id_priv, *cur_cm_id_priv;
 	unsigned long flags;
 	int ret = 0;

-	service_mask = service_mask ? service_mask : ~0ULL;
+	service_mask = service_mask ? service_mask :
+		       __constant_cpu_to_be64(~0ULL);
 	service_id &= service_mask;
 	if ((service_id & IB_SERVICE_ID_AGN_MASK) == IB_CM_ASSIGN_SERVICE_ID &&
 	    (service_id != IB_CM_ASSIGN_SERVICE_ID))
...
@@ -735,8 +737,8 @@ int ib_cm_listen(struct ib_cm_id *cm_id,
 	spin_lock_irqsave(&cm.lock, flags);
 	if (service_id == IB_CM_ASSIGN_SERVICE_ID) {
-		cm_id->service_id = __cpu_to_be64(cm.listen_service_id++);
-		cm_id->service_mask = ~0ULL;
+		cm_id->service_id = cpu_to_be64(cm.listen_service_id++);
+		cm_id->service_mask = __constant_cpu_to_be64(~0ULL);
 	} else {
 		cm_id->service_id = service_id;
 		cm_id->service_mask = service_mask;
...
@@ -752,18 +754,19 @@ int ib_cm_listen(struct ib_cm_id *cm_id,
 }
 EXPORT_SYMBOL(ib_cm_listen);

-static u64 cm_form_tid(struct cm_id_private *cm_id_priv,
-		       enum cm_msg_sequence msg_seq)
+static __be64 cm_form_tid(struct cm_id_private *cm_id_priv,
+			  enum cm_msg_sequence msg_seq)
 {
 	u64 hi_tid, low_tid;

 	hi_tid  = ((u64) cm_id_priv->av.port->mad_agent->hi_tid) << 32;
-	low_tid = (u64) (cm_id_priv->id.local_id | (msg_seq << 30));
+	low_tid = (u64) ((__force u32)cm_id_priv->id.local_id |
+			 (msg_seq << 30));
 	return cpu_to_be64(hi_tid | low_tid);
 }

 static void cm_format_mad_hdr(struct ib_mad_hdr *hdr,
-			      enum cm_msg_attr_id attr_id, u64 tid)
+			      __be16 attr_id, __be64 tid)
 {
 	hdr->base_version  = IB_MGMT_BASE_VERSION;
 	hdr->mgmt_class    = IB_MGMT_CLASS_CM;
...
@@ -896,7 +899,7 @@ int ib_send_cm_req(struct ib_cm_id *cm_id,
 		goto error1;
 	}
 	cm_id->service_id = param->service_id;
-	cm_id->service_mask = ~0ULL;
+	cm_id->service_mask = __constant_cpu_to_be64(~0ULL);
 	cm_id_priv->timeout_ms = cm_convert_to_ms(
 				    param->primary_path->packet_life_time) * 2 +
 				 cm_convert_to_ms(
...
@@ -963,7 +966,7 @@ static int cm_issue_rej(struct cm_port *port,
 	rej_msg->remote_comm_id = rcv_msg->local_comm_id;
 	rej_msg->local_comm_id = rcv_msg->remote_comm_id;
 	cm_rej_set_msg_rejected(rej_msg, msg_rejected);
-	rej_msg->reason = reason;
+	rej_msg->reason = cpu_to_be16(reason);

 	if (ari && ari_length) {
 		cm_rej_set_reject_info_len(rej_msg, ari_length);
...
@@ -977,8 +980,8 @@ static int cm_issue_rej(struct cm_port *port,
 	return ret;
 }

-static inline int cm_is_active_peer(u64 local_ca_guid, u64 remote_ca_guid,
-				    u32 local_qpn, u32 remote_qpn)
+static inline int cm_is_active_peer(__be64 local_ca_guid, __be64 remote_ca_guid,
+				    __be32 local_qpn, __be32 remote_qpn)
 {
 	return (be64_to_cpu(local_ca_guid) > be64_to_cpu(remote_ca_guid) ||
 		((local_ca_guid == remote_ca_guid) &&
...
@@ -1137,7 +1140,7 @@ static void cm_format_rej(struct cm_rej_msg *rej_msg,
 		break;
 	}

-	rej_msg->reason = reason;
+	rej_msg->reason = cpu_to_be16(reason);
 	if (ari && ari_length) {
 		cm_rej_set_reject_info_len(rej_msg, ari_length);
 		memcpy(rej_msg->ari, ari, ari_length);
...
@@ -1276,7 +1279,7 @@ static int cm_req_handler(struct cm_work *work)
 	cm_id_priv->id.cm_handler = listen_cm_id_priv->id.cm_handler;
 	cm_id_priv->id.context = listen_cm_id_priv->id.context;
 	cm_id_priv->id.service_id = req_msg->service_id;
-	cm_id_priv->id.service_mask = ~0ULL;
+	cm_id_priv->id.service_mask = __constant_cpu_to_be64(~0ULL);

 	cm_format_paths_from_req(req_msg, &work->path[0], &work->path[1]);
 	ret = cm_init_av_by_path(&work->path[0], &cm_id_priv->av);
...
@@ -1969,7 +1972,7 @@ static void cm_format_rej_event(struct cm_work *work)
 	param = &work->cm_event.param.rej_rcvd;
 	param->ari = rej_msg->ari;
 	param->ari_length = cm_rej_get_reject_info_len(rej_msg);
-	param->reason = rej_msg->reason;
+	param->reason = __be16_to_cpu(rej_msg->reason);
 	work->cm_event.private_data = &rej_msg->private_data;
 }
...
@@ -1978,20 +1981,20 @@ static struct cm_id_private * cm_acquire_rejected_id(struct cm_rej_msg *rej_msg)
 	struct cm_timewait_info *timewait_info;
 	struct cm_id_private *cm_id_priv;
 	unsigned long flags;
-	u32 remote_id;
+	__be32 remote_id;

 	remote_id = rej_msg->local_comm_id;

-	if (rej_msg->reason == IB_CM_REJ_TIMEOUT) {
+	if (__be16_to_cpu(rej_msg->reason) == IB_CM_REJ_TIMEOUT) {
 		spin_lock_irqsave(&cm.lock, flags);
-		timewait_info = cm_find_remote_id( *((u64 *) rej_msg->ari),
+		timewait_info = cm_find_remote_id( *((__be64 *) rej_msg->ari),
 						  remote_id);
 		if (!timewait_info) {
 			spin_unlock_irqrestore(&cm.lock, flags);
 			return NULL;
 		}
 		cm_id_priv = idr_find(&cm.local_id_table,
-				      (int) timewait_info->work.local_id);
+				      (__force int) timewait_info->work.local_id);
 		if (cm_id_priv) {
 			if (cm_id_priv->id.remote_id == remote_id)
 				atomic_inc(&cm_id_priv->refcount);
...
@@ -2032,7 +2035,7 @@ static int cm_rej_handler(struct cm_work *work)
 		/* fall through */
 	case IB_CM_REQ_RCVD:
 	case IB_CM_MRA_REQ_SENT:
-		if (rej_msg->reason == IB_CM_REJ_STALE_CONN)
+		if (__be16_to_cpu(rej_msg->reason) == IB_CM_REJ_STALE_CONN)
 			cm_enter_timewait(cm_id_priv);
 		else
 			cm_reset_to_idle(cm_id_priv);
...
@@ -2553,7 +2556,7 @@ static void cm_format_sidr_req(struct cm_sidr_req_msg *sidr_req_msg,
 	cm_format_mad_hdr(&sidr_req_msg->hdr, CM_SIDR_REQ_ATTR_ID,
 			  cm_form_tid(cm_id_priv, CM_MSG_SEQUENCE_SIDR));
 	sidr_req_msg->request_id = cm_id_priv->id.local_id;
-	sidr_req_msg->pkey = param->pkey;
+	sidr_req_msg->pkey = cpu_to_be16(param->pkey);
 	sidr_req_msg->service_id = param->service_id;

 	if (param->private_data && param->private_data_len)
...
@@ -2580,7 +2583,7 @@ int ib_send_cm_sidr_req(struct ib_cm_id *cm_id,
 		goto out;

 	cm_id->service_id = param->service_id;
-	cm_id->service_mask = ~0ULL;
+	cm_id->service_mask = __constant_cpu_to_be64(~0ULL);
 	cm_id_priv->timeout_ms = param->timeout_ms;
 	cm_id_priv->max_cm_retries = param->max_cm_retries;
 	ret = cm_alloc_msg(cm_id_priv, &msg);
...
@@ -2621,7 +2624,7 @@ static void cm_format_sidr_req_event(struct cm_work *work,
 	sidr_req_msg = (struct cm_sidr_req_msg *)
 				work->mad_recv_wc->recv_buf.mad;
 	param = &work->cm_event.param.sidr_req_rcvd;
-	param->pkey = sidr_req_msg->pkey;
+	param->pkey = __be16_to_cpu(sidr_req_msg->pkey);
 	param->listen_id = listen_id;
 	param->device = work->port->mad_agent->device;
 	param->port = work->port->port_num;
...
@@ -2645,7 +2648,7 @@ static int cm_sidr_req_handler(struct cm_work *work)
 	sidr_req_msg = (struct cm_sidr_req_msg *)
 				work->mad_recv_wc->recv_buf.mad;
 	wc = work->mad_recv_wc->wc;
-	cm_id_priv->av.dgid.global.subnet_prefix = wc->slid;
+	cm_id_priv->av.dgid.global.subnet_prefix = cpu_to_be64(wc->slid);
 	cm_id_priv->av.dgid.global.interface_id = 0;
 	cm_init_av_for_response(work->port, work->mad_recv_wc->wc,
 				&cm_id_priv->av);
...
@@ -2673,7 +2676,7 @@ static int cm_sidr_req_handler(struct cm_work *work)
 	cm_id_priv->id.cm_handler = cur_cm_id_priv->id.cm_handler;
 	cm_id_priv->id.context = cur_cm_id_priv->id.context;
 	cm_id_priv->id.service_id = sidr_req_msg->service_id;
-	cm_id_priv->id.service_mask = ~0ULL;
+	cm_id_priv->id.service_mask = __constant_cpu_to_be64(~0ULL);

 	cm_format_sidr_req_event(work, &cur_cm_id_priv->id);
 	cm_process_work(cm_id_priv, work);
...
@@ -3175,10 +3178,10 @@ int ib_cm_init_qp_attr(struct ib_cm_id *cm_id,
 }
 EXPORT_SYMBOL(ib_cm_init_qp_attr);

-static u64 cm_get_ca_guid(struct ib_device *device)
+static __be64 cm_get_ca_guid(struct ib_device *device)
 {
 	struct ib_device_attr *device_attr;
-	u64 guid;
+	__be64 guid;
 	int ret;

 	device_attr = kmalloc(sizeof *device_attr, GFP_KERNEL);
...
drivers/infiniband/core/cm_msgs.h

...
@@ -34,7 +34,7 @@
 #if !defined(CM_MSGS_H)
 #define CM_MSGS_H

-#include <ib_mad.h>
+#include <rdma/ib_mad.h>

 /*
  * Parameters to routines below should be in network-byte order, and values
...
@@ -43,19 +43,17 @@
 #define IB_CM_CLASS_VERSION	2 /* IB specification 1.2 */

-enum cm_msg_attr_id {
-	CM_REQ_ATTR_ID	    = __constant_htons(0x0010),
-	CM_MRA_ATTR_ID	    = __constant_htons(0x0011),
-	CM_REJ_ATTR_ID	    = __constant_htons(0x0012),
-	CM_REP_ATTR_ID	    = __constant_htons(0x0013),
-	CM_RTU_ATTR_ID	    = __constant_htons(0x0014),
-	CM_DREQ_ATTR_ID	    = __constant_htons(0x0015),
-	CM_DREP_ATTR_ID	    = __constant_htons(0x0016),
-	CM_SIDR_REQ_ATTR_ID = __constant_htons(0x0017),
-	CM_SIDR_REP_ATTR_ID = __constant_htons(0x0018),
-	CM_LAP_ATTR_ID	    = __constant_htons(0x0019),
-	CM_APR_ATTR_ID	    = __constant_htons(0x001A)
-};
+#define CM_REQ_ATTR_ID	    __constant_htons(0x0010)
+#define CM_MRA_ATTR_ID	    __constant_htons(0x0011)
+#define CM_REJ_ATTR_ID	    __constant_htons(0x0012)
+#define CM_REP_ATTR_ID	    __constant_htons(0x0013)
+#define CM_RTU_ATTR_ID	    __constant_htons(0x0014)
+#define CM_DREQ_ATTR_ID	    __constant_htons(0x0015)
+#define CM_DREP_ATTR_ID	    __constant_htons(0x0016)
+#define CM_SIDR_REQ_ATTR_ID __constant_htons(0x0017)
+#define CM_SIDR_REP_ATTR_ID __constant_htons(0x0018)
+#define CM_LAP_ATTR_ID	    __constant_htons(0x0019)
+#define CM_APR_ATTR_ID	    __constant_htons(0x001A)

 enum cm_msg_sequence {
 	CM_MSG_SEQUENCE_REQ,
...
@@ -67,35 +65,35 @@ enum cm_msg_sequence {
 struct cm_req_msg {
 	struct ib_mad_hdr hdr;

-	u32 local_comm_id;
-	u32 rsvd4;
-	u64 service_id;
-	u64 local_ca_guid;
-	u32 rsvd24;
-	u32 local_qkey;
+	__be32 local_comm_id;
+	__be32 rsvd4;
+	__be64 service_id;
+	__be64 local_ca_guid;
+	__be32 rsvd24;
+	__be32 local_qkey;
 	/* local QPN:24, responder resources:8 */
-	u32 offset32;
+	__be32 offset32;
 	/* local EECN:24, initiator depth:8 */
-	u32 offset36;
+	__be32 offset36;
 	/*
 	 * remote EECN:24, remote CM response timeout:5,
 	 * transport service type:2, end-to-end flow control:1
 	 */
-	u32 offset40;
+	__be32 offset40;
 	/* starting PSN:24, local CM response timeout:5, retry count:3 */
-	u32 offset44;
-	u16 pkey;
+	__be32 offset44;
+	__be16 pkey;
 	/* path MTU:4, RDC exists:1, RNR retry count:3. */
 	u8 offset50;
 	/* max CM Retries:4, SRQ:1, rsvd:3 */
 	u8 offset51;

-	u16 primary_local_lid;
-	u16 primary_remote_lid;
+	__be16 primary_local_lid;
+	__be16 primary_remote_lid;
 	union ib_gid primary_local_gid;
 	union ib_gid primary_remote_gid;
 	/* flow label:20, rsvd:6, packet rate:6 */
-	u32 primary_offset88;
+	__be32 primary_offset88;
 	u8 primary_traffic_class;
 	u8 primary_hop_limit;
 	/* SL:4, subnet local:1, rsvd:3 */
...
@@ -103,12 +101,12 @@ struct cm_req_msg {
 	/* local ACK timeout:5, rsvd:3 */
 	u8 primary_offset95;

-	u16 alt_local_lid;
-	u16 alt_remote_lid;
+	__be16 alt_local_lid;
+	__be16 alt_remote_lid;
 	union ib_gid alt_local_gid;
 	union ib_gid alt_remote_gid;
 	/* flow label:20, rsvd:6, packet rate:6 */
-	u32 alt_offset132;
+	__be32 alt_offset132;
 	u8 alt_traffic_class;
 	u8 alt_hop_limit;
 	/* SL:4, subnet local:1, rsvd:3 */
...
@@ -120,12 +118,12 @@ struct cm_req_msg {
 } __attribute__ ((packed));

-static inline u32 cm_req_get_local_qpn(struct cm_req_msg *req_msg)
+static inline __be32 cm_req_get_local_qpn(struct cm_req_msg *req_msg)
 {
 	return cpu_to_be32(be32_to_cpu(req_msg->offset32) >> 8);
 }

-static inline void cm_req_set_local_qpn(struct cm_req_msg *req_msg, u32 qpn)
+static inline void cm_req_set_local_qpn(struct cm_req_msg *req_msg, __be32 qpn)
 {
 	req_msg->offset32 = cpu_to_be32((be32_to_cpu(qpn) << 8) |
					(be32_to_cpu(req_msg->offset32) &
...
@@ -208,13 +206,13 @@ static inline void cm_req_set_flow_ctrl(struct cm_req_msg *req_msg,
					 0xFFFFFFFE));
 }

-static inline u32 cm_req_get_starting_psn(struct cm_req_msg *req_msg)
+static inline __be32 cm_req_get_starting_psn(struct cm_req_msg *req_msg)
 {
 	return cpu_to_be32(be32_to_cpu(req_msg->offset44) >> 8);
 }

 static inline void cm_req_set_starting_psn(struct cm_req_msg *req_msg,
-					   u32 starting_psn)
+					   __be32 starting_psn)
 {
 	req_msg->offset44 = cpu_to_be32((be32_to_cpu(starting_psn) << 8) |
					(be32_to_cpu(req_msg->offset44) &
					 0x000000FF));
...
@@ -288,13 +286,13 @@ static inline void cm_req_set_srq(struct cm_req_msg *req_msg, u8 srq)
			((srq & 0x1) << 3));
 }

-static inline u32 cm_req_get_primary_flow_label(struct cm_req_msg *req_msg)
+static inline __be32 cm_req_get_primary_flow_label(struct cm_req_msg *req_msg)
 {
-	return cpu_to_be32((be32_to_cpu(req_msg->primary_offset88) >> 12));
+	return cpu_to_be32(be32_to_cpu(req_msg->primary_offset88) >> 12);
 }

 static inline void cm_req_set_primary_flow_label(struct cm_req_msg *req_msg,
-						 u32 flow_label)
+						 __be32 flow_label)
 {
	req_msg->primary_offset88 =
		cpu_to_be32((be32_to_cpu(req_msg->primary_offset88) &
...
@@ -350,13 +348,13 @@ static inline void cm_req_set_primary_local_ack_timeout(struct cm_req_msg *req_m
		  (local_ack_timeout << 3));
 }

-static inline u32 cm_req_get_alt_flow_label(struct cm_req_msg *req_msg)
+static inline __be32 cm_req_get_alt_flow_label(struct cm_req_msg *req_msg)
 {
-	return cpu_to_be32((be32_to_cpu(req_msg->alt_offset132) >> 12));
+	return cpu_to_be32(be32_to_cpu(req_msg->alt_offset132) >> 12);
 }

 static inline void cm_req_set_alt_flow_label(struct cm_req_msg *req_msg,
-					     u32 flow_label)
+					     __be32 flow_label)
 {
	req_msg->alt_offset132 =
		cpu_to_be32((be32_to_cpu(req_msg->alt_offset132) &
...
@@ -422,8 +420,8 @@ enum cm_msg_response {
 struct cm_mra_msg {
 	struct ib_mad_hdr hdr;

-	u32 local_comm_id;
-	u32 remote_comm_id;
+	__be32 local_comm_id;
+	__be32 remote_comm_id;
 	/* message MRAed:2, rsvd:6 */
 	u8 offset8;
 	/* service timeout:5, rsvd:3 */
...
@@ -458,13 +456,13 @@ static inline void cm_mra_set_service_timeout(struct cm_mra_msg *mra_msg,
 struct cm_rej_msg {
 	struct ib_mad_hdr hdr;

-	u32 local_comm_id;
-	u32 remote_comm_id;
+	__be32 local_comm_id;
+	__be32 remote_comm_id;
 	/* message REJected:2, rsvd:6 */
 	u8 offset8;
 	/* reject info length:7, rsvd:1. */
 	u8 offset9;
-	u16 reason;
+	__be16 reason;
 	u8 ari[IB_CM_REJ_ARI_LENGTH];
 	u8 private_data[IB_CM_REJ_PRIVATE_DATA_SIZE];
...
@@ -495,45 +493,45 @@ static inline void cm_rej_set_reject_info_len(struct cm_rej_msg *rej_msg,
 struct cm_rep_msg {
 	struct ib_mad_hdr hdr;

-	u32 local_comm_id;
-	u32 remote_comm_id;
-	u32 local_qkey;
+	__be32 local_comm_id;
+	__be32 remote_comm_id;
+	__be32 local_qkey;
 	/* local QPN:24, rsvd:8 */
-	u32 offset12;
+	__be32 offset12;
 	/* local EECN:24, rsvd:8 */
-	u32 offset16;
+	__be32 offset16;
 	/* starting PSN:24 rsvd:8 */
-	u32 offset20;
+	__be32 offset20;
 	u8 resp_resources;
 	u8 initiator_depth;
 	/* target ACK delay:5, failover accepted:2, end-to-end flow control:1 */
 	u8 offset26;
 	/* RNR retry count:3, SRQ:1, rsvd:5 */
 	u8 offset27;
-	u64 local_ca_guid;
+	__be64 local_ca_guid;
 	u8 private_data[IB_CM_REP_PRIVATE_DATA_SIZE];

 } __attribute__ ((packed));

-static inline u32 cm_rep_get_local_qpn(struct cm_rep_msg *rep_msg)
+static inline __be32 cm_rep_get_local_qpn(struct cm_rep_msg *rep_msg)
 {
 	return cpu_to_be32(be32_to_cpu(rep_msg->offset12) >> 8);
 }

-static inline void cm_rep_set_local_qpn(struct cm_rep_msg *rep_msg, u32 qpn)
+static inline void cm_rep_set_local_qpn(struct cm_rep_msg *rep_msg, __be32 qpn)
 {
 	rep_msg->offset12 = cpu_to_be32((be32_to_cpu(qpn) << 8) |
					(be32_to_cpu(rep_msg->offset12) &
					 0x000000FF));
 }

-static inline u32 cm_rep_get_starting_psn(struct cm_rep_msg *rep_msg)
+static inline __be32 cm_rep_get_starting_psn(struct cm_rep_msg *rep_msg)
 {
 	return cpu_to_be32(be32_to_cpu(rep_msg->offset20) >> 8);
 }

 static inline void cm_rep_set_starting_psn(struct cm_rep_msg *rep_msg,
-					   u32 starting_psn)
+					   __be32 starting_psn)
 {
 	rep_msg->offset20 = cpu_to_be32((be32_to_cpu(starting_psn) << 8) |
					(be32_to_cpu(rep_msg->offset20) &
					 0x000000FF));
...
@@ -600,8 +598,8 @@ static inline void cm_rep_set_srq(struct cm_rep_msg *rep_msg, u8 srq)
 struct cm_rtu_msg {
 	struct ib_mad_hdr hdr;

-	u32 local_comm_id;
-	u32 remote_comm_id;
+	__be32 local_comm_id;
+	__be32 remote_comm_id;

 	u8 private_data[IB_CM_RTU_PRIVATE_DATA_SIZE];
...
@@ -610,21 +608,21 @@ struct cm_rtu_msg {
 struct cm_dreq_msg {
 	struct ib_mad_hdr hdr;

-	u32 local_comm_id;
-	u32 remote_comm_id;
+	__be32 local_comm_id;
+	__be32 remote_comm_id;
 	/* remote QPN/EECN:24, rsvd:8 */
-	u32 offset8;
+	__be32 offset8;

 	u8 private_data[IB_CM_DREQ_PRIVATE_DATA_SIZE];

 } __attribute__ ((packed));

-static inline u32 cm_dreq_get_remote_qpn(struct cm_dreq_msg *dreq_msg)
+static inline __be32 cm_dreq_get_remote_qpn(struct cm_dreq_msg *dreq_msg)
 {
 	return cpu_to_be32(be32_to_cpu(dreq_msg->offset8) >> 8);
 }

-static inline void cm_dreq_set_remote_qpn(struct cm_dreq_msg *dreq_msg, u32 qpn)
+static inline void cm_dreq_set_remote_qpn(struct cm_dreq_msg *dreq_msg, __be32 qpn)
 {
 	dreq_msg->offset8 = cpu_to_be32((be32_to_cpu(qpn) << 8) |
					(be32_to_cpu(dreq_msg->offset8) &
					 0x000000FF));
...
@@ -633,8 +631,8 @@ static inline void cm_dreq_set_remote_qpn(struct cm_dreq_msg *dreq_msg, u32 qpn)
 struct cm_drep_msg {
 	struct ib_mad_hdr hdr;

-	u32 local_comm_id;
-	u32 remote_comm_id;
+	__be32 local_comm_id;
+	__be32 remote_comm_id;

 	u8 private_data[IB_CM_DREP_PRIVATE_DATA_SIZE];
...
@@ -643,37 +641,37 @@ struct cm_drep_msg {
 struct cm_lap_msg {
 	struct ib_mad_hdr hdr;

-	u32 local_comm_id;
-	u32 remote_comm_id;
+	__be32 local_comm_id;
;
__be
32
remote_comm_id
;
u
32
rsvd8
;
__be
32
rsvd8
;
/* remote QPN/EECN:24, remote CM response timeout:5, rsvd:3 */
u
32
offset12
;
u
32
rsvd16
;
__be
32
offset12
;
__be
32
rsvd16
;
u
16
alt_local_lid
;
u
16
alt_remote_lid
;
__be
16
alt_local_lid
;
__be
16
alt_remote_lid
;
union
ib_gid
alt_local_gid
;
union
ib_gid
alt_remote_gid
;
/* flow label:20, rsvd:4, traffic class:8 */
u
32
offset56
;
__be
32
offset56
;
u8
alt_hop_limit
;
/* rsvd:2, packet rate:6 */
u
int8_t
offset61
;
u
8
offset61
;
/* SL:4, subnet local:1, rsvd:3 */
u
int8_t
offset62
;
u
8
offset62
;
/* local ACK timeout:5, rsvd:3 */
u
int8_t
offset63
;
u
8
offset63
;
u8
private_data
[
IB_CM_LAP_PRIVATE_DATA_SIZE
];
}
__attribute__
((
packed
));
static
inline
u
32
cm_lap_get_remote_qpn
(
struct
cm_lap_msg
*
lap_msg
)
static
inline
__be
32
cm_lap_get_remote_qpn
(
struct
cm_lap_msg
*
lap_msg
)
{
return
cpu_to_be32
(
be32_to_cpu
(
lap_msg
->
offset12
)
>>
8
);
}
static
inline
void
cm_lap_set_remote_qpn
(
struct
cm_lap_msg
*
lap_msg
,
u
32
qpn
)
static
inline
void
cm_lap_set_remote_qpn
(
struct
cm_lap_msg
*
lap_msg
,
__be
32
qpn
)
{
lap_msg
->
offset12
=
cpu_to_be32
((
be32_to_cpu
(
qpn
)
<<
8
)
|
(
be32_to_cpu
(
lap_msg
->
offset12
)
&
...
...
@@ -693,17 +691,17 @@ static inline void cm_lap_set_remote_resp_timeout(struct cm_lap_msg *lap_msg,
0xFFFFFF07
));
}
static
inline
u
32
cm_lap_get_flow_label
(
struct
cm_lap_msg
*
lap_msg
)
static
inline
__be
32
cm_lap_get_flow_label
(
struct
cm_lap_msg
*
lap_msg
)
{
return
be32_to_cpu
(
lap_msg
->
offset56
)
>>
12
;
return
cpu_to_be32
(
be32_to_cpu
(
lap_msg
->
offset56
)
>>
12
)
;
}
static
inline
void
cm_lap_set_flow_label
(
struct
cm_lap_msg
*
lap_msg
,
u
32
flow_label
)
__be
32
flow_label
)
{
lap_msg
->
offset56
=
cpu_to_be32
(
(
flow_label
<<
12
)
|
(
be32_to_cpu
(
lap_msg
->
offset56
)
&
0x00000FFF
));
lap_msg
->
offset56
=
cpu_to_be32
(
(
be32_to_cpu
(
lap_msg
->
offset56
)
&
0x00000FFF
)
|
(
be32_to_cpu
(
flow_label
)
<<
12
));
}
static
inline
u8
cm_lap_get_traffic_class
(
struct
cm_lap_msg
*
lap_msg
)
...
...
@@ -766,8 +764,8 @@ static inline void cm_lap_set_local_ack_timeout(struct cm_lap_msg *lap_msg,
struct
cm_apr_msg
{
struct
ib_mad_hdr
hdr
;
u
32
local_comm_id
;
u
32
remote_comm_id
;
__be
32
local_comm_id
;
__be
32
remote_comm_id
;
u8
info_length
;
u8
ap_status
;
...
...
@@ -779,10 +777,10 @@ struct cm_apr_msg {
struct
cm_sidr_req_msg
{
struct
ib_mad_hdr
hdr
;
u
32
request_id
;
u
16
pkey
;
u
16
rsvd
;
u
64
service_id
;
__be
32
request_id
;
__be
16
pkey
;
__be
16
rsvd
;
__be
64
service_id
;
u8
private_data
[
IB_CM_SIDR_REQ_PRIVATE_DATA_SIZE
];
}
__attribute__
((
packed
));
...
...
@@ -790,26 +788,26 @@ struct cm_sidr_req_msg {
struct
cm_sidr_rep_msg
{
struct
ib_mad_hdr
hdr
;
u
32
request_id
;
__be
32
request_id
;
u8
status
;
u8
info_length
;
u
16
rsvd
;
__be
16
rsvd
;
/* QPN:24, rsvd:8 */
u
32
offset8
;
u
64
service_id
;
u
32
qkey
;
__be
32
offset8
;
__be
64
service_id
;
__be
32
qkey
;
u8
info
[
IB_CM_SIDR_REP_INFO_LENGTH
];
u8
private_data
[
IB_CM_SIDR_REP_PRIVATE_DATA_SIZE
];
}
__attribute__
((
packed
));
static
inline
u
32
cm_sidr_rep_get_qpn
(
struct
cm_sidr_rep_msg
*
sidr_rep_msg
)
static
inline
__be
32
cm_sidr_rep_get_qpn
(
struct
cm_sidr_rep_msg
*
sidr_rep_msg
)
{
return
cpu_to_be32
(
be32_to_cpu
(
sidr_rep_msg
->
offset8
)
>>
8
);
}
static
inline
void
cm_sidr_rep_set_qpn
(
struct
cm_sidr_rep_msg
*
sidr_rep_msg
,
u
32
qpn
)
__be
32
qpn
)
{
sidr_rep_msg
->
offset8
=
cpu_to_be32
((
be32_to_cpu
(
qpn
)
<<
8
)
|
(
be32_to_cpu
(
sidr_rep_msg
->
offset8
)
&
...
...
drivers/infiniband/core/core_priv.h
...
...
@@ -38,7 +38,7 @@
 #include <linux/list.h>
 #include <linux/spinlock.h>

-#include <ib_verbs.h>
+#include <rdma/ib_verbs.h>

 int  ib_device_register_sysfs(struct ib_device *device);
 void ib_device_unregister_sysfs(struct ib_device *device);
...
...
drivers/infiniband/core/device.c
 /*
  * Copyright (c) 2004 Topspin Communications.  All rights reserved.
+ * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
  *
  * This software is available to you under a choice of one of two
  * licenses.  You may choose to be licensed under the terms of the GNU
...
...
drivers/infiniband/core/fmr_pool.c
...
...
@@ -39,7 +39,7 @@
 #include <linux/jhash.h>
 #include <linux/kthread.h>

-#include <ib_fmr_pool.h>
+#include <rdma/ib_fmr_pool.h>

 #include "core_priv.h"
...
...
@@ -334,6 +334,7 @@ void ib_destroy_fmr_pool(struct ib_fmr_pool *pool)
 {
	struct ib_pool_fmr *fmr;
	struct ib_pool_fmr *tmp;
+	LIST_HEAD(fmr_list);
	int                 i;

	kthread_stop(pool->thread);
...
...
@@ -341,6 +342,11 @@ void ib_destroy_fmr_pool(struct ib_fmr_pool *pool)
	i = 0;
	list_for_each_entry_safe(fmr, tmp, &pool->free_list, list) {
+		if (fmr->remap_count) {
+			INIT_LIST_HEAD(&fmr_list);
+			list_add_tail(&fmr->fmr->list, &fmr_list);
+			ib_unmap_fmr(&fmr_list);
+		}
		ib_dealloc_fmr(fmr->fmr);
		list_del(&fmr->list);
		kfree(fmr);
...
...
drivers/infiniband/core/mad.c
...
...
@@ -693,7 +693,8 @@ static int handle_outgoing_dr_smp(struct ib_mad_agent_private *mad_agent_priv,
		goto out;
	}

-	build_smp_wc(send_wr->wr_id, smp->dr_slid, send_wr->wr.ud.pkey_index,
+	build_smp_wc(send_wr->wr_id, be16_to_cpu(smp->dr_slid),
+		     send_wr->wr.ud.pkey_index,
		     send_wr->wr.ud.port_num, &mad_wc);

	/* No GRH for DR SMP */
...
...
@@ -1554,7 +1555,7 @@ static int is_data_mad(struct ib_mad_agent_private *mad_agent_priv,
 }

 struct ib_mad_send_wr_private*
-ib_find_send_mad(struct ib_mad_agent_private *mad_agent_priv, u64 tid)
+ib_find_send_mad(struct ib_mad_agent_private *mad_agent_priv, __be64 tid)
 {
	struct ib_mad_send_wr_private *mad_send_wr;
...
...
@@ -1597,7 +1598,7 @@ static void ib_mad_complete_recv(struct ib_mad_agent_private *mad_agent_priv,
	struct ib_mad_send_wr_private *mad_send_wr;
	struct ib_mad_send_wc mad_send_wc;
	unsigned long flags;
-	u64 tid;
+	__be64 tid;

	INIT_LIST_HEAD(&mad_recv_wc->rmpp_list);
	list_add(&mad_recv_wc->recv_buf.list, &mad_recv_wc->rmpp_list);
...
...
@@ -2165,7 +2166,8 @@ static void local_completions(void *data)
			 * Defined behavior is to complete response
			 * before request
			 */
-			build_smp_wc(local->wr_id, IB_LID_PERMISSIVE,
+			build_smp_wc(local->wr_id,
+				     be16_to_cpu(IB_LID_PERMISSIVE),
				     0 /* pkey index */,
				     recv_mad_agent->agent.port_num, &wc);
...
...
@@ -2294,7 +2296,7 @@ static void timeout_sends(void *data)
	spin_unlock_irqrestore(&mad_agent_priv->lock, flags);
 }

-static void ib_mad_thread_completion_handler(struct ib_cq *cq)
+static void ib_mad_thread_completion_handler(struct ib_cq *cq, void *arg)
 {
	struct ib_mad_port_private *port_priv = cq->cq_context;
...
...
@@ -2574,8 +2576,7 @@ static int ib_mad_port_open(struct ib_device *device,
	cq_size = (IB_MAD_QP_SEND_SIZE + IB_MAD_QP_RECV_SIZE) * 2;
	port_priv->cq = ib_create_cq(port_priv->device,
-				     (ib_comp_handler)
-					ib_mad_thread_completion_handler,
+				     ib_mad_thread_completion_handler,
				     NULL, port_priv, cq_size);
	if (IS_ERR(port_priv->cq)) {
		printk(KERN_ERR PFX "Couldn't create ib_mad CQ\n");
...
...
drivers/infiniband/core/mad_priv.h
...
...
@@ -40,8 +40,8 @@
 #include <linux/pci.h>
 #include <linux/kthread.h>
 #include <linux/workqueue.h>
-#include <ib_mad.h>
-#include <ib_smi.h>
+#include <rdma/ib_mad.h>
+#include <rdma/ib_smi.h>

 #define PFX "ib_mad: "
...
...
@@ -121,7 +121,7 @@ struct ib_mad_send_wr_private {
	struct ib_send_wr send_wr;
	struct ib_sge sg_list[IB_MAD_SEND_REQ_MAX_SG];
	u64 wr_id;			/* client WR ID */
-	u64 tid;
+	__be64 tid;
	unsigned long timeout;
	int retries;
	int retry;
...
...
@@ -144,7 +144,7 @@ struct ib_mad_local_private {
	struct ib_send_wr send_wr;
	struct ib_sge sg_list[IB_MAD_SEND_REQ_MAX_SG];
	u64 wr_id;			/* client WR ID */
-	u64 tid;
+	__be64 tid;
 };

 struct ib_mad_mgmt_method_table {
...
...
@@ -210,7 +210,7 @@ extern kmem_cache_t *ib_mad_cache;
 int ib_send_mad(struct ib_mad_send_wr_private *mad_send_wr);

 struct ib_mad_send_wr_private *
-ib_find_send_mad(struct ib_mad_agent_private *mad_agent_priv, u64 tid);
+ib_find_send_mad(struct ib_mad_agent_private *mad_agent_priv, __be64 tid);

 void ib_mad_complete_send_wr(struct ib_mad_send_wr_private *mad_send_wr,
			     struct ib_mad_send_wc *mad_send_wc);
...
...
drivers/infiniband/core/mad_rmpp.c
...
...
@@ -61,7 +61,7 @@ struct mad_rmpp_recv {
	int seg_num;
	int newwin;

-	u64 tid;
+	__be64 tid;
	u32 src_qp;
	u16 slid;
	u8 mgmt_class;
...
...
@@ -100,6 +100,121 @@ void ib_cancel_rmpp_recvs(struct ib_mad_agent_private *agent)
	}
 }

+static int data_offset(u8 mgmt_class)
+{
+	if (mgmt_class == IB_MGMT_CLASS_SUBN_ADM)
+		return offsetof(struct ib_sa_mad, data);
+	else if ((mgmt_class >= IB_MGMT_CLASS_VENDOR_RANGE2_START) &&
+		 (mgmt_class <= IB_MGMT_CLASS_VENDOR_RANGE2_END))
+		return offsetof(struct ib_vendor_mad, data);
+	else
+		return offsetof(struct ib_rmpp_mad, data);
+}
+
+static void format_ack(struct ib_rmpp_mad *ack,
+		       struct ib_rmpp_mad *data,
+		       struct mad_rmpp_recv *rmpp_recv)
+{
+	unsigned long flags;
+
+	memcpy(&ack->mad_hdr, &data->mad_hdr,
+	       data_offset(data->mad_hdr.mgmt_class));
+
+	ack->mad_hdr.method ^= IB_MGMT_METHOD_RESP;
+	ack->rmpp_hdr.rmpp_type = IB_MGMT_RMPP_TYPE_ACK;
+	ib_set_rmpp_flags(&ack->rmpp_hdr, IB_MGMT_RMPP_FLAG_ACTIVE);
+
+	spin_lock_irqsave(&rmpp_recv->lock, flags);
+	rmpp_recv->last_ack = rmpp_recv->seg_num;
+	ack->rmpp_hdr.seg_num = cpu_to_be32(rmpp_recv->seg_num);
+	ack->rmpp_hdr.paylen_newwin = cpu_to_be32(rmpp_recv->newwin);
+	spin_unlock_irqrestore(&rmpp_recv->lock, flags);
+}
+
+static void ack_recv(struct mad_rmpp_recv *rmpp_recv,
+		     struct ib_mad_recv_wc *recv_wc)
+{
+	struct ib_mad_send_buf *msg;
+	struct ib_send_wr *bad_send_wr;
+	int hdr_len, ret;
+
+	hdr_len = sizeof(struct ib_mad_hdr) + sizeof(struct ib_rmpp_hdr);
+	msg = ib_create_send_mad(&rmpp_recv->agent->agent, recv_wc->wc->src_qp,
+				 recv_wc->wc->pkey_index, rmpp_recv->ah, 1,
+				 hdr_len, sizeof(struct ib_rmpp_mad) - hdr_len,
+				 GFP_KERNEL);
+	if (!msg)
+		return;
+
+	format_ack((struct ib_rmpp_mad *) msg->mad,
+		   (struct ib_rmpp_mad *) recv_wc->recv_buf.mad, rmpp_recv);
+	ret = ib_post_send_mad(&rmpp_recv->agent->agent, &msg->send_wr,
+			       &bad_send_wr);
+	if (ret)
+		ib_free_send_mad(msg);
+}
+
+static int alloc_response_msg(struct ib_mad_agent *agent,
+			      struct ib_mad_recv_wc *recv_wc,
+			      struct ib_mad_send_buf **msg)
+{
+	struct ib_mad_send_buf *m;
+	struct ib_ah *ah;
+	int hdr_len;
+
+	ah = ib_create_ah_from_wc(agent->qp->pd, recv_wc->wc,
+				  recv_wc->recv_buf.grh, agent->port_num);
+	if (IS_ERR(ah))
+		return PTR_ERR(ah);
+
+	hdr_len = sizeof(struct ib_mad_hdr) + sizeof(struct ib_rmpp_hdr);
+	m = ib_create_send_mad(agent, recv_wc->wc->src_qp,
+			       recv_wc->wc->pkey_index, ah, 1, hdr_len,
+			       sizeof(struct ib_rmpp_mad) - hdr_len,
+			       GFP_KERNEL);
+	if (IS_ERR(m)) {
+		ib_destroy_ah(ah);
+		return PTR_ERR(m);
+	}
+	*msg = m;
+	return 0;
+}
+
+static void free_msg(struct ib_mad_send_buf *msg)
+{
+	ib_destroy_ah(msg->send_wr.wr.ud.ah);
+	ib_free_send_mad(msg);
+}
+
+static void nack_recv(struct ib_mad_agent_private *agent,
+		      struct ib_mad_recv_wc *recv_wc, u8 rmpp_status)
+{
+	struct ib_mad_send_buf *msg;
+	struct ib_rmpp_mad *rmpp_mad;
+	struct ib_send_wr *bad_send_wr;
+	int ret;
+
+	ret = alloc_response_msg(&agent->agent, recv_wc, &msg);
+	if (ret)
+		return;
+
+	rmpp_mad = (struct ib_rmpp_mad *) msg->mad;
+	memcpy(rmpp_mad, recv_wc->recv_buf.mad,
+	       data_offset(recv_wc->recv_buf.mad->mad_hdr.mgmt_class));
+
+	rmpp_mad->mad_hdr.method ^= IB_MGMT_METHOD_RESP;
+	rmpp_mad->rmpp_hdr.rmpp_version = IB_MGMT_RMPP_VERSION;
+	rmpp_mad->rmpp_hdr.rmpp_type = IB_MGMT_RMPP_TYPE_ABORT;
+	ib_set_rmpp_flags(&rmpp_mad->rmpp_hdr, IB_MGMT_RMPP_FLAG_ACTIVE);
+	rmpp_mad->rmpp_hdr.rmpp_status = rmpp_status;
+	rmpp_mad->rmpp_hdr.seg_num = 0;
+	rmpp_mad->rmpp_hdr.paylen_newwin = 0;
+
+	ret = ib_post_send_mad(&agent->agent, &msg->send_wr, &bad_send_wr);
+	if (ret)
+		free_msg(msg);
+}
+
 static void recv_timeout_handler(void *data)
 {
	struct mad_rmpp_recv *rmpp_recv = data;
...
...
@@ -115,8 +230,8 @@ static void recv_timeout_handler(void *data)
	list_del(&rmpp_recv->list);
	spin_unlock_irqrestore(&rmpp_recv->agent->lock, flags);

-	/* TODO: send abort. */
	rmpp_wc = rmpp_recv->rmpp_wc;
+	nack_recv(rmpp_recv->agent, rmpp_wc, IB_MGMT_RMPP_STATUS_T2L);
	destroy_rmpp_recv(rmpp_recv);
	ib_free_recv_mad(rmpp_wc);
 }
...
...
@@ -230,60 +345,6 @@ insert_rmpp_recv(struct ib_mad_agent_private *agent,
	return cur_rmpp_recv;
 }

-static int data_offset(u8 mgmt_class)
-{
-	if (mgmt_class == IB_MGMT_CLASS_SUBN_ADM)
-		return offsetof(struct ib_sa_mad, data);
-	else if ((mgmt_class >= IB_MGMT_CLASS_VENDOR_RANGE2_START) &&
-		 (mgmt_class <= IB_MGMT_CLASS_VENDOR_RANGE2_END))
-		return offsetof(struct ib_vendor_mad, data);
-	else
-		return offsetof(struct ib_rmpp_mad, data);
-}
-
-static void format_ack(struct ib_rmpp_mad *ack,
-		       struct ib_rmpp_mad *data,
-		       struct mad_rmpp_recv *rmpp_recv)
-{
-	unsigned long flags;
-
-	memcpy(&ack->mad_hdr, &data->mad_hdr,
-	       data_offset(data->mad_hdr.mgmt_class));
-
-	ack->mad_hdr.method ^= IB_MGMT_METHOD_RESP;
-	ack->rmpp_hdr.rmpp_type = IB_MGMT_RMPP_TYPE_ACK;
-	ib_set_rmpp_flags(&ack->rmpp_hdr, IB_MGMT_RMPP_FLAG_ACTIVE);
-
-	spin_lock_irqsave(&rmpp_recv->lock, flags);
-	rmpp_recv->last_ack = rmpp_recv->seg_num;
-	ack->rmpp_hdr.seg_num = cpu_to_be32(rmpp_recv->seg_num);
-	ack->rmpp_hdr.paylen_newwin = cpu_to_be32(rmpp_recv->newwin);
-	spin_unlock_irqrestore(&rmpp_recv->lock, flags);
-}
-
-static void ack_recv(struct mad_rmpp_recv *rmpp_recv,
-		     struct ib_mad_recv_wc *recv_wc)
-{
-	struct ib_mad_send_buf *msg;
-	struct ib_send_wr *bad_send_wr;
-	int hdr_len, ret;
-
-	hdr_len = sizeof(struct ib_mad_hdr) + sizeof(struct ib_rmpp_hdr);
-	msg = ib_create_send_mad(&rmpp_recv->agent->agent, recv_wc->wc->src_qp,
-				 recv_wc->wc->pkey_index, rmpp_recv->ah, 1,
-				 hdr_len, sizeof(struct ib_rmpp_mad) - hdr_len,
-				 GFP_KERNEL);
-	if (!msg)
-		return;
-
-	format_ack((struct ib_rmpp_mad *) msg->mad,
-		   (struct ib_rmpp_mad *) recv_wc->recv_buf.mad, rmpp_recv);
-	ret = ib_post_send_mad(&rmpp_recv->agent->agent, &msg->send_wr,
-			       &bad_send_wr);
-	if (ret)
-		ib_free_send_mad(msg);
-}
-
 static inline int get_last_flag(struct ib_mad_recv_buf *seg)
 {
	struct ib_rmpp_mad *rmpp_mad;
...
...
@@ -559,6 +620,34 @@ static int send_next_seg(struct ib_mad_send_wr_private *mad_send_wr)
	return ib_send_mad(mad_send_wr);
 }

+static void abort_send(struct ib_mad_agent_private *agent, __be64 tid,
+		       u8 rmpp_status)
+{
+	struct ib_mad_send_wr_private *mad_send_wr;
+	struct ib_mad_send_wc wc;
+	unsigned long flags;
+
+	spin_lock_irqsave(&agent->lock, flags);
+	mad_send_wr = ib_find_send_mad(agent, tid);
+	if (!mad_send_wr)
+		goto out;	/* Unmatched send */
+
+	if ((mad_send_wr->last_ack == mad_send_wr->total_seg) ||
+	    (!mad_send_wr->timeout) || (mad_send_wr->status != IB_WC_SUCCESS))
+		goto out;	/* Send is already done */
+
+	ib_mark_mad_done(mad_send_wr);
+	spin_unlock_irqrestore(&agent->lock, flags);
+
+	wc.status = IB_WC_REM_ABORT_ERR;
+	wc.vendor_err = rmpp_status;
+	wc.wr_id = mad_send_wr->wr_id;
+	ib_mad_complete_send_wr(mad_send_wr, &wc);
+	return;
+out:
+	spin_unlock_irqrestore(&agent->lock, flags);
+}
+
 static void process_rmpp_ack(struct ib_mad_agent_private *agent,
			     struct ib_mad_recv_wc *mad_recv_wc)
 {
...
...
@@ -568,11 +657,21 @@ static void process_rmpp_ack(struct ib_mad_agent_private *agent,
	int seg_num, newwin, ret;

	rmpp_mad = (struct ib_rmpp_mad *)mad_recv_wc->recv_buf.mad;
-	if (rmpp_mad->rmpp_hdr.rmpp_status)
+	if (rmpp_mad->rmpp_hdr.rmpp_status) {
+		abort_send(agent, rmpp_mad->mad_hdr.tid,
+			   IB_MGMT_RMPP_STATUS_BAD_STATUS);
+		nack_recv(agent, mad_recv_wc, IB_MGMT_RMPP_STATUS_BAD_STATUS);
		return;
+	}

	seg_num = be32_to_cpu(rmpp_mad->rmpp_hdr.seg_num);
	newwin = be32_to_cpu(rmpp_mad->rmpp_hdr.paylen_newwin);
+	if (newwin < seg_num) {
+		abort_send(agent, rmpp_mad->mad_hdr.tid,
+			   IB_MGMT_RMPP_STATUS_W2S);
+		nack_recv(agent, mad_recv_wc, IB_MGMT_RMPP_STATUS_W2S);
+		return;
+	}

	spin_lock_irqsave(&agent->lock, flags);
	mad_send_wr = ib_find_send_mad(agent, rmpp_mad->mad_hdr.tid);
...
...
@@ -583,8 +682,13 @@ static void process_rmpp_ack(struct ib_mad_agent_private *agent,
	    (!mad_send_wr->timeout) || (mad_send_wr->status != IB_WC_SUCCESS))
		goto out;	/* Send is already done */

-	if (seg_num > mad_send_wr->total_seg)
-		goto out;	/* Bad ACK */
+	if (seg_num > mad_send_wr->total_seg || seg_num > mad_send_wr->newwin) {
+		spin_unlock_irqrestore(&agent->lock, flags);
+		abort_send(agent, rmpp_mad->mad_hdr.tid,
+			   IB_MGMT_RMPP_STATUS_S2B);
+		nack_recv(agent, mad_recv_wc, IB_MGMT_RMPP_STATUS_S2B);
+		return;
+	}

	if (newwin < mad_send_wr->newwin || seg_num < mad_send_wr->last_ack)
		goto out;	/* Old ACK */
...
...
@@ -628,6 +732,72 @@ static void process_rmpp_ack(struct ib_mad_agent_private *agent,
	spin_unlock_irqrestore(&agent->lock, flags);
 }

+static struct ib_mad_recv_wc *
+process_rmpp_data(struct ib_mad_agent_private *agent,
+		  struct ib_mad_recv_wc *mad_recv_wc)
+{
+	struct ib_rmpp_hdr *rmpp_hdr;
+	u8 rmpp_status;
+
+	rmpp_hdr = &((struct ib_rmpp_mad *)mad_recv_wc->recv_buf.mad)->rmpp_hdr;
+
+	if (rmpp_hdr->rmpp_status) {
+		rmpp_status = IB_MGMT_RMPP_STATUS_BAD_STATUS;
+		goto bad;
+	}
+
+	if (rmpp_hdr->seg_num == __constant_htonl(1)) {
+		if (!(ib_get_rmpp_flags(rmpp_hdr) & IB_MGMT_RMPP_FLAG_FIRST)) {
+			rmpp_status = IB_MGMT_RMPP_STATUS_BAD_SEG;
+			goto bad;
+		}
+		return start_rmpp(agent, mad_recv_wc);
+	} else {
+		if (ib_get_rmpp_flags(rmpp_hdr) & IB_MGMT_RMPP_FLAG_FIRST) {
+			rmpp_status = IB_MGMT_RMPP_STATUS_BAD_SEG;
+			goto bad;
+		}
+		return continue_rmpp(agent, mad_recv_wc);
+	}
+bad:
+	nack_recv(agent, mad_recv_wc, rmpp_status);
+	ib_free_recv_mad(mad_recv_wc);
+	return NULL;
+}
+
+static void process_rmpp_stop(struct ib_mad_agent_private *agent,
+			      struct ib_mad_recv_wc *mad_recv_wc)
+{
+	struct ib_rmpp_mad *rmpp_mad;
+
+	rmpp_mad = (struct ib_rmpp_mad *)mad_recv_wc->recv_buf.mad;
+
+	if (rmpp_mad->rmpp_hdr.rmpp_status != IB_MGMT_RMPP_STATUS_RESX) {
+		abort_send(agent, rmpp_mad->mad_hdr.tid,
+			   IB_MGMT_RMPP_STATUS_BAD_STATUS);
+		nack_recv(agent, mad_recv_wc, IB_MGMT_RMPP_STATUS_BAD_STATUS);
+	} else
+		abort_send(agent, rmpp_mad->mad_hdr.tid,
+			   rmpp_mad->rmpp_hdr.rmpp_status);
+}
+
+static void process_rmpp_abort(struct ib_mad_agent_private *agent,
+			       struct ib_mad_recv_wc *mad_recv_wc)
+{
+	struct ib_rmpp_mad *rmpp_mad;
+
+	rmpp_mad = (struct ib_rmpp_mad *)mad_recv_wc->recv_buf.mad;
+
+	if (rmpp_mad->rmpp_hdr.rmpp_status < IB_MGMT_RMPP_STATUS_ABORT_MIN ||
+	    rmpp_mad->rmpp_hdr.rmpp_status > IB_MGMT_RMPP_STATUS_ABORT_MAX) {
+		abort_send(agent, rmpp_mad->mad_hdr.tid,
+			   IB_MGMT_RMPP_STATUS_BAD_STATUS);
+		nack_recv(agent, mad_recv_wc, IB_MGMT_RMPP_STATUS_BAD_STATUS);
+	} else
+		abort_send(agent, rmpp_mad->mad_hdr.tid,
+			   rmpp_mad->rmpp_hdr.rmpp_status);
+}
+
 struct ib_mad_recv_wc *
 ib_process_rmpp_recv_wc(struct ib_mad_agent_private *agent,
			 struct ib_mad_recv_wc *mad_recv_wc)
...
...
@@ -638,23 +808,29 @@ ib_process_rmpp_recv_wc(struct ib_mad_agent_private *agent,
	if (!(rmpp_mad->rmpp_hdr.rmpp_rtime_flags & IB_MGMT_RMPP_FLAG_ACTIVE))
		return mad_recv_wc;

-	if (rmpp_mad->rmpp_hdr.rmpp_version != IB_MGMT_RMPP_VERSION)
+	if (rmpp_mad->rmpp_hdr.rmpp_version != IB_MGMT_RMPP_VERSION) {
+		abort_send(agent, rmpp_mad->mad_hdr.tid, IB_MGMT_RMPP_STATUS_UNV);
+		nack_recv(agent, mad_recv_wc, IB_MGMT_RMPP_STATUS_UNV);
		goto out;
+	}

	switch (rmpp_mad->rmpp_hdr.rmpp_type) {
	case IB_MGMT_RMPP_TYPE_DATA:
-		if (rmpp_mad->rmpp_hdr.seg_num == __constant_htonl(1))
-			return start_rmpp(agent, mad_recv_wc);
-		else
-			return continue_rmpp(agent, mad_recv_wc);
+		return process_rmpp_data(agent, mad_recv_wc);
	case IB_MGMT_RMPP_TYPE_ACK:
		process_rmpp_ack(agent, mad_recv_wc);
		break;
	case IB_MGMT_RMPP_TYPE_STOP:
+		process_rmpp_stop(agent, mad_recv_wc);
+		break;
	case IB_MGMT_RMPP_TYPE_ABORT:
-		/* TODO: process_rmpp_nack(agent, mad_recv_wc); */
+		process_rmpp_abort(agent, mad_recv_wc);
		break;
	default:
+		abort_send(agent, rmpp_mad->mad_hdr.tid, IB_MGMT_RMPP_STATUS_BADT);
+		nack_recv(agent, mad_recv_wc, IB_MGMT_RMPP_STATUS_BADT);
		break;
	}
 out:
...
...
@@ -714,7 +890,10 @@ int ib_process_rmpp_send_wc(struct ib_mad_send_wr_private *mad_send_wr,
	if (rmpp_mad->rmpp_hdr.rmpp_type != IB_MGMT_RMPP_TYPE_DATA) {
		msg = (struct ib_mad_send_buf *) (unsigned long)
		      mad_send_wc->wr_id;
-		ib_free_send_mad(msg);
+		if (rmpp_mad->rmpp_hdr.rmpp_type == IB_MGMT_RMPP_TYPE_ACK)
+			ib_free_send_mad(msg);
+		else
+			free_msg(msg);
		return IB_RMPP_RESULT_INTERNAL; /* ACK, STOP, or ABORT */
	}
...
...
drivers/infiniband/core/packer.c
 /*
  * Copyright (c) 2004 Topspin Corporation.  All rights reserved.
+ * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
  *
  * This software is available to you under a choice of one of two
  * licenses.  You may choose to be licensed under the terms of the GNU
...
...
@@ -32,7 +33,7 @@
  * $Id: packer.c 1349 2004-12-16 21:09:43Z roland $
  */

-#include <ib_pack.h>
+#include <rdma/ib_pack.h>

 static u64 value_read(int offset, int size, void *structure)
 {
...
...
drivers/infiniband/core/sa_query.c
 /*
  * Copyright (c) 2004 Topspin Communications.  All rights reserved.
- * Copyright (c) 2005 Voltaire, Inc.  All rights reserved.
+ * Copyright (c) 2005 Voltaire, Inc. All rights reserved.
  *
  * This software is available to you under a choice of one of two
  * licenses.  You may choose to be licensed under the terms of the GNU
...
...
@@ -44,8 +44,8 @@
 #include <linux/kref.h>
 #include <linux/idr.h>

-#include <ib_pack.h>
-#include <ib_sa.h>
+#include <rdma/ib_pack.h>
+#include <rdma/ib_sa.h>

 MODULE_AUTHOR("Roland Dreier");
 MODULE_DESCRIPTION("InfiniBand subnet administration query support");
...
...
drivers/infiniband/core/smi.c
 /*
- * Copyright (c) 2004 Mellanox Technologies Ltd.  All rights reserved.
- * Copyright (c) 2004 Infinicon Corporation.  All rights reserved.
- * Copyright (c) 2004 Intel Corporation.  All rights reserved.
- * Copyright (c) 2004 Topspin Corporation.  All rights reserved.
- * Copyright (c) 2004 Voltaire Corporation.  All rights reserved.
+ * Copyright (c) 2004, 2005 Mellanox Technologies Ltd.  All rights reserved.
+ * Copyright (c) 2004, 2005 Infinicon Corporation.  All rights reserved.
+ * Copyright (c) 2004, 2005 Intel Corporation.  All rights reserved.
+ * Copyright (c) 2004, 2005 Topspin Corporation.  All rights reserved.
+ * Copyright (c) 2004, 2005 Voltaire Corporation.  All rights reserved.
+ * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
  *
  * This software is available to you under a choice of one of two
  * licenses.  You may choose to be licensed under the terms of the GNU
...
...
@@ -36,7 +37,7 @@
  * $Id: smi.c 1389 2004-12-27 22:56:47Z roland $
  */

-#include <ib_smi.h>
+#include <rdma/ib_smi.h>
 #include "smi.h"

 /*
...
...
drivers/infiniband/core/sysfs.c
 /*
  * Copyright (c) 2004, 2005 Topspin Communications.  All rights reserved.
+ * Copyright (c) 2005 Mellanox Technologies Ltd.  All rights reserved.
+ * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
  *
  * This software is available to you under a choice of one of two
  * licenses.  You may choose to be licensed under the terms of the GNU
...
...
@@ -34,7 +36,7 @@

 #include "core_priv.h"

-#include <ib_mad.h>
+#include <rdma/ib_mad.h>

 struct ib_port {
	struct kobject         kobj;
...
...
@@ -253,14 +255,14 @@ static ssize_t show_port_gid(struct ib_port *p, struct port_attribute *attr,
		return ret;

	return sprintf(buf, "%04x:%04x:%04x:%04x:%04x:%04x:%04x:%04x\n",
-		       be16_to_cpu(((u16 *) gid.raw)[0]),
-		       be16_to_cpu(((u16 *) gid.raw)[1]),
-		       be16_to_cpu(((u16 *) gid.raw)[2]),
-		       be16_to_cpu(((u16 *) gid.raw)[3]),
-		       be16_to_cpu(((u16 *) gid.raw)[4]),
-		       be16_to_cpu(((u16 *) gid.raw)[5]),
-		       be16_to_cpu(((u16 *) gid.raw)[6]),
-		       be16_to_cpu(((u16 *) gid.raw)[7]));
+		       be16_to_cpu(((__be16 *) gid.raw)[0]),
+		       be16_to_cpu(((__be16 *) gid.raw)[1]),
+		       be16_to_cpu(((__be16 *) gid.raw)[2]),
+		       be16_to_cpu(((__be16 *) gid.raw)[3]),
+		       be16_to_cpu(((__be16 *) gid.raw)[4]),
+		       be16_to_cpu(((__be16 *) gid.raw)[5]),
+		       be16_to_cpu(((__be16 *) gid.raw)[6]),
+		       be16_to_cpu(((__be16 *) gid.raw)[7]));
 }

 static ssize_t show_port_pkey(struct ib_port *p, struct port_attribute *attr,
...
...
@@ -332,11 +334,11 @@ static ssize_t show_pma_counter(struct ib_port *p, struct port_attribute *attr,
		break;
	case 16:
		ret = sprintf(buf, "%u\n",
-			      be16_to_cpup((u16 *)(out_mad->data + 40 + offset / 8)));
+			      be16_to_cpup((__be16 *)(out_mad->data + 40 + offset / 8)));
		break;
	case 32:
		ret = sprintf(buf, "%u\n",
-			      be32_to_cpup((u32 *)(out_mad->data + 40 + offset / 8)));
+			      be32_to_cpup((__be32 *)(out_mad->data + 40 + offset / 8)));
		break;
	default:
		ret = 0;
...
...
@@ -598,10 +600,10 @@ static ssize_t show_sys_image_guid(struct class_device *cdev, char *buf)
		return ret;

	return sprintf(buf, "%04x:%04x:%04x:%04x\n",
-		       be16_to_cpu(((u16 *) &attr.sys_image_guid)[0]),
-		       be16_to_cpu(((u16 *) &attr.sys_image_guid)[1]),
-		       be16_to_cpu(((u16 *) &attr.sys_image_guid)[2]),
-		       be16_to_cpu(((u16 *) &attr.sys_image_guid)[3]));
+		       be16_to_cpu(((__be16 *) &attr.sys_image_guid)[0]),
+		       be16_to_cpu(((__be16 *) &attr.sys_image_guid)[1]),
+		       be16_to_cpu(((__be16 *) &attr.sys_image_guid)[2]),
+		       be16_to_cpu(((__be16 *) &attr.sys_image_guid)[3]));
 }

 static ssize_t show_node_guid(struct class_device *cdev, char *buf)
...
...
@@ -615,10 +617,10 @@ static ssize_t show_node_guid(struct class_device *cdev, char *buf)
		return ret;

	return sprintf(buf, "%04x:%04x:%04x:%04x\n",
-		       be16_to_cpu(((u16 *) &attr.node_guid)[0]),
-		       be16_to_cpu(((u16 *) &attr.node_guid)[1]),
-		       be16_to_cpu(((u16 *) &attr.node_guid)[2]),
-		       be16_to_cpu(((u16 *) &attr.node_guid)[3]));
+		       be16_to_cpu(((__be16 *) &attr.node_guid)[0]),
+		       be16_to_cpu(((__be16 *) &attr.node_guid)[1]),
+		       be16_to_cpu(((__be16 *) &attr.node_guid)[2]),
+		       be16_to_cpu(((__be16 *) &attr.node_guid)[3]));
 }

 static CLASS_DEVICE_ATTR(node_type, S_IRUGO, show_node_type, NULL);
...
...
drivers/infiniband/core/ucm.c
(This diff is collapsed on the source page and not shown.)
drivers/infiniband/core/ucm.h
...
...
@@ -40,17 +40,15 @@
 #include <linux/cdev.h>
 #include <linux/idr.h>

-#include <ib_cm.h>
-#include <ib_user_cm.h>
+#include <rdma/ib_cm.h>
+#include <rdma/ib_user_cm.h>

 #define IB_UCM_CM_ID_INVALID 0xffffffff

 struct ib_ucm_file {
	struct semaphore mutex;
	struct file *filp;
-	/*
-	 * list of pending events
-	 */
+
	struct list_head  ctxs;   /* list of active connections */
	struct list_head  events; /* list of pending events */

	wait_queue_head_t poll_wait;
...
...
@@ -58,12 +56,11 @@ struct ib_ucm_file {
 struct ib_ucm_context {
	int                 id;
-	int                 ref;
-	int                 error;
+	wait_queue_head_t   wait;
+	atomic_t            ref;

	struct ib_ucm_file *file;
	struct ib_cm_id    *cm_id;
	struct semaphore    mutex;

	struct list_head    events;    /* list of pending events. */
	struct list_head    file_list; /* member in file ctx list */
...
...
drivers/infiniband/core/ud_header.c
 /*
  * Copyright (c) 2004 Topspin Corporation.  All rights reserved.
+ * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
  *
  * This software is available to you under a choice of one of two
  * licenses.  You may choose to be licensed under the terms of the GNU
...
...
@@ -34,7 +35,7 @@
 #include <linux/errno.h>

-#include <ib_pack.h>
+#include <rdma/ib_pack.h>

 #define STRUCT_FIELD(header, field) \
	.struct_offset_bytes = offsetof(struct ib_unpacked_ ## header, field), \
...
...
@@ -194,6 +195,7 @@ void ib_ud_header_init(int payload_bytes,
		       struct ib_ud_header *header)
 {
	int header_len;
+	u16 packet_length;

	memset(header, 0, sizeof *header);
...
...
@@ -208,7 +210,7 @@ void ib_ud_header_init(int payload_bytes,
	header->lrh.link_version     = 0;
	header->lrh.link_next_header =
		grh_present ? IB_LNH_IBA_GLOBAL : IB_LNH_IBA_LOCAL;
-	header->lrh.packet_length    = (IB_LRH_BYTES  +
+	packet_length		     = (IB_LRH_BYTES  +
					IB_BTH_BYTES  +
					IB_DETH_BYTES +
					payload_bytes +
...
...
@@ -217,8 +219,7 @@ void ib_ud_header_init(int payload_bytes,
	header->grh_present          = grh_present;
	if (grh_present) {
-		header->lrh.packet_length  += IB_GRH_BYTES / 4;
+		packet_length		   += IB_GRH_BYTES / 4;
		header->grh.ip_version      = 6;
		header->grh.payload_length  =
			cpu_to_be16((IB_BTH_BYTES +
...
...
@@ -229,7 +230,7 @@ void ib_ud_header_init(int payload_bytes,
		header->grh.next_header     = 0x1b;
	}

-	cpu_to_be16s(&header->lrh.packet_length);
+	header->lrh.packet_length = cpu_to_be16(packet_length);

	if (header->immediate_present)
		header->bth.opcode = IB_OPCODE_UD_SEND_ONLY_WITH_IMMEDIATE;
...
...
drivers/infiniband/core/user_mad.c
 /*
  * Copyright (c) 2004 Topspin Communications.  All rights reserved.
- * Copyright (c) 2005 Voltaire, Inc.  All rights reserved.
+ * Copyright (c) 2005 Voltaire, Inc. All rights reserved.
+ * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
  *
  * This software is available to you under a choice of one of two
...
...
@@ -49,8 +49,8 @@
 #include <asm/uaccess.h>
 #include <asm/semaphore.h>

-#include <ib_mad.h>
-#include <ib_user_mad.h>
+#include <rdma/ib_mad.h>
+#include <rdma/ib_user_mad.h>

 MODULE_AUTHOR("Roland Dreier");
 MODULE_DESCRIPTION("InfiniBand userspace MAD packet access");
...
...
@@ -271,7 +271,7 @@ static ssize_t ib_umad_write(struct file *filp, const char __user *buf,
	struct ib_send_wr *bad_wr;
	struct ib_rmpp_mad *rmpp_mad;
	u8 method;
-	u64 *tid;
+	__be64 *tid;
	int ret, length, hdr_len, data_len, rmpp_hdr_size;
	int rmpp_active = 0;
...
...
@@ -316,7 +316,7 @@ static ssize_t ib_umad_write(struct file *filp, const char __user *buf,
	if (packet->mad.hdr.grh_present) {
		ah_attr.ah_flags = IB_AH_GRH;
		memcpy(ah_attr.grh.dgid.raw, packet->mad.hdr.gid, 16);
-		ah_attr.grh.flow_label     = packet->mad.hdr.flow_label;
+		ah_attr.grh.flow_label     = be32_to_cpu(packet->mad.hdr.flow_label);
		ah_attr.grh.hop_limit      = packet->mad.hdr.hop_limit;
		ah_attr.grh.traffic_class  = packet->mad.hdr.traffic_class;
	}
...
...
drivers/infiniband/core/uverbs.h
 /*
  * Copyright (c) 2005 Topspin Communications.  All rights reserved.
  * Copyright (c) 2005 Cisco Systems.  All rights reserved.
  * Copyright (c) 2005 Mellanox Technologies. All rights reserved.
+ * Copyright (c) 2005 Voltaire, Inc. All rights reserved.
  *
  * This software is available to you under a choice of one of two
  * licenses. You may choose to be licensed under the terms of the GNU
...
@@ -43,8 +45,8 @@
 #include <linux/kref.h>
 #include <linux/idr.h>

-#include <ib_verbs.h>
-#include <ib_user_verbs.h>
+#include <rdma/ib_verbs.h>
+#include <rdma/ib_user_verbs.h>

 struct ib_uverbs_device {
 	int                                     devnum;
...
@@ -97,10 +99,12 @@ extern struct idr ib_uverbs_mw_idr;
 extern struct idr ib_uverbs_ah_idr;
 extern struct idr ib_uverbs_cq_idr;
 extern struct idr ib_uverbs_qp_idr;
+extern struct idr ib_uverbs_srq_idr;

 void ib_uverbs_comp_handler(struct ib_cq *cq, void *cq_context);
 void ib_uverbs_cq_event_handler(struct ib_event *event, void *context_ptr);
 void ib_uverbs_qp_event_handler(struct ib_event *event, void *context_ptr);
+void ib_uverbs_srq_event_handler(struct ib_event *event, void *context_ptr);

 int ib_umem_get(struct ib_device *dev, struct ib_umem *mem,
 		void *addr, size_t size, int write);
...
@@ -129,5 +133,8 @@ IB_UVERBS_DECLARE_CMD(modify_qp);
 IB_UVERBS_DECLARE_CMD(destroy_qp);
 IB_UVERBS_DECLARE_CMD(attach_mcast);
 IB_UVERBS_DECLARE_CMD(detach_mcast);
+IB_UVERBS_DECLARE_CMD(create_srq);
+IB_UVERBS_DECLARE_CMD(modify_srq);
+IB_UVERBS_DECLARE_CMD(destroy_srq);

 #endif /* UVERBS_H */
drivers/infiniband/core/uverbs_cmd.c
...
@@ -724,6 +724,7 @@ ssize_t ib_uverbs_create_qp(struct ib_uverbs_file *file,
 	struct ib_uobject              *uobj;
 	struct ib_pd                   *pd;
 	struct ib_cq                   *scq, *rcq;
+	struct ib_srq                  *srq;
 	struct ib_qp                   *qp;
 	struct ib_qp_init_attr          attr;
 	int ret;
...
@@ -747,10 +748,12 @@ ssize_t ib_uverbs_create_qp(struct ib_uverbs_file *file,
 	pd  = idr_find(&ib_uverbs_pd_idr, cmd.pd_handle);
 	scq = idr_find(&ib_uverbs_cq_idr, cmd.send_cq_handle);
 	rcq = idr_find(&ib_uverbs_cq_idr, cmd.recv_cq_handle);
+	srq = cmd.is_srq ? idr_find(&ib_uverbs_srq_idr, cmd.srq_handle) : NULL;

 	if (!pd  || pd->uobject->context  != file->ucontext ||
 	    !scq || scq->uobject->context != file->ucontext ||
-	    !rcq || rcq->uobject->context != file->ucontext) {
+	    !rcq || rcq->uobject->context != file->ucontext ||
+	    (cmd.is_srq && (!srq || srq->uobject->context != file->ucontext))) {
 		ret = -EINVAL;
 		goto err_up;
 	}
...
@@ -759,7 +762,7 @@ ssize_t ib_uverbs_create_qp(struct ib_uverbs_file *file,
 	attr.qp_context = file;
 	attr.send_cq    = scq;
 	attr.recv_cq    = rcq;
-	attr.srq        = NULL;
+	attr.srq        = srq;
 	attr.sq_sig_type = cmd.sq_sig_all ? IB_SIGNAL_ALL_WR : IB_SIGNAL_REQ_WR;
 	attr.qp_type    = cmd.qp_type;
...
@@ -1004,3 +1007,178 @@ ssize_t ib_uverbs_detach_mcast(struct ib_uverbs_file *file,
 	return ret ? ret : in_len;
 }
+
+ssize_t ib_uverbs_create_srq(struct ib_uverbs_file *file,
+			     const char __user *buf, int in_len,
+			     int out_len)
+{
+	struct ib_uverbs_create_srq      cmd;
+	struct ib_uverbs_create_srq_resp resp;
+	struct ib_udata                  udata;
+	struct ib_uobject               *uobj;
+	struct ib_pd                    *pd;
+	struct ib_srq                   *srq;
+	struct ib_srq_init_attr          attr;
+	int ret;
+
+	if (out_len < sizeof resp)
+		return -ENOSPC;
+
+	if (copy_from_user(&cmd, buf, sizeof cmd))
+		return -EFAULT;
+
+	INIT_UDATA(&udata, buf + sizeof cmd,
+		   (unsigned long) cmd.response + sizeof resp,
+		   in_len - sizeof cmd, out_len - sizeof resp);
+
+	uobj = kmalloc(sizeof *uobj, GFP_KERNEL);
+	if (!uobj)
+		return -ENOMEM;
+
+	down(&ib_uverbs_idr_mutex);
+
+	pd = idr_find(&ib_uverbs_pd_idr, cmd.pd_handle);
+
+	if (!pd || pd->uobject->context != file->ucontext) {
+		ret = -EINVAL;
+		goto err_up;
+	}
+
+	attr.event_handler  = ib_uverbs_srq_event_handler;
+	attr.srq_context    = file;
+	attr.attr.max_wr    = cmd.max_wr;
+	attr.attr.max_sge   = cmd.max_sge;
+	attr.attr.srq_limit = cmd.srq_limit;
+
+	uobj->user_handle = cmd.user_handle;
+	uobj->context     = file->ucontext;
+
+	srq = pd->device->create_srq(pd, &attr, &udata);
+	if (IS_ERR(srq)) {
+		ret = PTR_ERR(srq);
+		goto err_up;
+	}
+
+	srq->device        = pd->device;
+	srq->pd            = pd;
+	srq->uobject       = uobj;
+	srq->event_handler = attr.event_handler;
+	srq->srq_context   = attr.srq_context;
+	atomic_inc(&pd->usecnt);
+	atomic_set(&srq->usecnt, 0);
+
+	memset(&resp, 0, sizeof resp);
+
+retry:
+	if (!idr_pre_get(&ib_uverbs_srq_idr, GFP_KERNEL)) {
+		ret = -ENOMEM;
+		goto err_destroy;
+	}
+
+	ret = idr_get_new(&ib_uverbs_srq_idr, srq, &uobj->id);
+
+	if (ret == -EAGAIN)
+		goto retry;
+	if (ret)
+		goto err_destroy;
+
+	resp.srq_handle = uobj->id;
+
+	spin_lock_irq(&file->ucontext->lock);
+	list_add_tail(&uobj->list, &file->ucontext->srq_list);
+	spin_unlock_irq(&file->ucontext->lock);
+
+	if (copy_to_user((void __user *) (unsigned long) cmd.response,
+			 &resp, sizeof resp)) {
+		ret = -EFAULT;
+		goto err_list;
+	}
+
+	up(&ib_uverbs_idr_mutex);
+
+	return in_len;
+
+err_list:
+	spin_lock_irq(&file->ucontext->lock);
+	list_del(&uobj->list);
+	spin_unlock_irq(&file->ucontext->lock);
+
+err_destroy:
+	ib_destroy_srq(srq);
+
+err_up:
+	up(&ib_uverbs_idr_mutex);
+
+	kfree(uobj);
+	return ret;
+}
+
+ssize_t ib_uverbs_modify_srq(struct ib_uverbs_file *file,
+			     const char __user *buf, int in_len,
+			     int out_len)
+{
+	struct ib_uverbs_modify_srq cmd;
+	struct ib_srq              *srq;
+	struct ib_srq_attr          attr;
+	int                         ret;
+
+	if (copy_from_user(&cmd, buf, sizeof cmd))
+		return -EFAULT;
+
+	down(&ib_uverbs_idr_mutex);
+
+	srq = idr_find(&ib_uverbs_srq_idr, cmd.srq_handle);
+	if (!srq || srq->uobject->context != file->ucontext) {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	attr.max_wr    = cmd.max_wr;
+	attr.max_sge   = cmd.max_sge;
+	attr.srq_limit = cmd.srq_limit;
+
+	ret = ib_modify_srq(srq, &attr, cmd.attr_mask);
+
+out:
+	up(&ib_uverbs_idr_mutex);
+
+	return ret ? ret : in_len;
+}
+
+ssize_t ib_uverbs_destroy_srq(struct ib_uverbs_file *file,
+			      const char __user *buf, int in_len,
+			      int out_len)
+{
+	struct ib_uverbs_destroy_srq cmd;
+	struct ib_srq               *srq;
+	struct ib_uobject           *uobj;
+	int                          ret = -EINVAL;
+
+	if (copy_from_user(&cmd, buf, sizeof cmd))
+		return -EFAULT;
+
+	down(&ib_uverbs_idr_mutex);
+
+	srq = idr_find(&ib_uverbs_srq_idr, cmd.srq_handle);
+	if (!srq || srq->uobject->context != file->ucontext)
+		goto out;
+
+	uobj = srq->uobject;
+
+	ret = ib_destroy_srq(srq);
+	if (ret)
+		goto out;
+
+	idr_remove(&ib_uverbs_srq_idr, cmd.srq_handle);
+
+	spin_lock_irq(&file->ucontext->lock);
+	list_del(&uobj->list);
+	spin_unlock_irq(&file->ucontext->lock);
+
+	kfree(uobj);
+
+out:
+	up(&ib_uverbs_idr_mutex);
+
+	return ret ? ret : in_len;
+}
drivers/infiniband/core/uverbs_main.c
 /*
  * Copyright (c) 2005 Topspin Communications.  All rights reserved.
  * Copyright (c) 2005 Cisco Systems.  All rights reserved.
  * Copyright (c) 2005 Mellanox Technologies. All rights reserved.
+ * Copyright (c) 2005 Voltaire, Inc. All rights reserved.
  *
  * This software is available to you under a choice of one of two
  * licenses. You may choose to be licensed under the terms of the GNU
...
@@ -67,6 +69,7 @@ DEFINE_IDR(ib_uverbs_mw_idr);
 DEFINE_IDR(ib_uverbs_ah_idr);
 DEFINE_IDR(ib_uverbs_cq_idr);
 DEFINE_IDR(ib_uverbs_qp_idr);
+DEFINE_IDR(ib_uverbs_srq_idr);

 static spinlock_t map_lock;
 static DECLARE_BITMAP(dev_map, IB_UVERBS_MAX_DEVICES);
...
@@ -91,6 +94,9 @@ static ssize_t (*uverbs_cmd_table[])(struct ib_uverbs_file *file,
 	[IB_USER_VERBS_CMD_DESTROY_QP]    = ib_uverbs_destroy_qp,
 	[IB_USER_VERBS_CMD_ATTACH_MCAST]  = ib_uverbs_attach_mcast,
 	[IB_USER_VERBS_CMD_DETACH_MCAST]  = ib_uverbs_detach_mcast,
+	[IB_USER_VERBS_CMD_CREATE_SRQ]    = ib_uverbs_create_srq,
+	[IB_USER_VERBS_CMD_MODIFY_SRQ]    = ib_uverbs_modify_srq,
+	[IB_USER_VERBS_CMD_DESTROY_SRQ]   = ib_uverbs_destroy_srq,
 };

 static struct vfsmount *uverbs_event_mnt;
...
@@ -125,7 +131,14 @@ static int ib_dealloc_ucontext(struct ib_ucontext *context)
 		kfree(uobj);
 	}

-	/* XXX Free SRQs */
+	list_for_each_entry_safe(uobj, tmp, &context->srq_list, list) {
+		struct ib_srq *srq = idr_find(&ib_uverbs_srq_idr, uobj->id);
+		idr_remove(&ib_uverbs_srq_idr, uobj->id);
+		ib_destroy_srq(srq);
+		list_del(&uobj->list);
+		kfree(uobj);
+	}
+
 	/* XXX Free MWs */
 	list_for_each_entry_safe(uobj, tmp, &context->mr_list, list) {
...
@@ -344,6 +357,13 @@ void ib_uverbs_qp_event_handler(struct ib_event *event, void *context_ptr)
 				event->event);
 }

+void ib_uverbs_srq_event_handler(struct ib_event *event, void *context_ptr)
+{
+	ib_uverbs_async_handler(context_ptr,
+				event->element.srq->uobject->user_handle,
+				event->event);
+}
+
 static void ib_uverbs_event_handler(struct ib_event_handler *handler,
 				    struct ib_event *event)
 {
...
drivers/infiniband/core/uverbs_mem.c
/*
* Copyright (c) 2005 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Cisco Systems. All rights reserved.
* Copyright (c) 2005 Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
...
...
drivers/infiniband/core/verbs.c
...
@@ -4,6 +4,7 @@
  * Copyright (c) 2004 Intel Corporation.  All rights reserved.
  * Copyright (c) 2004 Topspin Corporation.  All rights reserved.
  * Copyright (c) 2004 Voltaire Corporation.  All rights reserved.
+ * Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
  * Copyright (c) 2005 Cisco Systems.  All rights reserved.
  *
  * This software is available to you under a choice of one of two
...
@@ -40,8 +41,8 @@
 #include <linux/errno.h>
 #include <linux/err.h>

-#include <ib_verbs.h>
-#include <ib_cache.h>
+#include <rdma/ib_verbs.h>
+#include <rdma/ib_cache.h>

 /* Protection domains */
...
@@ -153,6 +154,66 @@ int ib_destroy_ah(struct ib_ah *ah)
 }
 EXPORT_SYMBOL(ib_destroy_ah);

+/* Shared receive queues */
+
+struct ib_srq *ib_create_srq(struct ib_pd *pd,
+			     struct ib_srq_init_attr *srq_init_attr)
+{
+	struct ib_srq *srq;
+
+	if (!pd->device->create_srq)
+		return ERR_PTR(-ENOSYS);
+
+	srq = pd->device->create_srq(pd, srq_init_attr, NULL);
+
+	if (!IS_ERR(srq)) {
+		srq->device        = pd->device;
+		srq->pd            = pd;
+		srq->uobject       = NULL;
+		srq->event_handler = srq_init_attr->event_handler;
+		srq->srq_context   = srq_init_attr->srq_context;
+		atomic_inc(&pd->usecnt);
+		atomic_set(&srq->usecnt, 0);
+	}
+
+	return srq;
+}
+EXPORT_SYMBOL(ib_create_srq);
+
+int ib_modify_srq(struct ib_srq *srq,
+		  struct ib_srq_attr *srq_attr,
+		  enum ib_srq_attr_mask srq_attr_mask)
+{
+	return srq->device->modify_srq(srq, srq_attr, srq_attr_mask);
+}
+EXPORT_SYMBOL(ib_modify_srq);
+
+int ib_query_srq(struct ib_srq *srq,
+		 struct ib_srq_attr *srq_attr)
+{
+	return srq->device->query_srq ?
+		srq->device->query_srq(srq, srq_attr) : -ENOSYS;
+}
+EXPORT_SYMBOL(ib_query_srq);
+
+int ib_destroy_srq(struct ib_srq *srq)
+{
+	struct ib_pd *pd;
+	int ret;
+
+	if (atomic_read(&srq->usecnt))
+		return -EBUSY;
+
+	pd = srq->pd;
+
+	ret = srq->device->destroy_srq(srq);
+	if (!ret)
+		atomic_dec(&pd->usecnt);
+
+	return ret;
+}
+EXPORT_SYMBOL(ib_destroy_srq);
+
 /* Queue pairs */

 struct ib_qp *ib_create_qp(struct ib_pd *pd,
...
drivers/infiniband/hw/mthca/Makefile
-EXTRA_CFLAGS += -Idrivers/infiniband/include
-
 ifdef CONFIG_INFINIBAND_MTHCA_DEBUG
 EXTRA_CFLAGS += -DDEBUG
 endif
...
@@ -9,4 +7,4 @@ obj-$(CONFIG_INFINIBAND_MTHCA) += ib_mthca.o
 ib_mthca-y :=	mthca_main.o mthca_cmd.o mthca_profile.o mthca_reset.o \
 		mthca_allocator.o mthca_eq.o mthca_pd.o mthca_cq.o \
 		mthca_mr.o mthca_qp.o mthca_av.o mthca_mcg.o mthca_mad.o \
-		mthca_provider.o mthca_memfree.o mthca_uar.o
+		mthca_provider.o mthca_memfree.o mthca_uar.o mthca_srq.o
drivers/infiniband/hw/mthca/mthca_allocator.c
...
@@ -177,3 +177,119 @@ void mthca_array_cleanup(struct mthca_array *array, int nent)
 	kfree(array->page_list);
 }
+
+/*
+ * Handling for queue buffers -- we allocate a bunch of memory and
+ * register it in a memory region at HCA virtual address 0.  If the
+ * requested size is > max_direct, we split the allocation into
+ * multiple pages, so we don't require too much contiguous memory.
+ */
+int mthca_buf_alloc(struct mthca_dev *dev, int size, int max_direct,
+		    union mthca_buf *buf, int *is_direct, struct mthca_pd *pd,
+		    int hca_write, struct mthca_mr *mr)
+{
+	int err = -ENOMEM;
+	int npages, shift;
+	u64 *dma_list = NULL;
+	dma_addr_t t;
+	int i;
+
+	if (size <= max_direct) {
+		*is_direct = 1;
+		npages     = 1;
+		shift      = get_order(size) + PAGE_SHIFT;
+
+		buf->direct.buf = dma_alloc_coherent(&dev->pdev->dev,
+						     size, &t, GFP_KERNEL);
+		if (!buf->direct.buf)
+			return -ENOMEM;
+
+		pci_unmap_addr_set(&buf->direct, mapping, t);
+
+		memset(buf->direct.buf, 0, size);
+
+		while (t & ((1 << shift) - 1)) {
+			--shift;
+			npages *= 2;
+		}
+
+		dma_list = kmalloc(npages * sizeof *dma_list, GFP_KERNEL);
+		if (!dma_list)
+			goto err_free;
+
+		for (i = 0; i < npages; ++i)
+			dma_list[i] = t + i * (1 << shift);
+	} else {
+		*is_direct = 0;
+		npages     = (size + PAGE_SIZE - 1) / PAGE_SIZE;
+		shift      = PAGE_SHIFT;
+
+		dma_list = kmalloc(npages * sizeof *dma_list, GFP_KERNEL);
+		if (!dma_list)
+			return -ENOMEM;
+
+		buf->page_list = kmalloc(npages * sizeof *buf->page_list,
+					 GFP_KERNEL);
+		if (!buf->page_list)
+			goto err_out;
+
+		for (i = 0; i < npages; ++i)
+			buf->page_list[i].buf = NULL;
+
+		for (i = 0; i < npages; ++i) {
+			buf->page_list[i].buf =
+				dma_alloc_coherent(&dev->pdev->dev, PAGE_SIZE,
+						   &t, GFP_KERNEL);
+			if (!buf->page_list[i].buf)
+				goto err_free;
+
+			dma_list[i] = t;
+			pci_unmap_addr_set(&buf->page_list[i], mapping, t);
+
+			memset(buf->page_list[i].buf, 0, PAGE_SIZE);
+		}
+	}
+
+	err = mthca_mr_alloc_phys(dev, pd->pd_num,
+				  dma_list, shift, npages,
+				  0, size,
+				  MTHCA_MPT_FLAG_LOCAL_READ |
+				  (hca_write ? MTHCA_MPT_FLAG_LOCAL_WRITE : 0),
+				  mr);
+	if (err)
+		goto err_free;
+
+	kfree(dma_list);
+
+	return 0;
+
+err_free:
+	mthca_buf_free(dev, size, buf, *is_direct, NULL);
+
+err_out:
+	kfree(dma_list);
+
+	return err;
+}
+
+void mthca_buf_free(struct mthca_dev *dev, int size, union mthca_buf *buf,
+		    int is_direct, struct mthca_mr *mr)
+{
+	int i;
+
+	if (mr)
+		mthca_free_mr(dev, mr);
+
+	if (is_direct)
+		dma_free_coherent(&dev->pdev->dev, size, buf->direct.buf,
+				  pci_unmap_addr(&buf->direct, mapping));
+	else {
+		for (i = 0; i < (size + PAGE_SIZE - 1) / PAGE_SIZE; ++i)
+			dma_free_coherent(&dev->pdev->dev, PAGE_SIZE,
+					  buf->page_list[i].buf,
+					  pci_unmap_addr(&buf->page_list[i],
+							 mapping));
+		kfree(buf->page_list);
+	}
+}
drivers/infiniband/hw/mthca/mthca_av.c
...
@@ -35,22 +35,22 @@
 #include <linux/init.h>

-#include <ib_verbs.h>
-#include <ib_cache.h>
+#include <rdma/ib_verbs.h>
+#include <rdma/ib_cache.h>

 #include "mthca_dev.h"

 struct mthca_av {
-	u32 port_pd;
-	u8  reserved1;
-	u8  g_slid;
-	u16 dlid;
-	u8  reserved2;
-	u8  gid_index;
-	u8  msg_sr;
-	u8  hop_limit;
-	u32 sl_tclass_flowlabel;
-	u32 dgid[4];
+	__be32 port_pd;
+	u8     reserved1;
+	u8     g_slid;
+	__be16 dlid;
+	u8     reserved2;
+	u8     gid_index;
+	u8     msg_sr;
+	u8     hop_limit;
+	__be32 sl_tclass_flowlabel;
+	__be32 dgid[4];
 };

 int mthca_create_ah(struct mthca_dev *dev,
...
@@ -128,7 +128,7 @@ int mthca_create_ah(struct mthca_dev *dev,
 			  av, (unsigned long) ah->avdma);
 		for (j = 0; j < 8; ++j)
 			printk(KERN_DEBUG "  [%2x] %08x\n",
-			       j * 4, be32_to_cpu(((u32 *) av)[j]));
+			       j * 4, be32_to_cpu(((__be32 *) av)[j]));
 	}

 	if (ah->type == MTHCA_AH_ON_HCA) {
...
@@ -169,7 +169,7 @@ int mthca_read_ah(struct mthca_dev *dev, struct mthca_ah *ah,
 	header->lrh.service_level   = be32_to_cpu(ah->av->sl_tclass_flowlabel) >> 28;
 	header->lrh.destination_lid = ah->av->dlid;
-	header->lrh.source_lid      = ah->av->g_slid & 0x7f;
+	header->lrh.source_lid      = cpu_to_be16(ah->av->g_slid & 0x7f);

 	if (ah->av->g_slid & 0x80) {
 		header->grh_present = 1;
 		header->grh.traffic_class =
...
drivers/infiniband/hw/mthca/mthca_cmd.c
 /*
  * Copyright (c) 2004, 2005 Topspin Communications.  All rights reserved.
+ * Copyright (c) 2005 Mellanox Technologies. All rights reserved.
  *
  * This software is available to you under a choice of one of two
  * licenses. You may choose to be licensed under the terms of the GNU
...
@@ -36,7 +37,7 @@
 #include <linux/pci.h>
 #include <linux/errno.h>
 #include <asm/io.h>
-#include <ib_mad.h>
+#include <rdma/ib_mad.h>

 #include "mthca_dev.h"
 #include "mthca_config_reg.h"
...
@@ -108,6 +109,7 @@ enum {
 	CMD_SW2HW_SRQ       = 0x35,
 	CMD_HW2SW_SRQ       = 0x36,
 	CMD_QUERY_SRQ       = 0x37,
+	CMD_ARM_SRQ         = 0x40,

 	/* QP/EE commands */
 	CMD_RST2INIT_QPEE   = 0x19,
...
@@ -219,20 +221,20 @@ static int mthca_cmd_post(struct mthca_dev *dev,
 	 * (and some architectures such as ia64 implement memcpy_toio
 	 * in terms of writeb).
 	 */
-	__raw_writel(cpu_to_be32(in_param >> 32),           dev->hcr + 0 * 4);
-	__raw_writel(cpu_to_be32(in_param & 0xfffffffful),  dev->hcr + 1 * 4);
-	__raw_writel(cpu_to_be32(in_modifier),              dev->hcr + 2 * 4);
-	__raw_writel(cpu_to_be32(out_param >> 32),          dev->hcr + 3 * 4);
-	__raw_writel(cpu_to_be32(out_param & 0xfffffffful), dev->hcr + 4 * 4);
-	__raw_writel(cpu_to_be32(token << 16),              dev->hcr + 5 * 4);
+	__raw_writel((__force u32) cpu_to_be32(in_param >> 32),           dev->hcr + 0 * 4);
+	__raw_writel((__force u32) cpu_to_be32(in_param & 0xfffffffful),  dev->hcr + 1 * 4);
+	__raw_writel((__force u32) cpu_to_be32(in_modifier),              dev->hcr + 2 * 4);
+	__raw_writel((__force u32) cpu_to_be32(out_param >> 32),          dev->hcr + 3 * 4);
+	__raw_writel((__force u32) cpu_to_be32(out_param & 0xfffffffful), dev->hcr + 4 * 4);
+	__raw_writel((__force u32) cpu_to_be32(token << 16),              dev->hcr + 5 * 4);

 	/* __raw_writel may not order writes. */
 	wmb();

-	__raw_writel(cpu_to_be32((1 << HCR_GO_BIT)                |
-				 (event ? (1 << HCA_E_BIT) : 0)   |
-				 (op_modifier << HCR_OPMOD_SHIFT) |
-				 op),                       dev->hcr + 6 * 4);
+	__raw_writel((__force u32) cpu_to_be32((1 << HCR_GO_BIT)                |
+					       (event ? (1 << HCA_E_BIT) : 0)   |
+					       (op_modifier << HCR_OPMOD_SHIFT) |
+					       op),                       dev->hcr + 6 * 4);

 out:
 	up(&dev->cmd.hcr_sem);
...
@@ -273,12 +275,14 @@ static int mthca_cmd_poll(struct mthca_dev *dev,
 		goto out;
 	}

-	if (out_is_imm) {
-		memcpy_fromio(out_param, dev->hcr + HCR_OUT_PARAM_OFFSET, sizeof (u64));
-		be64_to_cpus(out_param);
-	}
+	if (out_is_imm)
+		*out_param =
+			(u64) be32_to_cpu((__force __be32)
+					  __raw_readl(dev->hcr + HCR_OUT_PARAM_OFFSET)) << 32 |
+			(u64) be32_to_cpu((__force __be32)
+					  __raw_readl(dev->hcr + HCR_OUT_PARAM_OFFSET + 4));

-	*status = be32_to_cpu(__raw_readl(dev->hcr + HCR_STATUS_OFFSET)) >> 24;
+	*status = be32_to_cpu((__force __be32) __raw_readl(dev->hcr + HCR_STATUS_OFFSET)) >> 24;

 out:
 	up(&dev->cmd.poll_sem);
...
@@ -1029,6 +1033,8 @@ int mthca_QUERY_DEV_LIM(struct mthca_dev *dev,
 	mthca_dbg(dev, "Max QPs: %d, reserved QPs: %d, entry size: %d\n",
 		  dev_lim->max_qps, dev_lim->reserved_qps, dev_lim->qpc_entry_sz);
+	mthca_dbg(dev, "Max SRQs: %d, reserved SRQs: %d, entry size: %d\n",
+		  dev_lim->max_srqs, dev_lim->reserved_srqs, dev_lim->srq_entry_sz);
 	mthca_dbg(dev, "Max CQs: %d, reserved CQs: %d, entry size: %d\n",
 		  dev_lim->max_cqs, dev_lim->reserved_cqs, dev_lim->cqc_entry_sz);
 	mthca_dbg(dev, "Max EQs: %d, reserved EQs: %d, entry size: %d\n",
...
@@ -1082,6 +1088,34 @@ int mthca_QUERY_DEV_LIM(struct mthca_dev *dev,
 	return err;
 }

+static void get_board_id(void *vsd, char *board_id)
+{
+	int i;
+
+#define VSD_OFFSET_SIG1		0x00
+#define VSD_OFFSET_SIG2		0xde
+#define VSD_OFFSET_MLX_BOARD_ID	0xd0
+#define VSD_OFFSET_TS_BOARD_ID	0x20
+
+#define VSD_SIGNATURE_TOPSPIN	0x5ad
+
+	memset(board_id, 0, MTHCA_BOARD_ID_LEN);
+
+	if (be16_to_cpup(vsd + VSD_OFFSET_SIG1) == VSD_SIGNATURE_TOPSPIN &&
+	    be16_to_cpup(vsd + VSD_OFFSET_SIG2) == VSD_SIGNATURE_TOPSPIN) {
+		strlcpy(board_id, vsd + VSD_OFFSET_TS_BOARD_ID, MTHCA_BOARD_ID_LEN);
+	} else {
+		/*
+		 * The board ID is a string but the firmware byte
+		 * swaps each 4-byte word before passing it back to
+		 * us.  Therefore we need to swab it before printing.
+		 */
+		for (i = 0; i < 4; ++i)
+			((u32 *) board_id)[i] =
+				swab32(*(u32 *) (vsd + VSD_OFFSET_MLX_BOARD_ID + i * 4));
+	}
+}
+
 int mthca_QUERY_ADAPTER(struct mthca_dev *dev,
 			struct mthca_adapter *adapter, u8 *status)
 {
...
@@ -1094,6 +1128,7 @@ int mthca_QUERY_ADAPTER(struct mthca_dev *dev,
 #define QUERY_ADAPTER_DEVICE_ID_OFFSET     0x04
 #define QUERY_ADAPTER_REVISION_ID_OFFSET   0x08
 #define QUERY_ADAPTER_INTA_PIN_OFFSET      0x10
+#define QUERY_ADAPTER_VSD_OFFSET           0x20

 	mailbox = mthca_alloc_mailbox(dev, GFP_KERNEL);
 	if (IS_ERR(mailbox))
...
@@ -1111,6 +1146,9 @@ int mthca_QUERY_ADAPTER(struct mthca_dev *dev,
 	MTHCA_GET(adapter->revision_id, outbox, QUERY_ADAPTER_REVISION_ID_OFFSET);
 	MTHCA_GET(adapter->inta_pin, outbox, QUERY_ADAPTER_INTA_PIN_OFFSET);

+	get_board_id(outbox + QUERY_ADAPTER_VSD_OFFSET / 4,
+		     adapter->board_id);
+
 out:
 	mthca_free_mailbox(dev, mailbox);
 	return err;
...
@@ -1121,7 +1159,7 @@ int mthca_INIT_HCA(struct mthca_dev *dev,
 		   u8 *status)
 {
 	struct mthca_mailbox *mailbox;
-	u32 *inbox;
+	__be32 *inbox;
 	int err;

 #define INIT_HCA_IN_SIZE	0x200
...
@@ -1247,10 +1285,8 @@ int mthca_INIT_IB(struct mthca_dev *dev,
 #define INIT_IB_FLAG_SIG	(1 << 18)
 #define INIT_IB_FLAG_NG		(1 << 17)
 #define INIT_IB_FLAG_G0		(1 << 16)
-#define INIT_IB_FLAG_1X		(1 << 8)
-#define INIT_IB_FLAG_4X		(1 << 9)
-#define INIT_IB_FLAG_12X	(1 << 11)
 #define INIT_IB_VL_SHIFT	4
+#define INIT_IB_PORT_WIDTH_SHIFT 8
 #define INIT_IB_MTU_SHIFT	12
 #define INIT_IB_MAX_GID_OFFSET	0x06
 #define INIT_IB_MAX_PKEY_OFFSET	0x0a
...
@@ -1266,12 +1302,11 @@ int mthca_INIT_IB(struct mthca_dev *dev,
 	memset(inbox, 0, INIT_IB_IN_SIZE);

 	flags = 0;
-	flags |= param->enable_1x     ? INIT_IB_FLAG_1X  : 0;
-	flags |= param->enable_4x     ? INIT_IB_FLAG_4X  : 0;
 	flags |= param->set_guid0     ? INIT_IB_FLAG_G0  : 0;
 	flags |= param->set_node_guid ? INIT_IB_FLAG_NG  : 0;
 	flags |= param->set_si_guid   ? INIT_IB_FLAG_SIG : 0;
 	flags |= param->vl_cap << INIT_IB_VL_SHIFT;
+	flags |= param->port_width << INIT_IB_PORT_WIDTH_SHIFT;
 	flags |= param->mtu_cap << INIT_IB_MTU_SHIFT;
 	MTHCA_PUT(inbox, flags, INIT_IB_FLAGS_OFFSET);
...
@@ -1342,7 +1377,7 @@ int mthca_MAP_ICM(struct mthca_dev *dev, struct mthca_icm *icm, u64 virt, u8 *st
 int mthca_MAP_ICM_page(struct mthca_dev *dev, u64 dma_addr, u64 virt, u8 *status)
 {
 	struct mthca_mailbox *mailbox;
-	u64 *inbox;
+	__be64 *inbox;
 	int err;

 	mailbox = mthca_alloc_mailbox(dev, GFP_KERNEL);
...
@@ -1468,6 +1503,27 @@ int mthca_HW2SW_CQ(struct mthca_dev *dev, struct mthca_mailbox *mailbox,
 			    CMD_TIME_CLASS_A, status);
 }

+int mthca_SW2HW_SRQ(struct mthca_dev *dev, struct mthca_mailbox *mailbox,
+		    int srq_num, u8 *status)
+{
+	return mthca_cmd(dev, mailbox->dma, srq_num, 0, CMD_SW2HW_SRQ,
+			 CMD_TIME_CLASS_A, status);
+}
+
+int mthca_HW2SW_SRQ(struct mthca_dev *dev, struct mthca_mailbox *mailbox,
+		    int srq_num, u8 *status)
+{
+	return mthca_cmd_box(dev, 0, mailbox->dma, srq_num, 0,
+			     CMD_HW2SW_SRQ,
+			     CMD_TIME_CLASS_A, status);
+}
+
+int mthca_ARM_SRQ(struct mthca_dev *dev, int srq_num, int limit, u8 *status)
+{
+	return mthca_cmd(dev, limit, srq_num, 0, CMD_ARM_SRQ,
+			 CMD_TIME_CLASS_B, status);
+}
+
 int mthca_MODIFY_QP(struct mthca_dev *dev, int trans, u32 num, int is_ee,
 		    struct mthca_mailbox *mailbox, u32 optmask,
 		    u8 *status)
...
@@ -1513,7 +1569,7 @@ int mthca_MODIFY_QP(struct mthca_dev *dev, int trans, u32 num,
 			if (i % 8 == 0)
 				printk("  [%02x] ", i * 4);
 			printk(" %08x",
-			       be32_to_cpu(((u32 *) mailbox->buf)[i + 2]));
+			       be32_to_cpu(((__be32 *) mailbox->buf)[i + 2]));
 			if ((i + 1) % 8 == 0)
 				printk("\n");
 		}
...
@@ -1533,7 +1589,7 @@ int mthca_MODIFY_QP(struct mthca_dev *dev, int trans, u32 num,
 			if (i % 8 == 0)
 				printk("[%02x] ", i * 4);
 			printk(" %08x",
-			       be32_to_cpu(((u32 *) mailbox->buf)[i + 2]));
+			       be32_to_cpu(((__be32 *) mailbox->buf)[i + 2]));
 			if ((i + 1) % 8 == 0)
 				printk("\n");
 		}
...
drivers/infiniband/hw/mthca/mthca_cmd.h
 /*
  * Copyright (c) 2004, 2005 Topspin Communications.  All rights reserved.
+ * Copyright (c) 2005 Mellanox Technologies. All rights reserved.
  *
  * This software is available to you under a choice of one of two
  * licenses. You may choose to be licensed under the terms of the GNU
...
@@ -35,7 +36,7 @@
 #ifndef MTHCA_CMD_H
 #define MTHCA_CMD_H

-#include <ib_verbs.h>
+#include <rdma/ib_verbs.h>

 #define MTHCA_MAILBOX_SIZE 4096
...
@@ -183,10 +184,11 @@ struct mthca_dev_lim {
 };

 struct mthca_adapter {
-	u32 vendor_id;
-	u32 device_id;
-	u32 revision_id;
-	u8  inta_pin;
+	u32  vendor_id;
+	u32  device_id;
+	u32  revision_id;
+	char board_id[MTHCA_BOARD_ID_LEN];
+	u8   inta_pin;
 };

 struct mthca_init_hca_param {
...
@@ -218,8 +220,7 @@ struct mthca_init_hca_param {
 };

 struct mthca_init_ib_param {
-	int enable_1x;
-	int enable_4x;
+	int port_width;
 	int vl_cap;
 	int mtu_cap;
 	u16 gid_cap;
...
@@ -297,6 +298,11 @@ int mthca_SW2HW_CQ(struct mthca_dev *dev, struct mthca_mailbox *mailbox,
 		   int cq_num, u8 *status);
 int mthca_HW2SW_CQ(struct mthca_dev *dev, struct mthca_mailbox *mailbox,
 		   int cq_num, u8 *status);
+int mthca_SW2HW_SRQ(struct mthca_dev *dev, struct mthca_mailbox *mailbox,
+		    int srq_num, u8 *status);
+int mthca_HW2SW_SRQ(struct mthca_dev *dev, struct mthca_mailbox *mailbox,
+		    int srq_num, u8 *status);
+int mthca_ARM_SRQ(struct mthca_dev *dev, int srq_num, int limit, u8 *status);
 int mthca_MODIFY_QP(struct mthca_dev *dev, int trans, u32 num, int is_ee,
 		    struct mthca_mailbox *mailbox, u32 optmask,
 		    u8 *status);
...
drivers/infiniband/hw/mthca/mthca_config_reg.h
/*
* Copyright (c) 2004 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
...
...
drivers/infiniband/hw/mthca/mthca_cq.c
...
...
@@ -2,6 +2,8 @@
* Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
* Copyright (c) 2005 Cisco Systems, Inc. All rights reserved.
* Copyright (c) 2005 Mellanox Technologies. All rights reserved.
* Copyright (c) 2004 Voltaire, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
...
...
@@ -37,7 +39,7 @@
#include <linux/init.h>
#include <linux/hardirq.h>
#include <ib_pack.h>
#include <
rdma/
ib_pack.h>
#include "mthca_dev.h"
#include "mthca_cmd.h"
...
...
@@ -55,21 +57,21 @@ enum {
* Must be packed because start is 64 bits but only aligned to 32 bits.
*/
struct
mthca_cq_context
{
u
32
flags
;
u
64
start
;
u
32
logsize_usrpage
;
u32
error_eqn
;
/* Tavor only */
u
32
comp_eqn
;
u
32
pd
;
u
32
lkey
;
u
32
last_notified_index
;
u
32
solicit_producer_index
;
u
32
consumer_index
;
u
32
producer_index
;
u
32
cqn
;
u
32
ci_db
;
/* Arbel only */
u32
state_db
;
/* Arbel only */
u32
reserved
;
__be
32
flags
;
__be
64
start
;
__be
32
logsize_usrpage
;
__be32
error_eqn
;
/* Tavor only */
__be
32
comp_eqn
;
__be
32
pd
;
__be
32
lkey
;
__be
32
last_notified_index
;
__be
32
solicit_producer_index
;
__be
32
consumer_index
;
__be
32
producer_index
;
__be
32
cqn
;
__be
32
ci_db
;
/* Arbel only */
__be32
state_db
;
/* Arbel only */
u32
reserved
;
}
__attribute__
((
packed
));
#define MTHCA_CQ_STATUS_OK ( 0 << 28)
...
...
@@ -108,31 +110,31 @@ enum {
};
struct
mthca_cqe
{
u
32
my_qpn
;
u
32
my_ee
;
u
32
rqpn
;
u
16
sl_g_mlpath
;
u
16
rlid
;
u
32
imm_etype_pkey_eec
;
u
32
byte_cnt
;
u
32
wqe
;
u8
opcode
;
u8
is_send
;
u8
reserved
;
u8
owner
;
__be
32
my_qpn
;
__be
32
my_ee
;
__be
32
rqpn
;
__be
16
sl_g_mlpath
;
__be
16
rlid
;
__be
32
imm_etype_pkey_eec
;
__be
32
byte_cnt
;
__be
32
wqe
;
u8
opcode
;
u8
is_send
;
u8
reserved
;
u8
owner
;
};
struct
mthca_err_cqe
{
u
32
my_qpn
;
u32
reserved1
[
3
];
u8
syndrome
;
u8
reserved2
;
u
16
db_cnt
;
u32
reserved3
;
u
32
wqe
;
u8
opcode
;
u8
reserved4
[
2
];
u8
owner
;
__be
32
my_qpn
;
u32
reserved1
[
3
];
u8
syndrome
;
u8
reserved2
;
__be
16
db_cnt
;
u32
reserved3
;
__be
32
wqe
;
u8
opcode
;
u8
reserved4
[
2
];
u8
owner
;
};
#define MTHCA_CQ_ENTRY_OWNER_SW (0 << 7)
...
...
@@ -191,7 +193,7 @@ static void dump_cqe(struct mthca_dev *dev, void *cqe_ptr)
static
inline
void
update_cons_index
(
struct
mthca_dev
*
dev
,
struct
mthca_cq
*
cq
,
int
incr
)
{
u
32
doorbell
[
2
];
__be
32
doorbell
[
2
];
if
(
mthca_is_memfree
(
dev
))
{
*
cq
->
set_ci_db
=
cpu_to_be32
(
cq
->
cons_index
);
...
...
@@ -222,7 +224,8 @@ void mthca_cq_event(struct mthca_dev *dev, u32 cqn)
cq
->
ibcq
.
comp_handler
(
&
cq
->
ibcq
,
cq
->
ibcq
.
cq_context
);
}
void
mthca_cq_clean
(
struct
mthca_dev
*
dev
,
u32
cqn
,
u32
qpn
)
void
mthca_cq_clean
(
struct
mthca_dev
*
dev
,
u32
cqn
,
u32
qpn
,
struct
mthca_srq
*
srq
)
{
struct
mthca_cq
*
cq
;
struct
mthca_cqe
*
cqe
;
...
...
@@ -263,8 +266,11 @@ void mthca_cq_clean(struct mthca_dev *dev, u32 cqn, u32 qpn)
*/
while
(
prod_index
>
cq
->
cons_index
)
{
cqe
=
get_cqe
(
cq
,
(
prod_index
-
1
)
&
cq
->
ibcq
.
cqe
);
if
(
cqe
->
my_qpn
==
cpu_to_be32
(
qpn
))
if
(
cqe
->
my_qpn
==
cpu_to_be32
(
qpn
))
{
if
(
srq
)
mthca_free_srq_wqe
(
srq
,
be32_to_cpu
(
cqe
->
wqe
));
++
nfreed
;
}
else
if
(
nfreed
)
memcpy
(
get_cqe
(
cq
,
(
prod_index
-
1
+
nfreed
)
&
cq
->
ibcq
.
cqe
),
...
...
@@ -291,7 +297,7 @@ static int handle_error_cqe(struct mthca_dev *dev, struct mthca_cq *cq,
{
int
err
;
int
dbd
;
u
32
new_wqe
;
__be
32
new_wqe
;
if
(
cqe
->
syndrome
==
SYNDROME_LOCAL_QP_OP_ERR
)
{
mthca_dbg
(
dev
,
"local QP operation err "
...
...
@@ -365,6 +371,13 @@ static int handle_error_cqe(struct mthca_dev *dev, struct mthca_cq *cq,
break
;
}
/*
* Mem-free HCAs always generate one CQE per WQE, even in the
* error case, so we don't have to check the doorbell count, etc.
*/
if
(
mthca_is_memfree
(
dev
))
return
0
;
err
=
mthca_free_err_wqe
(
dev
,
qp
,
is_send
,
wqe_index
,
&
dbd
,
&
new_wqe
);
if
(
err
)
return
err
;
...
...
@@ -373,12 +386,8 @@ static int handle_error_cqe(struct mthca_dev *dev, struct mthca_cq *cq,
	 * If we're at the end of the WQE chain, or we've used up our
	 * doorbell count, free the CQE.  Otherwise just update it for
	 * the next poll operation.
-	 *
-	 * This does not apply to mem-free HCAs: they don't use the
-	 * doorbell count field, and so we should always free the CQE.
	 */
-	if (mthca_is_memfree(dev) ||
-	    !(new_wqe & cpu_to_be32(0x3f)) || (!cqe->db_cnt && dbd))
+	if (!(new_wqe & cpu_to_be32(0x3f)) || (!cqe->db_cnt && dbd))
		return 0;

	cqe->db_cnt = cpu_to_be16(be16_to_cpu(cqe->db_cnt) - dbd);
...
@@ -450,23 +459,27 @@ static inline int mthca_poll_one(struct mthca_dev *dev,
					   >> wq->wqe_shift);
		entry->wr_id = (*cur_qp)->wrid[wqe_index +
					       (*cur_qp)->rq.max];
+	} else if ((*cur_qp)->ibqp.srq) {
+		struct mthca_srq *srq = to_msrq((*cur_qp)->ibqp.srq);
+		u32 wqe = be32_to_cpu(cqe->wqe);
+		wq = NULL;
+		wqe_index = wqe >> srq->wqe_shift;
+		entry->wr_id = srq->wrid[wqe_index];
+		mthca_free_srq_wqe(srq, wqe);
	} else {
		wq = &(*cur_qp)->rq;
		wqe_index = be32_to_cpu(cqe->wqe) >> wq->wqe_shift;
		entry->wr_id = (*cur_qp)->wrid[wqe_index];
	}

-	if (wq->last_comp < wqe_index)
-		wq->tail += wqe_index - wq->last_comp;
-	else
-		wq->tail += wqe_index + wq->max - wq->last_comp;
-
-	wq->last_comp = wqe_index;
+	if (wq) {
+		if (wq->last_comp < wqe_index)
+			wq->tail += wqe_index - wq->last_comp;
+		else
+			wq->tail += wqe_index + wq->max - wq->last_comp;
+
+		if (0)
+			mthca_dbg(dev, "%s completion for QP %06x, index %d (nr %d)\n",
+				  is_send ? "Send" : "Receive",
+				  (*cur_qp)->qpn, wqe_index, wq->max);
+
+		wq->last_comp = wqe_index;
+	}

	if (is_error) {
		err = handle_error_cqe(dev, cq, *cur_qp, wqe_index, is_send,
...
@@ -584,13 +597,13 @@ int mthca_poll_cq(struct ib_cq *ibcq, int num_entries,
int mthca_tavor_arm_cq(struct ib_cq *cq, enum ib_cq_notify notify)
{
-	u32 doorbell[2];
+	__be32 doorbell[2];

	doorbell[0] = cpu_to_be32((notify == IB_CQ_SOLICITED ?
				   MTHCA_TAVOR_CQ_DB_REQ_NOT_SOL :
				   MTHCA_TAVOR_CQ_DB_REQ_NOT)      |
				  to_mcq(cq)->cqn);
-	doorbell[1] = 0xffffffff;
+	doorbell[1] = (__force __be32) 0xffffffff;

	mthca_write64(doorbell,
		      to_mdev(cq->device)->kar + MTHCA_CQ_DOORBELL,
...
@@ -602,9 +615,9 @@ int mthca_tavor_arm_cq(struct ib_cq *cq, enum ib_cq_notify notify)
int mthca_arbel_arm_cq(struct ib_cq *ibcq, enum ib_cq_notify notify)
{
	struct mthca_cq *cq = to_mcq(ibcq);
-	u32 doorbell[2];
+	__be32 doorbell[2];
	u32 sn;
-	u32 ci;
+	__be32 ci;

	sn = cq->arm_sn & 3;
	ci = cpu_to_be32(cq->cons_index);
...
@@ -637,113 +650,8 @@ int mthca_arbel_arm_cq(struct ib_cq *ibcq, enum ib_cq_notify notify)
static void mthca_free_cq_buf(struct mthca_dev *dev, struct mthca_cq *cq)
{
-	int i;
-	int size;
-
-	if (cq->is_direct)
-		dma_free_coherent(&dev->pdev->dev,
-				  (cq->ibcq.cqe + 1) * MTHCA_CQ_ENTRY_SIZE,
-				  cq->queue.direct.buf,
-				  pci_unmap_addr(&cq->queue.direct, mapping));
-	else {
-		size = (cq->ibcq.cqe + 1) * MTHCA_CQ_ENTRY_SIZE;
-		for (i = 0; i < (size + PAGE_SIZE - 1) / PAGE_SIZE; ++i)
-			if (cq->queue.page_list[i].buf)
-				dma_free_coherent(&dev->pdev->dev, PAGE_SIZE,
-						  cq->queue.page_list[i].buf,
-						  pci_unmap_addr(&cq->queue.page_list[i],
-								 mapping));
-
-		kfree(cq->queue.page_list);
-	}
-}
-
-static int mthca_alloc_cq_buf(struct mthca_dev *dev, int size,
-			      struct mthca_cq *cq)
-{
-	int err = -ENOMEM;
-	int npages, shift;
-	u64 *dma_list = NULL;
-	dma_addr_t t;
-	int i;
-
-	if (size <= MTHCA_MAX_DIRECT_CQ_SIZE) {
-		cq->is_direct = 1;
-		npages        = 1;
-		shift         = get_order(size) + PAGE_SHIFT;
-
-		cq->queue.direct.buf = dma_alloc_coherent(&dev->pdev->dev,
-							  size, &t, GFP_KERNEL);
-		if (!cq->queue.direct.buf)
-			return -ENOMEM;
-
-		pci_unmap_addr_set(&cq->queue.direct, mapping, t);
-
-		memset(cq->queue.direct.buf, 0, size);
-
-		while (t & ((1 << shift) - 1)) {
-			--shift;
-			npages *= 2;
-		}
-
-		dma_list = kmalloc(npages * sizeof *dma_list, GFP_KERNEL);
-		if (!dma_list)
-			goto err_free;
-
-		for (i = 0; i < npages; ++i)
-			dma_list[i] = t + i * (1 << shift);
-	} else {
-		cq->is_direct = 0;
-		npages        = (size + PAGE_SIZE - 1) / PAGE_SIZE;
-		shift         = PAGE_SHIFT;
-
-		dma_list = kmalloc(npages * sizeof *dma_list, GFP_KERNEL);
-		if (!dma_list)
-			return -ENOMEM;
-
-		cq->queue.page_list = kmalloc(npages * sizeof *cq->queue.page_list,
-					      GFP_KERNEL);
-		if (!cq->queue.page_list)
-			goto err_out;
-
-		for (i = 0; i < npages; ++i)
-			cq->queue.page_list[i].buf = NULL;
-
-		for (i = 0; i < npages; ++i) {
-			cq->queue.page_list[i].buf =
-				dma_alloc_coherent(&dev->pdev->dev, PAGE_SIZE,
-						   &t, GFP_KERNEL);
-			if (!cq->queue.page_list[i].buf)
-				goto err_free;
-
-			dma_list[i] = t;
-			pci_unmap_addr_set(&cq->queue.page_list[i], mapping, t);
-
-			memset(cq->queue.page_list[i].buf, 0, PAGE_SIZE);
-		}
-	}
-
-	err = mthca_mr_alloc_phys(dev, dev->driver_pd.pd_num,
-				  dma_list, shift, npages,
-				  0, size,
-				  MTHCA_MPT_FLAG_LOCAL_WRITE |
-				  MTHCA_MPT_FLAG_LOCAL_READ,
-				  &cq->mr);
-	if (err)
-		goto err_free;
-
-	kfree(dma_list);
-
-	return 0;
-
-err_free:
-	mthca_free_cq_buf(dev, cq);
-
-err_out:
-	kfree(dma_list);
-
-	return err;
+	mthca_buf_free(dev, (cq->ibcq.cqe + 1) * MTHCA_CQ_ENTRY_SIZE,
+		       &cq->queue, cq->is_direct, &cq->mr);
}

int mthca_init_cq(struct mthca_dev *dev, int nent,
...
@@ -795,7 +703,9 @@ int mthca_init_cq(struct mthca_dev *dev, int nent,
	cq_context = mailbox->buf;

	if (cq->is_kernel) {
-		err = mthca_alloc_cq_buf(dev, size, cq);
+		err = mthca_buf_alloc(dev, size, MTHCA_MAX_DIRECT_CQ_SIZE,
+				      &cq->queue, &cq->is_direct,
+				      &dev->driver_pd, 1, &cq->mr);
		if (err)
			goto err_out_mailbox;
...
@@ -811,7 +721,6 @@ int mthca_init_cq(struct mthca_dev *dev, int nent,
	cq_context->flags           = cpu_to_be32(MTHCA_CQ_STATUS_OK      |
						  MTHCA_CQ_STATE_DISARMED |
						  MTHCA_CQ_FLAG_TR);
-	cq_context->start           = cpu_to_be64(0);
	cq_context->logsize_usrpage = cpu_to_be32((ffs(nent) - 1) << 24);
	if (ctx)
		cq_context->logsize_usrpage |= cpu_to_be32(ctx->uar.index);
...
@@ -857,10 +766,8 @@ int mthca_init_cq(struct mthca_dev *dev, int nent,
	return 0;

err_out_free_mr:
-	if (cq->is_kernel) {
-		mthca_free_mr(dev, &cq->mr);
+	if (cq->is_kernel)
		mthca_free_cq_buf(dev, cq);
-	}

err_out_mailbox:
	mthca_free_mailbox(dev, mailbox);
...
@@ -904,7 +811,7 @@ void mthca_free_cq(struct mthca_dev *dev,
		mthca_warn(dev, "HW2SW_CQ returned status 0x%02x\n", status);

	if (0) {
-		u32 *ctx = mailbox->buf;
+		__be32 *ctx = mailbox->buf;
		int j;

		printk(KERN_ERR "context for CQN %x (cons index %x, next sw %d)\n",
...
@@ -928,7 +835,6 @@ void mthca_free_cq(struct mthca_dev *dev,
	wait_event(cq->wait, !atomic_read(&cq->refcount));

	if (cq->is_kernel) {
-		mthca_free_mr(dev, &cq->mr);
		mthca_free_cq_buf(dev, cq);
		if (mthca_is_memfree(dev)) {
			mthca_free_db(dev, MTHCA_DB_TYPE_CQ_ARM, cq->arm_db_index);
...
drivers/infiniband/hw/mthca/mthca_dev.h
...
...
@@ -2,6 +2,8 @@
* Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
* Copyright (c) 2005 Cisco Systems. All rights reserved.
* Copyright (c) 2005 Mellanox Technologies. All rights reserved.
* Copyright (c) 2004 Voltaire, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
...
...
@@ -66,6 +68,10 @@ enum {
	MTHCA_MAX_PORTS = 2
};

+enum {
+	MTHCA_BOARD_ID_LEN = 64
+};
+
enum {
	MTHCA_EQ_CONTEXT_SIZE = 0x40,
	MTHCA_CQ_CONTEXT_SIZE = 0x40,
...
@@ -142,6 +148,7 @@ struct mthca_limits {
	int      reserved_mcgs;
	int      num_pds;
	int      reserved_pds;
+	u8       port_width_cap;
};

struct mthca_alloc {
...
@@ -211,6 +218,13 @@ struct mthca_cq_table {
	struct mthca_icm_table *table;
};

+struct mthca_srq_table {
+	struct mthca_alloc     alloc;
+	spinlock_t             lock;
+	struct mthca_array     srq;
+	struct mthca_icm_table *table;
+};
+
struct mthca_qp_table {
	struct mthca_alloc alloc;
	u32                rdb_base;
...
@@ -246,6 +260,7 @@ struct mthca_dev {
	unsigned long device_cap_flags;

	u32  rev_id;
+	char board_id[MTHCA_BOARD_ID_LEN];

	/* firmware info */
	u64 fw_ver;
...
@@ -291,6 +306,7 @@ struct mthca_dev {
	struct mthca_mr_table  mr_table;
	struct mthca_eq_table  eq_table;
	struct mthca_cq_table  cq_table;
+	struct mthca_srq_table srq_table;
	struct mthca_qp_table  qp_table;
	struct mthca_av_table  av_table;
	struct mthca_mcg_table mcg_table;
...
@@ -331,14 +347,13 @@ extern void __buggy_use_of_MTHCA_PUT(void);
#define MTHCA_PUT(dest, source, offset)                               \
	do {                                                          \
-		__typeof__(source) *__p =                             \
-			(__typeof__(source) *) ((char *) (dest) + (offset)); \
+		void *__d = ((char *) (dest) + (offset));             \
		switch (sizeof(source)) {                             \
-		case 1: *__p = (source);            break;            \
-		case 2: *__p = cpu_to_be16(source); break;            \
-		case 4: *__p = cpu_to_be32(source); break;            \
-		case 8: *__p = cpu_to_be64(source); break;            \
+		case 1: *(u8 *)    __d = (source);             break; \
+		case 2: *(__be16 *) __d = cpu_to_be16(source); break; \
+		case 4: *(__be32 *) __d = cpu_to_be32(source); break; \
+		case 8: *(__be64 *) __d = cpu_to_be64(source); break; \
		default: __buggy_use_of_MTHCA_PUT();                  \
		} \
	} while (0)
...
@@ -354,12 +369,18 @@ int mthca_array_set(struct mthca_array *array, int index, void *value);
void mthca_array_clear(struct mthca_array *array, int index);
int mthca_array_init(struct mthca_array *array, int nent);
void mthca_array_cleanup(struct mthca_array *array, int nent);
+int mthca_buf_alloc(struct mthca_dev *dev, int size, int max_direct,
+		    union mthca_buf *buf, int *is_direct, struct mthca_pd *pd,
+		    int hca_write, struct mthca_mr *mr);
+void mthca_buf_free(struct mthca_dev *dev, int size, union mthca_buf *buf,
+		    int is_direct, struct mthca_mr *mr);

int mthca_init_uar_table(struct mthca_dev *dev);
int mthca_init_pd_table(struct mthca_dev *dev);
int mthca_init_mr_table(struct mthca_dev *dev);
int mthca_init_eq_table(struct mthca_dev *dev);
int mthca_init_cq_table(struct mthca_dev *dev);
+int mthca_init_srq_table(struct mthca_dev *dev);
int mthca_init_qp_table(struct mthca_dev *dev);
int mthca_init_av_table(struct mthca_dev *dev);
int mthca_init_mcg_table(struct mthca_dev *dev);
...
@@ -369,6 +390,7 @@ void mthca_cleanup_pd_table(struct mthca_dev *dev);
void mthca_cleanup_mr_table(struct mthca_dev *dev);
void mthca_cleanup_eq_table(struct mthca_dev *dev);
void mthca_cleanup_cq_table(struct mthca_dev *dev);
+void mthca_cleanup_srq_table(struct mthca_dev *dev);
void mthca_cleanup_qp_table(struct mthca_dev *dev);
void mthca_cleanup_av_table(struct mthca_dev *dev);
void mthca_cleanup_mcg_table(struct mthca_dev *dev);
...
@@ -419,7 +441,19 @@ int mthca_init_cq(struct mthca_dev *dev, int nent,
void mthca_free_cq(struct mthca_dev *dev,
		   struct mthca_cq *cq);
void mthca_cq_event(struct mthca_dev *dev, u32 cqn);
-void mthca_cq_clean(struct mthca_dev *dev, u32 cqn, u32 qpn);
+void mthca_cq_clean(struct mthca_dev *dev, u32 cqn, u32 qpn,
+		    struct mthca_srq *srq);
+
+int mthca_alloc_srq(struct mthca_dev *dev, struct mthca_pd *pd,
+		    struct ib_srq_attr *attr, struct mthca_srq *srq);
+void mthca_free_srq(struct mthca_dev *dev, struct mthca_srq *srq);
+void mthca_srq_event(struct mthca_dev *dev, u32 srqn,
+		     enum ib_event_type event_type);
+void mthca_free_srq_wqe(struct mthca_srq *srq, u32 wqe_addr);
+int mthca_tavor_post_srq_recv(struct ib_srq *srq, struct ib_recv_wr *wr,
+			      struct ib_recv_wr **bad_wr);
+int mthca_arbel_post_srq_recv(struct ib_srq *srq, struct ib_recv_wr *wr,
+			      struct ib_recv_wr **bad_wr);

void mthca_qp_event(struct mthca_dev *dev, u32 qpn,
		    enum ib_event_type event_type);
...
@@ -433,7 +467,7 @@ int mthca_arbel_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
int mthca_arbel_post_receive(struct ib_qp *ibqp, struct ib_recv_wr *wr,
			     struct ib_recv_wr **bad_wr);
int mthca_free_err_wqe(struct mthca_dev *dev, struct mthca_qp *qp, int is_send,
-		       int index, int *dbd, u32 *new_wqe);
+		       int index, int *dbd, __be32 *new_wqe);
int mthca_alloc_qp(struct mthca_dev *dev,
		   struct mthca_pd *pd,
		   struct mthca_cq *send_cq,
...
drivers/infiniband/hw/mthca/mthca_doorbell.h
/*
* Copyright (c) 2004 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
* Copyright (c) 2005 Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
...
...
@@ -57,13 +58,13 @@ static inline void mthca_write64_raw(__be64 val, void __iomem *dest)
	__raw_writeq((__force u64) val, dest);
}

-static inline void mthca_write64(u32 val[2], void __iomem *dest,
+static inline void mthca_write64(__be32 val[2], void __iomem *dest,
				 spinlock_t *doorbell_lock)
{
	__raw_writeq(*(u64 *) val, dest);
}

-static inline void mthca_write_db_rec(u32 val[2], u32 *db)
+static inline void mthca_write_db_rec(__be32 val[2], __be32 *db)
{
	*(u64 *) db = *(u64 *) val;
}
...
@@ -86,18 +87,18 @@ static inline void mthca_write64_raw(__be64 val, void __iomem *dest)
	__raw_writel(((__force u32 *) &val)[1], dest + 4);
}

-static inline void mthca_write64(u32 val[2], void __iomem *dest,
+static inline void mthca_write64(__be32 val[2], void __iomem *dest,
				 spinlock_t *doorbell_lock)
{
	unsigned long flags;

	spin_lock_irqsave(doorbell_lock, flags);
-	__raw_writel(val[0], dest);
-	__raw_writel(val[1], dest + 4);
+	__raw_writel((__force u32) val[0], dest);
+	__raw_writel((__force u32) val[1], dest + 4);
	spin_unlock_irqrestore(doorbell_lock, flags);
}

-static inline void mthca_write_db_rec(u32 val[2], u32 *db)
+static inline void mthca_write_db_rec(__be32 val[2], __be32 *db)
{
	db[0] = val[0];
	wmb();
...
drivers/infiniband/hw/mthca/mthca_eq.c
/*
* Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
...
...
@@ -51,18 +52,18 @@ enum {
 * Must be packed because start is 64 bits but only aligned to 32 bits.
 */
struct mthca_eq_context {
-	u32 flags;
-	u64 start;
-	u32 logsize_usrpage;
-	u32 tavor_pd;		/* reserved for Arbel */
-	u8  reserved1[3];
-	u8  intr;
-	u32 arbel_pd;		/* lost_count for Tavor */
-	u32 lkey;
-	u32 reserved2[2];
-	u32 consumer_index;
-	u32 producer_index;
-	u32 reserved3[4];
+	__be32 flags;
+	__be64 start;
+	__be32 logsize_usrpage;
+	__be32 tavor_pd;	/* reserved for Arbel */
+	u8     reserved1[3];
+	u8     intr;
+	__be32 arbel_pd;	/* lost_count for Tavor */
+	__be32 lkey;
+	u32    reserved2[2];
+	__be32 consumer_index;
+	__be32 producer_index;
+	u32    reserved3[4];
} __attribute__((packed));

#define MTHCA_EQ_STATUS_OK          ( 0 << 28)
...
@@ -127,28 +128,28 @@ struct mthca_eqe {
	union {
		u32 raw[6];
		struct {
-			u32 cqn;
+			__be32 cqn;
		} __attribute__((packed)) comp;
		struct {
-			u16 reserved1;
-			u16 token;
-			u32 reserved2;
-			u8  reserved3[3];
-			u8  status;
-			u64 out_param;
+			u16    reserved1;
+			__be16 token;
+			u32    reserved2;
+			u8     reserved3[3];
+			u8     status;
+			__be64 out_param;
		} __attribute__((packed)) cmd;
		struct {
-			u32 qpn;
+			__be32 qpn;
		} __attribute__((packed)) qp;
		struct {
-			u32 cqn;
-			u32 reserved1;
-			u8  reserved2[3];
-			u8  syndrome;
+			__be32 cqn;
+			u32    reserved1;
+			u8     reserved2[3];
+			u8     syndrome;
		} __attribute__((packed)) cq_err;
		struct {
-			u32 reserved1[2];
-			u32 port;
+			u32    reserved1[2];
+			__be32 port;
		} __attribute__((packed)) port_change;
	} event;

	u8 reserved3[3];
...
@@ -167,7 +168,7 @@ static inline u64 async_mask(struct mthca_dev *dev)
static inline void tavor_set_eq_ci(struct mthca_dev *dev, struct mthca_eq *eq, u32 ci)
{
-	u32 doorbell[2];
+	__be32 doorbell[2];

	doorbell[0] = cpu_to_be32(MTHCA_EQ_DB_SET_CI | eq->eqn);
	doorbell[1] = cpu_to_be32(ci & (eq->nent - 1));
...
@@ -190,8 +191,8 @@ static inline void arbel_set_eq_ci(struct mthca_dev *dev, struct mthca_eq *eq, u
{
	/* See comment in tavor_set_eq_ci() above. */
	wmb();
-	__raw_writel(cpu_to_be32(ci), dev->eq_regs.arbel.eq_set_ci_base +
-		     eq->eqn * 8);
+	__raw_writel((__force u32) cpu_to_be32(ci),
+		     dev->eq_regs.arbel.eq_set_ci_base + eq->eqn * 8);
	/* We still want ordering, just not swabbing, so add a barrier */
	mb();
}
...
@@ -206,7 +207,7 @@ static inline void set_eq_ci(struct mthca_dev *dev, struct mthca_eq *eq, u32 ci)
static inline void tavor_eq_req_not(struct mthca_dev *dev, int eqn)
{
-	u32 doorbell[2];
+	__be32 doorbell[2];

	doorbell[0] = cpu_to_be32(MTHCA_EQ_DB_REQ_NOT | eqn);
	doorbell[1] = 0;
...
@@ -224,7 +225,7 @@ static inline void arbel_eq_req_not(struct mthca_dev *dev, u32 eqn_mask)
static inline void disarm_cq(struct mthca_dev *dev, int eqn, int cqn)
{
	if (!mthca_is_memfree(dev)) {
-		u32 doorbell[2];
+		__be32 doorbell[2];

		doorbell[0] = cpu_to_be32(MTHCA_EQ_DB_DISARM_CQ | eqn);
		doorbell[1] = cpu_to_be32(cqn);
...
drivers/infiniband/hw/mthca/mthca_mad.c
/*
* Copyright (c) 2004 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Mellanox Technologies. All rights reserved.
* Copyright (c) 2004 Voltaire, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
...
...
@@ -32,9 +34,9 @@
 * $Id: mthca_mad.c 1349 2004-12-16 21:09:43Z roland $
 */

-#include <ib_verbs.h>
-#include <ib_mad.h>
-#include <ib_smi.h>
+#include <rdma/ib_verbs.h>
+#include <rdma/ib_mad.h>
+#include <rdma/ib_smi.h>

#include "mthca_dev.h"
#include "mthca_cmd.h"
...
@@ -192,7 +194,7 @@ int mthca_process_mad(struct ib_device *ibdev,
{
	int err;
	u8 status;
-	u16 slid = in_wc ? in_wc->slid : IB_LID_PERMISSIVE;
+	u16 slid = in_wc ? in_wc->slid : be16_to_cpu(IB_LID_PERMISSIVE);

	/* Forward locally generated traps to the SM */
	if (in_mad->mad_hdr.method == IB_MGMT_METHOD_TRAP &&
...
drivers/infiniband/hw/mthca/mthca_main.c
/*
* Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
* Copyright (c) 2005 Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
...
...
@@ -34,7 +35,6 @@
 */

#include <linux/config.h>
-#include <linux/version.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/errno.h>
...
@@ -171,6 +171,7 @@ static int __devinit mthca_dev_lim(struct mthca_dev *mdev, struct mthca_dev_lim
	mdev->limits.reserved_mrws      = dev_lim->reserved_mrws;
	mdev->limits.reserved_uars      = dev_lim->reserved_uars;
	mdev->limits.reserved_pds       = dev_lim->reserved_pds;
+	mdev->limits.port_width_cap     = dev_lim->max_port_width;

	/* IB_DEVICE_RESIZE_MAX_WR not supported by driver.
	   May be doable since hardware supports it for SRQ.
...
@@ -212,7 +213,6 @@ static int __devinit mthca_init_tavor(struct mthca_dev *mdev)
	struct mthca_dev_lim        dev_lim;
	struct mthca_profile        profile;
	struct mthca_init_hca_param init_hca;
-	struct mthca_adapter        adapter;

	err = mthca_SYS_EN(mdev, &status);
	if (err) {
...
@@ -253,6 +253,8 @@ static int __devinit mthca_init_tavor(struct mthca_dev *mdev)
	profile = default_profile;
	profile.num_uar   = dev_lim.uar_size / PAGE_SIZE;
	profile.uarc_size = 0;
+	if (mdev->mthca_flags & MTHCA_FLAG_SRQ)
+		profile.num_srq = dev_lim.max_srqs;

	err = mthca_make_profile(mdev, &profile, &dev_lim, &init_hca);
	if (err < 0)
...
@@ -270,26 +272,8 @@ static int __devinit mthca_init_tavor(struct mthca_dev *mdev)
		goto err_disable;
	}

-	err = mthca_QUERY_ADAPTER(mdev, &adapter, &status);
-	if (err) {
-		mthca_err(mdev, "QUERY_ADAPTER command failed, aborting.\n");
-		goto err_close;
-	}
-	if (status) {
-		mthca_err(mdev, "QUERY_ADAPTER returned status 0x%02x, "
-			  "aborting.\n", status);
-		err = -EINVAL;
-		goto err_close;
-	}
-
-	mdev->eq_table.inta_pin = adapter.inta_pin;
-	mdev->rev_id            = adapter.revision_id;
-
	return 0;

-err_close:
-	mthca_CLOSE_HCA(mdev, 0, &status);
-
err_disable:
	mthca_SYS_DIS(mdev, &status);
...
@@ -442,15 +426,29 @@ static int __devinit mthca_init_icm(struct mthca_dev *mdev,
	}

	mdev->cq_table.table = mthca_alloc_icm_table(mdev, init_hca->cqc_base,
						     dev_lim->cqc_entry_sz,
						     mdev->limits.num_cqs,
						     mdev->limits.reserved_cqs, 0);
	if (!mdev->cq_table.table) {
		mthca_err(mdev, "Failed to map CQ context memory, aborting.\n");
		err = -ENOMEM;
		goto err_unmap_rdb;
	}

+	if (mdev->mthca_flags & MTHCA_FLAG_SRQ) {
+		mdev->srq_table.table =
+			mthca_alloc_icm_table(mdev, init_hca->srqc_base,
+					      dev_lim->srq_entry_sz,
+					      mdev->limits.num_srqs,
+					      mdev->limits.reserved_srqs, 0);
+		if (!mdev->srq_table.table) {
+			mthca_err(mdev, "Failed to map SRQ context memory, "
+				  "aborting.\n");
+			err = -ENOMEM;
+			goto err_unmap_cq;
+		}
+	}
+
	/*
	 * It's not strictly required, but for simplicity just map the
	 * whole multicast group table now.  The table isn't very big
...
@@ -466,11 +464,15 @@ static int __devinit mthca_init_icm(struct mthca_dev *mdev,
	if (!mdev->mcg_table.table) {
		mthca_err(mdev, "Failed to map MCG context memory, aborting.\n");
		err = -ENOMEM;
-		goto err_unmap_cq;
+		goto err_unmap_srq;
	}

	return 0;

+err_unmap_srq:
+	if (mdev->mthca_flags & MTHCA_FLAG_SRQ)
+		mthca_free_icm_table(mdev, mdev->srq_table.table);
+
err_unmap_cq:
	mthca_free_icm_table(mdev, mdev->cq_table.table);
...
@@ -506,7 +508,6 @@ static int __devinit mthca_init_arbel(struct mthca_dev *mdev)
	struct mthca_dev_lim        dev_lim;
	struct mthca_profile        profile;
	struct mthca_init_hca_param init_hca;
-	struct mthca_adapter        adapter;
	u64 icm_size;
	u8 status;
	int err;
...
@@ -551,6 +552,8 @@ static int __devinit mthca_init_arbel(struct mthca_dev *mdev)
	profile = default_profile;
	profile.num_uar  = dev_lim.uar_size / PAGE_SIZE;
	profile.num_udav = 0;
+	if (mdev->mthca_flags & MTHCA_FLAG_SRQ)
+		profile.num_srq = dev_lim.max_srqs;

	icm_size = mthca_make_profile(mdev, &profile, &dev_lim, &init_hca);
	if ((int) icm_size < 0) {
...
@@ -574,24 +577,11 @@ static int __devinit mthca_init_arbel(struct mthca_dev *mdev)
		goto err_free_icm;
	}

-	err = mthca_QUERY_ADAPTER(mdev, &adapter, &status);
-	if (err) {
-		mthca_err(mdev, "QUERY_ADAPTER command failed, aborting.\n");
-		goto err_free_icm;
-	}
-	if (status) {
-		mthca_err(mdev, "QUERY_ADAPTER returned status 0x%02x, "
-			  "aborting.\n", status);
-		err = -EINVAL;
-		goto err_free_icm;
-	}
-
-	mdev->eq_table.inta_pin = adapter.inta_pin;
-	mdev->rev_id            = adapter.revision_id;
-
	return 0;

err_free_icm:
+	if (mdev->mthca_flags & MTHCA_FLAG_SRQ)
+		mthca_free_icm_table(mdev, mdev->srq_table.table);
	mthca_free_icm_table(mdev, mdev->cq_table.table);
	mthca_free_icm_table(mdev, mdev->qp_table.rdb_table);
	mthca_free_icm_table(mdev, mdev->qp_table.eqp_table);
...
@@ -614,12 +604,70 @@ static int __devinit mthca_init_arbel(struct mthca_dev *mdev)
	return err;
}

+static void mthca_close_hca(struct mthca_dev *mdev)
+{
+	u8 status;
+
+	mthca_CLOSE_HCA(mdev, 0, &status);
+
+	if (mthca_is_memfree(mdev)) {
+		if (mdev->mthca_flags & MTHCA_FLAG_SRQ)
+			mthca_free_icm_table(mdev, mdev->srq_table.table);
+		mthca_free_icm_table(mdev, mdev->cq_table.table);
+		mthca_free_icm_table(mdev, mdev->qp_table.rdb_table);
+		mthca_free_icm_table(mdev, mdev->qp_table.eqp_table);
+		mthca_free_icm_table(mdev, mdev->qp_table.qp_table);
+		mthca_free_icm_table(mdev, mdev->mr_table.mpt_table);
+		mthca_free_icm_table(mdev, mdev->mr_table.mtt_table);
+		mthca_unmap_eq_icm(mdev);
+
+		mthca_UNMAP_ICM_AUX(mdev, &status);
+		mthca_free_icm(mdev, mdev->fw.arbel.aux_icm);
+
+		mthca_UNMAP_FA(mdev, &status);
+		mthca_free_icm(mdev, mdev->fw.arbel.fw_icm);
+
+		if (!(mdev->mthca_flags & MTHCA_FLAG_NO_LAM))
+			mthca_DISABLE_LAM(mdev, &status);
+	} else
+		mthca_SYS_DIS(mdev, &status);
+}
+
static int __devinit mthca_init_hca(struct mthca_dev *mdev)
{
+	u8 status;
+	int err;
+	struct mthca_adapter adapter;
+
	if (mthca_is_memfree(mdev))
-		return mthca_init_arbel(mdev);
+		err = mthca_init_arbel(mdev);
	else
-		return mthca_init_tavor(mdev);
+		err = mthca_init_tavor(mdev);
+
+	if (err)
+		return err;
+
+	err = mthca_QUERY_ADAPTER(mdev, &adapter, &status);
+	if (err) {
+		mthca_err(mdev, "QUERY_ADAPTER command failed, aborting.\n");
+		goto err_close;
+	}
+	if (status) {
+		mthca_err(mdev, "QUERY_ADAPTER returned status 0x%02x, "
+			  "aborting.\n", status);
+		err = -EINVAL;
+		goto err_close;
+	}
+
+	mdev->eq_table.inta_pin = adapter.inta_pin;
+	mdev->rev_id            = adapter.revision_id;
+	memcpy(mdev->board_id, adapter.board_id, sizeof mdev->board_id);
+
+	return 0;
+
+err_close:
+	mthca_close_hca(mdev);
+	return err;
+}

static int __devinit mthca_setup_hca(struct mthca_dev *dev)
...
@@ -709,11 +757,18 @@ static int __devinit mthca_setup_hca(struct mthca_dev *dev)
		goto err_cmd_poll;
	}

+	err = mthca_init_srq_table(dev);
+	if (err) {
+		mthca_err(dev, "Failed to initialize "
+			  "shared receive queue table, aborting.\n");
+		goto err_cq_table_free;
+	}
+
	err = mthca_init_qp_table(dev);
	if (err) {
		mthca_err(dev, "Failed to initialize "
			  "queue pair table, aborting.\n");
-		goto err_cq_table_free;
+		goto err_srq_table_free;
	}

	err = mthca_init_av_table(dev);
...
@@ -738,6 +793,9 @@ static int __devinit mthca_setup_hca(struct mthca_dev *dev)
err_qp_table_free:
	mthca_cleanup_qp_table(dev);

+err_srq_table_free:
+	mthca_cleanup_srq_table(dev);
+
err_cq_table_free:
	mthca_cleanup_cq_table(dev);
...
@@ -844,33 +902,6 @@ static int __devinit mthca_enable_msi_x(struct mthca_dev *mdev)
	return 0;
}

-static void mthca_close_hca(struct mthca_dev *mdev)
-{
-	u8 status;
-
-	mthca_CLOSE_HCA(mdev, 0, &status);
-
-	if (mthca_is_memfree(mdev)) {
-		mthca_free_icm_table(mdev, mdev->cq_table.table);
-		mthca_free_icm_table(mdev, mdev->qp_table.rdb_table);
-		mthca_free_icm_table(mdev, mdev->qp_table.eqp_table);
-		mthca_free_icm_table(mdev, mdev->qp_table.qp_table);
-		mthca_free_icm_table(mdev, mdev->mr_table.mpt_table);
-		mthca_free_icm_table(mdev, mdev->mr_table.mtt_table);
-		mthca_unmap_eq_icm(mdev);
-
-		mthca_UNMAP_ICM_AUX(mdev, &status);
-		mthca_free_icm(mdev, mdev->fw.arbel.aux_icm);
-
-		mthca_UNMAP_FA(mdev, &status);
-		mthca_free_icm(mdev, mdev->fw.arbel.fw_icm);
-
-		if (!(mdev->mthca_flags & MTHCA_FLAG_NO_LAM))
-			mthca_DISABLE_LAM(mdev, &status);
-	} else
-		mthca_SYS_DIS(mdev, &status);
-}
-
/* Types of supported HCA */
enum {
	TAVOR,			/* MT23108 */
...
@@ -887,9 +918,9 @@ static struct {
	int is_memfree;
	int is_pcie;
} mthca_hca_table[] = {
-	[TAVOR]        = { .latest_fw = MTHCA_FW_VER(3, 3, 2),
-			   .is_memfree = 0, .is_pcie = 0 },
-	[ARBEL_COMPAT] = { .latest_fw = MTHCA_FW_VER(4, 6, 2),
-			   .is_memfree = 0, .is_pcie = 1 },
-	[ARBEL_NATIVE] = { .latest_fw = MTHCA_FW_VER(5, 0, 1),
-			   .is_memfree = 1, .is_pcie = 1 },
+	[TAVOR]        = { .latest_fw = MTHCA_FW_VER(3, 3, 3),
+			   .is_memfree = 0, .is_pcie = 0 },
+	[ARBEL_COMPAT] = { .latest_fw = MTHCA_FW_VER(4, 7, 0),
+			   .is_memfree = 0, .is_pcie = 1 },
+	[ARBEL_NATIVE] = { .latest_fw = MTHCA_FW_VER(5, 1, 0),
+			   .is_memfree = 1, .is_pcie = 1 },
	[SINAI]        = { .latest_fw = MTHCA_FW_VER(1, 0, 1),
			   .is_memfree = 1, .is_pcie = 1 }
};
...
@@ -1051,6 +1082,7 @@ static int __devinit mthca_init_one(struct pci_dev *pdev,
	mthca_cleanup_mcg_table(mdev);
	mthca_cleanup_av_table(mdev);
	mthca_cleanup_qp_table(mdev);
+	mthca_cleanup_srq_table(mdev);
	mthca_cleanup_cq_table(mdev);
	mthca_cmd_use_polling(mdev);
	mthca_cleanup_eq_table(mdev);
...
@@ -1100,6 +1132,7 @@ static void __devexit mthca_remove_one(struct pci_dev *pdev)
	mthca_cleanup_mcg_table(mdev);
	mthca_cleanup_av_table(mdev);
	mthca_cleanup_qp_table(mdev);
+	mthca_cleanup_srq_table(mdev);
	mthca_cleanup_cq_table(mdev);
	mthca_cmd_use_polling(mdev);
	mthca_cleanup_eq_table(mdev);
...
drivers/infiniband/hw/mthca/mthca_mcg.c
...
...
@@ -42,10 +42,10 @@ enum {
};

struct mthca_mgm {
-	u32 next_gid_index;
-	u32 reserved[3];
-	u8  gid[16];
-	u32 qp[MTHCA_QP_PER_MGM];
+	__be32 next_gid_index;
+	u32    reserved[3];
+	u8     gid[16];
+	__be32 qp[MTHCA_QP_PER_MGM];
};

static const u8 zero_gid[16];	/* automatically initialized to 0 */
...
@@ -94,10 +94,14 @@ static int find_mgm(struct mthca_dev *dev,
	if (0)
		mthca_dbg(dev, "Hash for %04x:%04x:%04x:%04x:"
			  "%04x:%04x:%04x:%04x is %04x\n",
-			  be16_to_cpu(((u16 *) gid)[0]), be16_to_cpu(((u16 *) gid)[1]),
-			  be16_to_cpu(((u16 *) gid)[2]), be16_to_cpu(((u16 *) gid)[3]),
-			  be16_to_cpu(((u16 *) gid)[4]), be16_to_cpu(((u16 *) gid)[5]),
-			  be16_to_cpu(((u16 *) gid)[6]), be16_to_cpu(((u16 *) gid)[7]),
+			  be16_to_cpu(((__be16 *) gid)[0]), be16_to_cpu(((__be16 *) gid)[1]),
+			  be16_to_cpu(((__be16 *) gid)[2]), be16_to_cpu(((__be16 *) gid)[3]),
+			  be16_to_cpu(((__be16 *) gid)[4]), be16_to_cpu(((__be16 *) gid)[5]),
+			  be16_to_cpu(((__be16 *) gid)[6]), be16_to_cpu(((__be16 *) gid)[7]),
			  *hash);

	*index = *hash;
...
@@ -258,14 +262,14 @@ int mthca_multicast_detach(struct ib_qp *ibqp, union ib_gid *gid, u16 lid)
	if (index == -1) {
		mthca_err(dev, "MGID %04x:%04x:%04x:%04x:%04x:%04x:%04x:%04x "
			  "not found\n",
-			  be16_to_cpu(((u16 *) gid->raw)[0]),
-			  be16_to_cpu(((u16 *) gid->raw)[1]),
-			  be16_to_cpu(((u16 *) gid->raw)[2]),
-			  be16_to_cpu(((u16 *) gid->raw)[3]),
-			  be16_to_cpu(((u16 *) gid->raw)[4]),
-			  be16_to_cpu(((u16 *) gid->raw)[5]),
-			  be16_to_cpu(((u16 *) gid->raw)[6]),
-			  be16_to_cpu(((u16 *) gid->raw)[7]));
+			  be16_to_cpu(((__be16 *) gid->raw)[0]),
+			  be16_to_cpu(((__be16 *) gid->raw)[1]),
+			  be16_to_cpu(((__be16 *) gid->raw)[2]),
+			  be16_to_cpu(((__be16 *) gid->raw)[3]),
+			  be16_to_cpu(((__be16 *) gid->raw)[4]),
+			  be16_to_cpu(((__be16 *) gid->raw)[5]),
+			  be16_to_cpu(((__be16 *) gid->raw)[6]),
+			  be16_to_cpu(((__be16 *) gid->raw)[7]));
		err = -EINVAL;
		goto out;
	}
...
drivers/infiniband/hw/mthca/mthca_memfree.c
/*
* Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Cisco Systems. All rights reserved.
* Copyright (c) 2005 Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
...
...
@@ -285,6 +286,7 @@ struct mthca_icm_table *mthca_alloc_icm_table(struct mthca_dev *dev,
{
	struct mthca_icm_table *table;
	int num_icm;
+	unsigned chunk_size;
	int i;
	u8 status;
...
@@ -305,7 +307,11 @@ struct mthca_icm_table *mthca_alloc_icm_table(struct mthca_dev *dev,
		table->icm[i] = NULL;

	for (i = 0; i * MTHCA_TABLE_CHUNK_SIZE < reserved * obj_size; ++i) {
-		table->icm[i] = mthca_alloc_icm(dev, MTHCA_TABLE_CHUNK_SIZE >> PAGE_SHIFT,
+		chunk_size = MTHCA_TABLE_CHUNK_SIZE;
+		if ((i + 1) * MTHCA_TABLE_CHUNK_SIZE > nobj * obj_size)
+			chunk_size = nobj * obj_size - i * MTHCA_TABLE_CHUNK_SIZE;
+
+		table->icm[i] = mthca_alloc_icm(dev, chunk_size >> PAGE_SHIFT,
						(use_lowmem ? GFP_KERNEL : GFP_HIGHUSER) |
						__GFP_NOWARN);
		if (!table->icm[i])
...
@@ -481,7 +487,7 @@ void mthca_cleanup_user_db_tab(struct mthca_dev *dev, struct mthca_uar *uar,
	}
}

-int mthca_alloc_db(struct mthca_dev *dev, int type, u32 qn, u32 **db)
+int mthca_alloc_db(struct mthca_dev *dev, int type, u32 qn, __be32 **db)
{
	int group;
	int start, end, dir;
...
...
@@ -564,7 +570,7 @@ int mthca_alloc_db(struct mthca_dev *dev, int type, u32 qn, u32 **db)
		page->db_rec[j] = cpu_to_be64((qn << 8) | (type << 5));

-	*db = (u32 *) &page->db_rec[j];
+	*db = (__be32 *) &page->db_rec[j];

out:
	up(&dev->db_tab->mutex);
...
...
drivers/infiniband/hw/mthca/mthca_memfree.h
/*
* Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Cisco Systems. All rights reserved.
* Copyright (c) 2005 Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
...
...
@@ -137,7 +138,7 @@ enum {
struct mthca_db_page {
	DECLARE_BITMAP(used, MTHCA_DB_REC_PER_PAGE);
-	u64        *db_rec;
+	__be64     *db_rec;
	dma_addr_t  mapping;
};
...
...
@@ -172,7 +173,7 @@ void mthca_cleanup_user_db_tab(struct mthca_dev *dev, struct mthca_uar *uar,
int mthca_init_db_tab(struct mthca_dev *dev);
void mthca_cleanup_db_tab(struct mthca_dev *dev);
-int mthca_alloc_db(struct mthca_dev *dev, int type, u32 qn, u32 **db);
+int mthca_alloc_db(struct mthca_dev *dev, int type, u32 qn, __be32 **db);
void mthca_free_db(struct mthca_dev *dev, int type, int db_index);
#endif /* MTHCA_MEMFREE_H */
drivers/infiniband/hw/mthca/mthca_mr.c
/*
* Copyright (c) 2004 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
...
...
@@ -50,18 +51,18 @@ struct mthca_mtt {
* Must be packed because mtt_seg is 64 bits but only aligned to 32 bits.
*/
struct mthca_mpt_entry {
-	u32 flags;
-	u32 page_size;
-	u32 key;
-	u32 pd;
-	u64 start;
-	u64 length;
-	u32 lkey;
-	u32 window_count;
-	u32 window_count_limit;
-	u64 mtt_seg;
-	u32 mtt_sz;		/* Arbel only */
-	u32 reserved[2];
+	__be32 flags;
+	__be32 page_size;
+	__be32 key;
+	__be32 pd;
+	__be64 start;
+	__be64 length;
+	__be32 lkey;
+	__be32 window_count;
+	__be32 window_count_limit;
+	__be64 mtt_seg;
+	__be32 mtt_sz;		/* Arbel only */
+	u32    reserved[2];
} __attribute__((packed));
#define MTHCA_MPT_FLAG_SW_OWNS (0xfUL << 28)
...
...
@@ -247,7 +248,7 @@ int mthca_write_mtt(struct mthca_dev *dev, struct mthca_mtt *mtt,
		    int start_index, u64 *buffer_list, int list_len)
{
	struct mthca_mailbox *mailbox;
-	u64 *mtt_entry;
+	__be64 *mtt_entry;
	int err = 0;
	u8 status;
	int i;
...
...
@@ -389,7 +390,7 @@ int mthca_mr_alloc(struct mthca_dev *dev, u32 pd, int buffer_size_shift,
	for (i = 0; i < sizeof (struct mthca_mpt_entry) / 4; ++i) {
		if (i % 4 == 0)
			printk("[%02x] ", i * 4);
-		printk(" %08x", be32_to_cpu(((u32 *) mpt_entry)[i]));
+		printk(" %08x", be32_to_cpu(((__be32 *) mpt_entry)[i]));
		if ((i + 1) % 4 == 0)
			printk("\n");
	}
...
...
@@ -458,7 +459,7 @@ int mthca_mr_alloc_phys(struct mthca_dev *dev, u32 pd,
static void mthca_free_region(struct mthca_dev *dev, u32 lkey)
{
	mthca_table_put(dev, dev->mr_table.mpt_table,
-			arbel_key_to_hw_index(lkey));
+			key_to_hw_index(dev, lkey));

	mthca_free(&dev->mr_table.mpt_alloc, key_to_hw_index(dev, lkey));
}
...
...
@@ -562,7 +563,7 @@ int mthca_fmr_alloc(struct mthca_dev *dev, u32 pd,
	for (i = 0; i < sizeof (struct mthca_mpt_entry) / 4; ++i) {
		if (i % 4 == 0)
			printk("[%02x] ", i * 4);
-		printk(" %08x", be32_to_cpu(((u32 *) mpt_entry)[i]));
+		printk(" %08x", be32_to_cpu(((__be32 *) mpt_entry)[i]));
		if ((i + 1) % 4 == 0)
			printk("\n");
	}
...
...
@@ -669,7 +670,7 @@ int mthca_tavor_map_phys_fmr(struct ib_fmr *ibfmr, u64 *page_list,
	mpt_entry.length = cpu_to_be64(list_len * (1ull << fmr->attr.page_size));
	mpt_entry.start  = cpu_to_be64(iova);

-	writel(mpt_entry.lkey, &fmr->mem.tavor.mpt->key);
+	__raw_writel((__force u32) mpt_entry.lkey, &fmr->mem.tavor.mpt->key);
	memcpy_toio(&fmr->mem.tavor.mpt->start, &mpt_entry.start,
		    offsetof(struct mthca_mpt_entry, window_count) -
		    offsetof(struct mthca_mpt_entry, start));
...
...
drivers/infiniband/hw/mthca/mthca_pd.c
/*
* Copyright (c) 2004 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Cisco Systems. All rights reserved.
* Copyright (c) 2005 Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
...
...
drivers/infiniband/hw/mthca/mthca_profile.c
/*
* Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
...
...
@@ -101,6 +102,7 @@ u64 mthca_make_profile(struct mthca_dev *dev,
	profile[MTHCA_RES_UARC].size = request->uarc_size;

	profile[MTHCA_RES_QP].num    = request->num_qp;
+	profile[MTHCA_RES_SRQ].num   = request->num_srq;
	profile[MTHCA_RES_EQP].num   = request->num_qp;
	profile[MTHCA_RES_RDB].num   = request->num_qp * request->rdb_per_qp;
	profile[MTHCA_RES_CQ].num    = request->num_cq;
...
...
drivers/infiniband/hw/mthca/mthca_profile.h
/*
* Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
...
...
@@ -41,6 +42,7 @@
struct mthca_profile {
	int num_qp;
	int rdb_per_qp;
+	int num_srq;
	int num_cq;
	int num_mcg;
	int num_mpt;
...
...
drivers/infiniband/hw/mthca/mthca_provider.c
...
...
@@ -2,6 +2,8 @@
* Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
* Copyright (c) 2005 Cisco Systems. All rights reserved.
* Copyright (c) 2005 Mellanox Technologies. All rights reserved.
* Copyright (c) 2004 Voltaire, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
...
...
@@ -34,7 +36,7 @@
* $Id: mthca_provider.c 1397 2004-12-28 05:09:00Z roland $
*/
-#include <ib_smi.h>
+#include <rdma/ib_smi.h>
#include <linux/mm.h>
#include "mthca_dev.h"
...
...
@@ -79,10 +81,10 @@ static int mthca_query_device(struct ib_device *ibdev,
	}

	props->device_cap_flags    = mdev->device_cap_flags;
-	props->vendor_id           = be32_to_cpup((u32 *) (out_mad->data + 36)) &
+	props->vendor_id           = be32_to_cpup((__be32 *) (out_mad->data + 36)) &
		0xffffff;
-	props->vendor_part_id      = be16_to_cpup((u16 *) (out_mad->data + 30));
-	props->hw_ver              = be16_to_cpup((u16 *) (out_mad->data + 32));
+	props->vendor_part_id      = be16_to_cpup((__be16 *) (out_mad->data + 30));
+	props->hw_ver              = be16_to_cpup((__be16 *) (out_mad->data + 32));
	memcpy(&props->sys_image_guid, out_mad->data +  4, 8);
	memcpy(&props->node_guid,      out_mad->data + 12, 8);
...
...
@@ -118,6 +120,8 @@ static int mthca_query_port(struct ib_device *ibdev,
	if (!in_mad || !out_mad)
		goto out;

+	memset(props, 0, sizeof *props);
+
	memset(in_mad, 0, sizeof *in_mad);
	in_mad->base_version       = 1;
	in_mad->mgmt_class         = IB_MGMT_CLASS_SUBN_LID_ROUTED;
...
...
@@ -136,16 +140,17 @@ static int mthca_query_port(struct ib_device *ibdev,
		goto out;
	}

-	props->lid               = be16_to_cpup((u16 *) (out_mad->data + 16));
+	props->lid               = be16_to_cpup((__be16 *) (out_mad->data + 16));
	props->lmc               = out_mad->data[34] & 0x7;
-	props->sm_lid            = be16_to_cpup((u16 *) (out_mad->data + 18));
+	props->sm_lid            = be16_to_cpup((__be16 *) (out_mad->data + 18));
	props->sm_sl             = out_mad->data[36] & 0xf;
	props->state             = out_mad->data[32] & 0xf;
	props->phys_state        = out_mad->data[33] >> 4;
-	props->port_cap_flags    = be32_to_cpup((u32 *) (out_mad->data + 20));
+	props->port_cap_flags    = be32_to_cpup((__be32 *) (out_mad->data + 20));
	props->gid_tbl_len       = to_mdev(ibdev)->limits.gid_table_len;
	props->max_msg_sz        = 0x80000000;
	props->pkey_tbl_len      = to_mdev(ibdev)->limits.pkey_table_len;
-	props->qkey_viol_cntr    = be16_to_cpup((u16 *) (out_mad->data + 48));
+	props->qkey_viol_cntr    = be16_to_cpup((__be16 *) (out_mad->data + 48));
	props->active_width      = out_mad->data[31] & 0xf;
	props->active_speed      = out_mad->data[35] >> 4;
...
...
@@ -221,7 +226,7 @@ static int mthca_query_pkey(struct ib_device *ibdev,
		goto out;
	}

-	*pkey = be16_to_cpu(((u16 *) out_mad->data)[index % 32]);
+	*pkey = be16_to_cpu(((__be16 *) out_mad->data)[index % 32]);

out:
	kfree(in_mad);
...
...
@@ -420,6 +425,77 @@ static int mthca_ah_destroy(struct ib_ah *ah)
	return 0;
}

+static struct ib_srq *mthca_create_srq(struct ib_pd *pd,
+				       struct ib_srq_init_attr *init_attr,
+				       struct ib_udata *udata)
+{
+	struct mthca_create_srq ucmd;
+	struct mthca_ucontext *context = NULL;
+	struct mthca_srq *srq;
+	int err;
+
+	srq = kmalloc(sizeof *srq, GFP_KERNEL);
+	if (!srq)
+		return ERR_PTR(-ENOMEM);
+
+	if (pd->uobject) {
+		context = to_mucontext(pd->uobject->context);
+
+		if (ib_copy_from_udata(&ucmd, udata, sizeof ucmd))
+			return ERR_PTR(-EFAULT);
+
+		err = mthca_map_user_db(to_mdev(pd->device), &context->uar,
+					context->db_tab, ucmd.db_index,
+					ucmd.db_page);
+
+		if (err)
+			goto err_free;
+
+		srq->mr.ibmr.lkey = ucmd.lkey;
+		srq->db_index     = ucmd.db_index;
+	}
+
+	err = mthca_alloc_srq(to_mdev(pd->device), to_mpd(pd),
+			      &init_attr->attr, srq);
+
+	if (err && pd->uobject)
+		mthca_unmap_user_db(to_mdev(pd->device), &context->uar,
+				    context->db_tab, ucmd.db_index);
+
+	if (err)
+		goto err_free;
+
+	if (context && ib_copy_to_udata(udata, &srq->srqn, sizeof (__u32))) {
+		mthca_free_srq(to_mdev(pd->device), srq);
+		err = -EFAULT;
+		goto err_free;
+	}
+
+	return &srq->ibsrq;
+
+err_free:
+	kfree(srq);
+	return ERR_PTR(err);
+}
+
+static int mthca_destroy_srq(struct ib_srq *srq)
+{
+	struct mthca_ucontext *context;
+
+	if (srq->uobject) {
+		context = to_mucontext(srq->uobject->context);
+
+		mthca_unmap_user_db(to_mdev(srq->device), &context->uar,
+				    context->db_tab, to_msrq(srq)->db_index);
+	}
+
+	mthca_free_srq(to_mdev(srq->device), to_msrq(srq));
+	kfree(srq);
+
+	return 0;
+}
+
static struct ib_qp *mthca_create_qp(struct ib_pd *pd,
				     struct ib_qp_init_attr *init_attr,
				     struct ib_udata *udata)
...
...
@@ -956,14 +1032,22 @@ static ssize_t show_hca(struct class_device *cdev, char *buf)
	}
}

+static ssize_t show_board(struct class_device *cdev, char *buf)
+{
+	struct mthca_dev *dev = container_of(cdev, struct mthca_dev, ib_dev.class_dev);
+	return sprintf(buf, "%.*s\n", MTHCA_BOARD_ID_LEN, dev->board_id);
+}
+
static CLASS_DEVICE_ATTR(hw_rev,   S_IRUGO, show_rev,    NULL);
static CLASS_DEVICE_ATTR(fw_ver,   S_IRUGO, show_fw_ver, NULL);
static CLASS_DEVICE_ATTR(hca_type, S_IRUGO, show_hca,    NULL);
+static CLASS_DEVICE_ATTR(board_id, S_IRUGO, show_board,  NULL);

static struct class_device_attribute *mthca_class_attributes[] = {
	&class_device_attr_hw_rev,
	&class_device_attr_fw_ver,
-	&class_device_attr_hca_type
+	&class_device_attr_hca_type,
+	&class_device_attr_board_id
};
int
mthca_register_device
(
struct
mthca_dev
*
dev
)
...
...
@@ -990,6 +1074,17 @@ int mthca_register_device(struct mthca_dev *dev)
	dev->ib_dev.dealloc_pd           = mthca_dealloc_pd;
	dev->ib_dev.create_ah            = mthca_ah_create;
	dev->ib_dev.destroy_ah           = mthca_ah_destroy;
+
+	if (dev->mthca_flags & MTHCA_FLAG_SRQ) {
+		dev->ib_dev.create_srq           = mthca_create_srq;
+		dev->ib_dev.destroy_srq          = mthca_destroy_srq;
+
+		if (mthca_is_memfree(dev))
+			dev->ib_dev.post_srq_recv = mthca_arbel_post_srq_recv;
+		else
+			dev->ib_dev.post_srq_recv = mthca_tavor_post_srq_recv;
+	}
+
	dev->ib_dev.create_qp            = mthca_create_qp;
	dev->ib_dev.modify_qp            = mthca_modify_qp;
	dev->ib_dev.destroy_qp           = mthca_destroy_qp;
...
...
drivers/infiniband/hw/mthca/mthca_provider.h
/*
* Copyright (c) 2004 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Cisco Systems. All rights reserved.
* Copyright (c) 2005 Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
...
...
@@ -36,8 +37,8 @@
#ifndef MTHCA_PROVIDER_H
#define MTHCA_PROVIDER_H
-#include <ib_verbs.h>
-#include <ib_pack.h>
+#include <rdma/ib_verbs.h>
+#include <rdma/ib_pack.h>
#define MTHCA_MPT_FLAG_ATOMIC (1 << 14)
#define MTHCA_MPT_FLAG_REMOTE_WRITE (1 << 13)
...
...
@@ -50,6 +51,11 @@ struct mthca_buf_list {
	DECLARE_PCI_UNMAP_ADDR(mapping)
};

+union mthca_buf {
+	struct mthca_buf_list direct;
+	struct mthca_buf_list *page_list;
+};
+
struct mthca_uar {
	unsigned long pfn;
	int           index;
...
...
@@ -181,19 +187,39 @@ struct mthca_cq {
/* Next fields are Arbel only */
	int                    set_ci_db_index;
-	u32                   *set_ci_db;
+	__be32                *set_ci_db;
	int                    arm_db_index;
-	u32                   *arm_db;
+	__be32                *arm_db;
	int                    arm_sn;

-	union {
-		struct mthca_buf_list direct;
-		struct mthca_buf_list *page_list;
-	} queue;
+	union mthca_buf        queue;
	struct mthca_mr        mr;
	wait_queue_head_t      wait;
};

+struct mthca_srq {
+	struct ib_srq		ibsrq;
+	spinlock_t		lock;
+	atomic_t		refcount;
+	int			srqn;
+	int			max;
+	int			max_gs;
+	int			wqe_shift;
+	int			first_free;
+	int			last_free;
+	u16			counter;  /* Arbel only */
+	int			db_index; /* Arbel only */
+	__be32		       *db;       /* Arbel only */
+	void		       *last;
+
+	int			is_direct;
+	u64		       *wrid;
+	union mthca_buf		queue;
+	struct mthca_mr		mr;
+	wait_queue_head_t	wait;
+};
+
struct mthca_wq {
	spinlock_t lock;
	int        max;
...
...
@@ -206,7 +232,7 @@ struct mthca_wq {
	int        wqe_shift;

	int        db_index;	/* Arbel only */
-	u32       *db;
+	__be32    *db;
};

struct mthca_qp {
...
...
@@ -227,10 +253,7 @@ struct mthca_qp {
	int        send_wqe_offset;

	u64       *wrid;
-	union {
-		struct mthca_buf_list direct;
-		struct mthca_buf_list *page_list;
-	} queue;
+	union mthca_buf	queue;

	wait_queue_head_t wait;
};
...
...
@@ -277,6 +300,11 @@ static inline struct mthca_cq *to_mcq(struct ib_cq *ibcq)
	return container_of(ibcq, struct mthca_cq, ibcq);
}

+static inline struct mthca_srq *to_msrq(struct ib_srq *ibsrq)
+{
+	return container_of(ibsrq, struct mthca_srq, ibsrq);
+}
+
static inline struct mthca_qp *to_mqp(struct ib_qp *ibqp)
{
	return container_of(ibqp, struct mthca_qp, ibqp);
...
drivers/infiniband/hw/mthca/mthca_qp.c
(diff collapsed)
drivers/infiniband/hw/mthca/mthca_srq.c
0 → 100644
(diff collapsed)
drivers/infiniband/hw/mthca/mthca_user.h
...
...
@@ -69,6 +69,17 @@ struct mthca_create_cq_resp {
	__u32 reserved;
};

+struct mthca_create_srq {
+	__u32 lkey;
+	__u32 db_index;
+	__u64 db_page;
+};
+
+struct mthca_create_srq_resp {
+	__u32 srqn;
+	__u32 reserved;
+};
+
struct mthca_create_qp {
	__u32 lkey;
	__u32 reserved;
...
drivers/infiniband/hw/mthca/mthca_wqe.h
0 → 100644
/*
* Copyright (c) 2005 Cisco Systems. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
* General Public License (GPL) Version 2, available from the file
* COPYING in the main directory of this source tree, or the
* OpenIB.org BSD license below:
*
* Redistribution and use in source and binary forms, with or
* without modification, are permitted provided that the following
* conditions are met:
*
* - Redistributions of source code must retain the above
* copyright notice, this list of conditions and the following
* disclaimer.
*
* - Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials
* provided with the distribution.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
* BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
* ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
* CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*
* $Id: mthca_wqe.h 3047 2005-08-10 03:59:35Z roland $
*/
#ifndef MTHCA_WQE_H
#define MTHCA_WQE_H
#include <linux/types.h>
enum {
	MTHCA_NEXT_DBD       = 1 << 7,
	MTHCA_NEXT_FENCE     = 1 << 6,
	MTHCA_NEXT_CQ_UPDATE = 1 << 3,
	MTHCA_NEXT_EVENT_GEN = 1 << 2,
	MTHCA_NEXT_SOLICIT   = 1 << 1,

	MTHCA_MLX_VL15       = 1 << 17,
	MTHCA_MLX_SLR        = 1 << 16
};

enum {
	MTHCA_INVAL_LKEY = 0x100
};

struct mthca_next_seg {
	__be32 nda_op;	/* [31:6] next WQE [4:0] next opcode */
	__be32 ee_nds;	/* [31:8] next EE [7] DBD [6] F [5:0] next WQE size */
	__be32 flags;	/* [3] CQ [2] Event [1] Solicit */
	__be32 imm;	/* immediate data */
};

struct mthca_tavor_ud_seg {
	u32    reserved1;
	__be32 lkey;
	__be64 av_addr;
	u32    reserved2[4];
	__be32 dqpn;
	__be32 qkey;
	u32    reserved3[2];
};

struct mthca_arbel_ud_seg {
	__be32 av[8];
	__be32 dqpn;
	__be32 qkey;
	u32    reserved[2];
};

struct mthca_bind_seg {
	__be32 flags;	/* [31] Atomic [30] rem write [29] rem read */
	u32    reserved;
	__be32 new_rkey;
	__be32 lkey;
	__be64 addr;
	__be64 length;
};

struct mthca_raddr_seg {
	__be64 raddr;
	__be32 rkey;
	u32    reserved;
};

struct mthca_atomic_seg {
	__be64 swap_add;
	__be64 compare;
};

struct mthca_data_seg {
	__be32 byte_count;
	__be32 lkey;
	__be64 addr;
};

struct mthca_mlx_seg {
	__be32 nda_op;
	__be32 nds;
	__be32 flags;	/* [17] VL15 [16] SLR [14:12] static rate
			   [11:8] SL [3] C [2] E */
	__be16 rlid;
	__be16 vcrc;
};

#endif /* MTHCA_WQE_H */
drivers/infiniband/ulp/ipoib/Makefile
-EXTRA_CFLAGS += -Idrivers/infiniband/include

obj-$(CONFIG_INFINIBAND_IPOIB)			+= ib_ipoib.o

ib_ipoib-y					:= ipoib_main.o \
...
...
drivers/infiniband/ulp/ipoib/ipoib.h
/*
* Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
* Copyright (c) 2004 Voltaire, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
...
...
@@ -49,9 +51,9 @@
#include <asm/atomic.h>
#include <asm/semaphore.h>
-#include <ib_verbs.h>
-#include <ib_pack.h>
-#include <ib_sa.h>
+#include <rdma/ib_verbs.h>
+#include <rdma/ib_pack.h>
+#include <rdma/ib_sa.h>
/* constants */
...
...
@@ -88,8 +90,8 @@ enum {
/* structs */
struct ipoib_header {
-	u16    proto;
-	u16    reserved;
+	__be16 proto;
+	u16    reserved;
};

struct ipoib_pseudoheader {
...
drivers/infiniband/ulp/ipoib/ipoib_fs.c
...
...
@@ -97,7 +97,7 @@ static int ipoib_mcg_seq_show(struct seq_file *file, void *iter_ptr)
	for (n = 0, i = 0; i < sizeof mgid / 2; ++i) {
		n += sprintf(gid_buf + n, "%x",
-			     be16_to_cpu(((u16 *) mgid.raw)[i]));
+			     be16_to_cpu(((__be16 *) mgid.raw)[i]));
		if (i < sizeof mgid / 2 - 1)
			gid_buf[n++] = ':';
	}
...
...
drivers/infiniband/ulp/ipoib/ipoib_ib.c
/*
* Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
* Copyright (c) 2005 Mellanox Technologies. All rights reserved.
* Copyright (c) 2004, 2005 Voltaire, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
...
...
@@ -35,7 +38,7 @@
#include <linux/delay.h>
#include <linux/dma-mapping.h>
-#include <ib_cache.h>
+#include <rdma/ib_cache.h>
#include "ipoib.h"
...
...
drivers/infiniband/ulp/ipoib/ipoib_main.c
(diff collapsed)
drivers/infiniband/ulp/ipoib/ipoib_multicast.c
/*
* Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Sun Microsystems, Inc. All rights reserved.
* Copyright (c) 2004 Voltaire, Inc. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
...
...
@@ -357,7 +359,7 @@ static int ipoib_mcast_sendonly_join(struct ipoib_mcast *mcast)
	rec.mgid     = mcast->mcmember.mgid;
	rec.port_gid = priv->local_gid;
-	rec.pkey     = be16_to_cpu(priv->pkey);
+	rec.pkey     = cpu_to_be16(priv->pkey);

	ret = ib_sa_mcmember_rec_set(priv->ca, priv->port, &rec,
				     IB_SA_MCMEMBER_REC_MGID |
...
...
@@ -457,7 +459,7 @@ static void ipoib_mcast_join(struct net_device *dev, struct ipoib_mcast *mcast,
	rec.mgid     = mcast->mcmember.mgid;
	rec.port_gid = priv->local_gid;
-	rec.pkey     = be16_to_cpu(priv->pkey);
+	rec.pkey     = cpu_to_be16(priv->pkey);

	comp_mask = IB_SA_MCMEMBER_REC_MGID |
...
...
@@ -646,7 +648,7 @@ static int ipoib_mcast_leave(struct net_device *dev, struct ipoib_mcast *mcast)
	rec.mgid     = mcast->mcmember.mgid;
	rec.port_gid = priv->local_gid;
-	rec.pkey     = be16_to_cpu(priv->pkey);
+	rec.pkey     = cpu_to_be16(priv->pkey);

	/* Remove ourselves from the multicast group */
	ret = ipoib_mcast_detach(dev, be16_to_cpu(mcast->mcmember.mlid),
...
...
drivers/infiniband/ulp/ipoib/ipoib_verbs.c
/*
* Copyright (c) 2004, 2005 Topspin Communications. All rights reserved.
* Copyright (c) 2005 Mellanox Technologies. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
...
...
@@ -32,7 +33,7 @@
* $Id: ipoib_verbs.c 1349 2004-12-16 21:09:43Z roland $
*/
-#include <ib_cache.h>
+#include <rdma/ib_cache.h>
#include "ipoib.h"
...
...
drivers/infiniband/ulp/ipoib/ipoib_vlan.c
(diff collapsed)
drivers/infiniband/include/ib_cache.h → include/rdma/ib_cache.h (diff collapsed)
drivers/infiniband/include/ib_cm.h → include/rdma/ib_cm.h (diff collapsed)
drivers/infiniband/include/ib_fmr_pool.h → include/rdma/ib_fmr_pool.h (diff collapsed)
drivers/infiniband/include/ib_mad.h → include/rdma/ib_mad.h (diff collapsed)
drivers/infiniband/include/ib_pack.h → include/rdma/ib_pack.h (diff collapsed)
drivers/infiniband/include/ib_sa.h → include/rdma/ib_sa.h (diff collapsed)
drivers/infiniband/include/ib_smi.h → include/rdma/ib_smi.h (diff collapsed)
drivers/infiniband/include/ib_user_cm.h → include/rdma/ib_user_cm.h (diff collapsed)
drivers/infiniband/include/ib_user_mad.h → include/rdma/ib_user_mad.h (diff collapsed)
drivers/infiniband/include/ib_user_verbs.h → include/rdma/ib_user_verbs.h (diff collapsed)
drivers/infiniband/include/ib_verbs.h → include/rdma/ib_verbs.h (diff collapsed)