Commit 27e521c5 authored by David S. Miller

Merge tag 'rxrpc-next-20221201-b' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs

David Howells says:

====================
rxrpc: Increasing SACK size and moving away from softirq, parts 2 & 3

Here are the second and third parts of the patch series moving rxrpc away
from doing a lot of its work in softirq context and into an I/O thread
running in process context, thereby making it easier to support a larger
SACK table.

The full description is in the description for the first part[1] which is
already in net-next.

The second part includes some cleanups, adds some testing and overhauls
some tracing:

 (1) Remove declaration of rxrpc_kernel_call_is_complete() as the
     definition is no longer present.

 (2) Remove the knet() and kproto() macros in favour of using tracepoints.

 (3) Remove handling of duplicate packets from recvmsg.  The input side
     isn't now going to insert overlapping/duplicate packets into the
     recvmsg queue.

 (4) Don't use the rxrpc_conn_parameters struct in the rxrpc_connection or
     rxrpc_bundle structs - rather put the members in directly.

 (5) Extract the abort code from a received abort packet right up front
     rather than doing it in multiple places later.

 (6) Use enums and symbol lists rather than __builtin_return_address() to
     indicate where a tracepoint was triggered for local, peer, conn, call
     and skbuff tracing.

 (7) Add a refcount tracepoint for the rxrpc_bundle struct.

 (8) Implement an in-kernel server for the AFS rxperf testing program to
     talk to (enabled by a Kconfig option).

This is tagged as rxrpc-next-20221201-a.

The third part introduces the I/O thread and switches various bits over to
running there:

 (1) Fix call timers and the call and connection workqueues so that they
     don't hold refs on the rxrpc_call and rxrpc_connection structs, thereby
     avoiding messy cleanup when the last ref is put in softirq mode.

 (2) Split input.c so that the call packet processing bits are separate
     from the received packet distribution bits.  Call packet processing
     gets bumped over to the call event handler.

 (3) Create a per-local endpoint I/O thread.  Barring some tiny bits that
     still get done in softirq context, all packet reception, processing
     and transmission is done in this thread.  That will allow a load of
     locking to be removed.

 (4) Perform packet processing and error processing from the I/O thread.

 (5) Provide a mechanism to process call event notifications in the I/O
     thread rather than queuing a work item for that call.

 (6) Move data and ACK transmission into the I/O thread.  ACKs can then be
     transmitted at the point they're generated rather than getting
     delegated from softirq context to some process context somewhere.

 (7) Move call and local processor event handling into the I/O thread.

 (8) Move cwnd degradation to after packets have been transmitted so that
     they don't shorten the window too quickly.

A bunch of simplifications can then be done:

 (1) The input_lock is no longer necessary as exclusion is achieved by
     running the code in the I/O thread only.

 (2) Don't need to use sk->sk_receive_queue.lock to guard socket state
     changes as the socket mutex should suffice.

 (3) Don't take spinlocks in RCU callback functions as they get run in
     softirq context and thus need _bh annotations.

 (4) RCU is then no longer needed for the peer's error_targets list.

 (5) Simplify the skbuff handling in the receive path by dropping the ref
     in the basic I/O thread loop and getting an extra ref as and when we
     need to queue the packet for recvmsg or another context.

 (6) Get the peer address earlier in the input process and pass it to the
     users so that we only do it once.
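The skbuff handling in simplification (5) is a refcount-ownership pattern that can be shown with a toy counter (illustrative `example_*` names, not the kernel's skb API): the I/O loop owns exactly one ref per packet and always drops it; any consumer that wants to keep the packet, e.g. for recvmsg, takes its own ref first.

```c
/* Toy refcounted packet; freed is set when the count hits zero. */
struct example_pkt {
	int ref;
	int freed;
};

static void example_get(struct example_pkt *p)
{
	p->ref++;
}

static void example_put(struct example_pkt *p)
{
	if (--p->ref == 0)
		p->freed = 1;
}

/* One iteration of the basic I/O loop for one packet: take an extra
 * ref only if the packet is being queued elsewhere, then drop the
 * loop's ref unconditionally. */
static void example_io_step(struct example_pkt *p, int queue_for_recvmsg)
{
	if (queue_for_recvmsg)
		example_get(p);	/* the recvmsg queue's ref */
	example_put(p);		/* the loop's ref */
}
```

The unconditional put at the end of each iteration is what removes the ad-hoc "who frees this skb" bookkeeping from the receive path.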

This is tagged as rxrpc-next-20221201-b.

Changes:
========
ver #2)
 - Added a patch to change four assertions into warnings in rxrpc_read()
   and fixed a checker warning from a __user annotation that should have
   been removed.
 - Change a min() to min_t() in rxperf as PAGE_SIZE doesn't seem to match
   type size_t on i386.
 - Three error handling issues in rxrpc_new_incoming_call():
   - If not DATA or not seq #1, should drop the packet, not abort.
   - Fix a goto that went to the wrong place, dropping a non-held lock.
   - Fix an rcu_read_lock that should've been an unlock.
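For reference, the min()/min_t() point in the changelog can be shown in plain C. The kernel's min() is a type-checking macro that refuses operands of different types, and PAGE_SIZE may expand to an unsigned long that doesn't match a size_t length on i386; min_t() casts both sides to a named type. The `example_*` macro below is a simplified stand-in, not the kernel's implementation.

```c
#include <stddef.h>

/* Simplified analogue of the kernel's min_t(): force both operands to
 * the named type before comparing, sidestepping mixed-type warnings. */
#define example_min_t(type, a, b) \
	((type)(a) < (type)(b) ? (type)(a) : (type)(b))

/* Clamp a buffer length to one page, as rxperf might when filling a
 * reply; page_size is passed in rather than taken from PAGE_SIZE. */
static size_t example_clamp_to_page(size_t len, unsigned long page_size)
{
	return example_min_t(size_t, len, page_size);
}
```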
Tested-by: Marc Dionne <marc.dionne@auristor.com>
Tested-by: kafs-testing+fedora36_64checkkafs-build-144@auristor.com
Link: https://lore.kernel.org/r/166794587113.2389296.16484814996876530222.stgit@warthog.procyon.org.uk/ [1]
Link: https://lore.kernel.org/r/166982725699.621383.2358362793992993374.stgit@warthog.procyon.org.uk/ # v1
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
@@ -66,10 +66,10 @@ int rxrpc_kernel_charge_accept(struct socket *, rxrpc_notify_rx_t,
 void rxrpc_kernel_set_tx_length(struct socket *, struct rxrpc_call *, s64);
 bool rxrpc_kernel_check_life(const struct socket *, const struct rxrpc_call *);
 u32 rxrpc_kernel_get_epoch(struct socket *, struct rxrpc_call *);
-bool rxrpc_kernel_call_is_complete(struct rxrpc_call *);
 void rxrpc_kernel_set_max_life(struct socket *, struct rxrpc_call *,
			       unsigned long);
 int rxrpc_sock_set_min_security_level(struct sock *sk, unsigned int val);
+int rxrpc_sock_set_security_keyring(struct sock *, struct key *);

 #endif /* _NET_RXRPC_H */
This diff is collapsed.
@@ -58,4 +58,11 @@ config RXKAD
	  See Documentation/networking/rxrpc.rst.

+config RXPERF
+	tristate "RxRPC test service"
+	help
+	  Provide an rxperf service tester.  This listens on UDP port 7009 for
+	  incoming calls from the rxperf program (an example of which can be
+	  found in OpenAFS).
+
 endif
@@ -16,6 +16,7 @@ rxrpc-y := \
	conn_service.o \
	input.o \
	insecure.o \
+	io_thread.o \
	key.o \
	local_event.o \
	local_object.o \
@@ -36,3 +37,6 @@ rxrpc-y := \
 rxrpc-$(CONFIG_PROC_FS) += proc.o
 rxrpc-$(CONFIG_RXKAD) += rxkad.o
 rxrpc-$(CONFIG_SYSCTL) += sysctl.o
+
+obj-$(CONFIG_RXPERF) += rxperf.o
@@ -194,8 +194,8 @@ static int rxrpc_bind(struct socket *sock, struct sockaddr *saddr, int len)
 service_in_use:
	write_unlock(&local->services_lock);
-	rxrpc_unuse_local(local);
-	rxrpc_put_local(local);
+	rxrpc_unuse_local(local, rxrpc_local_unuse_bind);
+	rxrpc_put_local(local, rxrpc_local_put_bind);
	ret = -EADDRINUSE;
 error_unlock:
	release_sock(&rx->sk);
@@ -328,7 +328,7 @@ struct rxrpc_call *rxrpc_kernel_begin_call(struct socket *sock,
		mutex_unlock(&call->user_mutex);
	}

-	rxrpc_put_peer(cp.peer);
+	rxrpc_put_peer(cp.peer, rxrpc_peer_put_discard_tmp);
	_leave(" = %p", call);
	return call;
 }
@@ -359,9 +359,9 @@ void rxrpc_kernel_end_call(struct socket *sock, struct rxrpc_call *call)

	/* Make sure we're not going to call back into a kernel service */
	if (call->notify_rx) {
-		spin_lock_bh(&call->notify_lock);
+		spin_lock(&call->notify_lock);
		call->notify_rx = rxrpc_dummy_notify_rx;
-		spin_unlock_bh(&call->notify_lock);
+		spin_unlock(&call->notify_lock);
	}

	mutex_unlock(&call->user_mutex);
@@ -812,14 +812,12 @@ static int rxrpc_shutdown(struct socket *sock, int flags)

	lock_sock(sk);

-	spin_lock_bh(&sk->sk_receive_queue.lock);
	if (sk->sk_state < RXRPC_CLOSE) {
		sk->sk_state = RXRPC_CLOSE;
		sk->sk_shutdown = SHUTDOWN_MASK;
	} else {
		ret = -ESHUTDOWN;
	}
-	spin_unlock_bh(&sk->sk_receive_queue.lock);

	rxrpc_discard_prealloc(rx);
@@ -872,9 +870,7 @@ static int rxrpc_release_sock(struct sock *sk)
		break;
	}

-	spin_lock_bh(&sk->sk_receive_queue.lock);
	sk->sk_state = RXRPC_CLOSE;
-	spin_unlock_bh(&sk->sk_receive_queue.lock);

	if (rx->local && rcu_access_pointer(rx->local->service) == rx) {
		write_lock(&rx->local->services_lock);
@@ -888,8 +884,8 @@ static int rxrpc_release_sock(struct sock *sk)
	flush_workqueue(rxrpc_workqueue);
	rxrpc_purge_queue(&sk->sk_receive_queue);

-	rxrpc_unuse_local(rx->local);
-	rxrpc_put_local(rx->local);
+	rxrpc_unuse_local(rx->local, rxrpc_local_unuse_release_sock);
+	rxrpc_put_local(rx->local, rxrpc_local_put_release_sock);
	rx->local = NULL;
	key_put(rx->key);
	rx->key = NULL;
...
This diff is collapsed.
@@ -38,7 +38,6 @@ static int rxrpc_service_prealloc_one(struct rxrpc_sock *rx,
				      unsigned long user_call_ID, gfp_t gfp,
				      unsigned int debug_id)
 {
-	const void *here = __builtin_return_address(0);
	struct rxrpc_call *call, *xcall;
	struct rxrpc_net *rxnet = rxrpc_net(sock_net(&rx->sk));
	struct rb_node *parent, **pp;
@@ -70,7 +69,9 @@ static int rxrpc_service_prealloc_one(struct rxrpc_sock *rx,
	head = b->peer_backlog_head;
	tail = READ_ONCE(b->peer_backlog_tail);
	if (CIRC_CNT(head, tail, size) < max) {
-		struct rxrpc_peer *peer = rxrpc_alloc_peer(rx->local, gfp);
+		struct rxrpc_peer *peer;
+
+		peer = rxrpc_alloc_peer(rx->local, gfp, rxrpc_peer_new_prealloc);
		if (!peer)
			return -ENOMEM;
		b->peer_backlog[head] = peer;
@@ -89,9 +90,6 @@ static int rxrpc_service_prealloc_one(struct rxrpc_sock *rx,
		b->conn_backlog[head] = conn;
		smp_store_release(&b->conn_backlog_head,
				  (head + 1) & (size - 1));
-
-		trace_rxrpc_conn(conn->debug_id, rxrpc_conn_new_service,
-				 refcount_read(&conn->ref), here);
	}
	/* Now it gets complicated, because calls get registered with the
@@ -102,10 +100,10 @@ static int rxrpc_service_prealloc_one(struct rxrpc_sock *rx,
		return -ENOMEM;
	call->flags |= (1 << RXRPC_CALL_IS_SERVICE);
	call->state = RXRPC_CALL_SERVER_PREALLOC;
+	__set_bit(RXRPC_CALL_EV_INITIAL_PING, &call->events);

-	trace_rxrpc_call(call->debug_id, rxrpc_call_new_service,
-			 refcount_read(&call->ref),
-			 here, (const void *)user_call_ID);
+	trace_rxrpc_call(call->debug_id, refcount_read(&call->ref),
+			 user_call_ID, rxrpc_call_new_prealloc_service);

	write_lock(&rx->call_lock);
@@ -126,11 +124,11 @@ static int rxrpc_service_prealloc_one(struct rxrpc_sock *rx,
	call->user_call_ID = user_call_ID;
	call->notify_rx = notify_rx;
	if (user_attach_call) {
-		rxrpc_get_call(call, rxrpc_call_got_kernel);
+		rxrpc_get_call(call, rxrpc_call_get_kernel_service);
		user_attach_call(call, user_call_ID);
	}

-	rxrpc_get_call(call, rxrpc_call_got_userid);
+	rxrpc_get_call(call, rxrpc_call_get_userid);
	rb_link_node(&call->sock_node, parent, pp);
	rb_insert_color(&call->sock_node, &rx->calls);
	set_bit(RXRPC_CALL_HAS_USERID, &call->flags);
@@ -140,9 +138,9 @@ static int rxrpc_service_prealloc_one(struct rxrpc_sock *rx,
	write_unlock(&rx->call_lock);

	rxnet = call->rxnet;
-	spin_lock_bh(&rxnet->call_lock);
+	spin_lock(&rxnet->call_lock);
	list_add_tail_rcu(&call->link, &rxnet->calls);
-	spin_unlock_bh(&rxnet->call_lock);
+	spin_unlock(&rxnet->call_lock);

	b->call_backlog[call_head] = call;
	smp_store_release(&b->call_backlog_head, (call_head + 1) & (size - 1));
@@ -190,14 +188,14 @@ void rxrpc_discard_prealloc(struct rxrpc_sock *rx)
	/* Make sure that there aren't any incoming calls in progress before we
	 * clear the preallocation buffers.
	 */
-	spin_lock_bh(&rx->incoming_lock);
-	spin_unlock_bh(&rx->incoming_lock);
+	spin_lock(&rx->incoming_lock);
+	spin_unlock(&rx->incoming_lock);

	head = b->peer_backlog_head;
	tail = b->peer_backlog_tail;
	while (CIRC_CNT(head, tail, size) > 0) {
		struct rxrpc_peer *peer = b->peer_backlog[tail];
-		rxrpc_put_local(peer->local);
+		rxrpc_put_local(peer->local, rxrpc_local_put_prealloc_conn);
		kfree(peer);
		tail = (tail + 1) & (size - 1);
	}
@@ -230,28 +228,13 @@ void rxrpc_discard_prealloc(struct rxrpc_sock *rx)
		}
		rxrpc_call_completed(call);
		rxrpc_release_call(rx, call);
-		rxrpc_put_call(call, rxrpc_call_put);
+		rxrpc_put_call(call, rxrpc_call_put_discard_prealloc);
		tail = (tail + 1) & (size - 1);
	}

	kfree(b);
 }

-/*
- * Ping the other end to fill our RTT cache and to retrieve the rwind
- * and MTU parameters.
- */
-static void rxrpc_send_ping(struct rxrpc_call *call, struct sk_buff *skb)
-{
-	struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
-	ktime_t now = skb->tstamp;
-
-	if (call->peer->rtt_count < 3 ||
-	    ktime_before(ktime_add_ms(call->peer->rtt_last_req, 1000), now))
-		rxrpc_send_ACK(call, RXRPC_ACK_PING, sp->hdr.serial,
-			       rxrpc_propose_ack_ping_for_params);
-}
-
 /*
  * Allocate a new incoming call from the prealloc pool, along with a connection
  * and a peer as necessary.
@@ -261,6 +244,7 @@ static struct rxrpc_call *rxrpc_alloc_incoming_call(struct rxrpc_sock *rx,
						    struct rxrpc_peer *peer,
						    struct rxrpc_connection *conn,
						    const struct rxrpc_security *sec,
+						    struct sockaddr_rxrpc *peer_srx,
						    struct sk_buff *skb)
 {
	struct rxrpc_backlog *b = rx->backlog;
@@ -286,12 +270,11 @@ static struct rxrpc_call *rxrpc_alloc_incoming_call(struct rxrpc_sock *rx,
		return NULL;

	if (!conn) {
-		if (peer && !rxrpc_get_peer_maybe(peer))
+		if (peer && !rxrpc_get_peer_maybe(peer, rxrpc_peer_get_service_conn))
			peer = NULL;
		if (!peer) {
			peer = b->peer_backlog[peer_tail];
-			if (rxrpc_extract_addr_from_skb(&peer->srx, skb) < 0)
-				return NULL;
+			peer->srx = *peer_srx;
			b->peer_backlog[peer_tail] = NULL;
			smp_store_release(&b->peer_backlog_tail,
					  (peer_tail + 1) &
@@ -305,12 +288,13 @@ static struct rxrpc_call *rxrpc_alloc_incoming_call(struct rxrpc_sock *rx,
		b->conn_backlog[conn_tail] = NULL;
		smp_store_release(&b->conn_backlog_tail,
				  (conn_tail + 1) & (RXRPC_BACKLOG_MAX - 1));
-		conn->params.local = rxrpc_get_local(local);
-		conn->params.peer = peer;
-		rxrpc_see_connection(conn);
+		conn->local = rxrpc_get_local(local, rxrpc_local_get_prealloc_conn);
+		conn->peer = peer;
+		rxrpc_see_connection(conn, rxrpc_conn_see_new_service_conn);
		rxrpc_new_incoming_connection(rx, conn, sec, skb);
	} else {
-		rxrpc_get_connection(conn);
+		rxrpc_get_connection(conn, rxrpc_conn_get_service_conn);
+		atomic_inc(&conn->active);
	}

	/* And now we can allocate and set up a new call */
@@ -319,43 +303,69 @@ static struct rxrpc_call *rxrpc_alloc_incoming_call(struct rxrpc_sock *rx,
	smp_store_release(&b->call_backlog_tail,
			  (call_tail + 1) & (RXRPC_BACKLOG_MAX - 1));

-	rxrpc_see_call(call);
+	rxrpc_see_call(call, rxrpc_call_see_accept);
+	call->local = rxrpc_get_local(conn->local, rxrpc_local_get_call);
	call->conn = conn;
	call->security = conn->security;
	call->security_ix = conn->security_ix;
-	call->peer = rxrpc_get_peer(conn->params.peer);
+	call->peer = rxrpc_get_peer(conn->peer, rxrpc_peer_get_accept);
+	call->dest_srx = peer->srx;
	call->cong_ssthresh = call->peer->cong_ssthresh;
	call->tx_last_sent = ktime_get_real();
	return call;
 }
 /*
- * Set up a new incoming call.  Called in BH context with the RCU read lock
- * held.
+ * Set up a new incoming call.  Called from the I/O thread.
  *
  * If this is for a kernel service, when we allocate the call, it will have
  * three refs on it: (1) the kernel service, (2) the user_call_ID tree, (3) the
  * retainer ref obtained from the backlog buffer.  Prealloc calls for userspace
- * services only have the ref from the backlog buffer.  We want to pass this
- * ref to non-BH context to dispose of.
+ * services only have the ref from the backlog buffer.
  *
  * If we want to report an error, we mark the skb with the packet type and
- * abort code and return NULL.
- *
- * The call is returned with the user access mutex held.
  */
-struct rxrpc_call *rxrpc_new_incoming_call(struct rxrpc_local *local,
-					   struct rxrpc_sock *rx,
-					   struct sk_buff *skb)
+bool rxrpc_new_incoming_call(struct rxrpc_local *local,
+			     struct rxrpc_peer *peer,
+			     struct rxrpc_connection *conn,
+			     struct sockaddr_rxrpc *peer_srx,
+			     struct sk_buff *skb)
 {
-	struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
	const struct rxrpc_security *sec = NULL;
-	struct rxrpc_connection *conn;
-	struct rxrpc_peer *peer = NULL;
+	struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
	struct rxrpc_call *call = NULL;
+	struct rxrpc_sock *rx;

	_enter("");

+	/* Don't set up a call for anything other than the first DATA packet. */
+	if (sp->hdr.seq != 1 ||
+	    sp->hdr.type != RXRPC_PACKET_TYPE_DATA)
+		return true; /* Just discard */
+
+	rcu_read_lock();
+
+	/* Weed out packets to services we're not offering.  Packets that would
+	 * begin a call are explicitly rejected and the rest are just
+	 * discarded.
+	 */
+	rx = rcu_dereference(local->service);
+	if (!rx || (sp->hdr.serviceId != rx->srx.srx_service &&
+		    sp->hdr.serviceId != rx->second_service)
+	    ) {
+		if (sp->hdr.type == RXRPC_PACKET_TYPE_DATA &&
+		    sp->hdr.seq == 1)
+			goto unsupported_service;
+		goto discard;
+	}
+
+	if (!conn) {
+		sec = rxrpc_get_incoming_security(rx, skb);
+		if (!sec)
+			goto reject;
+	}
+
	spin_lock(&rx->incoming_lock);
	if (rx->sk.sk_state == RXRPC_SERVER_LISTEN_DISABLED ||
	    rx->sk.sk_state == RXRPC_CLOSE) {
@@ -366,20 +376,8 @@ struct rxrpc_call *rxrpc_new_incoming_call(struct rxrpc_local *local,
		goto no_call;
	}

-	/* The peer, connection and call may all have sprung into existence due
-	 * to a duplicate packet being handled on another CPU in parallel, so
-	 * we have to recheck the routing.  However, we're now holding
-	 * rx->incoming_lock, so the values should remain stable.
-	 */
-	conn = rxrpc_find_connection_rcu(local, skb, &peer);
-	if (!conn) {
-		sec = rxrpc_get_incoming_security(rx, skb);
-		if (!sec)
-			goto no_call;
-	}
-
-	call = rxrpc_alloc_incoming_call(rx, local, peer, conn, sec, skb);
+	call = rxrpc_alloc_incoming_call(rx, local, peer, conn, sec, peer_srx,
+					 skb);
	if (!call) {
		skb->mark = RXRPC_SKB_MARK_REJECT_BUSY;
		goto no_call;
@@ -396,50 +394,41 @@ struct rxrpc_call *rxrpc_new_incoming_call(struct rxrpc_local *local,
	rx->notify_new_call(&rx->sk, call, call->user_call_ID);

	spin_lock(&conn->state_lock);
-	switch (conn->state) {
-	case RXRPC_CONN_SERVICE_UNSECURED:
+	if (conn->state == RXRPC_CONN_SERVICE_UNSECURED) {
		conn->state = RXRPC_CONN_SERVICE_CHALLENGING;
		set_bit(RXRPC_CONN_EV_CHALLENGE, &call->conn->events);
-		rxrpc_queue_conn(call->conn);
-		break;
-
-	case RXRPC_CONN_SERVICE:
-		write_lock(&call->state_lock);
-		if (call->state < RXRPC_CALL_COMPLETE)
-			call->state = RXRPC_CALL_SERVER_RECV_REQUEST;
-		write_unlock(&call->state_lock);
-		break;
-
-	case RXRPC_CONN_REMOTELY_ABORTED:
-		rxrpc_set_call_completion(call, RXRPC_CALL_REMOTELY_ABORTED,
-					  conn->abort_code, conn->error);
-		break;
-	case RXRPC_CONN_LOCALLY_ABORTED:
-		rxrpc_abort_call("CON", call, sp->hdr.seq,
-				 conn->abort_code, conn->error);
-		break;
-	default:
-		BUG();
+		rxrpc_queue_conn(call->conn, rxrpc_conn_queue_challenge);
	}
	spin_unlock(&conn->state_lock);
-	spin_unlock(&rx->incoming_lock);
-
-	rxrpc_send_ping(call, skb);
+
+	spin_unlock(&rx->incoming_lock);
+	rcu_read_unlock();

-	/* We have to discard the prealloc queue's ref here and rely on a
-	 * combination of the RCU read lock and refs held either by the socket
-	 * (recvmsg queue, to-be-accepted queue or user ID tree) or the kernel
-	 * service to prevent the call from being deallocated too early.
-	 */
-	rxrpc_put_call(call, rxrpc_call_put);
+	if (hlist_unhashed(&call->error_link)) {
+		spin_lock(&call->peer->lock);
+		hlist_add_head(&call->error_link, &call->peer->error_targets);
+		spin_unlock(&call->peer->lock);
+	}

	_leave(" = %p{%d}", call, call->debug_id);
-	return call;
+	rxrpc_input_call_event(call, skb);
+	rxrpc_put_call(call, rxrpc_call_put_input);
+	return true;
+
+unsupported_service:
+	trace_rxrpc_abort(0, "INV", sp->hdr.cid, sp->hdr.callNumber, sp->hdr.seq,
+			  RX_INVALID_OPERATION, EOPNOTSUPP);
+	skb->priority = RX_INVALID_OPERATION;
+	goto reject;

 no_call:
	spin_unlock(&rx->incoming_lock);
-	_leave(" = NULL [%u]", skb->mark);
-	return NULL;
+reject:
+	rcu_read_unlock();
+	_leave(" = f [%u]", skb->mark);
+	return false;
+
+discard:
+	rcu_read_unlock();
+	return true;
 }

 /*
...
@@ -69,21 +69,15 @@ void rxrpc_propose_delay_ACK(struct rxrpc_call *call, rxrpc_serial_t serial,
 void rxrpc_send_ACK(struct rxrpc_call *call, u8 ack_reason,
		    rxrpc_serial_t serial, enum rxrpc_propose_ack_trace why)
 {
-	struct rxrpc_local *local = call->conn->params.local;
	struct rxrpc_txbuf *txb;

	if (test_bit(RXRPC_CALL_DISCONNECTED, &call->flags))
		return;

-	if (ack_reason == RXRPC_ACK_DELAY &&
-	    test_and_set_bit(RXRPC_CALL_DELAY_ACK_PENDING, &call->flags)) {
-		trace_rxrpc_drop_ack(call, why, ack_reason, serial, false);
-		return;
-	}
-
	rxrpc_inc_stat(call->rxnet, stat_tx_acks[ack_reason]);

	txb = rxrpc_alloc_txbuf(call, RXRPC_PACKET_TYPE_ACK,
-				in_softirq() ? GFP_ATOMIC | __GFP_NOWARN : GFP_NOFS);
+				rcu_read_lock_held() ? GFP_ATOMIC | __GFP_NOWARN : GFP_NOFS);
	if (!txb) {
		kleave(" = -ENOMEM");
		return;
@@ -101,22 +95,9 @@ void rxrpc_send_ACK(struct rxrpc_call *call, u8 ack_reason,
	txb->ack.reason = ack_reason;
	txb->ack.nAcks = 0;

-	if (!rxrpc_try_get_call(call, rxrpc_call_got)) {
-		rxrpc_put_txbuf(txb, rxrpc_txbuf_put_nomem);
-		return;
-	}
-
-	spin_lock_bh(&local->ack_tx_lock);
-	list_add_tail(&txb->tx_link, &local->ack_tx_queue);
-	spin_unlock_bh(&local->ack_tx_lock);
	trace_rxrpc_send_ack(call, why, ack_reason, serial);
-
-	if (in_task()) {
-		rxrpc_transmit_ack_packets(call->peer->local);
-	} else {
-		rxrpc_get_local(local);
-		rxrpc_queue_local(local);
-	}
+	rxrpc_send_ack_packet(call, txb);
+	rxrpc_put_txbuf(txb, rxrpc_txbuf_put_ack_tx);
 }

 /*
...@@ -130,11 +111,10 @@ static void rxrpc_congestion_timeout(struct rxrpc_call *call) ...@@ -130,11 +111,10 @@ static void rxrpc_congestion_timeout(struct rxrpc_call *call)
/* /*
* Perform retransmission of NAK'd and unack'd packets. * Perform retransmission of NAK'd and unack'd packets.
*/ */
static void rxrpc_resend(struct rxrpc_call *call, unsigned long now_j) void rxrpc_resend(struct rxrpc_call *call, struct sk_buff *ack_skb)
{ {
struct rxrpc_ackpacket *ack = NULL; struct rxrpc_ackpacket *ack = NULL;
struct rxrpc_txbuf *txb; struct rxrpc_txbuf *txb;
struct sk_buff *ack_skb = NULL;
unsigned long resend_at; unsigned long resend_at;
rxrpc_seq_t transmitted = READ_ONCE(call->tx_transmitted); rxrpc_seq_t transmitted = READ_ONCE(call->tx_transmitted);
ktime_t now, max_age, oldest, ack_ts; ktime_t now, max_age, oldest, ack_ts;
...@@ -148,32 +128,21 @@ static void rxrpc_resend(struct rxrpc_call *call, unsigned long now_j) ...@@ -148,32 +128,21 @@ static void rxrpc_resend(struct rxrpc_call *call, unsigned long now_j)
max_age = ktime_sub_us(now, jiffies_to_usecs(call->peer->rto_j)); max_age = ktime_sub_us(now, jiffies_to_usecs(call->peer->rto_j));
oldest = now; oldest = now;
/* See if there's an ACK saved with a soft-ACK table in it. */
if (call->acks_soft_tbl) {
spin_lock_bh(&call->acks_ack_lock);
ack_skb = call->acks_soft_tbl;
if (ack_skb) {
rxrpc_get_skb(ack_skb, rxrpc_skb_ack);
ack = (void *)ack_skb->data + sizeof(struct rxrpc_wire_header);
}
spin_unlock_bh(&call->acks_ack_lock);
}
if (list_empty(&call->tx_buffer)) if (list_empty(&call->tx_buffer))
goto no_resend; goto no_resend;
spin_lock(&call->tx_lock);
if (list_empty(&call->tx_buffer)) if (list_empty(&call->tx_buffer))
goto no_further_resend; goto no_further_resend;
trace_rxrpc_resend(call); trace_rxrpc_resend(call, ack_skb);
txb = list_first_entry(&call->tx_buffer, struct rxrpc_txbuf, call_link); txb = list_first_entry(&call->tx_buffer, struct rxrpc_txbuf, call_link);
/* Scan the soft ACK table without dropping the lock and resend any /* Scan the soft ACK table without dropping the lock and resend any
* explicitly NAK'd packets. * explicitly NAK'd packets.
*/ */
-	if (ack) {
+	if (ack_skb) {
+		ack = (void *)ack_skb->data + sizeof(struct rxrpc_wire_header);
 		for (i = 0; i < ack->nAcks; i++) {
 			rxrpc_seq_t seq;

@@ -197,8 +166,6 @@ static void rxrpc_resend(struct rxrpc_call *call, unsigned long now_j)
 			rxrpc_see_txbuf(txb, rxrpc_txbuf_see_unacked);
 			if (list_empty(&txb->tx_link)) {
-				rxrpc_get_txbuf(txb, rxrpc_txbuf_get_retrans);
-				rxrpc_get_call(call, rxrpc_call_got_tx);
 				list_add_tail(&txb->tx_link, &retrans_queue);
 				set_bit(RXRPC_TXBUF_RESENT, &txb->flags);
 			}
@@ -242,7 +209,6 @@ static void rxrpc_resend(struct rxrpc_call *call, unsigned long now_j)
 	do_resend:
 		unacked = true;
 		if (list_empty(&txb->tx_link)) {
-			rxrpc_get_txbuf(txb, rxrpc_txbuf_get_retrans);
 			list_add_tail(&txb->tx_link, &retrans_queue);
 			set_bit(RXRPC_TXBUF_RESENT, &txb->flags);
 			rxrpc_inc_stat(call->rxnet, stat_tx_data_retrans);
@@ -250,10 +216,7 @@ static void rxrpc_resend(struct rxrpc_call *call, unsigned long now_j)
 	}

 no_further_resend:
-	spin_unlock(&call->tx_lock);
 no_resend:
-	rxrpc_free_skb(ack_skb, rxrpc_skb_freed);
 	resend_at = nsecs_to_jiffies(ktime_to_ns(ktime_sub(now, oldest)));
 	resend_at += jiffies + rxrpc_get_rto_backoff(call->peer,
 						     !list_empty(&retrans_queue));
@@ -267,7 +230,7 @@ static void rxrpc_resend(struct rxrpc_call *call, unsigned long now_j)
 	 * retransmitting data.
 	 */
 	if (list_empty(&retrans_queue)) {
-		rxrpc_reduce_call_timer(call, resend_at, now_j,
+		rxrpc_reduce_call_timer(call, resend_at, jiffies,
 					rxrpc_timer_set_for_resend);
 		ack_ts = ktime_sub(now, call->acks_latest_ts);
 		if (ktime_to_us(ack_ts) < (call->peer->srtt_us >> 3))
@@ -277,76 +240,134 @@ static void rxrpc_resend(struct rxrpc_call *call, unsigned long now_j)
 		goto out;
 	}

+	/* Retransmit the queue */
 	while ((txb = list_first_entry_or_null(&retrans_queue,
 					       struct rxrpc_txbuf, tx_link))) {
 		list_del_init(&txb->tx_link);
-		rxrpc_send_data_packet(call, txb);
-		rxrpc_put_txbuf(txb, rxrpc_txbuf_put_trans);
-
-		trace_rxrpc_retransmit(call, txb->seq,
-				       ktime_to_ns(ktime_sub(txb->last_sent,
-							     max_age)));
+		rxrpc_transmit_one(call, txb);
 	}

 out:
 	_leave("");
 }
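The retransmit loop above drains `retrans_queue` with `list_first_entry_or_null()` and `list_del_init()`, so each node comes off the front and is left self-linked (which is what makes the later `list_empty(&txb->tx_link)` checks work). A user-space sketch of the same drain idiom, using a minimal intrusive list in place of the kernel's `list.h` (the `txbuf`/`drain` names are mine, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>

/* Minimal intrusive doubly-linked list, after the kernel's list.h. */
struct list_head { struct list_head *next, *prev; };

static void list_init(struct list_head *h) { h->next = h->prev = h; }
static int  list_empty(const struct list_head *h) { return h->next == h; }
static void list_add_tail(struct list_head *n, struct list_head *h)
{
        n->prev = h->prev; n->next = h;
        h->prev->next = n; h->prev = n;
}
static void list_del_init(struct list_head *n)
{
        n->prev->next = n->next; n->next->prev = n->prev;
        list_init(n);
}
#define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

/* Stand-in for struct rxrpc_txbuf. */
struct txbuf { int seq; struct list_head tx_link; };

/* Drain the queue front-to-back, like the retransmit loop does. */
static int drain(struct list_head *queue, int *out, int max)
{
        int n = 0;
        while (!list_empty(queue) && n < max) {
                struct txbuf *txb = container_of(queue->next, struct txbuf, tx_link);
                list_del_init(&txb->tx_link);   /* node is left self-linked */
                out[n++] = txb->seq;            /* "transmit" it */
        }
        return n;
}
```

Because `list_del_init()` self-links the node rather than poisoning it, re-queueing the same txbuf later is safe and cheap to test for.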
+static bool rxrpc_tx_window_has_space(struct rxrpc_call *call)
+{
+	unsigned int winsize = min_t(unsigned int, call->tx_winsize,
+				     call->cong_cwnd + call->cong_extra);
+	rxrpc_seq_t window = call->acks_hard_ack, wtop = window + winsize;
+	rxrpc_seq_t tx_top = call->tx_top;
+	int space;
+
+	space = wtop - tx_top;
+	return space > 0;
+}
+
+/*
+ * Decant some if the sendmsg prepared queue into the transmission buffer.
+ */
+static void rxrpc_decant_prepared_tx(struct rxrpc_call *call)
+{
+	struct rxrpc_txbuf *txb;
+
+	if (rxrpc_is_client_call(call) &&
+	    !test_bit(RXRPC_CALL_EXPOSED, &call->flags))
+		rxrpc_expose_client_call(call);
+
+	while ((txb = list_first_entry_or_null(&call->tx_sendmsg,
+					       struct rxrpc_txbuf, call_link))) {
+		spin_lock(&call->tx_lock);
+		list_del(&txb->call_link);
+		spin_unlock(&call->tx_lock);
+
+		call->tx_top = txb->seq;
+		list_add_tail(&txb->call_link, &call->tx_buffer);
+
+		rxrpc_transmit_one(call, txb);
+
+		if (!rxrpc_tx_window_has_space(call))
+			break;
+	}
+}
+
+static void rxrpc_transmit_some_data(struct rxrpc_call *call)
+{
+	switch (call->state) {
+	case RXRPC_CALL_SERVER_ACK_REQUEST:
+		if (list_empty(&call->tx_sendmsg))
+			return;
+		fallthrough;
+
+	case RXRPC_CALL_SERVER_SEND_REPLY:
+	case RXRPC_CALL_SERVER_AWAIT_ACK:
+	case RXRPC_CALL_CLIENT_SEND_REQUEST:
+	case RXRPC_CALL_CLIENT_AWAIT_REPLY:
+		if (!rxrpc_tx_window_has_space(call))
+			return;
+		if (list_empty(&call->tx_sendmsg)) {
+			rxrpc_inc_stat(call->rxnet, stat_tx_data_underflow);
+			return;
+		}
+		rxrpc_decant_prepared_tx(call);
+		break;
+	default:
+		return;
+	}
+}
+
+/*
+ * Ping the other end to fill our RTT cache and to retrieve the rwind
+ * and MTU parameters.
+ */
+static void rxrpc_send_initial_ping(struct rxrpc_call *call)
+{
+	if (call->peer->rtt_count < 3 ||
+	    ktime_before(ktime_add_ms(call->peer->rtt_last_req, 1000),
+			 ktime_get_real()))
+		rxrpc_send_ACK(call, RXRPC_ACK_PING, 0,
+			       rxrpc_propose_ack_ping_for_params);
+}
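`rxrpc_tx_window_has_space()` computes `wtop - tx_top` in unsigned 32-bit sequence arithmetic and then interprets the result as a signed `int`, which keeps the check correct when `rxrpc_seq_t` wraps around zero. A standalone sketch of that check (parameter names are mine; this is not the kernel function itself):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t seq_t;         /* like rxrpc_seq_t: wraps modulo 2^32 */

static unsigned int min_uint(unsigned int a, unsigned int b)
{
        return a < b ? a : b;
}

/* Mirror of the window check: the window base is the hard-ACKed point,
 * the window size is capped by the congestion window, and the subtraction
 * is done in unsigned arithmetic then read as signed so that wrap-around
 * of the sequence space is handled transparently.
 */
static bool tx_window_has_space(seq_t acks_hard_ack, seq_t tx_top,
                                unsigned int tx_winsize,
                                unsigned int cong_cwnd, unsigned int cong_extra)
{
        unsigned int winsize = min_uint(tx_winsize, cong_cwnd + cong_extra);
        seq_t wtop = acks_hard_ack + winsize;   /* may wrap: that's fine */
        int space = (int)(wtop - tx_top);

        return space > 0;
}
```

The interesting case is a window that straddles the 2^32 boundary: `wtop` is numerically smaller than `tx_top`, yet the signed difference still reports the true amount of space.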
 /*
  * Handle retransmission and deferred ACK/abort generation.
  */
-void rxrpc_process_call(struct work_struct *work)
+void rxrpc_input_call_event(struct rxrpc_call *call, struct sk_buff *skb)
 {
-	struct rxrpc_call *call =
-		container_of(work, struct rxrpc_call, processor);
 	unsigned long now, next, t;
-	unsigned int iterations = 0;
 	rxrpc_serial_t ackr_serial;
+	bool resend = false, expired = false;

-	rxrpc_see_call(call);
+	rxrpc_see_call(call, rxrpc_call_see_input);

 	//printk("\n--------------------\n");
 	_enter("{%d,%s,%lx}",
 	       call->debug_id, rxrpc_call_states[call->state], call->events);

-recheck_state:
-	/* Limit the number of times we do this before returning to the manager */
-	iterations++;
-	if (iterations > 5)
-		goto requeue;
-
-	if (test_and_clear_bit(RXRPC_CALL_EV_ABORT, &call->events)) {
-		rxrpc_send_abort_packet(call);
-		goto recheck_state;
-	}
-
-	if (READ_ONCE(call->acks_hard_ack) != call->tx_bottom)
-		rxrpc_shrink_call_tx_buffer(call);
-
-	if (call->state == RXRPC_CALL_COMPLETE) {
-		rxrpc_delete_call_timer(call);
-		goto out_put;
-	}
+	if (call->state == RXRPC_CALL_COMPLETE)
+		goto out;
+
+	if (skb && skb->mark == RXRPC_SKB_MARK_ERROR)
+		goto out;

-	/* Work out if any timeouts tripped */
+	/* If we see our async-event poke, check for timeout trippage. */
 	now = jiffies;
 	t = READ_ONCE(call->expect_rx_by);
 	if (time_after_eq(now, t)) {
 		trace_rxrpc_timer(call, rxrpc_timer_exp_normal, now);
-		set_bit(RXRPC_CALL_EV_EXPIRED, &call->events);
+		expired = true;
 	}

 	t = READ_ONCE(call->expect_req_by);
 	if (call->state == RXRPC_CALL_SERVER_RECV_REQUEST &&
 	    time_after_eq(now, t)) {
 		trace_rxrpc_timer(call, rxrpc_timer_exp_idle, now);
-		set_bit(RXRPC_CALL_EV_EXPIRED, &call->events);
+		expired = true;
 	}

 	t = READ_ONCE(call->expect_term_by);
 	if (time_after_eq(now, t)) {
 		trace_rxrpc_timer(call, rxrpc_timer_exp_hard, now);
-		set_bit(RXRPC_CALL_EV_EXPIRED, &call->events);
+		expired = true;
 	}

 	t = READ_ONCE(call->delay_ack_at);
@@ -385,11 +406,26 @@ void rxrpc_process_call(struct work_struct *work)
 	if (time_after_eq(now, t)) {
 		trace_rxrpc_timer(call, rxrpc_timer_exp_resend, now);
 		cmpxchg(&call->resend_at, t, now + MAX_JIFFY_OFFSET);
-		set_bit(RXRPC_CALL_EV_RESEND, &call->events);
+		resend = true;
 	}

+	if (skb)
+		rxrpc_input_call_packet(call, skb);
+
+	rxrpc_transmit_some_data(call);
+
+	if (skb) {
+		struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
+
+		if (sp->hdr.type == RXRPC_PACKET_TYPE_ACK)
+			rxrpc_congestion_degrade(call);
+	}
+
+	if (test_and_clear_bit(RXRPC_CALL_EV_INITIAL_PING, &call->events))
+		rxrpc_send_initial_ping(call);
+
 	/* Process events */
-	if (test_and_clear_bit(RXRPC_CALL_EV_EXPIRED, &call->events)) {
+	if (expired) {
 		if (test_bit(RXRPC_CALL_RX_HEARD, &call->flags) &&
 		    (int)call->conn->hi_serial - (int)call->rx_serial > 0) {
 			trace_rxrpc_call_reset(call);
@@ -397,52 +433,50 @@ void rxrpc_process_call(struct work_struct *work)
 		} else {
 			rxrpc_abort_call("EXP", call, 0, RX_CALL_TIMEOUT, -ETIME);
 		}
-		set_bit(RXRPC_CALL_EV_ABORT, &call->events);
-		goto recheck_state;
+		rxrpc_send_abort_packet(call);
+		goto out;
 	}

-	if (test_and_clear_bit(RXRPC_CALL_EV_ACK_LOST, &call->events)) {
-		call->acks_lost_top = call->tx_top;
+	if (test_and_clear_bit(RXRPC_CALL_EV_ACK_LOST, &call->events))
 		rxrpc_send_ACK(call, RXRPC_ACK_PING, 0,
 			       rxrpc_propose_ack_ping_for_lost_ack);
-	}

-	if (test_and_clear_bit(RXRPC_CALL_EV_RESEND, &call->events) &&
-	    call->state != RXRPC_CALL_CLIENT_RECV_REPLY) {
-		rxrpc_resend(call, now);
-		goto recheck_state;
-	}
+	if (resend && call->state != RXRPC_CALL_CLIENT_RECV_REPLY)
+		rxrpc_resend(call, NULL);
+
+	if (test_and_clear_bit(RXRPC_CALL_RX_IS_IDLE, &call->flags))
+		rxrpc_send_ACK(call, RXRPC_ACK_IDLE, 0,
+			       rxrpc_propose_ack_rx_idle);
+
+	if (atomic_read(&call->ackr_nr_unacked) > 2)
+		rxrpc_send_ACK(call, RXRPC_ACK_IDLE, 0,
+			       rxrpc_propose_ack_input_data);

 	/* Make sure the timer is restarted */
-	next = call->expect_rx_by;
+	if (call->state != RXRPC_CALL_COMPLETE) {
+		next = call->expect_rx_by;

 #define set(T) { t = READ_ONCE(T); if (time_before(t, next)) next = t; }

 	set(call->expect_req_by);
 	set(call->expect_term_by);
 	set(call->delay_ack_at);
 	set(call->ack_lost_at);
 	set(call->resend_at);
 	set(call->keepalive_at);
 	set(call->ping_at);

-	now = jiffies;
-	if (time_after_eq(now, next))
-		goto recheck_state;
-
-	rxrpc_reduce_call_timer(call, next, now, rxrpc_timer_restart);
+		now = jiffies;
+		if (time_after_eq(now, next))
+			rxrpc_poke_call(call, rxrpc_call_poke_timer_now);

-	/* other events may have been raised since we started checking */
-	if (call->events && call->state < RXRPC_CALL_COMPLETE)
-		goto requeue;
+		rxrpc_reduce_call_timer(call, next, now, rxrpc_timer_restart);
+	}

-out_put:
-	rxrpc_put_call(call, rxrpc_call_put);
-
 out:
+	if (call->state == RXRPC_CALL_COMPLETE)
+		del_timer_sync(&call->timer);
+	if (call->acks_hard_ack != call->tx_bottom)
+		rxrpc_shrink_call_tx_buffer(call);
 	_leave("");
-	return;
-
-requeue:
-	__rxrpc_queue_call(call);
-	goto out;
 }
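The timer-restart block above folds all the per-call deadlines down to the earliest one with the `set(T)` macro, which relies on `time_before()` being wrap-safe: jiffies comparisons are done by subtracting and testing the sign, not by comparing absolute values. A user-space sketch of that folding (the `earliest` helper is mine, not a kernel function):

```c
#include <assert.h>
#include <stdbool.h>

/* Wrap-safe jiffies comparison, as in include/linux/jiffies.h:
 * a is "before" b if the signed difference is negative, which stays
 * correct even after the counter wraps.
 */
static bool time_before(unsigned long a, unsigned long b)
{
        return (long)(a - b) < 0;
}

/* Reduce "next" to the earliest of a set of deadlines, the way the
 * set(T) macro in the event handler does for each timer field.
 */
static unsigned long earliest(const unsigned long *t, int n, unsigned long next)
{
        for (int i = 0; i < n; i++)
                if (time_before(t[i], next))
                        next = t[i];
        return next;
}
```

Near a wrap, a deadline just below `ULONG_MAX` is numerically huge but still compares as "before" a small post-wrap value, so the earliest deadline is chosen correctly.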
@@ -45,6 +45,24 @@ static struct semaphore rxrpc_call_limiter =
 static struct semaphore rxrpc_kernel_call_limiter =
 	__SEMAPHORE_INITIALIZER(rxrpc_kernel_call_limiter, 1000);

+void rxrpc_poke_call(struct rxrpc_call *call, enum rxrpc_call_poke_trace what)
+{
+	struct rxrpc_local *local = call->local;
+	bool busy;
+
+	if (call->state < RXRPC_CALL_COMPLETE) {
+		spin_lock_bh(&local->lock);
+		busy = !list_empty(&call->attend_link);
+		trace_rxrpc_poke_call(call, busy, what);
+		if (!busy) {
+			rxrpc_get_call(call, rxrpc_call_get_poke);
+			list_add_tail(&call->attend_link, &local->call_attend_q);
+		}
+		spin_unlock_bh(&local->lock);
+		rxrpc_wake_up_io_thread(local);
+	}
+}
+
 static void rxrpc_call_timer_expired(struct timer_list *t)
 {
 	struct rxrpc_call *call = from_timer(call, t, timer);
@@ -53,9 +71,7 @@ static void rxrpc_call_timer_expired(struct timer_list *t)
 	if (call->state < RXRPC_CALL_COMPLETE) {
 		trace_rxrpc_timer_expired(call, jiffies);
-		__rxrpc_queue_call(call);
-	} else {
-		rxrpc_put_call(call, rxrpc_call_put);
+		rxrpc_poke_call(call, rxrpc_call_poke_timer);
 	}
 }

@@ -64,21 +80,14 @@ void rxrpc_reduce_call_timer(struct rxrpc_call *call,
 			     unsigned long now,
 			     enum rxrpc_timer_trace why)
 {
-	if (rxrpc_try_get_call(call, rxrpc_call_got_timer)) {
-		trace_rxrpc_timer(call, why, now);
-		if (timer_reduce(&call->timer, expire_at))
-			rxrpc_put_call(call, rxrpc_call_put_notimer);
-	}
-}
-
-void rxrpc_delete_call_timer(struct rxrpc_call *call)
-{
-	if (del_timer_sync(&call->timer))
-		rxrpc_put_call(call, rxrpc_call_put_timer);
+	trace_rxrpc_timer(call, why, now);
+	timer_reduce(&call->timer, expire_at);
 }
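`rxrpc_poke_call()` above only takes a reference and queues the call if `attend_link` is not already on the local endpoint's attend queue, so any number of concurrent pokes coalesce into one pending wakeup holding exactly one ref. A toy user-space model of that discipline (all names here are mine; a `bool` flag stands in for list membership and no locking is modelled):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the poke pattern: a call is queued for the I/O thread at
 * most once, and the queue holds exactly one reference while it is queued.
 * "queued" stands in for !list_empty(&call->attend_link).
 */
struct toy_call {
        int  refs;
        bool queued;
};

static void toy_poke(struct toy_call *call)
{
        /* in the kernel this runs under local->lock */
        if (!call->queued) {
                call->refs++;           /* ref handed to the attend queue */
                call->queued = true;
        }
        /* then: wake the I/O thread */
}

static void toy_attend(struct toy_call *call)
{
        /* the I/O thread dequeues, processes, then drops the queue's ref */
        call->queued = false;
        call->refs--;
}
```

The invariant to notice: however many times `toy_poke()` runs before the I/O thread gets around to the call, the refcount only moves by one, which is what lets the real code avoid the messy ref juggling the old timer/workqueue path needed.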
 static struct lock_class_key rxrpc_call_user_mutex_lock_class_key;

+static void rxrpc_destroy_call(struct work_struct *);
+
 /*
  * find an extant server call
  * - called in process context with IRQs enabled
@@ -110,7 +119,7 @@ struct rxrpc_call *rxrpc_find_call_by_user_ID(struct rxrpc_sock *rx,
 	return NULL;

 found_extant_call:
-	rxrpc_get_call(call, rxrpc_call_got);
+	rxrpc_get_call(call, rxrpc_call_get_sendmsg);
 	read_unlock(&rx->call_lock);
 	_leave(" = %p [%d]", call, refcount_read(&call->ref));
 	return call;
@@ -139,20 +148,20 @@ struct rxrpc_call *rxrpc_alloc_call(struct rxrpc_sock *rx, gfp_t gfp,
 		      &rxrpc_call_user_mutex_lock_class_key);

 	timer_setup(&call->timer, rxrpc_call_timer_expired, 0);
-	INIT_WORK(&call->processor, &rxrpc_process_call);
+	INIT_WORK(&call->destroyer, rxrpc_destroy_call);
 	INIT_LIST_HEAD(&call->link);
 	INIT_LIST_HEAD(&call->chan_wait_link);
 	INIT_LIST_HEAD(&call->accept_link);
 	INIT_LIST_HEAD(&call->recvmsg_link);
 	INIT_LIST_HEAD(&call->sock_link);
+	INIT_LIST_HEAD(&call->attend_link);
+	INIT_LIST_HEAD(&call->tx_sendmsg);
 	INIT_LIST_HEAD(&call->tx_buffer);
 	skb_queue_head_init(&call->recvmsg_queue);
 	skb_queue_head_init(&call->rx_oos_queue);
 	init_waitqueue_head(&call->waitq);
 	spin_lock_init(&call->notify_lock);
 	spin_lock_init(&call->tx_lock);
-	spin_lock_init(&call->input_lock);
-	spin_lock_init(&call->acks_ack_lock);
 	rwlock_init(&call->state_lock);
 	refcount_set(&call->ref, 1);
 	call->debug_id = debug_id;
@@ -185,22 +194,45 @@ struct rxrpc_call *rxrpc_alloc_call(struct rxrpc_sock *rx, gfp_t gfp,
  */
 static struct rxrpc_call *rxrpc_alloc_client_call(struct rxrpc_sock *rx,
 						  struct sockaddr_rxrpc *srx,
+						  struct rxrpc_conn_parameters *cp,
+						  struct rxrpc_call_params *p,
 						  gfp_t gfp,
 						  unsigned int debug_id)
 {
 	struct rxrpc_call *call;
 	ktime_t now;
+	int ret;

 	_enter("");

 	call = rxrpc_alloc_call(rx, gfp, debug_id);
 	if (!call)
 		return ERR_PTR(-ENOMEM);
-	call->state = RXRPC_CALL_CLIENT_AWAIT_CONN;
-	call->service_id = srx->srx_service;
 	now = ktime_get_real();
 	call->acks_latest_ts = now;
 	call->cong_tstamp = now;
+	call->state = RXRPC_CALL_CLIENT_AWAIT_CONN;
+	call->dest_srx = *srx;
+	call->interruptibility = p->interruptibility;
+	call->tx_total_len = p->tx_total_len;
+	call->key = key_get(cp->key);
+	call->local = rxrpc_get_local(cp->local, rxrpc_local_get_call);
+	if (p->kernel)
+		__set_bit(RXRPC_CALL_KERNEL, &call->flags);
+	if (cp->upgrade)
+		__set_bit(RXRPC_CALL_UPGRADE, &call->flags);
+	if (cp->exclusive)
+		__set_bit(RXRPC_CALL_EXCLUSIVE, &call->flags);
+
+	ret = rxrpc_init_client_call_security(call);
+	if (ret < 0) {
+		__rxrpc_set_call_completion(call, RXRPC_CALL_LOCAL_ERROR, 0, ret);
+		rxrpc_put_call(call, rxrpc_call_put_discard_error);
+		return ERR_PTR(ret);
+	}
+
+	trace_rxrpc_call(call->debug_id, refcount_read(&call->ref),
+			 p->user_call_ID, rxrpc_call_new_client);

 	_leave(" = %p", call);
 	return call;
@@ -218,6 +250,7 @@ static void rxrpc_start_call_timer(struct rxrpc_call *call)
 	call->ack_lost_at = j;
 	call->resend_at = j;
 	call->ping_at = j;
+	call->keepalive_at = j;
 	call->expect_rx_by = j;
 	call->expect_req_by = j;
 	call->expect_term_by = j;
@@ -270,7 +303,6 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx,
 	struct rxrpc_net *rxnet;
 	struct semaphore *limiter;
 	struct rb_node *parent, **pp;
-	const void *here = __builtin_return_address(0);
 	int ret;

 	_enter("%p,%lx", rx, p->user_call_ID);
@@ -281,7 +313,7 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx,
 		return ERR_PTR(-ERESTARTSYS);
 	}

-	call = rxrpc_alloc_client_call(rx, srx, gfp, debug_id);
+	call = rxrpc_alloc_client_call(rx, srx, cp, p, gfp, debug_id);
 	if (IS_ERR(call)) {
 		release_sock(&rx->sk);
 		up(limiter);
@@ -289,14 +321,6 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx,
 		return call;
 	}

-	call->interruptibility = p->interruptibility;
-	call->tx_total_len = p->tx_total_len;
-	trace_rxrpc_call(call->debug_id, rxrpc_call_new_client,
-			 refcount_read(&call->ref),
-			 here, (const void *)p->user_call_ID);
-	if (p->kernel)
-		__set_bit(RXRPC_CALL_KERNEL, &call->flags);
-
 	/* We need to protect a partially set up call against the user as we
 	 * will be acting outside the socket lock.
 	 */
@@ -322,7 +346,7 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx,
 	rcu_assign_pointer(call->socket, rx);
 	call->user_call_ID = p->user_call_ID;
 	__set_bit(RXRPC_CALL_HAS_USERID, &call->flags);
-	rxrpc_get_call(call, rxrpc_call_got_userid);
+	rxrpc_get_call(call, rxrpc_call_get_userid);
 	rb_link_node(&call->sock_node, parent, pp);
 	rb_insert_color(&call->sock_node, &rx->calls);
 	list_add(&call->sock_link, &rx->sock_calls);
@@ -330,9 +354,9 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx,
 	write_unlock(&rx->call_lock);

 	rxnet = call->rxnet;
-	spin_lock_bh(&rxnet->call_lock);
+	spin_lock(&rxnet->call_lock);
 	list_add_tail_rcu(&call->link, &rxnet->calls);
-	spin_unlock_bh(&rxnet->call_lock);
+	spin_unlock(&rxnet->call_lock);

 	/* From this point on, the call is protected by its own lock. */
 	release_sock(&rx->sk);
@@ -344,13 +368,10 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx,
 	if (ret < 0)
 		goto error_attached_to_socket;

-	trace_rxrpc_call(call->debug_id, rxrpc_call_connected,
-			 refcount_read(&call->ref), here, NULL);
+	rxrpc_see_call(call, rxrpc_call_see_connected);

 	rxrpc_start_call_timer(call);

-	_net("CALL new %d on CONN %d", call->debug_id, call->conn->debug_id);
-
 	_leave(" = %p [new]", call);
 	return call;
@@ -364,11 +385,11 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx,
 	release_sock(&rx->sk);
 	__rxrpc_set_call_completion(call, RXRPC_CALL_LOCAL_ERROR,
 				    RX_CALL_DEAD, -EEXIST);
-	trace_rxrpc_call(call->debug_id, rxrpc_call_error,
-			 refcount_read(&call->ref), here, ERR_PTR(-EEXIST));
+	trace_rxrpc_call(call->debug_id, refcount_read(&call->ref), 0,
+			 rxrpc_call_see_userid_exists);
 	rxrpc_release_call(rx, call);
 	mutex_unlock(&call->user_mutex);
-	rxrpc_put_call(call, rxrpc_call_put);
+	rxrpc_put_call(call, rxrpc_call_put_userid_exists);
 	_leave(" = -EEXIST");
 	return ERR_PTR(-EEXIST);
@@ -378,8 +399,8 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *rx,
 	 * leave the error to recvmsg() to deal with.
 	 */
 error_attached_to_socket:
-	trace_rxrpc_call(call->debug_id, rxrpc_call_error,
-			 refcount_read(&call->ref), here, ERR_PTR(ret));
+	trace_rxrpc_call(call->debug_id, refcount_read(&call->ref), ret,
+			 rxrpc_call_see_connect_failed);
 	set_bit(RXRPC_CALL_DISCONNECTED, &call->flags);
 	__rxrpc_set_call_completion(call, RXRPC_CALL_LOCAL_ERROR,
 				    RX_CALL_DEAD, ret);
@@ -403,11 +424,34 @@ void rxrpc_incoming_call(struct rxrpc_sock *rx,
 	rcu_assign_pointer(call->socket, rx);
 	call->call_id = sp->hdr.callNumber;
-	call->service_id = sp->hdr.serviceId;
+	call->dest_srx.srx_service = sp->hdr.serviceId;
 	call->cid = sp->hdr.cid;
 	call->state = RXRPC_CALL_SERVER_SECURING;
 	call->cong_tstamp = skb->tstamp;

+	spin_lock(&conn->state_lock);
+
+	switch (conn->state) {
+	case RXRPC_CONN_SERVICE_UNSECURED:
+	case RXRPC_CONN_SERVICE_CHALLENGING:
+		call->state = RXRPC_CALL_SERVER_SECURING;
+		break;
+	case RXRPC_CONN_SERVICE:
+		call->state = RXRPC_CALL_SERVER_RECV_REQUEST;
+		break;
+
+	case RXRPC_CONN_REMOTELY_ABORTED:
+		__rxrpc_set_call_completion(call, RXRPC_CALL_REMOTELY_ABORTED,
+					    conn->abort_code, conn->error);
+		break;
+	case RXRPC_CONN_LOCALLY_ABORTED:
+		__rxrpc_abort_call("CON", call, 1,
+				   conn->abort_code, conn->error);
+		break;
+	default:
+		BUG();
+	}
+
 	/* Set the channel for this call.  We don't get channel_lock as we're
 	 * only defending against the data_ready handler (which we're called
 	 * from) and the RESPONSE packet parser (which is only really
@@ -418,86 +462,48 @@ void rxrpc_incoming_call(struct rxrpc_sock *rx,
 	conn->channels[chan].call_counter = call->call_id;
 	conn->channels[chan].call_id = call->call_id;
 	rcu_assign_pointer(conn->channels[chan].call, call);
+	spin_unlock(&conn->state_lock);

-	spin_lock(&conn->params.peer->lock);
-	hlist_add_head_rcu(&call->error_link, &conn->params.peer->error_targets);
-	spin_unlock(&conn->params.peer->lock);
-
-	_net("CALL incoming %d on CONN %d", call->debug_id, call->conn->debug_id);
+	spin_lock(&conn->peer->lock);
+	hlist_add_head(&call->error_link, &conn->peer->error_targets);
+	spin_unlock(&conn->peer->lock);

 	rxrpc_start_call_timer(call);
 	_leave("");
 }
-/*
- * Queue a call's work processor, getting a ref to pass to the work queue.
- */
-bool rxrpc_queue_call(struct rxrpc_call *call)
-{
-	const void *here = __builtin_return_address(0);
-	int n;
-
-	if (!__refcount_inc_not_zero(&call->ref, &n))
-		return false;
-	if (rxrpc_queue_work(&call->processor))
-		trace_rxrpc_call(call->debug_id, rxrpc_call_queued, n + 1,
-				 here, NULL);
-	else
-		rxrpc_put_call(call, rxrpc_call_put_noqueue);
-	return true;
-}
-
-/*
- * Queue a call's work processor, passing the callers ref to the work queue.
- */
-bool __rxrpc_queue_call(struct rxrpc_call *call)
-{
-	const void *here = __builtin_return_address(0);
-	int n = refcount_read(&call->ref);
-
-	ASSERTCMP(n, >=, 1);
-	if (rxrpc_queue_work(&call->processor))
-		trace_rxrpc_call(call->debug_id, rxrpc_call_queued_ref, n,
-				 here, NULL);
-	else
-		rxrpc_put_call(call, rxrpc_call_put_noqueue);
-	return true;
-}
-
 /*
  * Note the re-emergence of a call.
  */
-void rxrpc_see_call(struct rxrpc_call *call)
+void rxrpc_see_call(struct rxrpc_call *call, enum rxrpc_call_trace why)
 {
-	const void *here = __builtin_return_address(0);
 	if (call) {
-		int n = refcount_read(&call->ref);
+		int r = refcount_read(&call->ref);

-		trace_rxrpc_call(call->debug_id, rxrpc_call_seen, n,
-				 here, NULL);
+		trace_rxrpc_call(call->debug_id, r, 0, why);
 	}
 }

-bool rxrpc_try_get_call(struct rxrpc_call *call, enum rxrpc_call_trace op)
+struct rxrpc_call *rxrpc_try_get_call(struct rxrpc_call *call,
+				      enum rxrpc_call_trace why)
 {
-	const void *here = __builtin_return_address(0);
-	int n;
+	int r;

-	if (!__refcount_inc_not_zero(&call->ref, &n))
-		return false;
-	trace_rxrpc_call(call->debug_id, op, n + 1, here, NULL);
-	return true;
+	if (!call || !__refcount_inc_not_zero(&call->ref, &r))
+		return NULL;
+	trace_rxrpc_call(call->debug_id, r + 1, 0, why);
+	return call;
 }

 /*
  * Note the addition of a ref on a call.
  */
-void rxrpc_get_call(struct rxrpc_call *call, enum rxrpc_call_trace op)
+void rxrpc_get_call(struct rxrpc_call *call, enum rxrpc_call_trace why)
 {
-	const void *here = __builtin_return_address(0);
-	int n;
+	int r;

-	__refcount_inc(&call->ref, &n);
-	trace_rxrpc_call(call->debug_id, op, n + 1, here, NULL);
+	__refcount_inc(&call->ref, &r);
+	trace_rxrpc_call(call->debug_id, r + 1, 0, why);
 }
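`rxrpc_try_get_call()` is built on `__refcount_inc_not_zero()`: it only takes a reference while the count is still nonzero, so a dying object can never be resurrected by a racing lookup. A self-contained C11-atomics sketch of that inc-not-zero primitive (this is a user-space illustration, not the kernel's `refcount_t` implementation, which additionally saturates on overflow):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Take a reference only if the object is still live (count != 0).
 * Reports success/failure like __refcount_inc_not_zero(); *old receives
 * the count observed before the increment.
 */
static bool refcount_inc_not_zero(atomic_int *ref, int *old)
{
        int r = atomic_load(ref);

        do {
                *old = r;
                if (r == 0)
                        return false;   /* already dying: don't resurrect */
        } while (!atomic_compare_exchange_weak(ref, &r, r + 1));
        /* CAS success: *ref went from r to r + 1 atomically */

        *old = r;
        return true;
}
```

The compare-exchange loop is what makes the "check for zero, then increment" pair atomic; a plain `if (count) count++;` would race with a concurrent final put.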
 /*
@@ -514,15 +520,13 @@ static void rxrpc_cleanup_ring(struct rxrpc_call *call)
  */
 void rxrpc_release_call(struct rxrpc_sock *rx, struct rxrpc_call *call)
 {
-	const void *here = __builtin_return_address(0);
 	struct rxrpc_connection *conn = call->conn;
 	bool put = false;

 	_enter("{%d,%d}", call->debug_id, refcount_read(&call->ref));

-	trace_rxrpc_call(call->debug_id, rxrpc_call_release,
-			 refcount_read(&call->ref),
-			 here, (const void *)call->flags);
+	trace_rxrpc_call(call->debug_id, refcount_read(&call->ref),
+			 call->flags, rxrpc_call_see_release);

 	ASSERTCMP(call->state, ==, RXRPC_CALL_COMPLETE);
@@ -530,10 +534,10 @@ void rxrpc_release_call(struct rxrpc_sock *rx, struct rxrpc_call *call)
 		BUG();

 	rxrpc_put_call_slot(call);
-	rxrpc_delete_call_timer(call);
+	del_timer_sync(&call->timer);

 	/* Make sure we don't get any more notifications */
-	write_lock_bh(&rx->recvmsg_lock);
+	write_lock(&rx->recvmsg_lock);

 	if (!list_empty(&call->recvmsg_link)) {
 		_debug("unlinking once-pending call %p { e=%lx f=%lx }",
@@ -546,16 +550,16 @@ void rxrpc_release_call(struct rxrpc_sock *rx, struct rxrpc_call *call)
 	call->recvmsg_link.next = NULL;
 	call->recvmsg_link.prev = NULL;

-	write_unlock_bh(&rx->recvmsg_lock);
+	write_unlock(&rx->recvmsg_lock);
 	if (put)
-		rxrpc_put_call(call, rxrpc_call_put);
+		rxrpc_put_call(call, rxrpc_call_put_unnotify);

 	write_lock(&rx->call_lock);

 	if (test_and_clear_bit(RXRPC_CALL_HAS_USERID, &call->flags)) {
 		rb_erase(&call->sock_node, &rx->calls);
 		memset(&call->sock_node, 0xdd, sizeof(call->sock_node));
-		rxrpc_put_call(call, rxrpc_call_put_userid);
+		rxrpc_put_call(call, rxrpc_call_put_userid_exists);
 	}

 	list_del(&call->sock_link);
@@ -584,17 +588,17 @@ void rxrpc_release_calls_on_socket(struct rxrpc_sock *rx)
 				  struct rxrpc_call, accept_link);
 		list_del(&call->accept_link);
 		rxrpc_abort_call("SKR", call, 0, RX_CALL_DEAD, -ECONNRESET);
-		rxrpc_put_call(call, rxrpc_call_put);
+		rxrpc_put_call(call, rxrpc_call_put_release_sock_tba);
 	}

 	while (!list_empty(&rx->sock_calls)) {
 		call = list_entry(rx->sock_calls.next,
 				  struct rxrpc_call, sock_link);
-		rxrpc_get_call(call, rxrpc_call_got);
+		rxrpc_get_call(call, rxrpc_call_get_release_sock);
 		rxrpc_abort_call("SKT", call, 0, RX_CALL_DEAD, -ECONNRESET);
 		rxrpc_send_abort_packet(call);
 		rxrpc_release_call(rx, call);
-		rxrpc_put_call(call, rxrpc_call_put);
+		rxrpc_put_call(call, rxrpc_call_put_release_sock);
 	}

 	_leave("");
@@ -603,26 +607,24 @@ void rxrpc_release_calls_on_socket(struct rxrpc_sock *rx)
 /*
  * release a call
  */
-void rxrpc_put_call(struct rxrpc_call *call, enum rxrpc_call_trace op)
+void rxrpc_put_call(struct rxrpc_call *call, enum rxrpc_call_trace why)
 {
 	struct rxrpc_net *rxnet = call->rxnet;
-	const void *here = __builtin_return_address(0);
 	unsigned int debug_id = call->debug_id;
 	bool dead;
-	int n;
+	int r;

 	ASSERT(call != NULL);

-	dead = __refcount_dec_and_test(&call->ref, &n);
-	trace_rxrpc_call(debug_id, op, n, here, NULL);
+	dead = __refcount_dec_and_test(&call->ref, &r);
+	trace_rxrpc_call(debug_id, r - 1, 0, why);
 	if (dead) {
-		_debug("call %d dead", call->debug_id);
 		ASSERTCMP(call->state, ==, RXRPC_CALL_COMPLETE);

 		if (!list_empty(&call->link)) {
-			spin_lock_bh(&rxnet->call_lock);
+			spin_lock(&rxnet->call_lock);
 			list_del_init(&call->link);
-			spin_unlock_bh(&rxnet->call_lock);
+			spin_unlock(&rxnet->call_lock);
 		}

 		rxrpc_cleanup_call(call);
@@ -630,36 +632,45 @@ void rxrpc_put_call(struct rxrpc_call *call, enum rxrpc_call_trace op)
 }

 /*
- * Final call destruction - but must be done in process context.
+ * Free up the call under RCU.
  */
-static void rxrpc_destroy_call(struct work_struct *work)
+static void rxrpc_rcu_free_call(struct rcu_head *rcu)
 {
-	struct rxrpc_call *call = container_of(work, struct rxrpc_call, processor);
-	struct rxrpc_net *rxnet = call->rxnet;
-
-	rxrpc_delete_call_timer(call);
-
-	rxrpc_put_connection(call->conn);
-	rxrpc_put_peer(call->peer);
+	struct rxrpc_call *call = container_of(rcu, struct rxrpc_call, rcu);
+	struct rxrpc_net *rxnet = READ_ONCE(call->rxnet);
+
 	kmem_cache_free(rxrpc_call_jar, call);
 	if (atomic_dec_and_test(&rxnet->nr_calls))
 		wake_up_var(&rxnet->nr_calls);
 }

 /*
- * Final call destruction under RCU.
+ * Final call destruction - but must be done in process context.
  */
-static void rxrpc_rcu_destroy_call(struct rcu_head *rcu)
+static void rxrpc_destroy_call(struct work_struct *work)
 {
-	struct rxrpc_call *call = container_of(rcu, struct rxrpc_call, rcu);
+	struct rxrpc_call *call = container_of(work, struct rxrpc_call, destroyer);
+	struct rxrpc_txbuf *txb;

-	if (in_softirq()) {
-		INIT_WORK(&call->processor, rxrpc_destroy_call);
-		if (!rxrpc_queue_work(&call->processor))
-			BUG();
-	} else {
-		rxrpc_destroy_call(&call->processor);
-	}
+	del_timer_sync(&call->timer);
+
+	rxrpc_cleanup_ring(call);
+	while ((txb = list_first_entry_or_null(&call->tx_sendmsg,
+					       struct rxrpc_txbuf, call_link))) {
+		list_del(&txb->call_link);
+		rxrpc_put_txbuf(txb, rxrpc_txbuf_put_cleaned);
+	}
+	while ((txb = list_first_entry_or_null(&call->tx_buffer,
+					       struct rxrpc_txbuf, call_link))) {
+		list_del(&txb->call_link);
+		rxrpc_put_txbuf(txb, rxrpc_txbuf_put_cleaned);
+	}
+	rxrpc_put_txbuf(call->tx_pending, rxrpc_txbuf_put_cleaned);
+	rxrpc_put_connection(call->conn, rxrpc_conn_put_call);
+	rxrpc_put_peer(call->peer, rxrpc_peer_put_call);
+	rxrpc_put_local(call->local, rxrpc_local_put_call);
+	call_rcu(&call->rcu, rxrpc_rcu_free_call);
 }

 /*
@@ -667,25 +678,20 @@ static void rxrpc_rcu_destroy_call(struct rcu_head *rcu)
*/ */
void rxrpc_cleanup_call(struct rxrpc_call *call) void rxrpc_cleanup_call(struct rxrpc_call *call)
{ {
struct rxrpc_txbuf *txb;
_net("DESTROY CALL %d", call->debug_id);
memset(&call->sock_node, 0xcd, sizeof(call->sock_node)); memset(&call->sock_node, 0xcd, sizeof(call->sock_node));
ASSERTCMP(call->state, ==, RXRPC_CALL_COMPLETE); ASSERTCMP(call->state, ==, RXRPC_CALL_COMPLETE);
ASSERT(test_bit(RXRPC_CALL_RELEASED, &call->flags)); ASSERT(test_bit(RXRPC_CALL_RELEASED, &call->flags));
rxrpc_cleanup_ring(call); del_timer(&call->timer);
while ((txb = list_first_entry_or_null(&call->tx_buffer,
struct rxrpc_txbuf, call_link))) {
list_del(&txb->call_link);
rxrpc_put_txbuf(txb, rxrpc_txbuf_put_cleaned);
}
rxrpc_put_txbuf(call->tx_pending, rxrpc_txbuf_put_cleaned);
rxrpc_free_skb(call->acks_soft_tbl, rxrpc_skb_cleaned);
call_rcu(&call->rcu, rxrpc_rcu_destroy_call); if (rcu_read_lock_held())
/* Can't use the rxrpc workqueue as we need to cancel/flush
* something that may be running/waiting there.
*/
schedule_work(&call->destroyer);
else
rxrpc_destroy_call(&call->destroyer);
} }
/* /*
...@@ -700,14 +706,14 @@ void rxrpc_destroy_all_calls(struct rxrpc_net *rxnet) ...@@ -700,14 +706,14 @@ void rxrpc_destroy_all_calls(struct rxrpc_net *rxnet)
_enter(""); _enter("");
if (!list_empty(&rxnet->calls)) { if (!list_empty(&rxnet->calls)) {
spin_lock_bh(&rxnet->call_lock); spin_lock(&rxnet->call_lock);
while (!list_empty(&rxnet->calls)) { while (!list_empty(&rxnet->calls)) {
call = list_entry(rxnet->calls.next, call = list_entry(rxnet->calls.next,
struct rxrpc_call, link); struct rxrpc_call, link);
_debug("Zapping call %p", call); _debug("Zapping call %p", call);
rxrpc_see_call(call); rxrpc_see_call(call, rxrpc_call_see_zap);
list_del_init(&call->link); list_del_init(&call->link);
pr_err("Call %p still in use (%d,%s,%lx,%lx)!\n", pr_err("Call %p still in use (%d,%s,%lx,%lx)!\n",
...@@ -715,12 +721,12 @@ void rxrpc_destroy_all_calls(struct rxrpc_net *rxnet) ...@@ -715,12 +721,12 @@ void rxrpc_destroy_all_calls(struct rxrpc_net *rxnet)
rxrpc_call_states[call->state], rxrpc_call_states[call->state],
call->flags, call->events); call->flags, call->events);
spin_unlock_bh(&rxnet->call_lock); spin_unlock(&rxnet->call_lock);
cond_resched(); cond_resched();
spin_lock_bh(&rxnet->call_lock); spin_lock(&rxnet->call_lock);
} }
spin_unlock_bh(&rxnet->call_lock); spin_unlock(&rxnet->call_lock);
} }
atomic_dec(&rxnet->nr_calls); atomic_dec(&rxnet->nr_calls);
...
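The put-side hunks above all follow the same convention: `__refcount_dec_and_test()` reports the pre-decrement count, and the tracepoint logs `r - 1` (the count after the put) plus an enum reason instead of `__builtin_return_address(0)`. A user-space sketch of that pattern, with purely illustrative names (this is a model of the idiom, not kernel code):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-in for the enum reasons (rxrpc_call_put_release_sock
 * and friends) that replace the return-address argument. */
enum put_why { put_release_sock, put_discard };

static int last_traced_refcount = -1;	/* stands in for the trace buffer */

/* Models __refcount_dec_and_test(): *oldp receives the value before the
 * decrement; the return value is true when the count reaches zero. */
static bool refcount_dec_and_test_old(int *ref, int *oldp)
{
	*oldp = *ref;
	*ref = *ref - 1;
	return *ref == 0;
}

/* Models rxrpc_put_call(): snapshot ids first, trace the post-decrement
 * count, and only free (here: report death) when the count hit zero. */
static bool put_object(int *ref, enum put_why why)
{
	int r;
	bool dead = refcount_dec_and_test_old(ref, &r);

	last_traced_refcount = r - 1;	/* what trace_rxrpc_call() records */
	(void)why;
	return dead;
}
```

The point of tracing `r - 1` rather than the raw saved value is that the trace line then shows the refcount as it stands after the operation it describes, which makes leak hunting in the trace log much easier than decoding return addresses.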
@@ -51,7 +51,7 @@ static void rxrpc_deactivate_bundle(struct rxrpc_bundle *bundle);
 static int rxrpc_get_client_connection_id(struct rxrpc_connection *conn,
 					  gfp_t gfp)
 {
-	struct rxrpc_net *rxnet = conn->params.local->rxnet;
+	struct rxrpc_net *rxnet = conn->rxnet;
 	int id;
 
 	_enter("");
@@ -122,37 +122,47 @@ static struct rxrpc_bundle *rxrpc_alloc_bundle(struct rxrpc_conn_parameters *cp,
 	bundle = kzalloc(sizeof(*bundle), gfp);
 	if (bundle) {
-		bundle->params = *cp;
-		rxrpc_get_peer(bundle->params.peer);
+		bundle->local		= cp->local;
+		bundle->peer		= rxrpc_get_peer(cp->peer, rxrpc_peer_get_bundle);
+		bundle->key		= cp->key;
+		bundle->exclusive	= cp->exclusive;
+		bundle->upgrade		= cp->upgrade;
+		bundle->service_id	= cp->service_id;
+		bundle->security_level	= cp->security_level;
 		refcount_set(&bundle->ref, 1);
 		atomic_set(&bundle->active, 1);
 		spin_lock_init(&bundle->channel_lock);
 		INIT_LIST_HEAD(&bundle->waiting_calls);
+		trace_rxrpc_bundle(bundle->debug_id, 1, rxrpc_bundle_new);
 	}
 	return bundle;
 }
 
-struct rxrpc_bundle *rxrpc_get_bundle(struct rxrpc_bundle *bundle)
+struct rxrpc_bundle *rxrpc_get_bundle(struct rxrpc_bundle *bundle,
+				      enum rxrpc_bundle_trace why)
 {
-	refcount_inc(&bundle->ref);
+	int r;
+
+	__refcount_inc(&bundle->ref, &r);
+	trace_rxrpc_bundle(bundle->debug_id, r + 1, why);
 	return bundle;
 }
 
 static void rxrpc_free_bundle(struct rxrpc_bundle *bundle)
 {
-	rxrpc_put_peer(bundle->params.peer);
+	trace_rxrpc_bundle(bundle->debug_id, 1, rxrpc_bundle_free);
+	rxrpc_put_peer(bundle->peer, rxrpc_peer_put_bundle);
 	kfree(bundle);
 }
 
-void rxrpc_put_bundle(struct rxrpc_bundle *bundle)
+void rxrpc_put_bundle(struct rxrpc_bundle *bundle, enum rxrpc_bundle_trace why)
 {
-	unsigned int d = bundle->debug_id;
+	unsigned int id = bundle->debug_id;
 	bool dead;
 	int r;
 
 	dead = __refcount_dec_and_test(&bundle->ref, &r);
-
-	_debug("PUT B=%x %d", d, r - 1);
+	trace_rxrpc_bundle(id, r - 1, why);
 	if (dead)
 		rxrpc_free_bundle(bundle);
 }
@@ -164,12 +174,12 @@ static struct rxrpc_connection *
 rxrpc_alloc_client_connection(struct rxrpc_bundle *bundle, gfp_t gfp)
 {
 	struct rxrpc_connection *conn;
-	struct rxrpc_net *rxnet = bundle->params.local->rxnet;
+	struct rxrpc_net *rxnet = bundle->local->rxnet;
 	int ret;
 
 	_enter("");
 
-	conn = rxrpc_alloc_connection(gfp);
+	conn = rxrpc_alloc_connection(rxnet, gfp);
 	if (!conn) {
 		_leave(" = -ENOMEM");
 		return ERR_PTR(-ENOMEM);
@@ -177,10 +187,16 @@ rxrpc_alloc_client_connection(struct rxrpc_bundle *bundle, gfp_t gfp)
 
 	refcount_set(&conn->ref, 1);
 	conn->bundle		= bundle;
-	conn->params		= bundle->params;
+	conn->local		= bundle->local;
+	conn->peer		= bundle->peer;
+	conn->key		= bundle->key;
+	conn->exclusive		= bundle->exclusive;
+	conn->upgrade		= bundle->upgrade;
+	conn->orig_service_id	= bundle->service_id;
+	conn->security_level	= bundle->security_level;
 	conn->out_clientflag	= RXRPC_CLIENT_INITIATED;
 	conn->state		= RXRPC_CONN_CLIENT;
-	conn->service_id	= conn->params.service_id;
+	conn->service_id	= conn->orig_service_id;
 
 	ret = rxrpc_get_client_connection_id(conn, gfp);
 	if (ret < 0)
@@ -195,14 +211,13 @@ rxrpc_alloc_client_connection(struct rxrpc_bundle *bundle, gfp_t gfp)
 	list_add_tail(&conn->proc_link, &rxnet->conn_proc_list);
 	write_unlock(&rxnet->conn_lock);
 
-	rxrpc_get_bundle(bundle);
-	rxrpc_get_peer(conn->params.peer);
-	rxrpc_get_local(conn->params.local);
-	key_get(conn->params.key);
+	rxrpc_get_bundle(bundle, rxrpc_bundle_get_client_conn);
+	rxrpc_get_peer(conn->peer, rxrpc_peer_get_client_conn);
+	rxrpc_get_local(conn->local, rxrpc_local_get_client_conn);
+	key_get(conn->key);
 
-	trace_rxrpc_conn(conn->debug_id, rxrpc_conn_new_client,
-			 refcount_read(&conn->ref),
-			 __builtin_return_address(0));
+	trace_rxrpc_conn(conn->debug_id, refcount_read(&conn->ref),
+			 rxrpc_conn_new_client);
 
 	atomic_inc(&rxnet->nr_client_conns);
 	trace_rxrpc_client(conn, -1, rxrpc_client_alloc);
@@ -228,7 +243,7 @@ static bool rxrpc_may_reuse_conn(struct rxrpc_connection *conn)
 	if (!conn)
 		goto dont_reuse;
 
-	rxnet = conn->params.local->rxnet;
+	rxnet = conn->rxnet;
 	if (test_bit(RXRPC_CONN_DONT_REUSE, &conn->flags))
 		goto dont_reuse;
 
@@ -285,7 +300,7 @@ static struct rxrpc_bundle *rxrpc_look_up_bundle(struct rxrpc_conn_parameters *cp,
 	while (p) {
 		bundle = rb_entry(p, struct rxrpc_bundle, local_node);
 
-#define cmp(X) ((long)bundle->params.X - (long)cp->X)
+#define cmp(X) ((long)bundle->X - (long)cp->X)
 		diff = (cmp(peer) ?:
 			cmp(key) ?:
 			cmp(security_level) ?:
@@ -314,7 +329,7 @@ static struct rxrpc_bundle *rxrpc_look_up_bundle(struct rxrpc_conn_parameters *cp,
 		parent = *pp;
 		bundle = rb_entry(parent, struct rxrpc_bundle, local_node);
 
-#define cmp(X) ((long)bundle->params.X - (long)cp->X)
+#define cmp(X) ((long)bundle->X - (long)cp->X)
 		diff = (cmp(peer) ?:
 			cmp(key) ?:
 			cmp(security_level) ?:
@@ -332,7 +347,7 @@ static struct rxrpc_bundle *rxrpc_look_up_bundle(struct rxrpc_conn_parameters *cp,
 	candidate->debug_id = atomic_inc_return(&rxrpc_bundle_id);
 	rb_link_node(&candidate->local_node, parent, pp);
 	rb_insert_color(&candidate->local_node, &local->client_bundles);
-	rxrpc_get_bundle(candidate);
+	rxrpc_get_bundle(candidate, rxrpc_bundle_get_client_call);
 	spin_unlock(&local->client_bundles_lock);
 	_leave(" = %u [new]", candidate->debug_id);
 	return candidate;
@@ -340,7 +355,7 @@ static struct rxrpc_bundle *rxrpc_look_up_bundle(struct rxrpc_conn_parameters *cp,
 found_bundle_free:
 	rxrpc_free_bundle(candidate);
 found_bundle:
-	rxrpc_get_bundle(bundle);
+	rxrpc_get_bundle(bundle, rxrpc_bundle_get_client_call);
 	atomic_inc(&bundle->active);
 	spin_unlock(&local->client_bundles_lock);
 	_leave(" = %u [found]", bundle->debug_id);
@@ -456,10 +471,10 @@ static void rxrpc_add_conn_to_bundle(struct rxrpc_bundle *bundle, gfp_t gfp)
 	if (candidate) {
 		_debug("discard C=%x", candidate->debug_id);
 		trace_rxrpc_client(candidate, -1, rxrpc_client_duplicate);
-		rxrpc_put_connection(candidate);
+		rxrpc_put_connection(candidate, rxrpc_conn_put_discard);
 	}
 
-	rxrpc_put_connection(old);
+	rxrpc_put_connection(old, rxrpc_conn_put_noreuse);
 	_leave("");
 }
 
@@ -530,23 +545,21 @@ static void rxrpc_activate_one_channel(struct rxrpc_connection *conn,
 	clear_bit(RXRPC_CONN_FINAL_ACK_0 + channel, &conn->flags);
 	clear_bit(conn->bundle_shift + channel, &bundle->avail_chans);
 
-	rxrpc_see_call(call);
+	rxrpc_see_call(call, rxrpc_call_see_activate_client);
 	list_del_init(&call->chan_wait_link);
-	call->peer	= rxrpc_get_peer(conn->params.peer);
-	call->conn	= rxrpc_get_connection(conn);
+	call->peer	= rxrpc_get_peer(conn->peer, rxrpc_peer_get_activate_call);
+	call->conn	= rxrpc_get_connection(conn, rxrpc_conn_get_activate_call);
 	call->cid	= conn->proto.cid | channel;
 	call->call_id	= call_id;
 	call->security	= conn->security;
 	call->security_ix = conn->security_ix;
-	call->service_id = conn->service_id;
+	call->dest_srx.srx_service = conn->service_id;
 
 	trace_rxrpc_connect_call(call);
-	_net("CONNECT call %08x:%08x as call %d on conn %d",
-	     call->cid, call->call_id, call->debug_id, conn->debug_id);
 
-	write_lock_bh(&call->state_lock);
+	write_lock(&call->state_lock);
 	call->state = RXRPC_CALL_CLIENT_SEND_REQUEST;
-	write_unlock_bh(&call->state_lock);
+	write_unlock(&call->state_lock);
 
 	/* Paired with the read barrier in rxrpc_connect_call().  This orders
 	 * cid and epoch in the connection wrt to call_id without the need to
@@ -571,7 +584,7 @@ static void rxrpc_activate_one_channel(struct rxrpc_connection *conn,
  */
 static void rxrpc_unidle_conn(struct rxrpc_bundle *bundle, struct rxrpc_connection *conn)
 {
-	struct rxrpc_net *rxnet = bundle->params.local->rxnet;
+	struct rxrpc_net *rxnet = bundle->local->rxnet;
 	bool drop_ref;
 
 	if (!list_empty(&conn->cache_link)) {
@@ -583,7 +596,7 @@ static void rxrpc_unidle_conn(struct rxrpc_bundle *bundle, struct rxrpc_connection *conn)
 		}
 		spin_unlock(&rxnet->client_conn_cache_lock);
 		if (drop_ref)
-			rxrpc_put_connection(conn);
+			rxrpc_put_connection(conn, rxrpc_conn_put_unidle);
 	}
 }
 
@@ -732,7 +745,7 @@ int rxrpc_connect_call(struct rxrpc_sock *rx,
 
 out_put_bundle:
 	rxrpc_deactivate_bundle(bundle);
-	rxrpc_put_bundle(bundle);
+	rxrpc_put_bundle(bundle, rxrpc_bundle_get_client_call);
 out:
 	_leave(" = %d", ret);
 	return ret;
@@ -773,6 +786,10 @@ void rxrpc_expose_client_call(struct rxrpc_call *call)
 		if (chan->call_counter >= INT_MAX)
 			set_bit(RXRPC_CONN_DONT_REUSE, &conn->flags);
 		trace_rxrpc_client(conn, channel, rxrpc_client_exposed);
+
+		spin_lock(&call->peer->lock);
+		hlist_add_head(&call->error_link, &call->peer->error_targets);
+		spin_unlock(&call->peer->lock);
 	}
 }
 
@@ -797,7 +814,7 @@ void rxrpc_disconnect_client_call(struct rxrpc_bundle *bundle, struct rxrpc_call *call)
 {
 	struct rxrpc_connection *conn;
 	struct rxrpc_channel *chan = NULL;
-	struct rxrpc_net *rxnet = bundle->params.local->rxnet;
+	struct rxrpc_net *rxnet = bundle->local->rxnet;
 	unsigned int channel;
 	bool may_reuse;
 	u32 cid;
@@ -887,7 +904,7 @@ void rxrpc_disconnect_client_call(struct rxrpc_bundle *bundle, struct rxrpc_call *call)
 		trace_rxrpc_client(conn, channel, rxrpc_client_to_idle);
 		conn->idle_timestamp = jiffies;
 
-		rxrpc_get_connection(conn);
+		rxrpc_get_connection(conn, rxrpc_conn_get_idle);
 		spin_lock(&rxnet->client_conn_cache_lock);
 		list_move_tail(&conn->cache_link, &rxnet->idle_client_conns);
 		spin_unlock(&rxnet->client_conn_cache_lock);
@@ -929,7 +946,7 @@ static void rxrpc_unbundle_conn(struct rxrpc_connection *conn)
 
 	if (need_drop) {
 		rxrpc_deactivate_bundle(bundle);
-		rxrpc_put_connection(conn);
+		rxrpc_put_connection(conn, rxrpc_conn_put_unbundle);
 	}
 }
 
@@ -938,11 +955,11 @@ static void rxrpc_unbundle_conn(struct rxrpc_connection *conn)
  */
 static void rxrpc_deactivate_bundle(struct rxrpc_bundle *bundle)
 {
-	struct rxrpc_local *local = bundle->params.local;
+	struct rxrpc_local *local = bundle->local;
 	bool need_put = false;
 
 	if (atomic_dec_and_lock(&bundle->active, &local->client_bundles_lock)) {
-		if (!bundle->params.exclusive) {
+		if (!bundle->exclusive) {
 			_debug("erase bundle");
 			rb_erase(&bundle->local_node, &local->client_bundles);
 			need_put = true;
@@ -950,16 +967,16 @@ static void rxrpc_deactivate_bundle(struct rxrpc_bundle *bundle)
 
 		spin_unlock(&local->client_bundles_lock);
 		if (need_put)
-			rxrpc_put_bundle(bundle);
+			rxrpc_put_bundle(bundle, rxrpc_bundle_put_discard);
 	}
 }
 
 /*
  * Clean up a dead client connection.
  */
-static void rxrpc_kill_client_conn(struct rxrpc_connection *conn)
+void rxrpc_kill_client_conn(struct rxrpc_connection *conn)
 {
-	struct rxrpc_local *local = conn->params.local;
+	struct rxrpc_local *local = conn->local;
 	struct rxrpc_net *rxnet = local->rxnet;
 
 	_enter("C=%x", conn->debug_id);
@@ -968,23 +985,6 @@ static void rxrpc_kill_client_conn(struct rxrpc_connection *conn)
 
 	atomic_dec(&rxnet->nr_client_conns);
 	rxrpc_put_client_connection_id(conn);
-
-	rxrpc_kill_connection(conn);
-}
-
-/*
- * Clean up a dead client connections.
- */
-void rxrpc_put_client_conn(struct rxrpc_connection *conn)
-{
-	const void *here = __builtin_return_address(0);
-	unsigned int debug_id = conn->debug_id;
-	bool dead;
-	int r;
-
-	dead = __refcount_dec_and_test(&conn->ref, &r);
-	trace_rxrpc_conn(debug_id, rxrpc_conn_put_client, r - 1, here);
-	if (dead)
-		rxrpc_kill_client_conn(conn);
 }
 
 /*
@@ -1010,7 +1010,7 @@ void rxrpc_discard_expired_client_conns(struct work_struct *work)
 	}
 
 	/* Don't double up on the discarding */
-	if (!spin_trylock(&rxnet->client_conn_discard_lock)) {
+	if (!mutex_trylock(&rxnet->client_conn_discard_lock)) {
 		_leave(" [already]");
 		return;
 	}
@@ -1038,7 +1038,7 @@ void rxrpc_discard_expired_client_conns(struct work_struct *work)
 		expiry = rxrpc_conn_idle_client_expiry;
 		if (nr_conns > rxrpc_reap_client_connections)
 			expiry = rxrpc_conn_idle_client_fast_expiry;
-		if (conn->params.local->service_closed)
+		if (conn->local->service_closed)
 			expiry = rxrpc_closed_conn_expiry * HZ;
 
 		conn_expires_at = conn->idle_timestamp + expiry;
@@ -1048,13 +1048,15 @@ void rxrpc_discard_expired_client_conns(struct work_struct *work)
 			goto not_yet_expired;
 	}
 
+	atomic_dec(&conn->active);
 	trace_rxrpc_client(conn, -1, rxrpc_client_discard);
 	list_del_init(&conn->cache_link);
 
 	spin_unlock(&rxnet->client_conn_cache_lock);
 
 	rxrpc_unbundle_conn(conn);
-	rxrpc_put_connection(conn); /* Drop the ->cache_link ref */
+	/* Drop the ->cache_link ref */
+	rxrpc_put_connection(conn, rxrpc_conn_put_discard_idle);
 
 	nr_conns--;
 	goto next;
@@ -1073,7 +1075,7 @@ void rxrpc_discard_expired_client_conns(struct work_struct *work)
 out:
 	spin_unlock(&rxnet->client_conn_cache_lock);
-	spin_unlock(&rxnet->client_conn_discard_lock);
+	mutex_unlock(&rxnet->client_conn_discard_lock);
 	_leave("");
 }
 
@@ -1112,7 +1114,8 @@ void rxrpc_clean_up_local_conns(struct rxrpc_local *local)
 	list_for_each_entry_safe(conn, tmp, &rxnet->idle_client_conns,
 				 cache_link) {
-		if (conn->params.local == local) {
+		if (conn->local == local) {
+			atomic_dec(&conn->active);
 			trace_rxrpc_client(conn, -1, rxrpc_client_discard);
 			list_move(&conn->cache_link, &graveyard);
 		}
@@ -1125,7 +1128,7 @@ void rxrpc_clean_up_local_conns(struct rxrpc_local *local)
 				  struct rxrpc_connection, cache_link);
 		list_del_init(&conn->cache_link);
 		rxrpc_unbundle_conn(conn);
-		rxrpc_put_connection(conn);
+		rxrpc_put_connection(conn, rxrpc_conn_put_local_dead);
 	}
 
 	_leave(" [culled]");
...
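The `rxrpc_look_up_bundle()` hunks above change the `cmp(X)` macro to read fields straight off `struct rxrpc_bundle` rather than going through `bundle->params`. The lookup itself relies on the GNU C `a ?: b` extension to chain field comparisons so that the first nonzero difference decides the rb-tree ordering. A compilable sketch of that comparison with simplified stand-in fields (names here are illustrative, not the kernel's):

```c
#include <assert.h>

/* Simplified stand-ins for the connection parameters the patch moves from
 * bundle->params.* directly into struct rxrpc_bundle. */
struct bundle_key {
	long peer;
	long key;
	long security_level;
	long upgrade;
};

/* Mirrors the patched macro: ((long)bundle->X - (long)cp->X).  The GNU
 * "a ?: b" extension yields a if it is nonzero, else b, so the fields are
 * tested in priority order until one of them differs. */
#define cmp(X) ((long)b->X - (long)cp->X)

static long bundle_cmp(const struct bundle_key *b, const struct bundle_key *cp)
{
	return cmp(peer) ?:
	       cmp(key) ?:
	       cmp(security_level) ?:
	       cmp(upgrade);
}
#undef cmp
```

A negative result sends the search down the left branch, positive down the right, and zero means the existing bundle can be shared, which is exactly how the two rb-tree walks in the patch consume `diff`.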
@@ -52,8 +52,8 @@ static void rxrpc_conn_retransmit_call(struct rxrpc_connection *conn,
 	if (skb && call_id != sp->hdr.callNumber)
 		return;
 
-	msg.msg_name	= &conn->params.peer->srx.transport;
-	msg.msg_namelen	= conn->params.peer->srx.transport_len;
+	msg.msg_name	= &conn->peer->srx.transport;
+	msg.msg_namelen	= conn->peer->srx.transport_len;
 	msg.msg_control	= NULL;
 	msg.msg_controllen = 0;
 	msg.msg_flags	= 0;
@@ -86,8 +86,8 @@ static void rxrpc_conn_retransmit_call(struct rxrpc_connection *conn,
 		break;
 
 	case RXRPC_PACKET_TYPE_ACK:
-		mtu = conn->params.peer->if_mtu;
-		mtu -= conn->params.peer->hdrsize;
+		mtu = conn->peer->if_mtu;
+		mtu -= conn->peer->hdrsize;
 		pkt.ack.bufferSpace	= 0;
 		pkt.ack.maxSkew		= htons(skb ? skb->priority : 0);
 		pkt.ack.firstPacket	= htonl(chan->last_seq + 1);
@@ -122,19 +122,17 @@ static void rxrpc_conn_retransmit_call(struct rxrpc_connection *conn,
 
 	switch (chan->last_type) {
 	case RXRPC_PACKET_TYPE_ABORT:
-		_proto("Tx ABORT %%%u { %d } [re]", serial, conn->abort_code);
 		break;
 	case RXRPC_PACKET_TYPE_ACK:
 		trace_rxrpc_tx_ack(chan->call_debug_id, serial,
 				   ntohl(pkt.ack.firstPacket),
 				   ntohl(pkt.ack.serial),
 				   pkt.ack.reason, 0);
-		_proto("Tx ACK %%%u [re]", serial);
 		break;
 	}
 
-	ret = kernel_sendmsg(conn->params.local->socket, &msg, iov, ioc, len);
-	conn->params.peer->last_tx_at = ktime_get_seconds();
+	ret = kernel_sendmsg(conn->local->socket, &msg, iov, ioc, len);
+	conn->peer->last_tx_at = ktime_get_seconds();
 	if (ret < 0)
 		trace_rxrpc_tx_fail(chan->call_debug_id, serial, ret,
 				    rxrpc_tx_point_call_final_resend);
@@ -200,9 +198,9 @@ static int rxrpc_abort_connection(struct rxrpc_connection *conn,
 	_enter("%d,,%u,%u", conn->debug_id, error, abort_code);
 
 	/* generate a connection-level abort */
-	spin_lock_bh(&conn->state_lock);
+	spin_lock(&conn->state_lock);
 	if (conn->state >= RXRPC_CONN_REMOTELY_ABORTED) {
-		spin_unlock_bh(&conn->state_lock);
+		spin_unlock(&conn->state_lock);
 		_leave(" = 0 [already dead]");
 		return 0;
 	}
@@ -211,10 +209,10 @@ static int rxrpc_abort_connection(struct rxrpc_connection *conn,
 	conn->abort_code = abort_code;
 	conn->state = RXRPC_CONN_LOCALLY_ABORTED;
 	set_bit(RXRPC_CONN_DONT_REUSE, &conn->flags);
-	spin_unlock_bh(&conn->state_lock);
+	spin_unlock(&conn->state_lock);
 
-	msg.msg_name	= &conn->params.peer->srx.transport;
-	msg.msg_namelen	= conn->params.peer->srx.transport_len;
+	msg.msg_name	= &conn->peer->srx.transport;
+	msg.msg_namelen	= conn->peer->srx.transport_len;
 	msg.msg_control	= NULL;
 	msg.msg_controllen = 0;
 	msg.msg_flags	= 0;
@@ -242,9 +240,8 @@ static int rxrpc_abort_connection(struct rxrpc_connection *conn,
 	serial = atomic_inc_return(&conn->serial);
 	rxrpc_abort_calls(conn, RXRPC_CALL_LOCALLY_ABORTED, serial);
 	whdr.serial = htonl(serial);
-	_proto("Tx CONN ABORT %%%u { %d }", serial, conn->abort_code);
 
-	ret = kernel_sendmsg(conn->params.local->socket, &msg, iov, 2, len);
+	ret = kernel_sendmsg(conn->local->socket, &msg, iov, 2, len);
 	if (ret < 0) {
 		trace_rxrpc_tx_fail(conn->debug_id, serial, ret,
 				    rxrpc_tx_point_conn_abort);
@@ -254,7 +251,7 @@ static int rxrpc_abort_connection(struct rxrpc_connection *conn,
 
 	trace_rxrpc_tx_packet(conn->debug_id, &whdr, rxrpc_tx_point_conn_abort);
 
-	conn->params.peer->last_tx_at = ktime_get_seconds();
+	conn->peer->last_tx_at = ktime_get_seconds();
 
 	_leave(" = 0");
 	return 0;
@@ -268,12 +265,12 @@ static void rxrpc_call_is_secure(struct rxrpc_call *call)
 {
 	_enter("%p", call);
 
 	if (call) {
-		write_lock_bh(&call->state_lock);
+		write_lock(&call->state_lock);
 		if (call->state == RXRPC_CALL_SERVER_SECURING) {
 			call->state = RXRPC_CALL_SERVER_RECV_REQUEST;
 			rxrpc_notify_socket(call);
 		}
-		write_unlock_bh(&call->state_lock);
+		write_unlock(&call->state_lock);
 	}
 }
 
@@ -285,8 +282,6 @@ static int rxrpc_process_event(struct rxrpc_connection *conn,
 			       u32 *_abort_code)
 {
 	struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
-	__be32 wtmp;
-	u32 abort_code;
 	int loop, ret;
 
 	if (conn->state >= RXRPC_CONN_REMOTELY_ABORTED) {
@@ -308,17 +303,8 @@ static int rxrpc_process_event(struct rxrpc_connection *conn,
 		return 0;
 
 	case RXRPC_PACKET_TYPE_ABORT:
-		if (skb_copy_bits(skb, sizeof(struct rxrpc_wire_header),
-				  &wtmp, sizeof(wtmp)) < 0) {
-			trace_rxrpc_rx_eproto(NULL, sp->hdr.serial,
					      tracepoint_string("bad_abort"));
-			return -EPROTO;
-		}
-		abort_code = ntohl(wtmp);
-		_proto("Rx ABORT %%%u { ac=%d }", sp->hdr.serial, abort_code);
-
 		conn->error = -ECONNABORTED;
-		conn->abort_code = abort_code;
+		conn->abort_code = skb->priority;
 		conn->state  = RXRPC_CONN_REMOTELY_ABORTED;
 		set_bit(RXRPC_CONN_DONT_REUSE, &conn->flags);
 		rxrpc_abort_calls(conn, RXRPC_CALL_REMOTELY_ABORTED, sp->hdr.serial);
@@ -334,23 +320,23 @@ static int rxrpc_process_event(struct rxrpc_connection *conn,
 			return ret;
 
 		ret = conn->security->init_connection_security(
-			conn, conn->params.key->payload.data[0]);
+			conn, conn->key->payload.data[0]);
 		if (ret < 0)
 			return ret;
 
 		spin_lock(&conn->bundle->channel_lock);
-		spin_lock_bh(&conn->state_lock);
+		spin_lock(&conn->state_lock);
 
 		if (conn->state == RXRPC_CONN_SERVICE_CHALLENGING) {
 			conn->state = RXRPC_CONN_SERVICE;
-			spin_unlock_bh(&conn->state_lock);
+			spin_unlock(&conn->state_lock);
 			for (loop = 0; loop < RXRPC_MAXCALLS; loop++)
 				rxrpc_call_is_secure(
 					rcu_dereference_protected(
 						conn->channels[loop].call,
 						lockdep_is_held(&conn->bundle->channel_lock)));
 		} else {
-			spin_unlock_bh(&conn->state_lock);
+			spin_unlock(&conn->state_lock);
 		}
 
 		spin_unlock(&conn->bundle->channel_lock);
@@ -451,7 +437,7 @@ static void rxrpc_do_process_connection(struct rxrpc_connection *conn)
 	/* go through the conn-level event packets, releasing the ref on this
 	 * connection that each one has when we've finished with it */
 	while ((skb = skb_dequeue(&conn->rx_queue))) {
-		rxrpc_see_skb(skb, rxrpc_skb_seen);
+		rxrpc_see_skb(skb, rxrpc_skb_see_conn_work);
 		ret = rxrpc_process_event(conn, skb, &abort_code);
switch (ret) { switch (ret) {
case -EPROTO: case -EPROTO:
...@@ -463,7 +449,7 @@ static void rxrpc_do_process_connection(struct rxrpc_connection *conn) ...@@ -463,7 +449,7 @@ static void rxrpc_do_process_connection(struct rxrpc_connection *conn)
goto requeue_and_leave; goto requeue_and_leave;
case -ECONNABORTED: case -ECONNABORTED:
default: default:
rxrpc_free_skb(skb, rxrpc_skb_freed); rxrpc_free_skb(skb, rxrpc_skb_put_conn_work);
break; break;
} }
} }
...@@ -477,7 +463,7 @@ static void rxrpc_do_process_connection(struct rxrpc_connection *conn) ...@@ -477,7 +463,7 @@ static void rxrpc_do_process_connection(struct rxrpc_connection *conn)
protocol_error: protocol_error:
if (rxrpc_abort_connection(conn, ret, abort_code) < 0) if (rxrpc_abort_connection(conn, ret, abort_code) < 0)
goto requeue_and_leave; goto requeue_and_leave;
rxrpc_free_skb(skb, rxrpc_skb_freed); rxrpc_free_skb(skb, rxrpc_skb_put_conn_work);
return; return;
} }
...@@ -486,14 +472,70 @@ void rxrpc_process_connection(struct work_struct *work) ...@@ -486,14 +472,70 @@ void rxrpc_process_connection(struct work_struct *work)
struct rxrpc_connection *conn = struct rxrpc_connection *conn =
container_of(work, struct rxrpc_connection, processor); container_of(work, struct rxrpc_connection, processor);
rxrpc_see_connection(conn); rxrpc_see_connection(conn, rxrpc_conn_see_work);
if (__rxrpc_use_local(conn->params.local)) { if (__rxrpc_use_local(conn->local, rxrpc_local_use_conn_work)) {
rxrpc_do_process_connection(conn); rxrpc_do_process_connection(conn);
rxrpc_unuse_local(conn->params.local); rxrpc_unuse_local(conn->local, rxrpc_local_unuse_conn_work);
} }
}
rxrpc_put_connection(conn); /*
_leave(""); * post connection-level events to the connection
return; * - this includes challenges, responses, some aborts and call terminal packet
* retransmission.
*/
static void rxrpc_post_packet_to_conn(struct rxrpc_connection *conn,
struct sk_buff *skb)
{
_enter("%p,%p", conn, skb);
rxrpc_get_skb(skb, rxrpc_skb_get_conn_work);
skb_queue_tail(&conn->rx_queue, skb);
rxrpc_queue_conn(conn, rxrpc_conn_queue_rx_work);
}
/*
* Input a connection-level packet.
*/
int rxrpc_input_conn_packet(struct rxrpc_connection *conn, struct sk_buff *skb)
{
struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
if (conn->state >= RXRPC_CONN_REMOTELY_ABORTED) {
_leave(" = -ECONNABORTED [%u]", conn->state);
return -ECONNABORTED;
}
_enter("{%d},{%u,%%%u},", conn->debug_id, sp->hdr.type, sp->hdr.serial);
switch (sp->hdr.type) {
case RXRPC_PACKET_TYPE_DATA:
case RXRPC_PACKET_TYPE_ACK:
rxrpc_conn_retransmit_call(conn, skb,
sp->hdr.cid & RXRPC_CHANNELMASK);
return 0;
case RXRPC_PACKET_TYPE_BUSY:
/* Just ignore BUSY packets for now. */
return 0;
case RXRPC_PACKET_TYPE_ABORT:
conn->error = -ECONNABORTED;
conn->abort_code = skb->priority;
conn->state = RXRPC_CONN_REMOTELY_ABORTED;
set_bit(RXRPC_CONN_DONT_REUSE, &conn->flags);
rxrpc_abort_calls(conn, RXRPC_CALL_REMOTELY_ABORTED, sp->hdr.serial);
return -ECONNABORTED;
case RXRPC_PACKET_TYPE_CHALLENGE:
case RXRPC_PACKET_TYPE_RESPONSE:
rxrpc_post_packet_to_conn(conn, skb);
return 0;
default:
trace_rxrpc_rx_eproto(NULL, sp->hdr.serial,
tracepoint_string("bad_conn_pkt"));
return -EPROTO;
}
} }
@@ -19,20 +19,23 @@
 unsigned int __read_mostly rxrpc_connection_expiry = 10 * 60;
 unsigned int __read_mostly rxrpc_closed_conn_expiry = 10;

-static void rxrpc_destroy_connection(struct rcu_head *);
+static void rxrpc_clean_up_connection(struct work_struct *work);
+static void rxrpc_set_service_reap_timer(struct rxrpc_net *rxnet,
+					 unsigned long reap_at);

 static void rxrpc_connection_timer(struct timer_list *timer)
 {
 	struct rxrpc_connection *conn =
 		container_of(timer, struct rxrpc_connection, timer);

-	rxrpc_queue_conn(conn);
+	rxrpc_queue_conn(conn, rxrpc_conn_queue_timer);
 }

 /*
  * allocate a new connection
  */
-struct rxrpc_connection *rxrpc_alloc_connection(gfp_t gfp)
+struct rxrpc_connection *rxrpc_alloc_connection(struct rxrpc_net *rxnet,
+						gfp_t gfp)
 {
 	struct rxrpc_connection *conn;

@@ -42,10 +45,12 @@ struct rxrpc_connection *rxrpc_alloc_connection(gfp_t gfp)
 	if (conn) {
 		INIT_LIST_HEAD(&conn->cache_link);
 		timer_setup(&conn->timer, &rxrpc_connection_timer, 0);
-		INIT_WORK(&conn->processor, &rxrpc_process_connection);
+		INIT_WORK(&conn->processor, rxrpc_process_connection);
+		INIT_WORK(&conn->destructor, rxrpc_clean_up_connection);
 		INIT_LIST_HEAD(&conn->proc_link);
 		INIT_LIST_HEAD(&conn->link);
 		skb_queue_head_init(&conn->rx_queue);
+		conn->rxnet = rxnet;
 		conn->security = &rxrpc_no_security;
 		spin_lock_init(&conn->state_lock);
 		conn->debug_id = atomic_inc_return(&rxrpc_debug_id);
@@ -67,89 +72,55 @@ struct rxrpc_connection *rxrpc_alloc_connection(gfp_t gfp)
  *
  * The caller must be holding the RCU read lock.
  */
-struct rxrpc_connection *rxrpc_find_connection_rcu(struct rxrpc_local *local,
-						   struct sk_buff *skb,
-						   struct rxrpc_peer **_peer)
+struct rxrpc_connection *rxrpc_find_client_connection_rcu(struct rxrpc_local *local,
+							  struct sockaddr_rxrpc *srx,
+							  struct sk_buff *skb)
 {
 	struct rxrpc_connection *conn;
-	struct rxrpc_conn_proto k;
 	struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
-	struct sockaddr_rxrpc srx;
 	struct rxrpc_peer *peer;

 	_enter(",%x", sp->hdr.cid & RXRPC_CIDMASK);

-	if (rxrpc_extract_addr_from_skb(&srx, skb) < 0)
-		goto not_found;
-
-	if (srx.transport.family != local->srx.transport.family &&
-	    (srx.transport.family == AF_INET &&
-	     local->srx.transport.family != AF_INET6)) {
-		pr_warn_ratelimited("AF_RXRPC: Protocol mismatch %u not %u\n",
-				    srx.transport.family,
-				    local->srx.transport.family);
+	/* Look up client connections by connection ID alone as their IDs are
+	 * unique for this machine.
+	 */
+	conn = idr_find(&rxrpc_client_conn_ids, sp->hdr.cid >> RXRPC_CIDSHIFT);
+	if (!conn || refcount_read(&conn->ref) == 0) {
+		_debug("no conn");
 		goto not_found;
 	}

-	k.epoch	= sp->hdr.epoch;
-	k.cid	= sp->hdr.cid & RXRPC_CIDMASK;
-
-	if (rxrpc_to_server(sp)) {
-		/* We need to look up service connections by the full protocol
-		 * parameter set.  We look up the peer first as an intermediate
-		 * step and then the connection from the peer's tree.
-		 */
-		peer = rxrpc_lookup_peer_rcu(local, &srx);
-		if (!peer)
-			goto not_found;
-		*_peer = peer;
-		conn = rxrpc_find_service_conn_rcu(peer, skb);
-		if (!conn || refcount_read(&conn->ref) == 0)
-			goto not_found;
-		_leave(" = %p", conn);
-		return conn;
-	} else {
-		/* Look up client connections by connection ID alone as their
-		 * IDs are unique for this machine.
-		 */
-		conn = idr_find(&rxrpc_client_conn_ids,
-				sp->hdr.cid >> RXRPC_CIDSHIFT);
-		if (!conn || refcount_read(&conn->ref) == 0) {
-			_debug("no conn");
-			goto not_found;
-		}
+	if (conn->proto.epoch != sp->hdr.epoch ||
+	    conn->local != local)
+		goto not_found;

-		if (conn->proto.epoch != k.epoch ||
-		    conn->params.local != local)
+	peer = conn->peer;
+	switch (srx->transport.family) {
+	case AF_INET:
+		if (peer->srx.transport.sin.sin_port !=
+		    srx->transport.sin.sin_port ||
+		    peer->srx.transport.sin.sin_addr.s_addr !=
+		    srx->transport.sin.sin_addr.s_addr)
 			goto not_found;
+		break;

-		peer = conn->params.peer;
-		switch (srx.transport.family) {
-		case AF_INET:
-			if (peer->srx.transport.sin.sin_port !=
-			    srx.transport.sin.sin_port ||
-			    peer->srx.transport.sin.sin_addr.s_addr !=
-			    srx.transport.sin.sin_addr.s_addr)
-				goto not_found;
-			break;
 #ifdef CONFIG_AF_RXRPC_IPV6
-		case AF_INET6:
-			if (peer->srx.transport.sin6.sin6_port !=
-			    srx.transport.sin6.sin6_port ||
-			    memcmp(&peer->srx.transport.sin6.sin6_addr,
-				   &srx.transport.sin6.sin6_addr,
-				   sizeof(struct in6_addr)) != 0)
-				goto not_found;
-			break;
+	case AF_INET6:
+		if (peer->srx.transport.sin6.sin6_port !=
+		    srx->transport.sin6.sin6_port ||
+		    memcmp(&peer->srx.transport.sin6.sin6_addr,
+			   &srx->transport.sin6.sin6_addr,
+			   sizeof(struct in6_addr)) != 0)
+			goto not_found;
+		break;
 #endif
-		default:
-			BUG();
-		}
-
-		_leave(" = %p", conn);
-		return conn;
+	default:
+		BUG();
 	}

+	_leave(" = %p", conn);
+	return conn;
+
 not_found:
 	_leave(" = NULL");
 	return NULL;
@@ -210,9 +181,9 @@ void rxrpc_disconnect_call(struct rxrpc_call *call)
 	call->peer->cong_ssthresh = call->cong_ssthresh;

 	if (!hlist_unhashed(&call->error_link)) {
-		spin_lock_bh(&call->peer->lock);
-		hlist_del_rcu(&call->error_link);
-		spin_unlock_bh(&call->peer->lock);
+		spin_lock(&call->peer->lock);
+		hlist_del_init(&call->error_link);
+		spin_unlock(&call->peer->lock);
 	}

 	if (rxrpc_is_client_call(call))
@@ -224,79 +195,45 @@ void rxrpc_disconnect_call(struct rxrpc_call *call)
 	set_bit(RXRPC_CALL_DISCONNECTED, &call->flags);

 	conn->idle_timestamp = jiffies;
-}
-
-/*
- * Kill off a connection.
- */
-void rxrpc_kill_connection(struct rxrpc_connection *conn)
-{
-	struct rxrpc_net *rxnet = conn->params.local->rxnet;
-
-	ASSERT(!rcu_access_pointer(conn->channels[0].call) &&
-	       !rcu_access_pointer(conn->channels[1].call) &&
-	       !rcu_access_pointer(conn->channels[2].call) &&
-	       !rcu_access_pointer(conn->channels[3].call));
-	ASSERT(list_empty(&conn->cache_link));
-
-	write_lock(&rxnet->conn_lock);
-	list_del_init(&conn->proc_link);
-	write_unlock(&rxnet->conn_lock);
-
-	/* Drain the Rx queue.  Note that even though we've unpublished, an
-	 * incoming packet could still be being added to our Rx queue, so we
-	 * will need to drain it again in the RCU cleanup handler.
-	 */
-	rxrpc_purge_queue(&conn->rx_queue);
-
-	/* Leave final destruction to RCU.  The connection processor work item
-	 * must carry a ref on the connection to prevent us getting here whilst
-	 * it is queued or running.
-	 */
-	call_rcu(&conn->rcu, rxrpc_destroy_connection);
+	if (atomic_dec_and_test(&conn->active))
+		rxrpc_set_service_reap_timer(conn->rxnet,
+					     jiffies + rxrpc_connection_expiry);
 }

 /*
  * Queue a connection's work processor, getting a ref to pass to the work
  * queue.
  */
-bool rxrpc_queue_conn(struct rxrpc_connection *conn)
+void rxrpc_queue_conn(struct rxrpc_connection *conn, enum rxrpc_conn_trace why)
 {
-	const void *here = __builtin_return_address(0);
-	int r;
-
-	if (!__refcount_inc_not_zero(&conn->ref, &r))
-		return false;
-	if (rxrpc_queue_work(&conn->processor))
-		trace_rxrpc_conn(conn->debug_id, rxrpc_conn_queued, r + 1, here);
-	else
-		rxrpc_put_connection(conn);
-	return true;
+	if (atomic_read(&conn->active) >= 0 &&
+	    rxrpc_queue_work(&conn->processor))
+		rxrpc_see_connection(conn, why);
 }

 /*
  * Note the re-emergence of a connection.
  */
-void rxrpc_see_connection(struct rxrpc_connection *conn)
+void rxrpc_see_connection(struct rxrpc_connection *conn,
+			  enum rxrpc_conn_trace why)
 {
-	const void *here = __builtin_return_address(0);
-
 	if (conn) {
-		int n = refcount_read(&conn->ref);
+		int r = refcount_read(&conn->ref);

-		trace_rxrpc_conn(conn->debug_id, rxrpc_conn_seen, n, here);
+		trace_rxrpc_conn(conn->debug_id, r, why);
 	}
 }

 /*
  * Get a ref on a connection.
  */
-struct rxrpc_connection *rxrpc_get_connection(struct rxrpc_connection *conn)
+struct rxrpc_connection *rxrpc_get_connection(struct rxrpc_connection *conn,
+					      enum rxrpc_conn_trace why)
 {
-	const void *here = __builtin_return_address(0);
 	int r;

 	__refcount_inc(&conn->ref, &r);
-	trace_rxrpc_conn(conn->debug_id, rxrpc_conn_got, r, here);
+	trace_rxrpc_conn(conn->debug_id, r + 1, why);
 	return conn;
 }

@@ -304,14 +241,14 @@ struct rxrpc_connection *rxrpc_get_connection(struct rxrpc_connection *conn)
  * Try to get a ref on a connection.
  */
 struct rxrpc_connection *
-rxrpc_get_connection_maybe(struct rxrpc_connection *conn)
+rxrpc_get_connection_maybe(struct rxrpc_connection *conn,
+			   enum rxrpc_conn_trace why)
 {
-	const void *here = __builtin_return_address(0);
 	int r;

 	if (conn) {
 		if (__refcount_inc_not_zero(&conn->ref, &r))
-			trace_rxrpc_conn(conn->debug_id, rxrpc_conn_got, r + 1, here);
+			trace_rxrpc_conn(conn->debug_id, r + 1, why);
 		else
 			conn = NULL;
 	}
@@ -329,49 +266,95 @@ static void rxrpc_set_service_reap_timer(struct rxrpc_net *rxnet,
 }

 /*
- * Release a service connection
- */
-void rxrpc_put_service_conn(struct rxrpc_connection *conn)
-{
-	const void *here = __builtin_return_address(0);
-	unsigned int debug_id = conn->debug_id;
-	int r;
-
-	__refcount_dec(&conn->ref, &r);
-	trace_rxrpc_conn(debug_id, rxrpc_conn_put_service, r - 1, here);
-	if (r - 1 == 1)
-		rxrpc_set_service_reap_timer(conn->params.local->rxnet,
-					     jiffies + rxrpc_connection_expiry);
-}
-
-/*
  * destroy a virtual connection
  */
-static void rxrpc_destroy_connection(struct rcu_head *rcu)
+static void rxrpc_rcu_free_connection(struct rcu_head *rcu)
 {
 	struct rxrpc_connection *conn =
 		container_of(rcu, struct rxrpc_connection, rcu);
+	struct rxrpc_net *rxnet = conn->rxnet;

 	_enter("{%d,u=%d}", conn->debug_id, refcount_read(&conn->ref));

-	ASSERTCMP(refcount_read(&conn->ref), ==, 0);
+	trace_rxrpc_conn(conn->debug_id, refcount_read(&conn->ref),
+			 rxrpc_conn_free);
+	kfree(conn);

-	_net("DESTROY CONN %d", conn->debug_id);
+	if (atomic_dec_and_test(&rxnet->nr_conns))
+		wake_up_var(&rxnet->nr_conns);
+}

-	del_timer_sync(&conn->timer);
+/*
+ * Clean up a dead connection.
+ */
+static void rxrpc_clean_up_connection(struct work_struct *work)
+{
+	struct rxrpc_connection *conn =
+		container_of(work, struct rxrpc_connection, destructor);
+	struct rxrpc_net *rxnet = conn->rxnet;
+
+	ASSERT(!rcu_access_pointer(conn->channels[0].call) &&
+	       !rcu_access_pointer(conn->channels[1].call) &&
+	       !rcu_access_pointer(conn->channels[2].call) &&
+	       !rcu_access_pointer(conn->channels[3].call));
+	ASSERT(list_empty(&conn->cache_link));
+
+	del_timer_sync(&conn->timer);
+	cancel_work_sync(&conn->processor); /* Processing may restart the timer */
+	del_timer_sync(&conn->timer);
+
+	write_lock(&rxnet->conn_lock);
+	list_del_init(&conn->proc_link);
+	write_unlock(&rxnet->conn_lock);
+
 	rxrpc_purge_queue(&conn->rx_queue);

+	rxrpc_kill_client_conn(conn);
+
 	conn->security->clear(conn);
-	key_put(conn->params.key);
-	rxrpc_put_bundle(conn->bundle);
-	rxrpc_put_peer(conn->params.peer);
+	key_put(conn->key);
+	rxrpc_put_bundle(conn->bundle, rxrpc_bundle_put_conn);
+	rxrpc_put_peer(conn->peer, rxrpc_peer_put_conn);
+	rxrpc_put_local(conn->local, rxrpc_local_put_kill_conn);

-	if (atomic_dec_and_test(&conn->params.local->rxnet->nr_conns))
-		wake_up_var(&conn->params.local->rxnet->nr_conns);
-	rxrpc_put_local(conn->params.local);
+	/* Drain the Rx queue.  Note that even though we've unpublished, an
+	 * incoming packet could still be being added to our Rx queue, so we
+	 * will need to drain it again in the RCU cleanup handler.
+	 */
+	rxrpc_purge_queue(&conn->rx_queue);

-	kfree(conn);
-	_leave("");
+	call_rcu(&conn->rcu, rxrpc_rcu_free_connection);
+}
+
+/*
+ * Drop a ref on a connection.
+ */
+void rxrpc_put_connection(struct rxrpc_connection *conn,
+			  enum rxrpc_conn_trace why)
+{
+	unsigned int debug_id;
+	bool dead;
+	int r;
+
+	if (!conn)
+		return;
+
+	debug_id = conn->debug_id;
+	dead = __refcount_dec_and_test(&conn->ref, &r);
+	trace_rxrpc_conn(debug_id, r - 1, why);
+	if (dead) {
+		del_timer(&conn->timer);
+		cancel_work(&conn->processor);
+
+		if (in_softirq() || work_busy(&conn->processor) ||
+		    timer_pending(&conn->timer))
+			/* Can't use the rxrpc workqueue as we need to cancel/flush
+			 * something that may be running/waiting there.
+			 */
+			schedule_work(&conn->destructor);
+		else
+			rxrpc_clean_up_connection(&conn->destructor);
+	}
 }
@@ -383,6 +366,7 @@ void rxrpc_service_connection_reaper(struct work_struct *work)
 	struct rxrpc_net *rxnet =
 		container_of(work, struct rxrpc_net, service_conn_reaper);
 	unsigned long expire_at, earliest, idle_timestamp, now;
+	int active;

 	LIST_HEAD(graveyard);
@@ -393,20 +377,20 @@ void rxrpc_service_connection_reaper(struct work_struct *work)
 	write_lock(&rxnet->conn_lock);
 	list_for_each_entry_safe(conn, _p, &rxnet->service_conns, link) {
-		ASSERTCMP(refcount_read(&conn->ref), >, 0);
-		if (likely(refcount_read(&conn->ref) > 1))
+		ASSERTCMP(atomic_read(&conn->active), >=, 0);
+		if (likely(atomic_read(&conn->active) > 0))
 			continue;
 		if (conn->state == RXRPC_CONN_SERVICE_PREALLOC)
 			continue;

-		if (rxnet->live && !conn->params.local->dead) {
+		if (rxnet->live && !conn->local->dead) {
 			idle_timestamp = READ_ONCE(conn->idle_timestamp);
 			expire_at = idle_timestamp + rxrpc_connection_expiry * HZ;
-			if (conn->params.local->service_closed)
+			if (conn->local->service_closed)
 				expire_at = idle_timestamp + rxrpc_closed_conn_expiry * HZ;

-			_debug("reap CONN %d { u=%d,t=%ld }",
-			       conn->debug_id, refcount_read(&conn->ref),
+			_debug("reap CONN %d { a=%d,t=%ld }",
+			       conn->debug_id, atomic_read(&conn->active),
 			       (long)expire_at - (long)now);

 			if (time_before(now, expire_at)) {
@@ -416,12 +400,13 @@ void rxrpc_service_connection_reaper(struct work_struct *work)
 			}
 		}

-		/* The usage count sits at 1 whilst the object is unused on the
-		 * list; we reduce that to 0 to make the object unavailable.
+		/* The activity count sits at 0 whilst the conn is unused on
+		 * the list; we reduce that to -1 to make the conn unavailable.
 		 */
-		if (!refcount_dec_if_one(&conn->ref))
+		active = 0;
+		if (!atomic_try_cmpxchg(&conn->active, &active, -1))
 			continue;
-		trace_rxrpc_conn(conn->debug_id, rxrpc_conn_reap_service, 0, NULL);
+		rxrpc_see_connection(conn, rxrpc_conn_see_reap_service);

 		if (rxrpc_conn_is_client(conn))
 			BUG();
@@ -443,8 +428,8 @@ void rxrpc_service_connection_reaper(struct work_struct *work)
 				  link);
 		list_del_init(&conn->link);

-		ASSERTCMP(refcount_read(&conn->ref), ==, 0);
-		rxrpc_kill_connection(conn);
+		ASSERTCMP(atomic_read(&conn->active), ==, -1);
+		rxrpc_put_connection(conn, rxrpc_conn_put_service_reaped);
 	}

 	_leave("");
......
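The reaper above claims an idle connection by swinging its activity count from 0 to -1 with a compare-and-swap, so users (which only proceed while the count is >= 0) can never revive a connection once it has been claimed. A minimal userspace sketch of that claim protocol, using C11 atomics rather than the kernel's `atomic_t` API (function names here are illustrative, not from the patch):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Claim an idle object for reaping: succeeds only if nothing is using it
 * (active == 0).  A successful claim parks the counter at -1 so that users,
 * which require active >= 0, back off. */
static bool reap_claim(atomic_int *active)
{
	int expected = 0;
	return atomic_compare_exchange_strong(active, &expected, -1);
}

/* A user takes the object only while it has not been claimed. */
static bool user_get(atomic_int *active)
{
	int v = atomic_load(active);
	while (v >= 0)
		if (atomic_compare_exchange_weak(active, &v, v + 1))
			return true;	/* bumped v -> v + 1 */
	return false;			/* claimed by the reaper */
}
```

Either the reaper or a user wins the race, never both, which is what lets the reaper drop the list's ref without worrying about a concurrent `rxrpc_queue_conn()`.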
@@ -73,7 +73,7 @@ static void rxrpc_publish_service_conn(struct rxrpc_peer *peer,
 	struct rxrpc_conn_proto k = conn->proto;
 	struct rb_node **pp, *parent;

-	write_seqlock_bh(&peer->service_conn_lock);
+	write_seqlock(&peer->service_conn_lock);

 	pp = &peer->service_conns.rb_node;
 	parent = NULL;
@@ -94,14 +94,14 @@ static void rxrpc_publish_service_conn(struct rxrpc_peer *peer,
 	rb_insert_color(&conn->service_node, &peer->service_conns);

conn_published:
 	set_bit(RXRPC_CONN_IN_SERVICE_CONNS, &conn->flags);
-	write_sequnlock_bh(&peer->service_conn_lock);
+	write_sequnlock(&peer->service_conn_lock);
 	_leave(" = %d [new]", conn->debug_id);
 	return;

found_extant_conn:
 	if (refcount_read(&cursor->ref) == 0)
 		goto replace_old_connection;
-	write_sequnlock_bh(&peer->service_conn_lock);
+	write_sequnlock(&peer->service_conn_lock);
 	/* We should not be able to get here.  rxrpc_incoming_connection() is
 	 * called in a non-reentrant context, so there can't be a race to
 	 * insert a new connection.
@@ -125,7 +125,7 @@ static void rxrpc_publish_service_conn(struct rxrpc_peer *peer,
 struct rxrpc_connection *rxrpc_prealloc_service_connection(struct rxrpc_net *rxnet,
							   gfp_t gfp)
 {
-	struct rxrpc_connection *conn = rxrpc_alloc_connection(gfp);
+	struct rxrpc_connection *conn = rxrpc_alloc_connection(rxnet, gfp);

 	if (conn) {
 		/* We maintain an extra ref on the connection whilst it is on
@@ -133,7 +133,8 @@ struct rxrpc_connection *rxrpc_prealloc_service_connection(struct rxrpc_net *rxn
 		 */
 		conn->state = RXRPC_CONN_SERVICE_PREALLOC;
 		refcount_set(&conn->ref, 2);
-		conn->bundle = rxrpc_get_bundle(&rxrpc_service_dummy_bundle);
+		conn->bundle = rxrpc_get_bundle(&rxrpc_service_dummy_bundle,
+						rxrpc_bundle_get_service_conn);

 		atomic_inc(&rxnet->nr_conns);
 		write_lock(&rxnet->conn_lock);
@@ -141,9 +142,7 @@ struct rxrpc_connection *rxrpc_prealloc_service_connection(struct rxrpc_net *rxn
 		list_add_tail(&conn->proc_link, &rxnet->conn_proc_list);
 		write_unlock(&rxnet->conn_lock);

-		trace_rxrpc_conn(conn->debug_id, rxrpc_conn_new_service,
-				 refcount_read(&conn->ref),
-				 __builtin_return_address(0));
+		rxrpc_see_connection(conn, rxrpc_conn_new_service);
 	}

 	return conn;
@@ -164,7 +163,7 @@ void rxrpc_new_incoming_connection(struct rxrpc_sock *rx,
 	conn->proto.epoch	= sp->hdr.epoch;
 	conn->proto.cid		= sp->hdr.cid & RXRPC_CIDMASK;
-	conn->params.service_id	= sp->hdr.serviceId;
+	conn->orig_service_id	= sp->hdr.serviceId;
 	conn->service_id	= sp->hdr.serviceId;
 	conn->security_ix	= sp->hdr.securityIndex;
 	conn->out_clientflag	= 0;
@@ -182,10 +181,10 @@ void rxrpc_new_incoming_connection(struct rxrpc_sock *rx,
 	    conn->service_id == rx->service_upgrade.from)
 		conn->service_id = rx->service_upgrade.to;

-	/* Make the connection a target for incoming packets. */
-	rxrpc_publish_service_conn(conn->params.peer, conn);
+	atomic_set(&conn->active, 1);

-	_net("CONNECTION new %d {%x}", conn->debug_id, conn->proto.cid);
+	/* Make the connection a target for incoming packets. */
+	rxrpc_publish_service_conn(conn->peer, conn);
 }

 /*
@@ -194,10 +193,10 @@ void rxrpc_new_incoming_connection(struct rxrpc_sock *rx,
  */
 void rxrpc_unpublish_service_conn(struct rxrpc_connection *conn)
 {
-	struct rxrpc_peer *peer = conn->params.peer;
+	struct rxrpc_peer *peer = conn->peer;

-	write_seqlock_bh(&peer->service_conn_lock);
+	write_seqlock(&peer->service_conn_lock);
 	if (test_and_clear_bit(RXRPC_CONN_IN_SERVICE_CONNS, &conn->flags))
 		rb_erase(&conn->service_node, &peer->service_conns);
-	write_sequnlock_bh(&peer->service_conn_lock);
+	write_sequnlock(&peer->service_conn_lock);
 }
(This diff has been collapsed.)
(This diff has been collapsed.)
@@ -513,7 +513,7 @@ int rxrpc_get_server_data_key(struct rxrpc_connection *conn,
 	if (ret < 0)
 		goto error;

-	conn->params.key = key;
+	conn->key = key;
 	_leave(" = 0 [%d]", key_serial(key));
 	return 0;

@@ -602,7 +602,8 @@ static long rxrpc_read(const struct key *key,
 		}

 		_debug("token[%u]: toksize=%u", ntoks, toksize);
-		ASSERTCMP(toksize, <=, AFSTOKEN_LENGTH_MAX);
+		if (WARN_ON(toksize > AFSTOKEN_LENGTH_MAX))
+			return -EIO;

 		toksizes[ntoks++] = toksize;
 		size += toksize + 4; /* each token has a length word */
@@ -679,8 +680,9 @@ static long rxrpc_read(const struct key *key,
 			return -ENOPKG;
 		}

-		ASSERTCMP((unsigned long)xdr - (unsigned long)oldxdr, ==,
-			  toksize);
+		if (WARN_ON((unsigned long)xdr - (unsigned long)oldxdr !=
+			    toksize))
+			return -EIO;
 	}

#undef ENCODE_STR
@@ -688,8 +690,10 @@ static long rxrpc_read(const struct key *key,
#undef ENCODE64
#undef ENCODE

-	ASSERTCMP(tok, ==, ntoks);
-	ASSERTCMP((char __user *) xdr - buffer, ==, size);
+	if (WARN_ON(tok != ntoks))
+		return -EIO;
+	if (WARN_ON((unsigned long)xdr - (unsigned long)buffer != size))
+		return -EIO;
 	_leave(" = %zu", size);
 	return size;
 }
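The key.c hunks above replace fatal ASSERTCMP() checks with WARN_ON() plus an -EIO return, so a malformed key only fails the rxrpc_read() call instead of taking the machine down. The shape of that conversion, sketched in plain C (EIO comes from errno.h standing in for the kernel's -EIO, and the AFSTOKEN_LENGTH_MAX value is assumed here, not taken from the patch):

```c
#include <errno.h>
#include <stddef.h>

#define AFSTOKEN_LENGTH_MAX 16384	/* assumed limit for this sketch */

/* Old style: ASSERTCMP(toksize, <=, AFSTOKEN_LENGTH_MAX) would crash here.
 * New style: flag the bad size and bail out with a recoverable error. */
static long check_token_size(size_t toksize)
{
	if (toksize > AFSTOKEN_LENGTH_MAX)
		return -EIO;	/* the kernel also emits a WARN_ON() backtrace */
	return 0;
}
```

The caller propagates the error to userspace, which matches the series' broader goal of not trusting BUG()-style assertions on data parsed from the wire or from keys.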
@@ -21,9 +21,9 @@ static const char rxrpc_version_string[65] = "linux-" UTS_RELEASE " AF_RXRPC";
 /*
  * Reply to a version request
  */
-static void rxrpc_send_version_request(struct rxrpc_local *local,
-				       struct rxrpc_host_header *hdr,
-				       struct sk_buff *skb)
+void rxrpc_send_version_request(struct rxrpc_local *local,
+				struct rxrpc_host_header *hdr,
+				struct sk_buff *skb)
 {
 	struct rxrpc_wire_header whdr;
 	struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
@@ -63,8 +63,6 @@ static void rxrpc_send_version_request(struct rxrpc_local *local,

 	len = iov[0].iov_len + iov[1].iov_len;

-	_proto("Tx VERSION (reply)");
-
 	ret = kernel_sendmsg(local->socket, &msg, iov, 2, len);
 	if (ret < 0)
 		trace_rxrpc_tx_fail(local->debug_id, 0, ret,
@@ -75,41 +73,3 @@ static void rxrpc_send_version_request(struct rxrpc_local *local,

 	_leave("");
 }
-
-/*
- * Process event packets targeted at a local endpoint.
- */
-void rxrpc_process_local_events(struct rxrpc_local *local)
-{
-	struct sk_buff *skb;
-	char v;
-
-	_enter("");
-
-	skb = skb_dequeue(&local->event_queue);
-	if (skb) {
-		struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
-
-		rxrpc_see_skb(skb, rxrpc_skb_seen);
-		_debug("{%d},{%u}", local->debug_id, sp->hdr.type);
-
-		switch (sp->hdr.type) {
-		case RXRPC_PACKET_TYPE_VERSION:
-			if (skb_copy_bits(skb, sizeof(struct rxrpc_wire_header),
-					  &v, 1) < 0)
-				return;
-			_proto("Rx VERSION { %02x }", v);
-			if (v == 0)
-				rxrpc_send_version_request(local, &sp->hdr, skb);
-			break;
-
-		default:
-			/* Just ignore anything we don't understand */
-			break;
-		}
-
-		rxrpc_free_skb(skb, rxrpc_skb_freed);
-	}
-
-	_leave("");
-}
(This diff has been collapsed.)
@@ -65,7 +65,7 @@ static __net_init int rxrpc_init_net(struct net *net)
 	atomic_set(&rxnet->nr_client_conns, 0);
 	rxnet->kill_all_client_conns = false;
 	spin_lock_init(&rxnet->client_conn_cache_lock);
-	spin_lock_init(&rxnet->client_conn_discard_lock);
+	mutex_init(&rxnet->client_conn_discard_lock);
 	INIT_LIST_HEAD(&rxnet->idle_client_conns);
 	INIT_WORK(&rxnet->client_conn_reaper,
 		  rxrpc_discard_expired_client_conns);
......
@@ -49,8 +49,6 @@ static void rxrpc_call_seq_stop(struct seq_file *seq, void *v)
 static int rxrpc_call_seq_show(struct seq_file *seq, void *v)
 {
 	struct rxrpc_local *local;
-	struct rxrpc_sock *rx;
-	struct rxrpc_peer *peer;
 	struct rxrpc_call *call;
 	struct rxrpc_net *rxnet = rxrpc_net(seq_file_net(seq));
 	unsigned long timeout = 0;
@@ -63,28 +61,19 @@ static int rxrpc_call_seq_show(struct seq_file *seq, void *v)
 			 "Proto Local "
 			 " Remote "
 			 " SvID ConnID CallID End Use State Abort "
-			 " DebugId TxSeq TW RxSeq RW RxSerial RxTimo\n");
+			 " DebugId TxSeq TW RxSeq RW RxSerial CW RxTimo\n");
 		return 0;
 	}
 
 	call = list_entry(v, struct rxrpc_call, link);
 
-	rx = rcu_dereference(call->socket);
-	if (rx) {
-		local = READ_ONCE(rx->local);
-		if (local)
-			sprintf(lbuff, "%pISpc", &local->srx.transport);
-		else
-			strcpy(lbuff, "no_local");
-	} else {
-		strcpy(lbuff, "no_socket");
-	}
-
-	peer = call->peer;
-	if (peer)
-		sprintf(rbuff, "%pISpc", &peer->srx.transport);
+	local = call->local;
+	if (local)
+		sprintf(lbuff, "%pISpc", &local->srx.transport);
 	else
-		strcpy(rbuff, "no_connection");
+		strcpy(lbuff, "no_local");
+
+	sprintf(rbuff, "%pISpc", &call->dest_srx.transport);
 
 	if (call->state != RXRPC_CALL_SERVER_PREALLOC) {
 		timeout = READ_ONCE(call->expect_rx_by);
@@ -95,10 +84,10 @@ static int rxrpc_call_seq_show(struct seq_file *seq, void *v)
 	wtmp = atomic64_read_acquire(&call->ackr_window);
 
 	seq_printf(seq,
 		   "UDP %-47.47s %-47.47s %4x %08x %08x %s %3u"
-		   " %-8.8s %08x %08x %08x %02x %08x %02x %08x %06lx\n",
+		   " %-8.8s %08x %08x %08x %02x %08x %02x %08x %02x %06lx\n",
 		   lbuff,
 		   rbuff,
-		   call->service_id,
+		   call->dest_srx.srx_service,
 		   call->cid,
 		   call->call_id,
 		   rxrpc_is_service_call(call) ? "Svc" : "Clt",
@@ -109,6 +98,7 @@ static int rxrpc_call_seq_show(struct seq_file *seq, void *v)
 		   acks_hard_ack, READ_ONCE(call->tx_top) - acks_hard_ack,
 		   lower_32_bits(wtmp), upper_32_bits(wtmp) - lower_32_bits(wtmp),
 		   call->rx_serial,
+		   call->cong_cwnd,
 		   timeout);
 
 	return 0;
@@ -159,7 +149,7 @@ static int rxrpc_connection_seq_show(struct seq_file *seq, void *v)
 		seq_puts(seq,
 			 "Proto Local "
 			 " Remote "
-			 " SvID ConnID End Use State Key "
+			 " SvID ConnID End Ref Act State Key "
 			 " Serial ISerial CallId0 CallId1 CallId2 CallId3\n"
 			 );
 		return 0;
@@ -172,12 +162,12 @@ static int rxrpc_connection_seq_show(struct seq_file *seq, void *v)
 		goto print;
 	}
 
-	sprintf(lbuff, "%pISpc", &conn->params.local->srx.transport);
-	sprintf(rbuff, "%pISpc", &conn->params.peer->srx.transport);
+	sprintf(lbuff, "%pISpc", &conn->local->srx.transport);
+	sprintf(rbuff, "%pISpc", &conn->peer->srx.transport);
 
 print:
 	seq_printf(seq,
-		   "UDP %-47.47s %-47.47s %4x %08x %s %3u"
+		   "UDP %-47.47s %-47.47s %4x %08x %s %3u %3d"
 		   " %s %08x %08x %08x %08x %08x %08x %08x\n",
 		   lbuff,
 		   rbuff,
@@ -185,8 +175,9 @@ static int rxrpc_connection_seq_show(struct seq_file *seq, void *v)
 		   conn->proto.cid,
 		   rxrpc_conn_is_service(conn) ? "Svc" : "Clt",
 		   refcount_read(&conn->ref),
+		   atomic_read(&conn->active),
 		   rxrpc_conn_states[conn->state],
-		   key_serial(conn->params.key),
+		   key_serial(conn->key),
 		   atomic_read(&conn->serial),
 		   conn->hi_serial,
 		   conn->channels[0].call_id,
@@ -341,7 +332,7 @@ static int rxrpc_local_seq_show(struct seq_file *seq, void *v)
 	if (v == SEQ_START_TOKEN) {
 		seq_puts(seq,
 			 "Proto Local "
-			 " Use Act\n");
+			 " Use Act RxQ\n");
 		return 0;
 	}
@@ -350,10 +341,11 @@ static int rxrpc_local_seq_show(struct seq_file *seq, void *v)
 	sprintf(lbuff, "%pISpc", &local->srx.transport);
 
 	seq_printf(seq,
-		   "UDP %-47.47s %3u %3u\n",
+		   "UDP %-47.47s %3u %3u %3u\n",
 		   lbuff,
 		   refcount_read(&local->ref),
-		   atomic_read(&local->active_users));
+		   atomic_read(&local->active_users),
+		   local->rx_queue.qlen);
 
 	return 0;
 }
@@ -407,13 +399,16 @@ int rxrpc_stats_show(struct seq_file *seq, void *v)
 	struct rxrpc_net *rxnet = rxrpc_net(seq_file_single_net(seq));
 
 	seq_printf(seq,
-		   "Data : send=%u sendf=%u\n",
+		   "Data : send=%u sendf=%u fail=%u\n",
 		   atomic_read(&rxnet->stat_tx_data_send),
-		   atomic_read(&rxnet->stat_tx_data_send_frag));
+		   atomic_read(&rxnet->stat_tx_data_send_frag),
+		   atomic_read(&rxnet->stat_tx_data_send_fail));
 	seq_printf(seq,
-		   "Data-Tx : nr=%u retrans=%u\n",
+		   "Data-Tx : nr=%u retrans=%u uf=%u cwr=%u\n",
 		   atomic_read(&rxnet->stat_tx_data),
-		   atomic_read(&rxnet->stat_tx_data_retrans));
+		   atomic_read(&rxnet->stat_tx_data_retrans),
+		   atomic_read(&rxnet->stat_tx_data_underflow),
+		   atomic_read(&rxnet->stat_tx_data_cwnd_reset));
 	seq_printf(seq,
 		   "Data-Rx : nr=%u reqack=%u jumbo=%u\n",
 		   atomic_read(&rxnet->stat_rx_data),
@@ -462,6 +457,9 @@ int rxrpc_stats_show(struct seq_file *seq, void *v)
 		   "Buffers : txb=%u rxb=%u\n",
 		   atomic_read(&rxrpc_nr_txbuf),
 		   atomic_read(&rxrpc_n_rx_skbs));
+	seq_printf(seq,
+		   "IO-thread: loops=%u\n",
+		   atomic_read(&rxnet->stat_io_loop));
 	return 0;
 }
@@ -478,8 +476,11 @@ int rxrpc_stats_clear(struct file *file, char *buf, size_t size)
 	atomic_set(&rxnet->stat_tx_data, 0);
 	atomic_set(&rxnet->stat_tx_data_retrans, 0);
+	atomic_set(&rxnet->stat_tx_data_underflow, 0);
+	atomic_set(&rxnet->stat_tx_data_cwnd_reset, 0);
 	atomic_set(&rxnet->stat_tx_data_send, 0);
 	atomic_set(&rxnet->stat_tx_data_send_frag, 0);
+	atomic_set(&rxnet->stat_tx_data_send_fail, 0);
 	atomic_set(&rxnet->stat_rx_data, 0);
 	atomic_set(&rxnet->stat_rx_data_reqack, 0);
 	atomic_set(&rxnet->stat_rx_data_jumbo, 0);
@@ -491,5 +492,7 @@ int rxrpc_stats_clear(struct file *file, char *buf, size_t size)
 	memset(&rxnet->stat_rx_acks, 0, sizeof(rxnet->stat_rx_acks));
 	memset(&rxnet->stat_why_req_ack, 0, sizeof(rxnet->stat_why_req_ack));
 
+	atomic_set(&rxnet->stat_io_loop, 0);
 	return size;
 }