Commit 248f219c authored by David Howells

rxrpc: Rewrite the data and ack handling code

Rewrite the data and ack handling code such that:

 (1) Parsing of received ACK and ABORT packets and the distribution and
     filing of DATA packets happen entirely within the data_ready context
     called from the UDP socket.  This allows us to process and discard ACK
     and ABORT packets much more quickly (they're no longer stashed on a
     queue for a background thread to process).

 (2) We avoid calling skb_clone(), pskb_pull() and pskb_trim().  We instead
     keep track of the offset and length of the content of each packet in
     the sk_buff metadata.  This means we don't do any allocation in the
     receive path.

 (3) Jumbo DATA packet parsing is now done in data_ready context.  Rather
     than cloning the packet once for each subpacket and pulling/trimming
     it, we file the packet multiple times with an annotation for each
     indicating which subpacket is there.  From that we can directly
     calculate the offset and length (see the sketch after this list).

 (4) A call's receive queue can be accessed without taking locks (memory
     barriers do have to be used, though).

 (5) Incoming calls are set up from preallocated resources and immediately
     made live.  They can then have packets queued upon them and ACKs
     generated.  If insufficient resources exist, DATA packet #1 is given a
     BUSY reply and other DATA packets are discarded.

 (6) sk_buffs no longer take a ref on their parent call.
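
To illustrate point (3) above, here is a minimal sketch, not taken from the
patch itself, of how a receive-phase annotation can be turned back into a
subpacket offset and length.  The RXRPC_RX_ANNO_* bits and sp->offset follow
the definitions added by this patch; the helper name and the RXRPC_JUMBO_*
size constants are assumptions made for the example.

    /* Assumed segment geometry: 1412 bytes of data per jumbo segment plus
     * a 4-byte jumbo header between segments.
     */
    #define RXRPC_JUMBO_DATALEN   1412
    #define RXRPC_JUMBO_HDRLEN    4
    #define RXRPC_JUMBO_SUBPKTLEN (RXRPC_JUMBO_DATALEN + RXRPC_JUMBO_HDRLEN)

    /* Work out the extent of one (sub)packet from its ring annotation.
     * An annotation of 0 means the slot holds a whole, non-jumbo packet.
     */
    static void rxrpc_subpacket_extent(struct sk_buff *skb, u8 annotation,
                                       unsigned int *_offset,
                                       unsigned int *_len)
    {
            struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
            unsigned int subpkt;

            if (!(annotation & RXRPC_RX_ANNO_JUMBO)) {
                    *_offset = sp->offset;               /* whole packet */
                    *_len = skb->len - sp->offset;
                    return;
            }

            subpkt = (annotation & RXRPC_RX_ANNO_JUMBO) - 1;
            *_offset = sp->offset + subpkt * RXRPC_JUMBO_SUBPKTLEN;
            if (annotation & RXRPC_RX_ANNO_JLAST)
                    *_len = skb->len - *_offset;         /* tail takes the rest */
            else
                    *_len = RXRPC_JUMBO_DATALEN;
    }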

To make this work, the following changes are made:

 (1) Each call's receive buffer is now a circular buffer of sk_buff
     pointers (rxtx_buffer) rather than a number of sk_buff_heads spread
     between the call and the socket.  This permits each sk_buff to be in
     the buffer multiple times.  The receive buffer is reused for the
     transmit buffer.

 (2) A circular buffer of annotations (rxtx_annotations) is kept parallel
     to the data buffer.  Transmission phase annotations indicate whether a
     buffered packet has been ACK'd or not and whether it needs
     retransmission.

     Receive phase annotations indicate whether a slot holds a whole packet
     or a jumbo subpacket and, if the latter, which subpacket.  They also
     note whether the packet has been decrypted in place.

 (3) DATA packet window tracking is much simplified.  Each phase has just
     two numbers representing the window (rx_hard_ack/rx_top and
     tx_hard_ack/tx_top).

     The hard_ack number is the sequence number before the base of the
     window, representing the last packet the other side says it has consumed.
     hard_ack starts from 0 and the first packet is sequence number 1.

     The top number is the sequence number of the highest-numbered packet
     residing in the buffer.  Packets between hard_ack+1 and top are
     soft-ACK'd to indicate they've been received, but not yet consumed.

     Four helpers, before(), before_eq(), after() and after_eq(), are added
     to compare sequence numbers within the window.  This allows the top of
     the window to wrap when the hard-ack sequence number gets close to the
     limit; see the sketch after this list.

     Two flags, RXRPC_CALL_RX_LAST and RXRPC_CALL_TX_LAST, are also added
     to indicate when rx_top and tx_top point at the packets with the
     LAST_PACKET bit set, indicating the end of the phase.

 (4) Calls are queued on the socket 'receive queue' rather than packets.
     This means that we don't have to invent dummy packets to queue to
     indicate abnormal/terminal states and we don't have to keep metadata
     packets (such as ABORTs) around.

 (5) The offset and length of a (sub)packet's content are now passed to
     the verify_packet security op.  This is currently expected to decrypt
     the packet in place and validate it.

     However, there's now nowhere to store the revised offset and length of
     the actual data within the decrypted blob (there may be a header and
     padding to skip) because an sk_buff may represent multiple packets, so
     a locate_data security op is added to retrieve these details from the
     sk_buff content when needed.

 (6) recvmsg() now has to handle jumbo subpackets, where each subpacket is
     individually secured and needs to be individually decrypted.  The code
     to do this is broken out into rxrpc_recvmsg_data() and shared with the
     kernel API.  It now iterates over the call's receive buffer rather
     than walking the socket receive queue.
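
To illustrate the window arithmetic in point (3) above, the sketch below
shows the wrap-safe sequence comparisons (mirroring the before()/after()
helpers this patch adds to ar-internal.h) and a hypothetical walk over the
received-but-unconsumed part of the ring; the walk itself is an example, not
code from the patch.

    static inline bool before(u32 seq1, u32 seq2)
    {
            return (s32)(seq1 - seq2) < 0;  /* signed difference survives wrap */
    }

    static inline bool after(u32 seq1, u32 seq2)
    {
            return (s32)(seq1 - seq2) > 0;
    }

    /* Visit the soft-ACK'd packets, i.e. sequence numbers
     * rx_hard_ack + 1 .. rx_top inclusive.
     */
    static void rxrpc_walk_rx_window(struct rxrpc_call *call)
    {
            rxrpc_seq_t seq;

            for (seq = call->rx_hard_ack + 1; !after(seq, call->rx_top); seq++) {
                    int ix = seq & RXRPC_RXTX_BUFF_MASK;
                    struct sk_buff *skb = call->rxtx_buffer[ix];
                    u8 annotation = call->rxtx_annotations[ix];

                    if (!skb)
                            continue;       /* hole: not yet received */
                    /* ... decrypt/copy out according to the annotation ... */
                    (void)annotation;
            }
    }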

Additional changes:

 (1) The timers are condensed to a single timer that is set for the soonest
     of three timeouts (delayed ACK generation, DATA retransmission and
     call lifespan); see the sketch after this list.

 (2) Transmission of ACK and ABORT packets is effected immediately from
     process-context socket ops/kernel API calls that cause them instead of
     them being punted off to a background work item.  The data_ready
     handler still has to defer to the background, though.

 (3) A shutdown op is added to the AF_RXRPC socket so that the AFS
     filesystem can shut down the socket and flush its own work items
     before closing the socket to deal with any in-progress service calls.
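
As a sketch of point (1) above, and not necessarily the exact code in this
patch, the single timer could be armed for whichever of the three deadlines
falls first.  The ack_at, resend_at, expire_at and timer fields are the ones
added to struct rxrpc_call; the helper name is assumed.

    static void rxrpc_set_timer(struct rxrpc_call *call)
    {
            unsigned long t = call->expire_at;      /* call lifespan is the backstop */

            if (time_before(call->ack_at, t))
                    t = call->ack_at;               /* delayed ACK due sooner */
            if (time_before(call->resend_at, t))
                    t = call->resend_at;            /* DATA retransmission due sooner */

            if (!timer_pending(&call->timer) ||
                time_before(t, call->timer.expires))
                    mod_timer(&call->timer, t);
    }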

Future additional changes that will need to be considered:

 (1) Make sure that a call doesn't hog the front of the queue by receiving
     data from the network as fast as userspace is consuming it to the
     exclusion of other calls.

 (2) Transmit delayed ACKs from within recvmsg() when we've consumed
     sufficiently more packets to avoid the background work item needing to
     run.
Signed-off-by: David Howells <dhowells@redhat.com>
Parent 00e90712
......@@ -55,10 +55,8 @@ static const struct afs_call_type afs_RXCMxxxx = {
.abort_to_error = afs_abort_to_error,
};
static void afs_collect_incoming_call(struct work_struct *);
static void afs_charge_preallocation(struct work_struct *);
static DECLARE_WORK(afs_collect_incoming_call_work, afs_collect_incoming_call);
static DECLARE_WORK(afs_charge_preallocation_work, afs_charge_preallocation);
static int afs_wait_atomic_t(atomic_t *p)
......@@ -143,6 +141,8 @@ void afs_close_socket(void)
TASK_UNINTERRUPTIBLE);
_debug("no outstanding calls");
flush_workqueue(afs_async_calls);
kernel_sock_shutdown(afs_socket, SHUT_RDWR);
flush_workqueue(afs_async_calls);
sock_release(afs_socket);
......@@ -602,51 +602,6 @@ static void afs_process_async_call(struct work_struct *work)
_leave("");
}
/*
* accept the backlog of incoming calls
*/
static void afs_collect_incoming_call(struct work_struct *work)
{
struct rxrpc_call *rxcall;
struct afs_call *call = NULL;
_enter("");
do {
if (!call) {
call = kzalloc(sizeof(struct afs_call), GFP_KERNEL);
if (!call) {
rxrpc_kernel_reject_call(afs_socket);
return;
}
INIT_WORK(&call->async_work, afs_process_async_call);
call->wait_mode = &afs_async_incoming_call;
call->type = &afs_RXCMxxxx;
init_waitqueue_head(&call->waitq);
call->state = AFS_CALL_AWAIT_OP_ID;
_debug("CALL %p{%s} [%d]",
call, call->type->name,
atomic_read(&afs_outstanding_calls));
atomic_inc(&afs_outstanding_calls);
}
rxcall = rxrpc_kernel_accept_call(afs_socket,
(unsigned long)call,
afs_wake_up_async_call);
if (!IS_ERR(rxcall)) {
call->rxcall = rxcall;
call->need_attention = true;
queue_work(afs_async_calls, &call->async_work);
call = NULL;
}
} while (!call);
if (call)
afs_free_call(call);
}
static void afs_rx_attach(struct rxrpc_call *rxcall, unsigned long user_call_ID)
{
struct afs_call *call = (struct afs_call *)user_call_ID;
......@@ -704,7 +659,7 @@ static void afs_rx_discard_new_call(struct rxrpc_call *rxcall,
static void afs_rx_new_call(struct sock *sk, struct rxrpc_call *rxcall,
unsigned long user_call_ID)
{
queue_work(afs_wq, &afs_collect_incoming_call_work);
atomic_inc(&afs_outstanding_calls);
queue_work(afs_wq, &afs_charge_preallocation_work);
}
......
......@@ -42,9 +42,6 @@ int rxrpc_kernel_recv_data(struct socket *, struct rxrpc_call *,
void rxrpc_kernel_abort_call(struct socket *, struct rxrpc_call *,
u32, int, const char *);
void rxrpc_kernel_end_call(struct socket *, struct rxrpc_call *);
struct rxrpc_call *rxrpc_kernel_accept_call(struct socket *, unsigned long,
rxrpc_notify_rx_t);
int rxrpc_kernel_reject_call(struct socket *);
void rxrpc_kernel_get_peer(struct socket *, struct rxrpc_call *,
struct sockaddr_rxrpc *);
int rxrpc_kernel_charge_accept(struct socket *, rxrpc_notify_rx_t,
......
......@@ -133,6 +133,13 @@ struct rxrpc_ackpacket {
} __packed;
/* Some ACKs refer to specific packets and some are general and can be updated. */
#define RXRPC_ACK_UPDATEABLE ((1 << RXRPC_ACK_REQUESTED) | \
(1 << RXRPC_ACK_PING_RESPONSE) | \
(1 << RXRPC_ACK_DELAY) | \
(1 << RXRPC_ACK_IDLE))
/*
* ACK packets can have a further piece of information tagged on the end
*/
......
......@@ -155,7 +155,7 @@ static int rxrpc_bind(struct socket *sock, struct sockaddr *saddr, int len)
}
if (rx->srx.srx_service) {
write_lock_bh(&local->services_lock);
write_lock(&local->services_lock);
hlist_for_each_entry(prx, &local->services, listen_link) {
if (prx->srx.srx_service == rx->srx.srx_service)
goto service_in_use;
......@@ -163,7 +163,7 @@ static int rxrpc_bind(struct socket *sock, struct sockaddr *saddr, int len)
rx->local = local;
hlist_add_head_rcu(&rx->listen_link, &local->services);
write_unlock_bh(&local->services_lock);
write_unlock(&local->services_lock);
rx->sk.sk_state = RXRPC_SERVER_BOUND;
} else {
......@@ -176,7 +176,7 @@ static int rxrpc_bind(struct socket *sock, struct sockaddr *saddr, int len)
return 0;
service_in_use:
write_unlock_bh(&local->services_lock);
write_unlock(&local->services_lock);
rxrpc_put_local(local);
ret = -EADDRINUSE;
error_unlock:
......@@ -515,15 +515,16 @@ static int rxrpc_setsockopt(struct socket *sock, int level, int optname,
static unsigned int rxrpc_poll(struct file *file, struct socket *sock,
poll_table *wait)
{
unsigned int mask;
struct sock *sk = sock->sk;
struct rxrpc_sock *rx = rxrpc_sk(sk);
unsigned int mask;
sock_poll_wait(file, sk_sleep(sk), wait);
mask = 0;
/* the socket is readable if there are any messages waiting on the Rx
* queue */
if (!skb_queue_empty(&sk->sk_receive_queue))
if (!list_empty(&rx->recvmsg_q))
mask |= POLLIN | POLLRDNORM;
/* the socket is writable if there is space to add new data to the
......@@ -575,8 +576,11 @@ static int rxrpc_create(struct net *net, struct socket *sock, int protocol,
rx->calls = RB_ROOT;
INIT_HLIST_NODE(&rx->listen_link);
INIT_LIST_HEAD(&rx->secureq);
INIT_LIST_HEAD(&rx->acceptq);
spin_lock_init(&rx->incoming_lock);
INIT_LIST_HEAD(&rx->sock_calls);
INIT_LIST_HEAD(&rx->to_be_accepted);
INIT_LIST_HEAD(&rx->recvmsg_q);
rwlock_init(&rx->recvmsg_lock);
rwlock_init(&rx->call_lock);
memset(&rx->srx, 0, sizeof(rx->srx));
......@@ -584,6 +588,39 @@ static int rxrpc_create(struct net *net, struct socket *sock, int protocol,
return 0;
}
/*
* Kill all the calls on a socket and shut it down.
*/
static int rxrpc_shutdown(struct socket *sock, int flags)
{
struct sock *sk = sock->sk;
struct rxrpc_sock *rx = rxrpc_sk(sk);
int ret = 0;
_enter("%p,%d", sk, flags);
if (flags != SHUT_RDWR)
return -EOPNOTSUPP;
if (sk->sk_state == RXRPC_CLOSE)
return -ESHUTDOWN;
lock_sock(sk);
spin_lock_bh(&sk->sk_receive_queue.lock);
if (sk->sk_state < RXRPC_CLOSE) {
sk->sk_state = RXRPC_CLOSE;
sk->sk_shutdown = SHUTDOWN_MASK;
} else {
ret = -ESHUTDOWN;
}
spin_unlock_bh(&sk->sk_receive_queue.lock);
rxrpc_discard_prealloc(rx);
release_sock(sk);
return ret;
}
/*
* RxRPC socket destructor
*/
......@@ -623,9 +660,9 @@ static int rxrpc_release_sock(struct sock *sk)
ASSERTCMP(rx->listen_link.next, !=, LIST_POISON1);
if (!hlist_unhashed(&rx->listen_link)) {
write_lock_bh(&rx->local->services_lock);
write_lock(&rx->local->services_lock);
hlist_del_rcu(&rx->listen_link);
write_unlock_bh(&rx->local->services_lock);
write_unlock(&rx->local->services_lock);
}
/* try to flush out this socket */
......@@ -678,7 +715,7 @@ static const struct proto_ops rxrpc_rpc_ops = {
.poll = rxrpc_poll,
.ioctl = sock_no_ioctl,
.listen = rxrpc_listen,
.shutdown = sock_no_shutdown,
.shutdown = rxrpc_shutdown,
.setsockopt = rxrpc_setsockopt,
.getsockopt = sock_no_getsockopt,
.sendmsg = rxrpc_sendmsg,
......
......@@ -94,9 +94,12 @@ struct rxrpc_sock {
rxrpc_discard_new_call_t discard_new_call; /* Func to discard a new call */
struct rxrpc_local *local; /* local endpoint */
struct hlist_node listen_link; /* link in the local endpoint's listen list */
struct list_head secureq; /* calls awaiting connection security clearance */
struct list_head acceptq; /* calls awaiting acceptance */
struct rxrpc_backlog *backlog; /* Preallocation for services */
spinlock_t incoming_lock; /* Incoming call vs service shutdown lock */
struct list_head sock_calls; /* List of calls owned by this socket */
struct list_head to_be_accepted; /* calls awaiting acceptance */
struct list_head recvmsg_q; /* Calls awaiting recvmsg's attention */
rwlock_t recvmsg_lock; /* Lock for recvmsg_q */
struct key *key; /* security for this socket */
struct key *securities; /* list of server security descriptors */
struct rb_root calls; /* User ID -> call mapping */
......@@ -138,13 +141,16 @@ struct rxrpc_host_header {
* - max 48 bytes (struct sk_buff::cb)
*/
struct rxrpc_skb_priv {
struct rxrpc_call *call; /* call with which associated */
union {
unsigned long resend_at; /* time in jiffies at which to resend */
struct {
u8 nr_jumbo; /* Number of jumbo subpackets */
};
};
union {
unsigned int offset; /* offset into buffer of next read */
int remain; /* amount of space remaining for next write */
u32 error; /* network error code */
bool need_resend; /* T if needs resending */
};
struct rxrpc_host_header hdr; /* RxRPC packet header from this packet */
......@@ -179,7 +185,11 @@ struct rxrpc_security {
/* verify the security on a received packet */
int (*verify_packet)(struct rxrpc_call *, struct sk_buff *,
rxrpc_seq_t, u16);
unsigned int, unsigned int, rxrpc_seq_t, u16);
/* Locate the data in a received packet that has been verified. */
void (*locate_data)(struct rxrpc_call *, struct sk_buff *,
unsigned int *, unsigned int *);
/* issue a challenge */
int (*issue_challenge)(struct rxrpc_connection *);
......@@ -211,7 +221,6 @@ struct rxrpc_local {
struct work_struct processor;
struct hlist_head services; /* services listening on this endpoint */
struct rw_semaphore defrag_sem; /* control re-enablement of IP DF bit */
struct sk_buff_head accept_queue; /* incoming calls awaiting acceptance */
struct sk_buff_head reject_queue; /* packets awaiting rejection */
struct sk_buff_head event_queue; /* endpoint event packets awaiting processing */
struct rb_root client_conns; /* Client connections by socket params */
......@@ -388,38 +397,21 @@ struct rxrpc_connection {
*/
enum rxrpc_call_flag {
RXRPC_CALL_RELEASED, /* call has been released - no more message to userspace */
RXRPC_CALL_TERMINAL_MSG, /* call has given the socket its final message */
RXRPC_CALL_RCVD_LAST, /* all packets received */
RXRPC_CALL_RUN_RTIMER, /* Tx resend timer started */
RXRPC_CALL_TX_SOFT_ACK, /* sent some soft ACKs */
RXRPC_CALL_INIT_ACCEPT, /* acceptance was initiated */
RXRPC_CALL_HAS_USERID, /* has a user ID attached */
RXRPC_CALL_EXPECT_OOS, /* expect out of sequence packets */
RXRPC_CALL_IS_SERVICE, /* Call is service call */
RXRPC_CALL_EXPOSED, /* The call was exposed to the world */
RXRPC_CALL_RX_NO_MORE, /* Don't indicate MSG_MORE from recvmsg() */
RXRPC_CALL_RX_LAST, /* Received the last packet (at rxtx_top) */
RXRPC_CALL_TX_LAST, /* Last packet in Tx buffer (at rxtx_top) */
};
/*
* Events that can be raised on a call.
*/
enum rxrpc_call_event {
RXRPC_CALL_EV_RCVD_ACKALL, /* ACKALL or reply received */
RXRPC_CALL_EV_RCVD_BUSY, /* busy packet received */
RXRPC_CALL_EV_RCVD_ABORT, /* abort packet received */
RXRPC_CALL_EV_RCVD_ERROR, /* network error received */
RXRPC_CALL_EV_ACK_FINAL, /* need to generate final ACK (and release call) */
RXRPC_CALL_EV_ACK, /* need to generate ACK */
RXRPC_CALL_EV_REJECT_BUSY, /* need to generate busy message */
RXRPC_CALL_EV_ABORT, /* need to generate abort */
RXRPC_CALL_EV_CONN_ABORT, /* local connection abort generated */
RXRPC_CALL_EV_RESEND_TIMER, /* Tx resend timer expired */
RXRPC_CALL_EV_TIMER, /* Timer expired */
RXRPC_CALL_EV_RESEND, /* Tx resend required */
RXRPC_CALL_EV_DRAIN_RX_OOS, /* drain the Rx out of sequence queue */
RXRPC_CALL_EV_LIFE_TIMER, /* call's lifetimer ran out */
RXRPC_CALL_EV_ACCEPTED, /* incoming call accepted by userspace app */
RXRPC_CALL_EV_SECURED, /* incoming call's connection is now secure */
RXRPC_CALL_EV_POST_ACCEPT, /* need to post an "accept?" message to the app */
};
/*
......@@ -431,7 +423,6 @@ enum rxrpc_call_state {
RXRPC_CALL_CLIENT_SEND_REQUEST, /* - client sending request phase */
RXRPC_CALL_CLIENT_AWAIT_REPLY, /* - client awaiting reply */
RXRPC_CALL_CLIENT_RECV_REPLY, /* - client receiving reply phase */
RXRPC_CALL_CLIENT_FINAL_ACK, /* - client sending final ACK phase */
RXRPC_CALL_SERVER_PREALLOC, /* - service preallocation */
RXRPC_CALL_SERVER_SECURING, /* - server securing request connection */
RXRPC_CALL_SERVER_ACCEPTING, /* - server accepting request */
......@@ -448,7 +439,6 @@ enum rxrpc_call_state {
*/
enum rxrpc_call_completion {
RXRPC_CALL_SUCCEEDED, /* - Normal termination */
RXRPC_CALL_SERVER_BUSY, /* - call rejected by busy server */
RXRPC_CALL_REMOTELY_ABORTED, /* - call aborted by peer */
RXRPC_CALL_LOCALLY_ABORTED, /* - call aborted locally on error or close */
RXRPC_CALL_LOCAL_ERROR, /* - call failed due to local error */
......@@ -465,24 +455,23 @@ struct rxrpc_call {
struct rxrpc_connection *conn; /* connection carrying call */
struct rxrpc_peer *peer; /* Peer record for remote address */
struct rxrpc_sock __rcu *socket; /* socket responsible */
struct timer_list lifetimer; /* lifetime remaining on call */
struct timer_list ack_timer; /* ACK generation timer */
struct timer_list resend_timer; /* Tx resend timer */
struct work_struct processor; /* packet processor and ACK generator */
unsigned long ack_at; /* When deferred ACK needs to happen */
unsigned long resend_at; /* When next resend needs to happen */
unsigned long expire_at; /* When the call times out */
struct timer_list timer; /* Combined event timer */
struct work_struct processor; /* Event processor */
rxrpc_notify_rx_t notify_rx; /* kernel service Rx notification function */
struct list_head link; /* link in master call list */
struct list_head chan_wait_link; /* Link in conn->waiting_calls */
struct hlist_node error_link; /* link in error distribution list */
struct list_head accept_link; /* calls awaiting acceptance */
struct rb_node sock_node; /* node in socket call tree */
struct sk_buff_head rx_queue; /* received packets */
struct sk_buff_head rx_oos_queue; /* packets received out of sequence */
struct sk_buff_head knlrecv_queue; /* Queue for kernel_recv [TODO: replace this] */
struct list_head accept_link; /* Link in rx->acceptq */
struct list_head recvmsg_link; /* Link in rx->recvmsg_q */
struct list_head sock_link; /* Link in rx->sock_calls */
struct rb_node sock_node; /* Node in rx->calls */
struct sk_buff *tx_pending; /* Tx socket buffer being filled */
wait_queue_head_t waitq; /* Wait queue for channel or Tx */
__be32 crypto_buf[2]; /* Temporary packet crypto buffer */
unsigned long user_call_ID; /* user-defined call ID */
unsigned long creation_jif; /* time of call creation */
unsigned long flags;
unsigned long events;
spinlock_t lock;
......@@ -492,40 +481,55 @@ struct rxrpc_call {
enum rxrpc_call_state state; /* current state of call */
enum rxrpc_call_completion completion; /* Call completion condition */
atomic_t usage;
atomic_t sequence; /* Tx data packet sequence counter */
u16 service_id; /* service ID */
u8 security_ix; /* Security type */
u32 call_id; /* call ID on connection */
u32 cid; /* connection ID plus channel index */
int debug_id; /* debug ID for printks */
/* transmission-phase ACK management */
u8 acks_head; /* offset into window of first entry */
u8 acks_tail; /* offset into window of last entry */
u8 acks_winsz; /* size of un-ACK'd window */
u8 acks_unacked; /* lowest unacked packet in last ACK received */
int acks_latest; /* serial number of latest ACK received */
rxrpc_seq_t acks_hard; /* highest definitively ACK'd msg seq */
unsigned long *acks_window; /* sent packet window
* - elements are pointers with LSB set if ACK'd
/* Rx/Tx circular buffer, depending on phase.
*
* In the Rx phase, packets are annotated with 0 or the number of the
* segment of a jumbo packet each buffer refers to. There can be up to
* 47 segments in a maximum-size UDP packet.
*
* In the Tx phase, packets are annotated with which buffers have been
* acked.
*/
#define RXRPC_RXTX_BUFF_SIZE 64
#define RXRPC_RXTX_BUFF_MASK (RXRPC_RXTX_BUFF_SIZE - 1)
struct sk_buff **rxtx_buffer;
u8 *rxtx_annotations;
#define RXRPC_TX_ANNO_ACK 0
#define RXRPC_TX_ANNO_UNACK 1
#define RXRPC_TX_ANNO_NAK 2
#define RXRPC_TX_ANNO_RETRANS 3
#define RXRPC_RX_ANNO_JUMBO 0x3f /* Jumbo subpacket number + 1 if not zero */
#define RXRPC_RX_ANNO_JLAST 0x40 /* Set if last element of a jumbo packet */
#define RXRPC_RX_ANNO_VERIFIED 0x80 /* Set if verified and decrypted */
rxrpc_seq_t tx_hard_ack; /* Dead slot in buffer; the first transmitted but
* not hard-ACK'd packet follows this.
*/
rxrpc_seq_t tx_top; /* Highest Tx slot allocated. */
rxrpc_seq_t rx_hard_ack; /* Dead slot in buffer; the first received but not
* consumed packet follows this.
*/
rxrpc_seq_t rx_top; /* Highest Rx slot allocated. */
rxrpc_seq_t rx_expect_next; /* Expected next packet sequence number */
u8 rx_winsize; /* Size of Rx window */
u8 tx_winsize; /* Maximum size of Tx window */
u8 nr_jumbo_dup; /* Number of jumbo duplicates */
/* receive-phase ACK management */
rxrpc_seq_t rx_data_expect; /* next data seq ID expected to be received */
rxrpc_seq_t rx_data_post; /* next data seq ID expected to be posted */
rxrpc_seq_t rx_data_recv; /* last data seq ID encountered by recvmsg */
rxrpc_seq_t rx_data_eaten; /* last data seq ID consumed by recvmsg */
rxrpc_seq_t rx_first_oos; /* first packet in rx_oos_queue (or 0) */
rxrpc_seq_t ackr_win_top; /* top of ACK window (rx_data_eaten is bottom) */
rxrpc_seq_t ackr_prev_seq; /* previous sequence number received */
u8 ackr_reason; /* reason to ACK */
u16 ackr_skew; /* skew on packet being ACK'd */
rxrpc_serial_t ackr_serial; /* serial of packet being ACK'd */
atomic_t ackr_not_idle; /* number of packets in Rx queue */
rxrpc_seq_t ackr_prev_seq; /* previous sequence number received */
unsigned short rx_pkt_offset; /* Current recvmsg packet offset */
unsigned short rx_pkt_len; /* Current recvmsg packet len */
/* received packet records, 1 bit per record */
#define RXRPC_ACKR_WINDOW_ASZ DIV_ROUND_UP(RXRPC_MAXACKS, BITS_PER_LONG)
unsigned long ackr_window[RXRPC_ACKR_WINDOW_ASZ + 1];
/* transmission-phase ACK management */
rxrpc_serial_t acks_latest; /* serial number of latest ACK received */
};
enum rxrpc_call_trace {
......@@ -535,10 +539,8 @@ enum rxrpc_call_trace {
rxrpc_call_queued_ref,
rxrpc_call_seen,
rxrpc_call_got,
rxrpc_call_got_skb,
rxrpc_call_got_userid,
rxrpc_call_put,
rxrpc_call_put_skb,
rxrpc_call_put_userid,
rxrpc_call_put_noqueue,
rxrpc_call__nr_trace
......@@ -561,6 +563,9 @@ extern struct workqueue_struct *rxrpc_workqueue;
*/
int rxrpc_service_prealloc(struct rxrpc_sock *, gfp_t);
void rxrpc_discard_prealloc(struct rxrpc_sock *);
struct rxrpc_call *rxrpc_new_incoming_call(struct rxrpc_local *,
struct rxrpc_connection *,
struct sk_buff *);
void rxrpc_accept_incoming_calls(struct rxrpc_local *);
struct rxrpc_call *rxrpc_accept_call(struct rxrpc_sock *, unsigned long,
rxrpc_notify_rx_t);
......@@ -569,8 +574,7 @@ int rxrpc_reject_call(struct rxrpc_sock *);
/*
* call_event.c
*/
void __rxrpc_propose_ACK(struct rxrpc_call *, u8, u16, u32, bool);
void rxrpc_propose_ACK(struct rxrpc_call *, u8, u16, u32, bool);
void rxrpc_propose_ACK(struct rxrpc_call *, u8, u16, u32, bool, bool);
void rxrpc_process_call(struct work_struct *);
/*
......@@ -589,8 +593,7 @@ struct rxrpc_call *rxrpc_new_client_call(struct rxrpc_sock *,
struct rxrpc_conn_parameters *,
struct sockaddr_rxrpc *,
unsigned long, gfp_t);
struct rxrpc_call *rxrpc_incoming_call(struct rxrpc_sock *,
struct rxrpc_connection *,
void rxrpc_incoming_call(struct rxrpc_sock *, struct rxrpc_call *,
struct sk_buff *);
void rxrpc_release_call(struct rxrpc_sock *, struct rxrpc_call *);
void rxrpc_release_calls_on_socket(struct rxrpc_sock *);
......@@ -599,8 +602,6 @@ bool rxrpc_queue_call(struct rxrpc_call *);
void rxrpc_see_call(struct rxrpc_call *);
void rxrpc_get_call(struct rxrpc_call *, enum rxrpc_call_trace);
void rxrpc_put_call(struct rxrpc_call *, enum rxrpc_call_trace);
void rxrpc_get_call_for_skb(struct rxrpc_call *, struct sk_buff *);
void rxrpc_put_call_for_skb(struct rxrpc_call *, struct sk_buff *);
void rxrpc_cleanup_call(struct rxrpc_call *);
void __exit rxrpc_destroy_all_calls(void);
......@@ -672,13 +673,8 @@ static inline bool __rxrpc_abort_call(const char *why, struct rxrpc_call *call,
{
trace_rxrpc_abort(why, call->cid, call->call_id, seq,
abort_code, error);
if (__rxrpc_set_call_completion(call,
RXRPC_CALL_LOCALLY_ABORTED,
abort_code, error)) {
set_bit(RXRPC_CALL_EV_ABORT, &call->events);
return true;
}
return false;
return __rxrpc_set_call_completion(call, RXRPC_CALL_LOCALLY_ABORTED,
abort_code, error);
}
static inline bool rxrpc_abort_call(const char *why, struct rxrpc_call *call,
......@@ -713,8 +709,6 @@ void __exit rxrpc_destroy_all_client_connections(void);
* conn_event.c
*/
void rxrpc_process_connection(struct work_struct *);
void rxrpc_reject_packet(struct rxrpc_local *, struct sk_buff *);
void rxrpc_reject_packets(struct rxrpc_local *);
/*
* conn_object.c
......@@ -783,18 +777,14 @@ static inline bool rxrpc_queue_conn(struct rxrpc_connection *conn)
*/
struct rxrpc_connection *rxrpc_find_service_conn_rcu(struct rxrpc_peer *,
struct sk_buff *);
struct rxrpc_connection *rxrpc_incoming_connection(struct rxrpc_local *,
struct sockaddr_rxrpc *,
struct sk_buff *);
struct rxrpc_connection *rxrpc_prealloc_service_connection(gfp_t);
void rxrpc_new_incoming_connection(struct rxrpc_connection *, struct sk_buff *);
void rxrpc_unpublish_service_conn(struct rxrpc_connection *);
/*
* input.c
*/
void rxrpc_data_ready(struct sock *);
int rxrpc_queue_rcv_skb(struct rxrpc_call *, struct sk_buff *, bool, bool);
void rxrpc_fast_process_packet(struct rxrpc_call *, struct sk_buff *);
/*
* insecure.c
......@@ -868,6 +858,7 @@ extern const char *rxrpc_acks(u8 reason);
*/
int rxrpc_send_call_packet(struct rxrpc_call *, u8);
int rxrpc_send_data_packet(struct rxrpc_connection *, struct sk_buff *);
void rxrpc_reject_packets(struct rxrpc_local *);
/*
* peer_event.c
......@@ -883,6 +874,8 @@ struct rxrpc_peer *rxrpc_lookup_peer_rcu(struct rxrpc_local *,
struct rxrpc_peer *rxrpc_lookup_peer(struct rxrpc_local *,
struct sockaddr_rxrpc *, gfp_t);
struct rxrpc_peer *rxrpc_alloc_peer(struct rxrpc_local *, gfp_t);
struct rxrpc_peer *rxrpc_lookup_incoming_peer(struct rxrpc_local *,
struct rxrpc_peer *);
static inline struct rxrpc_peer *rxrpc_get_peer(struct rxrpc_peer *peer)
{
......@@ -912,6 +905,7 @@ extern const struct file_operations rxrpc_connection_seq_fops;
/*
* recvmsg.c
*/
void rxrpc_notify_socket(struct rxrpc_call *);
int rxrpc_recvmsg(struct socket *, struct msghdr *, size_t, int);
/*
......@@ -961,6 +955,23 @@ static inline void rxrpc_sysctl_exit(void) {}
*/
int rxrpc_extract_addr_from_skb(struct sockaddr_rxrpc *, struct sk_buff *);
static inline bool before(u32 seq1, u32 seq2)
{
return (s32)(seq1 - seq2) < 0;
}
static inline bool before_eq(u32 seq1, u32 seq2)
{
return (s32)(seq1 - seq2) <= 0;
}
static inline bool after(u32 seq1, u32 seq2)
{
return (s32)(seq1 - seq2) > 0;
}
static inline bool after_eq(u32 seq1, u32 seq2)
{
return (s32)(seq1 - seq2) >= 0;
}
/*
* debug tracing
*/
......
......@@ -129,6 +129,8 @@ static int rxrpc_service_prealloc_one(struct rxrpc_sock *rx,
set_bit(RXRPC_CALL_HAS_USERID, &call->flags);
}
list_add(&call->sock_link, &rx->sock_calls);
write_unlock(&rx->call_lock);
write_lock(&rxrpc_call_lock);
......@@ -186,6 +188,12 @@ void rxrpc_discard_prealloc(struct rxrpc_sock *rx)
return;
rx->backlog = NULL;
/* Make sure that there aren't any incoming calls in progress before we
* clear the preallocation buffers.
*/
spin_lock_bh(&rx->incoming_lock);
spin_unlock_bh(&rx->incoming_lock);
head = b->peer_backlog_head;
tail = b->peer_backlog_tail;
while (CIRC_CNT(head, tail, size) > 0) {
......@@ -224,251 +232,179 @@ void rxrpc_discard_prealloc(struct rxrpc_sock *rx)
}
/*
* generate a connection-level abort
* Allocate a new incoming call from the prealloc pool, along with a connection
* and a peer as necessary.
*/
static int rxrpc_busy(struct rxrpc_local *local, struct sockaddr_rxrpc *srx,
struct rxrpc_wire_header *whdr)
static struct rxrpc_call *rxrpc_alloc_incoming_call(struct rxrpc_sock *rx,
struct rxrpc_local *local,
struct rxrpc_connection *conn,
struct sk_buff *skb)
{
struct msghdr msg;
struct kvec iov[1];
size_t len;
int ret;
_enter("%d,,", local->debug_id);
whdr->type = RXRPC_PACKET_TYPE_BUSY;
whdr->serial = htonl(1);
msg.msg_name = &srx->transport.sin;
msg.msg_namelen = sizeof(srx->transport.sin);
msg.msg_control = NULL;
msg.msg_controllen = 0;
msg.msg_flags = 0;
iov[0].iov_base = whdr;
iov[0].iov_len = sizeof(*whdr);
len = iov[0].iov_len;
_proto("Tx BUSY %%1");
struct rxrpc_backlog *b = rx->backlog;
struct rxrpc_peer *peer, *xpeer;
struct rxrpc_call *call;
unsigned short call_head, conn_head, peer_head;
unsigned short call_tail, conn_tail, peer_tail;
unsigned short call_count, conn_count;
/* #calls >= #conns >= #peers must hold true. */
call_head = smp_load_acquire(&b->call_backlog_head);
call_tail = b->call_backlog_tail;
call_count = CIRC_CNT(call_head, call_tail, RXRPC_BACKLOG_MAX);
conn_head = smp_load_acquire(&b->conn_backlog_head);
conn_tail = b->conn_backlog_tail;
conn_count = CIRC_CNT(conn_head, conn_tail, RXRPC_BACKLOG_MAX);
ASSERTCMP(conn_count, >=, call_count);
peer_head = smp_load_acquire(&b->peer_backlog_head);
peer_tail = b->peer_backlog_tail;
ASSERTCMP(CIRC_CNT(peer_head, peer_tail, RXRPC_BACKLOG_MAX), >=,
conn_count);
if (call_count == 0)
return NULL;
if (!conn) {
/* No connection. We're going to need a peer to start off
* with. If one doesn't yet exist, use a spare from the
* preallocation set. We dump the address into the spare in
* anticipation - and to save on stack space.
*/
xpeer = b->peer_backlog[peer_tail];
if (rxrpc_extract_addr_from_skb(&xpeer->srx, skb) < 0)
return NULL;
peer = rxrpc_lookup_incoming_peer(local, xpeer);
if (peer == xpeer) {
b->peer_backlog[peer_tail] = NULL;
smp_store_release(&b->peer_backlog_tail,
(peer_tail + 1) &
(RXRPC_BACKLOG_MAX - 1));
}
ret = kernel_sendmsg(local->socket, &msg, iov, 1, len);
if (ret < 0) {
_leave(" = -EAGAIN [sendmsg failed: %d]", ret);
return -EAGAIN;
/* Now allocate and set up the connection */
conn = b->conn_backlog[conn_tail];
b->conn_backlog[conn_tail] = NULL;
smp_store_release(&b->conn_backlog_tail,
(conn_tail + 1) & (RXRPC_BACKLOG_MAX - 1));
rxrpc_get_local(local);
conn->params.local = local;
conn->params.peer = peer;
rxrpc_new_incoming_connection(conn, skb);
} else {
rxrpc_get_connection(conn);
}
_leave(" = 0");
return 0;
/* And now we can allocate and set up a new call */
call = b->call_backlog[call_tail];
b->call_backlog[call_tail] = NULL;
smp_store_release(&b->call_backlog_tail,
(call_tail + 1) & (RXRPC_BACKLOG_MAX - 1));
call->conn = conn;
call->peer = rxrpc_get_peer(conn->params.peer);
return call;
}
/*
* accept an incoming call that needs peer, transport and/or connection setting
* up
* Set up a new incoming call. Called in BH context with the RCU read lock
* held.
*
* If this is for a kernel service, when we allocate the call, it will have
* three refs on it: (1) the kernel service, (2) the user_call_ID tree, (3) the
* retainer ref obtained from the backlog buffer. Prealloc calls for userspace
* services only have the ref from the backlog buffer. We want to pass this
* ref to non-BH context to dispose of.
*
* If we want to report an error, we mark the skb with the packet type and
* abort code and return NULL.
*/
static int rxrpc_accept_incoming_call(struct rxrpc_local *local,
struct rxrpc_sock *rx,
struct sk_buff *skb,
struct sockaddr_rxrpc *srx)
struct rxrpc_call *rxrpc_new_incoming_call(struct rxrpc_local *local,
struct rxrpc_connection *conn,
struct sk_buff *skb)
{
struct rxrpc_connection *conn;
struct rxrpc_skb_priv *sp, *nsp;
struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
struct rxrpc_sock *rx;
struct rxrpc_call *call;
struct sk_buff *notification;
int ret;
_enter("");
sp = rxrpc_skb(skb);
/* get a notification message to send to the server app */
notification = alloc_skb(0, GFP_NOFS);
if (!notification) {
_debug("no memory");
ret = -ENOMEM;
goto error_nofree;
}
rxrpc_new_skb(notification);
notification->mark = RXRPC_SKB_MARK_NEW_CALL;
conn = rxrpc_incoming_connection(local, srx, skb);
if (IS_ERR(conn)) {
_debug("no conn");
ret = PTR_ERR(conn);
goto error;
}
call = rxrpc_incoming_call(rx, conn, skb);
rxrpc_put_connection(conn);
if (IS_ERR(call)) {
_debug("no call");
ret = PTR_ERR(call);
goto error;
/* Get the socket providing the service */
hlist_for_each_entry_rcu_bh(rx, &local->services, listen_link) {
if (rx->srx.srx_service == sp->hdr.serviceId)
goto found_service;
}
/* attach the call to the socket */
read_lock_bh(&local->services_lock);
if (rx->sk.sk_state == RXRPC_CLOSE)
goto invalid_service;
write_lock(&rx->call_lock);
if (!test_and_set_bit(RXRPC_CALL_INIT_ACCEPT, &call->flags)) {
rxrpc_get_call(call, rxrpc_call_got);
trace_rxrpc_abort("INV", sp->hdr.cid, sp->hdr.callNumber, sp->hdr.seq,
RX_INVALID_OPERATION, EOPNOTSUPP);
skb->mark = RXRPC_SKB_MARK_LOCAL_ABORT;
skb->priority = RX_INVALID_OPERATION;
_leave(" = NULL [service]");
return NULL;
spin_lock(&call->conn->state_lock);
if (sp->hdr.securityIndex > 0 &&
call->conn->state == RXRPC_CONN_SERVICE_UNSECURED) {
_debug("await conn sec");
list_add_tail(&call->accept_link, &rx->secureq);
call->conn->state = RXRPC_CONN_SERVICE_CHALLENGING;
set_bit(RXRPC_CONN_EV_CHALLENGE, &call->conn->events);
rxrpc_queue_conn(call->conn);
} else {
_debug("conn ready");
call->state = RXRPC_CALL_SERVER_ACCEPTING;
list_add_tail(&call->accept_link, &rx->acceptq);
rxrpc_get_call_for_skb(call, notification);
nsp = rxrpc_skb(notification);
nsp->call = call;
ASSERTCMP(atomic_read(&call->usage), >=, 3);
_debug("notify");
spin_lock(&call->lock);
ret = rxrpc_queue_rcv_skb(call, notification, true,
false);
spin_unlock(&call->lock);
notification = NULL;
BUG_ON(ret < 0);
found_service:
spin_lock(&rx->incoming_lock);
if (rx->sk.sk_state == RXRPC_CLOSE) {
trace_rxrpc_abort("CLS", sp->hdr.cid, sp->hdr.callNumber,
sp->hdr.seq, RX_INVALID_OPERATION, ESHUTDOWN);
skb->mark = RXRPC_SKB_MARK_LOCAL_ABORT;
skb->priority = RX_INVALID_OPERATION;
_leave(" = NULL [close]");
call = NULL;
goto out;
}
spin_unlock(&call->conn->state_lock);
_debug("queued");
call = rxrpc_alloc_incoming_call(rx, local, conn, skb);
if (!call) {
skb->mark = RXRPC_SKB_MARK_BUSY;
_leave(" = NULL [busy]");
call = NULL;
goto out;
}
write_unlock(&rx->call_lock);
_debug("process");
rxrpc_fast_process_packet(call, skb);
_debug("done");
read_unlock_bh(&local->services_lock);
rxrpc_free_skb(notification);
rxrpc_put_call(call, rxrpc_call_put);
_leave(" = 0");
return 0;
invalid_service:
_debug("invalid");
read_unlock_bh(&local->services_lock);
rxrpc_release_call(rx, call);
rxrpc_put_call(call, rxrpc_call_put);
ret = -ECONNREFUSED;
error:
rxrpc_free_skb(notification);
error_nofree:
_leave(" = %d", ret);
return ret;
}
/*
* accept incoming calls that need peer, transport and/or connection setting up
* - the packets we get are all incoming client DATA packets that have seq == 1
*/
void rxrpc_accept_incoming_calls(struct rxrpc_local *local)
{
struct rxrpc_skb_priv *sp;
struct sockaddr_rxrpc srx;
struct rxrpc_sock *rx;
struct rxrpc_wire_header whdr;
struct sk_buff *skb;
int ret;
/* Make the call live. */
rxrpc_incoming_call(rx, call, skb);
conn = call->conn;
_enter("%d", local->debug_id);
if (rx->notify_new_call)
rx->notify_new_call(&rx->sk, call, call->user_call_ID);
skb = skb_dequeue(&local->accept_queue);
if (!skb) {
_leave("\n");
return;
}
spin_lock(&conn->state_lock);
switch (conn->state) {
case RXRPC_CONN_SERVICE_UNSECURED:
conn->state = RXRPC_CONN_SERVICE_CHALLENGING;
set_bit(RXRPC_CONN_EV_CHALLENGE, &call->conn->events);
rxrpc_queue_conn(call->conn);
break;
_net("incoming call skb %p", skb);
rxrpc_see_skb(skb);
sp = rxrpc_skb(skb);
/* Set up a response packet header in case we need it */
whdr.epoch = htonl(sp->hdr.epoch);
whdr.cid = htonl(sp->hdr.cid);
whdr.callNumber = htonl(sp->hdr.callNumber);
whdr.seq = htonl(sp->hdr.seq);
whdr.serial = 0;
whdr.flags = 0;
whdr.type = 0;
whdr.userStatus = 0;
whdr.securityIndex = sp->hdr.securityIndex;
whdr._rsvd = 0;
whdr.serviceId = htons(sp->hdr.serviceId);
if (rxrpc_extract_addr_from_skb(&srx, skb) < 0)
goto drop;
/* get the socket providing the service */
read_lock_bh(&local->services_lock);
hlist_for_each_entry(rx, &local->services, listen_link) {
if (rx->srx.srx_service == sp->hdr.serviceId &&
rx->sk.sk_state != RXRPC_CLOSE)
goto found_service;
}
read_unlock_bh(&local->services_lock);
goto invalid_service;
case RXRPC_CONN_SERVICE:
write_lock(&call->state_lock);
if (rx->discard_new_call)
call->state = RXRPC_CALL_SERVER_RECV_REQUEST;
else
call->state = RXRPC_CALL_SERVER_ACCEPTING;
write_unlock(&call->state_lock);
break;
found_service:
_debug("found service %hd", rx->srx.srx_service);
if (sk_acceptq_is_full(&rx->sk))
goto backlog_full;
sk_acceptq_added(&rx->sk);
read_unlock_bh(&local->services_lock);
ret = rxrpc_accept_incoming_call(local, rx, skb, &srx);
if (ret < 0)
sk_acceptq_removed(&rx->sk);
switch (ret) {
case -ECONNRESET: /* old calls are ignored */
case -ECONNABORTED: /* aborted calls are reaborted or ignored */
case 0:
return;
case -ECONNREFUSED:
goto invalid_service;
case -EBUSY:
goto busy;
case -EKEYREJECTED:
goto security_mismatch;
case RXRPC_CONN_REMOTELY_ABORTED:
rxrpc_set_call_completion(call, RXRPC_CALL_REMOTELY_ABORTED,
conn->remote_abort, ECONNABORTED);
break;
case RXRPC_CONN_LOCALLY_ABORTED:
rxrpc_abort_call("CON", call, sp->hdr.seq,
conn->local_abort, ECONNABORTED);
break;
default:
BUG();
}
spin_unlock(&conn->state_lock);
backlog_full:
read_unlock_bh(&local->services_lock);
busy:
rxrpc_busy(local, &srx, &whdr);
rxrpc_free_skb(skb);
return;
drop:
rxrpc_free_skb(skb);
return;
invalid_service:
skb->priority = RX_INVALID_OPERATION;
rxrpc_reject_packet(local, skb);
return;
if (call->state == RXRPC_CALL_SERVER_ACCEPTING)
rxrpc_notify_socket(call);
/* can't change connection security type mid-flow */
security_mismatch:
skb->priority = RX_PROTOCOL_ERROR;
rxrpc_reject_packet(local, skb);
return;
_leave(" = %p{%d}", call, call->debug_id);
out:
spin_unlock(&rx->incoming_lock);
return call;
}
/*
......@@ -490,11 +426,10 @@ struct rxrpc_call *rxrpc_accept_call(struct rxrpc_sock *rx,
write_lock(&rx->call_lock);
ret = -ENODATA;
if (list_empty(&rx->acceptq))
if (list_empty(&rx->to_be_accepted))
goto out;
/* check the user ID isn't already in use */
ret = -EBADSLT;
pp = &rx->calls.rb_node;
parent = NULL;
while (*pp) {
......@@ -506,11 +441,14 @@ struct rxrpc_call *rxrpc_accept_call(struct rxrpc_sock *rx,
else if (user_call_ID > call->user_call_ID)
pp = &(*pp)->rb_right;
else
goto out;
goto id_in_use;
}
/* dequeue the first call and check it's still valid */
call = list_entry(rx->acceptq.next, struct rxrpc_call, accept_link);
/* Dequeue the first call and check it's still valid. We gain
* responsibility for the queue's reference.
*/
call = list_entry(rx->to_be_accepted.next,
struct rxrpc_call, accept_link);
list_del_init(&call->accept_link);
sk_acceptq_removed(&rx->sk);
rxrpc_see_call(call);
......@@ -528,31 +466,35 @@ struct rxrpc_call *rxrpc_accept_call(struct rxrpc_sock *rx,
}
/* formalise the acceptance */
rxrpc_get_call(call, rxrpc_call_got_userid);
rxrpc_get_call(call, rxrpc_call_got);
call->notify_rx = notify_rx;
call->user_call_ID = user_call_ID;
rxrpc_get_call(call, rxrpc_call_got_userid);
rb_link_node(&call->sock_node, parent, pp);
rb_insert_color(&call->sock_node, &rx->calls);
if (test_and_set_bit(RXRPC_CALL_HAS_USERID, &call->flags))
BUG();
if (test_and_set_bit(RXRPC_CALL_EV_ACCEPTED, &call->events))
BUG();
write_unlock_bh(&call->state_lock);
write_unlock(&rx->call_lock);
rxrpc_queue_call(call);
rxrpc_notify_socket(call);
rxrpc_service_prealloc(rx, GFP_KERNEL);
_leave(" = %p{%d}", call, call->debug_id);
return call;
out_release:
_debug("release %p", call);
write_unlock_bh(&call->state_lock);
write_unlock(&rx->call_lock);
_debug("release %p", call);
rxrpc_release_call(rx, call);
_leave(" = %d", ret);
return ERR_PTR(ret);
out:
rxrpc_put_call(call, rxrpc_call_put);
goto out;
id_in_use:
ret = -EBADSLT;
write_unlock(&rx->call_lock);
out:
rxrpc_service_prealloc(rx, GFP_KERNEL);
_leave(" = %d", ret);
return ERR_PTR(ret);
}
......@@ -564,6 +506,7 @@ struct rxrpc_call *rxrpc_accept_call(struct rxrpc_sock *rx,
int rxrpc_reject_call(struct rxrpc_sock *rx)
{
struct rxrpc_call *call;
bool abort = false;
int ret;
_enter("");
......@@ -572,15 +515,16 @@ int rxrpc_reject_call(struct rxrpc_sock *rx)
write_lock(&rx->call_lock);
ret = -ENODATA;
if (list_empty(&rx->acceptq)) {
if (list_empty(&rx->to_be_accepted)) {
write_unlock(&rx->call_lock);
_leave(" = -ENODATA");
return -ENODATA;
}
/* dequeue the first call and check it's still valid */
call = list_entry(rx->acceptq.next, struct rxrpc_call, accept_link);
/* Dequeue the first call and check it's still valid. We gain
* responsibility for the queue's reference.
*/
call = list_entry(rx->to_be_accepted.next,
struct rxrpc_call, accept_link);
list_del_init(&call->accept_link);
sk_acceptq_removed(&rx->sk);
rxrpc_see_call(call);
......@@ -588,67 +532,29 @@ int rxrpc_reject_call(struct rxrpc_sock *rx)
write_lock_bh(&call->state_lock);
switch (call->state) {
case RXRPC_CALL_SERVER_ACCEPTING:
__rxrpc_set_call_completion(call, RXRPC_CALL_SERVER_BUSY,
0, ECONNABORTED);
if (test_and_set_bit(RXRPC_CALL_EV_REJECT_BUSY, &call->events))
rxrpc_queue_call(call);
ret = 0;
break;
__rxrpc_abort_call("REJ", call, 1, RX_USER_ABORT, ECONNABORTED);
abort = true;
/* fall through */
case RXRPC_CALL_COMPLETE:
ret = call->error;
break;
goto out_discard;
default:
BUG();
}
out_discard:
write_unlock_bh(&call->state_lock);
write_unlock(&rx->call_lock);
if (abort) {
rxrpc_send_call_packet(call, RXRPC_PACKET_TYPE_ABORT);
rxrpc_release_call(rx, call);
rxrpc_put_call(call, rxrpc_call_put);
}
rxrpc_service_prealloc(rx, GFP_KERNEL);
_leave(" = %d", ret);
return ret;
}
/**
* rxrpc_kernel_accept_call - Allow a kernel service to accept an incoming call
* @sock: The socket on which the impending call is waiting
* @user_call_ID: The tag to attach to the call
* @notify_rx: Where to send notifications instead of socket queue
*
* Allow a kernel service to accept an incoming call, assuming the incoming
* call is still valid. The caller should immediately trigger their own
* notification as there must be data waiting.
*/
struct rxrpc_call *rxrpc_kernel_accept_call(struct socket *sock,
unsigned long user_call_ID,
rxrpc_notify_rx_t notify_rx)
{
struct rxrpc_call *call;
_enter(",%lx", user_call_ID);
call = rxrpc_accept_call(rxrpc_sk(sock->sk), user_call_ID, notify_rx);
_leave(" = %p", call);
return call;
}
EXPORT_SYMBOL(rxrpc_kernel_accept_call);
/**
* rxrpc_kernel_reject_call - Allow a kernel service to reject an incoming call
* @sock: The socket on which the impending call is waiting
*
* Allow a kernel service to reject an incoming call with a BUSY message,
* assuming the incoming call is still valid.
*/
int rxrpc_kernel_reject_call(struct socket *sock)
{
int ret;
_enter("");
ret = rxrpc_reject_call(rxrpc_sk(sock->sk));
_leave(" = %d", ret);
return ret;
}
EXPORT_SYMBOL(rxrpc_kernel_reject_call);
/*
* rxrpc_kernel_charge_accept - Charge up socket with preallocated calls
* @sock: The socket on which to preallocate
......
This diff is collapsed.
This diff is collapsed.
......@@ -15,10 +15,6 @@
#include <linux/net.h>
#include <linux/skbuff.h>
#include <linux/errqueue.h>
#include <linux/udp.h>
#include <linux/in.h>
#include <linux/in6.h>
#include <linux/icmp.h>
#include <net/sock.h>
#include <net/af_rxrpc.h>
#include <net/ip.h>
......@@ -140,16 +136,10 @@ static void rxrpc_abort_calls(struct rxrpc_connection *conn,
u32 abort_code, int error)
{
struct rxrpc_call *call;
bool queue;
int i, bit;
int i;
_enter("{%d},%x", conn->debug_id, abort_code);
if (compl == RXRPC_CALL_LOCALLY_ABORTED)
bit = RXRPC_CALL_EV_CONN_ABORT;
else
bit = RXRPC_CALL_EV_RCVD_ABORT;
spin_lock(&conn->channel_lock);
for (i = 0; i < RXRPC_MAXCALLS; i++) {
......@@ -157,22 +147,13 @@ static void rxrpc_abort_calls(struct rxrpc_connection *conn,
conn->channels[i].call,
lockdep_is_held(&conn->channel_lock));
if (call) {
rxrpc_see_call(call);
if (compl == RXRPC_CALL_LOCALLY_ABORTED)
trace_rxrpc_abort("CON", call->cid,
call->call_id, 0,
abort_code, error);
write_lock_bh(&call->state_lock);
if (rxrpc_set_call_completion(call, compl, abort_code,
error)) {
set_bit(bit, &call->events);
queue = true;
}
write_unlock_bh(&call->state_lock);
if (queue)
rxrpc_queue_call(call);
if (rxrpc_set_call_completion(call, compl,
abort_code, error))
rxrpc_notify_socket(call);
}
}
......@@ -251,17 +232,18 @@ static int rxrpc_abort_connection(struct rxrpc_connection *conn,
/*
* mark a call as being on a now-secured channel
* - must be called with softirqs disabled
* - must be called with BH's disabled.
*/
static void rxrpc_call_is_secure(struct rxrpc_call *call)
{
_enter("%p", call);
if (call) {
read_lock(&call->state_lock);
if (call->state < RXRPC_CALL_COMPLETE &&
!test_and_set_bit(RXRPC_CALL_EV_SECURED, &call->events))
rxrpc_queue_call(call);
read_unlock(&call->state_lock);
write_lock_bh(&call->state_lock);
if (call->state == RXRPC_CALL_SERVER_SECURING) {
call->state = RXRPC_CALL_SERVER_ACCEPTING;
rxrpc_notify_socket(call);
}
write_unlock_bh(&call->state_lock);
}
}
......@@ -278,7 +260,7 @@ static int rxrpc_process_event(struct rxrpc_connection *conn,
int loop, ret;
if (conn->state >= RXRPC_CONN_REMOTELY_ABORTED) {
kleave(" = -ECONNABORTED [%u]", conn->state);
_leave(" = -ECONNABORTED [%u]", conn->state);
return -ECONNABORTED;
}
......@@ -291,14 +273,14 @@ static int rxrpc_process_event(struct rxrpc_connection *conn,
return 0;
case RXRPC_PACKET_TYPE_ABORT:
if (skb_copy_bits(skb, 0, &wtmp, sizeof(wtmp)) < 0)
if (skb_copy_bits(skb, sp->offset, &wtmp, sizeof(wtmp)) < 0)
return -EPROTO;
abort_code = ntohl(wtmp);
_proto("Rx ABORT %%%u { ac=%d }", sp->hdr.serial, abort_code);
conn->state = RXRPC_CONN_REMOTELY_ABORTED;
rxrpc_abort_calls(conn, 0, RXRPC_CALL_REMOTELY_ABORTED,
abort_code);
rxrpc_abort_calls(conn, RXRPC_CALL_REMOTELY_ABORTED,
abort_code, ECONNABORTED);
return -ECONNABORTED;
case RXRPC_PACKET_TYPE_CHALLENGE:
......@@ -323,14 +305,16 @@ static int rxrpc_process_event(struct rxrpc_connection *conn,
if (conn->state == RXRPC_CONN_SERVICE_CHALLENGING) {
conn->state = RXRPC_CONN_SERVICE;
spin_unlock(&conn->state_lock);
for (loop = 0; loop < RXRPC_MAXCALLS; loop++)
rxrpc_call_is_secure(
rcu_dereference_protected(
conn->channels[loop].call,
lockdep_is_held(&conn->channel_lock)));
} else {
spin_unlock(&conn->state_lock);
}
spin_unlock(&conn->state_lock);
spin_unlock(&conn->channel_lock);
return 0;
......@@ -433,88 +417,3 @@ void rxrpc_process_connection(struct work_struct *work)
_leave(" [EPROTO]");
goto out;
}
/*
* put a packet up for transport-level abort
*/
void rxrpc_reject_packet(struct rxrpc_local *local, struct sk_buff *skb)
{
CHECK_SLAB_OKAY(&local->usage);
skb_queue_tail(&local->reject_queue, skb);
rxrpc_queue_local(local);
}
/*
* reject packets through the local endpoint
*/
void rxrpc_reject_packets(struct rxrpc_local *local)
{
union {
struct sockaddr sa;
struct sockaddr_in sin;
} sa;
struct rxrpc_skb_priv *sp;
struct rxrpc_wire_header whdr;
struct sk_buff *skb;
struct msghdr msg;
struct kvec iov[2];
size_t size;
__be32 code;
_enter("%d", local->debug_id);
iov[0].iov_base = &whdr;
iov[0].iov_len = sizeof(whdr);
iov[1].iov_base = &code;
iov[1].iov_len = sizeof(code);
size = sizeof(whdr) + sizeof(code);
msg.msg_name = &sa;
msg.msg_control = NULL;
msg.msg_controllen = 0;
msg.msg_flags = 0;
memset(&sa, 0, sizeof(sa));
sa.sa.sa_family = local->srx.transport.family;
switch (sa.sa.sa_family) {
case AF_INET:
msg.msg_namelen = sizeof(sa.sin);
break;
default:
msg.msg_namelen = 0;
break;
}
memset(&whdr, 0, sizeof(whdr));
whdr.type = RXRPC_PACKET_TYPE_ABORT;
while ((skb = skb_dequeue(&local->reject_queue))) {
rxrpc_see_skb(skb);
sp = rxrpc_skb(skb);
switch (sa.sa.sa_family) {
case AF_INET:
sa.sin.sin_port = udp_hdr(skb)->source;
sa.sin.sin_addr.s_addr = ip_hdr(skb)->saddr;
code = htonl(skb->priority);
whdr.epoch = htonl(sp->hdr.epoch);
whdr.cid = htonl(sp->hdr.cid);
whdr.callNumber = htonl(sp->hdr.callNumber);
whdr.serviceId = htons(sp->hdr.serviceId);
whdr.flags = sp->hdr.flags;
whdr.flags ^= RXRPC_CLIENT_INITIATED;
whdr.flags &= RXRPC_CLIENT_INITIATED;
kernel_sendmsg(local->socket, &msg, iov, 2, size);
break;
default:
break;
}
rxrpc_free_skb(skb);
}
_leave("");
}
......@@ -169,7 +169,7 @@ void __rxrpc_disconnect_call(struct rxrpc_connection *conn,
chan->last_abort = call->abort_code;
chan->last_type = RXRPC_PACKET_TYPE_ABORT;
} else {
chan->last_seq = call->rx_data_eaten;
chan->last_seq = call->rx_hard_ack;
chan->last_type = RXRPC_PACKET_TYPE_ACK;
}
/* Sync with rxrpc_conn_retransmit(). */
......@@ -191,6 +191,10 @@ void rxrpc_disconnect_call(struct rxrpc_call *call)
{
struct rxrpc_connection *conn = call->conn;
spin_lock_bh(&conn->params.peer->lock);
hlist_del_init(&call->error_link);
spin_unlock_bh(&conn->params.peer->lock);
if (rxrpc_is_client_call(call))
return rxrpc_disconnect_client_call(call);
......
......@@ -65,8 +65,7 @@ struct rxrpc_connection *rxrpc_find_service_conn_rcu(struct rxrpc_peer *peer,
* Insert a service connection into a peer's tree, thereby making it a target
* for incoming packets.
*/
static struct rxrpc_connection *
rxrpc_publish_service_conn(struct rxrpc_peer *peer,
static void rxrpc_publish_service_conn(struct rxrpc_peer *peer,
struct rxrpc_connection *conn)
{
struct rxrpc_connection *cursor = NULL;
......@@ -96,7 +95,7 @@ rxrpc_publish_service_conn(struct rxrpc_peer *peer,
set_bit(RXRPC_CONN_IN_SERVICE_CONNS, &conn->flags);
write_sequnlock_bh(&peer->service_conn_lock);
_leave(" = %d [new]", conn->debug_id);
return conn;
return;
found_extant_conn:
if (atomic_read(&cursor->usage) == 0)
......@@ -143,106 +142,30 @@ struct rxrpc_connection *rxrpc_prealloc_service_connection(gfp_t gfp)
}
/*
* get a record of an incoming connection
* Set up an incoming connection. This is called in BH context with the RCU
* read lock held.
*/
struct rxrpc_connection *rxrpc_incoming_connection(struct rxrpc_local *local,
struct sockaddr_rxrpc *srx,
void rxrpc_new_incoming_connection(struct rxrpc_connection *conn,
struct sk_buff *skb)
{
struct rxrpc_connection *conn;
struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
struct rxrpc_peer *peer;
const char *new = "old";
_enter("");
peer = rxrpc_lookup_peer(local, srx, GFP_NOIO);
if (!peer) {
_debug("no peer");
return ERR_PTR(-EBUSY);
}
ASSERT(sp->hdr.flags & RXRPC_CLIENT_INITIATED);
rcu_read_lock();
peer = rxrpc_lookup_peer_rcu(local, srx);
if (peer) {
conn = rxrpc_find_service_conn_rcu(peer, skb);
if (conn) {
if (sp->hdr.securityIndex != conn->security_ix)
goto security_mismatch_rcu;
if (rxrpc_get_connection_maybe(conn))
goto found_extant_connection_rcu;
/* The conn has expired but we can't remove it without
* the appropriate lock, so we attempt to replace it
* when we have a new candidate.
*/
}
if (!rxrpc_get_peer_maybe(peer))
peer = NULL;
}
rcu_read_unlock();
if (!peer) {
peer = rxrpc_lookup_peer(local, srx, GFP_NOIO);
if (!peer)
goto enomem;
}
/* We don't have a matching record yet. */
conn = rxrpc_alloc_connection(GFP_NOIO);
if (!conn)
goto enomem_peer;
conn->proto.epoch = sp->hdr.epoch;
conn->proto.cid = sp->hdr.cid & RXRPC_CIDMASK;
conn->params.local = local;
conn->params.peer = peer;
conn->params.service_id = sp->hdr.serviceId;
conn->security_ix = sp->hdr.securityIndex;
conn->out_clientflag = 0;
conn->state = RXRPC_CONN_SERVICE;
if (conn->params.service_id)
if (conn->security_ix)
conn->state = RXRPC_CONN_SERVICE_UNSECURED;
rxrpc_get_local(local);
/* We maintain an extra ref on the connection whilst it is on
* the rxrpc_connections list.
*/
atomic_set(&conn->usage, 2);
write_lock(&rxrpc_connection_lock);
list_add_tail(&conn->link, &rxrpc_connections);
list_add_tail(&conn->proc_link, &rxrpc_connection_proc_list);
write_unlock(&rxrpc_connection_lock);
else
conn->state = RXRPC_CONN_SERVICE;
/* Make the connection a target for incoming packets. */
rxrpc_publish_service_conn(peer, conn);
new = "new";
success:
_net("CONNECTION %s %d {%x}", new, conn->debug_id, conn->proto.cid);
_leave(" = %p {u=%d}", conn, atomic_read(&conn->usage));
return conn;
found_extant_connection_rcu:
rcu_read_unlock();
goto success;
security_mismatch_rcu:
rcu_read_unlock();
_leave(" = -EKEYREJECTED");
return ERR_PTR(-EKEYREJECTED);
rxrpc_publish_service_conn(conn->params.peer, conn);
enomem_peer:
rxrpc_put_peer(peer);
enomem:
_leave(" = -ENOMEM");
return ERR_PTR(-ENOMEM);
_net("CONNECTION new %d {%x}", conn->debug_id, conn->proto.cid);
}
/*
......
This diff is collapsed.
......@@ -30,14 +30,18 @@ static int none_secure_packet(struct rxrpc_call *call,
return 0;
}
static int none_verify_packet(struct rxrpc_call *call,
struct sk_buff *skb,
rxrpc_seq_t seq,
u16 expected_cksum)
static int none_verify_packet(struct rxrpc_call *call, struct sk_buff *skb,
unsigned int offset, unsigned int len,
rxrpc_seq_t seq, u16 expected_cksum)
{
return 0;
}
static void none_locate_data(struct rxrpc_call *call, struct sk_buff *skb,
unsigned int *_offset, unsigned int *_len)
{
}
static int none_respond_to_challenge(struct rxrpc_connection *conn,
struct sk_buff *skb,
u32 *_abort_code)
......@@ -79,6 +83,7 @@ const struct rxrpc_security rxrpc_no_security = {
.prime_packet_security = none_prime_packet_security,
.secure_packet = none_secure_packet,
.verify_packet = none_verify_packet,
.locate_data = none_locate_data,
.respond_to_challenge = none_respond_to_challenge,
.verify_response = none_verify_response,
.clear = none_clear,
......
......@@ -98,7 +98,7 @@ void rxrpc_process_local_events(struct rxrpc_local *local)
switch (sp->hdr.type) {
case RXRPC_PACKET_TYPE_VERSION:
if (skb_copy_bits(skb, 0, &v, 1) < 0)
if (skb_copy_bits(skb, sp->offset, &v, 1) < 0)
return;
_proto("Rx VERSION { %02x }", v);
if (v == 0)
......
......@@ -77,7 +77,6 @@ static struct rxrpc_local *rxrpc_alloc_local(const struct sockaddr_rxrpc *srx)
INIT_WORK(&local->processor, rxrpc_local_processor);
INIT_HLIST_HEAD(&local->services);
init_rwsem(&local->defrag_sem);
skb_queue_head_init(&local->accept_queue);
skb_queue_head_init(&local->reject_queue);
skb_queue_head_init(&local->event_queue);
local->client_conns = RB_ROOT;
......@@ -308,7 +307,6 @@ static void rxrpc_local_destroyer(struct rxrpc_local *local)
/* At this point, there should be no more packets coming in to the
* local endpoint.
*/
rxrpc_purge_queue(&local->accept_queue);
rxrpc_purge_queue(&local->reject_queue);
rxrpc_purge_queue(&local->event_queue);
......@@ -332,11 +330,6 @@ static void rxrpc_local_processor(struct work_struct *work)
if (atomic_read(&local->usage) == 0)
return rxrpc_local_destroyer(local);
if (!skb_queue_empty(&local->accept_queue)) {
rxrpc_accept_incoming_calls(local);
again = true;
}
if (!skb_queue_empty(&local->reject_queue)) {
rxrpc_reject_packets(local);
again = true;
......
......@@ -50,7 +50,7 @@ unsigned int rxrpc_idle_ack_delay = 0.5 * HZ;
* limit is hit, we should generate an EXCEEDS_WINDOW ACK and discard further
* packets.
*/
unsigned int rxrpc_rx_window_size = 32;
unsigned int rxrpc_rx_window_size = RXRPC_RXTX_BUFF_SIZE - 46;
/*
* Maximum Rx MTU size. This indicates to the sender the size of jumbo packet
......
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.