1. 06 July 2016, 11 commits
    • rxrpc: Move data_ready peer lookup into rxrpc_find_connection() · 1291e9d1
      Authored by David Howells
      Move the peer lookup done in input.c by data_ready into
      rxrpc_find_connection().
      Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Maintain an extra ref on a conn for the cache list · 001c1122
      Authored by David Howells
      Overhaul the usage count accounting for the rxrpc_connection struct to make
      it easier to implement RCU access from the data_ready handler.
      
      The problem is that currently we're using a lock to prevent the garbage
      collector from trying to clean up a connection that we're contemplating
      unidling.  We could just stick incoming packets on the connection we find,
      but we've then got a problem that we may race when dispatching a work item
      to process it as we need to give that a ref to prevent the rxrpc_connection
      struct from disappearing in the meantime.
      
      Further, incoming packets may get discarded if attached to an
      rxrpc_connection struct that is going away.  Whilst this is not a total
      disaster - the client will presumably resend - it would delay processing of
      the call.  This would affect the AFS client filesystem's service manager
      operation.
      
      To this end:
      
       (1) We now maintain an extra count on the connection usage count whilst it
           is on the connection list.  This means it is not in use when its
           refcount is 1.
      
       (2) When trying to reuse an old connection, we only increment the refcount
           if it is greater than 0.  If it is 0, we replace it in the tree with a
           new candidate connection.
      
       (3) Two connection flags are added to indicate whether or not a connection
           is in the local's client connection tree (used by sendmsg) or the
           peer's service connection tree (used by data_ready).  This makes sure
           that we don't try and remove a connection if it got replaced.
      
           The flags are tested under lock with the removal operation to prevent
           the reaper from killing the rxrpc_connection struct whilst someone
           else is trying to effect a replacement.
      
           This could probably be alleviated by using memory barriers between the
           flag set/test and the rb_tree ops.  The rb_tree op would still need to
           be under the lock, however.
      
       (4) When trying to reap an old connection, we try to flip the usage count
           from 1 to 0.  If it's not 1 at that point, then it must've come back
           to life temporarily and we ignore it.
      Signed-off-by: David Howells <dhowells@redhat.com>
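
      As a rough sketch of the refcount discipline described in (1), (2) and (4)
      above (hypothetical names and a simplified struct, not the actual rxrpc
      code): reuse only succeeds while the count is non-zero, and the reaper only
      wins if it can flip the count from 1 (the list's own ref) to 0.

          #include <linux/atomic.h>

          struct sketch_conn {
              atomic_t usage;    /* held at 1 by the conn list while the conn is idle */
          };

          /* Reuse an old connection: fails if the reaper already dropped it to 0. */
          static bool sketch_conn_get_for_reuse(struct sketch_conn *conn)
          {
              return atomic_inc_not_zero(&conn->usage);
          }

          /* Reap: only destroy if the count can be flipped from 1 (list-only) to 0. */
          static bool sketch_conn_try_reap(struct sketch_conn *conn)
          {
              return atomic_cmpxchg(&conn->usage, 1, 0) == 1;
          }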
    • rxrpc: Split service connection code out into its own file · 7877a4a4
      Authored by David Howells
      Split the service-specific connection code out into its own file.  The
      client-specific code has already been split out.  This will leave just the
      common code in the original file.
      Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Split client connection code out into its own file · c6d2b8d7
      Authored by David Howells
      Split the client-specific connection code out into its own file.  It will
      behave somewhat differently from the service-specific connection code, so
      it makes sense to separate them.
      Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Call channels should have separate call number spaces · a1399f8b
      Authored by David Howells
      Each channel on a connection has a separate, independent number space from
      which to allocate callNumber values.  It is entirely possible, for example,
      to have a connection with four active calls, each with call number 1.
      
      Note that the callNumber values for any particular channel don't have to
      start at 1, but they are supposed to increment monotonically for that
      channel from a client's perspective and may not be reused once the call
      number is transmitted (until the epoch cycles all the way back round).
      
      Currently, however, call numbers are allocated on a per-connection basis
      and, further, are held in an rb-tree.  The rb-tree is redundant as the four
      channel pointers in the rxrpc_connection struct are entirely capable of
      pointing to all the calls currently in progress on a connection.
      
      To this end, make the following changes:
      
       (1) Handle call number allocation independently per channel.
      
       (2) Get rid of the conn->calls rb-tree.  This is overkill as a connection
           may have a maximum of four calls in progress at any one time.  Use the
           pointers in the channels[] array instead, indexed by the channel
           number from the packet.
      
       (3) For each channel, save the result of the last call that was in
           progress on that channel in conn->channels[] so that the final ACK or
           ABORT packet can be replayed if necessary.  Any call earlier than that
           is just ignored.  If we've seen the next call number in a packet, the
           last one is most definitely defunct.
      
       (4) When generating a RESPONSE packet for a connection, the call number
           counter for each channel must be included in it.
      
       (5) When parsing a RESPONSE packet for a connection, the call number
           counters contained therein should be used to set the minimum expected
           call numbers on each channel.
      
      To do in future commits:
      
       (1) Replay terminal packets based on the last call stored in
           conn->channels[].
      
       (2) Connections should be retired before the callNumber space on any
           channel runs out.
      
       (3) A server is expected to disregard or reject any new incoming call that
           has a call number less than the current call number counter.  The call
           number counter for that channel must be advanced to the new call
           number.
      
           Note that the server cannot just require that the next call that it
           sees on a channel be exactly the call number counter + 1 because then
           there's a scenario that could cause a problem: The client transmits a
           packet to initiate a connection, the network goes out, the server
           sends an ACK (which gets lost), the client sends an ABORT (which also
           gets lost); the network then reconnects, the client then reuses the
           call number for the next call (it doesn't know the server already saw
           the call number), but the server thinks it already has the first
           packet of this call (it doesn't know that the client doesn't know that
           it saw the call number the first time).
      Signed-off-by: David Howells <dhowells@redhat.com>
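
      A minimal sketch of the per-channel layout the commit describes, replacing
      the conn->calls rb-tree with a channels[] array; the type and field names
      here are illustrative, not the real rxrpc definitions.

          #include <linux/types.h>

          #define SKETCH_MAXCALLS 4              /* four call slots per connection */

          struct sketch_channel {
              void *call;                        /* active call on this channel, if any */
              u32   call_counter;                /* highest call number used/seen here */
          };

          struct sketch_connection {
              struct sketch_channel channels[SKETCH_MAXCALLS];
          };

          /* Route an incoming packet by channel index instead of an rb-tree lookup. */
          static void *sketch_find_call(struct sketch_connection *conn,
                                        unsigned int channel, u32 call_id)
          {
              struct sketch_channel *chan =
                  &conn->channels[channel & (SKETCH_MAXCALLS - 1)];

              if (call_id < chan->call_counter)
                  return NULL;                   /* earlier call: candidate for terminal
                                                  * ACK/ABORT replay, otherwise ignored */
              return chan->call;
          }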
    • rxrpc: Add RCU destruction for connections and calls · dee46364
      Authored by David Howells
      Add RCU destruction for connections and calls as the RCU lookup from the
      transport socket data_ready handler is going to come along shortly.
      
      Whilst we're at it, move the cleanup workqueue flushing and RCU barrierage
      into the destruction code for the objects that need it (locals and
      connections) and add the extra RCU barrier required for connection cleanup.
      Signed-off-by: David Howells <dhowells@redhat.com>
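
      A minimal sketch of RCU-deferred destruction of the kind the commit adds,
      so that a lockless data_ready lookup can never see an object freed under
      it (hypothetical type, not the rxrpc structs):

          #include <linux/rcupdate.h>
          #include <linux/slab.h>

          struct sketch_obj {
              struct rcu_head rcu;
              /* ... payload ... */
          };

          static void sketch_obj_rcu_free(struct rcu_head *rcu)
          {
              kfree(container_of(rcu, struct sketch_obj, rcu));
          }

          static void sketch_obj_destroy(struct sketch_obj *obj)
          {
              /* Defer the kfree() until all current RCU readers have finished. */
              call_rcu(&obj->rcu, sketch_obj_rcu_free);
          }

      A cleanup path that tears the module down would additionally need
      rcu_barrier() to wait for any such callbacks still pending, which is the
      extra barrier the commit mentions.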
    • rxrpc: Release a call's connection ref on call disconnection · e653cfe4
      Authored by David Howells
      When a call is disconnected, clear the call's pointer to the connection and
      release the associated ref on that connection.  This means that the call no
      longer pins the connection and the connection can be discarded even before
      the call is.
      
      As the code currently stands, the call struct is effectively pinned by
      userspace until userspace has enacted a recvmsg() to retrieve the final
      call state as sk_buffs on the receive queue pin the call to which they're
      related because:
      
       (1) The rxrpc_call struct contains the userspace ID that recvmsg() has to
           include in the control message buffer to indicate which call is being
           referred to.  This ID must remain valid until the terminal packet is
           completely read and must be invalidated immediately at that point as
           userspace is entitled to immediately reuse it.
      
       (2) The final ACK to the reply to a client call isn't sent until the last
           data packet is entirely read (it's probably worth altering this in
           future to send the ACK as soon as all the data has been received).
      
      
      This change requires a bit of rearrangement to make sure that the call
      isn't going to try and access the connection again after protocol
      completion:
      
       (1) Delete the error link earlier when we're releasing the call.  Possibly
           network errors should be distributed via connections at the cost of
           adding in an access to the rxrpc_connection struct.
      
       (2) Remove the call from the connection's call tree before disconnecting
           the call.  The call tree needs to be removed anyway and incoming
           packets delivered by channel pointer instead.
      
       (3) The release call event should be considered last after all other
           events have been processed so that we don't need access to the
           connection again.
      
       (4) Move the channel_lock taking from rxrpc_release_call() to
           rxrpc_disconnect_call() where it will be required in future.
      Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Turn connection #defines into enums and put outside struct def · bba304db
      Authored by David Howells
      Turn the connection event and state #define lists into enums and move
      outside of the struct definition.
      
      Whilst we're at it, change _SERVER to _SERVICE in those identifiers and add
      EV_ into the event name to distinguish them from flags and states.
      
      Also add a symbol indicating the number of states and use that in the state
      text array.
      Signed-off-by: David Howells <dhowells@redhat.com>
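
      The general shape of the change, with illustrative identifiers rather than
      the exact rxrpc names: an enum replaces the #define list, _SERVICE replaces
      _SERVER, events carry EV_, and a trailing count symbol sizes the state-name
      array.

          enum sketch_conn_event {
              SKETCH_CONN_EV_CHALLENGE,      /* EV_ marks events, distinct from states/flags */
              SKETCH_CONN_EV_ABORT,
          };

          enum sketch_conn_state {
              SKETCH_CONN_CLIENT,
              SKETCH_CONN_SERVICE,           /* was *_SERVER */
              SKETCH_CONN__NR_STATES         /* number of states, for the text array below */
          };

          static const char *const sketch_conn_states[SKETCH_CONN__NR_STATES] = {
              [SKETCH_CONN_CLIENT]  = "Client",
              [SKETCH_CONN_SERVICE] = "Service",
          };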
    • rxrpc: Avoid using stack memory in SG lists in rxkad · a263629d
      Authored by Herbert Xu
      rxkad uses stack memory in SG lists which would not work if stacks were
      allocated from vmalloc memory.  In fact, in most cases this isn't even
      necessary as the stack memory ends up getting copied over to kmalloc
      memory.
      
      This patch eliminates all the unnecessary stack memory uses by supplying
      the final destination directly to the crypto API.  In two instances where a
      temporary buffer is actually needed we also switch to using a scratch area in
      the rxrpc_call struct (only one DATA packet will be being secured or
      verified at a time).
      
      Finally there is no need to split a split-page buffer into two SG entries
      so code dealing with that has been removed.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: Andy Lutomirski <luto@kernel.org>
      Signed-off-by: David Howells <dhowells@redhat.com>
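
      A sketch of the general pattern: point the scatterlist at kmalloc'd (or
      per-call scratch) memory rather than a stack buffer before handing it to
      the crypto API.  Names are illustrative; this is not the rxkad code itself.

          #include <linux/scatterlist.h>
          #include <linux/string.h>

          static void sketch_prep_sg(struct scatterlist *sg, void *scratch,
                                     const void *data, size_t len)
          {
              /* scratch must be kmalloc'd or embedded in a long-lived struct,
               * never on the stack, or this breaks once stacks are vmalloc'd. */
              memcpy(scratch, data, len);
              sg_init_one(sg, scratch, len);
              /* ... hand sg to the skcipher/hash request ... */
          }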
    • rxrpc: Check the source of a packet to a client conn · 689f4c64
      Authored by David Howells
      When looking up a client connection to which to route a packet, we need to
      check that the packet came from the correct source so that a peer can't try
      to muck around with another peer's connection.
      Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Fix some sparse errors · 88b99d0b
      Authored by David Howells
      Fix the following sparse errors:
      
      ../net/rxrpc/conn_object.c:77:17: warning: incorrect type in assignment (different base types)
      ../net/rxrpc/conn_object.c:77:17:    expected restricted __be32 [usertype] call_id
      ../net/rxrpc/conn_object.c:77:17:    got unsigned int [unsigned] [usertype] call_id
      ../net/rxrpc/conn_object.c:84:21: warning: restricted __be32 degrades to integer
      ../net/rxrpc/conn_object.c:86:26: warning: restricted __be32 degrades to integer
      ../net/rxrpc/conn_object.c:357:15: warning: incorrect type in assignment (different base types)
      ../net/rxrpc/conn_object.c:357:15:    expected restricted __be32 [usertype] epoch
      ../net/rxrpc/conn_object.c:357:15:    got unsigned int [unsigned] [usertype] epoch
      ../net/rxrpc/conn_object.c:369:21: warning: restricted __be32 degrades to integer
      ../net/rxrpc/conn_object.c:371:26: warning: restricted __be32 degrades to integer
      ../net/rxrpc/conn_object.c:411:21: warning: restricted __be32 degrades to integer
      ../net/rxrpc/conn_object.c:413:26: warning: restricted __be32 degrades to integer
      Signed-off-by: David Howells <dhowells@redhat.com>
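
      The kind of change that silences these warnings, sketched with illustrative
      fields: keep __be32 for on-the-wire values and convert explicitly at the
      assignment or comparison boundary.

          #include <linux/types.h>
          #include <asm/byteorder.h>

          struct sketch_wire_ids {
              __be32 epoch;                      /* big-endian on the wire */
              __be32 call_id;
          };

          static void sketch_fill(struct sketch_wire_ids *w, u32 epoch, u32 call_id)
          {
              w->epoch   = cpu_to_be32(epoch);   /* convert; don't assign u32 to __be32 */
              w->call_id = cpu_to_be32(call_id);
          }

          static bool sketch_epoch_matches(const struct sketch_wire_ids *w, u32 epoch)
          {
              return be32_to_cpu(w->epoch) == epoch;   /* compare in host order */
          }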
  2. 22 June 2016, 8 commits
    • rxrpc: Kill off the rxrpc_transport struct · aa390bbe
      Authored by David Howells
      The rxrpc_transport struct is now redundant, given that the rxrpc_peer
      struct is now per peer port rather than per peer host, so get rid of it.
      
      Service connection lists are transferred to the rxrpc_peer struct, as is
      the conn_lock.  Previous patches moved the client connection handling out
      of the rxrpc_transport struct and discarded the connection bundling code.
      Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Kill the client connection bundle concept · 999b69f8
      Authored by David Howells
      Kill off the concept of maintaining a bundle of connections to a particular
      target service in order to provide more than four call slots for that
      service (there are four call slots per connection).
      
      This will make cleaning up the connection handling code easier and
      facilitate removal of the rxrpc_transport struct.  Bundling can be
      reintroduced later if necessary.
      Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Provide more refcount helper functions · 5627cc8b
      Authored by David Howells
      Provide refcount helper functions for connections so that the code doesn't
      touch local or connection usage counts directly.
      
      Also make it such that local and peer put functions can take a NULL
      pointer.
      Signed-off-by: David Howells <dhowells@redhat.com>
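
      A sketch of the helper shape being introduced, with hypothetical names: the
      put function tolerates a NULL pointer so error paths can call it
      unconditionally.

          #include <linux/atomic.h>
          #include <linux/slab.h>

          struct sketch_peer {
              atomic_t usage;
          };

          static void sketch_put_peer(struct sketch_peer *peer)
          {
              if (!peer)
                  return;                        /* NULL is accepted and ignored */
              if (atomic_dec_and_test(&peer->usage))
                  kfree(peer);                   /* stand-in for the real destructor */
          }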
    • rxrpc: Use IDR to allocate client conn IDs on a machine-wide basis · 4a3388c8
      Authored by David Howells
      Use the IDR facility to allocate client connection IDs on a machine-wide
      basis so that each client connection has a unique identifier.  When the
      connection ID space wraps, we advance the epoch by 1, thereby effectively
      having a 62-bit ID space.  The IDR facility is then used to look up client
      connections during incoming packet routing instead of using an rbtree
      rooted on the transport.
      
      This change allows for the removal of the transport in the future and also
      means that client connections can be looked up directly in the data-ready
      handler by connection ID.
      
      The ID management code is placed in a new file, conn-client.c, to which all
      the client connection-specific code will eventually move.
      
      Note that the IDR tree gets very expensive on memory if the connection IDs
      are widely scattered throughout the number space, so we shall need to
      retire connections that have, say, an ID more than four times the maximum
      number of client conns away from the current allocation point to try and
      keep the IDs concentrated.  We will also need to retire connections from an
      old epoch.
      
      Also note that, for the moment, a pointer to the transport has to be passed
      through into the ID allocation function so that we can take a BH lock to
      prevent a locking issue against in-BH lookup of client connections.  This
      will go away later when RCU is used for server connections also.
      Signed-off-by: David Howells <dhowells@redhat.com>
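
      A sketch of machine-wide ID allocation with the IDR facility, roughly along
      the lines the commit describes (illustrative names; the BH lock matches the
      note about in-BH lookups):

          #include <linux/gfp.h>
          #include <linux/idr.h>
          #include <linux/spinlock.h>

          static DEFINE_IDR(sketch_conn_ids);
          static DEFINE_SPINLOCK(sketch_conn_id_lock);

          static int sketch_get_conn_id(void *conn)
          {
              int id;

              idr_preload(GFP_KERNEL);
              spin_lock_bh(&sketch_conn_id_lock);
              /* Cyclic allocation keeps IDs clustered near the allocation point. */
              id = idr_alloc_cyclic(&sketch_conn_ids, conn, 1, 0, GFP_NOWAIT);
              spin_unlock_bh(&sketch_conn_id_lock);
              idr_preload_end();

              return id;                         /* new conn ID, or negative errno */
          }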
    • rxrpc: rxrpc_connection_lock shouldn't be a BH lock, but conn_lock is · b3f57504
      Authored by David Howells
      rxrpc_connection_lock shouldn't be accessed as a BH-excluding lock.  It's
      only accessed in a few places and none of those are in BH-context.
      
      rxrpc_transport::conn_lock, however, *is* a BH-excluding lock and should be
      accessed so consistently.
      Signed-off-by: David Howells <dhowells@redhat.com>
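
      The distinction being made, sketched: a lock only ever taken in process
      context can use plain spin_lock(), while one also taken from BH context
      (such as a data_ready path) must use the _bh variants consistently.

          #include <linux/spinlock.h>

          static DEFINE_SPINLOCK(sketch_process_only_lock);   /* never taken in BH */
          static DEFINE_SPINLOCK(sketch_shared_with_bh_lock);  /* also taken in softirq */

          static void sketch_process_context_path(void)
          {
              spin_lock(&sketch_process_only_lock);            /* rxrpc_connection_lock case */
              /* ... */
              spin_unlock(&sketch_process_only_lock);

              spin_lock_bh(&sketch_shared_with_bh_lock);       /* conn_lock case */
              /* ... */
              spin_unlock_bh(&sketch_shared_with_bh_lock);
          }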
    • rxrpc: Pass sk_buff * rather than rxrpc_host_header * to functions · 42886ffe
      Authored by David Howells
      Pass a pointer to struct sk_buff rather than struct rxrpc_host_header to
      functions so that they can in the future get at transport protocol parameters
      rather than just RxRPC parameters.
      Signed-off-by: David Howells <dhowells@redhat.com>
    • rxrpc: Fix exclusive connection handling · cc8feb8e
      Authored by David Howells
      "Exclusive connections" are meant to be used for a single client call and
      then scrapped.  The idea is to limit the use of the negotiated security
      context.  The current code, however, isn't doing this: it is instead
      restricting the socket to a single virtual connection and doing all the
      calls over that.
      
      This is changed such that the socket no longer maintains a special virtual
      connection over which it will do all the calls, but rather gets a new one
      each time a new exclusive call is made.
      
      Further, using a socket option for this is a poor choice.  It should be
      done on sendmsg with a control message marker instead so that calls can be
      marked exclusive individually.  To that end, add RXRPC_EXCLUSIVE_CALL
      which, if passed to sendmsg() as a control message element, will cause the
      call to be done on a single-use connection.
      
      The socket option (RXRPC_EXCLUSIVE_CONNECTION) still exists and, if set,
      will override any lack of RXRPC_EXCLUSIVE_CALL being specified so that
      programs using the setsockopt() will appear to work the same.
      Signed-off-by: David Howells <dhowells@redhat.com>
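
      A userspace sketch of marking a single call exclusive via a control
      message.  It assumes a zero-length RXRPC_EXCLUSIVE_CALL element at level
      SOL_RXRPC alongside the usual RXRPC_USER_CALL_ID; consult linux/rxrpc.h and
      the rxrpc documentation for the exact requirements.

          #include <string.h>
          #include <sys/socket.h>
          #include <linux/rxrpc.h>

          static ssize_t send_exclusive(int fd, const void *buf, size_t len,
                                        unsigned long user_call_id)
          {
              union {
                  char buf[CMSG_SPACE(sizeof(unsigned long)) + CMSG_SPACE(0)];
                  struct cmsghdr align;          /* force cmsghdr alignment */
              } control;
              struct iovec iov = { .iov_base = (void *)buf, .iov_len = len };
              struct msghdr msg = {
                  .msg_iov        = &iov,
                  .msg_iovlen     = 1,
                  .msg_control    = control.buf,
                  .msg_controllen = sizeof(control.buf),
              };
              struct cmsghdr *cmsg;

              memset(&control, 0, sizeof(control));
              cmsg = CMSG_FIRSTHDR(&msg);
              cmsg->cmsg_level = SOL_RXRPC;
              cmsg->cmsg_type  = RXRPC_USER_CALL_ID;
              cmsg->cmsg_len   = CMSG_LEN(sizeof(user_call_id));
              memcpy(CMSG_DATA(cmsg), &user_call_id, sizeof(user_call_id));

              cmsg = CMSG_NXTHDR(&msg, cmsg);
              cmsg->cmsg_level = SOL_RXRPC;
              cmsg->cmsg_type  = RXRPC_EXCLUSIVE_CALL;   /* mark this call only */
              cmsg->cmsg_len   = CMSG_LEN(0);

              return sendmsg(fd, &msg, 0);
          }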
    • rxrpc: Use structs to hold connection params and protocol info · 19ffa01c
      Authored by David Howells
      Define and use a structure to hold connection parameters.  This makes it
      easier to pass multiple connection parameters around.
      
      Define and use a structure to hold protocol information used to hash a
      connection for lookup on an incoming packet.  Most of these fields will be
      disposed of eventually, including the duplicate local pointer.
      
      Whilst we're at it rename "proto" to "family" when referring to a protocol
      family.
      Signed-off-by: David Howells <dhowells@redhat.com>
  3. 13 June 2016, 1 commit
    • rxrpc: Rename files matching ar-*.c to get rid of the "ar-" prefix · 8c3e34a4
      Authored by David Howells
      Rename files matching net/rxrpc/ar-*.c to get rid of the "ar-" prefix.
      This will aid splitting those files by making it easier to come up with new
      names.
      
      Note that not all files are simply renamed from ar-X.c to X.c.  The
      following exceptions are made:
      
       (*) ar-call.c -> call_object.c
           ar-ack.c -> call_event.c
      
           call_object.c is going to contain the core of the call object
           handling.  Call event handling is all going to be in call_event.c.
      
       (*) ar-accept.c -> call_accept.c
      
           Incoming call handling is going to be here.
      
       (*) ar-connection.c -> conn_object.c
           ar-connevent.c -> conn_event.c
      
           The former file is going to have the basic connection object handling,
           but there will likely be some differentiation between client
           connections and service connections in additional files later.  The
           latter file will have all the connection-level event handling.
      
       (*) ar-local.c -> local_object.c
      
           This will have the local endpoint object handling code.  The local
           endpoint event handling code will later be split out into
           local_event.c.
      
       (*) ar-peer.c -> peer_object.c
      
           This will have the peer endpoint object handling code.  Peer event
           handling code will be placed in peer_event.c (for the moment, there is
           none).
      
       (*) ar-error.c -> peer_event.c
      
           This will become the peer event handling code, though for the moment
           it's actually driven from the local endpoint's perspective.
      
      Note that I haven't renamed ar-transport.c to transport_object.c as the
      intention is to delete it when the rxrpc_transport struct is excised.
      
      The only file that actually has its contents changed is net/rxrpc/Makefile.
      
      net/rxrpc/ar-internal.h will need its section marker comments updating, but
      I'll do that in a separate patch to make it easier for git to follow the
      history across the rename.  I may also want to rename ar-internal.h at some
      point - but that would mean updating all the #includes and I'd rather do
      that in a separate step.
      
      Signed-off-by: David Howells <dhowells@redhat.com>
  4. 10 June 2016, 1 commit
    • rxrpc: Simplify connect() implementation and simplify sendmsg() op · 2341e077
      Authored by David Howells
      Simplify the RxRPC connect() implementation.  It will just note the
      destination address it is given, and if a sendmsg() comes along with no
      address, this will be assigned as the address.  No transport struct will be
      held internally, which will allow us to remove this later.
      
      Simplify sendmsg() also.  Whilst a call is active, userspace refers to it
      by a private unique user ID specified in a control message.  When sendmsg()
      sees a user ID that doesn't map to an extant call, it creates a new call
      for that user ID and attempts to add it.  If, when we try to add it, the
      user ID is now registered, we reject the message with -EEXIST.  We
      should never see this situation unless two threads are racing, trying to
      create a call with the same ID - which would be an error.
      
      It also isn't required to provide sendmsg() with an address - provided the
      control message data holds a user ID that maps to a currently active call.
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  5. 04 June 2016, 1 commit
    • rxrpc: Use pr_<level> and pr_fmt, reduce object size a few KB · 9b6d5398
      Authored by Joe Perches
      Use the more common kernel logging style and reduce object size.
      
      The logging message prefix changes from a mixture of
      "RxRPC:" and "RXRPC:" to "af_rxrpc: ".
      
      $ size net/rxrpc/built-in.o*
         text	   data	    bss	    dec	    hex	filename
        64172	   1972	   8304	  74448	  122d0	net/rxrpc/built-in.o.new
        67512	   1972	   8304	  77788	  12fdc	net/rxrpc/built-in.o.old
      
      Miscellanea:
      
      o Consolidate the ASSERT macros to use a single pr_err call with
        decimal and hexadecimal output and a stringified #OP argument
      Signed-off-by: Joe Perches <joe@perches.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
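
      The usual pattern behind this change, sketched: define pr_fmt before any
      includes so every pr_<level>() call in the file picks up one consistent
      prefix instead of repeating it at each call site.

          #define pr_fmt(fmt) "af_rxrpc: " fmt

          #include <linux/kernel.h>
          #include <linux/printk.h>

          static void sketch_log_example(void)
          {
              pr_warn("sendmsg failed: %d\n", -EINVAL);
              /* emits: "af_rxrpc: sendmsg failed: -22" */
          }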
  6. 12 April 2016, 2 commits
  7. 14 March 2016, 1 commit
  8. 04 March 2016, 1 commit
    • rxrpc: Keep the skb private record of the Rx header in host byte order · 0d12f8a4
      Authored by David Howells
      Currently, a copy of the Rx packet header is copied into the sk_buff
      private data so that we can advance the pointer into the buffer,
      potentially discarding the original.  At the moment, this copy is held in
      network byte order, but this means we're doing a lot of unnecessary
      translations.
      
      The reasons it was done this way are that we need the values in network
      byte order occasionally and we can use the copy, slightly modified, as part
      of an iov array when sending an ack or an abort packet.
      
      However, it seems more reasonable on review that it would be better kept in
      host byte order and that we make up a new header when we want to send
      another packet.
      
      To this end, rename the original header struct to rxrpc_wire_header (with
      BE fields) and institute a variant called rxrpc_host_header that has host
      order fields.  Change the struct in the sk_buff private data into an
      rxrpc_host_header and translate the values when filling it in.
      
      This further allows us to keep values kept in various structures in host
      byte order rather than network byte order and allows removal of some fields
      that are byteswapped duplicates.
      Signed-off-by: David Howells <dhowells@redhat.com>
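
      A sketch of the split the commit describes, with illustrative fields: the
      on-the-wire struct keeps big-endian types, while the private copy held in
      the sk_buff uses host order and is converted once on receive.

          #include <linux/types.h>
          #include <asm/byteorder.h>

          struct sketch_wire_header {            /* as transmitted */
              __be32 epoch;
              __be32 cid;
              __be32 callNumber;
          };

          struct sketch_host_header {            /* kept in skb private data */
              u32 epoch;
              u32 cid;
              u32 callNumber;
          };

          static void sketch_to_host(struct sketch_host_header *h,
                                     const struct sketch_wire_header *w)
          {
              h->epoch      = be32_to_cpu(w->epoch);
              h->cid        = be32_to_cpu(w->cid);
              h->callNumber = be32_to_cpu(w->callNumber);
          }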
  9. 07 November 2015, 1 commit
    • mm, page_alloc: distinguish between being unable to sleep, unwilling to sleep and avoiding waking kswapd · d0164adc
      Authored by Mel Gorman
      __GFP_WAIT has been used to identify atomic context in callers that hold
      spinlocks or are in interrupts.  They are expected to be high priority and
      have access to one of two watermarks lower than "min" which can be referred
      to as the "atomic reserve".  __GFP_HIGH users get access to the first
      lower watermark and can be called the "high priority reserve".
      
      Over time, callers had a requirement to not block when fallback options
      were available.  Some have abused __GFP_WAIT leading to a situation where
      an optimistic allocation with a fallback option can access atomic
      reserves.
      
      This patch uses __GFP_ATOMIC to identify callers that are truly atomic,
      cannot sleep and have no alternative.  High priority users continue to use
      __GFP_HIGH.  __GFP_DIRECT_RECLAIM identifies callers that can sleep and
      are willing to enter direct reclaim.  __GFP_KSWAPD_RECLAIM identifies
      callers that want to wake kswapd for background reclaim.  __GFP_WAIT is
      redefined as a caller that is willing to enter direct reclaim and wake
      kswapd for background reclaim.
      
      This patch then converts a number of sites:
      
      o __GFP_ATOMIC is used by callers that are high priority and have memory
        pools for those requests. GFP_ATOMIC uses this flag.
      
      o Callers that have a limited mempool to guarantee forward progress clear
        __GFP_DIRECT_RECLAIM but keep __GFP_KSWAPD_RECLAIM. bio allocations fall
        into this category where kswapd will still be woken but atomic reserves
        are not used as there is a one-entry mempool to guarantee progress.
      
      o Callers that are checking if they are non-blocking should use the
        helper gfpflags_allow_blocking() where possible. This is because
        checking for __GFP_WAIT as was done historically now can trigger false
        positives. Some exceptions like dm-crypt.c exist where the code intent
        is clearer if __GFP_DIRECT_RECLAIM is used instead of the helper due to
        flag manipulations.
      
      o Callers that built their own GFP flags instead of starting with GFP_KERNEL
        and friends now also need to specify __GFP_KSWAPD_RECLAIM.
      
      The first key hazard to watch out for is callers that removed __GFP_WAIT
      and were depending on access to atomic reserves for inconspicuous reasons.
      In some cases it may be appropriate for them to use __GFP_HIGH.
      
      The second key hazard is callers that assembled their own combination of
      GFP flags instead of starting with something like GFP_KERNEL.  They may
      now wish to specify __GFP_KSWAPD_RECLAIM.  It's almost certainly harmless
      if it's missed in most cases as other activity will wake kswapd.
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Vitaly Wool <vitalywool@gmail.com>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
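
      A sketch of the caller-side helper usage the patch recommends: test
      gfpflags_allow_blocking() rather than poking at __GFP_WAIT to decide
      whether the allocation context may sleep.

          #include <linux/errno.h>
          #include <linux/gfp.h>
          #include <linux/mutex.h>

          static DEFINE_MUTEX(sketch_lock);

          static int sketch_prepare(gfp_t gfp)
          {
              if (gfpflags_allow_blocking(gfp)) {
                  mutex_lock(&sketch_lock);      /* sleeping is allowed */
              } else if (!mutex_trylock(&sketch_lock)) {
                  return -EAGAIN;                /* atomic caller: must not sleep */
              }
              /* ... */
              mutex_unlock(&sketch_lock);
              return 0;
          }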
  10. 21 September 2015, 1 commit
  11. 27 February 2014, 1 commit
  12. 26 January 2014, 1 commit
  13. 30 March 2010, 1 commit
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h · 5a0e3ad6
      Authored by Tejun Heo
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include those
      headers directly instead of assuming availability.  As this conversion
      needs to touch a large number of source files, the following script is
      used as the basis of conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following.
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there.  ie. if only gfp is used,
        gfp.h, if slab is used, slab.h.
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to put the new include such that its order conforms
        to its surroundings.  It's put in the include block which contains
        core kernel includes, in the same order that the rest are ordered -
        alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
        doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints out
        an error message indicating which .h file needs to be added to the
        file.
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed manual addition, and for others adding it to an implementation
         .h or embedding .c file was more appropriate.  This step added
         inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored as stuff from gfp.h was usually
        widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on archs to make things
         build (like ipr on powerpc/64 which failed due to missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as bisection point.
      
      Given the fact that I had only a couple of failures from tests on step
      6, I'm fairly confident about the coverage of this conversion patch.
      If there is a breakage, it's likely to be something in one of the arch
      headers which should be easily discoverable on most builds of
      the specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
  14. 17 June 2009, 1 commit
  15. 22 May 2009, 1 commit
  16. 11 December 2008, 1 commit
  17. 29 January 2008, 1 commit
  18. 27 July 2007, 1 commit
  19. 26 July 2007, 1 commit
    • Cleanup non-arch xtime uses, use get_seconds() or current_kernel_time(). · 2c6b47de
      Authored by john stultz
      This avoids use of the kernel-internal "xtime" variable directly outside
      of the actual time-related functions.  Instead, use the helper functions
      that we already have available to us.
      
      This doesn't actually change any behaviour, but this will allow us to
      fix the fact that "xtime" isn't updated very often with CONFIG_NO_HZ
      (because much of the realtime information is maintained as separate
      offsets to 'xtime'), which has caused interfaces that use xtime directly
      to get a time that is out of sync with the real-time clock by up to a
      third of a second or so.
      Signed-off-by: John Stultz <johnstul@us.ibm.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
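
      A sketch of the substitution the commit performs: read the time through the
      helpers the commit names rather than through the kernel-internal xtime
      variable.  Function names around them are illustrative.

          #include <linux/time.h>

          static unsigned long sketch_expiry_stamp(unsigned long lifetime_secs)
          {
              /* was: xtime.tv_sec + lifetime_secs */
              return get_seconds() + lifetime_secs;
          }

          static struct timespec sketch_now_coarse(void)
          {
              /* was: a direct read of xtime */
              return current_kernel_time();
          }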
  20. 16 June 2007, 1 commit
  21. 27 April 2007, 2 commits
    • [AF_RXRPC]: Add an interface to the AF_RXRPC module for the AFS filesystem to use · 651350d1
      Authored by David Howells
      Add an interface to the AF_RXRPC module so that the AFS filesystem module can
      more easily make use of the services available.  AFS still opens a socket but
      then uses the action functions in lieu of sendmsg() and registers an
      interceptor function to grab messages before they're queued on the socket Rx queue.
      
      This permits AFS (or whatever) to:
      
       (1) Avoid the overhead of using the recvmsg() call.
      
       (2) Use different keys directly on individual client calls on one socket
           rather than having to open a whole slew of sockets, one for each key it
           might want to use.
      
       (3) Avoid calling request_key() at the point of issue of a call or opening of
           a socket.  This is done instead by AFS at the point of open(), unlink() or
           other VFS operation and the key handed through.
      
       (4) Request the use of something other than GFP_KERNEL to allocate memory.
      
      Furthermore:
      
       (*) The socket buffer markings used by RxRPC are made available for AFS so
           that it can interpret the cooked RxRPC messages itself.
      
       (*) rxgen (un)marshalling abort codes are made available.
      
      
      The following documentation for the kernel interface is added to
      Documentation/networking/rxrpc.txt:
      
      =========================
      AF_RXRPC KERNEL INTERFACE
      =========================
      
      The AF_RXRPC module also provides an interface for use by in-kernel utilities
      such as the AFS filesystem.  This permits such a utility to:
      
       (1) Use different keys directly on individual client calls on one socket
           rather than having to open a whole slew of sockets, one for each key it
           might want to use.
      
       (2) Avoid having RxRPC call request_key() at the point of issue of a call or
           opening of a socket.  Instead the utility is responsible for requesting a
           key at the appropriate point.  AFS, for instance, would do this during VFS
           operations such as open() or unlink().  The key is then handed through
           when the call is initiated.
      
       (3) Request the use of something other than GFP_KERNEL to allocate memory.
      
       (4) Avoid the overhead of using the recvmsg() call.  RxRPC messages can be
           intercepted before they get put into the socket Rx queue and the socket
           buffers manipulated directly.
      
      To use the RxRPC facility, a kernel utility must still open an AF_RXRPC socket,
      bind an address as appropriate and listen if it's to be a server socket, but
      then it passes this to the kernel interface functions.
      
      The kernel interface functions are as follows:
      
       (*) Begin a new client call.
      
      	struct rxrpc_call *
      	rxrpc_kernel_begin_call(struct socket *sock,
      				struct sockaddr_rxrpc *srx,
      				struct key *key,
      				unsigned long user_call_ID,
      				gfp_t gfp);
      
           This allocates the infrastructure to make a new RxRPC call and assigns
           call and connection numbers.  The call will be made on the UDP port that
           the socket is bound to.  The call will go to the destination address of a
           connected client socket unless an alternative is supplied (srx is
           non-NULL).
      
           If a key is supplied then this will be used to secure the call instead of
           the key bound to the socket with the RXRPC_SECURITY_KEY sockopt.  Calls
           secured in this way will still share connections if at all possible.
      
           The user_call_ID is equivalent to that supplied to sendmsg() in the
           control data buffer.  It is entirely feasible to use this to point to a
           kernel data structure.
      
           If this function is successful, an opaque reference to the RxRPC call is
           returned.  The caller now holds a reference on this and it must be
           properly ended.
      
       (*) End a client call.
      
      	void rxrpc_kernel_end_call(struct rxrpc_call *call);
      
           This is used to end a previously begun call.  The user_call_ID is expunged
           from AF_RXRPC's knowledge and will not be seen again in association with
           the specified call.
      
       (*) Send data through a call.
      
      	int rxrpc_kernel_send_data(struct rxrpc_call *call, struct msghdr *msg,
      				   size_t len);
      
           This is used to supply either the request part of a client call or the
           reply part of a server call.  msg.msg_iovlen and msg.msg_iov specify the
           data buffers to be used.  msg_iov may not be NULL and must point
           exclusively to in-kernel virtual addresses.  msg.msg_flags may be given
           MSG_MORE if there will be subsequent data sends for this call.
      
           The msg must not specify a destination address, control data or any flags
           other than MSG_MORE.  len is the total amount of data to transmit.
      
       (*) Abort a call.
      
      	void rxrpc_kernel_abort_call(struct rxrpc_call *call, u32 abort_code);
      
           This is used to abort a call if it's still in an abortable state.  The
           abort code specified will be placed in the ABORT message sent.
      
       (*) Intercept received RxRPC messages.
      
      	typedef void (*rxrpc_interceptor_t)(struct sock *sk,
      					    unsigned long user_call_ID,
      					    struct sk_buff *skb);
      
      	void
      	rxrpc_kernel_intercept_rx_messages(struct socket *sock,
      					   rxrpc_interceptor_t interceptor);
      
           This installs an interceptor function on the specified AF_RXRPC socket.
           All messages that would otherwise wind up in the socket's Rx queue are
           then diverted to this function.  Note that care must be taken to process
           the messages in the right order to maintain DATA message sequentiality.
      
           The interceptor function itself is provided with the address of the socket
           that's handling the incoming message, the ID assigned by the kernel utility
           to the call, and the socket buffer containing the message.
      
           The skb->mark field indicates the type of message:
      
      	MARK				MEANING
      	===============================	=======================================
      	RXRPC_SKB_MARK_DATA		Data message
      	RXRPC_SKB_MARK_FINAL_ACK	Final ACK received for an incoming call
      	RXRPC_SKB_MARK_BUSY		Client call rejected as server busy
      	RXRPC_SKB_MARK_REMOTE_ABORT	Call aborted by peer
      	RXRPC_SKB_MARK_NET_ERROR	Network error detected
      	RXRPC_SKB_MARK_LOCAL_ERROR	Local error encountered
      	RXRPC_SKB_MARK_NEW_CALL		New incoming call awaiting acceptance
      
           The remote abort message can be probed with rxrpc_kernel_get_abort_code().
           The two error messages can be probed with rxrpc_kernel_get_error_number().
           A new call can be accepted with rxrpc_kernel_accept_call().
      
           Data messages can have their contents extracted with the usual bunch of
           socket buffer manipulation functions.  A data message can be determined to
           be the last one in a sequence with rxrpc_kernel_is_data_last().  When a
           data message has been used up, rxrpc_kernel_data_delivered() should be
           called on it.
      
           Non-data messages should be handed to rxrpc_kernel_free_skb() to dispose
           of.  It is possible to get extra refs on all types of message for later
           freeing, but this may pin the state of a call until the message is finally
           freed.
      
       (*) Accept an incoming call.
      
      	struct rxrpc_call *
      	rxrpc_kernel_accept_call(struct socket *sock,
      				 unsigned long user_call_ID);
      
           This is used to accept an incoming call and to assign it a call ID.  This
           function is similar to rxrpc_kernel_begin_call() and calls accepted must
           be ended in the same way.
      
           If this function is successful, an opaque reference to the RxRPC call is
           returned.  The caller now holds a reference on this and it must be
           properly ended.
      
       (*) Reject an incoming call.
      
      	int rxrpc_kernel_reject_call(struct socket *sock);
      
           This is used to reject the first incoming call on the socket's queue with
           a BUSY message.  -ENODATA is returned if there were no incoming calls.
           Other errors may be returned if the call had been aborted (-ECONNABORTED)
           or had timed out (-ETIME).
      
       (*) Record the delivery of a data message and free it.
      
      	void rxrpc_kernel_data_delivered(struct sk_buff *skb);
      
           This is used to record a data message as having been delivered and to
           update the ACK state for the call.  The socket buffer will be freed.
      
       (*) Free a message.
      
      	void rxrpc_kernel_free_skb(struct sk_buff *skb);
      
           This is used to free a non-DATA socket buffer intercepted from an AF_RXRPC
           socket.
      
       (*) Determine if a data message is the last one on a call.
      
      	bool rxrpc_kernel_is_data_last(struct sk_buff *skb);
      
           This is used to determine if a socket buffer holds the last data message
           to be received for a call (true will be returned if it does, false
           if not).
      
           The data message will be part of the reply on a client call and the
           request on an incoming call.  In the latter case there will be more
           messages, but in the former case there will not.
      
       (*) Get the abort code from an abort message.
      
      	u32 rxrpc_kernel_get_abort_code(struct sk_buff *skb);
      
           This is used to extract the abort code from a remote abort message.
      
       (*) Get the error number from a local or network error message.
      
      	int rxrpc_kernel_get_error_number(struct sk_buff *skb);
      
           This is used to extract the error number from a message indicating either
           a local error occurred or a network error occurred.
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
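
      As a rough usage sketch built only from the calls documented above (begin a
      call, send the request, end the call): MSG_MORE handling and the
      interceptor-driven receive side are omitted, the header path is an
      assumption, and treating an error return from rxrpc_kernel_begin_call() as
      an ERR_PTR is likewise an assumption.

          #include <linux/err.h>
          #include <linux/gfp.h>
          #include <linux/net.h>
          #include <linux/uio.h>
          #include <net/af_rxrpc.h>        /* assumed header for the interface above */

          static int sketch_issue_call(struct socket *sock, struct sockaddr_rxrpc *srx,
                                       struct key *key, void *request, size_t len)
          {
              struct rxrpc_call *call;
              struct kvec iov = { .iov_base = request, .iov_len = len };
              struct msghdr msg = { .msg_flags = 0 };
              int ret;

              call = rxrpc_kernel_begin_call(sock, srx, key,
                                             (unsigned long)request, GFP_KERNEL);
              if (IS_ERR(call))
                  return PTR_ERR(call);

              msg.msg_iov    = (struct iovec *)&iov;   /* in-kernel addresses only */
              msg.msg_iovlen = 1;
              ret = rxrpc_kernel_send_data(call, &msg, len);

              rxrpc_kernel_end_call(call);
              return ret;
          }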
    • [AF_RXRPC]: Provide secure RxRPC sockets for use by userspace and kernel both · 17926a79
      Authored by David Howells
      Provide AF_RXRPC sockets that can be used to talk to AFS servers, or serve
      answers to AFS clients.  KerberosIV security is fully supported.  The patches
      and some example test programs can be found in:
      
      	http://people.redhat.com/~dhowells/rxrpc/
      
      This will eventually replace the old implementation of kernel-only RxRPC
      currently resident in net/rxrpc/.
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>