1. 11 October 2007 (1 commit)
    • [NET]: Support multiple network namespaces with netlink · b4b51029
      Committed by Eric W. Biederman
      Each netlink socket will live in exactly one network namespace;
      this includes the controlling kernel sockets.
      
      This patch updates all of the existing netlink protocols
      to support only the initial network namespace.  Requests
      by clients in other namespaces will get -ECONNREFUSED,
      just as they would if the kernel did not have support for
      that netlink protocol compiled in.
      
      As each netlink protocol is updated to be safe for multiple
      network namespaces, it can register multiple kernel sockets
      to acquire a presence in the rest of the network namespaces.
      
      The implementation in af_netlink is a simple filter applied
      at hash-table insertion and hash-table lookup time.
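
      A minimal sketch of the kind of filter this describes, using
      illustrative helper names rather than the exact af_netlink symbols:

      	/* Sketch only: a namespace-aware hash lookup.  Assumes <net/sock.h>;
      	 * nl_lookup_in_net() and the portid field are illustrative names. */
      	static struct sock *nl_lookup_in_net(struct net *net, u32 portid,
      	                                     struct hlist_head *head)
      	{
      		struct sock *sk;
      
      		sk_for_each(sk, head) {
      			/* Only match sockets created in the requested namespace. */
      			if (sock_net(sk) == net && nlk_sk(sk)->portid == portid)
      				return sk;
      		}
      		/* To a caller in another namespace, a miss here looks just like
      		 * a protocol that was never compiled in. */
      		return NULL;
      	}
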
      Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  2. 19 July 2007 (2 commits)
  3. 06 May 2007 (1 commit)
  4. 26 April 2007 (5 commits)
  5. 13 February 2007 (1 commit)
    • [PATCH] eCryptfs: Public key transport mechanism · 88b4a07e
      Committed by Michael Halcrow
      This is the transport code for public key functionality in eCryptfs.  It
      manages encryption/decryption request queues with a transport mechanism.
      Currently, netlink is the only implemented transport.
      
      Each inode has a unique File Encryption Key (FEK).  In passphrase mode, a File
      Encryption Key Encryption Key (FEKEK) is generated from a salt/passphrase
      combination on mount.  Each FEK is encrypted with this FEKEK and written into
      the header of each file using the packet format specified in RFC 2440.  This is
      all symmetric key encryption, so it can all be done via the kernel crypto API.
      
      These new patches introduce public key encryption of the FEK.  There is no
      asymmetric key encryption support in the kernel crypto API, so eCryptfs pushes
      the FEK encryption and decryption out to a userspace daemon.  After
      considering our requirements and determining the complexity of using various
      transport mechanisms, we settled on netlink for this communication.
      
      eCryptfs stores authentication tokens into the kernel keyring.  These tokens
      correlate with individual keys.  For passphrase mode of operation, the
      authentication token contains the symmetric FEKEK.  For public key, the
      authentication token contains a PKI type and an opaque data blob managed by
      individual PKI modules in userspace.
      
      Each user who opens a file under an eCryptfs partition mounted in public key
      mode must be running a daemon.  That daemon has the user's credentials and has
      access to all of the keys to which the user should have access.  The daemon,
      when started, initializes the pluggable PKI modules available on the system
      and registers itself with the eCryptfs kernel module.  Userspace utilities
      register public key authentication tokens into the user session keyring.
      These authentication tokens correlate key signatures with PKI modules and PKI
      blobs.  The PKI blobs contain PKI-specific information necessary for the PKI
      module to carry out asymmetric key encryption and decryption.
      
      When the eCryptfs module parses the header of an existing file and finds a Tag
      1 (Public Key) packet (see RFC 2440), it reads in the public key identifier
      (signature).  The asymmetrically encrypted FEK is in the Tag 1 packet;
      eCryptfs puts together a decrypt request packet containing the signature and
      the encrypted FEK, then passes it via netlink unicast to the daemon registered
      for current->euid, using the PID that was recorded when the user started the
      daemon.
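
      A rough sketch of what such a unicast could look like on the kernel side;
      the payload layout and function name are assumptions for illustration,
      not the actual fs/ecryptfs code:

      	/* Sketch: unicast a decrypt request (signature + encrypted FEK) to the
      	 * per-user daemon.  ecryptfs_unicast_request() is a hypothetical name. */
      	static int ecryptfs_unicast_request(struct sock *nl_sock, u32 daemon_pid,
      	                                    const void *payload, size_t len)
      	{
      		struct sk_buff *skb;
      		struct nlmsghdr *nlh;
      
      		skb = nlmsg_new(len, GFP_KERNEL);
      		if (!skb)
      			return -ENOMEM;
      		nlh = nlmsg_put(skb, 0, 0, 0 /* request type */, len, 0);
      		if (!nlh) {
      			kfree_skb(skb);
      			return -EMSGSIZE;
      		}
      		memcpy(nlmsg_data(nlh), payload, len);
      		/* Deliver to the daemon registered for current->euid. */
      		return nlmsg_unicast(nl_sock, skb, daemon_pid);
      	}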
      
      The daemon actually just makes calls to libecryptfs, which implements request
      packet parsing and manages PKI modules.  libecryptfs grabs the public key
      authentication token for the given signature from the user session keyring.
      This auth tok tells libecryptfs which PKI module should receive the request.
      libecryptfs then makes a decrypt() call to the PKI module, passing along
      the PKI blob from the auth tok.  The PKI module uses the blob to figure out how
      it should decrypt the data passed to it; it performs the decryption and passes
      the decrypted data back to libecryptfs.  libecryptfs then puts together a
      reply packet with the decrypted FEK and passes that back to the eCryptfs
      module.
      
      The eCryptfs module manages these request callouts to userspace code via
      message context structs.  The module maintains an array of message context
      structs and places the elements of the array on two lists: a free and an
      allocated list.  When eCryptfs wants to make a request, it moves a msg ctx
      from the free list to the allocated list, sets its state to pending, and fires
      off the message to the user's registered daemon.
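
      A compressed sketch of that bookkeeping; the names and sizes below are
      hypothetical stand-ins for the real eCryptfs message context code:

      	/* Sketch of the msg ctx array and its two lists (illustrative names). */
      	enum msg_ctx_state { MSG_CTX_FREE, MSG_CTX_PENDING, MSG_CTX_DONE };
      
      	struct msg_ctx {
      		struct list_head node;      /* on the free or the allocated list */
      		u32 index;                  /* offset into the msg ctx array */
      		u32 counter;                /* sequence number for validation */
      		enum msg_ctx_state state;
      		void *reply;                /* reply packet copied from netlink */
      		size_t reply_len;
      		wait_queue_head_t wait;     /* the sys_open() caller sleeps here */
      	};
      
      	static struct msg_ctx msg_ctx_arr[64];
      	static LIST_HEAD(msg_ctx_free_list);
      	static LIST_HEAD(msg_ctx_alloc_list);
      	static DEFINE_MUTEX(msg_ctx_lock);
      
      	/* Move a context from the free to the allocated list, mark it pending. */
      	static struct msg_ctx *msg_ctx_acquire(void)
      	{
      		struct msg_ctx *ctx = NULL;
      
      		mutex_lock(&msg_ctx_lock);
      		if (!list_empty(&msg_ctx_free_list)) {
      			ctx = list_first_entry(&msg_ctx_free_list,
      			                       struct msg_ctx, node);
      			list_move_tail(&ctx->node, &msg_ctx_alloc_list);
      			ctx->state = MSG_CTX_PENDING;
      			ctx->counter++;
      		}
      		mutex_unlock(&msg_ctx_lock);
      		return ctx;
      	}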
      
      When eCryptfs receives a netlink message (via the callback), it correlates the
      msg ctx struct in the allocated list with the data in the message itself.  The
      msg->index contains the offset into the array of msg ctx structs.  It verifies
      that the registered daemon PID is the same as the PID of the process that sent
      the message.  It also validates a sequence number between the received packet
      and the msg ctx.  Then, it copies the contents of the message (the reply
      packet) into the msg ctx struct, sets the state in the msg ctx to done, and
      wakes up the process that was sleeping while waiting for the reply.
      
      The sleeping process was whatever was performing the sys_open().  This process
      originally called ecryptfs_send_message(); it is now in
      ecryptfs_wait_for_response().  When it wakes up and sees that the msg ctx
      state has been set to done, it takes a pointer to the message contents (the
      reply packet) and returns.  If all went well, this packet contains the decrypted
      FEK, which is then copied into the crypt_stat struct, and life continues as
      normal.
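
      Continuing the sketch above, the receive callback and the waiter pair up
      roughly as follows (again with hypothetical names; the real code is in
      fs/ecryptfs/messaging.c):

      	/* Sketch: correlate a reply with its msg ctx and wake the waiter. */
      	static int msg_ctx_complete(u32 index, u32 counter, u32 sender_pid,
      	                            u32 daemon_pid, const void *reply, size_t len)
      	{
      		struct msg_ctx *ctx;
      
      		if (index >= ARRAY_SIZE(msg_ctx_arr))
      			return -EINVAL;
      		if (sender_pid != daemon_pid)      /* must be the registered daemon */
      			return -EPERM;
      		ctx = &msg_ctx_arr[index];
      		mutex_lock(&msg_ctx_lock);
      		if (ctx->state != MSG_CTX_PENDING || ctx->counter != counter) {
      			mutex_unlock(&msg_ctx_lock);
      			return -EINVAL;            /* stale or mismatched sequence */
      		}
      		ctx->reply = kmemdup(reply, len, GFP_KERNEL);
      		ctx->reply_len = ctx->reply ? len : 0;
      		ctx->state = MSG_CTX_DONE;
      		mutex_unlock(&msg_ctx_lock);
      		wake_up(&ctx->wait);               /* resume the sys_open() caller */
      		return 0;
      	}
      
      	/* Sketch of the waiting side (the process performing sys_open()). */
      	static void *msg_ctx_wait(struct msg_ctx *ctx, size_t *len)
      	{
      		wait_event(ctx->wait, ctx->state == MSG_CTX_DONE);
      		*len = ctx->reply_len;
      		return ctx->reply;                 /* reply packet: decrypted FEK */
      	}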
      
      The case for creation of a new file is very similar, only instead of a decrypt
      request, eCryptfs sends out an encrypt request.
      
      > - We have a great clod of key management code in-kernel.  Why is that
      >   not suitable (or growable) for public key management?
      
      eCryptfs uses Howells' keyring to store persistent key data and PKI state
      information.  It defers public key cryptographic transformations to userspace
      code.  The userspace data manipulation request really is orthogonal to key
      management in and of itself.  What eCryptfs basically needs is a secure way to
      communicate with a particular daemon for a particular task doing a syscall,
      based on the UID.  Nothing running under another UID should be able to access
      that channel of communication.
      
      > - Is it appropriate that new infrastructure for public key
      > management be private to a particular fs?
      
      The messaging.c file contains a lot of code that, perhaps, could be extracted
      into a separate kernel service.  In essence, this would be a sort of
      request/reply mechanism that would involve a userspace daemon.  I am not aware
      of anything that does quite what eCryptfs does, so I did not find any existing
      tools that do just what we wanted.
      
      >   What happens if one of these daemons exits without sending a quit
      >   message?
      
      There is a stale uid<->pid association in the hash table for that user.  When
      the user registers a new daemon, eCryptfs cleans up the old association and
      generates a new one.  See ecryptfs_process_helo().
      
      > - _why_ does it use netlink?
      
      Netlink provides the transport mechanism that would minimize the complexity of
      the implementation, given that we can have multiple daemons (one per user).  I
      explored the possibility of using relayfs, but that would involve having to
      introduce control channels and a protocol for creating and tearing down
      channels for the daemons.  We do not have to worry about any of that with
      netlink.
      Signed-off-by: Michael Halcrow <mhalcrow@us.ibm.com>
      Cc: David Howells <dhowells@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  6. 03 December 2006 (2 commits)
  7. 03 September 2006 (1 commit)
  8. 23 June 2006 (1 commit)
  9. 01 May 2006 (1 commit)
  10. 21 March 2006 (1 commit)
  11. 10 February 2006 (1 commit)
    • [NETLINK]: Fix a severe bug · a70ea994
      Committed by Alexey Kuznetsov
      Netlink overrun handling was broken during the netlink improvements.
      The destination socket is used where the source socket was meant, so overrun
      is now never sent to user netlink sockets when it should be, and it can even
      be set on a kernel socket, which results in a complete deadlock of rtnetlink.
      
      The suggested fix is to restore the status quo by passing the source socket
      as an additional argument to netlink_attachskb().
      
      A little explanation: overrun is set on a socket when it has failed
      to receive some message and the sender of that message does not, or even
      has no way to, handle this error.  This happens in two cases:
      1. when the kernel sends something; the kernel never retransmits and cannot
         wait for buffer space.
      2. when a user sends a broadcast and the message was not delivered
         to some recipients.
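
      For context, "setting overrun" on a socket amounts to something like the
      fragment below, simplified from af_netlink.c (not the patched code itself):

      	/* Sketch: the receiver lost a message and the sender will not retry,
      	 * so flag the socket and notify its owner. */
      	static void netlink_overrun_sketch(struct sock *sk)
      	{
      		sk->sk_err = ENOBUFS;        /* reported on the next socket call */
      		sk->sk_error_report(sk);     /* wake the owner so it can resync */
      	}
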
      Signed-off-by: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  12. 10 November 2005 (1 commit)
  13. 09 October 2005 (1 commit)
  14. 15 September 2005 (1 commit)
  15. 12 September 2005 (1 commit)
    • [NET]: Add netlink connector. · 7672d0b5
      Committed by Evgeniy Polyakov
      Kernel connector: a new, easy-to-use userspace <-> kernel space
      communication module which implements a bidirectional message bus
      using netlink as its backend.  Connector was created to eliminate
      complex skb handling in both the send and receive directions of the
      message bus.
      
      The connector driver makes it possible to connect various agents using a
      netlink-based network as one of its backends.  One must register a callback
      and an identifier.  When the driver receives a special netlink message with
      a matching identifier, the corresponding callback will be called.
      
      From the userspace point of view it's quite straightforward:
      
      	socket();
      	bind();
      	send();
      	recv();
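
      Fleshed out, a minimal userspace client along those lines might look like
      the sketch below; the idx/val pair and the multicast group are placeholders
      for whatever the kernel-side agent registered:

      	/* Sketch: minimal connector client (placeholder identifiers, error
      	 * handling trimmed for brevity). */
      	#include <linux/connector.h>
      	#include <linux/netlink.h>
      	#include <string.h>
      	#include <sys/socket.h>
      	#include <unistd.h>
      
      	int main(void)
      	{
      		struct sockaddr_nl addr = {
      			.nl_family = AF_NETLINK,
      			.nl_groups = 1,                 /* placeholder group */
      		};
      		char buf[NLMSG_SPACE(sizeof(struct cn_msg) + 64)];
      		struct nlmsghdr *nlh = (struct nlmsghdr *)buf;
      		struct cn_msg *msg = (struct cn_msg *)NLMSG_DATA(nlh);
      		int s = socket(AF_NETLINK, SOCK_DGRAM, NETLINK_CONNECTOR);
      
      		bind(s, (struct sockaddr *)&addr, sizeof(addr));
      
      		memset(buf, 0, sizeof(buf));
      		nlh->nlmsg_len = NLMSG_LENGTH(sizeof(*msg));
      		nlh->nlmsg_type = NLMSG_DONE;
      		msg->id.idx = 0x123;                    /* placeholder identifier */
      		msg->id.val = 0x456;
      		send(s, nlh, nlh->nlmsg_len, 0);        /* request to the agent */
      
      		recv(s, buf, sizeof(buf), 0);           /* reply or broadcast */
      		close(s);
      		return 0;
      	}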
      
      But if kernelspace wants to use the full power of such connections, the
      driver writer must create special sockets and must know about struct sk_buff
      handling...  Connector allows any kernelspace agent to use netlink-based
      networking for inter-process communication in a significantly easier way:
      
      int cn_add_callback(struct cb_id *id, char *name, void (*callback) (void *));
      void cn_netlink_send(struct cn_msg *msg, u32 __groups, int gfp_mask);
      
      struct cb_id
      {
      	__u32			idx;
      	__u32			val;
      };
      
      idx and val are unique identifiers which must be registered in
      connector.h for in-kernel usage.  void (*callback)(void *) is the
      callback function which will be called when a message with the above
      idx.val is received by the connector core.
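
      For example, a kernel-side agent might register against the API quoted
      above roughly like this (the 0x123/0x456 pair and the names are
      placeholders; real users reserve their idx/val in connector.h):

      	/* Sketch: register a connector callback using the commit-era signature. */
      	#include <linux/connector.h>
      	#include <linux/module.h>
      
      	static struct cb_id my_cn_id = { .idx = 0x123, .val = 0x456 };
      
      	static void my_cn_callback(void *data)
      	{
      		struct cn_msg *msg = data;
      
      		pr_info("connector: %u bytes for %x.%x\n",
      		        msg->len, msg->id.idx, msg->id.val);
      		/* A reply could go back out via cn_netlink_send(). */
      	}
      
      	static int __init my_cn_init(void)
      	{
      		return cn_add_callback(&my_cn_id, "my_cn_agent", my_cn_callback);
      	}
      
      	static void __exit my_cn_exit(void)
      	{
      		cn_del_callback(&my_cn_id);
      	}
      
      	module_init(my_cn_init);
      	module_exit(my_cn_exit);
      	MODULE_LICENSE("GPL");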
      
      Using the connector completely hides the low-level transport layer from
      its users.
      
      Connector uses the new netlink ability to have many groups in one socket.
      
      [ Incorporating many cleanups and fixes by myself and
        Andrew Morton -DaveM ]
      Signed-off-by: Evgeniy Polyakov <johnpol@2ka.mipt.ru>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  16. 30 August 2005 (7 commits)
  17. 09 August 2005 (1 commit)
  18. 25 July 2005 (1 commit)
  19. 12 July 2005 (1 commit)
  20. 29 June 2005 (1 commit)
  21. 22 June 2005 (1 commit)
  22. 21 June 2005 (1 commit)
    • [NETLINK]: fib_lookup() via netlink · 246955fe
      Committed by Robert Olsson
      Below is a more generic patch to do fib_lookup via netlink.  For others,
      we should explain that we discussed this as a way to verify route selection.
      It's also possible there are other uses for this.
      
      In short, the first half of struct fib_result_nl is filled in by the caller,
      and the netlink call fills in the other half and returns it.
      
      In case anyone is interested, there is a corresponding user app to compare
      the full routing table; this was used to test the implementation of the LC-trie.
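
      A hedged illustration of that fill-half/return-half pattern from userspace;
      the struct layout below is reconstructed from memory of that era's
      include/net/ip_fib.h and should be treated as approximate:

      	/* Sketch: one lookup over NETLINK_FIB_LOOKUP.  The caller fills the
      	 * first half, the kernel fills the rest and echoes the message back. */
      	#include <arpa/inet.h>
      	#include <linux/netlink.h>
      	#include <stdio.h>
      	#include <string.h>
      	#include <sys/socket.h>
      	#include <unistd.h>
      
      	struct fib_result_nl {                  /* approximate, commit-era layout */
      		unsigned int  fl_addr;          /* caller: address to look up */
      		unsigned int  fl_fwmark;
      		unsigned char fl_tos;
      		unsigned char fl_scope;
      		unsigned char tb_id_in;         /* caller: table to search */
      		unsigned char tb_id;            /* kernel: results from here on */
      		unsigned char prefixlen;
      		unsigned char nh_sel;
      		unsigned char type;
      		unsigned char scope;
      		int           err;
      	};
      
      	int main(void)
      	{
      		char buf[NLMSG_SPACE(sizeof(struct fib_result_nl))];
      		struct nlmsghdr *nlh = (struct nlmsghdr *)buf;
      		struct fib_result_nl *frn = (struct fib_result_nl *)NLMSG_DATA(nlh);
      		int s = socket(AF_NETLINK, SOCK_RAW, NETLINK_FIB_LOOKUP);
      
      		memset(buf, 0, sizeof(buf));
      		nlh->nlmsg_len = NLMSG_LENGTH(sizeof(*frn));
      		frn->tb_id_in = 254;            /* RT_TABLE_MAIN */
      		inet_pton(AF_INET, "192.0.2.1", &frn->fl_addr);
      
      		send(s, nlh, nlh->nlmsg_len, 0);
      		recv(s, buf, sizeof(buf), 0);
      		printf("err=%d type=%u prefixlen=%u\n",
      		       frn->err, frn->type, frn->prefixlen);
      		close(s);
      		return 0;
      	}
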
      Signed-off-by: David S. Miller <davem@davemloft.net>
  23. 19 June 2005 (2 commits)
  24. 29 April 2005 (1 commit)
  25. 17 April 2005 (1 commit)
    • Linux-2.6.12-rc2 · 1da177e4
      Committed by Linus Torvalds
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!