1. 09 Jan, 2008 (2 commits)
  2. 27 Dec, 2007 (2 commits)
  3. 21 Dec, 2007 (1 commit)
  4. 07 Dec, 2007 (2 commits)
  5. 29 Nov, 2007 (1 commit)
  6. 26 Nov, 2007 (1 commit)
    • [IPV4]: Fix memory leak in inet_hashtables.h when NUMA is on · 218ad12f
      Authored by Pavel Emelyanov
      The inet_ehash_locks_alloc() looks like this:
      
      #ifdef CONFIG_NUMA
      	if (size > PAGE_SIZE)
      		x = vmalloc(...);
      	else
      #endif
      		x = kmalloc(...);
      
      Unlike it, the inet_ehash_locks_free() looks like this:
      
      #ifdef CONFIG_NUMA
      	if (size > PAGE_SIZE)
      		vfree(x);
      	else
      #else
      		kfree(x);
      #endif
      
      The error is obvious: if NUMA is on and the size does not exceed
      PAGE_SIZE, we leak the pointer, because the kfree() sits inside the
      #else preprocessor branch and is never compiled in.
      
      The compiler doesn't warn us because right after the kfree(x) there's
      an "x = NULL" assignment that silently becomes the body of the dangling
      else, so here's another (minor?) bug: we don't set x to NULL under
      certain circumstances.
      
      Boring explanation, I know... Patch explains it better.
      Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
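      
      For reference, a minimal sketch of the corrected free path, assuming the
      same size-vs-PAGE_SIZE split as the allocation shown above; the kfree()
      is moved out of the #else branch so it is reached in the NUMA small-size
      case too, and the pointer is cleared on every path. The inet_hashinfo
      field names used here are assumptions about the layout (the quoted
      snippets abbreviate them as x and size):
      
      static inline void inet_ehash_locks_free(struct inet_hashinfo *hashinfo)
      {
      	if (hashinfo->ehash_locks) {
      #ifdef CONFIG_NUMA
      		unsigned int size = (hashinfo->ehash_locks_mask + 1) *
      				    sizeof(rwlock_t);
      
      		if (size > PAGE_SIZE)
      			vfree(hashinfo->ehash_locks);
      		else
      #endif
      			kfree(hashinfo->ehash_locks);
      		hashinfo->ehash_locks = NULL;	/* cleared on every path now */
      	}
      }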
  7. 21 Nov, 2007 (1 commit)
  8. 20 Nov, 2007 (4 commits)
  9. 19 Nov, 2007 (1 commit)
    • [TCP]: Fix TCP header misalignment · 21df56c6
      Authored by Herbert Xu
      Indeed my previous change to alloc_pskb has made it possible
      for the TCP header to be misaligned iff the MTU is not a multiple
      of 4 (and less than a page).  So I suspect the optimised IPsec
      MTU calculation is giving you just such an MTU :)
      
      This patch fixes it by changing alloc_pskb to make sure that
      the size is at least 32-bit aligned.  This does not cause the
      problem fixed by the previous patch because max_header is always
      32-bit aligned which means that in the SG/NOTSO case this will
      be a no-op.
      
      I thought about putting this in the callers but all the current
      callers are from TCP.  If and when we get a non-TCP caller we
      can always create a TCP wrapper for this function and move the
      alignment over there.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
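      
      A sketch of the shape this takes, assuming the sk_stream_alloc_pskb()
      helper named in the 15 Nov commit below and the standard ALIGN() macro;
      the exact signature is an assumption, and the memory accounting and
      skb_reserve() handling are omitted, so this is an illustration rather
      than the actual patch:
      
      static inline struct sk_buff *sk_stream_alloc_pskb(struct sock *sk,
      						   int size, int mem, gfp_t gfp)
      {
      	/* Round the requested linear size up to a 4-byte boundary so the
      	 * TCP header that follows stays 32-bit aligned even when the MTU
      	 * is not a multiple of 4; in the SG/NOTSO case the size is already
      	 * aligned and this is a no-op. */
      	size = ALIGN(size, 4);
      
      	return alloc_skb_fclone(size + sk->sk_prot->max_header, gfp);
      }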
  10. 15 Nov, 2007 (2 commits)
    • [INET]: Fix potential kfree on vmalloc-ed area of request_sock_queue · dab6ba36
      Authored by Pavel Emelyanov
      The request_sock_queue's listen_opt is either vmalloc-ed or
      kmalloc-ed depending on the number of table entries. Thus it 
      is expected to be handled properly on free, which is done in 
      the reqsk_queue_destroy().
      
      However the error path in inet_csk_listen_start() calls the lite
      version of reqsk_queue_destroy, called __reqsk_queue_destroy, which
      calls kfree() unconditionally.
      
      Fix this and move the __reqsk_queue_destroy into a .c file as 
      it looks too big to be inline.
      
      As David also noticed, this is an error recovery path only,
      so no locking is required and the lopt is known to be not NULL.
      
      reqsk_queue_yank_listen_sk is also now only used in
      net/core/request_sock.c so we should move it there too.
      Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
      Acked-by: Eric Dumazet <dada1@cosmosbay.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
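      
      A sketch of the corrected helper, assuming the size test mirrors the
      allocation path; the listen_sock field names used here are assumptions
      about the layout, not a copy of the patch:
      
      /* Error-recovery path only: no locking needed, lopt is known non-NULL. */
      void __reqsk_queue_destroy(struct request_sock_queue *queue)
      {
      	struct listen_sock *lopt = queue->listen_opt;
      	size_t lopt_size = sizeof(struct listen_sock) +
      			   lopt->nr_table_entries * sizeof(struct request_sock *);
      
      	if (lopt_size > PAGE_SIZE)
      		vfree(lopt);	/* large tables were vmalloc-ed */
      	else
      		kfree(lopt);	/* small tables were kmalloc-ed */
      }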
    • [TCP]: Fix size calculation in sk_stream_alloc_pskb · fb93134d
      Authored by Herbert Xu
      We round up the header size in sk_stream_alloc_pskb so that
      TSO packets get zero tail room.  Unfortunately this rounding
      up is not coordinated with the select_size() function used by
      TCP to calculate the second parameter of sk_stream_alloc_pskb.
      
      As a result, we may allocate more than a page of data in the
      non-TSO case when exactly one page is desired.
      
      In fact, rounding up the head room is detrimental in the non-TSO case
      because it turns memory that would otherwise be available to the
      payload into head room.  TSO doesn't need this either; all it wants
      is the guarantee that there is no tail room.
      
      So this patch fixes this by adjusting the skb_reserve call so that
      exactly the requested amount (which all callers have calculated in
      a precise way) is made available as tail room.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
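      
      The adjustment described above comes down to reserving everything
      except the requested amount as head room, so that exactly the requested
      number of bytes is left as tail room.  A sketch; the wrapper name and
      the surrounding allocation call are assumptions, only the reserve step
      is the point:
      
      static struct sk_buff *alloc_with_exact_tailroom(struct sock *sk,
      						 int size, gfp_t gfp)
      {
      	struct sk_buff *skb = alloc_skb_fclone(size + sk->sk_prot->max_header, gfp);
      
      	if (skb)
      		/* Push all surplus linear space into the head room so the
      		 * tail room is exactly the size the caller asked for. */
      		skb_reserve(skb, skb_tailroom(skb) - size);
      	return skb;
      }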
  11. 13 Nov, 2007 (3 commits)
  12. 11 Nov, 2007 (6 commits)
  13. 10 Nov, 2007 (3 commits)
  14. 08 Nov, 2007 (4 commits)
  15. 07 Nov, 2007 (5 commits)
    • [INET]: Remove per bucket rwlock in tcp/dccp ehash table. · 230140cf
      Authored by Eric Dumazet
      As done two years ago on the IP route cache table (commit 22c047cc),
      we can avoid using one lock per hash bucket for the huge TCP/DCCP hash
      tables.
      
      On a typical x86_64 platform, this saves about 2MB or 4MB of RAM, for
      little performance difference. (We hit a different cache line for the
      rwlock, but the bucket cache line then has a better sharing factor
      among CPUs, since we dirty it less often.) For netstat or ss commands
      that want a full scan of the hash table, we perform fewer memory accesses.
      
      Using a 'small' table of hashed rwlocks should be more than enough to
      provide correct SMP concurrency between different buckets, without
      using too much memory. Sizing of this table depends on
      num_possible_cpus() and various CONFIG settings.
      
      This patch provides some locking abstraction that may ease a future
      work using a different model for TCP/DCCP table.
      Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
      Acked-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
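      
      A sketch of the locking abstraction this introduces: instead of one
      rwlock embedded in every bucket, a small power-of-two array of rwlocks
      is indexed by the bucket hash.  The field names follow the ehash_locks /
      ehash_locks_mask names seen in the 26 Nov leak fix above; treat the
      exact layout as an assumption:
      
      /* Map a bucket hash onto the shared table of rwlocks. */
      static inline rwlock_t *inet_ehash_lockp(struct inet_hashinfo *hashinfo,
      					 unsigned int hash)
      {
      	return &hashinfo->ehash_locks[hash & hashinfo->ehash_locks_mask];
      }
      
      /* Callers then lock the bucket through the abstraction, e.g.:
       *	write_lock(inet_ehash_lockp(hashinfo, sk->sk_hash));
       *	...modify the chain...
       *	write_unlock(inet_ehash_lockp(hashinfo, sk->sk_hash));
       */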
    • [IPVS]: Synchronize closing of Connections · efac5276
      Authored by Rumen G. Bogdanovski
      This patch makes the master daemon sync the connection when it is about
      to close.  This makes the connections on the backup close or time out
      according to their state.  Before, the sync was performed only if the
      connection was in the ESTABLISHED state, which always made the
      connections time out after the hard-coded 3 minutes.  However, Andy
      Gospodarek's patch ([IPVS]: use proper timeout instead of fixed value)
      effectively did nothing more than increase this to 15 minutes (the
      ESTABLISHED state timeout).  So this patch makes use of the proper
      timeouts, since it syncs the connections on state changes to FIN_WAIT
      (2 min timeout) and CLOSE (10 sec timeout).  However, if the backup
      misses CLOSE it has hopefully not missed FIN_WAIT; otherwise we just
      have to wait for the ESTABLISHED state timeout, as we do without this
      patch.  This way the number of hanging connections on the backup is
      kept to a minimum, and very few of them are left to expire with a long
      timeout.
      
      This is important if we want to make use of the fix for the real server
      overcommit on master/backup fail-over.
      Signed-off-by: Rumen G. Bogdanovski <rumen@voicecho.com>
      Signed-off-by: Simon Horman <horms@verge.net.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
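      
      A sketch of the sync decision this describes, in the master's packet
      path, with cp the affected ip_vs connection entry.  The state constants
      and ip_vs_sync_conn() are existing ip_vs names, but the old_state
      bookkeeping and the condition shown are assumptions (the real patch
      also keeps the periodic ESTABLISHED sync), so treat this as an
      illustration only:
      
      	/* On the master: also push a sync message whenever a TCP
      	 * connection transitions into FIN_WAIT or CLOSE, so the backup
      	 * starts the shorter timers for those states. */
      	if ((ip_vs_sync_state & IP_VS_STATE_MASTER) &&
      	    cp->protocol == IPPROTO_TCP &&
      	    cp->old_state != cp->state &&
      	    (cp->state == IP_VS_TCP_S_FIN_WAIT ||
      	     cp->state == IP_VS_TCP_S_CLOSE))
      		ip_vs_sync_conn(cp);
      
      	cp->old_state = cp->state;	/* remember the state we last saw */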
    • [IPVS]: Bind connections on standby if the destination exists · 1e356f9c
      Authored by Rumen G. Bogdanovski
      This patch fixes the problem with node overload on director fail-over.
      Consider the scenario: 2 nodes, each accepting 3 connections at a time,
      and 2 directors.  If director failover occurs when the nodes are fully
      loaded (6 connections to the cluster), the new director will assign
      another 6 connections to the cluster if the same real servers exist
      there.
      
      The problem turned out to be that the inherited connections were not
      bound to the real servers (destinations) on the backup director.
      Therefore "ipvsadm -l" reports 0 connections:
      root@test2:~# ipvsadm -l
      IP Virtual Server version 1.2.1 (size=4096)
      Prot LocalAddress:Port Scheduler Flags
        -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
      TCP  test2.local:5999 wlc
        -> node473.local:5999           Route   1000   0          0
        -> node484.local:5999           Route   1000   0          0
      
      while "ipvs -lnc" is right
      root@test2:~# ipvsadm -lnc
      IPVS connection entries
      pro expire state       source             virtual            destination
      TCP 14:56  ESTABLISHED 192.168.0.10:39164 192.168.0.222:5999 192.168.0.51:5999
      TCP 14:59  ESTABLISHED 192.168.0.10:39165 192.168.0.222:5999 192.168.0.52:5999
      
      So the patch I am sending fixes the problem by binding the received
      connections to the appropriate service on the backup director, if it
      exists; otherwise the connection will be handled the old way.  So if
      the master and the backup directors are synchronized in terms of real
      services, there will be no problem with server over-committing, since
      new connections will not be created on nonexistent real services on
      the backup.  However, if the service is created later on the backup,
      the binding will be performed when the next connection update is
      received.  With this patch the inherited connections will show as
      inactive on the backup:
      
      root@test2:~# ipvsadm -l
      IP Virtual Server version 1.2.1 (size=4096)
      Prot LocalAddress:Port Scheduler Flags
        -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
      TCP  test2.local:5999 wlc
        -> node473.local:5999           Route   1000   0          1
        -> node484.local:5999           Route   1000   0          1
      
      rumen@test2:~$ cat /proc/net/ip_vs
      IP Virtual Server version 1.2.1 (size=4096)
      Prot LocalAddress:Port Scheduler Flags
        -> RemoteAddress:Port Forward Weight ActiveConn InActConn
      TCP  C0A800DE:176F wlc
        -> C0A80033:176F      Route   1000   0          1
        -> C0A80032:176F      Route   1000   0          1
      
      Regards,
      Rumen Bogdanovski
      Acked-by: Julian Anastasov <ja@ssi.bg>
      Signed-off-by: Rumen G. Bogdanovski <rumen@voicecho.com>
      Signed-off-by: Simon Horman <horms@verge.net.au>
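      
      A sketch of the idea, in the backup's processing of a received sync
      message, with cp the synced connection entry.  The helper names
      ip_vs_find_dest() and ip_vs_bind_dest() and their signatures are
      assumptions based on the description above, not a copy of the patch:
      
      	struct ip_vs_dest *dest;
      
      	/* On the backup: look up the real server (destination) behind
      	 * this virtual service and, if it already exists locally, bind
      	 * the synced connection to it so it shows up in the destination's
      	 * inactive-connection counters. */
      	dest = ip_vs_find_dest(cp->daddr, cp->dport,	/* real server */
      			       cp->vaddr, cp->vport,	/* virtual service */
      			       cp->protocol);
      	if (dest)
      		ip_vs_bind_dest(cp, dest);
      	/* else: no such destination yet; handle the connection the old
      	 * way and bind it on a later connection update. */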
    • [IPV4]: Compact some ifdefs in the fib code. · c3e9a353
      Authored by Pavel Emelyanov
      There are places that check for CONFIG_IP_MULTIPLE_TABLES
      twice in the same file, but the internals of these #ifdefs
      can be merged.
      
      As a side effect, this removes one #ifdef from inside a function.
      Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
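      
      A generic sketch of the pattern being cleaned up; the declarations are
      illustrative, not taken from the fib code:
      
      /* Before: the same option is tested twice in a row. */
      #ifdef CONFIG_IP_MULTIPLE_TABLES
      static int fib_example_a(void);
      #endif
      #ifdef CONFIG_IP_MULTIPLE_TABLES
      static int fib_example_b(void);
      #endif
      
      /* After: one block, same contents. */
      #ifdef CONFIG_IP_MULTIPLE_TABLES
      static int fib_example_a(void);
      static int fib_example_b(void);
      #endif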
    • [NET]: Define infrastructure to keep 'inuse' changes in an efficient SMP/NUMA way. · 286ab3d4
      Authored by Eric Dumazet
      "struct proto" currently uses an array stats[NR_CPUS] to track change on
      'inuse' sockets per protocol.
      
      If NR_CPUS is big, this means we use a big memory area for this.
      Moreover, all this memory area is located on a single node on NUMA
      machines, increasing memory pressure on the boot node.
      
      In this patch, I tried to:
      
      - Keep a fast !CONFIG_SMP implementation
      - Keep a fast CONFIG_SMP implementation for often used protocols
        (tcp, udp, raw, ...)
      - Introduce a NUMA efficient implementation
      
      Some helper macros are defined in include/net/sock.h; these macros
      take CONFIG_SMP into account.
      
      If a "struct proto" is declared without using the DEFINE_PROTO_INUSE /
      REF_PROTO_INUSE macros, it will automatically use a default
      implementation with a dynamically allocated percpu zone.  This default
      implementation will be NUMA efficient, but might use 32/64 bytes per
      possible CPU because of the current alloc_percpu() implementation.
      However, it should still be better than the previous implementation
      based on the stats[NR_CPUS] field.
      
      When a "struct proto" is changed to use the new macros, we use a single
      static "int" percpu variable, lowering the memory and CPU costs while
      still preserving NUMA efficiency.
      Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
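      
      A rough sketch of the fast opt-in path described above: one static
      per-CPU int per protocol, updated locally and summed over the possible
      CPUs when read.  Only the DEFINE_PROTO_INUSE name comes from the text;
      the helpers below and the way the counter is wired into "struct proto"
      are assumptions, shown only to illustrate the idea:
      
      #include <linux/percpu.h>
      #include <linux/cpumask.h>
      
      /* One static int per CPU for this protocol (here: tcp). */
      #define DEFINE_PROTO_INUSE(NAME)	static DEFINE_PER_CPU(int, NAME##_inuse)
      
      DEFINE_PROTO_INUSE(tcp);
      
      static inline void tcp_inuse_add(int inc)
      {
      	/* Cheap, NUMA-local update; callers are assumed to run with
      	 * preemption disabled. */
      	__get_cpu_var(tcp_inuse) += inc;
      }
      
      static int tcp_inuse_get(void)
      {
      	int cpu, total = 0;
      
      	for_each_possible_cpu(cpu)	/* slow read path: sum all CPUs */
      		total += per_cpu(tcp_inuse, cpu);
      	return total;
      }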
  16. 02 Nov, 2007 (1 commit)
  17. 01 Nov, 2007 (1 commit)