1. 01 Feb 2011, 1 commit
    • appletalk: move to staging · a6238f21
      Arnd Bergmann authored
      For all I know, Appletalk is dead; the only reasonable
      use right now would be nostalgia, and that can be served
      well enough by old kernels. The code is largely not
      in bad shape, but it still uses the big kernel lock,
      and nobody seems motivated to change that.
      
      FWIW, the last release of MacOS that supported Appletalk
      was MacOS X 10.5, made in 2007, and it has been abandoned
      by Apple with 10.6. Using TCP/IP instead of Appletalk has
      been supported since MacOS 7.6, which was released in
      1997 and is able to run on most of the legacy hardware.
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Cc: netdev@vger.kernel.org
      Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
      a6238f21
  2. 15 Jan 2011, 1 commit
  3. 14 Jan 2011, 2 commits
    • net: remove dev_txq_stats_fold() · 1ac9ad13
      Eric Dumazet authored
      After recent changes (percpu stats on vlan/tunnels...), we no longer
      need the per-struct-netdev_queue tx_bytes/tx_packets/tx_dropped counters.
      
      The only remaining users are ixgbe, sch_teql, gianfar and macvlan:
      
      1) ixgbe can be converted to use existing tx_ring counters.
      
      2) macvlan incremented txq->tx_dropped; it can use the
      dev->stats.tx_dropped counter instead.
      
      3) sch_teql: almost revert ab35cd4b ("Use net_device internal stats").
          Now that we have ndo_get_stats64(), use it, even for "unsigned long"
      fields (no need to bring back a struct net_device_stats).
      
      4) gianfar adds a stats structure per tx queue to hold
      tx_bytes/tx_packets
      
      This also removes a lockdep warning (and possible lockup) in the rndis
      gadget, which calls dev_get_stats() from hard-IRQ context.
      
      Ref: http://www.spinics.net/lists/netdev/msg149202.html
      Reported-by: Neil Jones <neiljay@gmail.com>
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      CC: Jarek Poplawski <jarkao2@gmail.com>
      CC: Alexander Duyck <alexander.h.duyck@intel.com>
      CC: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      CC: Sandeep Gopalpet <sandeep.kumar@freescale.com>
      CC: Michal Nazarewicz <mina86@mina86.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      1ac9ad13
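The gianfar part of this change can be illustrated with a small userspace model: each TX queue keeps its own tx_bytes/tx_packets, and a device-level stats callback sums them, removing the need for the shared netdev_queue counters that dev_txq_stats_fold() used to fold. This is a hedged sketch; the structure and function names below are illustrative, not the actual kernel code.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical per-TX-queue stats, modeling the structure this commit
 * adds to gianfar (names are illustrative). */
struct txq_stats {
    uint64_t tx_packets;
    uint64_t tx_bytes;
};

struct dev_stats {
    uint64_t tx_packets;
    uint64_t tx_bytes;
};

/* Per-queue accounting on the transmit path: no shared counter needed. */
static void txq_account(struct txq_stats *q, size_t len)
{
    q->tx_packets++;
    q->tx_bytes += len;
}

/* Device-level summation, standing in for an ndo_get_stats64 handler. */
static void dev_stats_sum(struct dev_stats *out,
                          const struct txq_stats *queues, size_t nqueues)
{
    out->tx_packets = 0;
    out->tx_bytes = 0;
    for (size_t i = 0; i < nqueues; i++) {
        out->tx_packets += queues[i].tx_packets;
        out->tx_bytes += queues[i].tx_bytes;
    }
}
```

Because each queue writes only its own counters, the hot path needs no synchronization with other queues; the cost moves to the (rare) stats read.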
    • netfilter: ctnetlink: fix loop in ctnetlink_get_conntrack() · f31e8d49
      Pablo Neira Ayuso authored
      This patch fixes a loop in ctnetlink_get_conntrack() that can be
      triggered if you use the same socket to receive events and to
      perform a GET operation. Under heavy load, netlink_unicast()
      may return -EAGAIN; this error code is reserved in nfnetlink for
      module load-on-demand. Instead, we return -ENOBUFS, which is
      the appropriate error code to propagate to user-space.
      Reported-by: Holger Eitzenberger <holger@eitzenberger.org>
      Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
      f31e8d49
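The error remapping described above can be sketched in a few lines: because -EAGAIN is reserved in nfnetlink to trigger module load-on-demand, a unicast failure must be reported as -ENOBUFS, or the GET request is retried forever. The function name below is hypothetical; the real change lives inside ctnetlink_get_conntrack().

```c
#include <errno.h>

/* Minimal model of the fix: remap -EAGAIN from netlink_unicast() to
 * -ENOBUFS so user-space sees a real error instead of a replay loop.
 * This is an illustrative sketch, not the kernel function. */
static int send_conntrack_reply(int unicast_err)
{
    if (unicast_err == -EAGAIN)
        return -ENOBUFS;   /* -EAGAIN means "load module" in nfnetlink */
    return unicast_err;
}
```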
  4. 13 Jan 2011, 7 commits
  5. 12 Jan 2011, 8 commits
  6. 11 Jan 2011, 9 commits
  7. 10 Jan 2011, 8 commits
  8. 07 Jan 2011, 4 commits
    • dccp: make upper bound for seq_window consistent on 32/64 bit · bfbb2346
      Gerrit Renker authored
      The 'seq_window' sysctl sets the initial value for the DCCP Sequence Window,
      which may range from 32..2^46-1 (RFC 4340, 7.5.2). The patch sets the upper
      bound consistently to 2^32-1 on both 32- and 64-bit systems, which should be
      sufficient: with an RTT of 1 second and 1-byte packets, a seq_window of 2^32-1
      corresponds to a link speed of 34 Gbps.
      Signed-off-by: Gerrit Renker <gerrit@erg.abdn.ac.uk>
      bfbb2346
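The bound check described above can be modeled with fixed-width integers, which is essentially why the commit makes the limit platform-independent: a `uint64_t` limit does not change between 32- and 64-bit builds the way an `unsigned long` does. The validation function below is a hypothetical sketch, not the sysctl handler itself.

```c
#include <stdint.h>

/* RFC 4340, 7.5.2: Sequence Window is in [32, 2^46-1]; this commit caps
 * the sysctl at 2^32-1 on all architectures. Fixed-width types keep the
 * bound identical on 32- and 64-bit systems (illustrative sketch). */
#define DCCP_SEQ_WIN_MIN  32ULL
#define DCCP_SEQ_WIN_MAX  ((1ULL << 32) - 1)

static int seq_window_valid(uint64_t w)
{
    return w >= DCCP_SEQ_WIN_MIN && w <= DCCP_SEQ_WIN_MAX;
}
```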
    • dccp: fix bug in updating the GSR · 763dadd4
      Samuel Jero authored
      Currently dccp_check_seqno allows any valid packet to update the Greatest
      Sequence Number Received, even if that packet's sequence number is less than
      the current GSR. This patch adds a check to make sure that the new packet's
      sequence number is greater than GSR.
      Signed-off-by: Samuel Jero <sj323707@ohio.edu>
      Signed-off-by: Gerrit Renker <gerrit@erg.abdn.ac.uk>
      763dadd4
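The rule this patch enforces can be sketched with DCCP's 48-bit circular sequence arithmetic: advance the GSR only when the incoming sequence number is genuinely after it. The helpers below mirror the shape of the kernel's `dccp_delta_seqno`/`after48` comparisons, but this is a simplified userspace model, not the patched code.

```c
#include <stdint.h>

/* DCCP sequence numbers are 48 bits and wrap around. */
#define DCCP_SEQ_MASK ((1ULL << 48) - 1)

/* Signed circular distance seq2 - seq1 in 48-bit space:
 * shift the 48-bit difference into the top of a 64-bit int,
 * then arithmetic-shift back to sign-extend. */
static int64_t dccp_delta_seqno(uint64_t seq1, uint64_t seq2)
{
    int64_t d = (int64_t)(((seq2 - seq1) & DCCP_SEQ_MASK) << 16);
    return d >> 16;
}

/* Is seq1 after seq2 in circular 48-bit order? */
static int after48(uint64_t seq1, uint64_t seq2)
{
    return dccp_delta_seqno(seq2, seq1) > 0;
}

/* The fix in miniature: only a strictly newer packet may update the
 * Greatest Sequence Number Received. */
static uint64_t update_gsr(uint64_t gsr, uint64_t seqno)
{
    return after48(seqno, gsr) ? seqno : gsr;
}
```

Note that the circular comparison also handles wraparound: sequence number 0 correctly counts as "after" 2^48-1.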
    • dccp: fix return value for sequence-invalid packets · 2cf5be93
      Samuel Jero authored
      Currently dccp_check_seqno returns 0 (indicating a valid packet) if the
      acknowledgment number is out of bounds and the sync that RFC 4340 mandates at
      this point is currently being rate-limited. This function should return -1,
      indicating an invalid packet.
      Signed-off-by: Samuel Jero <sj323707@ohio.edu>
      Acked-by: Gerrit Renker <gerrit@erg.abdn.ac.uk>
      2cf5be93
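The control-flow bug here is easy to state as a toy model: whether the mandated Sync actually gets sent (it may be rate-limited) is independent of whether the packet is valid, so the out-of-bounds case must return -1 either way. The function and flag names below are illustrative, not the actual dccp_check_seqno() code.

```c
/* Toy model of the fixed logic: an out-of-bounds acknowledgment number
 * makes the packet sequence-invalid (-1) even when the Sync mandated by
 * RFC 4340 at this point is suppressed by rate-limiting. Previously the
 * rate-limited branch fell through and returned 0 ("valid"). */
static int check_seqno(int ack_in_bounds, int sync_rate_limited)
{
    if (!ack_in_bounds) {
        if (!sync_rate_limited) {
            /* send the Sync that RFC 4340 mandates here */
        }
        return -1;   /* invalid regardless of whether Sync was sent */
    }
    return 0;
}
```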
    • fs: scale mntget/mntput · b3e19d92
      Nick Piggin authored
      The problem that this patch aims to fix is vfsmount refcounting scalability.
      We need to take a reference on the vfsmount for every successful path lookup,
      and these lookups often go to the same mount point.
      
      The fundamental difficulty is that a "simple" reference count can never be made
      scalable, because any time a reference is dropped, we must check whether that
      was the last reference. To do that requires communication with all other CPUs
      that may have taken a reference count.
      
      We can make refcounts more scalable in a couple of ways, involving keeping
      distributed counters, and checking for the global-zero condition less
      frequently.
      
      - check the global sum once every interval (this will delay zero detection
        for some interval, so it's probably a showstopper for vfsmounts).
      
      - keep a local count and only take the global sum when the local count reaches
        0 (this is difficult for vfsmounts, because we can't keep preemption disabled
        for the life of a reference, so the counter would need to be per-thread or
        tied strongly to a particular CPU, which requires more locking).
      
      - keep a local difference of increments and decrements, which allows us to sum
        the total difference and hence find the refcount when summing all CPUs. Then,
        keep a single integer "long" refcount for slow and long lasting references,
        and only take the global sum of local counters when the long refcount is 0.
      
      This last scheme is what I implemented here. Attached mounts and process root
      and working directory references are "long" references, and everything else is
      a short reference.
      
      This allows scalable vfsmount references during path walking over mounted
      subtrees and unattached (lazy umounted) mounts with processes still running
      in them.
      
      This results in one fewer atomic op in the fast path: mntget is now just a
      per-CPU increment rather than an atomic increment, and mntput requires only a
      spinlock and a non-atomic decrement in the common case. However, the code is
      otherwise bigger and heavier, so single-threaded performance is basically a wash.
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      b3e19d92
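The third scheme, the one implemented, can be simulated in userspace: each CPU keeps a local difference of increments and decrements, a single shared count covers long-lived references (attached mounts, process roots and cwds), and the expensive all-CPU sum is only taken when that long count is zero. A plain array stands in for per-CPU data here; all names are illustrative, not the kernel's mnt_pcp/mnt_longterm machinery.

```c
#include <stdint.h>
#include <stddef.h>

#define NR_CPUS 4   /* stand-in for the real CPU count */

struct mnt_count {
    int64_t percpu[NR_CPUS];  /* per-CPU difference of gets and puts */
    int64_t longrefs;         /* slow, long-lasting references */
};

/* Fast path: a purely local increment, no atomic op, no shared cache
 * line. The matching put may later run on a different CPU. */
static void mnt_get(struct mnt_count *m, int cpu)
{
    m->percpu[cpu]++;
}

static void mnt_put(struct mnt_count *m, int cpu)
{
    m->percpu[cpu]--;
}

/* Slow path: sum every CPU's difference plus the long count. Individual
 * percpu entries may be negative; only the total is meaningful. Only
 * worth taking when longrefs == 0, since a nonzero long count already
 * proves the mount is in use. */
static int64_t mnt_count_sum(const struct mnt_count *m)
{
    int64_t sum = m->longrefs;
    for (size_t i = 0; i < NR_CPUS; i++)
        sum += m->percpu[i];
    return sum;
}
```

The key property is that correctness only needs the *sum* of the distributed counters, so each CPU can drift arbitrarily far positive or negative without any cross-CPU communication on the fast path.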