1. 17 Dec 2010, 10 commits
  2. 16 Dec 2010, 1 commit
  3. 15 Dec 2010, 1 commit
  4. 14 Dec 2010, 5 commits
  5. 13 Dec 2010, 7 commits
  6. 11 Dec 2010, 12 commits
  7. 10 Dec 2010, 4 commits
    • hso: IP checksumming doesn't work on GE0301 option cards · 6934d335
      Authored by Thomas Bogendoerfer
      There is definitely a problem where some option cards send up broken
      IP packets, leading to corrupted IP packets. These corruptions aren't
      detected, because the driver claims that the packets are already
      checksummed. This change removes the CHECKSUM_UNNECESSARY option
      and lets IP detect the broken data.
      Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
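      A minimal sketch of the kind of change described above, in a driver
      receive path. The helper function and its details are illustrative
      assumptions, not the actual hso driver code.

      #include <linux/netdevice.h>
      #include <linux/skbuff.h>

      /* Illustrative sketch only: hso_deliver_skb is an assumed helper,
       * not the real hso driver function. */
      static void hso_deliver_skb(struct net_device *dev, struct sk_buff *skb)
      {
              skb->dev = dev;

              /* Before the fix the driver asserted the packet was already
               * verified:
               *         skb->ip_summed = CHECKSUM_UNNECESSARY;
               * so the stack never noticed the corrupted data. */

              /* After the fix, let the IP layer verify the checksum in
               * software and drop broken packets. */
              skb->ip_summed = CHECKSUM_NONE;

              netif_rx(skb);
      }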
    • xfrm: Fix xfrm_state_migrate leak · 78347c8c
      Authored by Thomas Egerer
      xfrm_state_migrate calls kfree instead of xfrm_state_put to free
      a failed state. According to git commit 553f9118, this can cause
      memory leaks.
      Signed-off-by: Thomas Egerer <thomas.egerer@secunet.com>
      Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
      Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
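      A hedged sketch of the error-path pattern the fix describes; the
      surrounding helper is simplified and its name is an assumption, not
      the real xfrm_state_migrate() code.

      #include <net/xfrm.h>

      /* Simplified sketch; insert_cloned_state() is an assumed helper.
       * @xc is a freshly cloned xfrm_state that already holds a reference. */
      static struct xfrm_state *insert_cloned_state(struct xfrm_state *xc)
      {
              if (xfrm_state_add(xc)) {
                      /* Wrong: kfree(xc) bypasses the refcount and the xfrm
                       * destructor, leaking the keys and everything else the
                       * state owns.
                       * Right: drop the reference so the state is destroyed
                       * through the normal xfrm teardown path. */
                      xfrm_state_put(xc);
                      return NULL;
              }
              return xc;
      }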
    • net: Convert netpoll blocking api in bonding driver to be a counter · fb4fa76a
      Authored by Neil Horman
      A while back I made some changes to enable netpoll in the bonding driver.  Among
      them was a per-cpu flag that indicated we were in a path that held locks which
      could cause the netpoll path to block during tx, and as such the tx path should
      queue the frame for later use.  This appears to have given rise to a regression.
      If one of those paths on which we hold the per-cpu flag yields the cpu, it's
      possible for us to come back on a different cpu, leading to us clearing a
      different flag than we set.  This results in odd netpoll drops and BUG
      backtraces appearing in the log, as we check to make sure that we only clear set
      bits and only set clear bits.  I had thought briefly about changing the
      offending paths so that they wouldn't sleep, but looking at my original work
      more closely, it doesn't appear that a per-cpu flag is warranted.  We already
      gate the checking of this flag on IFF_IN_NETPOLL, so we don't hit this in the
      normal tx case anyway.  And practically speaking, the normal use case for
      netpoll is to have only one client anyway, so we're not going to erroneously
      queue netpoll frames when it's actually safe to do so.  As such, let's just
      convert that per-cpu flag to an atomic counter.  It fixes the rescheduling bugs,
      is equivalent from a performance perspective, and actually eliminates some code
      in the process.
      
      Tested successfully by the reporter and myself.
      Reported-by: Liang Zheng <lzheng@redhat.com>
      CC: Jay Vosburgh <fubar@us.ibm.com>
      CC: Andy Gospodarek <andy@greyhouse.net>
      CC: David S. Miller <davem@davemloft.net>
      Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
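      A hedged sketch of the pattern described above: replacing a per-cpu
      flag with an atomic counter, which stays balanced even if the task
      that incremented it is rescheduled onto another cpu. The struct and
      helper names are simplified assumptions, not the exact bonding code.

      #include <linux/atomic.h>
      #include <linux/types.h>

      /* Simplified sketch in the spirit of the bonding change. */
      struct bond_sketch {
              atomic_t netpoll_block_tx;      /* nesting count, not per-cpu */
      };

      /* Enter a locked section during which netpoll tx must be deferred.
       * A plain counter stays balanced even if the task sleeps here and
       * resumes on a different cpu, which is where the per-cpu flag broke. */
      static inline void block_netpoll_tx(struct bond_sketch *bond)
      {
              atomic_inc(&bond->netpoll_block_tx);
      }

      static inline void unblock_netpoll_tx(struct bond_sketch *bond)
      {
              atomic_dec(&bond->netpoll_block_tx);
      }

      /* The netpoll tx path queues frames for later delivery while any
       * such section is active. */
      static inline bool netpoll_tx_blocked(struct bond_sketch *bond)
      {
              return atomic_read(&bond->netpoll_block_tx) != 0;
      }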
    • iwlagn: implement layout-agnostic EEPROM reading · 6942fec9
      Authored by Wey-Yi Guy
      From: Johannes Berg <johannes.berg@intel.com>
      
      The current EEPROM reading code makes some layout
      assumptions that have now turned out to be false with
      some newer versions of the EEPROM. Luckily, we
      can avoid all such assumptions by using data in
      the EEPROM itself, so implement reading based on that.
      
      However, for risk mitigation purposes, keep the
      old reading code for current hardware for now.
      Signed-off-by: Johannes Berg <johannes.berg@intel.com>
      Signed-off-by: Wey-Yi Guy <wey-yi.w.guy@intel.com>
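      A hedged sketch of the layout-agnostic idea: locate sections through
      an offset table carried in the EEPROM image itself rather than through
      hard-coded offsets. The table format, field names, and function below
      are illustrative assumptions, not the iwlagn implementation.

      #include <linux/types.h>

      /* Illustrative sketch only; this offset table stands in for whatever
       * indirection the real EEPROM provides. */
      struct eeprom_section {
              u16 id;         /* section identifier             */
              u16 offset;     /* section start, in 16-bit words */
      };

      /* Find a section via the table stored in the EEPROM image itself,
       * instead of assuming a fixed layout. */
      static const u8 *eeprom_find_section(const u8 *eeprom, size_t len, u16 id)
      {
              u16 count = eeprom[0] | (eeprom[1] << 8);   /* entry count */
              const struct eeprom_section *tbl = (const void *)(eeprom + 2);
              size_t i;

              if (2 + (size_t)count * sizeof(*tbl) > len)
                      return NULL;                        /* malformed table */

              for (i = 0; i < count; i++) {
                      size_t off = (size_t)tbl[i].offset * 2; /* words -> bytes */

                      if (tbl[i].id == id && off < len)
                              return eeprom + off;
              }
              return NULL;    /* unknown section: nothing to fall back on */
      }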