1. 05 Feb 2013, 1 commit
    • net: usbnet: fix tx_dropped statistics · bf414b36
      Authored by Bjørn Mork
      It is normal for minidrivers accumulating frames to return NULL
      from their tx_fixup function. We do not want to count this as a
      drop, or log any debug messages.  A different exit path is
      therefore chosen for such drivers, skipping the debug message
      and the tx_dropped increment.
      
      The test for accumulating drivers was however completely bogus,
      making the exit path selection depend on whether the user had
      enabled tx_err logging or not. This would arbitrarily mess up
      accounting for both accumulating and non-accumulating minidrivers,
      and would result in unwanted debug messages for the accumulating
      drivers.
      
      Fix by testing for FLAG_MULTI_PACKET instead, which probably was
      the intention from the beginning.  This usage matches the documented
      behaviour of this flag:
      
       Indicates to usbnet, that USB driver accumulates multiple IP packets.
       Affects statistic (counters) and short packet handling.
      Signed-off-by: Bjørn Mork <bjorn@mork.no>
      Signed-off-by: David S. Miller <davem@davemloft.net>
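      
      A minimal sketch of the fixed exit-path selection, abridged from
      usbnet_start_xmit() in drivers/net/usb/usbnet.c (the locals dev,
      skb and info are assumed to be in scope as in the driver):
      
          if (info->tx_fixup) {
              skb = info->tx_fixup(dev, skb, GFP_ATOMIC);
              if (!skb) {
                  /* Accumulating minidrivers (FLAG_MULTI_PACKET) return
                   * NULL while collecting frames: not a drop, and no
                   * debug message. */
                  if (info->flags & FLAG_MULTI_PACKET)
                      goto not_drop;
                  /* For all other minidrivers NULL means a failed fixup. */
                  netif_dbg(dev, tx_err, dev->net, "can't tx_fixup skb\n");
                  goto drop;
              }
          }
      
      The buggy version branched on netif_msg_tx_err(dev) here instead,
      so the exit path depended on whether the user had enabled tx_err
      logging rather than on the minidriver's flags.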
  2. 03 Feb 2013, 1 commit
  3. 01 Feb 2013, 1 commit
  4. 31 Jan 2013, 3 commits
  5. 30 Jan 2013, 8 commits
  6. 29 Jan 2013, 3 commits
  7. 28 Jan 2013, 6 commits
  8. 27 Jan 2013, 14 commits
  9. 24 Jan 2013, 3 commits
    • mfd: vexpress-sysreg: Don't skip initialization on probe · 52666298
      Authored by Pawel Moll
      The vexpress-sysreg driver does not have to be initialized
      early when the platform doesn't require it. Unfortunately,
      in that case it wasn't initialized correctly: the master site
      lookup and config bridge registration were missing. This is
      fixed now.
      Signed-off-by: Pawel Moll <pawel.moll@arm.com>
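      
      The commit message is terse, so here is a purely illustrative
      sketch of the pattern it describes: probe must perform the full
      setup when the early-init path was not taken. The flag and the
      two helper names are hypothetical, not the driver's actual API:
      
          /* Illustrative only; real code lives in
           * drivers/mfd/vexpress-sysreg.c */
          static bool vexpress_sysreg_early_done;  /* hypothetical flag,
                                                    * set by early init */
          
          static int vexpress_sysreg_probe(struct platform_device *pdev)
          {
              if (!vexpress_sysreg_early_done) {
                  /* Steps previously skipped when init happened at
                   * probe time instead of early: */
                  vexpress_sysreg_lookup_master_site();  /* hypothetical */
                  vexpress_sysreg_register_bridge();     /* hypothetical */
              }
              /* ... rest of probe ... */
              return 0;
          }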
    • Revert "iwlwifi: fix the reclaimed packet tracking upon flush queue" · ae023b27
      Authored by Emmanuel Grumbach
      This reverts commit f590dcec,
      which has been reported to cause issues.
      
      See https://lkml.org/lkml/2013/1/20/4 for further details.
      
      Cc: stable@vger.kernel.org [3.7]
      Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
      Signed-off-by: Johannes Berg <johannes.berg@intel.com>
    • DM-RAID: Fix RAID10's check for sufficient redundancy · 55ebbb59
      Authored by Jonathan Brassow
      Before attempting to activate a RAID array, it is checked for sufficient
      redundancy.  That is, we make sure there are not so many failed devices -
      or devices specified for rebuild - that they undermine our ability to
      activate the array.  The current code performs this check twice - once to
      ensure there were not too many devices specified for rebuild by the user
      ('validate_rebuild_devices') and again after possibly experiencing a failure
      to read the superblock ('analyse_superblocks').  Neither of these checks is
      sufficient.  The first check is done properly, but with insufficient
      information about the possible failure state of the devices to make a good
      determination of whether the array can be activated.  The second check is simply
      done wrong in the case of RAID10 because it doesn't account for the
      independence of the stripes (i.e. mirror sets).  The solution is to use the
      properly written check ('validate_rebuild_devices'), but perform the check
      after the superblocks have been read and we know which devices have failed.
      This gives us one check instead of two and performs it in a location where
      it can be done right.
      
      Only RAID10 was affected and it was affected in the following ways:
      - the code did not properly catch the condition where a user specified
        a device for rebuild that already had a failed device in the same mirror
        set.  (This condition would, however, be caught at a deeper level in MD.)
      - the code triggered a false positive and denied activation when devices in
        independent mirror sets had failed, counting the failures as though they
        were all in the same set.
      
      The most likely place this error was introduced (and where this patch should
      have been included) is commit 4ec1e369, first introduced in v3.7-rc1.
      Consequently this fix should also go in v3.7.y, however there is a
      small conflict on the .version in raid_target, so I'll submit a
      separate patch to -stable.
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
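      
      A minimal sketch of the per-mirror-set check the message calls for,
      assuming a RAID10 "near" layout whose disk count is a multiple of
      the copy count; the struct and function below are simplified
      stand-ins for dm-raid's 'validate_rebuild_devices' logic, not its
      real types:
      
          #include <stdbool.h>
          
          struct dev_state {
              bool failed;   /* dead, or superblock unreadable */
              bool rebuild;  /* user specified this device for rebuild */
          };
          
          /* The array survives as long as every mirror set keeps at
           * least one healthy, non-rebuilding member.  Counting all
           * failures globally (the old bug) rejects arrays whose
           * failures are spread across independent sets. */
          static bool raid10_redundancy_ok(const struct dev_state *dev,
                                           int raid_disks, int copies)
          {
              for (int set = 0; set < raid_disks / copies; set++) {
                  int unusable = 0;
          
                  for (int i = 0; i < copies; i++)
                      if (dev[set * copies + i].failed ||
                          dev[set * copies + i].rebuild)
                          unusable++;
                  if (unusable >= copies)
                      return false;  /* an entire mirror set is unusable */
              }
              return true;
          }
      
      With this per-set view, four failures spread across four different
      two-way mirror sets still pass, while two failures inside the same
      set correctly fail - exactly the distinction the global count missed.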