1. 23 Jun 2011, 3 commits
    • ath9k: add external_reset callback to ath9k_platform_data for AR9330 · 7d95847c
      Committed by Gabor Juhos
      The patch adds a callback to ath9k_platform_data. If the
      callback is provided by the platform code, then it can be
      used to hard reset the WMAC device.
      
      The callback is required for doing a hard reset of the AR9330
      chips to get them working again after a hang.
      Signed-off-by: Gabor Juhos <juhosg@openwrt.org>
      Signed-off-by: John W. Linville <linville@tuxdriver.com>
      7d95847c
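      A minimal sketch of how platform code might wire up the callback
      described above. The field name follows the commit subject; the
      SoC reset helpers and register mask are assumptions shown only to
      illustrate the idea:

          #include <linux/delay.h>
          #include <linux/ath9k_platform.h>

          /* Hypothetical AR9330 board-support code. */
          static int ar933x_wmac_reset(void)
          {
              /* Pulse the WMAC reset line; helper and mask names are illustrative. */
              ath79_device_reset_set(AR933X_RESET_WMAC);
              udelay(10);
              ath79_device_reset_clear(AR933X_RESET_WMAC);
              return 0;
          }

          static struct ath9k_platform_data ar933x_wmac_data = {
              .external_reset = ar933x_wmac_reset,
          };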
    • ath9k: add MAC revision detection for AR9330 · 3762561a
      Committed by Gabor Juhos
      The AR9330 1.0 and 1.1 use the same MAC revision, so it
      is not possible to distinguish the two chips from the
      WMAC itself. The platform setup code, however, can
      distinguish the chips based on the SoC revision.
      
      Add a callback function to ath9k_platform_data in order
      to allow getting the revision number from the platform code.
      Signed-off-by: Gabor Juhos <juhosg@openwrt.org>
      Signed-off-by: John W. Linville <linville@tuxdriver.com>
      3762561a
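      A rough sketch of the platform side of the callback described
      above. The field name is inferred from the commit text, and the
      SoC-revision helper below is a made-up placeholder:

          #include <linux/ath9k_platform.h>

          /* Report the MAC revision the driver cannot read from the chip
           * itself: the platform code knows whether this SoC carries an
           * AR9330 1.0 or 1.1. */
          static unsigned int ar933x_get_mac_revision(void)
          {
              /* soc_is_ar9330_rev1() stands in for whatever SoC-revision
               * test the platform really uses. */
              return soc_is_ar9330_rev1() ? 1 : 0;
          }

          static struct ath9k_platform_data ar933x_wmac_data = {
              .get_mac_revision = ar933x_get_mac_revision,
          };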
    • netlink: advertise incomplete dumps · 670dc283
      Committed by Johannes Berg
      Consider the following situation:
       * a dump that would show 8 entries, four in the first
         round, and four in the second
       * between the first and second rounds, 6 entries are
         removed
       * now the second round will not show any entry, and
         even if there is a sequence/generation counter the
         application will not know
      
      To solve this problem, add a new flag, NLM_F_DUMP_INTR,
      to the netlink header to indicate that the dump wasn't
      consistent. This flag can also be set on the MSG_DONE
      message that terminates the dump, so that the situation
      above can be detected.
      
      To achieve this, add a sequence counter to the netlink
      callback struct. Of course, netlink code still needs
      to use this new functionality. The correct way to do
      that is to always set cb->seq when a dumpit callback
      is invoked and call nl_dump_check_consistent() for
      each new message. The core code will also call this
      function for the final MSG_DONE message.
      
      To make it usable with generic netlink, a new function
      genlmsg_nlhdr() is needed to obtain the netlink header
      from the genetlink user header.
      Signed-off-by: Johannes Berg <johannes.berg@intel.com>
      Acked-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: John W. Linville <linville@tuxdriver.com>
      670dc283
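      A sketch of what a dumpit implementation is expected to do with
      this support. nl_dump_check_consistent() and NLM_F_DUMP_INTR are
      the interfaces named above; my_table_generation, MY_MSG_TYPE and
      the message layout are illustrative:

          static int my_dumpit(struct sk_buff *skb, struct netlink_callback *cb)
          {
              struct nlmsghdr *nlh;

              /* my_table_generation must be bumped on every table change */
              cb->seq = my_table_generation;

              nlh = nlmsg_put(skb, NETLINK_CB(cb->skb).pid, cb->nlh->nlmsg_seq,
                              MY_MSG_TYPE, 0, NLM_F_MULTI);
              if (!nlh)
                  return -EMSGSIZE;

              /* sets NLM_F_DUMP_INTR on this message if cb->seq no longer
               * matches the current generation counter */
              nl_dump_check_consistent(cb, nlh);

              /* ... fill attributes for one entry ... */

              nlmsg_end(skb, nlh);
              return skb->len;
          }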
  2. 22 Jun 2011, 4 commits
    • net: remove mm.h inclusion from netdevice.h · b7f080cf
      Committed by Alexey Dobriyan
      Remove linux/mm.h inclusion from netdevice.h -- it's unused (I've checked manually).
      
      To prevent mm.h inclusion via other channels, also extract the "enum dma_data_direction"
      definition into a separate header. This tiny piece is what glues netdevice.h to mm.h
      via "netdevice.h => dmaengine.h => dma-mapping.h => scatterlist.h => mm.h".
      Removal of mm.h from scatterlist.h was tried and found not feasible
      on most archs, so the link was cut off earlier.
      
      Hopefully people are OK with the tiny include file.
      
      Note that mm_types.h is still dragged in, but it is a separate story.
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      b7f080cf
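      The extracted header ends up being little more than the enum
      itself; roughly (treat the header name and layout as a sketch
      rather than the exact file):

          /* linux/dma-direction.h (sketch) */
          #ifndef _LINUX_DMA_DIRECTION_H
          #define _LINUX_DMA_DIRECTION_H

          enum dma_data_direction {
              DMA_BIDIRECTIONAL = 0,
              DMA_TO_DEVICE = 1,
              DMA_FROM_DEVICE = 2,
              DMA_NONE = 3,
          };

          #endif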
    • dcb: Add ieee_dcb_delapp() and dcb op to delete app entry · f9ae7e4b
      Committed by John Fastabend
      Now that we allow multiple IEEE App entries, we need a way
      to remove specific entries. To do this, add the ieee_dcb_delapp()
      routine.
      
      Additionally, drivers may need to remove the APP entry from
      their firmware tables. Add a dcb ops routine to handle this.
      Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f9ae7e4b
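      A sketch of the new driver hook; the op is assumed to mirror the
      existing ieee_setapp signature (see include/net/dcbnl.h for the
      authoritative definition):

          struct dcbnl_rtnl_ops {
              /* ... existing IEEE 802.1Qaz ops ... */
              int (*ieee_setapp)(struct net_device *, struct dcb_app *);
              /* new: let the driver drop the APP entry from its
               * firmware tables as well */
              int (*ieee_delapp)(struct net_device *, struct dcb_app *);
              /* ... */
          };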
    • net: dcbnl, add multicast group for DCB · 314b4778
      Committed by John Fastabend
      Now that dcbnl is being used in many cases by more
      than a single agent, it is beneficial to be notified
      when some entity, either a driver or user space, has
      changed the DCB attributes.
      
      Today applications either end up polling the interface
      or relying on a user-space database to maintain the DCB
      state and post events. Polling is a poor solution for
      obvious reasons, and relying on a user-space database
      has its own downsides. Namely, it has created strange
      boot dependencies: the database must be populated
      before any application dependent on DCB attributes
      starts, or the application falls back to a polling loop.
      Populating the database requires negotiating link
      settings with the peer and can take anywhere from less
      than a second up to a few seconds, depending on the
      switch implementation.
      
      Perhaps more importantly, if another application or an
      embedded agent sets a DCB link attribute, the database
      has no way of knowing other than by polling the kernel.
      This prevents applications from responding quickly to
      changes in link events, which, at least in the FCoE case
      and probably for any other protocol expecting a lossless
      link, may result in IO errors.
      
      By adding a multicast group for DCB we have a clean way
      to disseminate kernel DCB link attributes up to user
      space, avoiding the need for user space to maintain
      a coherent database and disperse events that potentially
      do not reflect the current link state.
      Signed-off-by: John Fastabend <john.r.fastabend@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      314b4778
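      On the user-space side, subscribing looks like any other
      rtnetlink multicast group. A minimal sketch, assuming the new
      group is exposed as RTNLGRP_DCB and with error handling omitted:

          #include <stdio.h>
          #include <string.h>
          #include <unistd.h>
          #include <sys/types.h>
          #include <sys/socket.h>
          #include <linux/netlink.h>
          #include <linux/rtnetlink.h>

          #ifndef SOL_NETLINK
          #define SOL_NETLINK 270
          #endif

          int main(void)
          {
              int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
              struct sockaddr_nl sa;
              int grp = RTNLGRP_DCB;
              char buf[4096];

              memset(&sa, 0, sizeof(sa));
              sa.nl_family = AF_NETLINK;
              bind(fd, (struct sockaddr *)&sa, sizeof(sa));
              setsockopt(fd, SOL_NETLINK, NETLINK_ADD_MEMBERSHIP,
                         &grp, sizeof(grp));

              for (;;) {
                  ssize_t len = recv(fd, buf, sizeof(buf), 0);
                  if (len <= 0)
                      break;
                  printf("DCB notification: %zd bytes\n", len);
              }
              return 0;
          }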
    • cnic, bnx2i: Add support for new devices - 57800, 57810, and 57840 · f4b5ad26
      Committed by Michael Chan
      Also change the iSCSI RQ doorbell size from 16B to 64B to match the new firmware.
      Signed-off-by: Michael Chan <mchan@broadcom.com>
      Signed-off-by: Eddie Wai <eddie.wai@broadcom.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f4b5ad26
  3. 21 Jun 2011, 2 commits
    • vfs: i_state needs to be 'unsigned long' for now · 79568f5b
      Committed by Linus Torvalds
      Commit 13e12d14 ("vfs: reorganize 'struct inode' layout a bit")
      moved things around a bit and changed i_state to be unsigned int instead of
      unsigned long.  That was to help structure layout for the 64-bit case,
      and shrink 'struct inode' a bit (admittedly that only happened when
      spinlock debugging was on and i_flags didn't pack with i_lock).
      
      However, Meelis Roos reports that this results in unaligned exceptions
      on sparc, and it turns out that the bit-locking primitives that we use
      for the I_NEW bit want to use the bitops, which want 'unsigned long',
      not 'unsigned int'.
      
      We really should fix the bit locking code to not have that kind of
      requirement, but that's a much bigger change.  So for now, revert that
      field back to 'unsigned long' (but keep the other re-ordering changes
      from the commit that caused this).
      
      Andi points out that we have played games with this in 'struct page', so
      it's solvable with other hacks too, but since right now the struct inode
      size advantage only happens with some rare config options, it's not
      worth fighting.
      
      It _would_ be worth fixing the bitlocking code, though.  Especially
      since there is no type safety in the bitlocking code (this never caused
      any warnings, and worked fine on x86-64, because the bitlocks take a
      'void *' and x86-64 doesn't care that deeply about alignment).  So it's
      currently a very easy problem to trigger by mistake and never notice.
      Reported-by: Meelis Roos <mroos@linux.ee>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: David Miller <davem@davemloft.net>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      79568f5b
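      A simplified illustration of the constraint: the sleeping
      bit-wait helpers used for I_NEW accept a 'void *', so nothing
      warned when the word changed size, yet underneath they drive the
      generic bitops, which expect an unsigned long. The snippet below
      roughly mirrors what wait_on_inode() does; the wait callback is a
      stand-in written for this example:

          #include <linux/fs.h>
          #include <linux/sched.h>
          #include <linux/wait.h>

          static int example_inode_wait(void *word)
          {
              schedule();
              return 0;
          }

          static void wait_until_inode_ready(struct inode *inode)
          {
              /* i_state must really be an unsigned long: the bit-wait
               * machinery treats it as a bitops word. */
              wait_on_bit(&inode->i_state, __I_NEW, example_inode_wait,
                          TASK_UNINTERRUPTIBLE);
          }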
    • bcma: clean exports of functions · 440ca98f
      Committed by Rafał Miłecki
      The function managing IRQs is needed by external drivers like b43.
      On the other hand, we do not expect any host drivers to be written outside of
      bcma, so it is safe not to export the functions related to that.
      Signed-off-by: Rafał Miłecki <zajec5@gmail.com>
      Signed-off-by: John W. Linville <linville@tuxdriver.com>
      440ca98f
  4. 20 Jun 2011, 2 commits
  5. 18 Jun 2011, 2 commits
  6. 17 Jun 2011, 13 commits
  7. 16 Jun 2011, 7 commits
  8. 15 Jun 2011, 2 commits
    • tg3: Migrate phy preprocessor defs to system defs · 221c5637
      Committed by Matt Carlson
      This patch changes the code to use some of the preprocessor
      definitions from mii.h over its homegrown equivalents.
      Signed-off-by: Matt Carlson <mcarlson@broadcom.com>
      Reviewed-by: Michael Chan <mchan@broadcom.com>
      Reviewed-by: Benjamin Li <benli@broadcom.com>
      Signed-off-by: David S. Miller <davem@conan.davemloft.net>
      221c5637
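      Illustrative only: the kind of substitution the patch makes,
      using names that exist in <linux/mii.h> (the homegrown define in
      the comment is an example of the pattern, not an exact quote from
      the driver):

          #include <linux/types.h>
          #include <linux/mii.h>

          /* before: a driver-private copy such as
           *     #define MII_TG3_CTRL  0x09
           * after: the generic register/bit names from mii.h */
          static inline u16 tg3_1000T_advertisement(void)
          {
              /* bits written to MII_CTRL1000 (register 0x09) */
              return ADVERTISE_1000HALF | ADVERTISE_1000FULL;
          }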
    • rcu: Use softirq to address performance regression · 09223371
      Committed by Shaohua Li
      Commit a26ac245 ("rcu: move TREE_RCU from softirq to kthread")
      introduced a performance regression. In an AIM7 test, this commit degraded
      performance by about 40%.
      
      The commit runs rcu callbacks in a kthread instead of softirq. We observed
      a high rate of context switches caused by this. Our test system has
      64 CPUs and HZ is 1000, so we saw more than 64k context switches per second,
      caused by RCU's per-CPU kthread.  A trace showed that most of
      the time the RCU per-CPU kthread doesn't actually handle any callbacks,
      but instead just does a very small amount of work handling grace periods.
      This means that RCU's per-CPU kthreads are making the scheduler do quite
      a bit of work in order to allow a very small amount of RCU-related
      processing to be done.
      
      Alex Shi's analysis determined that this slowdown is due to lock
      contention within the scheduler.  Unfortunately, as Peter Zijlstra points
      out, the scheduler's real-time semantics require global action, which
      means that this contention is inherent in real-time scheduling.  (Yes,
      perhaps someone will come up with a workaround -- otherwise, -rt is not
      going to do well on large SMP systems -- but this patch will work around
      this issue in the meantime.  And "the meantime" might well be forever.)
      
      This patch therefore re-introduces softirq processing to RCU, but only
      for core RCU work.  RCU callbacks are still executed in kthread context,
      so that only a small amount of RCU work runs in softirq context in the
      common case.  This should minimize ksoftirqd execution, allowing us to
      skip boosting of ksoftirqd for CONFIG_RCU_BOOST=y kernels.
      Signed-off-by: Shaohua Li <shaohua.li@intel.com>
      Tested-by: "Alex,Shi" <alex.shi@intel.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      09223371
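      The shape of the change, in sketch form. RCU_SOFTIRQ,
      raise_softirq() and open_softirq() are existing kernel
      interfaces; the handler body is elided here:

          #include <linux/interrupt.h>

          /* Core grace-period processing moves back to softirq context. */
          static void rcu_process_callbacks(struct softirq_action *unused)
          {
              /* per-CPU core RCU work only; callback invocation still
               * happens in the per-CPU rcuc kthreads */
          }

          /* Kicking the core machinery no longer needs a kthread wakeup,
           * and therefore no scheduler round trip: */
          static void invoke_rcu_core(void)
          {
              raise_softirq(RCU_SOFTIRQ);
          }

          /* registered once during init:
           *     open_softirq(RCU_SOFTIRQ, rcu_process_callbacks);
           */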
  9. 13 Jun 2011, 1 commit
    • Delay struct net freeing while there's a sysfs instance referring to it · a685e089
      Committed by Al Viro
      	* new refcount in struct net, controlling actual freeing of the memory
      	* new method in kobj_ns_type_operations (->drop_ns())
      	* ->current_ns() semantics change - it's supposed to be followed by
      corresponding ->drop_ns().  For struct net in case of CONFIG_NET_NS it bumps
      the new refcount; net_drop_ns() decrements it and calls net_free() if the
      last reference has been dropped.  Method renamed to ->grab_current_ns().
      	* old net_free() callers call net_drop_ns() instead.
      	* sysfs_exit_ns() is gone, along with a large part of the callchain
      leading to it; now that the references stored in ->ns[...] stay valid we
      do not need to hunt them down and replace them with NULL.  That fixes
      problems in sysfs_lookup() and sysfs_readdir(), along with getting rid
      of sb->s_instances abuse.
      
      	Note that struct net *shutdown* logic has not changed - net_cleanup()
      is called exactly when it used to be called.  The only thing postponed by
      having a sysfs instance referring to that struct net is the actual freeing of
      the memory occupied by struct net.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      a685e089
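      A sketch of the new lifetime rule described above. net_drop_ns()
      and net_free() are named in the text; the refcount field names
      here are assumptions for illustration (the real code lives in
      net/core/net_namespace.c):

          struct net {
              atomic_t count;    /* existing: drives shutdown/cleanup */
              atomic_t passive;  /* new: keeps the memory alive while
                                  * sysfs still holds a reference */
              /* ... */
          };

          /* Replaces direct net_free() calls; the memory is released only
           * when the last such reference is dropped. */
          void net_drop_ns(void *p)
          {
              struct net *ns = p;

              if (ns && atomic_dec_and_test(&ns->passive))
                  net_free(ns);
          }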
  10. 12 Jun 2011, 3 commits
    • vlan: Fix the ingress VLAN_FLAG_REORDER_HDR check · 0b5c9db1
      Committed by Jiri Pirko
      Testing of VLAN_FLAG_REORDER_HDR does not belong in vlan_untag
      but rather in vlan_do_receive.  Otherwise the vlan header
      will not be properly put on the packet in the case of
      vlan header acceleration.
      
      As we remove the check from vlan_check_reorder_header,
      rename it to vlan_reorder_header to keep the naming clean.
      
      Fix up the skb->pkt_type early so we don't look at the packet
      after adding the vlan tag, which guarantees we don't goof
      and look at the wrong field.
      
      Use a simple if statement instead of a complicated switch
      statement to decide whether we need to increment rx_stats
      for a multicast packet.
      
      Hopefully at some point we will just declare the case where
      VLAN_FLAG_REORDER_HDR is cleared as unsupported and remove
      the code.  Until then this keeps it working correctly.
      Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
      Signed-off-by: Jiri Pirko <jpirko@redhat.com>
      Acked-by: Changli Gao <xiaosuo@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      0b5c9db1
    • virtio_net: introduce VIRTIO_NET_HDR_F_DATA_VALID · 10a8d94a
      Committed by Jason Wang
      There's no need for the guest to validate the checksum if it has been
      validated by the host NICs. So this patch introduces a new flag,
      VIRTIO_NET_HDR_F_DATA_VALID, which is used to bypass checksum
      examination in the guest. The backend (tap/macvtap) may set this flag
      when it meets skbs with CHECKSUM_UNNECESSARY, to save CPU utilization.
      
      No feature negotiation is needed, as old drivers simply ignore this flag.
      
      Iperf shows a 12%-30% performance improvement for UDP traffic. For TCP,
      there is no difference when GRO is on, as it produces skbs with partial
      checksums. But when GRO is disabled, a 20% or even higher improvement
      can be measured with netperf.
      Signed-off-by: Jason Wang <jasowang@redhat.com>
      Acked-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      10a8d94a
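      Roughly, the two ends of the new flag behave as described above.
      A sketch (the wrapper function names are invented for this
      example; the flag and CHECKSUM_UNNECESSARY are the real
      identifiers):

          #include <linux/skbuff.h>
          #include <linux/virtio_net.h>

          /* Guest driver: receive-path decision. */
          static void guest_note_csum(struct sk_buff *skb,
                                      const struct virtio_net_hdr *hdr)
          {
              if (hdr->flags & VIRTIO_NET_HDR_F_NEEDS_CSUM) {
                  /* partial checksum, handled as before */
              } else if (hdr->flags & VIRTIO_NET_HDR_F_DATA_VALID) {
                  skb->ip_summed = CHECKSUM_UNNECESSARY;
              }
          }

          /* Host backend (tap/macvtap): header construction. */
          static void host_fill_flags(const struct sk_buff *skb,
                                      struct virtio_net_hdr *hdr)
          {
              if (skb->ip_summed == CHECKSUM_UNNECESSARY)
                  hdr->flags |= VIRTIO_NET_HDR_F_DATA_VALID;
          }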
    • linux/seqlock.h should #include asm/processor.h for cpu_relax() · 56a21052
      Committed by David Howells
      It uses cpu_relax(), and so needs <asm/processor.h>.
      
      Without this patch, I see:
      
         CC      arch/mn10300/kernel/asm-offsets.s
        In file included from include/linux/time.h:8,
                         from include/linux/timex.h:56,
                         from include/linux/sched.h:57,
                         from arch/mn10300/kernel/asm-offsets.c:7:
        include/linux/seqlock.h: In function 'read_seqbegin':
        include/linux/seqlock.h:91: error: implicit declaration of function 'cpu_relax'
      
      whilst building asb2364_defconfig on MN10300.
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      56a21052
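      The fix is essentially the one #include; for context, a
      simplified rendition of the reader-side loop that needs
      cpu_relax():

          #include <linux/seqlock.h>
          #include <asm/processor.h>   /* cpu_relax(), the missing include */

          static inline unsigned example_read_seqbegin(const seqlock_t *sl)
          {
              unsigned ret;

          repeat:
              ret = sl->sequence;
              smp_rmb();
              if (unlikely(ret & 1)) {
                  /* a writer is active; back off politely and retry */
                  cpu_relax();
                  goto repeat;
              }
              return ret;
          }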
  11. 11 Jun 2011, 1 commit