1. 05 May 2014, 1 commit
  2. 22 Apr 2014, 2 commits
  3. 11 Apr 2014, 2 commits
  4. 09 Apr 2014, 5 commits
    • mac80211: update last_tx_rate only for data frame · 00a9a6d1
      Committed by Chun-Yeow Yeoh
      The rate controller in firmware may also return the TX rate
      used for a management frame, which is usually sent at the
      lowest TX rate (1 Mbps in 2.4 GHz). So update last_tx_rate
      only for data frames.
      
      This patch is tested with ath9k_htc.
      Signed-off-by: Chun-Yeow Yeoh <yeohchunyeow@gmail.com>
      Signed-off-by: Johannes Berg <johannes.berg@intel.com>
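      The logic of the fix can be sketched in plain C as a tiny
      userspace model (the struct, helper names, and update function
      are illustrative, not mac80211's actual code):

```c
#include <stdint.h>
#include <stdbool.h>

/* The frame type lives in bits 2-3 of the 802.11 frame-control
 * field: 0 = management, 1 = control, 2 = data. */
static bool is_data_frame(uint16_t fc)
{
    return ((fc >> 2) & 0x3) == 2;
}

/* Illustrative stand-in for the per-station rate bookkeeping. */
struct sta_info_model { int last_tx_rate; };

static void tx_status_update(struct sta_info_model *sta,
                             uint16_t fc, int rate)
{
    if (is_data_frame(fc))      /* the fix: ignore mgmt/ctrl frames */
        sta->last_tx_rate = rate;
}
```

      A management frame reported at 1 Mbps thus no longer clobbers
      the rate recorded for the last data frame.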
    • mac80211: fix radar_enabled propagation · 9b4816f5
      Committed by Michal Kazior
      If the chandef had non-HT width, it was possible for a
      radar_enabled update not to be propagated properly
      through drv_config(). This happened because
      ieee80211_hw_conf_chan() would never see
      local->hw.conf.chandef and local->_oper_chandef differ.
      
      This wasn't a problem with HT chandefs, because the
      _oper_chandef width is reset to non-HT in
      ieee80211_free_chanctx(), making
      ieee80211_hw_conf_chan() kick in.
      
      This problem led (at least) ath10k to not start CAC if a
      prior CAC was cancelled and both CACs were requested for
      identical non-HT chandefs.
      Signed-off-by: Michal Kazior <michal.kazior@tieto.com>
      Signed-off-by: Johannes Berg <johannes.berg@intel.com>
    • mac80211: Disable SMPS for the monitor interface · 7b8a9cdd
      Committed by Ido Yariv
      All antennas should be operational when monitoring to maximize
      reception.
      Signed-off-by: Ido Yariv <idox.yariv@intel.com>
      Signed-off-by: Johannes Berg <johannes.berg@intel.com>
    • mac80211: fix software remain-on-channel implementation · 115b943a
      Committed by Johannes Berg
      Jouni reported that when doing off-channel transmissions mixed
      with on-channel transmissions, the on-channel ones ended up on
      the off-channel in some cases.
      
      The reason for that is that during the refactoring of the off-
      channel code, I lost the part that stopped all activity and as
      a consequence the on-channel frames (including data frames)
      were no longer queued but would be transmitted on the temporary
      channel.
      
      Fix this by simply restoring the lost activity stop call.
      
      Cc: stable@vger.kernel.org
      Fixes: 2eb278e0 ("mac80211: unify SW/offload remain-on-channel")
      Reported-by: Jouni Malinen <j@w1.fi>
      Signed-off-by: Johannes Berg <johannes.berg@intel.com>
    • net: sctp: wake up all assocs if sndbuf policy is per socket · 52c35bef
      Committed by Daniel Borkmann
      SCTP charges chunks for wmem accounting via skb->truesize in
      sctp_set_owner_w(), and sctp_wfree() respectively as the
      reverse operation. If a sender runs out of wmem, it needs to
      wait via sctp_wait_for_sndbuf(), and gets woken up by a call
      to __sctp_write_space() mostly via sctp_wfree().
      
      __sctp_write_space() is called per association. Although
      we assign sk->sk_write_space() to sctp_write_space(),
      which would act per socket, that is only used when send
      space is increased via the SO_SNDBUF socket option, as
      SOCK_USE_WRITE_QUEUE is set and sock_wfree() therefore
      does not invoke it.
      
      Commit 4c3a5bda ("sctp: Don't charge for data in sndbuf
      again when transmitting packet") fixed an issue where in case
      sctp_packet_transmit() manages to queue up more than sndbuf
      bytes, sctp_wait_for_sndbuf() will never be woken up again
      unless it is interrupted by a signal. However, a remaining
      issue is that if net.sctp.sndbuf_policy=0, that is,
      accounting per socket, and one-to-many sockets are in use,
      the reclaimed write space from sctp_wfree() is 'unfairly'
      handed back on the server to the association that is the
      lucky one to be woken up again via __sctp_write_space(),
      while the remaining associations are never woken up again
      (unless by a signal).
      
      The effect disappears with net.sctp.sndbuf_policy=1, that
      is wmem accounting per association, as it guarantees a fair
      share of wmem among associations.
      
      Therefore, if we have reclaimed memory in case of per socket
      accounting, wake all related associations to a socket in a
      fair manner, that is, traverse the socket association list
      starting from the current neighbour of the association and
      issue a __sctp_write_space() to everyone until we end up
      waking ourselves. This guarantees that no association is
      preferred over another and even if more associations are
      taken into the one-to-many session, all receivers will get
      messages from the server and are not stalled forever on
      high load. This setting still leaves the advantage of per
      socket accounting intact, as an association can still use
      up global limits if unused by others.
      
      Fixes: 4eb701df ("[SCTP] Fix SCTP sendbuffer accouting.")
      Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
      Cc: Thomas Graf <tgraf@suug.ch>
      Cc: Neil Horman <nhorman@tuxdriver.com>
      Cc: Vlad Yasevich <vyasevic@redhat.com>
      Acked-by: Vlad Yasevich <vyasevic@redhat.com>
      Acked-by: Neil Horman <nhorman@tuxdriver.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
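      The fair wake-up traversal described above can be modelled in a
      few lines of userspace C (the fixed-size array and function name
      are illustrative; the kernel walks the socket's association list):

```c
#define NASSOC 4

/* "Wake" every association on the socket, starting from the current
 * association's neighbour and wrapping around so that we wake
 * ourselves last; no association is preferred over another. */
static void wake_all_fair(int cur, int woken[NASSOC])
{
    int i = cur;
    do {
        i = (i + 1) % NASSOC;   /* start from our neighbour */
        woken[i] = 1;           /* stands in for __sctp_write_space() */
    } while (i != cur);         /* stop after waking ourselves */
}
```

      Because the walk always begins at the neighbour, repeated wake-ups
      rotate through the associations rather than favouring one.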
  5. 08 Apr 2014, 6 commits
    • net: replace __this_cpu_inc in route.c with raw_cpu_inc · 3ed66e91
      Committed by Christoph Lameter
      The RT_CACHE_STAT_INC macro triggers the new preemption checks
      for __this_cpu ops.
      
      I do not see any other synchronization that would allow
      the use of a __this_cpu operation here. However, in commit
      dbd2915c ("[IPV4]: RT_CACHE_STAT_INC() warning fix")
      Andrew justifies the use of raw_smp_processor_id() because
      "we do not care" about races.  In the past we agreed that
      the price of disabling interrupts here to get consistent
      counters would be too high.  These counters may be
      inaccurate due to race conditions.
      
      The use of a __this_cpu op already improves on commit
      dbd2915c, since the single instruction emitted on x86 no
      longer allows the race to occur.  Non-x86 platforms,
      however, could still experience a race here.
      
      Trace:
      
        __this_cpu_add operation in preemptible [00000000] code: avahi-daemon/1193
        caller is __this_cpu_preempt_check+0x38/0x60
        CPU: 1 PID: 1193 Comm: avahi-daemon Tainted: GF            3.12.0-rc4+ #187
        Call Trace:
          check_preemption_disabled+0xec/0x110
          __this_cpu_preempt_check+0x38/0x60
          __ip_route_output_key+0x575/0x8c0
          ip_route_output_flow+0x27/0x70
          udp_sendmsg+0x825/0xa20
          inet_sendmsg+0x85/0xc0
          sock_sendmsg+0x9c/0xd0
          ___sys_sendmsg+0x37c/0x390
          __sys_sendmsg+0x49/0x90
          SyS_sendmsg+0x12/0x20
          tracesys+0xe1/0xe6
      Signed-off-by: Christoph Lameter <cl@linux.com>
      Acked-by: David S. Miller <davem@davemloft.net>
      Acked-by: Ingo Molnar <mingo@kernel.org>
      Cc: Eric Dumazet <edumazet@google.com>
      Cc: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
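      The semantics being chosen here can be modelled in userspace (the
      array, field name, and NR_CPUS value are illustrative, not the
      kernel's): a per-CPU statistics slot bumped with a plain,
      unsynchronized increment.

```c
enum { NR_CPUS = 2 };

/* Illustrative stand-in for the per-CPU route cache statistics. */
struct rt_cache_stat_model { unsigned long in_slow_tot; };

static struct rt_cache_stat_model rt_cache_stat[NR_CPUS];

/* Like raw_cpu_inc(): no locking and no preemption check; a rare
 * lost update is accepted rather than paying for interrupt
 * disabling around a mere statistics counter. */
static void rt_cache_stat_inc(int cpu)
{
    rt_cache_stat[cpu].in_slow_tot++;
}
```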
    • netdev: remove potentially harmful checks · 6859e7df
      Committed by Veaceslav Falico
      Currently we check a variable for != NULL after actually
      dereferencing it in netdev_lower_get_next_private*().
      
      It's counter-intuitive at best, and can lead to faulty
      usage (as it implies that the variable can be NULL), so
      fix it by removing the useless checks.
      Reported-by: Daniel Borkmann <dborkman@redhat.com>
      CC: "David S. Miller" <davem@davemloft.net>
      CC: Eric Dumazet <edumazet@google.com>
      CC: Nicolas Dichtel <nicolas.dichtel@6wind.com>
      CC: Jiri Pirko <jiri@resnulli.us>
      CC: stephen hemminger <stephen@networkplumber.org>
      CC: Jerry Chu <hkchu@google.com>
      Signed-off-by: Veaceslav Falico <vfalico@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
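      The corrected iterator shape can be sketched in miniature (struct
      and function names are illustrative, not the kernel's): the
      caller guarantees a non-NULL entry, so no after-the-fact NULL
      check is performed on the already-dereferenced pointer.

```c
#include <stddef.h>

struct lower_model { struct lower_model *next; int priv; };

/* Return the current entry's private data and advance the iterator.
 * The entry is dereferenced up front, so a NULL check on it after
 * the fact would be useless and would only suggest, wrongly, that
 * NULL is a valid input. */
static int get_next_private(struct lower_model **iter)
{
    int priv = (*iter)->priv;   /* caller must pass a non-NULL entry */
    *iter = (*iter)->next;      /* advance without a redundant check */
    return priv;
}
```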
    • pktgen: fix xmit test for BQL enabled devices · 6f25cd47
      Committed by Daniel Borkmann
      As in commit 8e2f1a63 ("packet: fix packet_direct_xmit
      for BQL enabled drivers"), pktgen's xmit tests the
      __QUEUE_STATE_STACK_XOFF bit, which means the device's TX
      ring is never fully filled for BQL drivers that use
      netdev_tx_sent_queue(). The fix is to use the
      netif_xmit_frozen_or_drv_stopped() test, as packet
      sockets already do.
      Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
      Cc: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
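      The difference between the two tests can be modelled with toy
      queue-state bits (the bit values and names here are illustrative,
      not the kernel's actual definitions). BQL sets the stack stop bit
      via netdev_tx_sent_queue() while the driver's own stop bit stays
      clear, so only a driver stop or a frozen queue should block pktgen:

```c
#include <stdbool.h>

#define QSTATE_DRV_XOFF   (1u << 0)  /* driver stopped the queue */
#define QSTATE_STACK_XOFF (1u << 1)  /* BQL/stack flow control */
#define QSTATE_FROZEN     (1u << 2)  /* queue frozen */

/* Old pktgen test: treats the BQL stack bit as "blocked", so the
 * TX ring is never fully filled once BQL starts accounting. */
static bool blocked_old(unsigned int state)
{
    return state & QSTATE_STACK_XOFF;
}

/* Models netif_xmit_frozen_or_drv_stopped(): only a driver stop or
 * a frozen queue counts as blocked. */
static bool blocked_fixed(unsigned int state)
{
    return state & (QSTATE_DRV_XOFF | QSTATE_FROZEN);
}
```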
    • tipc: Let tipc_release() return 0 · 065d7e39
      Committed by Geert Uytterhoeven
      net/tipc/socket.c: In function ‘tipc_release’:
      net/tipc/socket.c:352: warning: ‘res’ is used uninitialized in this function
      
      Introduced by commit 24be34b5 ("tipc:
      eliminate upcall function pointers between port and socket"), which
      removed the sole initializer of "res".
      
      Just return 0 to fix it.
      Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • mac802154: fix duplicate #include headers · 6c6a9855
      Committed by Jean Sacren
      The commit e6278d92 ("mac802154: use header operations to
      create/parse headers") included the header
      
      		net/ieee802154_netdev.h
      
      which had already been included by commit b70ab2e8
      ("ieee802154: enforce consistent endianness in the
      802.15.4 stack"). Fix this duplicate #include by deleting
      the latter one, as the required header is already in
      place.
      Signed-off-by: Jean Sacren <sakiwit@gmail.com>
      Cc: Alexander Smirnov <alex.bluesman.smirnov@gmail.com>
      Cc: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com>
      Cc: Phoebe Buckheister <phoebe.buckheister@itwm.fraunhofer.de>
      Cc: linux-zigbee-devel@lists.sourceforge.net
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: filter: be more defensive on div/mod by X==0 · 5f9fde5f
      Committed by Daniel Borkmann
      The old interpreter behaviour was to return 0 whenever it
      found that a division by 0 would take place. The new
      interpreter currently just skips the instruction and
      continues execution.
      
      It's true that a value of 0 as return might not be appropriate
      in all cases, but current users (socket filters -> drop
      packet, seccomp -> SECCOMP_RET_KILL, cls_bpf -> unclassified,
      etc) seem fine with that behaviour. Better this than undefined
      BPF program behaviour as it's expected that A contains the
      result of the division. In future, as more use cases open up,
      we could further adapt this return value to our needs, if
      necessary.
      
      So reintroduce the old interpreter's return of 0 for
      division by 0. Also, in the case of K, which is
      guaranteed to be 32 bits wide, sk_chk_filter() already
      prevents division by 0 through K, so we can generally
      spare ourselves these tests.
      Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
      Reviewed-by: Alexei Starovoitov <ast@plumgrid.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
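      The reintroduced behaviour can be sketched as one ALU step of a
      toy interpreter (function and parameter names are illustrative):
      dividing by X == 0 terminates the program with return value 0
      instead of continuing with an undefined A.

```c
#include <stdint.h>

/* Execute one divide instruction. Returns 1 to continue with the
 * next instruction, or 0 to stop the program; on X == 0 the whole
 * program's return value is forced to 0 (drop packet,
 * SECCOMP_RET_KILL, unclassified, ...). */
static int alu_div_step(uint32_t *A, uint32_t X, uint32_t *prog_ret)
{
    if (X == 0) {
        *prog_ret = 0;   /* whole program returns 0 */
        return 0;        /* stop execution */
    }
    *A /= X;
    return 1;            /* continue execution */
}
```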
  6. 05 Apr 2014, 24 commits