1. 10 Mar, 2013 (1 commit)
  2. 09 Mar, 2013 (3 commits)
  3. 08 Mar, 2013 (5 commits)
  4. 06 Mar, 2013 (13 commits)
    • D
      net/ipv4: Timestamp option cannot overflow with prespecified addresses · fa2b04f4
      Committed by David Ward
      When a router forwards a packet that contains the IPv4 timestamp option,
      if there is no space left in the option for the router to add its own
      timestamp, then the router increments the Overflow value in the option.
      
      However, if the addresses of the routers are prespecified in the option,
      then the overflow condition cannot happen: the option is structured so
      that each prespecified router has a place to write its timestamp. Other
      routers do not add a timestamp, so there will never be a lack of space.
      
      This fix ensures that the Overflow value in the IPv4 timestamp option is
      not incremented when the addresses of the routers are prespecified, even
      if the Pointer value is greater than the Length value.
      Signed-off-by: David Ward <david.ward@ll.mit.edu>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      fa2b04f4
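
      A minimal sketch of the rule this fix enforces (the function and parameter
      names below are illustrative, not the actual code in net/ipv4/ip_options.c;
      IPOPT_TS_PRESPEC is the real flag value from include/uapi/linux/ip.h):

      #include <linux/types.h>
      #include <linux/ip.h>      /* IPOPT_TS_PRESPEC and friends */

      /* Illustrative only: decide whether a forwarding router should bump
       * the Overflow counter of an IPv4 timestamp option. */
      static void ts_overflow_sketch(unsigned char *optptr, int flag, bool room_left)
      {
              if (room_left)
                      return;         /* space left: just write our timestamp */

              if (flag == IPOPT_TS_PRESPEC)
                      return;         /* prespecified routers: overflow cannot happen */

              /* high nibble of the 4th option byte is the Overflow counter */
              if ((optptr[3] & 0xF0) != 0xF0)
                      optptr[3] += 0x10;
      }
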
    • E
      net: reduce net_rx_action() latency to 2 HZ · d1f41b67
      Committed by Eric Dumazet
      We should use time_after_eq() to get a maximum latency of two ticks
      instead of three.
      
      Bug added in commit 24f8b238 (net: increase receive packet quantum)
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      d1f41b67
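
      For reference, a sketch of what the one-character change buys, using the
      jiffies helpers from <linux/jiffies.h> (this is not the actual
      net_rx_action() body; the budget handling is elided):

      #include <linux/jiffies.h>

      static void poll_loop_sketch(void)
      {
              unsigned long time_limit = jiffies + 2;

              /* time_after(jiffies, time_limit) only breaks the loop once
               * jiffies is strictly past the limit, i.e. after up to three
               * ticks; time_after_eq() stops as soon as two ticks elapsed. */
              while (!time_after_eq(jiffies, time_limit)) {
                      /* ... poll one NAPI instance ... */
              }
      }
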
    • R
      net: fix new kernel-doc warnings in net core · 691b3b7e
      Committed by Randy Dunlap
      Fix new kernel-doc warnings in net/core/dev.c:
      
      Warning(net/core/dev.c:4788): No description found for parameter 'new_carrier'
      Warning(net/core/dev.c:4788): Excess function parameter 'new_carries' description in 'dev_change_carrier'
      Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      691b3b7e
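
      The warnings mean the kernel-doc comment named the parameter @new_carries
      while the function signature uses new_carrier; after the fix the comment
      has roughly the shape below (exact wording in net/core/dev.c may differ):

      /**
       *      dev_change_carrier - change device carrier
       *      @dev: device
       *      @new_carrier: new value
       *
       *      Change device carrier
       */
      int dev_change_carrier(struct net_device *dev, bool new_carrier);
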
    • P
      pkt_sched: sch_qfq: remove a useless invocation of qfq_update_eligible · 76e4cb0d
      Committed by Paolo Valente
      QFQ+ can select for service only 'eligible' aggregates, i.e.,
      aggregates that would have started to be served also in the emulated
      ideal system.  As a consequence, for QFQ+ to be work conserving, at
      least one of the active aggregates must be eligible when it is time to
      choose the next aggregate to serve.
      
      The set of eligible aggregates is updated through the function
      qfq_update_eligible(), which does guarantee that, after its
      invocation, at least one of the active aggregates is eligible.
      Because of this property, this function is invoked in
      qfq_deactivate_agg() to guarantee that at least one of the active
      aggregates is still eligible after an aggregate has been deactivated.
      In particular, the critical case is when there are other active
      aggregates, but the aggregate being deactivated happens to be the only
      one eligible.
      
      However, this precaution is not needed for QFQ+ to be work conserving,
      because qfq_update_eligible() is also invoked at the beginning of
      qfq_choose_next_agg(). This patch removes the additional invocation of
      qfq_update_eligible() in qfq_deactivate_agg().
      Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
      Reviewed-by: Fabio Checconi <fchecconi@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      76e4cb0d
    • P
      pkt_sched: sch_qfq: do not allow virtual time to jump if an aggregate is in service · 40dd2d54
      Committed by Paolo Valente
      By definition of (the algorithm of) QFQ+, the system virtual time must
      be pushed up only if there is no 'eligible' aggregate, i.e. no
      aggregate that would have started to be served also in the ideal
      system emulated by QFQ+.  QFQ+ serves only eligible aggregates, hence
      the aggregate currently in service is eligible.  As a consequence, to
      decide whether there is no eligible aggregate, QFQ+ must also check
      whether there is no aggregate in service.
      Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
      Reviewed-by: Fabio Checconi <fchecconi@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      40dd2d54
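
      A reduced model of the rule (the struct below is illustrative, not the
      real struct qfq_sched, although in_serv_agg mirrors the name of the field
      used by sch_qfq.c):

      /* Illustrative only: minimal scheduler state for the sketch. */
      struct qfq_state_sketch {
              unsigned long long V;           /* system virtual time */
              const void *in_serv_agg;        /* aggregate in service, or NULL */
              unsigned int num_eligible;      /* eligible active aggregates */
      };

      /* Push the virtual time up only if there is truly no eligible
       * aggregate, which also requires that nothing is in service. */
      static void maybe_push_up_V(struct qfq_state_sketch *q, unsigned long long new_V)
      {
              if (q->num_eligible == 0 && q->in_serv_agg == NULL && new_V > q->V)
                      q->V = new_V;
      }
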
    • P
      pkt_sched: sch_qfq: prevent budget from wrapping around after a dequeue · a0143efa
      Committed by Paolo Valente
      Aggregate budgets are computed so as to guarantee that, after an
      aggregate has been selected for service, that aggregate has enough
      budget to serve at least one maximum-size packet for the classes it
      contains. For this reason, after a new aggregate has been selected
      for service, its next packet is immediately dequeued, without any
      further control.
      
      The maximum packet size for a class, lmax, can be changed through
      qfq_change_class(). In case the user sets lmax to a lower value than
      the size of some of the still-to-arrive packets, QFQ+ will
      automatically push up lmax as it enqueues these packets.  This
      automatic push up is likely to happen with TSO/GSO.
      
      In any case, if lmax is assigned a lower value than the size of some
      of the packets already enqueued for the class, then the following
      problem may occur: the size of the next packet to dequeue for the
      class may happen to be larger than lmax, after the aggregate to which
      the class belongs has just been selected for service. In this case,
      even the budget of the aggregate, which is an unsigned value, may be
      lower than the size of the next packet to dequeue. After dequeueing
      this packet and subtracting its size from the budget, the latter would
      wrap around.
      
      This fix prevents the budget from wrapping around after any packet
      dequeue.
      Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
      Reviewed-by: Fabio Checconi <fchecconi@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a0143efa
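
      A self-contained userspace sketch of the failure mode and the guard
      (plain C, not the actual sch_qfq.c code):

      #include <stdio.h>

      /* agg->budget is unsigned: a naive "budget -= len" wraps around when
       * a packet larger than the remaining budget is dequeued. */
      static unsigned int charge_budget(unsigned int budget, unsigned int len)
      {
              return len > budget ? 0 : budget - len;
      }

      int main(void)
      {
              unsigned int budget = 1000, len = 1500; /* lmax lowered below pkt size */

              printf("naive:   %u\n", budget - len);               /* wraps to ~4e9 */
              printf("guarded: %u\n", charge_budget(budget, len)); /* 0 */
              return 0;
      }
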
    • P
      pkt_sched: sch_qfq: serve activated aggregates immediately if the scheduler is empty · 2f3b89a1
      Committed by Paolo Valente
      If no aggregate is in service, then the function qfq_dequeue() does
      not dequeue any packet. For this reason, to guarantee QFQ+ to be work
      conserving, a just-activated aggregate must be set as in service
      immediately if it happens to be the only active aggregate.
      This is done by the function qfq_enqueue().
      
      Unfortunately, the function qfq_add_to_agg(), used to add a class to
      an aggregate, does not perform this important additional operation.
      In particular, if 1) qfq_add_to_agg() is invoked to complete the move
      of a class from a source aggregate (which becomes inactive as a result
      of the move) to a destination aggregate (which instead becomes active),
      and 2) the destination aggregate becomes the only active aggregate,
      then that aggregate is nevertheless not set as in service. QFQ+ then
      remains in a non-work-conserving state until a new invocation of
      qfq_enqueue() recovers the situation.
      
      This fix solves the problem by moving the logic for setting an
      aggregate as in service directly into the function qfq_activate_agg().
      Hence, from whatever point qfq_activate_agg() is invoked, QFQ+
      remains work conserving.  Since the more complex logic of this new
      version of qfq_activate_agg() is not needed in qfq_dequeue() to
      reschedule an aggregate that has exhausted its budget, that aggregate
      is now rescheduled by invoking the needed functions directly.
      Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
      Reviewed-by: Fabio Checconi <fchecconi@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      2f3b89a1
    • P
      pkt_sched: sch_qfq: fix the update of eligible-group sets · 624b85fb
      Committed by Paolo Valente
      Between two invocations of make_eligible, the system virtual time may
      happen to grow enough that, in its binary representation, a bit with
      higher order than 31 flips. This happens especially with
      TSO/GSO. Before this fix, the mask used in make_eligible was computed
      as (1UL<<index_of_last_flipped_bit)-1, whose value is well defined on
      a 64-bit architecture, because index_of_last_flipped_bit <= 63, but is
      in general undefined on a 32-bit architecture if
      index_of_last_flipped_bit > 31.
      The fix just replaces 1UL with 1ULL.
      Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
      Reviewed-by: Fabio Checconi <fchecconi@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      624b85fb
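
      The undefined behaviour is easy to see on a 32-bit build, where unsigned
      long is 32 bits wide; a minimal sketch of the mask computation (the
      variable name follows the commit message, not the actual sch_qfq.c code):

      #include <stdint.h>

      static uint64_t eligible_mask(unsigned int index_of_last_flipped_bit)
      {
              /* (1UL << i) - 1 is undefined for i > 31 when unsigned long is
               * 32 bits; 1ULL keeps the shift within a 64-bit type, which is
               * defined for i <= 63. */
              return (1ULL << index_of_last_flipped_bit) - 1;
      }
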
    • P
      pkt_sched: sch_qfq: properly cap timestamps in charge_actual_service · 9b99b7e9
      Committed by Paolo Valente
      QFQ+ schedules the active aggregates in a group using a bucket list
      (one list per group). The bucket in which each aggregate is inserted
      depends on the aggregate's timestamps, and the number
      of buckets in a group is enough to accommodate the possible (range of)
      values of the timestamps of all the aggregates in the group. For this
      property to hold, timestamps must however be computed correctly.  One
      necessary condition for computing timestamps correctly is that the
      number of bits dequeued for each aggregate, while the aggregate is in
      service, does not exceed the maximum budget budgetmax assigned to the
      aggregate.
      
      For each aggregate, budgetmax is proportional to the number of classes
      in the aggregate. If the number of classes of the aggregate is
      decreased through qfq_change_class(), then budgetmax is decreased
      automatically as well.  Problems may occur if the aggregate is in
      service when budgetmax is decreased, because the current remaining
      budget of the aggregate and/or the service already received by the
      aggregate may happen to be larger than the new value of budgetmax.  In
      this case, when the aggregate is eventually deselected and its
      timestamps are updated, the aggregate may happen to have received an
      amount of service larger than budgetmax.  This may cause the aggregate
      to be assigned a higher virtual finish time than the maximum
      acceptable value for the last bucket in the bucket list of the group.
      
      This fix introduces a cap that addresses this issue.
      Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
      Reviewed-by: Fabio Checconi <fchecconi@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      9b99b7e9
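
      A rough sketch of the cap (the variable names are illustrative; the real
      change is in charge_actual_service() in net/sched/sch_qfq.c):

      /* Charge the aggregate for the service it received while in service,
       * but never for more than its maximum budget, so that its new virtual
       * finish time stays within the range covered by the group's buckets. */
      static unsigned int capped_service(unsigned int initial_budget,
                                         unsigned int remaining_budget,
                                         unsigned int budgetmax)
      {
              unsigned int service = initial_budget - remaining_budget;

              return service > budgetmax ? budgetmax : service;
      }
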
    • P
      net/irda: Raise dtr in non-blocking open · f74861ca
      Committed by Peter Hurley
      DTR/RTS need to be raised, regardless of the open() mode, but not
      if the port has already been shut down.
      Signed-off-by: Peter Hurley <peter@hurleysoftware.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f74861ca
    • P
      net/irda: Use barrier to set task state · 0b176ce3
      Committed by Peter Hurley
      Without a memory and compiler barrier, the task state change
      can migrate relative to the condition testing in a blocking loop.
      However, the task state change must be visible across all cpus
      prior to testing those conditions. Failing to do this can result
      in the familiar 'lost wakeup' and this task will hang until killed.
      Signed-off-by: Peter Hurley <peter@hurleysoftware.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      0b176ce3
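
      The idiom the fix adopts is the standard one: change the task state with
      set_current_state(), which includes the needed barrier, before testing
      the wait condition. A simplified sketch of such a blocking loop (the wait
      queue and condition are illustrative, not the actual ircomm_tty code):

      #include <linux/sched.h>
      #include <linux/wait.h>

      static void block_until_ready(wait_queue_head_t *wq, int *ready)
      {
              DECLARE_WAITQUEUE(wait, current);

              add_wait_queue(wq, &wait);
              for (;;) {
                      /* set_current_state() contains the barrier, so the state
                       * change is visible to other CPUs before we test *ready. */
                      set_current_state(TASK_INTERRUPTIBLE);
                      if (*ready)
                              break;
                      schedule();
              }
              __set_current_state(TASK_RUNNING);
              remove_wait_queue(wq, &wait);
      }
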
    • P
      net/irda: Hold port lock while bumping blocked_open · 2f7c069b
      Committed by Peter Hurley
      Although tty_lock() already protects concurrent updates to
      blocked_open, that fails to meet the separation of concerns between
      tty_port and tty.
      Signed-off-by: Peter Hurley <peter@hurleysoftware.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      2f7c069b
    • P
      net/irda: Fix port open counts · a4ed2e73
      Committed by Peter Hurley
      Saving the port count bump is unsafe. If the tty is hung up while
      this open was blocking, the port count is zeroed.
      
      Explicitly check if the tty was hung up while blocking, and correct
      the port count if not.
      Signed-off-by: Peter Hurley <peter@hurleysoftware.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a4ed2e73
  5. 05 Mar, 2013 (6 commits)
  6. 04 Mar, 2013 (2 commits)
  7. 03 Mar, 2013 (4 commits)
  8. 02 Mar, 2013 (4 commits)
  9. 01 Mar, 2013 (2 commits)
    • M
      mac80211: fix oops on mesh PS broadcast forwarding · 7cbf9d01
      Committed by Marco Porsch
      Introduced with de74a1d9
      "mac80211: fix WPA with VLAN on AP side with ps-sta", which
      apparently overwrites the sdata pointer with invalid data in
      the case of mesh.
      Fix this by checking for IFTYPE_AP_VLAN.
      Signed-off-by: Marco Porsch <marco@cozybit.com>
      Signed-off-by: Johannes Berg <johannes.berg@intel.com>
      7cbf9d01
    • J
      nl80211: increase wiphy dump size dynamically · 645e77de
      Committed by Johannes Berg
      Given a device with many channels and capabilities, the wiphy
      information can still overflow even though its size in
      3.9 was reduced to 3.8 levels. For new userspace and
      kernel 3.10 we're going to implement a new "split dump"
      protocol that can use multiple messages per wiphy.
      
      For now though, add a workaround to be able to send more
      information to userspace. Since generic netlink doesn't
      have a way to set the minimum dump size globally, and we
      wouldn't really want to set it globally anyway, increase
      the size only when needed, as described in the comments.
      As userspace might not be prepared for large buffers, we
      can only use 4k.
      
      Also increase the size for the get_wiphy command.
      Signed-off-by: Johannes Berg <johannes.berg@intel.com>
      645e77de