1. 21 Mar 2013 (1 commit)
  2. 14 Mar 2013 (2 commits)
    • skb: Propagate pfmemalloc on skb from head page only · cca7af38
      Committed by Pavel Emelyanov
      Hi.
      
      I'm trying to send big chunks of memory from application address space via
      TCP socket using vmsplice + splice like this
      
         mem = mmap(128Mb);
         vmsplice(pipe[1], mem); /* splice memory into pipe */
         splice(pipe[0], tcp_socket); /* send it into network */
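
      For context, here is a fuller, self-contained sketch of the same pattern
      (not part of the original report; the connected tcp_socket fd, the
      chunking loop, and the error handling are assumptions added here):

         #define _GNU_SOURCE
         #include <fcntl.h>
         #include <unistd.h>
         #include <sys/mman.h>
         #include <sys/uio.h>

         /* Push 'len' bytes of anonymous memory into a connected TCP socket
          * via vmsplice()+splice(). A pipe holds far less than 128Mb, so the
          * loop moves pipe-sized chunks at a time.
          */
         static int splice_region(int tcp_socket, size_t len)
         {
                 int pipefd[2];
                 size_t off = 0;
                 void *mem = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

                 if (mem == MAP_FAILED || pipe(pipefd) < 0)
                         return -1;

                 while (off < len) {
                         size_t chunk = len - off;
                         struct iovec iov;
                         ssize_t n;

                         if (chunk > 65536)   /* stay under the default pipe size */
                                 chunk = 65536;
                         iov.iov_base = (char *)mem + off;
                         iov.iov_len  = chunk;

                         n = vmsplice(pipefd[1], &iov, 1, 0); /* memory -> pipe */
                         if (n <= 0)
                                 return -1;
                         while (n > 0) {                      /* pipe -> network */
                                 ssize_t out = splice(pipefd[0], NULL, tcp_socket,
                                                      NULL, n, SPLICE_F_MORE);
                                 if (out <= 0)
                                         return -1;
                                 n   -= out;
                                 off += out;
                         }
                 }
                 return 0;
         }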
      
      When I'm lucky and a huge page splices into the pipe and then into the socket
      _and_ client and server ends of the TCP connection are on the same host,
      communicating via lo, the whole connection gets stuck! The sending queue
      becomes full and app stops writing/splicing more into it, but the receiving
      queue remains empty, and that's why.
      
      __skb_fill_page_desc() observes a tail page of a huge page and erroneously
      propagates its page->pfmemalloc value onto the socket (the pfmemalloc on
      tail pages contains garbage). Then this skb->pfmemalloc leaks through lo
      and, due to the
      
          tcp_v4_rcv
          sk_filter
              if (skb->pfmemalloc && !sock_flag(sk, SOCK_MEMALLOC)) /* true */
                  return -ENOMEM
              goto release_and_discard;
      
      no packets reach the socket. Even TCP re-transmits are dropped by this, as skb
      cloning clones the pfmemalloc flag as well.
      
      That said, here's the proper page->pfmemalloc propagation onto the socket:
      we must check only the huge page's head page; other pages' pfmemalloc and
      mapping values do not contain what is expected in this place. However, I'm
      not sure whether this fix is _complete_, since pfmemalloc propagation via
      lo also doesn't look great.
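
      A hedged sketch of the check described above, roughly as it would read
      inside __skb_fill_page_desc() (an approximation, not the verbatim diff):

         /* Propagate pfmemalloc only from the head page; tail pages of a
          * compound (huge) page carry garbage in ->pfmemalloc and ->mapping.
          */
         page = compound_head(page);
         if (page->pfmemalloc && !page->mapping)
                 skb->pfmemalloc = true;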
      
      Both the bit propagation from page to skb and this check in sk_filter were
      introduced by c48a11c7 (netvm: propagate page->pfmemalloc to skb) in v3.5,
      so Mel and stable@ are in Cc.
      Signed-off-by: Pavel Emelyanov <xemul@parallels.com>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Acked-by: Mel Gorman <mgorman@suse.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tcp: fix skb_availroom() · 16fad69c
      Committed by Eric Dumazet
      Chrome OS team reported a crash on a Pixel ChromeBook in TCP stack :
      
      https://code.google.com/p/chromium/issues/detail?id=182056
      
      commit a21d4572 (tcp: avoid order-1 allocations on wifi and tx
      path) made a poor choice by adding an 'avail_size' field to the skb,
      while what we really needed was a 'reserved_tailroom' one.
      
      It would have avoided commit 22b4a4f2 (tcp: fix retransmit of
      partially acked frames) and this commit.
      
      The crash occurs because skb_split() is not aware of the 'avail_size'
      management (and should not need to be aware of it).
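
      A hedged sketch of what skb_availroom() looks like once
      'reserved_tailroom' replaces 'avail_size' (an approximation of the fix):

         static inline int skb_availroom(const struct sk_buff *skb)
         {
                 if (skb_is_nonlinear(skb))
                         return 0;

                 /* tailroom TCP may still fill, minus the reserved part */
                 return skb->end - skb->tail - skb->reserved_tailroom;
         }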
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Reported-by: Mukesh Agrawal <quiche@chromium.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  3. 10 Mar 2013 (2 commits)
  4. 16 Feb 2013 (2 commits)
  5. 14 Feb 2013 (1 commit)
    • net: Fix possible wrong checksum generation. · c9af6db4
      Committed by Pravin B Shelar
      Patch cef401de (net: fix possible wrong checksum
      generation) fixed the wrong checksum calculation, but it broke TSO by
      defining a new GSO type without a corresponding netdev feature for that
      type; net_gso_ok() would not allow hardware checksum/segmentation
      offload of such packets without the feature.
      
      The following patch fixes both TSO and the wrong checksum. It uses the
      same logic that Eric Dumazet used, but introduces a new flag,
      SKBTX_SHARED_FRAG, set if at least one frag can be modified by the user,
      and keeps the flag in the skb shared info tx_flags rather than in
      gso_type.

      tx_flags is a better fit than gso_type, since we can have an skb with a
      shared frag that is not a GSO packet. This does not tie SHARED_FRAG to
      GSO, so there is no need to define a netdev feature for it.
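
      A hedged sketch of how such a tx_flags bit can be tested; the helper
      name follows the commit's intent and is illustrative:

         static inline bool skb_has_shared_frag(const struct sk_buff *skb)
         {
                 return skb_shinfo(skb)->tx_flags & SKBTX_SHARED_FRAG;
         }

         /* e.g. before checksumming data that userspace may still rewrite: */
         if (skb_has_shared_frag(skb) && __skb_linearize(skb))
                 goto drop;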
      Signed-off-by: Pravin B Shelar <pshelar@nicira.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  6. 09 Feb 2013 (1 commit)
  7. 28 Jan 2013 (1 commit)
    • net: fix possible wrong checksum generation · cef401de
      Committed by Eric Dumazet
      Pravin Shelar mentioned that GSO could potentially generate
      wrong TX checksum if skb has fragments that are overwritten
      by the user between the checksum computation and transmit.
      
      He suggested linearizing skbs, but this extra copy can be avoided for
      normal tcp skbs cooked by tcp_sendmsg().
      
      This patch introduces a new SKB_GSO_SHARED_FRAG flag, set
      in skb_shinfo(skb)->gso_type if at least one frag can be
      modified by the user.
      
      Typical sources of such possible overwrites are {vm}splice(),
      sendfile(), and macvtap/tun/virtio_net drivers.
      
      Tested:
      
      $ netperf -H 7.7.8.84
      MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
      7.7.8.84 () port 0 AF_INET
      Recv   Send    Send
      Socket Socket  Message  Elapsed
      Size   Size    Size     Time     Throughput
      bytes  bytes   bytes    secs.    10^6bits/sec
      
       87380  16384  16384    10.00    3959.52
      
      $ netperf -H 7.7.8.84 -t TCP_SENDFILE
      TCP SENDFILE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 7.7.8.84 ()
      port 0 AF_INET
      Recv   Send    Send
      Socket Socket  Message  Elapsed
      Size   Size    Size     Time     Throughput
      bytes  bytes   bytes    secs.    10^6bits/sec
      
       87380  16384  16384    10.00    3216.80
      
      SENDFILE performance is impacted by the extra allocation and copy, and
      by the fact that it uses order-0 pages, while TCP_STREAM uses bigger
      pages.
      Reported-by: Pravin Shelar <pshelar@nicira.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  8. 09 Jan 2013 (1 commit)
    • net: introduce skb_transport_header_was_set() · fda55eca
      Committed by Eric Dumazet
      We have the skb_mac_header_was_set() helper to tell whether mac_header
      was set on an skb. We would like the same for transport_header.
      
      __netif_receive_skb() doesn't reset the transport header if already
      set by GRO layer.
      
      Note that network stacks usually reset the transport header anyway,
      after pulling the network header, so this change only allows
      a followup patch to have more precise qdisc pkt_len computation
      for GSO packets at ingress side.
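
      A hedged sketch of the helper and its use in __netif_receive_skb(),
      assuming the offset-based skb layout where an unset header is ~0U:

         static inline int skb_transport_header_was_set(const struct sk_buff *skb)
         {
                 return skb->transport_header != (typeof(skb->transport_header))~0U;
         }

         /* in __netif_receive_skb(): keep a transport header set by GRO */
         if (!skb_transport_header_was_set(skb))
                 skb_reset_transport_header(skb);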
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Jamal Hadi Salim <jhs@mojatatu.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  9. 09 Dec 2012 (1 commit)
  10. 03 Nov 2012 (2 commits)
  11. 01 Nov 2012 (1 commit)
    • net: compute skb->rxhash if nic hash may be 3-tuple · ecd5cf5d
      Committed by Willem de Bruijn
      Network device drivers can communicate a Toeplitz hash in skb->rxhash,
      but devices differ in their hashing capabilities. All compute a 5-tuple
      hash for TCP over IPv4, but for other connection-oriented protocols,
      they may compute only a 3-tuple. This breaks RPS load balancing, e.g.,
      for TCP over IPv6 flows. Additionally, for GRE and other tunnels,
      the kernel computes a 5-tuple hash over the inner packet if possible,
      but devices do not.
      
      This patch recomputes the rxhash in software in all cases where it
      cannot be certain that a 5-tuple was computed. Device drivers can avoid
      recomputation by setting the skb->l4_rxhash flag.
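
      A hedged sketch of the resulting check (approximating how
      skb_get_rxhash() consults the new l4_rxhash flag):

         static inline __u32 skb_get_rxhash(struct sk_buff *skb)
         {
                 /* recompute in software unless the device already provided
                  * a full 5-tuple (l4) hash
                  */
                 if (!skb->l4_rxhash)
                         __skb_get_rxhash(skb);

                 return skb->rxhash;
         }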
      
      Recomputing adds cycles to each packet when RPS is enabled or the
      packet arrives over a tunnel. A comparison of 200x TCP_STREAM between
      two servers running unmodified netnext with rxhash computation
      in hardware vs software (using ethtool -K eth0 rxhash [on|off]) shows
      how much time is spent in __skb_get_rxhash in this worst case:
      
           0.03%          swapper  [kernel.kallsyms]     [k] __skb_get_rxhash
           0.03%          swapper  [kernel.kallsyms]     [k] __skb_get_rxhash
           0.05%          swapper  [kernel.kallsyms]     [k] __skb_get_rxhash
      
      With 200x TCP_RR it increases to
      
           0.10%          netperf  [kernel.kallsyms]     [k] __skb_get_rxhash
           0.10%          netperf  [kernel.kallsyms]     [k] __skb_get_rxhash
           0.10%          netperf  [kernel.kallsyms]     [k] __skb_get_rxhash
      
      I considered having the patch explicitly skip recomputation when it knows
      that it will not improve the hash (TCP over IPv4), but that conditional
      complicates the code without saving many cycles in practice, because it
      has to take place after the flow dissector.
      Signed-off-by: Willem de Bruijn <willemb@google.com>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  12. 07 Oct 2012 (1 commit)
    • net: remove skb recycling · acb600de
      Committed by Eric Dumazet
      Over time, the skb recycling infrastructure got little interest and
      many bugs. Generic rx path skb allocation now uses page fragments for
      efficient GRO / TCP coalescing, and recycling a tx skb for the rx path
      is not worth the pain.

      The last identified bug is that fat skbs can be recycled and end up
      using high-order pages after a few iterations.
      
      With help from Maxime Bizon, who pointed out that commit
      87151b86 (net: allow pskb_expand_head() to get maximum tailroom)
      introduced this regression for recycled skbs.
      
      Instead of fixing this bug, lets remove skb recycling.
      
      Drivers wanting really hot skbs should use build_skb() anyway, to
      allocate/populate the sk_buff right before netif_receive_skb().
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Maxime Bizon <mbizon@freebox.fr>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  13. 04 Aug 2012 (1 commit)
  14. 01 Aug 2012 (3 commits)
    • netvm: propagate page->pfmemalloc from skb_alloc_page to skb · 0614002b
      Committed by Mel Gorman
      The skb->pfmemalloc flag gets set to true iff the PFMEMALLOC reserves
      were used during the slab allocation of data in __alloc_skb.  If page
      splitting is used, it is possible that pages will be allocated from the
      PFMEMALLOC reserve without propagating this information to the skb.
      This patch propagates page->pfmemalloc from pages allocated for
      fragments to the skb.
      
      It works by reintroducing and expanding the skb_alloc_page() API to take
      an skb.  If the page was allocated from pfmemalloc reserves, it is
      automatically copied.  If the driver allocates the page before the skb, it
      should call skb_propagate_pfmemalloc() after the skb is allocated to
      ensure the flag is copied properly.
      
      Failure to do so is not critical.  The resulting driver may perform slower
      if it is used for swap-over-NBD or swap-over-NFS but it should not result
      in failure.
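
      A hedged sketch of the two driver patterns described above (the helper
      names follow the commit text and davem's rename; exact signatures may
      differ):

         /* pattern 1: allocate the page through the skb-aware helper, which
          * copies page->pfmemalloc into skb->pfmemalloc automatically
          */
         page = __skb_alloc_page(gfp_mask, skb);

         /* pattern 2: the page exists before the skb; copy the flag once the
          * skb has been allocated
          */
         page = alloc_page(gfp_mask);
         skb  = netdev_alloc_skb(dev, len);
         if (page && skb)
                 skb_propagate_pfmemalloc(page, skb);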
      
      [davem@davemloft.net: API rename and consistency]
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Acked-by: David S. Miller <davem@davemloft.net>
      Cc: Neil Brown <neilb@suse.de>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Christie <michaelc@cs.wisc.edu>
      Cc: Eric B Munson <emunson@mgebm.net>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Cc: Sebastian Andrzej Siewior <sebastian@breakpoint.cc>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Christoph Lameter <cl@linux.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • netvm: propagate page->pfmemalloc to skb · c48a11c7
      Committed by Mel Gorman
      The skb->pfmemalloc flag gets set to true iff the PFMEMALLOC reserves
      were used during the slab allocation of data in __alloc_skb.  If the
      packet is fragmented, it is possible that pages will be allocated from the
      PFMEMALLOC reserve without propagating this information to the skb.  This
      patch propagates page->pfmemalloc from pages allocated for fragments to
      the skb.
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Acked-by: David S. Miller <davem@davemloft.net>
      Cc: Neil Brown <neilb@suse.de>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Christie <michaelc@cs.wisc.edu>
      Cc: Eric B Munson <emunson@mgebm.net>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Cc: Sebastian Andrzej Siewior <sebastian@breakpoint.cc>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Christoph Lameter <cl@linux.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • netvm: allow skb allocation to use PFMEMALLOC reserves · c93bdd0e
      Committed by Mel Gorman
      Change the skb allocation API to indicate RX usage and use this to fall
      back to the PFMEMALLOC reserve when needed.  SKBs allocated from the
      reserve are tagged in skb->pfmemalloc.  If an SKB is allocated from the
      reserve and the socket is later found to be unrelated to page reclaim, the
      packet is dropped so that the memory remains available for page reclaim.
      Network protocols are expected to recover from this packet loss.
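
      A hedged sketch of the drop path this describes, as it would sit in
      sk_filter() (it mirrors the pseudo-trace quoted under cca7af38 above):

         /* an skb fed from the PFMEMALLOC reserve may only reach sockets
          * that assist page reclaim (SOCK_MEMALLOC); everyone else drops it
          */
         if (skb_pfmemalloc(skb) && !sock_flag(sk, SOCK_MEMALLOC))
                 return -ENOMEM;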
      
      [a.p.zijlstra@chello.nl: Ideas taken from various patches]
      [davem@davemloft.net: Use static branches, coding style corrections]
      [sebastian@breakpoint.cc: Avoid unnecessary cast, fix !CONFIG_NET build]
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Acked-by: David S. Miller <davem@davemloft.net>
      Cc: Neil Brown <neilb@suse.de>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Christie <michaelc@cs.wisc.edu>
      Cc: Eric B Munson <emunson@mgebm.net>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Cc: Sebastian Andrzej Siewior <sebastian@breakpoint.cc>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Christoph Lameter <cl@linux.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  15. 23 Jul 2012 (1 commit)
  16. 16 Jun 2012 (1 commit)
  17. 30 May 2012 (1 commit)
    • skb: avoid unnecessary reallocations in __skb_cow · 617c8c11
      Committed by Felix Fietkau
      At the beginning of __skb_cow, headroom gets set to a minimum of
      NET_SKB_PAD. This causes unnecessary reallocations if the buffer was not
      cloned and the headroom is just below NET_SKB_PAD, but still more than the
      amount requested by the caller.
      This was showing up frequently in my tests on VLAN tx, where
      vlan_insert_tag calls skb_cow_head(skb, VLAN_HLEN).
      
      Locally generated packets should have enough headroom, and for forward
      paths, we already have NET_SKB_PAD bytes of headroom, so we don't need to
      add any extra space here.
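
      A hedged sketch of __skb_cow() without the forced NET_SKB_PAD minimum
      (an approximation of the post-patch helper):

         static inline int __skb_cow(struct sk_buff *skb, unsigned int headroom,
                                     int cloned)
         {
                 int delta = 0;

                 if (headroom > skb_headroom(skb))
                         delta = headroom - skb_headroom(skb);

                 if (delta || cloned)
                         return pskb_expand_head(skb, ALIGN(delta, NET_SKB_PAD),
                                                 0, GFP_ATOMIC);
                 return 0;
         }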
      Signed-off-by: Felix Fietkau <nbd@openwrt.org>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  18. 20 May 2012 (1 commit)
    • net: introduce skb_try_coalesce() · bad43ca8
      Committed by Eric Dumazet
      Move the protocol-independent part of tcp_try_coalesce() to
      skb_try_coalesce().

      skb_try_coalesce() can be used in IPv4 defrag and IPv6 reassembly,
      to build optimized skbs (fewer sk_buffs, and possibly fewer 'headers').

      skb_try_coalesce() is zero copy, unless the copy can fit in the
      destination header (it's a rare case).
      
      kfree_skb_partial() is also moved to net/core/skbuff.c and exported,
      because IPv6 will need it in a followup patch (ipv6: use skb coalescing
      in reassembly).
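
      A hedged usage sketch for a reassembly-style caller (variable names are
      illustrative):

         bool fragstolen;
         int delta_truesize;

         if (skb_try_coalesce(to, from, &fragstolen, &delta_truesize)) {
                 /* 'from' payload now lives in 'to'; the caller charges
                  * delta_truesize to its memory accounting, then releases
                  * whatever is left of 'from'
                  */
                 kfree_skb_partial(from, fragstolen);
         }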
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Alexander Duyck <alexander.h.duyck@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  19. 19 May 2012 (1 commit)
    • net: introduce netdev_alloc_frag() · 6f532612
      Committed by Eric Dumazet
      Fix two issues introduced in commit a1c7fff7
      ( net: netdev_alloc_skb() use build_skb() )
      
      - Must be IRQ safe (non NAPI drivers can use it)
      - Must not leak the frag if build_skb() fails to allocate sk_buff
      
      This patch introduces netdev_alloc_frag() for drivers willing to
      use build_skb() instead of __netdev_alloc_skb() variants.
      
      Factorize the code so that __dev_alloc_skb() is a wrapper around
      __netdev_alloc_skb(), and dev_alloc_skb() a wrapper around
      netdev_alloc_skb().

      Use the __GFP_COLD flag.
      
      Almost all network drivers now benefit from skb->head_frag
      infrastructure.
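
      A hedged sketch of the driver-side pattern, including the frag release
      on build_skb() failure mentioned above (the put_page() path is an
      assumption about how the fragment is returned):

         void *data = netdev_alloc_frag(fragsz);
         struct sk_buff *skb;

         if (!data)
                 return NULL;

         skb = build_skb(data, fragsz);
         if (unlikely(!skb)) {
                 /* don't leak the fragment if no sk_buff could be built */
                 put_page(virt_to_head_page(data));
                 return NULL;
         }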
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  20. 07 May 2012 (1 commit)
  21. 04 May 2012 (1 commit)
  22. 01 May 2012 (4 commits)
    • net: fix two typos in skbuff.h · d9619496
      Committed by Eric Dumazet
      fix kernel doc typos in function names
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: skb_peek()/skb_peek_tail() cleanups · 18d07000
      Committed by Eric Dumazet
      remove useless casts and rename variables for less confusion.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: make GRO aware of skb->head_frag · d7e8883c
      Committed by Eric Dumazet
      GRO can check whether the skb to be merged has its skb->head mapped to
      a page fragment, instead of a kmalloc() area.

      We 'upgrade' skb->head into a fragment in itself.

      This avoids the frag_list fallback and permits building a true GRO skb
      (one sk_buff and up to 16 fragments), using less memory.

      This reduces the number of cache misses when the user makes its copy,
      since a single sk_buff is fetched.
      
      This is a followup of patch "net: allow skb->head to be a page fragment"
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Cc: Maciej Żenczykowski <maze@google.com>
      Cc: Neal Cardwell <ncardwell@google.com>
      Cc: Tom Herbert <therbert@google.com>
      Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      Cc: Ben Hutchings <bhutchings@solarflare.com>
      Cc: Matt Carlson <mcarlson@broadcom.com>
      Cc: Michael Chan <mchan@broadcom.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: allow skb->head to be a page fragment · d3836f21
      Committed by Eric Dumazet
      skb->head is currently allocated from kmalloc(). This is convenient but
      has the drawback that the data cannot be converted to a page fragment
      if needed.

      We have three spots where it hurts:
      
      1) GRO aggregation
      
       When a linear skb must be appended to another skb, GRO uses the
      frag_list fallback, which is very inefficient since we keep all the
      struct sk_buff around. So drivers enabling GRO but delivering linear
      skbs to the network stack aren't getting full GRO power.
      
      2) splice(socket -> pipe).
      
       We must copy the linear part to a page fragment.
       This kind of defeats the purpose of splice() (the zero-copy claim).
      
      3) TCP coalescing.
      
       Recently introduced, this permits grouping several contiguous segments
      into a single skb. This shortens queue lengths, saves kernel memory,
      and greatly reduces the probability of TCP collapses. This coalescing
      doesn't work on linear skbs (or we would need to copy data, and that
      would be too slow).
      
      Given all these issues, the following patch introduces the possibility
      of having skb->head be a fragment in itself. We use a new skb flag,
      skb->head_frag to carry this information.
      
      build_skb() is changed to accept a frag_size argument. Drivers willing
      to provide a page fragment instead of kmalloc() data will set a non zero
      value, set to the fragment size.
      
      Then, on situations we need to convert the skb head to a frag in itself,
      we can check if skb->head_frag is set and avoid the copies or various
      fallbacks we have.
      
      This means drivers currently using frags could be updated to avoid the
      current skb->head allocation and reduce their memory footprint (aka skb
      truesize); that's 512 or 1024 bytes saved per skb. This also makes
      bpf/netfilter faster, since the 'first frag' will be part of the skb
      linear part, with no need to copy data.
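
      A hedged sketch of the two ways build_skb() is called after this change
      (frag_size == 0 keeps the kmalloc() semantics; a non-zero value marks
      the head as a page fragment):

         /* head obtained from kmalloc(): head_frag stays clear */
         skb = build_skb(kmalloc_data, 0);

         /* head carved out of a page fragment: pass the fragment size,
          * which also sets skb->head_frag
          */
         skb = build_skb(frag_data, fragsz);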
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Cc: Maciej Żenczykowski <maze@google.com>
      Cc: Neal Cardwell <ncardwell@google.com>
      Cc: Tom Herbert <therbert@google.com>
      Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      Cc: Ben Hutchings <bhutchings@solarflare.com>
      Cc: Matt Carlson <mcarlson@broadcom.com>
      Cc: Michael Chan <mchan@broadcom.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  23. 24 Apr 2012 (1 commit)
  24. 20 Apr 2012 (1 commit)
  25. 14 Apr 2012 (1 commit)
  26. 11 Apr 2012 (1 commit)
    • tcp: avoid order-1 allocations on wifi and tx path · a21d4572
      Committed by Eric Dumazet
      Marc Merlin reported many order-1 allocation failures in the TX path on
      his wireless setup; they don't make any sense with an MTU=1500 network
      and non-SG-capable hardware.

      After investigation, it turns out TCP uses sk_stream_alloc_skb() and,
      by convention, skb_tailroom(skb) to know how many bytes of data payload
      can be put in an skb (for non-SG-capable devices).

      Note: these skbs use kmalloc-4096 (MTU=1500 + MAX_HEADER +
      sizeof(struct skb_shared_info) being above 2048).

      Later, the mac80211 layer needs to add some bytes at the tail of the
      skb (IEEE80211_ENCRYPT_TAILROOM = 18 bytes) and, since no more tailroom
      is available, has to call pskb_expand_head() and request order-1
      allocations.
      
      This patch changes sk_stream_alloc_skb() so that only
      sk->sk_prot->max_header bytes of headroom are reserved, and uses a new
      skb field, avail_size, to hold the data payload limit.

      This way, order-0 allocations done by the TCP stack can leave more than
      2 KB of tailroom, and no more allocation is performed in the mac80211
      layer (or any layer needing some tailroom).

      avail_size is unioned with mark/dropcount, since mark will be set later
      in the IP stack for output packets. Therefore, skb size is unchanged.
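
      A hedged sketch of the 'avail_size' accounting this commit adds
      (compare with the 'reserved_tailroom' version sketched under 16fad69c
      above):

         static inline int skb_availroom(const struct sk_buff *skb)
         {
                 /* payload budget for non-SG devices: the recorded limit
                  * minus what the skb already holds
                  */
                 return skb_is_nonlinear(skb) ? 0 : skb->avail_size - skb->len;
         }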
      Reported-by: Marc MERLIN <marc@merlins.org>
      Tested-by: Marc MERLIN <marc@merlins.org>
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  27. 29 Mar 2012 (1 commit)
  28. 26 Mar 2012 (1 commit)
  29. 20 Mar 2012 (1 commit)
  30. 10 Mar 2012 (1 commit)
  31. 05 Mar 2012 (1 commit)
    • BUG: headers with BUG/BUG_ON etc. need linux/bug.h · 187f1882
      Committed by Paul Gortmaker
      If a header file is making use of BUG, BUG_ON, BUILD_BUG_ON, or any
      other BUG variant in a static inline (i.e. not in a #define) then
      that header really should be including <linux/bug.h> and not just
      expecting it to be implicitly present.
      
      We can make this change risk-free, since if the files using these
      headers didn't have exposure to linux/bug.h already, they would have
      been causing compile failures/warnings.
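
      A minimal illustration of the rule, using a hypothetical header:

         /* include/linux/foo.h -- uses BUG_ON() in a static inline, so it
          * must include linux/bug.h itself rather than rely on an indirect
          * include
          */
         #include <linux/bug.h>

         static inline void foo_check_refcount(int refcount)
         {
                 BUG_ON(refcount < 0);
         }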
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>