1. 29 December 2010 (20 commits)
  2. 24 December 2010 (1 commit)
    • Revert "ipv4: Allow configuring subnets as local addresses" · e0584649
      David S. Miller authored
      This reverts commit 4465b469.
      
      Conflicts:
      
      	net/ipv4/fib_frontend.c
      
      As reported by Ben Greear, this causes regressions:
      
      > Change 4465b469 caused rules
      > to stop matching the input device properly because the
      > FLOWI_FLAG_MATCH_ANY_IIF is always defined in ip_dev_find().
      >
      > This breaks rules such as:
      >
      > ip rule add pref 512 lookup local
      > ip rule del pref 0 lookup local
      > ip link set eth2 up
      > ip -4 addr add 172.16.0.102/24 broadcast 172.16.0.255 dev eth2
      > ip rule add to 172.16.0.102 iif eth2 lookup local pref 10
      > ip rule add iif eth2 lookup 10001 pref 20
      > ip route add 172.16.0.0/24 dev eth2 table 10001
      > ip route add unreachable 0/0 table 10001
      >
      > If you had a second interface 'eth0' that was on a different
      > subnet, pinging a system on that interface would fail:
      >
      >   [root@ct503-60 ~]# ping 192.168.100.1
      >   connect: Invalid argument
      Reported-by: Ben Greear <greearb@candelatech.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      e0584649
  3. 23 December 2010 (2 commits)
    • taskstats: pad taskstats netlink response for alignment issues on ia64 · 4be2c95d
      Jeff Mahoney authored
      The taskstats structure is internally aligned on 8 byte boundaries but the
      layout of the aggregate reply, with two NLA headers and the pid (each 4
      bytes), actually forces the entire structure to be unaligned.  This causes
      the kernel to issue unaligned access warnings on some architectures like
      ia64.  Unfortunately, some software out there doesn't properly unroll the
      NLA packet and assumes that the start of the taskstats structure will
      always be 20 bytes from the start of the netlink payload.  Aligning the
      start of the taskstats structure breaks this software, which we don't
      want.  So, for now the alignment only happens on architectures that
      require it and those users will have to update to fixed versions of those
      packages.  Space is reserved in the packet only when needed.  This ifdef
      should be removed in several years e.g.  2012 once we can be confident
      that fixed versions are installed on most systems.  We add the padding
      before the aggregate since the aggregate is already a defined type.
      
      Commit 85893120 ("delayacct: align to 8 byte boundary on 64-bit systems")
      previously addressed the alignment issues by padding out the pid field.
      This was supposed to be a compatible change but the circumstances
      described above mean that it wasn't.  This patch backs out that change,
      since it was a hack, and introduces a new NULL attribute type to provide
      the padding.  Padding the response with 4 bytes avoids allocating an
      aligned taskstats structure and copying it back.  Since the structure
      weighs in at 328 bytes, it's too big to do it on the stack.
      Signed-off-by: Jeff Mahoney <jeffm@suse.com>
      Reported-by: Brian Rogers <brian@xyzw.org>
      Cc: Jeff Mahoney <jeffm@suse.com>
      Cc: Guillaume Chazarain <guichaz@gmail.com>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4be2c95d
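      To make the alignment arithmetic concrete: the taskstats payload sits behind the
      genetlink header, the nested AGGR_PID header, the pid attribute, and the stats
      attribute header, and one extra zero-length attribute header is enough to restore
      8-byte alignment. The userspace sketch below only reproduces that offset math
      (header sizes are the fixed netlink ABI values; the padding corresponds to the new
      NULL attribute type the patch adds on architectures that need it):

      #include <stdio.h>

      /* Fixed ABI sizes: struct nlmsghdr, struct genlmsghdr, struct nlattr. */
      #define NLMSG_HDRLEN 16
      #define GENL_HDRLEN   4
      #define NLA_HDRLEN    4

      int main(void)
      {
              /* taskstats offset inside the netlink payload:
               * genlmsghdr + AGGR_PID hdr + PID hdr + 4-byte pid + STATS hdr */
              unsigned int plain  = GENL_HDRLEN + 3 * NLA_HDRLEN + 4;      /* = 20 */
              /* one zero-length padding attribute adds another 4-byte header */
              unsigned int padded = plain + NLA_HDRLEN;                    /* = 24 */

              printf("unpadded: offset %u in message, mod 8 = %u\n",
                     NLMSG_HDRLEN + plain,  (NLMSG_HDRLEN + plain)  % 8);  /* 36, 4 */
              printf("padded:   offset %u in message, mod 8 = %u\n",
                     NLMSG_HDRLEN + padded, (NLMSG_HDRLEN + padded) % 8);  /* 40, 0 */
              return 0;
      }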
    • include/linux/unaligned: pack the whole struct rather than just the field · 4e06fd14
      Will Newton authored
      The current packed struct implementation of unaligned access adds the
      packed attribute only to the field within the unaligned struct rather than
      to the struct as a whole.  This is not sufficient to enforce proper
      behaviour on architectures with a default struct alignment of more than
      one byte.
      
      For example, the current implementation of __get_unaligned_cpu16 when
      compiled for arm with gcc -O1 -mstructure-size-boundary=32 assumes the
      struct is on a 4 byte boundary so performs the load of the 16bit packed
      field as if it were on a 4 byte boundary:
      
      __get_unaligned_cpu16:
              ldrh    r0, [r0, #0]
              bx      lr
      
      Moving the packed attribute to the struct rather than the field causes the
      proper unaligned access code to be generated:
      
      __get_unaligned_cpu16:
      	ldrb	r3, [r0, #0]	@ zero_extendqisi2
      	ldrb	r0, [r0, #1]	@ zero_extendqisi2
      	orr	r0, r3, r0, asl #8
      	bx	lr
      Signed-off-by: Will Newton <will.newton@gmail.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4e06fd14
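      The difference is easy to reproduce in a small userspace test. The struct names
      below are illustrative (the kernel's versions live under include/linux/unaligned/),
      but the attribute placement matches the change:

      #include <stdint.h>
      #include <stdio.h>

      /* Old form: only the member is packed.  On ABIs with a larger default
       * struct alignment the compiler may still assume the struct itself is
       * aligned and emit an aligned load. */
      struct una_u16_old { uint16_t x __attribute__((packed)); };

      /* New form: the whole struct is packed, so its alignment is 1 and
       * strict-alignment targets must assemble the value byte by byte. */
      struct una_u16_new { uint16_t x; } __attribute__((packed));

      static uint16_t get_unaligned_16(const void *p)
      {
              const struct una_u16_new *ptr = p;
              return ptr->x;
      }

      int main(void)
      {
              unsigned char buf[3] = { 0x00, 0x34, 0x12 };
              /* buf + 1 is an odd address, i.e. misaligned for a 16-bit load */
              printf("0x%04x\n", get_unaligned_16(buf + 1));
              return 0;
      }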
  4. 22 December 2010 (2 commits)
  5. 21 December 2010 (3 commits)
  6. 20 December 2010 (1 commit)
  7. 18 December 2010 (4 commits)
  8. 17 December 2010 (5 commits)
    • block: max hardware sectors limit wrapper · 72d4cd9f
      Mike Snitzer authored
      Implement blk_limits_max_hw_sectors() and make
      blk_queue_max_hw_sectors() a wrapper around it.
      
      DM needs this to avoid setting queue_limits' max_hw_sectors and
      max_sectors directly.  dm_set_device_limits() now leverages
      blk_limits_max_hw_sectors() logic to establish the appropriate
      max_hw_sectors minimum (PAGE_SIZE).  Fixes issue where DM was
      incorrectly setting max_sectors rather than max_hw_sectors (which
      caused dm_merge_bvec()'s max_hw_sectors check to be ineffective).
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Cc: stable@kernel.org
      Acked-by: Martin K. Petersen <martin.petersen@oracle.com>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
      72d4cd9f
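      The shape of the change, in a simplified userspace model (struct and constant
      names here are stand-ins, not the real block-layer definitions): the limit logic
      moves into a helper that takes bare limits, and the queue-level setter just
      forwards to it, so a stacking driver can apply the same PAGE_SIZE floor without
      touching max_sectors directly.

      #include <stdio.h>

      #define PAGE_SECTORS     (4096 >> 9)   /* assumes 4 KiB pages */
      #define DEF_MAX_SECTORS  1024          /* illustrative soft cap */

      struct limits_model { unsigned int max_hw_sectors, max_sectors; };
      struct queue_model  { struct limits_model limits; };

      /* Core helper on bare limits, usable while stacking without a queue. */
      static void limits_max_hw_sectors(struct limits_model *lim, unsigned int max_hw)
      {
              if (max_hw < PAGE_SECTORS)     /* enforce the PAGE_SIZE floor */
                      max_hw = PAGE_SECTORS;
              lim->max_hw_sectors = max_hw;
              lim->max_sectors = max_hw < DEF_MAX_SECTORS ? max_hw : DEF_MAX_SECTORS;
      }

      /* The queue-level setter becomes a thin wrapper around the helper. */
      static void queue_max_hw_sectors(struct queue_model *q, unsigned int max_hw)
      {
              limits_max_hw_sectors(&q->limits, max_hw);
      }

      int main(void)
      {
              struct queue_model q = { { 0, 0 } };
              queue_max_hw_sectors(&q, 4);   /* below the floor: clamped up */
              printf("max_hw=%u max=%u\n",
                     q.limits.max_hw_sectors, q.limits.max_sectors);
              return 0;
      }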
    • block: Deprecate QUEUE_FLAG_CLUSTER and use queue_limits instead · e692cb66
      Martin K. Petersen authored
      When stacking devices, a request_queue is not always available. This
      forced us to have a no_cluster flag in the queue_limits that could be
      used as a carrier until the request_queue had been set up for a
      metadevice.
      
      There were several problems with that approach. First of all, it was up
      to the stacking device to remember to set the queue flag after stacking
      had completed. Also, the queue flag and the queue limits had to be kept in
      sync at all times. We got that wrong, which could lead to us issuing
      commands that went beyond the max scatterlist limit set by the driver.
      
      The proper fix is to avoid having two flags for tracking the same thing.
      We deprecate QUEUE_FLAG_CLUSTER and use the queue limit directly in the
      block layer merging functions. The queue_limit 'no_cluster' is turned
      into 'cluster' to avoid double negatives and to ease stacking.
      Clustering defaults to being enabled as before. The queue flag logic is
      removed from the stacking function, and explicitly setting the cluster
      flag is no longer necessary in DM and MD.
      Reported-by: Ed Lin <ed.lin@promise.com>
      Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
      Acked-by: Mike Snitzer <snitzer@redhat.com>
      Cc: stable@kernel.org
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
      e692cb66
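      A minimal sketch of the idea, again with model types rather than the real
      block-layer structs: the merging path reads the cluster setting straight from
      the limits, and stacking combines limits instead of relying on a flag that
      someone has to remember to propagate.

      #include <stdbool.h>
      #include <stdio.h>

      struct limits_model { unsigned int cluster; /* 1 = segments may be merged */ };
      struct queue_model  { struct limits_model limits; };

      /* Merging code asks the limits directly; no separate queue flag to sync. */
      static bool cluster_enabled(const struct queue_model *q)
      {
              return q->limits.cluster != 0;
      }

      /* Stacking keeps clustering only if every level allows it. */
      static void stack_limits(struct limits_model *top, const struct limits_model *bottom)
      {
              top->cluster = top->cluster && bottom->cluster;
      }

      int main(void)
      {
              struct queue_model md = { { 1 } };     /* enabled by default */
              struct limits_model member = { 0 };    /* member device: no clustering */

              stack_limits(&md.limits, &member);
              printf("cluster after stacking: %d\n", cluster_enabled(&md));
              return 0;
      }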
    • net: fix nulls list corruptions in sk_prot_alloc · fcbdf09d
      Octavian Purdila authored
      Special care is taken inside sk_prot_alloc to avoid overwriting
      skc_node/skc_nulls_node. We should also avoid overwriting
      skc_bind_node/skc_portaddr_node.
      
      The patch fixes the following crash:
      
       BUG: unable to handle kernel paging request at fffffffffffffff0
       IP: [<ffffffff812ec6dd>] udp4_lib_lookup2+0xad/0x370
       [<ffffffff812ecc22>] __udp4_lib_lookup+0x282/0x360
       [<ffffffff812ed63e>] __udp4_lib_rcv+0x31e/0x700
       [<ffffffff812bba45>] ? ip_local_deliver_finish+0x65/0x190
       [<ffffffff812bbbf8>] ? ip_local_deliver+0x88/0xa0
       [<ffffffff812eda35>] udp_rcv+0x15/0x20
       [<ffffffff812bba45>] ip_local_deliver_finish+0x65/0x190
       [<ffffffff812bbbf8>] ip_local_deliver+0x88/0xa0
       [<ffffffff812bb2cd>] ip_rcv_finish+0x32d/0x6f0
       [<ffffffff8128c14c>] ? netif_receive_skb+0x99c/0x11c0
       [<ffffffff812bb94b>] ip_rcv+0x2bb/0x350
       [<ffffffff8128c14c>] netif_receive_skb+0x99c/0x11c0
      Signed-off-by: Leonard Crestez <lcrestez@ixiacom.com>
      Signed-off-by: Octavian Purdila <opurdila@ixiacom.com>
      Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      fcbdf09d
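      The underlying technique is generic: when an object's list-linkage fields may
      still be traversed by lockless readers, clear only the fields that come after
      them rather than the whole object. A small hypothetical example (the names are
      made up, not the actual sock layout):

      #include <stddef.h>
      #include <stdio.h>
      #include <string.h>

      /* Hypothetical object with hash-list linkage at the front; lockless
       * lookups may still be walking these pointers, so a blanket memset of
       * the whole object would corrupt the list, as in the crash above. */
      struct conn {
              struct conn *nulls_next;   /* must survive a reset */
              struct conn *bind_next;    /* must survive a reset too */
              int state;                 /* everything from here down may be cleared */
              char scratch[32];
      };

      /* Clear only the non-linkage part of the object. */
      static void conn_clear(struct conn *c)
      {
              size_t off = offsetof(struct conn, state);

              memset((char *)c + off, 0, sizeof(*c) - off);
      }

      int main(void)
      {
              struct conn a = { 0 }, b = { 0 };

              a.nulls_next = &b;
              a.state = 42;
              conn_clear(&a);
              printf("linkage kept: %d, state cleared: %d\n",
                     a.nulls_next == &b, a.state == 0);
              return 0;
      }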
    • SSB: Fix nvram_get on BCM47xx platform · 3f84622d
      Hauke Mehrtens authored
      The nvram_get function was never in the mainline kernel, it only existed in
      an external OpenWrt patch. Use nvram_getenv function, which is in mainline
      and use an include instead of an extra function declaration.  et0macaddr
      contains the mac address in text form, like 00:11:22:33:44:55. We have to
      parse it before adding it into macaddr.
      
      nvram_parse_macaddr will be merged into asm/mach-bcm47xx/nvram.h through
      the MIPS git tree and will be available soon. It will not build now without
      nvram_parse_macaddr, but it did not build before either.
      Signed-off-by: Hauke Mehrtens <hauke@hauke-m.de>
      To: linux-mips@linux-mips.org
      Cc: mb@bu3sch.de
      Cc: netdev@vger.kernel.org
      Cc: Hauke Mehrtens <hauke@hauke-m.de>
      Acked-by: Michael Buesch <mb@bu3sch.de>
      Patchwork: https://patchwork.linux-mips.org/patch/1849/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      3f84622d
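      For illustration, a userspace stand-in for the MAC parsing step (the real helper,
      nvram_parse_macaddr, comes via the MIPS tree as noted above; this sketch only
      shows the text-to-bytes conversion):

      #include <stdio.h>

      /* Convert an et0macaddr-style string such as "00:11:22:33:44:55"
       * into six raw bytes.  Returns 0 on success, -1 on a malformed string. */
      static int parse_macaddr(const char *buf, unsigned char mac[6])
      {
              unsigned int b[6];
              int i;

              if (sscanf(buf, "%2x:%2x:%2x:%2x:%2x:%2x",
                         &b[0], &b[1], &b[2], &b[3], &b[4], &b[5]) != 6)
                      return -1;
              for (i = 0; i < 6; i++)
                      mac[i] = (unsigned char)b[i];
              return 0;
      }

      int main(void)
      {
              unsigned char mac[6];

              if (parse_macaddr("00:11:22:33:44:55", mac) == 0)
                      printf("%02x:%02x:%02x:%02x:%02x:%02x\n",
                             mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]);
              return 0;
      }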
    • PM / Runtime: Fix pm_runtime_suspended() · f08f5a0a
      Rafael J. Wysocki authored
      There are some situations (e.g. in __pm_generic_call()), where
      pm_runtime_suspended() is used to decide whether or not to execute
      a device's (system) ->suspend() callback.  The callback is not
      executed if pm_runtime_suspended() returns true, but it does so
      for devices that don't even support runtime PM, because the
      power.disable_depth device field is ignored by it.  This leads to
      problems (i.e. devices are not suspended when they should be), so rework
      pm_runtime_suspended() so that it returns false if the device's
      power.disable_depth field is different from zero.
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
      Cc: stable@kernel.org
      f08f5a0a
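      The logic of the fix, modelled in a few lines of userspace C (field names mirror
      struct dev_pm_info, but this is only a sketch of the check, not the kernel code):

      #include <stdbool.h>
      #include <stdio.h>

      #define RPM_SUSPENDED 1             /* illustrative status value */

      struct dev_pm_info_model {
              int runtime_status;         /* RPM_ACTIVE, RPM_SUSPENDED, ... */
              unsigned int disable_depth; /* non-zero: runtime PM disabled */
      };

      /* After the fix: a device whose runtime PM is disabled is never reported
       * as runtime-suspended, so its system ->suspend() callback is not skipped. */
      static bool runtime_suspended(const struct dev_pm_info_model *power)
      {
              return power->runtime_status == RPM_SUSPENDED
                     && power->disable_depth == 0;
      }

      int main(void)
      {
              /* A device that does not support runtime PM at all. */
              struct dev_pm_info_model no_rpm = { RPM_SUSPENDED, 1 };

              printf("skip system suspend? %d\n", runtime_suspended(&no_rpm)); /* 0 */
              return 0;
      }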
  9. 16 December 2010 (2 commits)