1. 21 Jan 2016, 1 commit
  2. 20 Jan 2016, 23 commits
  3. 19 Jan 2016, 6 commits
  4. 18 Jan 2016, 8 commits
  5. 17 Jan 2016, 1 commit
      include/linux/kernel.h: change abs() macro so it uses consistent return type · 8f57e4d9
      Authored by Michal Nazarewicz
      Rewrite abs() so that its return type does not depend on the
      architecture and no unexpected type conversions happen inside of it.  The
      only conversion is from unsigned to signed type.  char is left as a
      return type but treated as a signed type regardless of its actual
      signedness.
      
      With the old version, int arguments were promoted to long and depending
      on architecture a long argument might result in s64 or long return type
      (which may or may not be the same).
      
      This came after some back and forth with Nicolas.  The current macro has
      a different return type (for the same input type) depending on
      architecture, which might be mildly irritating.
      
      An alternative version would promote to int like so:
      
      	#define abs(x)	__abs_choose_expr(x, long long,			\
      			__abs_choose_expr(x, long,			\
      			__builtin_choose_expr(				\
      				sizeof(x) <= sizeof(int),		\
      				({ int __x = (x); __x<0?-__x:__x; }),	\
      				((void)0))))
      
      I have no preference but imagine Linus might.  :] Nicolas's argument
      against it is that promoting to int causes inconsistent behaviour:
      
      	#include <stdio.h>
      	#include <stdlib.h>

      	int main(void) {
      		unsigned short a = 0, b = 1, c = a - b;
      		unsigned short d = abs(a - b);
      		unsigned short e = abs(c);
      		printf("%u %u\n", d, e);  // prints: 1 65535
      	}
      
      Then again, no sane person expects consistent behaviour from C integer
      arithmetic.  ;)
      
      Note:
      
        __builtin_types_compatible_p(unsigned char, char) is always false, and
        __builtin_types_compatible_p(signed char, char) is also always false.
      Signed-off-by: Michal Nazarewicz <mina86@mina86.com>
      Reviewed-by: Nicolas Pitre <nico@linaro.org>
      Cc: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
      Cc: Wey-Yi Guy <wey-yi.w.guy@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8f57e4d9
  6. 16 Jan 2016, 1 commit
      bna: fix Rx data corruption with VLAN stripping enabled and MTU > 4096 · 6c3f5aef
      Authored by Ivan Vecera
      The multi-buffer Rx mode implemented in the past introduced
      a regression that causes data corruption for received VLAN
      traffic when VLAN tag stripping is enabled. This mode is supported
      only by newer chipsets (1860) and is enabled when MTU > 4096.
      
      When this mode is enabled Rx queue contains buffers with fixed size
      2048 bytes. Any incoming packet larger than 2048 is divided into
      multiple buffers that are attached as skb frags in polling routine.
      
      The driver assumes that all buffers associated with a packet except
      the last one are fully used (e.g. a packet of size 5000 is divided
      into 3 buffers of 2048 + 2048 + 904 bytes) and ignores the true size
      reported in completions. This assumption is usually true, but not when
      a VLAN packet is received and VLAN tag stripping is enabled. In that
      case the first buffer is 2044 bytes long, but as the driver always
      assumes 2048 bytes, 4 extra random bytes are included between the
      first and the second frag. Additionally, the driver marks the checksum
      as correct, so the packet is processed normally by the core.
      
      The driver needs to check the size of used space in each Rx buffer
      reported by FW and not blindly use the fixed value.
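
      The arithmetic behind the bug can be sketched as follows.  This is
      illustrative code with made-up names, not the bna driver's actual
      structures: it only contrasts summing the firmware-reported per-buffer
      lengths (the fix) with assuming every non-final buffer is full (the bug).

```c
#include <stddef.h>

enum { RX_BUF_SIZE = 2048 };  /* fixed buffer size in multi-buffer mode */

/* The fix: sum the per-buffer used lengths the FW reports in completions. */
size_t frame_len_reported(const size_t *cmpl_len, int nbufs)
{
	size_t total = 0;
	for (int i = 0; i < nbufs; i++)
		total += cmpl_len[i];
	return total;
}

/* The buggy assumption: every buffer except the last is fully used. */
size_t frame_len_assumed(const size_t *cmpl_len, int nbufs)
{
	return (size_t)(nbufs - 1) * RX_BUF_SIZE + cmpl_len[nbufs - 1];
}
```

      For a VLAN-stripped frame whose buffers hold {2044, 2048, 904} bytes,
      the assumed length overshoots the reported one by exactly the 4 bytes
      of garbage that end up between the first and second frags.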
      
      Cc: Rasesh Mody <rasesh.mody@qlogic.com>
      Signed-off-by: Ivan Vecera <ivecera@redhat.com>
      Reviewed-by: Rasesh Mody <rasesh.mody@qlogic.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      6c3f5aef