1. 10 Oct 2007 (26 commits)
  2. 09 Oct 2007 (1 commit)
  3. 08 Oct 2007 (2 commits)
    • [ROSE]: Fix rose.ko oops on unload · 891e6a93
      Authored by Alexey Dobriyan
      Commit a3d38402, aka
      "[AX.25]: Fix unchecked rose_add_loopback_neigh uses",
      turned the rose_loopback_neigh variable into a statically allocated
      one.  However, on unload it is still passed to kfree(), which cannot
      work on statically allocated memory.
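
      The underlying rule, sketched below with simplified names (the list
      handling is illustrative, not the verbatim net/rose code): kfree()
      may only be given pointers that came from kmalloc() and friends, so
      a cleanup loop must skip any statically allocated entry.

      	struct rose_neigh {
      		struct rose_neigh *next;
      		/* ... */
      	};

      	static struct rose_neigh rose_loopback_neigh;	/* static storage */
      	static struct rose_neigh *rose_neigh_list;	/* kmalloc'd entries */

      	static void rose_rt_free(void)
      	{
      		struct rose_neigh *s, *n = rose_neigh_list;

      		while (n != NULL) {
      			s = n;
      			n = n->next;
      			if (s != &rose_loopback_neigh)	/* skip the static one, */
      				kfree(s);		/* or kfree() oopses    */
      		}
      	}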
      
      Steps to reproduce:
      
      	modprobe rose
      	rmmod rose
      
      BUG: unable to handle kernel NULL pointer dereference at virtual address 00000008
       printing eip:
      c014c664
      *pde = 00000000
      Oops: 0000 [#1]
      PREEMPT DEBUG_PAGEALLOC
      Modules linked in: rose ax25 fan ufs loop usbhid rtc snd_intel8x0 snd_ac97_codec ehci_hcd ac97_bus uhci_hcd thermal usbcore button processor evdev sr_mod cdrom
      CPU:    0
      EIP:    0060:[<c014c664>]    Not tainted VLI
      EFLAGS: 00210086   (2.6.23-rc9 #3)
      EIP is at kfree+0x48/0xa1
      eax: 00000556   ebx: c1734aa0   ecx: f6a5e000   edx: f7082000
      esi: 00000000   edi: f9a55d20   ebp: 00200287   esp: f6a5ef28
      ds: 007b   es: 007b   fs: 0000  gs: 0033  ss: 0068
      Process rmmod (pid: 1823, ti=f6a5e000 task=f7082000 task.ti=f6a5e000)
      Stack: f9a55d20 f9a5200c 00000000 00000000 00000000 f6a5e000 f9a5200c f9a55a00 
             00000000 bf818cf0 f9a51f3f f9a55a00 00000000 c0132c60 65736f72 00000000 
             f69f9630 f69f9528 c014244a f6a4e900 00200246 f7082000 c01025e6 00000000 
      Call Trace:
       [<f9a5200c>] rose_rt_free+0x1d/0x49 [rose]
       [<f9a5200c>] rose_rt_free+0x1d/0x49 [rose]
       [<f9a51f3f>] rose_exit+0x4c/0xd5 [rose]
       [<c0132c60>] sys_delete_module+0x15e/0x186
       [<c014244a>] remove_vma+0x40/0x45
       [<c01025e6>] sysenter_past_esp+0x8f/0x99
       [<c012bacf>] trace_hardirqs_on+0x118/0x13b
       [<c01025b6>] sysenter_past_esp+0x5f/0x99
       =======================
      Code: 05 03 1d 80 db 5b c0 8b 03 25 00 40 02 00 3d 00 40 02 00 75 03 8b 5b 0c 8b 73 10 8b 44 24 18 89 44 24 04 9c 5d fa e8 77 df fd ff <8b> 56 08 89 f8 e8 84 f4 fd ff e8 bd 32 06 00 3b 5c 86 60 75 0f 
      EIP: [<c014c664>] kfree+0x48/0xa1 SS:ESP 0068:f6a5ef28
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • Don't do load-average calculations at even 5-second intervals · 0c2043ab
      Authored by Linus Torvalds
      It turns out that there are a few other five-second timers in the
      kernel, and if the timers get in sync, the load-average can get
      artificially inflated by events that just happen to coincide.
      
      So just offset the load-average calculation by one timer tick.
      
      Noticed by Anders Boström, for whom the coincidence started triggering
      on one of his machines with the JBD jiffies rounding code (JBD is one of
      the subsystems that also end up using a 5-second timer by default).
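
      The change amounts to a one-line constant tweak; a sketch of the
      idea (the constant is LOAD_FREQ in include/linux/sched.h):

      	/* Sampling every 5*HZ+1 ticks instead of 5*HZ makes the
      	 * load-average timer drift relative to the kernel's other
      	 * 5-second timers instead of staying in lockstep with them. */
      	#define LOAD_FREQ  (5*HZ+1)	/* 5 sec + 1 tick intervals */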
      Tested-by: Anders Boström <anders@bostrom.dyndns.org>
      Cc: Chuck Ebbert <cebbert@redhat.com>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  4. 05 Oct 2007 (1 commit)
    • Remove unnecessary cast in prefetch() · 4ecbca85
      Authored by Serge Belyshev
      It is ok to call the prefetch() function with a NULL argument, as
      specifically commented in include/linux/prefetch.h.  But in standard
      C, it is invalid to dereference a NULL pointer (see the C99 standard,
      6.5.3.2 paragraph 4 and note #84).
      
      prefetch() contains a memory reference to its argument.

      Newer gcc versions (4.3 and above) use that dereference to conclude
      that the "x" argument is non-NULL, wreaking havoc everywhere
      prefetch() was inlined.
      
      Fixed by removing the cast and changing the asm constraint.
      
      [ It seems in theory gcc 4.2 could miscompile this too, although no
        cases are known.  In 2.6.24 we should probably switch to
        __builtin_prefetch() instead, but this is a simpler fix for now.
      				-- AK ]
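
      A sketch of the two patterns (the exact instruction and constraints
      in the arch header may differ; this is illustrative only):

      	/* The "m" operand contains a C-level dereference of x, letting
      	 * gcc infer x != NULL and delete later NULL checks. */
      	static inline void prefetch_bad(const void *x)
      	{
      		asm volatile("prefetchnta %0" :: "m" (*(const char *)x));
      	}

      	/* Passing only the pointer value gives gcc nothing to infer;
      	 * the CPU ignores faulting prefetches anyway. */
      	static inline void prefetch_fixed(const void *x)
      	{
      		asm volatile("prefetchnta (%0)" :: "r" (x));
      	}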
      Signed-off-by: Serge Belyshev <belyshev@depni.sinp.msu.ru>
      Signed-off-by: Andi Kleen <ak@suse.de>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  5. 04 Oct 2007 (2 commits)
  6. 03 Oct 2007 (2 commits)
  7. 01 Oct 2007 (1 commit)
  8. 30 Sep 2007 (1 commit)
    • i386: remove bogus comment about memory barrier · 4827bbb0
      Authored by Nick Piggin
      The comment being removed by this patch is incorrect and misleading.
      
      In the following situation:
      
      	1. load  ...
      	2. store 1 -> X
      	3. wmb
      	4. rmb
      	5. load  a <- Y
      	6. store ...
      
      The rmb at 4 will only ensure ordering of 1 with 5.
      The wmb at 3 will only ensure ordering of 2 with 6.
      
      Further, a CPU with strictly in-order stores will still only
      guarantee that 2 and 6 are ordered (effectively, it is the same as a
      weakly ordered CPU with a wmb after every store).
      
      In all cases, 5 may still be executed before 2 is visible to other CPUs!
      
      The additional piece of the puzzle that mb() provides is the store/load
      ordering, which fundamentally cannot be achieved with any combination of
      rmb()s and wmb()s.
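
      A classic illustration (a sketch, not kernel code) is the
      Dekker-style pattern below; it is exactly the store/load case that
      only a full barrier forbids:

      	/* x, y, r0, r1 all start as 0; each function runs on its own CPU. */
      	int x, y, r0, r1;

      	void cpu0(void) { x = 1; smp_mb(); r0 = y; }
      	void cpu1(void) { y = 1; smp_mb(); r1 = x; }

      	/* With smp_mb(), the outcome r0 == 0 && r1 == 0 is forbidden.
      	 * With any mix of wmb()/rmb() in place of smp_mb() it remains
      	 * possible, because neither orders a store against a later load. */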
      
      This can be an unexpected result if one expects any sort of global
      ordering guarantee from barriers (eg. that the barriers themselves
      are sequentially consistent with other types of barriers).  However,
      sfence or lfence barriers need only provide a partial ordering of
      memory operations -- consider that wmb may be implemented as nothing
      more than inserting a special barrier entry in the store queue, or,
      in the case of x86, it can be a noop since the store queue is in
      order.  And an rmb may be implemented as a directive to prevent
      subsequent loads only so long as there are no previous outstanding
      loads (while there could be stores still sitting in store queues).
      
      I can actually see the occasional load/store being reordered around
      lfence on my core2.  That doesn't prove my above assertions, but it
      does show the comment is wrong (unless my program is -- I can send
      it out on request).
      
      So:
         mb() and smp_mb() always have required, and always will require, a
         full mfence or lock-prefixed instruction on x86.  And we should
         remove this comment.
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Cc: Paul McKenney <paulmck@us.ibm.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Andi Kleen <ak@suse.de>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  9. 29 Sep 2007 (2 commits)
    • [TCP]: Fix MD5 signature handling on big-endian. · f8ab18d2
      Authored by David S. Miller
      Based upon a report and initial patch by Peter Lieven.
      
      tcp4_md5sig_key and tcp6_md5sig_key need to start with the exact
      same members as tcp_md5sig_key, because they are both cast to that
      type by tcp_v{4,6}_md5_do_lookup().

      Unfortunately tcp{4,6}_md5sig_key use a u16 for the key length
      instead of the u8 that tcp_md5sig_key uses.  This just so happens to
      work by accident on little-endian, where the u8 read sees the
      low-order byte of the u16; on big-endian it sees the high-order
      byte, which is always zero for realistic key lengths.
      
      Instead of casting, just place tcp_md5sig_key as the first member of
      the address-family specific structures, adjust the access sites, and
      kill off the ugly casts.
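
      A sketch of the mismatch (field names follow the commit text, not
      the verbatim kernel structs):

      	/* Illustrative layouts only. */
      	struct tcp_md5sig_key  { u8 *key; u8  keylen; };
      	struct tcp4_md5sig_key { u8 *key; u16 keylen; /* + v4 address */ };

      	/* After the cast, generic code reads keylen as a u8 aliasing the
      	 * first byte of the u16: the low-order byte on little-endian
      	 * (accidentally the right value), the high-order byte (0) on
      	 * big-endian.  Embedding the generic struct as the first member
      	 * removes the need for the cast entirely:
      	 *
      	 *	struct tcp4_md5sig_key { struct tcp_md5sig_key base; ... };
      	 */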
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • [MIPS] Fix CONFIG_BUILD_ELF64 kernels with symbols in CKSEG0. · 9ae6399f
      Authored by Ralf Baechle
      __pa() assumed that all symbols have XKPHYS addresses, and the math
      fails for any other address range, CKSEG0 included.
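
      A sketch of the distinction (segment bases are architectural, but
      the physical-address masks here are simplified, not the verbatim
      arch/mips headers):

      	/* CKSEG0 maps the low 512MB of physical memory at
      	 * 0xffffffff80000000; XKPHYS maps physical memory directly in
      	 * the 0x8000000000000000 range. */
      	#define CKSEG0		0xffffffff80000000UL
      	#define CPHYSADDR(a)	((unsigned long)(a) & 0x1fffffffUL)
      	#define XPHYSADDR(a)	((unsigned long)(a) & 0x0000ffffffffffffUL)

      	/* Pick the conversion by segment instead of assuming XKPHYS: */
      	#define __pa(x) ({						\
      		unsigned long __x = (unsigned long)(x);			\
      		__x >= CKSEG0 ? CPHYSADDR(__x) : XPHYSADDR(__x); })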
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
  10. 28 Sep 2007 (1 commit)
  11. 27 Sep 2007 (1 commit)