1. 11 Aug 2010, 1 commit
    • tty: remove remaining Hayes ESP ioctls · a3c8ed69
      Greg Kroah-Hartman authored
      As Jeff Dike pointed out, the Hayes ESP driver was removed in commit
      f53a2ade, so these ioctl definitions
      should also be removed.  This cleans up the remaining arch-specific
      locations of this ioctl value.
      
      Thanks to Arnd for pointing these out.
      
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Alan Cox <alan@linux.intel.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
      a3c8ed69
  2. 27 Jul 2010, 1 commit
  3. 09 Jun 2010, 1 commit
    • arch: Implement local64_t · 1996bda2
      Peter Zijlstra authored
      On 64-bit, local_t is the size of a long, and thus we make local64_t an alias.
      On 32-bit we fall back to atomic64_t (an architecture can provide an optimized
      32-bit version).
      
      (This new facility is to be used by perf events optimizations.)
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: linux-arch@vger.kernel.org
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      1996bda2
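      As a rough illustration of the alias-vs-fallback split described above (not the
      kernel's actual asm-generic/local64.h, and assuming an LP64 target for the
      64-bit branch), a minimal C sketch:

         /* Illustrative only: 64-bit builds wrap a plain long, 32-bit builds
          * fall back to a 64-bit atomic (a real arch may provide faster ops). */
         #include <stdint.h>
         #include <stdatomic.h>

         #if UINTPTR_MAX > 0xffffffffUL
         typedef struct { long v; } local64_t;            /* long is 64-bit here */
         static inline void local64_add(long i, local64_t *l) { l->v += i; }
         static inline long local64_read(local64_t *l)        { return l->v; }
         #else
         typedef struct { _Atomic int64_t v; } local64_t; /* atomic64_t stand-in */
         static inline void local64_add(long i, local64_t *l)
         {
                 atomic_fetch_add_explicit(&l->v, i, memory_order_relaxed);
         }
         static inline long local64_read(local64_t *l)
         {
                 return (long)atomic_load_explicit(&l->v, memory_order_relaxed);
         }
         #endif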
  4. 28 May 2010, 1 commit
  5. 17 May 2010, 1 commit
  6. 14 May 2010, 1 commit
  7. 13 Mar 2010, 1 commit
    • avr32: use generic ptrace_resume code · 1d839317
      Christoph Hellwig authored
      Use the generic ptrace_resume code for PTRACE_SYSCALL, PTRACE_CONT,
      PTRACE_KILL and PTRACE_SINGLESTEP.  This implies defining
      arch_has_single_step in <asm/ptrace.h> and implementing the
      user_enable_single_step and user_disable_single_step functions.  As a
      side effect, breakpoint information is now cleared on fork, which could
      be considered a bug fix.
      
      Also, the TIF_SYSCALL_TRACE thread flag is now cleared on PTRACE_KILL,
      which it previously wasn't; this is consistent with all architectures
      using the modern ptrace code.
      
      Currently avr32 doesn't implement any code to disable single stepping
      when one of the non-syscall requests is called, which seems wrong, but
      I've left it as-is for now.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Roland McGrath <roland@redhat.com>
      Acked-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1d839317
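      For reference, the pieces the generic ptrace_resume code expects an
      architecture to supply once it opts in (declarations only; the actual avr32
      bodies are what this patch adds):

         /* In the arch's <asm/ptrace.h>: advertise single-step support. */
         #define arch_has_single_step()  (1)

         /* Implemented in the arch's ptrace code; called by the generic
          * PTRACE_SINGLESTEP / resume paths. */
         struct task_struct;
         extern void user_enable_single_step(struct task_struct *child);
         extern void user_disable_single_step(struct task_struct *child);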
  8. 21 Feb 2010, 1 commit
    • MM: Pass a PTE pointer to update_mmu_cache() rather than the PTE itself · 4b3073e1
      Russell King authored
      On VIVT ARM, when we have multiple shared mappings of the same file
      in the same MM, we need to ensure that we have coherency across all
      copies.  We do this via make_coherent() by making the pages
      uncacheable.
      
      This used to work fine, until we allowed highmem with highpte - we
      now have a page table which is mapped as required, and is not available
      for modification via update_mmu_cache().
      
      Ralf Baechle suggested getting rid of the PTE value passed to
      update_mmu_cache():
      
        On MIPS update_mmu_cache() calls __update_tlb() which walks pagetables
        to construct a pointer to the pte again.  Passing a pte_t * is much
        more elegant.  Maybe we might even replace the pte argument with the
        pte_t?
      
      Ben Herrenschmidt would also like the pte pointer for PowerPC:
      
        Passing the ptep in there is exactly what I want.  I want that
        -instead- of the PTE value, because I have issue on some ppc cases,
        for I$/D$ coherency, where set_pte_at() may decide to mask out the
        _PAGE_EXEC.
      
      So, pass in the mapped page table pointer into update_mmu_cache(), and
      remove the PTE value, updating all implementations and call sites to
      suit.
      
      Includes a fix from Stephen Rothwell:
      
        sparc: fix fallout from update_mmu_cache API change
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      4b3073e1
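      The shape of the interface change, for reference (a self-contained fragment;
      pte_t and struct vm_area_struct are stand-ins for the kernel's types):

         struct vm_area_struct;                        /* kernel type, stubbed */
         typedef struct { unsigned long pte; } pte_t;  /* stand-in for the real pte_t */

         /* before (old signature, shown for comparison):
          *   void update_mmu_cache(struct vm_area_struct *vma,
          *                         unsigned long address, pte_t pte);
          *
          * after: the mapped page-table pointer is passed instead, so the
          * implementation can re-read (or, on VIVT ARM, rewrite) the entry. */
         void update_mmu_cache(struct vm_area_struct *vma,
                               unsigned long address, pte_t *ptep);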
  9. 16 Dec 2009, 1 commit
  10. 15 Dec 2009, 1 commit
  11. 12 Dec 2009, 1 commit
  12. 11 Dec 2009, 1 commit
  13. 06 Dec 2009, 1 commit
  14. 26 Nov 2009, 1 commit
    • block: add helpers to run flush_dcache_page() against a bio and a request's pages · 2d4dc890
      Ilya Loginov authored
      The mtdblock driver doesn't call flush_dcache_page() for pages in a
      request, which causes problems on architectures where the icache doesn't
      fill from the dcache or which have dcache aliases.  The patch fixes this.
      
      The ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE symbol was introduced to avoid
      pointless empty cache-thrashing loops on architectures for which
      flush_dcache_page() is a no-op.  Every architecture now provides this
      symbol; the new helpers flush the pages on architectures where
      ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE is 1 and do nothing otherwise.
      
      See "fix mtd_blkdevs problem with caches on some architectures" discussion
      on LKML for more information.
      Signed-off-by: Ilya Loginov <isloginov@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Peter Horton <phorton@bitbox.co.uk>
      Cc: "Ed L. Cashin" <ecashin@coraid.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      2d4dc890
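      Schematically, each helper just walks the request's (or bio's) segments and
      flushes every backing page, and is compiled out where the symbol is 0. A
      kernel-internal sketch in the 2.6.32-era idiom (not a verbatim copy of
      block/blk-core.c):

         #if ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE
         /* Flush the data cache for every page backing this request's bios. */
         void rq_flush_dcache_pages(struct request *rq)
         {
                 struct req_iterator iter;
                 struct bio_vec *bvec;

                 rq_for_each_segment(bvec, rq, iter)
                         flush_dcache_page(bvec->bv_page);
         }
         #endif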
  15. 13 Oct 2009, 1 commit
    • net: Generalize socket rx gap / receive queue overflow cmsg · 3b885787
      Neil Horman authored
      Create a new socket level option to report number of queue overflows
      
      Recently I augmented the AF_PACKET protocol to report the number of frames lost
      on the socket receive queue between any two enqueued frames.  This value was
      exported via a SOL_PACKET level cmsg.  After I completed that work it was
      requested that this feature be generalized so that any datagram oriented socket
      could make use of this option.  As such I've created this patch.  It creates a
      new SOL_SOCKET level option called SO_RXQ_OVFL, which when enabled exports a
      SOL_SOCKET level cmsg that reports the number of times the sk_receive_queue
      overflowed between any two given frames.  It also augments the AF_PACKET
      protocol to take advantage of this new feature (as it previously did not touch
      sk->sk_drops, which this patch uses to record the overflow count).  Tested
      successfully by me.
      
      Notes:
      
      1) Unlike my previous patch, this patch simply records the sk_drops value, which
      is not a number of drops between packets, but rather a total number of drops.
      Deltas must be computed in user space.
      
      2) While this patch currently works with datagram oriented protocols, it will
      also be accepted by non-datagram oriented protocols. I'm not sure if that's
      agreeable to everyone, but my argument in favor of doing so is that, for those
      protocols which aren't applicable to this option, sk_drops will always be zero,
      and reporting no drops on a receive queue that isn't used for those
      non-participating protocols seems reasonable to me.  This also saves us having
      to code in a per-protocol opt in mechanism.
      
      3) This applies cleanly to net-next assuming that commit
      97775007 (my af packet cmsg patch) is reverted
      Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      3b885787
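      A hedged user-space sketch of how the option is consumed (assumes headers that
      define SO_RXQ_OVFL, i.e. Linux 2.6.33 or later; error handling trimmed):

         #include <stdio.h>
         #include <stdint.h>
         #include <string.h>
         #include <sys/socket.h>
         #include <netinet/in.h>

         int main(void)
         {
                 int fd = socket(AF_INET, SOCK_DGRAM, 0);
                 struct sockaddr_in addr = { .sin_family = AF_INET,
                                             .sin_port = htons(9999) };
                 int one = 1;
                 char data[2048];
                 union { char buf[CMSG_SPACE(sizeof(uint32_t))];
                         struct cmsghdr align; } ctl;
                 struct iovec iov = { .iov_base = data, .iov_len = sizeof(data) };
                 struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1,
                                       .msg_control = ctl.buf,
                                       .msg_controllen = sizeof(ctl.buf) };
                 struct cmsghdr *cmsg;

                 bind(fd, (struct sockaddr *)&addr, sizeof(addr));
                 setsockopt(fd, SOL_SOCKET, SO_RXQ_OVFL, &one, sizeof(one));

                 if (recvmsg(fd, &msg, 0) < 0)
                         return 1;
                 for (cmsg = CMSG_FIRSTHDR(&msg); cmsg; cmsg = CMSG_NXTHDR(&msg, cmsg)) {
                         if (cmsg->cmsg_level == SOL_SOCKET &&
                             cmsg->cmsg_type == SO_RXQ_OVFL) {
                                 uint32_t drops;
                                 memcpy(&drops, CMSG_DATA(cmsg), sizeof(drops));
                                 /* cumulative counter; compute deltas in user space */
                                 printf("receive-queue drops so far: %u\n", drops);
                         }
                 }
                 return 0;
         }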
  16. 22 Sep 2009, 2 commits
  17. 02 Sep 2009, 1 commit
  18. 06 Aug 2009, 2 commits
    • net: implement a SO_DOMAIN getsockoption · 0d6038ee
      Jan Engelhardt authored
      This sockopt goes in line with SO_TYPE and SO_PROTOCOL. It makes it
      possible for userspace programs to pass around file descriptors — I
      am referring to arguments-to-functions, but it may even work for the
      fd passing over UNIX sockets — without needing to also pass the
      auxiliary information (PF_INET6/IPPROTO_TCP).
      Signed-off-by: Jan Engelhardt <jengelh@medozas.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      0d6038ee
    • net: implement a SO_PROTOCOL getsockoption · 49c794e9
      Jan Engelhardt authored
      Similar to SO_TYPE, which returns the socket type, SO_PROTOCOL allows
      retrieving the protocol used with a given socket.
      
      I am not quite sure why we have that many copies of socket.h, nor why
      the values are not the same on all arches, but for the arches where hex
      numbers dominate, I use 0x1029 for SO_PROTOCOL as that seems to be the
      next free unused number across a bunch of operating systems, or so
      Google results make me want to believe.  SO_PROTOCOL for the others just
      uses the next free Linux number, 38.
      Signed-off-by: Jan Engelhardt <jengelh@medozas.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      49c794e9
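      A hedged usage sketch covering this and the previous entry: recovering domain,
      type and protocol from a bare file descriptor (assumes headers that define
      SO_DOMAIN and SO_PROTOCOL):

         #include <stdio.h>
         #include <sys/socket.h>
         #include <netinet/in.h>

         int main(void)
         {
                 int fd = socket(AF_INET6, SOCK_STREAM, IPPROTO_TCP);
                 int domain = 0, type = 0, protocol = 0;
                 socklen_t len;

                 len = sizeof(domain);
                 getsockopt(fd, SOL_SOCKET, SO_DOMAIN, &domain, &len);
                 len = sizeof(type);
                 getsockopt(fd, SOL_SOCKET, SO_TYPE, &type, &len);
                 len = sizeof(protocol);
                 getsockopt(fd, SOL_SOCKET, SO_PROTOCOL, &protocol, &len);

                 /* on Linux: domain=10 (AF_INET6), type=1 (SOCK_STREAM),
                  * protocol=6 (IPPROTO_TCP) */
                 printf("domain=%d type=%d protocol=%d\n", domain, type, protocol);
                 return 0;
         }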
  19. 28 Jul 2009, 1 commit
    • mm: Pass virtual address to [__]p{te,ud,md}_free_tlb() · 9e1b32ca
      Benjamin Herrenschmidt authored
      Upcoming patches to support the new 64-bit "BookE" powerpc architecture
      will need to have the virtual address corresponding to PTE page when
      freeing it, due to the way the HW table walker works.
      
      Basically, the TLB can be loaded with "large" pages that cover the whole
      virtual space (well, sort-of, half of it actually) represented by a PTE
      page, and which contain an "indirect" bit indicating that this TLB entry
      RPN points to an array of PTEs from which the TLB can then create direct
      entries. Thus, in order to invalidate those when PTE pages are deleted,
      we need the virtual address to pass to tlbilx or tlbivax instructions.
      
      The old trick of sticking it somewhere in the PTE page's struct page
      sucks too much; the address is almost readily available at all call
      sites, and almost everybody implements these as macros, so we may as
      well add the argument everywhere.  I added it to the pmd and pud
      variants for consistency.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Acked-by: David Howells <dhowells@redhat.com> [MN10300 & FRV]
      Acked-by: Nick Piggin <npiggin@suse.de>
      Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com> [s390]
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      9e1b32ca
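      For reference, the shape of the change: the freeing hooks gain a third,
      virtual-address argument. A hypothetical architecture definition after the
      change might look like the sketch below (illustrative only; pgtable_page_dtor()
      and tlb_remove_page() mirror the common pattern, and the address argument is
      simply available for arches such as 64-bit BookE that need it):

         /* old form:  __pte_free_tlb(tlb, pte)
          * new form:  __pte_free_tlb(tlb, pte, address)  */
         #define __pte_free_tlb(tlb, pte, address)       \
         do {                                            \
                 pgtable_page_dtor(pte);                 \
                 tlb_remove_page((tlb), (pte));          \
         } while (0)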
  20. 11 Jul 2009, 1 commit
  21. 12 Jun 2009, 3 commits
  22. 13 May 2009, 1 commit
  23. 03 Mar 2009, 1 commit
  24. 16 Feb 2009, 1 commit
    • net: new user space API for time stamping of incoming and outgoing packets · cb9eff09
      Patrick Ohly authored
      User space can request hardware and/or software time stamping.
      Reporting of the result(s) via a new control message is enabled
      separately for each field in the message because some of the
      fields may require additional computation and thus cause overhead.
      User space can tell the different kinds of time stamps apart
      and choose what suits its needs.
      
      When a TX timestamp operation is requested, the TX skb will be cloned
      and the clone will be time stamped (in hardware or software) and added
      to the socket error queue of the skb, if the skb has a socket
      associated with it.
      
      The actual TX timestamp will reach userspace as an RX timestamp on the
      cloned packet.  If timestamping is requested and no timestamping is
      done in the device driver (potentially this may use hardware
      timestamping), it will be done in software after the device's
      hard_start_xmit routine.
      Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      cb9eff09
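      A hedged user-space sketch of enabling the new option (flag names from
      linux/net_tstamp.h, which this patch introduces; hardware time stamps would
      additionally need driver/NIC support and a SIOCSHWTSTAMP-style configuration):

         #include <stdio.h>
         #include <sys/socket.h>
         #include <netinet/in.h>
         #include <linux/net_tstamp.h>

         int main(void)
         {
                 int fd = socket(AF_INET, SOCK_DGRAM, 0);
                 int flags = SOF_TIMESTAMPING_TX_SOFTWARE |
                             SOF_TIMESTAMPING_RX_SOFTWARE |
                             SOF_TIMESTAMPING_SOFTWARE;    /* report software stamps */

                 if (setsockopt(fd, SOL_SOCKET, SO_TIMESTAMPING,
                                &flags, sizeof(flags)) < 0)
                         perror("SO_TIMESTAMPING");

                 /* RX stamps then arrive as SCM_TIMESTAMPING control messages on
                  * recvmsg(); TX stamps come back on the socket error queue and
                  * are read with recvmsg(fd, ..., MSG_ERRQUEUE). */
                 return 0;
         }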
  25. 13 Feb 2009, 1 commit
    • preempt-count: force hardirq-count to max of 10 · 5a5fb7db
      Steven Rostedt authored
      To add a bit in the preempt_count to be set when in NMI context, we
      found that some archs did not have enough bits to spare. This is
      due to the hardirq_count being a mask that can hold NR_IRQS.
      
      Some archs allow for over 16000 IRQs, and that would require a mask
      of 14 bits.  The softirq mask is 8 bits and the preempt disable mask
      is also 8 bits.  The PREEMPT_ACTIVE bit is bit 30, and bit 31 would
      make the preempt_count (which is type int) a negative number.
      A negative preempt_count is a sign of failure.
      
      Add them up: 14 + 8 + 8 + 1 + 1 is 32 bits, leaving no room for the NMI bit.
      
      But the hardirq_count is to track the number of nested IRQs, not
      the number of total IRQs.  This originally took the paranoid approach
      of setting the max nesting to NR_IRQS. But when we have archs with
      over 1000 IRQs, it is not practical to think they will ever all
      nest on a single CPU. Not to mention that this would most definitely
      cause a stack overflow.
      
      This patch sets a max of 10 bits to be used for IRQ nesting.
      I did a 'git grep HARDIRQ' to examine all users of HARDIRQ_BITS and
      HARDIRQ_MASK, and found that making it a max of 10 would not hurt
      anyone. I did find that the m68k expected it to be 8 bits, so
      I allow for the archs to set the number to be less than 10.
      
      I removed the setting of HARDIRQ_BITS from the archs that set it
      to more than 10. This includes ALPHA, ia64 and avr32.
      
      This will always allow room for the NMI bit, and if we need to allow
      for NMI nesting, we have 4 bits to play with.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      5a5fb7db
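      A small standalone illustration of the resulting layout (bit counts as
      described above; the real definitions live in linux/hardirq.h):

         #include <stdio.h>

         #define PREEMPT_BITS   8
         #define SOFTIRQ_BITS   8
         #define HARDIRQ_BITS  10                     /* capped at 10 by this patch */

         #define PREEMPT_SHIFT  0
         #define SOFTIRQ_SHIFT (PREEMPT_SHIFT + PREEMPT_BITS)   /*  8 */
         #define HARDIRQ_SHIFT (SOFTIRQ_SHIFT + SOFTIRQ_BITS)   /* 16 */
         #define NMI_SHIFT     (HARDIRQ_SHIFT + HARDIRQ_BITS)   /* 26 */

         int main(void)
         {
                 /* 8 + 8 + 10 + 1 = 27 bits used, so the NMI bit (and the
                  * PREEMPT_ACTIVE bit above it) still fit in a 32-bit int. */
                 printf("hardirq mask: 0x%08x\n",
                        ((1 << HARDIRQ_BITS) - 1) << HARDIRQ_SHIFT);
                 printf("nmi bit:      0x%08x\n", 1 << NMI_SHIFT);
                 return 0;
         }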
  26. 01 Feb 2009, 1 commit
  27. 16 Jan 2009, 1 commit
    • avr32: fix out-of-range rjmp instruction on large kernels · 61f3632f
      Haavard Skinnemoen authored
      Use .subsection to place fixups closer to their jump targets. This
      increases the maximum size of the kernel before we get link errors
      significantly.
      
      The problem here is that we don't have a "call"-ish pseudo-instruction
      to use instead of rjmp...we could add one, but that means we'll have to
      wait for a new toolchain release, wait until we're fairly sure most
      people are using it, etc...
      
      As an added bonus, it should decrease the RAM footprint slightly,
      though it might pollute the icache a bit more.
      Signed-off-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
      61f3632f
  28. 15 Jan 2009, 1 commit
  29. 07 Jan 2009, 5 commits
  30. 05 Jan 2009, 1 commit
  31. 01 Jan 2009, 1 commit
  32. 20 Oct 2008, 1 commit
    • container freezer: add TIF_FREEZE flag to all architectures · 83224b08
      Matt Helsley authored
      This patch series introduces a cgroup subsystem that utilizes the swsusp
      freezer to freeze a group of tasks.  It's immediately useful for batch job
      management scripts.  It should also be useful in the future for
      implementing container checkpoint/restart.
      
      The freezer subsystem in the container filesystem defines a cgroup file
      named freezer.state.  Reading freezer.state will return the current state
      of the cgroup.  Writing "FROZEN" to the state file will freeze all tasks
      in the cgroup.  Subsequently writing "RUNNING" will unfreeze the tasks in
      the cgroup.
      
      * Examples of usage:
      
         # mkdir /containers/freezer
         # mount -t cgroup -ofreezer freezer  /containers
         # mkdir /containers/0
         # echo $some_pid > /containers/0/tasks
      
      to get the status of the freezer subsystem:
      
         # cat /containers/0/freezer.state
         RUNNING
      
      to freeze all tasks in the container:
      
         # echo FROZEN > /containers/0/freezer.state
         # cat /containers/0/freezer.state
         FREEZING
         # cat /containers/0/freezer.state
         FROZEN
      
      to unfreeze all tasks in the container:
      
         # echo RUNNING > /containers/0/freezer.state
         # cat /containers/0/freezer.state
         RUNNING
      
      This patch:
      
      This is the first step in making the refrigerator() available to all
      architectures, even those without power management.
      
      The purpose of such a change is to be able to use the refrigerator() in a
      new control group subsystem which will implement a control group freezer.
      
      [akpm@linux-foundation.org: fix sparc]
      Signed-off-by: Cedric Le Goater <clg@fr.ibm.com>
      Signed-off-by: Matt Helsley <matthltc@us.ibm.com>
      Acked-by: Pavel Machek <pavel@suse.cz>
      Acked-by: Serge E. Hallyn <serue@us.ibm.com>
      Acked-by: Rafael J. Wysocki <rjw@sisk.pl>
      Acked-by: Nigel Cunningham <nigel@tuxonice.net>
      Tested-by: Matt Helsley <matthltc@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      83224b08