1. 06 Dec 2009, 1 commit
  2. 04 Dec 2009, 6 commits
  3. 03 Dec 2009, 7 commits
    • GFS2: Tag all metadata with jid · 0ab7d13f
      Committed by Steven Whitehouse
       There are two spare fields in the header common to all GFS2
      metadata. One is just the right size to fit a journal id
      in it, and this patch updates the journal code so that each
      time a metadata block is modified, we tag it with the journal
      id of the node which is performing the modification.
      
      The reason for this is that it should make it much easier to
      debug issues which arise if we can tell which node was the
      last to modify a particular metadata block.
      
      Since the field is updated before the block is written into
      the journal, each journal should only contain metadata which
      is tagged with its own journal id. The one exception to this
      is the journal header block, which might have a different node's
      id in it, if that journal was recovered by another node in the
      cluster.
      
      Thus each journal will contain a record of which nodes recovered
      it, via the journal header.
      
      The other field in the metadata header could potentially be
      used to hold information about what kind of operation was
      performed, but for the time being we just zero it on each
      transaction so that if we use it for that in future, we'll
      know that the information (where it exists) is reliable.
      
      I did consider using the other field to hold the journal
      sequence number, however since in GFS2's journaling we write
      the modified data into the journal and not the original
      data, this gives no information as to what action caused the
      modification, so I think we can probably come up with a better
      use for those 64 bits in the future.
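       
       As a rough illustration of the idea only (the struct and field names
       below are invented for the sketch, not taken from the GFS2 headers),
       the tagging amounts to stamping the local journal id into one spare
       header field, and zeroing the other, each time a buffer joins a
       transaction:
       
       #include <stdint.h>
       #include <stdio.h>
       
       /* Hypothetical stand-in for a metadata header with two spare fields. */
       struct meta_header {
       	uint32_t mh_magic;
       	uint32_t mh_type;
       	uint32_t mh_jid;	/* repurposed spare: jid of the last modifier */
       	uint32_t mh_spare;	/* still spare: zeroed on every transaction */
       };
       
       /* Run for each metadata buffer added to the local transaction, before
        * the block is written into this node's journal. */
       static void tag_metadata(struct meta_header *mh, uint32_t my_jid)
       {
       	mh->mh_jid = my_jid;
       	mh->mh_spare = 0;	/* keep the other field reliable for future use */
       }
       
       int main(void)
       {
       	struct meta_header mh = { .mh_magic = 0x1161970, .mh_type = 4 };
       
       	tag_metadata(&mh, 3);	/* this node mounts journal 3 */
       	printf("block last modified under jid %u\n", mh.mh_jid);
       	return 0;
       }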
       Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
      0ab7d13f
    • VFS: Export dquot_send_warning · 86e931a3
      Committed by Steven Whitehouse
      Sending a message to userspace in a generic format to warn
      of events (e.g. quota exceeded) in the quota subsystem is
      a generically useful feature. This patch makes some minor
      changes to the send_message function from dquot.c renaming
       it quota_send_warning, moving it to quota.c and exporting it
      for use by filesystems which do not use the dquot code.
       Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
      86e931a3
    • VFS: Add forget_all_cached_acls() · 796bd952
      Committed by Steven Whitehouse
      This is required for cluster filesystems which want to use
      cached ACLs so that they can invalidate the cache when
      required.
       Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
      Cc: Alexander Viro <aviro@redhat.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      796bd952
    • TCPCT part 1d: define TCP cookie option, extend existing struct's · 435cf559
      Committed by William Allen Simpson
      Data structures are carefully composed to require minimal additions.
      For example, the struct tcp_options_received cookie_plus variable fits
      between existing 16-bit and 8-bit variables, requiring no additional
      space (taking alignment into consideration).  There are no additions to
      tcp_request_sock, and only 1 pointer in tcp_sock.
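       
       The space claim is easy to verify in isolation: a 16-bit member
       followed by an 8-bit member leaves a one-byte alignment hole before
       the next 32-bit member, and an extra 8-bit field can live in that
       hole for free. A standalone sketch (generic field names, not the real
       tcp_options_received layout):
       
       #include <stddef.h>
       #include <stdint.h>
       #include <stdio.h>
       
       struct opts_before {
       	uint16_t mss_clamp;	/* existing 16-bit member */
       	uint8_t  num_sacks;	/* existing 8-bit member */
       	/* one byte of padding sits here before the next member */
       	uint32_t rcv_tsval;
       };
       
       struct opts_after {
       	uint16_t mss_clamp;
       	uint8_t  num_sacks;
       	uint8_t  cookie_plus;	/* new 8-bit field fills the former hole */
       	uint32_t rcv_tsval;
       };
       
       int main(void)
       {
       	/* Both structs are the same size: the addition costs nothing. */
       	printf("before: %zu bytes, after: %zu bytes\n",
       	       sizeof(struct opts_before), sizeof(struct opts_after));
       	return 0;
       }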
      
      This is a significantly revised implementation of an earlier (year-old)
      patch that no longer applies cleanly, with permission of the original
      author (Adam Langley):
      
          http://thread.gmane.org/gmane.linux.network/102586
      
       The principal difference is using a TCP option to carry the cookie nonce,
      instead of a user configured offset in the data.  This is more flexible and
      less subject to user configuration error.  Such a cookie option has been
      suggested for many years, and is also useful without SYN data, allowing
      several related concepts to use the same extension option.
      
          "Re: SYN floods (was: does history repeat itself?)", September 9, 1996.
          http://www.merit.net/mail.archives/nanog/1996-09/msg00235.html
      
          "Re: what a new TCP header might look like", May 12, 1998.
          ftp://ftp.isi.edu/end2end/end2end-interest-1998.mail
      
      These functions will also be used in subsequent patches that implement
      additional features.
      
      Requires:
         TCPCT part 1a: add request_values parameter for sending SYNACK
         TCPCT part 1b: generate Responder Cookie secret
         TCPCT part 1c: sysctl_tcp_cookie_size, socket option TCP_COOKIE_TRANSACTIONS
      
      Signed-off-by: William.Allen.Simpson@gmail.com
       Signed-off-by: David S. Miller <davem@davemloft.net>
      435cf559
    • TCPCT part 1c: sysctl_tcp_cookie_size, socket option TCP_COOKIE_TRANSACTIONS · 519855c5
      Committed by William Allen Simpson
      Define sysctl (tcp_cookie_size) to turn on and off the cookie option
      default globally, instead of a compiled configuration option.
      
      Define per socket option (TCP_COOKIE_TRANSACTIONS) for setting constant
      data values, retrieving variable cookie values, and other facilities.
      
      Move inline tcp_clear_options() unchanged from net/tcp.h to linux/tcp.h,
      near its corresponding struct tcp_options_received (prior to changes).
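       
       For illustration, the global default is the kind of knob that shows up
       under /proc/sys/net/ipv4/ as tcp_cookie_size, which the per-socket
       TCP_COOKIE_TRANSACTIONS option then overrides; a minimal user-space
       sketch of reading that default (assumes a kernel with this patch
       series applied, and makes no claim about the option's exact structure):
       
       #include <stdio.h>
       
       /* Read the global default exposed by the sysctl; zero would mean the
        * cookie option stays off unless a socket opts in explicitly. */
       int main(void)
       {
       	FILE *f = fopen("/proc/sys/net/ipv4/tcp_cookie_size", "r");
       	int size = 0;
       
       	if (!f) {
       		perror("tcp_cookie_size (kernel without TCPCT?)");
       		return 1;
       	}
       	if (fscanf(f, "%d", &size) == 1)
       		printf("default TCP cookie size: %d bytes\n", size);
       	fclose(f);
       	return 0;
       }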
      
      This is a straightforward re-implementation of an earlier (year-old)
      patch that no longer applies cleanly, with permission of the original
      author (Adam Langley):
      
          http://thread.gmane.org/gmane.linux.network/102586
      
      These functions will also be used in subsequent patches that implement
      additional features.
      
      Requires:
         net: TCP_MSS_DEFAULT, TCP_MSS_DESIRED
      
      Signed-off-by: William.Allen.Simpson@gmail.com
       Signed-off-by: David S. Miller <davem@davemloft.net>
      519855c5
    • TCPCT part 1b: generate Responder Cookie secret · da5c78c8
      Committed by William Allen Simpson
      Define (missing) hash message size for SHA1.
      
      Define hashing size constants specific to TCP cookies.
      
      Add new function: tcp_cookie_generator().
      
      Maintain global secret values for tcp_cookie_generator().
      
      This is a significantly revised implementation of earlier (15-year-old)
      Photuris [RFC-2522] code for the KA9Q cooperative multitasking platform.
      
      Linux RCU technique appears to be well-suited to this application, though
      neither of the circular queue items are freed.
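       
       Conceptually, the generator hashes the connection identifiers together
       with a periodically refreshed secret, keeping the previous secret
       around so cookies issued just before a rotation still verify. A toy,
       self-contained sketch of that rotation (simple mixing stands in for
       SHA1; all names invented):
       
       #include <stdint.h>
       #include <stdio.h>
       #include <time.h>
       
       #define SECRET_LIFETIME 600	/* rotate every ten minutes (arbitrary) */
       
       struct cookie_secret {
       	uint64_t secret;
       	time_t   expires;
       };
       
       /* Slot [0] is the current secret, slot [1] the previous one, so that
        * in-flight cookies can still be validated after a rotation. */
       static struct cookie_secret secrets[2];
       
       static uint64_t mix(uint64_t h, uint64_t v)
       {
       	return (h ^ v) * 0x100000001b3ULL;	/* FNV-style step, illustration only */
       }
       
       static void maybe_rotate(time_t now)
       {
       	if (now < secrets[0].expires)
       		return;
       	secrets[1] = secrets[0];		/* keep the old secret alive */
       	secrets[0].secret = mix(secrets[0].secret, (uint64_t)now);
       	secrets[0].expires = now + SECRET_LIFETIME;
       }
       
       /* Derive a responder cookie from the 4-tuple and the current secret. */
       static uint64_t cookie_generate(uint32_t saddr, uint32_t daddr,
       				uint16_t sport, uint16_t dport)
       {
       	uint64_t h;
       
       	maybe_rotate(time(NULL));
       	h = mix(secrets[0].secret, ((uint64_t)saddr << 32) | daddr);
       	return mix(h, ((uint32_t)sport << 16) | dport);
       }
       
       int main(void)
       {
       	printf("cookie: %016llx\n", (unsigned long long)
       	       cookie_generate(0x0a000001, 0x0a000002, 12345, 80));
       	return 0;
       }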
      
      These functions will also be used in subsequent patches that implement
      additional features.
      
      Signed-off-by: William.Allen.Simpson@gmail.com
       Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
       Signed-off-by: David S. Miller <davem@davemloft.net>
      da5c78c8
    • skbuff: remove skb_dma_map/unmap · c81c2d95
      Committed by Alexander Duyck
      The two functions skb_dma_map/unmap are unsafe to use as they cause
      problems when packets are cloned and sent to multiple devices while a HW
      IOMMU is enabled.  Due to this it is best to remove the code so it is not
       used by any other network driver maintainers.
       Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
       Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
       Signed-off-by: David S. Miller <davem@davemloft.net>
      c81c2d95
  4. 02 Dec 2009, 4 commits
  5. 01 Dec 2009, 1 commit
  6. 29 Nov 2009, 2 commits
  7. 27 Nov 2009, 3 commits
  8. 26 Nov 2009, 1 commit
  9. 24 Nov 2009, 4 commits
    • locking: Use __[SPIN|RW]_LOCK_UNLOCKED in [spin|rw]_lock_init() · a49ed0bf
      Committed by Thomas Gleixner
      SPIN_LOCK_UNLOCKED and RW_LOCK_UNLOCKED are deprecated. Replace them
      with the __*_LOCK_UNLOCKED variants.
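       
       In practice the change just swaps which initializer the init macros
       expand to; illustratively, the deprecated and replacement forms look
       like this at a use site (kernel-style fragment, simplified, so treat
       it as a sketch rather than the patch itself):
       
       /* Deprecated: anonymous initializers, identical for every lock. */
       static spinlock_t old_lock   = SPIN_LOCK_UNLOCKED;
       static rwlock_t   old_rwlock = RW_LOCK_UNLOCKED;
       
       /* Replacement: the __*_LOCK_UNLOCKED variants take the lock's name,
        * which lets the lock debugging code tell individual locks apart. */
       static spinlock_t new_lock   = __SPIN_LOCK_UNLOCKED(new_lock);
       static rwlock_t   new_rwlock = __RW_LOCK_UNLOCKED(new_rwlock);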
       Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      a49ed0bf
    • locking: Remove unused prototype · c9286b7e
      Committed by Thomas Gleixner
       commit 910067d1 (remove generic__raw_read_trylock()) removed the
      implementation but left the prototype around. Remove it.
       Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      c9286b7e
    • remove CONFIG_SECURITY_FILE_CAPABILITIES compile option · b3a222e5
      Committed by Serge E. Hallyn
      As far as I know, all distros currently ship kernels with default
      CONFIG_SECURITY_FILE_CAPABILITIES=y.  Since having the option on
      leaves a 'no_file_caps' option to boot without file capabilities,
      the main reason to keep the option is that turning it off saves
      you (on my s390x partition) 5k.  In particular, vmlinux sizes
      came to:
      
      without patch fscaps=n:		 	53598392
      without patch fscaps=y:		 	53603406
      with this patch applied:		53603342
      
      with the security-next tree.
      
      Against this we must weigh the fact that there is no simple way for
      userspace to figure out whether file capabilities are supported,
      while things like per-process securebits, capability bounding
      sets, and adding bits to pI if CAP_SETPCAP is in pE are not supported
      with SECURITY_FILE_CAPABILITIES=n, leaving a bit of a problem for
      applications wanting to know whether they can use them and/or why
      something failed.
      
      It also adds another subtly different set of semantics which we must
      maintain at the risk of severe security regressions.
      
      So this patch removes the SECURITY_FILE_CAPABILITIES compile
      option.  It drops the kernel size by about 50k over the stock
      SECURITY_FILE_CAPABILITIES=y kernel, by removing the
      cap_limit_ptraced_target() function.
      
      Changelog:
       	Nov 20: remove cap_limit_ptraced_target() as its logic
      		was ifndef'ed.
       Signed-off-by: Serge E. Hallyn <serue@us.ibm.com>
       Acked-by: Andrew G. Morgan <morgan@kernel.org>
       Signed-off-by: James Morris <jmorris@namei.org>
      b3a222e5
    • sctp: implement definition for SACK-IMMEDIATELY extension · 475cba4e
      Committed by Wei Yongjun
       This patch implements the definition of the SACK-IMMEDIATELY
      extension.
      
      Section 3.  The I-bit in the DATA Chunk Header
      
         The following Figure 1 shows the extended DATA chunk.
      
          0                   1                   2                   3
          0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         |   Type = 0    |  Res  |I|U|B|E|           Length              |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         |                              TSN                              |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         |        Stream Identifier      |     Stream Sequence Number    |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         |                  Payload Protocol Identifier                  |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         \                                                               \
         /                           User Data                           /
         \                                                               \
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
      
                                       Figure 1
      
         The only difference between the DATA chunk in Figure 1 and the DATA
         chunk defined in [RFC4960] is the addition of the I-bit in the flags
         field of the chunk header.
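       
       Reading the flag bits off Figure 1 (E, B, U as already defined by
       RFC 4960, with the new I-bit in the next position), testing for the
       extension is a simple mask check; a small standalone sketch:
       
       #include <stdint.h>
       #include <stdio.h>
       
       /* DATA chunk flag bits, as laid out in Figure 1 above (low bit first). */
       #define SCTP_DATA_LAST_FRAG	0x01	/* E: ending fragment */
       #define SCTP_DATA_FIRST_FRAG	0x02	/* B: beginning fragment */
       #define SCTP_DATA_UNORDERED	0x04	/* U: unordered delivery */
       #define SCTP_DATA_SACK_IMM	0x08	/* I: SACK immediately (this patch) */
       
       int main(void)
       {
       	/* An unfragmented, ordered DATA chunk asking for an immediate SACK. */
       	uint8_t flags = SCTP_DATA_FIRST_FRAG | SCTP_DATA_LAST_FRAG |
       			SCTP_DATA_SACK_IMM;
       
       	if (flags & SCTP_DATA_SACK_IMM)
       		printf("sender requested an immediate SACK\n");
       	return 0;
       }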
       Signed-off-by: Wei Yongjun <yjwei@cn.fujitsu.com>
       Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
      475cba4e
  10. 23 Nov 2009, 3 commits
    • netfilter: nf_ct_tcp: improve out-of-sync situation in TCP tracking · c4832c7b
      Committed by Pablo Neira Ayuso
      Without this patch, if we receive a SYN packet from the client while
      the firewall is out-of-sync, we let it go through. Then, if we see
      the SYN/ACK reply coming from the server, we destroy the conntrack
      entry and drop the packet to trigger a new retransmission. Then,
       the retransmission from the client is used to start a new clean
      session.
      
      This patch improves the current handling. Basically, if we see an
      unexpected SYN packet, we annotate the TCP options. Then, if we
      see the reply SYN/ACK, this means that the firewall was indeed
      out-of-sync. Therefore, we set a clean new session from the existing
      entry based on the annotated values.
      
      This patch adds two new 8-bits fields that fit in a 16-bits gap of
      the ip_ct_tcp structure.
      
      This patch is particularly useful for conntrackd since the
       asynchronous nature of the state synchronization can leave
      backup nodes that are not perfect copies of the master. This helps
      to improve the recovery under some worst-case scenarios.
      
      I have tested this by creating lots of conntrack entries in wrong
      state:
      
      for ((i=1024;i<65535;i++)); do conntrack -I -p tcp -s 192.168.2.101 -d 192.168.2.2 --sport $i --dport 80 -t 800 --state ESTABLISHED -u ASSURED,SEEN_REPLY; done
      
      Then, I make some TCP connections:
      
      $ echo GET / | nc 192.168.2.2 80
      
      The events show the result:
      
       [UPDATE] tcp      6 60 SYN_RECV src=192.168.2.101 dst=192.168.2.2 sport=33220 dport=80 src=192.168.2.2 dst=192.168.2.101 sport=80 dport=33220 [ASSURED]
       [UPDATE] tcp      6 432000 ESTABLISHED src=192.168.2.101 dst=192.168.2.2 sport=33220 dport=80 src=192.168.2.2 dst=192.168.2.101 sport=80 dport=33220 [ASSURED]
       [UPDATE] tcp      6 120 FIN_WAIT src=192.168.2.101 dst=192.168.2.2 sport=33220 dport=80 src=192.168.2.2 dst=192.168.2.101 sport=80 dport=33220 [ASSURED]
       [UPDATE] tcp      6 30 LAST_ACK src=192.168.2.101 dst=192.168.2.2 sport=33220 dport=80 src=192.168.2.2 dst=192.168.2.101 sport=80 dport=33220 [ASSURED]
       [UPDATE] tcp      6 120 TIME_WAIT src=192.168.2.101 dst=192.168.2.2 sport=33220 dport=80 src=192.168.2.2 dst=192.168.2.101 sport=80 dport=33220 [ASSURED]
      
      and tcpdump shows no retransmissions:
      
      20:47:57.271951 IP 192.168.2.101.33221 > 192.168.2.2.www: S 435402517:435402517(0) win 5840 <mss 1460,sackOK,timestamp 4294961827 0,nop,wscale 6>
      20:47:57.273538 IP 192.168.2.2.www > 192.168.2.101.33221: S 3509927945:3509927945(0) ack 435402518 win 5792 <mss 1460,sackOK,timestamp 235681024 4294961827,nop,wscale 4>
      20:47:57.273608 IP 192.168.2.101.33221 > 192.168.2.2.www: . ack 3509927946 win 92 <nop,nop,timestamp 4294961827 235681024>
      20:47:57.273693 IP 192.168.2.101.33221 > 192.168.2.2.www: P 435402518:435402524(6) ack 3509927946 win 92 <nop,nop,timestamp 4294961827 235681024>
      20:47:57.275492 IP 192.168.2.2.www > 192.168.2.101.33221: . ack 435402524 win 362 <nop,nop,timestamp 235681024 4294961827>
      20:47:57.276492 IP 192.168.2.2.www > 192.168.2.101.33221: P 3509927946:3509928082(136) ack 435402524 win 362 <nop,nop,timestamp 235681025 4294961827>
      20:47:57.276515 IP 192.168.2.101.33221 > 192.168.2.2.www: . ack 3509928082 win 108 <nop,nop,timestamp 4294961828 235681025>
      20:47:57.276521 IP 192.168.2.2.www > 192.168.2.101.33221: F 3509928082:3509928082(0) ack 435402524 win 362 <nop,nop,timestamp 235681025 4294961827>
      20:47:57.277369 IP 192.168.2.101.33221 > 192.168.2.2.www: F 435402524:435402524(0) ack 3509928083 win 108 <nop,nop,timestamp 4294961828 235681025>
      20:47:57.279491 IP 192.168.2.2.www > 192.168.2.101.33221: . ack 435402525 win 362 <nop,nop,timestamp 235681025 4294961828>
      
       I also added a rule to log invalid packets, with no occurrences :-).
       Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
       Acked-by: Jozsef Kadlecsik <kadlec@blackhole.kfki.hu>
       Signed-off-by: Patrick McHardy <kaber@trash.net>
      c4832c7b
    • rcu: Re-arrange code to reduce #ifdef pain · 6ebb237b
      Committed by Paul E. McKenney
      Remove #ifdefs from kernel/rcupdate.c and
      include/linux/rcupdate.h by moving code to
      include/linux/rcutiny.h, include/linux/rcutree.h, and
      kernel/rcutree.c.
      
      Also remove some definitions that are no longer used.
       Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      LKML-Reference: <1258908830885-git-send-email->
       Signed-off-by: Ingo Molnar <mingo@elte.hu>
      6ebb237b
    • rcu: Eliminate unneeded function wrapping · 9f680ab4
      Committed by Paul E. McKenney
       The function rcu_init() is a wrapper for __rcu_init(), and also
      sets up the CPU-hotplug notifier for rcu_barrier_cpu_hotplug().
      But TINY_RCU doesn't need CPU-hotplug notification, and the
      rcu_barrier_cpu_hotplug() is a simple wrapper for
      rcu_cpu_notify().
      
      So push rcu_init() out to kernel/rcutree.c and kernel/rcutiny.c
      and get rid of the wrapper function rcu_barrier_cpu_hotplug().
       Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: laijs@cn.fujitsu.com
      Cc: dipankar@in.ibm.com
      Cc: mathieu.desnoyers@polymtl.ca
      Cc: josh@joshtriplett.org
      Cc: dvhltc@us.ibm.com
      Cc: niv@us.ibm.com
      Cc: peterz@infradead.org
      Cc: rostedt@goodmis.org
      Cc: Valdis.Kletnieks@vt.edu
      Cc: dhowells@redhat.com
      LKML-Reference: <12589088302320-git-send-email->
       Signed-off-by: Ingo Molnar <mingo@elte.hu>
      9f680ab4
  11. 21 Nov 2009, 1 commit
  12. 20 Nov 2009, 7 commits
    • i2c: i2c-pnx: Made buf type unsigned to prevent sign extension · 4ced24c8
      Committed by Kevin Wells
      Made buf type unsigned to prevent sign extension
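       
       The hazard being avoided is ordinary C sign extension: on ABIs where
       plain char is signed, a byte with the top bit set picks up 0xff..
       garbage when it is widened. A standalone demonstration of the effect
       (not the driver code itself):
       
       #include <stdio.h>
       
       int main(void)
       {
       	char          sbuf[] = { (char)0x80 };	/* plain char: signed here */
       	unsigned char ubuf[] = { 0x80 };
       
       	/* Widening the signed byte sign-extends it; the unsigned byte
       	 * widens cleanly, which is why the buf type was made unsigned. */
       	unsigned int bad  = sbuf[0];	/* 0xffffff80 on such ABIs */
       	unsigned int good = ubuf[0];	/* 0x00000080 */
       
       	printf("signed buf:   0x%08x\n", bad);
       	printf("unsigned buf: 0x%08x\n", good);
       	return 0;
       }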
       Signed-off-by: Kevin Wells <kevin.wells@nxp.com>
       Signed-off-by: Ben Dooks <ben-linux@fluff.org>
      4ced24c8
    • vt: Fix use of "new" in a struct field · 308efab5
      Committed by Alan Cox
      As this struct is exposed to user space and the API was added for this
       release, it's a bit of a pain for the C++ world, and we still have time to
      fix it. Rename the fields before we end up with that pain in an actual
      release.
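       
       The underlying problem is trivial to reproduce: new is a C++ keyword,
       so a userspace header whose struct has a member by that name cannot
       even be parsed by a C++ compiler. An illustrative fragment (made-up
       struct and replacement name, not the actual vt header):
       
       /* Valid C, but g++ rejects it because "new" is a keyword: */
       struct example_event_bad {
       	unsigned int event;
       	unsigned int new;	/* C++ parse error here */
       };
       
       /* Renaming the field before the API ships avoids the problem: */
       struct example_event_ok {
       	unsigned int event;
       	unsigned int newev;	/* hypothetical replacement name */
       };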
       Signed-off-by: Alan Cox <alan@linux.intel.com>
      Reported-by: Olivier Goffart
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      308efab5
    • CacheFiles: Catch an overly long wait for an old active object · fee096de
      Committed by David Howells
      Catch an overly long wait for an old, dying active object when we want to
      replace it with a new one.  The probability is that all the slow-work threads
      are hogged, and the delete can't get a look in.
      
      What we do instead is:
      
       (1) if there's nothing in the slow work queue, we sleep until either the dying
           object has finished dying or there is something in the slow work queue
           behind which we can queue our object.
      
       (2) if there is something in the slow work queue, we return ETIMEDOUT to
           fscache_lookup_object(), which then puts us back on the slow work queue,
           presumably behind the deletion that we're blocked by.  We are then
           deferred for a while until we work our way back through the queue -
           without blocking a slow-work thread unnecessarily.
      
      A backtrace similar to the following may appear in the log without this patch:
      
      	INFO: task kslowd004:5711 blocked for more than 120 seconds.
      	"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
      	kslowd004     D 0000000000000000     0  5711      2 0x00000080
      	 ffff88000340bb80 0000000000000046 ffff88002550d000 0000000000000000
      	 ffff88002550d000 0000000000000007 ffff88000340bfd8 ffff88002550d2a8
      	 000000000000ddf0 00000000000118c0 00000000000118c0 ffff88002550d2a8
      	Call Trace:
      	 [<ffffffff81058e21>] ? trace_hardirqs_on+0xd/0xf
      	 [<ffffffffa011c4d8>] ? cachefiles_wait_bit+0x0/0xd [cachefiles]
      	 [<ffffffffa011c4e1>] cachefiles_wait_bit+0x9/0xd [cachefiles]
      	 [<ffffffff81353153>] __wait_on_bit+0x43/0x76
      	 [<ffffffff8111ae39>] ? ext3_xattr_get+0x1ec/0x270
      	 [<ffffffff813531ef>] out_of_line_wait_on_bit+0x69/0x74
      	 [<ffffffffa011c4d8>] ? cachefiles_wait_bit+0x0/0xd [cachefiles]
      	 [<ffffffff8104c125>] ? wake_bit_function+0x0/0x2e
      	 [<ffffffffa011bc79>] cachefiles_mark_object_active+0x203/0x23b [cachefiles]
      	 [<ffffffffa011c209>] cachefiles_walk_to_object+0x558/0x827 [cachefiles]
      	 [<ffffffffa011a429>] cachefiles_lookup_object+0xac/0x12a [cachefiles]
      	 [<ffffffffa00aa1e9>] fscache_lookup_object+0x1c7/0x214 [fscache]
      	 [<ffffffffa00aafc5>] fscache_object_state_machine+0xa5/0x52d [fscache]
      	 [<ffffffffa00ab4ac>] fscache_object_slow_work_execute+0x5f/0xa0 [fscache]
      	 [<ffffffff81082093>] slow_work_execute+0x18f/0x2d1
      	 [<ffffffff8108239a>] slow_work_thread+0x1c5/0x308
      	 [<ffffffff8104c0f1>] ? autoremove_wake_function+0x0/0x34
      	 [<ffffffff810821d5>] ? slow_work_thread+0x0/0x308
      	 [<ffffffff8104be91>] kthread+0x7a/0x82
      	 [<ffffffff8100beda>] child_rip+0xa/0x20
      	 [<ffffffff8100b87c>] ? restore_args+0x0/0x30
      	 [<ffffffff8104be17>] ? kthread+0x0/0x82
      	 [<ffffffff8100bed0>] ? child_rip+0x0/0x20
      	1 lock held by kslowd004/5711:
      	 #0:  (&sb->s_type->i_mutex_key#7/1){+.+.+.}, at: [<ffffffffa011be64>] cachefiles_walk_to_object+0x1b3/0x827 [cachefiles]
       Signed-off-by: David Howells <dhowells@redhat.com>
      fee096de
    • CacheFiles: Don't write a full page if there's only a partial page to cache · a17754fb
      Committed by David Howells
      cachefiles_write_page() writes a full page to the backing file for the last
      page of the netfs file, even if the netfs file's last page is only a partial
      page.
      
      This causes the EOF on the backing file to be extended beyond the EOF of the
      netfs, and thus the backing file will be truncated by cachefiles_attr_changed()
      called from cachefiles_lookup_object().
      
      So we need to limit the write we make to the backing file on that last page
      such that it doesn't push the EOF too far.
      
      Also, if a backing file that has a partial page at the end is expanded, we
      discard the partial page and refetch it on the basis that we then have a hole
      in the file with invalid data, and should the power go out...  A better way to
      deal with this could be to record a note that the partial page contains invalid
      data until the correct data is written into it.
      
      This isn't a problem for netfs's that discard the whole backing file if the
      file size changes (such as NFS).
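       
       The arithmetic of the fix is straightforward: for the page that covers
       EOF, only the bytes up to EOF should be written to the backing file.
       A standalone sketch of that clamping (generic names, not the
       cachefiles code):
       
       #include <stdint.h>
       #include <stdio.h>
       
       #define PAGE_SIZE 4096u
       
       /* How many bytes of this page to write to the backing file so that
        * the backing file's EOF is not pushed past the netfs file's EOF. */
       static uint32_t bytes_to_write(uint64_t page_offset, uint64_t netfs_eof)
       {
       	if (page_offset >= netfs_eof)
       		return 0;	/* page lies entirely beyond EOF */
       	if (netfs_eof - page_offset < PAGE_SIZE)
       		return (uint32_t)(netfs_eof - page_offset);	/* partial last page */
       	return PAGE_SIZE;	/* full page */
       }
       
       int main(void)
       {
       	uint64_t eof = 10000;	/* netfs file is 10000 bytes long */
       
       	printf("page at 4096: write %u bytes\n", bytes_to_write(4096, eof));
       	printf("page at 8192: write %u bytes\n", bytes_to_write(8192, eof));
       	return 0;
       }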
       Signed-off-by: David Howells <dhowells@redhat.com>
      a17754fb
    • FS-Cache: Start processing an object's operations on that object's death · 60d543ca
      Committed by David Howells
      Start processing an object's operations when that object moves into the DYING
      state as the object cannot be destroyed until all its outstanding operations
      have completed.
      
      Furthermore, make sure that read and allocation operations handle being woken
      up on a dead object.  Such events are recorded in the Allocs.abt and
      Retrvls.abt statistics as viewable through /proc/fs/fscache/stats.
      
      The code for waiting for object activation for the read and allocation
      operations is also extracted into its own function as it is much the same in
      all cases, differing only in the stats incremented.
       Signed-off-by: David Howells <dhowells@redhat.com>
      60d543ca
    • FS-Cache: Handle pages pending storage that get evicted under OOM conditions · 201a1542
      Committed by David Howells
      Handle netfs pages that the vmscan algorithm wants to evict from the pagecache
      under OOM conditions, but that are waiting for write to the cache.  Under these
      conditions, vmscan calls the releasepage() function of the netfs, asking if a
      page can be discarded.
      
      The problem is typified by the following trace of a stuck process:
      
      	kslowd005     D 0000000000000000     0  4253      2 0x00000080
      	 ffff88001b14f370 0000000000000046 ffff880020d0d000 0000000000000007
      	 0000000000000006 0000000000000001 ffff88001b14ffd8 ffff880020d0d2a8
      	 000000000000ddf0 00000000000118c0 00000000000118c0 ffff880020d0d2a8
      	Call Trace:
      	 [<ffffffffa00782d8>] __fscache_wait_on_page_write+0x8b/0xa7 [fscache]
      	 [<ffffffff8104c0f1>] ? autoremove_wake_function+0x0/0x34
      	 [<ffffffffa0078240>] ? __fscache_check_page_write+0x63/0x70 [fscache]
      	 [<ffffffffa00b671d>] nfs_fscache_release_page+0x4e/0xc4 [nfs]
      	 [<ffffffffa00927f0>] nfs_release_page+0x3c/0x41 [nfs]
      	 [<ffffffff810885d3>] try_to_release_page+0x32/0x3b
      	 [<ffffffff81093203>] shrink_page_list+0x316/0x4ac
      	 [<ffffffff8109372b>] shrink_inactive_list+0x392/0x67c
      	 [<ffffffff813532fa>] ? __mutex_unlock_slowpath+0x100/0x10b
      	 [<ffffffff81058df0>] ? trace_hardirqs_on_caller+0x10c/0x130
      	 [<ffffffff8135330e>] ? mutex_unlock+0x9/0xb
      	 [<ffffffff81093aa2>] shrink_list+0x8d/0x8f
      	 [<ffffffff81093d1c>] shrink_zone+0x278/0x33c
      	 [<ffffffff81052d6c>] ? ktime_get_ts+0xad/0xba
      	 [<ffffffff81094b13>] try_to_free_pages+0x22e/0x392
      	 [<ffffffff81091e24>] ? isolate_pages_global+0x0/0x212
      	 [<ffffffff8108e743>] __alloc_pages_nodemask+0x3dc/0x5cf
      	 [<ffffffff81089529>] grab_cache_page_write_begin+0x65/0xaa
      	 [<ffffffff8110f8c0>] ext3_write_begin+0x78/0x1eb
      	 [<ffffffff81089ec5>] generic_file_buffered_write+0x109/0x28c
      	 [<ffffffff8103cb69>] ? current_fs_time+0x22/0x29
      	 [<ffffffff8108a509>] __generic_file_aio_write+0x350/0x385
      	 [<ffffffff8108a588>] ? generic_file_aio_write+0x4a/0xae
      	 [<ffffffff8108a59e>] generic_file_aio_write+0x60/0xae
      	 [<ffffffff810b2e82>] do_sync_write+0xe3/0x120
      	 [<ffffffff8104c0f1>] ? autoremove_wake_function+0x0/0x34
      	 [<ffffffff810b18e1>] ? __dentry_open+0x1a5/0x2b8
      	 [<ffffffff810b1a76>] ? dentry_open+0x82/0x89
      	 [<ffffffffa00e693c>] cachefiles_write_page+0x298/0x335 [cachefiles]
      	 [<ffffffffa0077147>] fscache_write_op+0x178/0x2c2 [fscache]
      	 [<ffffffffa0075656>] fscache_op_execute+0x7a/0xd1 [fscache]
      	 [<ffffffff81082093>] slow_work_execute+0x18f/0x2d1
      	 [<ffffffff8108239a>] slow_work_thread+0x1c5/0x308
      	 [<ffffffff8104c0f1>] ? autoremove_wake_function+0x0/0x34
      	 [<ffffffff810821d5>] ? slow_work_thread+0x0/0x308
      	 [<ffffffff8104be91>] kthread+0x7a/0x82
      	 [<ffffffff8100beda>] child_rip+0xa/0x20
      	 [<ffffffff8100b87c>] ? restore_args+0x0/0x30
      	 [<ffffffff8102ef83>] ? tg_shares_up+0x171/0x227
      	 [<ffffffff8104be17>] ? kthread+0x0/0x82
      	 [<ffffffff8100bed0>] ? child_rip+0x0/0x20
      
      In the above backtrace, the following is happening:
      
       (1) A page storage operation is being executed by a slow-work thread
           (fscache_write_op()).
      
       (2) FS-Cache farms the operation out to the cache to perform
           (cachefiles_write_page()).
      
       (3) CacheFiles is then calling Ext3 to perform the actual write, using Ext3's
           standard write (do_sync_write()) under KERNEL_DS directly from the netfs
           page.
      
       (4) However, for Ext3 to perform the write, it must allocate some memory, in
           particular, it must allocate at least one page cache page into which it
           can copy the data from the netfs page.
      
       (5) Under OOM conditions, the memory allocator can't immediately come up with
           a page, so it uses vmscan to find something to discard
           (try_to_free_pages()).
      
       (6) vmscan finds a clean netfs page it might be able to discard (possibly the
           one it's trying to write out).
      
       (7) The netfs is called to throw the page away (nfs_release_page()) - but it's
           called with __GFP_WAIT, so the netfs decides to wait for the store to
           complete (__fscache_wait_on_page_write()).
      
       (8) This blocks a slow-work processing thread - possibly against itself.
      
      The system ends up stuck because it can't write out any netfs pages to the
      cache without allocating more memory.
      
      To avoid this, we make FS-Cache cancel some writes that aren't in the middle of
      actually being performed.  This means that some data won't make it into the
      cache this time.  To support this, a new FS-Cache function is added
      fscache_maybe_release_page() that replaces what the netfs releasepage()
      functions used to do with respect to the cache.
      
      The decisions fscache_maybe_release_page() makes are counted and displayed
      through /proc/fs/fscache/stats on a line labelled "VmScan".  There are four
      counters provided: "nos=N" - pages that weren't pending storage; "gon=N" -
      pages that were pending storage when we first looked, but weren't by the time
      we got the object lock; "bsy=N" - pages that we ignored as they were actively
      being written when we looked; and "can=N" - pages that we cancelled the storage
      of.
      
      What I'd really like to do is alter the behaviour of the cancellation
      heuristics, depending on how necessary it is to expel pages.  If there are
      plenty of other pages that aren't waiting to be written to the cache that
      could be ejected first, then it would be nice to hold up on immediate
      cancellation of cache writes - but I don't see a way of doing that.
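       
       The decision table that those counters reflect can be sketched on its
       own: only a store that is merely pending, not actively in progress, is
       a candidate for cancellation. A self-contained sketch of that logic
       (names invented; not the fscache_maybe_release_page() implementation):
       
       #include <stdbool.h>
       #include <stdio.h>
       
       /* Possible states of a netfs page with respect to the cache. */
       enum page_store_state {
       	PAGE_NOT_PENDING,	/* never waiting to be stored   -> "nos" */
       	PAGE_STORE_GONE,	/* store finished as we looked  -> "gon" */
       	PAGE_STORE_BUSY,	/* cache write is in flight     -> "bsy" */
       	PAGE_STORE_PENDING,	/* queued but not started       -> "can" */
       };
       
       /* Under memory pressure, cancel a merely-pending store so the page
        * can be released, but never yank a page actively being written. */
       static bool maybe_release_page(enum page_store_state state)
       {
       	switch (state) {
       	case PAGE_NOT_PENDING:
       	case PAGE_STORE_GONE:
       		return true;	/* nothing to cancel, page may go */
       	case PAGE_STORE_PENDING:
       		return true;	/* cancel the store; data skips the cache */
       	case PAGE_STORE_BUSY:
       	default:
       		return false;	/* keep the page until the write completes */
       	}
       }
       
       int main(void)
       {
       	printf("busy page releasable? %s\n",
       	       maybe_release_page(PAGE_STORE_BUSY) ? "yes" : "no");
       	printf("pending page releasable? %s\n",
       	       maybe_release_page(PAGE_STORE_PENDING) ? "yes" : "no");
       	return 0;
       }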
       Signed-off-by: David Howells <dhowells@redhat.com>
      201a1542
    • FS-Cache: Fix lock misorder in fscache_write_op() · 1bccf513
      Committed by David Howells
      FS-Cache has two structs internally for keeping track of the internal state of
      a cached file: the fscache_cookie struct, which represents the netfs's state,
      and fscache_object struct, which represents the cache's state.  Each has a
      pointer that points to the other (when both are in existence), and each has a
      spinlock for pointer maintenance.
      
      Since netfs operations approach these structures from the cookie side, they get
      the cookie lock first, then the object lock.  Cache operations, on the other
      hand, approach from the object side, and get the object lock first.  It is not
      then permitted for a cache operation to get the cookie lock whilst it is
      holding the object lock lest deadlock occur; instead, it must do one of two
      things:
      
       (1) increment the cookie usage counter, drop the object lock and then get both
           locks in order, or
      
       (2) simply hold the object lock as certain parts of the cookie may not be
           altered whilst the object lock is held.
      
      It is also not permitted to follow either pointer without holding the lock at
      the end you start with.  To break the pointers between the cookie and the
      object, both locks must be held.
      
      fscache_write_op(), however, violates the locking rules: It attempts to get the
      cookie lock without (a) checking that the cookie pointer is a valid pointer,
      and (b) holding the object lock to protect the cookie pointer whilst it follows
      it.  This is so that it can access the pending page store tree without
      interference from __fscache_write_page().
      
      This is fixed by splitting the cookie lock, such that the page store tracking
      tree is protected by its own lock, and checking that the cookie pointer is
      non-NULL before we attempt to follow it whilst holding the object lock.
      
      The new lock is subordinate to both the cookie lock and the object lock, and so
      should be taken after those.
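       
       Option (1) above, pin the cookie, drop the object lock and retake both
       locks in the canonical order, is a standard deadlock-avoidance
       pattern; a self-contained pthread sketch of it (generic cookie/object
       stand-ins, nothing taken from the FS-Cache sources):
       
       #include <pthread.h>
       #include <stdio.h>
       
       struct cookie {
       	pthread_mutex_t lock;	/* outer lock: always taken first */
       	int usage;		/* keeps the cookie alive while unlocked */
       };
       
       struct object {
       	pthread_mutex_t lock;	/* inner lock: always taken second */
       	struct cookie *cookie;
       };
       
       /* Called from the cache side with only the object lock held.  We must
        * not take cookie->lock here (wrong order), so pin the cookie, drop
        * the object lock, and retake both locks in canonical order. */
       static void cache_op_touch_cookie(struct object *obj)
       {
       	struct cookie *c;
       
       	pthread_mutex_lock(&obj->lock);
       	c = obj->cookie;
       	if (!c) {
       		pthread_mutex_unlock(&obj->lock);
       		return;
       	}
       	c->usage++;			/* (1) pin so the cookie can't go away */
       	pthread_mutex_unlock(&obj->lock);
       
       	pthread_mutex_lock(&c->lock);	/* (2) now take the locks in order */
       	pthread_mutex_lock(&obj->lock);
       	/* ... safe to follow both pointers here ... */
       	pthread_mutex_unlock(&obj->lock);
       	pthread_mutex_unlock(&c->lock);
       
       	c->usage--;	/* unpin (a real refcount would drop this atomically) */
       }
       
       int main(void)
       {
       	struct cookie c = { PTHREAD_MUTEX_INITIALIZER, 1 };
       	struct object o = { PTHREAD_MUTEX_INITIALIZER, &c };
       
       	cache_op_touch_cookie(&o);
       	printf("done without deadlock\n");
       	return 0;
       }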
       Signed-off-by: David Howells <dhowells@redhat.com>
      1bccf513