1. 03 April 2009, 40 commits
    • NFS: Use local disk inode cache · ef79c097
      David Howells committed
      Bind data storage objects in the local cache to NFS inodes.
      Signed-off-by: David Howells <dhowells@redhat.com>
      Acked-by: Steve Dickson <steved@redhat.com>
      Acked-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      Acked-by: Al Viro <viro@zeniv.linux.org.uk>
      Tested-by: Daire Byrne <Daire.Byrne@framestore.com>
      ef79c097
    • NFS: Define and create inode-level cache objects · 10329a5d
      David Howells committed
      Define and create inode-level cache data storage objects (as managed by
      nfs_inode structs).
      
      Each inode-level object is created in a superblock-level index object and is
      itself a data storage object into which pages from the inode are stored.
      
      The inode object key is the NFS file handle for the inode.
      
      The inode object is given coherency data to carry in the auxiliary data
      permitted by the cache.  This is a sequence made up of:
      
       (1) i_mtime from the NFS inode.
      
       (2) i_ctime from the NFS inode.
      
       (3) i_size from the NFS inode.
      
       (4) change_attr from the NFSv4 attribute data.
      
      As the cache is a persistent cache, the auxiliary data is checked when a new
      NFS in-memory inode is set up that matches an already existing data storage
      object in the cache.  If the coherency data is the same, the on-disk object is
      retained and used; if not, it is scrapped and a new one created.
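      
      As a minimal sketch of how such a coherency check might look (the struct
      layout and function here are illustrative, not the actual NFS code):
      
      	struct nfs_cache_aux {			/* hypothetical layout */
      		struct timespec	mtime;		/* i_mtime from the NFS inode */
      		struct timespec	ctime;		/* i_ctime from the NFS inode */
      		loff_t		size;		/* i_size from the NFS inode */
      		__u64		change_attr;	/* NFSv4 change attribute */
      	};
      
      	/* Decide whether an on-disk object's auxiliary data still matches
      	 * the in-memory inode; if not, the object is scrapped and a new
      	 * one created. */
      	static bool nfs_cache_is_coherent(const struct nfs_cache_aux *aux,
      					  const struct nfs_inode *nfsi)
      	{
      		return timespec_equal(&aux->mtime, &nfsi->vfs_inode.i_mtime) &&
      		       timespec_equal(&aux->ctime, &nfsi->vfs_inode.i_ctime) &&
      		       aux->size == nfsi->vfs_inode.i_size &&
      		       aux->change_attr == nfsi->change_attr;
      	}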
      Signed-off-by: David Howells <dhowells@redhat.com>
      Acked-by: Steve Dickson <steved@redhat.com>
      Acked-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      Acked-by: Al Viro <viro@zeniv.linux.org.uk>
      Tested-by: Daire Byrne <Daire.Byrne@framestore.com>
      10329a5d
    • NFS: Define and create superblock-level objects · 08734048
      David Howells committed
      Define and create superblock-level cache index objects (as managed by
      nfs_server structs).
      
      Each superblock object is created in a server level index object and is itself
      an index into which inode-level objects are inserted.
      
      Ideally there would be one superblock-level object per server, and the former
      would be folded into the latter; however, since the "nosharecache" option
      exists this isn't possible.
      
      The superblock object key is a sequence consisting of:
      
       (1) Certain superblock s_flags.
      
       (2) Various connection parameters that serve to distinguish superblocks for
           sget().
      
       (3) The volume FSID.
      
       (4) The security flavour.
      
       (5) The uniquifier length.
      
       (6) The uniquifier text.  This is normally an empty string, unless the fsc=xyz
           mount option was used to explicitly specify a uniquifier.
      
      The key blob is of variable length, depending on the length of (6).
      
      The superblock object is given no coherency data to carry in the auxiliary data
      permitted by the cache.  It is assumed that the superblock is always coherent.
      
      This patch also adds uniquification handling such that two otherwise identical
      superblocks, at least one of which is marked "nosharecache", won't end up
      trying to share the on-disk cache.  It will be possible to manually provide a
      uniquifier through a mount option with a later patch to avoid the error
      otherwise produced.
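      
      A sketch of what the variable-length key blob might look like (the field
      names and layout are illustrative only):
      
      	struct nfs_sb_key {			/* hypothetical layout */
      		unsigned long	s_flags;	/* certain superblock s_flags */
      		/* ... connection parameters that distinguish superblocks
      		 * for sget() ... */
      		struct nfs_fsid	fsid;		/* the volume FSID */
      		rpc_authflavor_t flavor;	/* the security flavour */
      		unsigned int	uniq_len;	/* uniquifier length */
      		char		uniquifier[];	/* "" unless fsc=xyz given */
      	};
      
      The key ends after uniq_len bytes of uniquifier text, which is what makes
      the blob variable-length.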
      Signed-off-by: David Howells <dhowells@redhat.com>
      Acked-by: Steve Dickson <steved@redhat.com>
      Acked-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      Acked-by: Al Viro <viro@zeniv.linux.org.uk>
      Tested-by: Daire Byrne <Daire.Byrne@framestore.com>
      08734048
    • NFS: Define and create server-level objects · 14727281
      David Howells committed
      Define and create server-level cache index objects (as managed by nfs_client
      structs).
      
      Each server object is created in the NFS top-level index object and is itself
      an index into which superblock-level objects are inserted.
      
      Ideally there would be one superblock-level object per server, and the former
      would be folded into the latter; however, since the "nosharecache" option
      exists this isn't possible.
      
      The server object key is a sequence consisting of:
      
       (1) NFS version
      
       (2) Server address family (eg: AF_INET or AF_INET6)
      
       (3) Server port.
      
       (4) Server IP address.
      
      The key blob is of variable length, depending on the length of (4).
      
      The server object is given no coherency data to carry in the auxiliary data
      permitted by the cache.
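      
      A sketch of the server key layout (illustrative; the address union is
      what makes the blob variable-length):
      
      	struct nfs_server_key {			/* hypothetical layout */
      		uint16_t	nfsversion;	/* NFS version */
      		sa_family_t	family;		/* e.g. AF_INET or AF_INET6 */
      		__be16		port;		/* server port */
      		union {				/* server IP address; the key
      						 * ends after whichever form
      						 * the family selects */
      			struct in_addr	ipv4_addr;
      			struct in6_addr	ipv6_addr;
      		} addr;
      	};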
      Signed-off-by: David Howells <dhowells@redhat.com>
      Acked-by: Steve Dickson <steved@redhat.com>
      Acked-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      Acked-by: Al Viro <viro@zeniv.linux.org.uk>
      Tested-by: Daire Byrne <Daire.Byrne@framestore.com>
      14727281
    • NFS: Register NFS for caching and retrieve the top-level index · 8ec442ae
      David Howells committed
      Register NFS for caching and retrieve the top-level cache index object cookie.
      Signed-off-by: David Howells <dhowells@redhat.com>
      Acked-by: Steve Dickson <steved@redhat.com>
      Acked-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      Acked-by: Al Viro <viro@zeniv.linux.org.uk>
      Tested-by: Daire Byrne <Daire.Byrne@framestore.com>
      8ec442ae
    • NFS: Permit local filesystem caching to be enabled for NFS · 3b9ce977
      David Howells committed
      Permit local filesystem caching to be enabled for NFS in the kernel
      configuration.
      Signed-off-by: David Howells <dhowells@redhat.com>
      Acked-by: Steve Dickson <steved@redhat.com>
      Acked-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      Acked-by: Al Viro <viro@zeniv.linux.org.uk>
      Tested-by: Daire Byrne <Daire.Byrne@framestore.com>
      3b9ce977
    • NFS: Add FS-Cache option bit and debug bit · c6a6f19e
      David Howells committed
      Add FS-Cache option bit to nfs_server struct.  This is set to indicate local
      on-disk caching is enabled for a particular superblock.
      
      Also add debug bit for local caching operations.
      Signed-off-by: David Howells <dhowells@redhat.com>
      Acked-by: Steve Dickson <steved@redhat.com>
      Acked-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      Acked-by: Al Viro <viro@zeniv.linux.org.uk>
      Tested-by: Daire Byrne <Daire.Byrne@framestore.com>
      c6a6f19e
    • NFS: Add comment banners to some NFS functions · 6b9b3514
      David Howells committed
      Add comment banners to some NFS functions so that they can be modified by the
      NFS fscache patches for further information.
      Signed-off-by: David Howells <dhowells@redhat.com>
      Acked-by: Steve Dickson <steved@redhat.com>
      Acked-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      Acked-by: Al Viro <viro@zeniv.linux.org.uk>
      Tested-by: Daire Byrne <Daire.Byrne@framestore.com>
      6b9b3514
    • FS-Cache: Make kAFS use FS-Cache · 9b3f26c9
      David Howells committed
      The attached patch makes the kAFS filesystem in fs/afs/ use FS-Cache, and
      through it any attached caches.  The kAFS filesystem will use caching
      automatically if it's available.
      Signed-off-by: David Howells <dhowells@redhat.com>
      Acked-by: Steve Dickson <steved@redhat.com>
      Acked-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      Acked-by: Al Viro <viro@zeniv.linux.org.uk>
      Tested-by: Daire Byrne <Daire.Byrne@framestore.com>
      9b3f26c9
    • CacheFiles: A cache that backs onto a mounted filesystem · 9ae326a6
      David Howells committed
      Add an FS-Cache cache-backend that permits a mounted filesystem to be used as a
      backing store for the cache.
      
      CacheFiles uses a userspace daemon to do some of the cache management - such as
      reaping stale nodes and culling.  This is called cachefilesd and lives in
      /sbin.  The source for the daemon can be downloaded from:
      
      	http://people.redhat.com/~dhowells/cachefs/cachefilesd.c
      
      And an example configuration from:
      
      	http://people.redhat.com/~dhowells/cachefs/cachefilesd.conf
      
      The filesystem and data integrity of the cache are only as good as those of the
      filesystem providing the backing services.  Note that CacheFiles does not
      attempt to journal anything since the journalling interfaces of the various
      filesystems are very specific in nature.
      
      CacheFiles creates a misc character device - "/dev/cachefiles" - that is used
      to communicate with the daemon.  Only one thing may have this open at once,
      and whilst it is open, a cache is at least partially in existence.  The daemon
      opens this and sends commands down it to control the cache.
      
      CacheFiles is currently limited to a single cache.
      
      CacheFiles attempts to maintain at least a certain percentage of free space on
      the filesystem, shrinking the cache by culling the objects it contains to make
      space if necessary - see the "Cache Culling" section.  This means it can be
      placed on the same medium as a live set of data, and will expand to make use of
      spare space and automatically contract when the set of data requires more
      space.
      
      ============
      REQUIREMENTS
      ============
      
      The use of CacheFiles and its daemon requires the following features to be
      available in the system and in the cache filesystem:
      
      	- dnotify.
      
      	- extended attributes (xattrs).
      
      	- openat() and friends.
      
      	- bmap() support on files in the filesystem (FIBMAP ioctl).
      
      	- The use of bmap() to detect a partial page at the end of the file.
      
      It is strongly recommended that the "dir_index" option is enabled on Ext3
      filesystems being used as a cache.
      
      =============
      CONFIGURATION
      =============
      
      The cache is configured by a script in /etc/cachefilesd.conf.  These commands
      set up the cache ready for use.  The following script commands are available
      (an example configuration follows the list):
      
       (*) brun <N>%
       (*) bcull <N>%
       (*) bstop <N>%
       (*) frun <N>%
       (*) fcull <N>%
       (*) fstop <N>%
      
      	Configure the culling limits.  Optional.  See the section on culling.
      	The defaults are 7% (run), 5% (cull) and 1% (stop) respectively.
      
      	The commands beginning with a 'b' are file space (block) limits, those
      	beginning with an 'f' are file count limits.
      
       (*) dir <path>
      
      	Specify the directory containing the root of the cache.  Mandatory.
      
       (*) tag <name>
      
      	Specify a tag to FS-Cache to use in distinguishing multiple caches.
      	Optional.  The default is "CacheFiles".
      
       (*) debug <mask>
      
      	Specify a numeric bitmask to control debugging in the kernel module.
      	Optional.  The default is zero (all off).  The following values can be
      	OR'd into the mask to collect various information:
      
      		1	Turn on trace of function entry (_enter() macros)
      		2	Turn on trace of function exit (_leave() macros)
      		4	Turn on trace of internal debug points (_debug())
      
      	This mask can also be set through sysfs, eg:
      
      		echo 5 >/sys/module/cachefiles/parameters/debug
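      
      Putting the above together, a minimal /etc/cachefilesd.conf using only the
      commands described here might read (the percentages are illustrative, but
      respect the required bstop < bcull < brun ordering):
      
      	dir /var/fscache
      	tag mycache
      	brun 10%
      	bcull 7%
      	bstop 3%
      	frun 10%
      	fcull 7%
      	fstop 3%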
      
      ==================
      STARTING THE CACHE
      ==================
      
      The cache is started by running the daemon.  The daemon opens the cache device,
      configures the cache and tells it to begin caching.  At that point the cache
      binds to fscache and the cache becomes live.
      
      The daemon is run as follows:
      
      	/sbin/cachefilesd [-d]* [-s] [-n] [-f <configfile>]
      
      The flags are:
      
       (*) -d
      
      	Increase the debugging level.  This can be specified multiple times and
      	is cumulative with itself.
      
       (*) -s
      
      	Send messages to stderr instead of syslog.
      
       (*) -n
      
      	Don't daemonise; run in the foreground.
      
       (*) -f <configfile>
      
      	Use an alternative configuration file rather than the default one.
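      
      For example, to run the daemon in the foreground during testing, logging to
      stderr and using a hypothetical test configuration file:
      
      	/sbin/cachefilesd -n -s -f /etc/cachefilesd-test.conf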
      
      ===============
      THINGS TO AVOID
      ===============
      
      Do not mount other things within the cache as this will cause problems.  The
      kernel module contains its own very cut-down path walking facility that ignores
      mountpoints, but the daemon can't avoid them.
      
      Do not create, rename or unlink files and directories in the cache whilst the
      cache is active, as this may cause the state to become uncertain.
      
      Renaming files in the cache might make objects appear to be other objects (the
      filename is part of the lookup key).
      
      Do not change or remove the extended attributes attached to cache files by the
      cache as this will cause the cache state management to get confused.
      
      Do not create files or directories in the cache, lest the cache get confused or
      serve incorrect data.
      
      Do not chmod files in the cache.  The module creates things with minimal
      permissions to prevent random users being able to access them directly.
      
      =============
      CACHE CULLING
      =============
      
      The cache may need culling occasionally to make space.  This involves
      discarding objects from the cache that have been used less recently than
      anything else.  Culling is based on the access time of data objects.  Empty
      directories are culled if not in use.
      
      Cache culling is done on the basis of the percentage of blocks and the
      percentage of files available in the underlying filesystem.  There are six
      "limits":
      
       (*) brun
       (*) frun
      
           If the amount of free space and the number of available files in the cache
           rise above both these limits, then culling is turned off.
      
       (*) bcull
       (*) fcull
      
           If the amount of available space or the number of available files in the
           cache falls below either of these limits, then culling is started.
      
       (*) bstop
       (*) fstop
      
           If the amount of available space or the number of available files in the
           cache falls below either of these limits, then no further allocation of
           disk space or files is permitted until culling has raised things above
           these limits again.
      
      These must be configured thusly:
      
      	0 <= bstop < bcull < brun < 100
      	0 <= fstop < fcull < frun < 100
      
      Note that these are percentages of available space and available files, and do
      _not_ appear as 100 minus the percentage displayed by the "df" program.
      
      The userspace daemon scans the cache to build up a table of cullable objects.
      These are then culled in least recently used order.  A new scan of the cache is
      started as soon as space is made in the table.  Objects will be skipped if
      their atimes have changed or if the kernel module says it is still using them.
      
      ===============
      CACHE STRUCTURE
      ===============
      
      The CacheFiles module will create two directories in the directory it was
      given:
      
       (*) cache/
      
       (*) graveyard/
      
      The active cache objects all reside in the first directory.  The CacheFiles
      kernel module moves any retired or culled objects that it can't simply unlink
      to the graveyard from which the daemon will actually delete them.
      
      The daemon uses dnotify to monitor the graveyard directory, and will delete
      anything that appears therein.
      
      The module represents index objects as directories with the filename "I..." or
      "J...".  Note that the "cache/" directory is itself a special index.
      
      Data objects are represented as files if they have no children, or directories
      if they do.  Their filenames all begin "D..." or "E...".  If represented as a
      directory, data objects will have a file in the directory called "data" that
      actually holds the data.
      
      Special objects are similar to data objects, except their filenames begin
      "S..." or "T...".
      
      If an object has children, then it will be represented as a directory.
      Immediately in the representative directory are a collection of directories
      named for hash values of the child object keys with an '@' prepended.  Into
      this directory, if possible, will be placed the representations of the child
      objects:
      
      	INDEX     INDEX      INDEX                             DATA FILES
      	========= ========== ================================= ================
      	cache/@4a/I03nfs/@30/Ji000000000000000--fHg8hi8400
      	cache/@4a/I03nfs/@30/Ji000000000000000--fHg8hi8400/@75/Es0g000w...DB1ry
      	cache/@4a/I03nfs/@30/Ji000000000000000--fHg8hi8400/@75/Es0g000w...N22ry
      	cache/@4a/I03nfs/@30/Ji000000000000000--fHg8hi8400/@75/Es0g000w...FP1ry
      
      If the key is so long that it exceeds NAME_MAX with the decorations added on to
      it, then it will be cut into pieces, the first few of which will be used to
      make a nest of directories, and the last of which will name the object inside
      the last directory.  The names of the intermediate directories will have
      '+' prepended:
      
      	J1223/@23/+xy...z/+kl...m/Epqr
      
      Note that keys are raw data, and not only may they exceed NAME_MAX in size,
      they may also contain things like '/' and NUL characters, and so they may not
      be suitable for turning directly into a filename.
      
      To handle this, CacheFiles will use a suitably printable filename directly and
      "base-64" encode ones that aren't directly suitable.  The two versions of
      object filenames indicate the encoding:
      
      	OBJECT TYPE	PRINTABLE	ENCODED
      	===============	===============	===============
      	Index		"I..."		"J..."
      	Data		"D..."		"E..."
      	Special		"S..."		"T..."
      
      Intermediate directories are always "@" or "+" as appropriate.
      
      Each object in the cache has an extended attribute label that holds the object
      type ID (required to distinguish special objects) and the auxiliary data from
      the netfs.  The latter is used to detect stale objects in the cache and update
      or retire them.
      
      Note that CacheFiles will erase from the cache any file it doesn't recognise or
      any file of an incorrect type (such as a FIFO file or a device file).
      
      ==========================
      SECURITY MODEL AND SELINUX
      ==========================
      
      CacheFiles is implemented to deal properly with the LSM security features of
      the Linux kernel and the SELinux facility.
      
      One of the problems that CacheFiles faces is that it is generally acting on
      behalf of a process, and running in that process's context, and that includes a
      security context that is not appropriate for accessing the cache - either
      because the files in the cache are inaccessible to that process, or because if
      the process creates a file in the cache, that file may be inaccessible to other
      processes.
      
      The way CacheFiles works is to temporarily change the security context (fsuid,
      fsgid and actor security label) that the process acts as - without changing the
      security context of the process when it is the target of an operation performed
      by some other process (so signalling and suchlike still work correctly).
      
      When the CacheFiles module is asked to bind to its cache, it:
      
       (1) Finds the security label attached to the root cache directory and uses
           that as the security label with which it will create files.  By default,
           this is:
      
      	cachefiles_var_t
      
       (2) Finds the security label of the process which issued the bind request
           (presumed to be the cachefilesd daemon), which by default will be:
      
      	cachefilesd_t
      
           and asks LSM to supply a security ID as which it should act given the
           daemon's label.  By default, this will be:
      
      	cachefiles_kernel_t
      
           SELinux transitions the daemon's security ID to the module's security ID
           based on a rule of this form in the policy.
      
      	type_transition <daemon's-ID> kernel_t : process <module's-ID>;
      
           For instance:
      
      	type_transition cachefilesd_t kernel_t : process cachefiles_kernel_t;
      
      The module's security ID gives it permission to create, move and remove files
      and directories in the cache, to find and access directories and files in the
      cache, to set and access extended attributes on cache objects, and to read and
      write files in the cache.
      
      The daemon's security ID gives it only a very restricted set of permissions: it
      may scan directories, stat files and erase files and directories.  It may
      not read or write files in the cache, and so it is precluded from accessing the
      data cached therein; nor is it permitted to create new files in the cache.
      
      There are policy source files available in:
      
      	http://people.redhat.com/~dhowells/fscache/cachefilesd-0.8.tar.bz2
      
      and later versions.  In that tarball, see the files:
      
      	cachefilesd.te
      	cachefilesd.fc
      	cachefilesd.if
      
      They are built and installed directly by the RPM.
      
      If a non-RPM based system is being used, then copy the above files to their own
      directory and run:
      
      	make -f /usr/share/selinux/devel/Makefile
      	semodule -i cachefilesd.pp
      
      You will need checkpolicy and selinux-policy-devel installed prior to the
      build.
      
      By default, the cache is located in /var/fscache, but if it is desirable that
      it should be elsewhere, then either the above policy files must be altered, or
      an auxiliary policy must be installed to label the alternate location of the
      cache.
      
      For instructions on how to add an auxiliary policy to enable the cache to be
      located elsewhere when SELinux is in enforcing mode, please see:
      
      	/usr/share/doc/cachefilesd-*/move-cache.txt
      
      which is present when the cachefilesd RPM is installed; alternatively, the
      document can be found in the sources.
      
      ==================
      A NOTE ON SECURITY
      ==================
      
      CacheFiles makes use of the split security in the task_struct.  It allocates
      its own task_security structure, and redirects current->act_as to point to it
      when it acts on behalf of another process, in that process's context.
      
      The reason it does this is that it calls vfs_mkdir() and suchlike rather than
      bypassing security and calling inode ops directly.  Therefore the VFS and LSM
      may deny CacheFiles access to the cache data because under some
      circumstances the caching code is running in the security context of whatever
      process issued the original syscall on the netfs.
      
      Furthermore, should CacheFiles create a file or directory, the security
      parameters with which that object is created (UID, GID, security label) would
      be derived from the process that issued the system call, thus potentially
      preventing other processes from accessing the cache - including CacheFiles's
      cache management daemon (cachefilesd).
      
      What is required is to temporarily override the security of the process that
      issued the system call.  We can't, however, just do an in-place change of the
      security data as that affects the process as an object, not just as a subject.
      This means it may lose signals or ptrace events for example, and affects what
      the process looks like in /proc.
      
      So CacheFiles makes use of a logical split in the security between the
      objective security (task->sec) and the subjective security (task->act_as).  The
      objective security holds the intrinsic security properties of a process and is
      never overridden.  This is what appears in /proc, and is what is used when a
      process is the target of an operation by some other process (SIGKILL for
      example).
      
      The subjective security holds the active security properties of a process, and
      may be overridden.  This is not seen externally, and is used when a process
      acts upon another object, for example SIGKILLing another process or opening a
      file.
      
      LSM hooks exist that allow SELinux (or Smack or whatever) to reject a request
      for CacheFiles to run in a context of a specific security label, or to create
      files and directories with another security label.
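      
      Schematically, a cache operation might switch the subjective security like
      this (a sketch only - the cache_sec field and this exact sequence are
      illustrative, not the patch's real interface):
      
      	static int cachefiles_do_mkdir(struct cachefiles_cache *cache,
      				       struct inode *dir,
      				       struct dentry *dentry)
      	{
      		struct task_security *saved = current->act_as;
      		int ret;
      
      		/* Act with the cache's subjective security; the objective
      		 * security (current->sec) is left untouched, so signals
      		 * and /proc are unaffected. */
      		current->act_as = cache->cache_sec;
      		ret = vfs_mkdir(dir, dentry, S_IRWXU);
      		current->act_as = saved;	/* revert on the way out */
      		return ret;
      	}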
      
      This documentation is added by the patch to:
      
      	Documentation/filesystems/caching/cachefiles.txt
      Signed-off-by: David Howells <dhowells@redhat.com>
      Acked-by: Steve Dickson <steved@redhat.com>
      Acked-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      Acked-by: Al Viro <viro@zeniv.linux.org.uk>
      Tested-by: Daire Byrne <Daire.Byrne@framestore.com>
      9ae326a6
    • CacheFiles: Export things for CacheFiles · 800a9647
      David Howells committed
      Export a number of functions for CacheFiles's use.
      Signed-off-by: David Howells <dhowells@redhat.com>
      Acked-by: Steve Dickson <steved@redhat.com>
      Acked-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Acked-by: Al Viro <viro@zeniv.linux.org.uk>
      Tested-by: Daire Byrne <Daire.Byrne@framestore.com>
      800a9647
    • CacheFiles: Permit the page lock state to be monitored · 385e1ca5
      David Howells committed
      Add a function to install a monitor on the page lock waitqueue for a particular
      page, thus allowing the page being unlocked to be detected.
      
      This is used by CacheFiles to detect read completion on a page in the backing
      filesystem so that it can then copy the data to the waiting netfs page.
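      
      A hedged sketch of how a backend might use the new helper (assuming the
      function added here is add_page_wait_queue(); the waiter function body is
      illustrative):
      
      	/* Invoked from the page-lock waitqueue when the backing page is
      	 * unlocked, i.e. when the backing fs has completed its read. */
      	static int my_read_waiter(wait_queue_t *wait, unsigned mode,
      				  int sync, void *key)
      	{
      		list_del(&wait->task_list);
      		/* ... schedule the copy to the waiting netfs page ... */
      		return 0;
      	}
      
      	static void watch_backing_page(struct page *backpage,
      				       wait_queue_t *monitor)
      	{
      		init_waitqueue_func_entry(monitor, my_read_waiter);
      		add_page_wait_queue(backpage, monitor);
      	}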
      Signed-off-by: David Howells <dhowells@redhat.com>
      Acked-by: Steve Dickson <steved@redhat.com>
      Acked-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Acked-by: Al Viro <viro@zeniv.linux.org.uk>
      Tested-by: Daire Byrne <Daire.Byrne@framestore.com>
      385e1ca5
    • FS-Cache: Implement data I/O part of netfs API · b5108822
      David Howells committed
      Implement the data I/O part of the FS-Cache netfs API.  The documentation and
      API header file were added in a previous patch.
      
      This patch implements the following functions for the netfs to call (a usage
      sketch follows the list):
      
       (*) fscache_attr_changed().
      
           Indicate that the object has changed its attributes.  The only attribute
           currently recorded is the file size.  Only pages within the set file size
           will be stored in the cache.
      
           This operation is submitted for asynchronous processing, and will return
           immediately.  It will return -ENOMEM if an out of memory error is
           encountered, -ENOBUFS if the object is not actually cached, or 0 if the
           operation is successfully queued.
      
       (*) fscache_read_or_alloc_page().
       (*) fscache_read_or_alloc_pages().
      
           Request data be fetched from the disk, and allocate internal metadata to
           track the netfs pages and reserve disk space for unknown pages.
      
           These operations perform semi-asynchronous data reads.  Upon returning
           they will indicate which pages they think can be retrieved from disk, and
           will have set in progress attempts to retrieve those pages.
      
           These will return, in order of preference, -ENOMEM on memory allocation
           error, -ERESTARTSYS if a signal interrupted proceedings, -ENODATA if one
           or more requested pages are not yet cached, -ENOBUFS if the object is not
           actually cached or if there isn't space for future pages to be cached on
           this object, or 0 if successful.
      
           In the case of the multipage function, the pages for which reads are set
           in progress will be removed from the list and the page count decreased
           appropriately.
      
           If any read operations should fail, the completion function will be given
           an error, and will also be passed contextual information to allow the
           netfs to fall back to querying the server for the absent pages.
      
           For each successful read, the page completion function will also be
           called.
      
           Any pages subsequently tracked by the cache will have PG_fscache set upon
           them on return.  fscache_uncache_page() must be called for such pages.
      
           If supplied by the netfs, the mark_pages_cached() cookie op will be
           invoked for any pages now tracked.
      
       (*) fscache_alloc_page().
      
           Allocate internal metadata to track a netfs page and reserve disk space.
      
           This will return -ENOMEM on memory allocation error, -ERESTARTSYS on
           signal, -ENOBUFS if the object isn't cached, or there isn't enough space
           in the cache, or 0 if successful.
      
           Any pages subsequently tracked by the cache will have PG_fscache set upon
           them on return.  fscache_uncache_page() must be called for such pages.
      
           If supplied by the netfs, the mark_pages_cached() cookie op will be
           invoked for any pages now tracked.
      
       (*) fscache_write_page().
      
           Request data be stored to disk.  This may only be called on pages that
           have been read or alloc'd by the above three functions and have not yet
           been uncached.
      
           This will return -ENOMEM on memory allocation error, -ERESTARTSYS on
           signal, -ENOBUFS if the object isn't cached, or there isn't immediately
           enough space in the cache, or 0 if successful.
      
           On a successful return, this operation will have queued the page for
           asynchronous writing to the cache.  The page will be returned with
           PG_fscache_write set until the write completes one way or another.  The
           caller will not be notified if the write fails due to an I/O error.  If
           that happens, the object will become unavailable and all pending writes
           will be aborted.
      
           Note that the cache may batch up page writes, and so it may take a while
           to get around to writing them out.
      
           The caller must assume that until PG_fscache_write is cleared the page is
           in use by the cache.  Any changes made to the page may be reflected on disk.
           The page may even be under DMA.
      
       (*) fscache_uncache_page().
      
           Indicate that the cache should stop tracking a page previously read or
           alloc'd from the cache.  If the page was alloc'd only, but unwritten, it
           will not appear on disk.
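      
      Taken together, a netfs read path might use these calls as follows (a
      pared-down sketch; my_read_from_server() and the end_io behaviour are
      illustrative):
      
      	static void my_end_io(struct page *page, void *context, int error)
      	{
      		/* Completion function invoked for each cache read. */
      		if (!error)
      			SetPageUptodate(page);
      		unlock_page(page);
      	}
      
      	static int my_readpage(struct fscache_cookie *cookie,
      			       struct page *page)
      	{
      		int ret;
      
      		ret = fscache_read_or_alloc_page(cookie, page, my_end_io,
      						 NULL, GFP_KERNEL);
      		if (ret == 0)
      			return 0; /* read dispatched; my_end_io will run */
      
      		/* -ENODATA/-ENOBUFS etc.: fetch from the server instead,
      		 * then try to store the result if the page is tracked. */
      		ret = my_read_from_server(page);
      		if (ret == 0 && PageFsCache(page) &&
      		    fscache_write_page(cookie, page, GFP_KERNEL) < 0)
      			fscache_uncache_page(cookie, page);
      		return ret;
      	}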
      Signed-off-by: David Howells <dhowells@redhat.com>
      Acked-by: Steve Dickson <steved@redhat.com>
      Acked-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      Acked-by: Al Viro <viro@zeniv.linux.org.uk>
      Tested-by: Daire Byrne <Daire.Byrne@framestore.com>
      b5108822
    • FS-Cache: Add and document asynchronous operation handling · 952efe7b
      David Howells committed
      Add and document asynchronous operation handling for use by FS-Cache's data
      storage and retrieval routines.
      
      The following documentation is added to:
      
      	Documentation/filesystems/caching/operations.txt
      
      		       ================================
      		       ASYNCHRONOUS OPERATIONS HANDLING
      		       ================================
      
      ========
      OVERVIEW
      ========
      
      FS-Cache has an asynchronous operations handling facility that it uses for its
      data storage and retrieval routines.  Its operations are represented by
      fscache_operation structs, though these are usually embedded into some other
      structure.
      
      This facility is available to and expected to be used by the cache backends,
      and FS-Cache will create operations and pass them off to the appropriate cache
      backend for completion.
      
      To make use of this facility, <linux/fscache-cache.h> should be #included.
      
      ===============================
      OPERATION RECORD INITIALISATION
      ===============================
      
      An operation is recorded in an fscache_operation struct:
      
      	struct fscache_operation {
      		union {
      			struct work_struct fast_work;
      			struct slow_work slow_work;
      		};
      		unsigned long		flags;
      		fscache_operation_processor_t processor;
      		...
      	};
      
      Someone wanting to issue an operation should allocate something with this
      struct embedded in it.  They should initialise it by calling:
      
      	void fscache_operation_init(struct fscache_operation *op,
      				    fscache_operation_release_t release);
      
      with the operation to be initialised and the release function to use.
      
      The op->flags parameter should be set to indicate the CPU time provision and
      the exclusivity (see the Parameters section).
      
      The op->fast_work, op->slow_work and op->processor flags should be set as
      appropriate for the CPU time provision (see the Parameters section).
      
      FSCACHE_OP_WAITING may be set in op->flags prior to each submission of the
      operation and waited for afterwards.
      
      ==========
      PARAMETERS
      ==========
      
      There are a number of parameters that can be set in the operation record's flag
      parameter.  There are three options for the provision of CPU time in these
      operations:
      
       (1) The operation may be done synchronously (FSCACHE_OP_MYTHREAD).  A thread
           may decide it wants to handle an operation itself without deferring it to
           another thread.
      
           This is, for example, used in read operations for calling readpages() on
           the backing filesystem in CacheFiles.  Although readpages() does an
           asynchronous data fetch, the determination of whether pages exist is done
           synchronously - and the netfs does not proceed until this has been
           determined.
      
           If this option is to be used, FSCACHE_OP_WAITING must be set in op->flags
           before submitting the operation, and the operating thread must wait for it
           to be cleared before proceeding:
      
      		wait_on_bit(&op->flags, FSCACHE_OP_WAITING,
      			    fscache_wait_bit, TASK_UNINTERRUPTIBLE);
      
       (2) The operation may be fast asynchronous (FSCACHE_OP_FAST), in which case it
           will be given to keventd to process.  Such an operation is not permitted
           to sleep on I/O.
      
           This is, for example, used by CacheFiles to copy data from a backing fs
           page to a netfs page after the backing fs has read the page in.
      
           If this option is used, op->fast_work and op->processor must be
           initialised before submitting the operation:
      
      		INIT_WORK(&op->fast_work, do_some_work);
      
       (3) The operation may be slow asynchronous (FSCACHE_OP_SLOW), in which case it
           will be given to the slow work facility to process.  Such an operation is
           permitted to sleep on I/O.
      
           This is, for example, used by FS-Cache to handle background writes of
           pages that have just been fetched from a remote server.
      
           If this option is used, op->slow_work and op->processor must be
           initialised before submitting the operation:
      
      		fscache_operation_init_slow(op, processor)
      
      Furthermore, operations may be one of two types:
      
       (1) Exclusive (FSCACHE_OP_EXCLUSIVE).  Operations of this type may not run in
           conjunction with any other operation on the object being operated upon.
      
           An example of this is the attribute change operation, in which the file
           being written to may need truncation.
      
       (2) Shareable.  Operations of this type may be running simultaneously.  It's
           up to the operation implementation to prevent interference between other
           operations running at the same time.
      
      =========
      PROCEDURE
      =========
      
      Operations are used through the following procedure (a consolidated sketch
      follows the list):
      
       (1) The submitting thread must allocate the operation and initialise it
           itself.  Normally this would be part of a more specific structure with the
           generic op embedded within.
      
       (2) The submitting thread must then submit the operation for processing using
           one of the following two functions:
      
      	int fscache_submit_op(struct fscache_object *object,
      			      struct fscache_operation *op);
      
      	int fscache_submit_exclusive_op(struct fscache_object *object,
      					struct fscache_operation *op);
      
           The first function should be used to submit non-exclusive ops and the
           second to submit exclusive ones.  The caller must still set the
           FSCACHE_OP_EXCLUSIVE flag.
      
           If successful, both functions will assign the operation to the specified
           object and return 0.  -ENOBUFS will be returned if the object specified is
           permanently unavailable.
      
           The operation manager will defer operations on an object that is still
           undergoing lookup or creation.  The operation will also be deferred if an
           operation of conflicting exclusivity is in progress on the object.
      
           If the operation is asynchronous, the manager will retain a reference to
           it, so the caller should put their reference to it by passing it to:
      
      	void fscache_put_operation(struct fscache_operation *op);
      
       (3) If the submitting thread wants to do the work itself, and has marked the
           operation with FSCACHE_OP_MYTHREAD, then it should monitor
           FSCACHE_OP_WAITING as described above and check the state of the object if
           necessary (the object might have died whilst the thread was waiting).
      
           When it has finished doing its processing, it should call
           fscache_put_operation() on it.
      
       (4) The operation holds an effective lock upon the object, preventing other
           exclusive ops conflicting until it is released.  The operation can be
           enqueued for further immediate asynchronous processing by adjusting the
           CPU time provisioning option if necessary, eg:
      
      	op->flags &= ~FSCACHE_OP_TYPE;
      	op->flags |= FSCACHE_OP_FAST;
      
           and calling:
      
      	void fscache_enqueue_operation(struct fscache_operation *op)
      
           This can be used to allow other things to have use of the worker thread
           pools.
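      
      Putting steps (1) to (4) together, submitting a slow asynchronous operation
      might look like this (a sketch under the API above; the struct and function
      names are illustrative):
      
      	struct my_op {
      		struct fscache_operation op;	/* generic op embedded */
      		/* ... operation-specific data ... */
      	};
      
      	static void my_processor(struct fscache_operation *op);
      
      	static void my_release(struct fscache_operation *_op)
      	{
      		kfree(container_of(_op, struct my_op, op));
      	}
      
      	static int my_submit(struct fscache_object *object)
      	{
      		struct my_op *myop = kzalloc(sizeof(*myop), GFP_KERNEL);
      		int ret;
      
      		if (!myop)
      			return -ENOMEM;
      		fscache_operation_init(&myop->op, my_release);
      		fscache_operation_init_slow(&myop->op, my_processor);
      
      		ret = fscache_submit_op(object, &myop->op);
      		/* Drop our reference either way: on success the manager
      		 * retains its own ref for the asynchronous processing, and
      		 * on -ENOBUFS the put releases the op via my_release(). */
      		fscache_put_operation(&myop->op);
      		return ret;
      	}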
      
      =====================
      ASYNCHRONOUS CALLBACK
      =====================
      
      When used in asynchronous mode, the worker thread pool will invoke the
      processor method with a pointer to the operation.  This should then get at the
      container struct by using container_of():
      
      	static void fscache_write_op(struct fscache_operation *_op)
      	{
      		struct fscache_storage *op =
      			container_of(_op, struct fscache_storage, op);
      	...
      	}
      
      The caller holds a reference on the operation, and will invoke
      fscache_put_operation() when the processor function returns.  The processor
      function is at liberty to call fscache_enqueue_operation() or to take extra
      references.
      Signed-off-by: David Howells <dhowells@redhat.com>
      Acked-by: Steve Dickson <steved@redhat.com>
      Acked-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      Acked-by: Al Viro <viro@zeniv.linux.org.uk>
      Tested-by: Daire Byrne <Daire.Byrne@framestore.com>
      952efe7b
    • FS-Cache: Implement the cookie management part of the netfs API · ccc4fc3d
      David Howells committed
      Implement the cookie management part of the FS-Cache netfs client API.  The
      documentation and API header file were added in a previous patch.
      
      This patch implements the following three functions:
      
       (1) fscache_acquire_cookie().
      
           Acquire a cookie to represent an object to the netfs.  If the object in
           question is a non-index object, then that object and its parent indices
           will be created on disk at this point if they don't already exist.  Index
           creation is deferred because an index may reside in multiple caches.
      
       (2) fscache_relinquish_cookie().
      
           Retire or release a cookie previously acquired.  At this point, the
           object on disk may be destroyed.
      
       (3) fscache_update_cookie().
      
           Update the in-cache representation of a cookie.  This is used to update
           the auxiliary data for coherency management purposes.
      
      With this patch it is possible to have a netfs instruct a cache backend to
      look up, validate and create metadata on disk and to destroy it again.
      The ability to actually store and retrieve data in the objects so created is
      added in later patches.
      
      Note that these functions will never return an error.  _All_ errors are
      handled internally to FS-Cache.
      
      The worst that can happen is that fscache_acquire_cookie() may return a NULL
      pointer - which is considered a negative cookie pointer and can be passed back
      to any function that takes a cookie without harm.  A negative cookie pointer
      merely suppresses caching at that level.
      
      The stub in linux/fscache.h will detect inline the negative cookie pointer and
      abort the operation as fast as possible.  This means that the compiler doesn't
      have to set up for a call in that case.
      
      See the documentation in Documentation/filesystems/caching/netfs-api.txt for
      more information.
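      
      A sketch of the cookie lifecycle this describes (assuming the acquire call
      takes a parent cookie, an object definition and a netfs private pointer;
      the names here are supplied by the netfs and are illustrative):
      
      	struct fscache_cookie *cookie;
      
      	/* Acquire: for a non-index object this creates it (and any missing
      	 * parent indices) on disk.  A NULL return is a negative cookie and
      	 * may safely be passed to every other call here. */
      	cookie = fscache_acquire_cookie(parent_index_cookie,
      					&my_object_def, my_netfs_data);
      
      	/* Update: push fresh auxiliary (coherency) data to the cache. */
      	fscache_update_cookie(cookie);
      
      	/* Relinquish: pass 0 to release (data may persist on disk), or 1
      	 * to retire, destroying the on-disk object. */
      	fscache_relinquish_cookie(cookie, 0);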
      Signed-off-by: David Howells <dhowells@redhat.com>
      Acked-by: Steve Dickson <steved@redhat.com>
      Acked-by: Trond Myklebust <Trond.Myklebust@netapp.com>
      Acked-by: Al Viro <viro@zeniv.linux.org.uk>
      Tested-by: Daire Byrne <Daire.Byrne@framestore.com>
      ccc4fc3d
    • FS-Cache: Object management state machine · 36c95590
      David Howells committed
      Implement the cache object management state machine.
      
      The following documentation is added to illuminate the working of this state
      machine.  It will also be added as:
      
      	Documentation/filesystems/caching/object.txt
      
      	     ====================================================
      	     IN-KERNEL CACHE OBJECT REPRESENTATION AND MANAGEMENT
      	     ====================================================
      
      ==============
      REPRESENTATION
      ==============
      
      FS-Cache maintains an in-kernel representation of each object that a netfs is
      currently interested in.  Such objects are represented by the fscache_cookie
      struct and are referred to as cookies.
      
      FS-Cache also maintains a separate in-kernel representation of the objects that
      a cache backend is currently actively caching.  Such objects are represented by
      the fscache_object struct.  The cache backends allocate these upon request, and
      are expected to embed them in their own representations.  These are referred to
      as objects.
      
      There is a 1:N relationship between cookies and objects.  A cookie may be
      represented by multiple objects - an index may exist in more than one cache -
      or even by no objects (it may not be cached).
      
      Furthermore, both cookies and objects are hierarchical.  The two hierarchies
      correspond, but the cookies tree is a superset of the union of the object trees
      of multiple caches:
      
      	    NETFS INDEX TREE               :      CACHE 1     :      CACHE 2
      	                                   :                  :
      	                                   :   +-----------+  :
      	                          +----------->|  IObject  |  :
      	      +-----------+       |        :   +-----------+  :
      	      |  ICookie  |-------+        :         |        :
      	      +-----------+       |        :         |        :   +-----------+
      	            |             +------------------------------>|  IObject  |
      	            |                      :         |        :   +-----------+
      	            |                      :         V        :         |
      	            |                      :   +-----------+  :         |
      	            V             +----------->|  IObject  |  :         |
      	      +-----------+       |        :   +-----------+  :         |
      	      |  ICookie  |-------+        :         |        :         V
      	      +-----------+       |        :         |        :   +-----------+
      	            |             +------------------------------>|  IObject  |
      	      +-----+-----+                :         |        :   +-----------+
      	      |           |                :         |        :         |
      	      V           |                :         V        :         |
      	+-----------+     |                :   +-----------+  :         |
      	|  ICookie  |------------------------->|  IObject  |  :         |
      	+-----------+     |                :   +-----------+  :         |
      	      |           V                :         |        :         V
      	      |     +-----------+          :         |        :   +-----------+
      	      |     |  ICookie  |-------------------------------->|  IObject  |
      	      |     +-----------+          :         |        :   +-----------+
      	      V           |                :         V        :         |
      	+-----------+     |                :   +-----------+  :         |
      	|  DCookie  |------------------------->|  DObject  |  :         |
      	+-----------+     |                :   +-----------+  :         |
      	                  |                :                  :         |
      	          +-------+-------+        :                  :         |
      	          |               |        :                  :         |
      	          V               V        :                  :         V
      	    +-----------+   +-----------+  :                  :   +-----------+
      	    |  DCookie  |   |  DCookie  |------------------------>|  DObject  |
      	    +-----------+   +-----------+  :                  :   +-----------+
      	                                   :                  :
      
      In the above illustration, ICookie and IObject represent indices and DCookie
      and DObject represent data storage objects.  Indices may have representation in
      multiple caches, but currently, non-index objects may not.  Objects of any type
      may also be entirely unrepresented.
      
      As far as the netfs API goes, the netfs is only actually permitted to see
      pointers to the cookies.  The cookies themselves and any objects attached to
      those cookies are hidden from it.
      
      ===============================
      OBJECT MANAGEMENT STATE MACHINE
      ===============================
      
      Within FS-Cache, each active object is managed by its own individual state
      machine.  The state for an object is kept in the fscache_object struct, in
      object->state.  A cookie may point to a set of objects that are in different
      states.
      
      Each state has an action associated with it that is invoked when the machine
      wakes up in that state.  There are four logical sets of states:
      
       (1) Preparation: states that wait for the parent objects to become ready.  The
           representations are hierarchical, and it is expected that an object must
           be created or accessed with respect to its parent object.
      
       (2) Initialisation: states that perform lookups in the cache and validate
           what's found and that create on disk any missing metadata.
      
       (3) Normal running: states that allow netfs operations on objects to proceed
           and that update the state of objects.
      
       (4) Termination: states that detach objects from their netfs cookies, that
           delete objects from disk, that handle disk and system errors and that free
           up in-memory resources.
      
      In most cases, transitioning between states is in response to signalled events.
      When a state has finished processing, it will usually set the mask of events in
      which it is interested (object->event_mask) and relinquish the worker thread.
      Then when an event is raised (by calling fscache_raise_event()), if the event
      is not masked, the object will be queued for processing (by calling
      fscache_enqueue_object()).
      
      PROVISION OF CPU TIME
      ---------------------
      
      The work to be done by the various states is given CPU time by the threads of
      the slow work facility (see Documentation/slow-work.txt).  This is used in
      preference to the workqueue facility because:
      
       (1) Threads may be completely occupied for very long periods of time by a
           particular work item.  These state actions may be doing sequences of
           synchronous, journalled disk accesses (lookup, mkdir, create, setxattr,
           getxattr, truncate, unlink, rmdir, rename).
      
       (2) Threads may do little actual work, but may rather spend a lot of time
           sleeping on I/O.  This means that single-threaded and 1-per-CPU-threaded
           workqueues don't necessarily have the right numbers of threads.
      
      LOCKING SIMPLIFICATION
      ----------------------
      
      Because only one worker thread may be operating on any particular object's
      state machine at once, this simplifies the locking, particularly with respect
      to disconnecting the netfs's representation of a cache object (fscache_cookie)
      from the cache backend's representation (fscache_object) - which may be
      requested from either end.
      
      =================
      THE SET OF STATES
      =================
      
      The object state machine has a set of states that it can be in.  There are
      preparation states in which the object sets itself up and waits for its parent
      object to transit to a state that allows access to its children:
      
       (1) State FSCACHE_OBJECT_INIT.
      
           Initialise the object and wait for the parent object to become active.  In
           the cache, it is expected that it will not be possible to look an object
           up from the parent object, until that parent object itself has been looked
           up.
      
      There are initialisation states in which the object sets itself up and accesses
      disk for the object metadata:
      
       (2) State FSCACHE_OBJECT_LOOKING_UP.
      
           Look up the object on disk, using the parent as a starting point.
           FS-Cache expects the cache backend to probe the cache to see whether this
           object is represented there, and if it is, to see if it's valid (coherency
           management).
      
           The cache should call fscache_object_lookup_negative() to indicate lookup
           failure for whatever reason, and should call fscache_obtained_object() to
           indicate success.
      
           At the completion of lookup, FS-Cache will let the netfs go ahead with
           read operations, no matter whether the file is yet cached.  If not yet
           cached, read operations will be immediately rejected with ENODATA until
           the first known page is uncached - as up to that point there can be no
           data to be read out of the cache for that file that isn't currently also
           held in the pagecache.
      
       (3) State FSCACHE_OBJECT_CREATING.
      
           Create an object on disk, using the parent as a starting point.  This
           happens if the lookup failed to find the object, or if the object's
           coherency data indicated what's on disk is out of date.  In this state,
           FS-Cache expects the cache to create the object on disk.
      
           The cache should call fscache_obtained_object() if creation completes
           successfully, fscache_object_lookup_negative() otherwise.
      
           At the completion of creation, FS-Cache will start processing write
           operations the netfs has queued for an object.  If creation failed, the
           write ops will be transparently discarded, and nothing recorded in the
           cache.
      
      There are some normal running states in which the object spends its time
      servicing netfs requests:
      
       (4) State FSCACHE_OBJECT_AVAILABLE.
      
           A transient state in which pending operations are started, child objects
           are permitted to advance from FSCACHE_OBJECT_INIT state, and temporary
           lookup data is freed.
      
       (5) State FSCACHE_OBJECT_ACTIVE.
      
           The normal running state.  In this state, requests the netfs makes will be
           passed on to the cache.
      
       (6) State FSCACHE_OBJECT_UPDATING.
      
           The state machine comes here to update the object in the cache from the
           netfs's records.  This involves updating the auxiliary data that is used
           to maintain coherency.
      
      And there are terminal states in which an object cleans itself up, deallocates
      memory and potentially deletes stuff from disk:
      
       (7) State FSCACHE_OBJECT_LC_DYING.
      
           The object comes here if it is dying because of a lookup or creation
           error.  This would be due to a disk error or system error of some sort.
           Temporary data is cleaned up, and the parent is released.
      
       (8) State FSCACHE_OBJECT_DYING.
      
           The object comes here if it is dying due to an error, because its parent
           cookie has been relinquished by the netfs or because the cache is being
           withdrawn.
      
           Any child objects waiting on this one are given CPU time so that they too
           can destroy themselves.  This object waits for all its children to go away
           before advancing to the next state.
      
       (9) State FSCACHE_OBJECT_ABORT_INIT.
      
           The object comes to this state if it was waiting on its parent in
           FSCACHE_OBJECT_INIT, but its parent died.  The object will destroy itself
           so that the parent may proceed from the FSCACHE_OBJECT_DYING state.
      
      (10) State FSCACHE_OBJECT_RELEASING.
      (11) State FSCACHE_OBJECT_RECYCLING.
      
           The object comes to one of these two states when dying once it is rid of
           all its children, if it is dying because the netfs relinquished its
           cookie.  In the first state, the cached data is expected to persist, and
           in the second it will be deleted.
      
      (12) State FSCACHE_OBJECT_WITHDRAWING.
      
           The object transits to this state if the cache decides it wants to
           withdraw the object from service, perhaps to make space, but also due to
           error or just because the whole cache is being withdrawn.
      
      (13) State FSCACHE_OBJECT_DEAD.
      
           The object transits to this state when the in-memory object record is
           ready to be deleted.  The object processor shouldn't ever see an object in
           this state.
      
      THE SET OF EVENTS
      -----------------
      
      There are a number of events that can be raised to an object state machine:
      
       (*) FSCACHE_OBJECT_EV_UPDATE
      
           The netfs requested that an object be updated.  The state machine will ask
           the cache backend to update the object, and the cache backend will ask the
           netfs for details of the change through its cookie definition ops.
      
       (*) FSCACHE_OBJECT_EV_CLEARED
      
           This is signalled in two circumstances:
      
           (a) when an object's last child object is dropped and
      
           (b) when the last operation outstanding on an object is completed.
      
           This is used to proceed from the dying state.
      
       (*) FSCACHE_OBJECT_EV_ERROR
      
           This is signalled when an I/O error occurs during the processing of some
           object.
      
       (*) FSCACHE_OBJECT_EV_RELEASE
       (*) FSCACHE_OBJECT_EV_RETIRE
      
           These are signalled when the netfs relinquishes a cookie it was using.
           The event selected depends on whether the netfs asks for the backing
           object to be retired (deleted) or retained.
      
       (*) FSCACHE_OBJECT_EV_WITHDRAW
      
           This is signalled when the cache backend wants to withdraw an object.
           This means that the object will have to be detached from the netfs's
           cookie.
      
       Because the withdrawal, release and retirement events are all handled by
       the object state machine, it doesn't matter if there's a collision, with
       both ends trying to sever the connection at the same time.  The state
       machine can simply pick whichever event it wants to honour, and that
       choice takes effect for the other end too.
      Signed-off-by: NDavid Howells <dhowells@redhat.com>
      Acked-by: NSteve Dickson <steved@redhat.com>
      Acked-by: NTrond Myklebust <Trond.Myklebust@netapp.com>
      Acked-by: NAl Viro <viro@zeniv.linux.org.uk>
      Tested-by: NDaire Byrne <Daire.Byrne@framestore.com>
      36c95590
    • D
      FS-Cache: Bit waiting helpers · 2868cbea
      David Howells 提交于
      Add helpers for use with wait_on_bit().
      Signed-off-by: NDavid Howells <dhowells@redhat.com>
      Acked-by: NSteve Dickson <steved@redhat.com>
      Acked-by: NTrond Myklebust <Trond.Myklebust@netapp.com>
      Acked-by: NAl Viro <viro@zeniv.linux.org.uk>
      Tested-by: NDaire Byrne <Daire.Byrne@framestore.com>
      2868cbea
    • D
      FS-Cache: Add netfs registration · 726dd7ff
      David Howells 提交于
      Add functions to register and unregister a network filesystem or other client
      of the FS-Cache service.  This allocates and releases the cookie representing
      the top-level index for a netfs, and makes it available to the netfs.
      
      If the FS-Cache facility is disabled, then the calls are optimised away at
      compile time.
      
      Note that whilst this patch may appear to work with FS-Cache enabled and a
       netfs attempting to use it, it will leak the cookie it allocates for the
       netfs, as fscache_relinquish_cookie() is not implemented until a later
       patch.  This will cause the slab code to emit a warning when the module
       is removed.
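
       For illustration, registration for a hypothetical netfs might look like
       this (a sketch; the "my-netfs" name and function names are made up):

       	#include <linux/init.h>
       	#include <linux/fscache.h>

       	static struct fscache_netfs my_netfs = {
       		.name		= "my-netfs",
       		.version	= 0,
       	};

       	static int __init my_netfs_init(void)
       	{
       		/* allocates the netfs's primary index cookie */
       		return fscache_register_netfs(&my_netfs);
       	}

       	static void __exit my_netfs_exit(void)
       	{
       		fscache_unregister_netfs(&my_netfs);
       	}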
      Signed-off-by: NDavid Howells <dhowells@redhat.com>
      Acked-by: NSteve Dickson <steved@redhat.com>
      Acked-by: NTrond Myklebust <Trond.Myklebust@netapp.com>
      Acked-by: NAl Viro <viro@zeniv.linux.org.uk>
      Tested-by: NDaire Byrne <Daire.Byrne@framestore.com>
      726dd7ff
    • D
      FS-Cache: Provide a slab for cookie allocation · 955d0091
      David Howells 提交于
       Provide a slab from which the FS-Cache cookies that will be presented to
       the netfs can be allocated.
      
      Also provide a slab constructor and a function to recursively discard a cookie
      and its ancestor chain.
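
       The pattern is the usual slab-with-constructor one; roughly (a sketch
       using a hypothetical cookie type, not the real struct fscache_cookie):

       	#include <linux/errno.h>
       	#include <linux/slab.h>
       	#include <linux/spinlock.h>
       	#include <linux/string.h>

       	struct my_cookie {
       		spinlock_t	lock;
       		/* ... */
       	};

       	static struct kmem_cache *my_cookie_jar;

       	/* the constructor runs once per slab object, not per allocation */
       	static void my_cookie_init_once(void *_cookie)
       	{
       		struct my_cookie *cookie = _cookie;

       		memset(cookie, 0, sizeof(*cookie));
       		spin_lock_init(&cookie->lock);
       	}

       	static int my_cookie_jar_init(void)
       	{
       		my_cookie_jar = kmem_cache_create("my_cookie_jar",
       						  sizeof(struct my_cookie),
       						  0, 0, my_cookie_init_once);
       		return my_cookie_jar ? 0 : -ENOMEM;
       	}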
      Signed-off-by: NDavid Howells <dhowells@redhat.com>
      Acked-by: NSteve Dickson <steved@redhat.com>
      Acked-by: NTrond Myklebust <Trond.Myklebust@netapp.com>
      Acked-by: NAl Viro <viro@zeniv.linux.org.uk>
      Tested-by: NDaire Byrne <Daire.Byrne@framestore.com>
      955d0091
    • D
      FS-Cache: Add cache management · 4c515dd4
      David Howells 提交于
      Implement the entry points by which a cache backend may initialise, add,
      declare an error upon and withdraw a cache.
      
       Further, an object is created in sysfs under which each cache that is
       added will get an object of its own:
      
      	/sys/fs/fscache/<cachetag>/
      
      All of this is described in Documentation/filesystems/caching/backend-api.txt
      added by a previous patch.
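
       A backend might use these entry points along the following lines (a
       sketch; my_cache_ops and my_fsdef are assumed to be supplied by the
       backend):

       	#include <linux/fscache-cache.h>

       	static struct fscache_cache my_cache;
       	extern const struct fscache_cache_ops my_cache_ops;
       	extern struct fscache_object my_fsdef;

       	static int my_cache_bind(void)
       	{
       		fscache_init_cache(&my_cache, &my_cache_ops, "mycache");
       		return fscache_add_cache(&my_cache, &my_fsdef, "mytag");
       	}

       	static void my_cache_unbind(void)
       	{
       		/* on I/O trouble, fscache_io_error() would be called instead */
       		fscache_withdraw_cache(&my_cache);
       	}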
      Signed-off-by: NDavid Howells <dhowells@redhat.com>
      Acked-by: NSteve Dickson <steved@redhat.com>
      Acked-by: NTrond Myklebust <Trond.Myklebust@netapp.com>
      Acked-by: NAl Viro <viro@zeniv.linux.org.uk>
      Tested-by: NDaire Byrne <Daire.Byrne@framestore.com>
      4c515dd4
    • D
      FS-Cache: Add cache tag handling · 0e04d4ce
      David Howells 提交于
      Implement two features of FS-Cache:
      
       (1) The ability to request and release cache tags - names by which a cache may
           be known to a netfs, and thus selected for use.
      
       (2) An internal function by which a cache is selected by consulting the netfs,
           if the netfs wishes to be consulted.
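
       A netfs might pin a tag at mount time along these lines (a sketch; the
       tag name "mycache" is made up):

       	#include <linux/errno.h>
       	#include <linux/fscache.h>

       	static struct fscache_cache_tag *my_tag;

       	static int my_pin_tag(void)
       	{
       		/* pins the name even if no such cache is loaded yet */
       		my_tag = fscache_lookup_cache_tag("mycache");
       		return my_tag ? 0 : -ENOMEM;
       	}

       	static void my_unpin_tag(void)
       	{
       		fscache_release_cache_tag(my_tag);
       	}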
      Signed-off-by: NDavid Howells <dhowells@redhat.com>
      Acked-by: NSteve Dickson <steved@redhat.com>
      Acked-by: NTrond Myklebust <Trond.Myklebust@netapp.com>
      Acked-by: NAl Viro <viro@zeniv.linux.org.uk>
      Tested-by: NDaire Byrne <Daire.Byrne@framestore.com>
      0e04d4ce
    • D
      FS-Cache: Root index definition · a6891645
      David Howells 提交于
      Add a description of the root index of the cache for later patches to make use
      of.
      
      The root index is owned by FS-Cache itself.  When a netfs requests caching
      facilities, FS-Cache will, if one doesn't already exist, create an entry in
      the root index with the key being the name of the netfs ("AFS" for example),
      and the auxiliary data holding the index structure version supplied by the
      netfs:
      
      				     FSDEF
      				       |
      				 +-----------+
      				 |           |
      				NFS         AFS
      			       [v=1]       [v=1]
      
      If an entry with the appropriate name does already exist, the version is
      compared.  If the version is different, the entire subtree from that entry
      will be discarded and a new entry created.
      
      The new entry will be an index, and a cookie referring to it will be passed to
      the netfs.  This is then the root handle by which the netfs accesses the
      cache.  It can create whatever objects it likes in that index, including
      further indices.
      Signed-off-by: NDavid Howells <dhowells@redhat.com>
      Acked-by: NSteve Dickson <steved@redhat.com>
      Acked-by: NTrond Myklebust <Trond.Myklebust@netapp.com>
      Acked-by: NAl Viro <viro@zeniv.linux.org.uk>
      Tested-by: NDaire Byrne <Daire.Byrne@framestore.com>
      a6891645
    • D
      FS-Cache: Add use of /proc and presentation of statistics · 7394daa8
      David Howells 提交于
      Make FS-Cache create its /proc interface and present various statistical
      information through it.  Also provide the functions for updating this
      information.
      
      These features are enabled by:
      
      	CONFIG_FSCACHE_PROC
      	CONFIG_FSCACHE_STATS
      	CONFIG_FSCACHE_HISTOGRAM
      
      The /proc directory for FS-Cache is also exported so that caching modules can
      add their own statistics there too.
      
      The FS-Cache module is loadable at this point, and the statistics files can be
      examined by userspace:
      
      	cat /proc/fs/fscache/stats
      	cat /proc/fs/fscache/histogram
      Signed-off-by: NDavid Howells <dhowells@redhat.com>
      Acked-by: NSteve Dickson <steved@redhat.com>
      Acked-by: NTrond Myklebust <Trond.Myklebust@netapp.com>
      Acked-by: NAl Viro <viro@zeniv.linux.org.uk>
      Tested-by: NDaire Byrne <Daire.Byrne@framestore.com>
      7394daa8
    • D
      FS-Cache: Add main configuration option, module entry points and debugging · 06b3db1b
      David Howells 提交于
      Add the main configuration option, allowing FS-Cache to be selected; the
      module entry and exit functions and the debugging stuff used by these patches.
      
      The two configuration options added are:
      
      	CONFIG_FSCACHE
      	CONFIG_FSCACHE_DEBUG
      
       The first enables the facility, and the second allows the debugging
       statements to be enabled through the "debug" module parameter.  The
       value of this parameter is a bitmask as described in:
      
      	Documentation/filesystems/caching/fscache.txt
      
       The module can be loaded at this point, but all it will do at this stage
       of the patch series is start up the slow work facility and shut it down
       again.
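
       Such a parameter is typically wired up along these lines (a sketch of
       the general pattern rather than the exact FS-Cache code):

       	#include <linux/module.h>
       	#include <linux/moduleparam.h>

       	static unsigned my_debug;
       	module_param_named(debug, my_debug, uint, S_IWUSR | S_IRUGO);
       	MODULE_PARM_DESC(debug, "debugging mask");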
      Signed-off-by: NDavid Howells <dhowells@redhat.com>
      Acked-by: NSteve Dickson <steved@redhat.com>
      Acked-by: NTrond Myklebust <Trond.Myklebust@netapp.com>
      Acked-by: NAl Viro <viro@zeniv.linux.org.uk>
      Tested-by: NDaire Byrne <Daire.Byrne@framestore.com>
      06b3db1b
    • D
      FS-Cache: Add the FS-Cache cache backend API and documentation · 0dfc41d1
      David Howells 提交于
       Add the API for a generic facility (FS-Cache) by which caches may declare
       themselves open for business, and may obtain work to be done from network
      filesystems.  The header file is included by:
      
      	#include <linux/fscache-cache.h>
      
      Documentation for the API is also added to:
      
      	Documentation/filesystems/caching/backend-api.txt
      
      This API is not usable without the implementation of the utility functions
      which will be added in further patches.
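
       The centrepiece of the header is the cache op table; a backend fills one
       in roughly like this (a heavily truncated sketch - the real struct has
       many more ops):

       	#include <linux/fscache-cache.h>

       	static struct fscache_object *my_alloc_object(struct fscache_cache *cache,
       						      struct fscache_cookie *cookie);
       	static void my_lookup_object(struct fscache_object *object);
       	static void my_update_object(struct fscache_object *object);
       	static void my_drop_object(struct fscache_object *object);
       	static void my_put_object(struct fscache_object *object);

       	static const struct fscache_cache_ops my_cache_ops = {
       		.name		= "mycache",
       		.alloc_object	= my_alloc_object,
       		.lookup_object	= my_lookup_object,
       		.update_object	= my_update_object,
       		.drop_object	= my_drop_object,
       		.put_object	= my_put_object,
       	};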
      Signed-off-by: NDavid Howells <dhowells@redhat.com>
      Acked-by: NSteve Dickson <steved@redhat.com>
      Acked-by: NTrond Myklebust <Trond.Myklebust@netapp.com>
      Acked-by: NAl Viro <viro@zeniv.linux.org.uk>
      Tested-by: NDaire Byrne <Daire.Byrne@framestore.com>
      0dfc41d1
    • D
      FS-Cache: Add the FS-Cache netfs API and documentation · 2d6fff63
      David Howells 提交于
      Add the API for a generic facility (FS-Cache) by which filesystems (such as AFS
      or NFS) may call on local caching capabilities without having to know anything
      about how the cache works, or even if there is a cache:
      
      	+---------+
      	|         |                        +--------------+
      	|   NFS   |--+                     |              |
      	|         |  |                 +-->|   CacheFS    |
      	+---------+  |   +----------+  |   |  /dev/hda5   |
      	             |   |          |  |   +--------------+
      	+---------+  +-->|          |  |
      	|         |      |          |--+
      	|   AFS   |----->| FS-Cache |
      	|         |      |          |--+
      	+---------+  +-->|          |  |
      	             |   |          |  |   +--------------+
      	+---------+  |   +----------+  |   |              |
      	|         |  |                 +-->|  CacheFiles  |
      	|  ISOFS  |--+                     |  /var/cache  |
      	|         |                        +--------------+
      	+---------+
      
      General documentation and documentation of the netfs specific API are provided
      in addition to the header files.
      
      As this patch stands, it is possible to build a filesystem against the facility
      and attempt to use it.  All that will happen is that all requests will be
      immediately denied as if no cache is present.
      
      Further patches will implement the core of the facility.  The facility will
      transfer requests from networking filesystems to appropriate caches if
      possible, or else gracefully deny them.
      
      If this facility is disabled in the kernel configuration, then all its
      operations will trivially reduce to nothing during compilation.
      
      WHY NOT I_MAPPING?
      ==================
      
      I have added my own API to implement caching rather than using i_mapping to do
      this for a number of reasons.  These have been discussed a lot on the LKML and
      CacheFS mailing lists, but to summarise the basics:
      
       (1) Most filesystems don't do hole reportage.  Holes in files are treated as
           blocks of zeros and can't be distinguished otherwise, making it difficult
           to distinguish blocks that have been read from the network and cached from
           those that haven't.
      
       (2) The backing inode must be fully populated before being exposed to
           userspace through the main inode because the VM/VFS goes directly to the
           backing inode and does not interrogate the front inode's VM ops.
      
           Therefore:
      
           (a) The backing inode must fit entirely within the cache.
      
           (b) All backed files currently open must fit entirely within the cache at
           	 the same time.
      
           (c) A working set of files in total larger than the cache may not be
           	 cached.
      
           (d) A file may not grow larger than the available space in the cache.
      
           (e) A file that's open and cached, and remotely grows larger than the
           	 cache is potentially stuffed.
      
       (3) Writes go to the backing filesystem, and can only be transferred to the
           network when the file is closed.
      
       (4) There's no record of what changes have been made, so the whole file must
           be written back.
      
        (5) The pages belong to the backing filesystem, and all metadata
            associated with those pages is relevant only to the backing
            filesystem, not to anything stacked atop it.
      
      OVERVIEW
      ========
      
      FS-Cache provides (or will provide) the following facilities:
      
       (1) Caches can be added / removed at any time, even whilst in use.
      
       (2) Adds a facility by which tags can be used to refer to caches, even if
           they're not available yet.
      
       (3) More than one cache can be used at once.  Caches can be selected
           explicitly by use of tags.
      
       (4) The netfs is provided with an interface that allows either party to
           withdraw caching facilities from a file (required for (1)).
      
        (5) A netfs may annotate cache objects that belong to it.  This permits the
           storage of coherency maintenance data.
      
       (6) Cache objects will be pinnable and space reservations will be possible.
      
       (7) The interface to the netfs returns as few errors as possible, preferring
           rather to let the netfs remain oblivious.
      
       (8) Cookies are used to represent indices, files and other objects to the
           netfs.  The simplest cookie is just a NULL pointer - indicating nothing
           cached there.
      
       (9) The netfs is allowed to propose - dynamically - any index hierarchy it
           desires, though it must be aware that the index search function is
           recursive, stack space is limited, and indices can only be children of
           indices.
      
      (10) Indices can be used to group files together to reduce key size and to make
           group invalidation easier.  The use of indices may make lookup quicker,
           but that's cache dependent.
      
      (11) Data I/O is effectively done directly to and from the netfs's pages.  The
           netfs indicates that page A is at index B of the data-file represented by
           cookie C, and that it should be read or written.  The cache backend may or
           may not start I/O on that page, but if it does, a netfs callback will be
           invoked to indicate completion.  The I/O may be either synchronous or
           asynchronous.
      
      (12) Cookies can be "retired" upon release.  At this point FS-Cache will mark
           them as obsolete and the index hierarchy rooted at that point will get
           recycled.
      
      (13) The netfs provides a "match" function for index searches.  In addition to
           saying whether a match was made or not, this can also specify that an
           entry should be updated or deleted.
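
       In the API as it eventually lands, the match function is the check_aux()
       op on the cookie definition.  A sketch (struct my_aux is hypothetical,
       and the netfs data is treated as a pointer to the current coherency
       record for brevity):

       	#include <linux/types.h>
       	#include <linux/fscache.h>

       	struct my_aux {
       		u64	mtime;
       		u64	size;
       	};

       	static enum fscache_checkaux my_check_aux(void *cookie_netfs_data,
       						  const void *data,
       						  uint16_t datalen)
       	{
       		const struct my_aux *known = cookie_netfs_data;
       		const struct my_aux *stored = data;

       		if (datalen != sizeof(*stored))
       			return FSCACHE_CHECKAUX_OBSOLETE;	/* delete entry */
       		if (stored->size != known->size)
       			return FSCACHE_CHECKAUX_OBSOLETE;
       		if (stored->mtime != known->mtime)
       			return FSCACHE_CHECKAUX_NEEDS_UPDATE;	/* rewrite aux */
       		return FSCACHE_CHECKAUX_OKAY;			/* still valid */
       	}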
      
      FS-Cache maintains a virtual index tree in which all indices, files, objects
      and pages are kept.  Bits of this tree may actually reside in one or more
      caches.
      
                                                 FSDEF
                                                   |
                              +------------------------------------+
                              |                                    |
                             NFS                                  AFS
                              |                                    |
                 +--------------------------+                +-----------+
                 |                          |                |           |
              homedir                     mirror          afs.org   redhat.com
                 |                          |                            |
           +------------+           +---------------+              +----------+
           |            |           |               |              |          |
         00001        00002       00007           00125        vol00001   vol00002
           |            |           |               |                         |
       +---+---+     +-----+      +---+      +------+------+            +-----+----+
       |   |   |     |     |      |   |      |      |      |            |     |    |
      PG0 PG1 PG2   PG0  XATTR   PG0 PG1   DIRENT DIRENT DIRENT        R/W   R/O  Bak
                           |                                            |
                          PG0                                       +-------+
                                                                    |       |
                                                                  00001   00003
                                                                    |
                                                                +---+---+
                                                                |   |   |
                                                               PG0 PG1 PG2
      
      In the example above, two netfs's can be seen to be backed: NFS and AFS.  These
      have different index hierarchies:
      
       (*) The NFS primary index will probably contain per-server indices.  Each
           server index is indexed by NFS file handles to get data file objects.
            Each data file object can have an array of pages, but may also have
           further child objects, such as extended attributes and directory entries.
           Extended attribute objects themselves have page-array contents.
      
       (*) The AFS primary index contains per-cell indices.  Each cell index contains
            per-logical-volume indices.  Each volume index contains up to three
           indices for the read-write, read-only and backup mirrors of those volumes.
           Each of these contains vnode data file objects, each of which contains an
           array of pages.
      
      The very top index is the FS-Cache master index in which individual netfs's
      have entries.
      
      Any index object may reside in more than one cache, provided it only has index
      children.  Any index with non-index object children will be assumed to only
      reside in one cache.
      
      The FS-Cache overview can be found in:
      
      	Documentation/filesystems/caching/fscache.txt
      
      The netfs API to FS-Cache can be found in:
      
      	Documentation/filesystems/caching/netfs-api.txt
      Signed-off-by: NDavid Howells <dhowells@redhat.com>
      Acked-by: NSteve Dickson <steved@redhat.com>
      Acked-by: NTrond Myklebust <Trond.Myklebust@netapp.com>
      Acked-by: NAl Viro <viro@zeniv.linux.org.uk>
      Tested-by: NDaire Byrne <Daire.Byrne@framestore.com>
      2d6fff63
    • D
      FS-Cache: Recruit a page flags for cache management · 266cf658
      David Howells 提交于
      Recruit a page flag to aid in cache management.  The following extra flag is
      defined:
      
       (1) PG_fscache (PG_private_2)
      
           The marked page is backed by a local cache and is pinning resources in the
           cache driver.
      
      If PG_fscache is set, then things that checked for PG_private will now also
      check for that.  This includes things like truncation and page invalidation.
       The function page_has_private() has been added to check for both
       PG_private and PG_private_2 at the same time.
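
       A release path might now look like this (a minimal sketch of the
       pattern; my_try_release() is a made-up name):

       	#include <linux/mm.h>
       	#include <linux/page-flags.h>

       	static int my_try_release(struct page *page, gfp_t gfp)
       	{
       		/* true if either PG_private or PG_private_2 is set */
       		if (!page_has_private(page))
       			return 1;
       		return try_to_release_page(page, gfp);
       	}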
      Signed-off-by: NDavid Howells <dhowells@redhat.com>
      Acked-by: NSteve Dickson <steved@redhat.com>
      Acked-by: NTrond Myklebust <Trond.Myklebust@netapp.com>
      Acked-by: NRik van Riel <riel@redhat.com>
      Acked-by: NAl Viro <viro@zeniv.linux.org.uk>
      Tested-by: NDaire Byrne <Daire.Byrne@framestore.com>
      266cf658
    • D
      FS-Cache: Release page->private after failed readahead · 03fb3d2a
      David Howells 提交于
      The attached patch causes read_cache_pages() to release page-private data on a
      page for which add_to_page_cache() fails.  If the filler function fails, then
      the problematic page is left attached to the pagecache (with appropriate flags
      set, one presumes) and the remaining to-be-attached pages are invalidated and
      discarded.  This permits pages with caching references associated with them to
      be cleaned up.
      
      The invalidatepage() address space op is called (indirectly) to do the honours.
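
       From the netfs side the interface is unchanged; a filler may now fail
       without stranding cache references on the remaining pages (a sketch;
       my_readpage() is an assumed netfs helper):

       	#include <linux/fs.h>
       	#include <linux/pagemap.h>

       	static int my_readpage(struct file *file, struct page *page);

       	static int my_filler(void *data, struct page *page)
       	{
       		return my_readpage(data, page);
       	}

       	static int my_readpages(struct file *file, struct address_space *mapping,
       				struct list_head *pages)
       	{
       		/* if my_filler() fails part way, read_cache_pages() now
       		 * invalidates the unprocessed pages, releasing any
       		 * page-private data attached to them */
       		return read_cache_pages(mapping, pages, my_filler, file);
       	}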
      Signed-off-by: NDavid Howells <dhowells@redhat.com>
      Acked-by: NSteve Dickson <steved@redhat.com>
      Acked-by: NTrond Myklebust <Trond.Myklebust@netapp.com>
      Acked-by: NRik van Riel <riel@redhat.com>
      Acked-by: NAl Viro <viro@zeniv.linux.org.uk>
      Tested-by: NDaire Byrne <Daire.Byrne@framestore.com>
      03fb3d2a
    • D
      Document the slow work thread pool · 8f0aa2f2
      David Howells 提交于
      Document the slow work thread pool.
      Signed-off-by: NDavid Howells <dhowells@redhat.com>
      Acked-by: NSteve Dickson <steved@redhat.com>
      Acked-by: NTrond Myklebust <Trond.Myklebust@netapp.com>
      Acked-by: NAl Viro <viro@zeniv.linux.org.uk>
      Tested-by: NDaire Byrne <Daire.Byrne@framestore.com>
      8f0aa2f2
    • D
      Make the slow work pool configurable · 12e22c5e
      David Howells 提交于
      Make the slow work pool configurable through /proc/sys/kernel/slow-work.
      
       (*) /proc/sys/kernel/slow-work/min-threads
      
           The minimum number of threads that should be in the pool as long as it is
           in use.  This may be anywhere between 2 and max-threads.
      
       (*) /proc/sys/kernel/slow-work/max-threads
      
            The maximum number of threads that should be in the pool.  This may be
           anywhere between min-threads and 255 or NR_CPUS * 2, whichever is greater.
      
       (*) /proc/sys/kernel/slow-work/vslow-percentage
      
           The percentage of active threads in the pool that may be used to execute
           very slow work items.  This may be between 1 and 99.  The resultant number
           is bounded to between 1 and one fewer than the number of active threads.
           This ensures there is always at least one thread that can process very
           slow work items, and always at least one thread that won't.
      Signed-off-by: NDavid Howells <dhowells@redhat.com>
      Acked-by: NSerge Hallyn <serue@us.ibm.com>
      Acked-by: NSteve Dickson <steved@redhat.com>
      Acked-by: NTrond Myklebust <Trond.Myklebust@netapp.com>
      Acked-by: NAl Viro <viro@zeniv.linux.org.uk>
      Tested-by: NDaire Byrne <Daire.Byrne@framestore.com>
      12e22c5e
    • D
      Make slow-work thread pool actually dynamic · 109d9272
      David Howells 提交于
      Make the slow-work thread pool actually dynamic in the number of threads it
      contains.  With this patch, it will both create additional threads when it has
      extra work to do, and cull excess threads that aren't doing anything.
      Signed-off-by: NDavid Howells <dhowells@redhat.com>
      Acked-by: NSerge Hallyn <serue@us.ibm.com>
      Acked-by: NSteve Dickson <steved@redhat.com>
      Acked-by: NTrond Myklebust <Trond.Myklebust@netapp.com>
      Acked-by: NAl Viro <viro@zeniv.linux.org.uk>
      Tested-by: NDaire Byrne <Daire.Byrne@framestore.com>
      109d9272
    • D
      Create a dynamically sized pool of threads for doing very slow work items · 07fe7cb7
      David Howells 提交于
      Create a dynamically sized pool of threads for doing very slow work items, such
      as invoking mkdir() or rmdir() - things that may take a long time and may
      sleep, holding mutexes/semaphores and hogging a thread, and are thus unsuitable
      for workqueues.
      
      The number of threads is always at least a settable minimum, but more are
      started when there's more work to do, up to a limit.  Because of the nature of
      the load, it's not suitable for a 1-thread-per-CPU type pool.  A system with
      one CPU may well want several threads.
      
      This is used by FS-Cache to do slow caching operations in the background, such
      as looking up, creating or deleting cache objects.
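
       Usage might look like this (a sketch; the my_* names are made up):

       	#include <linux/slow-work.h>

       	static int my_get_ref(struct slow_work *work)  { return 0; }
       	static void my_put_ref(struct slow_work *work) { }

       	static void my_execute(struct slow_work *work)
       	{
       		/* long, sleepy work (mkdir-style operations) goes here */
       	}

       	static const struct slow_work_ops my_ops = {
       		.get_ref	= my_get_ref,
       		.put_ref	= my_put_ref,
       		.execute	= my_execute,
       	};

       	static struct slow_work my_work;

       	static int my_kick(void)
       	{
       		int ret = slow_work_register_user();
       		if (ret < 0)
       			return ret;
       		/* vslow_work_init() would mark it as very slow instead */
       		slow_work_init(&my_work, &my_ops);
       		return slow_work_enqueue(&my_work);
       	}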
      Signed-off-by: NDavid Howells <dhowells@redhat.com>
      Acked-by: NSerge Hallyn <serue@us.ibm.com>
      Acked-by: NSteve Dickson <steved@redhat.com>
      Acked-by: NTrond Myklebust <Trond.Myklebust@netapp.com>
      Acked-by: NAl Viro <viro@zeniv.linux.org.uk>
      Tested-by: NDaire Byrne <Daire.Byrne@framestore.com>
      07fe7cb7
    • L
      Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs-2.6 · 8fe74cf0
      Linus Torvalds 提交于
      * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs-2.6:
        Remove two unneeded exports and make two symbols static in fs/mpage.c
        Cleanup after commit 585d3bc0
        Trim includes of fdtable.h
        Don't crap into descriptor table in binfmt_som
        Trim includes in binfmt_elf
        Don't mess with descriptor table in load_elf_binary()
        Get rid of indirect include of fs_struct.h
        New helper - current_umask()
        check_unsafe_exec() doesn't care about signal handlers sharing
        New locking/refcounting for fs_struct
        Take fs_struct handling to new file (fs/fs_struct.c)
        Get rid of bumping fs_struct refcount in pivot_root(2)
        Kill unsharing fs_struct in __set_personality()
      8fe74cf0
    • L
      Merge branch 'drm-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/airlied/drm-2.6 · c2eb2fa6
      Linus Torvalds 提交于
      * 'drm-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/airlied/drm-2.6: (21 commits)
        drm/radeon: load the right microcode on rs780
        drm: remove unused "can_grow" parameter from drm_crtc_helper_initial_config
        drm: fix EDID backward compat check
        drm: sync the mode validation for INTERLACE/DBLSCAN
        drm: fix typo in edid vendor parsing.
        DRM: drm_crtc_helper.h doesn't actually need i2c.h
        drm: fix missing inline function on 32-bit powerpc.
        drm: Use pgprot_writecombine in GEM GTT mapping to get the right bits for !PAT.
        drm/i915: Add a spinlock to protect the active_list
        drm/i915: Fix SDVO TV support
        drm/i915: Fix SDVO CREATE_PREFERRED_INPUT_TIMING command
        drm/i915: Fix error in SDVO DTD and modeline convert
        drm/i915: Fix SDVO command debug function
        drm/i915: fix TV mode setting in property change
        drm/i915: only set TV mode when any property changed
        drm/i915: clean up udelay usage
        drm/i915: add VGA hotplug support for 945+
        drm/i915: correctly set IGD device's gtt size for KMS.
        drm/i915: avoid hanging on to a stale pointer to raw_edid.
        drm/i915: check for -EINVAL from vm_insert_pfn
        ...
      c2eb2fa6
    • L
      Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-2.6 · ef8a97bb
      Linus Torvalds 提交于
      * git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-2.6: (54 commits)
        glge: remove unused #include <version.h>
        dnet: remove unused #include <version.h>
        tcp: miscounts due to tcp_fragment pcount reset
        tcp: add helper for counter tweaking due mid-wq change
        hso: fix for the 'invalid frame length' messages
        hso: fix for crash when unplugging the device
        fsl_pq_mdio: Fix compile failure
        fsl_pq_mdio: Revive UCC MDIO support
        ucc_geth: Pass proper device to DMA routines, otherwise oops happens
        i.MX31: Fixing cs89x0 network building to i.MX31ADS
        tc35815: Fix build error if NAPI enabled
        hso: add Vendor/Product ID's for new devices
        ucc_geth: Remove unused header
        gianfar: Remove unused header
        kaweth: Fix locking to be SMP-safe
        net: allow multiple dev per napi with GRO
        r8169: reset IntrStatus after chip reset
        ixgbe: Fix potential memory leak/driver panic issue while setting up Tx & Rx ring parameters
        ixgbe: fix ethtool -A|a behavior
        ixgbe: Patch to fix driver panic while freeing up tx & rx resources
        ...
      ef8a97bb
    • J
      cpumask: fix slab corruption caused by alloc_cpumask_var_node() · 4f032ac4
      Jack Steiner 提交于
      Fix slab corruption caused by alloc_cpumask_var_node() overwriting the
      tail end of an off-stack cpumask.
      
      The function zeros out cpumask bits beyond the last possible cpu.  The
      starting point for zeroing should be the beginning of the mask offset by a
      byte count derived from the number of possible cpus.  The offset was
      calculated in bits instead of bytes.  This resulted in overwriting the end
      of the cpumask.
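
       The corrected zeroing casts to a byte pointer so the arithmetic is done
       in bytes (a sketch of the fixed form):

       	#include <linux/cpumask.h>
       	#include <linux/string.h>

       	static void zero_unused_tail(struct cpumask *mask)
       	{
       		unsigned char *ptr = (unsigned char *)cpumask_bits(mask);
       		unsigned int tail;

       		/* bytes of longs covering the impossible cpu bits */
       		tail = BITS_TO_LONGS(NR_CPUS - nr_cpumask_bits) * sizeof(long);
       		memset(ptr + cpumask_size() - tail, 0, tail);
       	}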
      Signed-off-by: NJack Steiner <steiner@sgi.com>
      Acked-by: Mike Travis <travis.sgi.com>
      Acked-by: NIngo Molnar <mingo@elte.hu>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: <stable@kernel.org>		[2.6.29.x]
      Signed-off-by: NAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: NLinus Torvalds <torvalds@linux-foundation.org>
      4f032ac4
    • R
      ia64: implement interrupt-enabling rwlocks · 2d09cde9
      Robin Holt 提交于
      Implement __raw_read_lock_flags and __raw_write_lock_flags for the ia64
      architecture.
      
      [kosaki.motohiro@jp.fujitsu.com: typo fix]
      Signed-off-by: NPetr Tesarik <ptesarik@suse.cz>
      Signed-off-by: NRobin Holt <holt@sgi.com>
      Cc: <linux-arch@vger.kernel.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: NTony Luck <tony.luck@intel.com>
      Signed-off-by: NKOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: NAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: NLinus Torvalds <torvalds@linux-foundation.org>
      2d09cde9
    • R
      Allow rwlocks to re-enable interrupts · f5f7eac4
      Robin Holt 提交于
      Pass the original flags to rwlock arch-code, so that it can re-enable
      interrupts if implemented for that architecture.
      
      Initially, make __raw_read_lock_flags and __raw_write_lock_flags stubs
      which just do the same thing as non-flags variants.
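
       On an architecture without special support, the stubs simply discard the
       flags (a sketch of the fallback definitions):

       	/* fall back to the ordinary lock ops, ignoring the saved flags */
       	#define __raw_read_lock_flags(lock, flags)	__raw_read_lock(lock)
       	#define __raw_write_lock_flags(lock, flags)	__raw_write_lock(lock)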
      Signed-off-by: NPetr Tesarik <ptesarik@suse.cz>
      Signed-off-by: NRobin Holt <holt@sgi.com>
      Acked-by: NPeter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: <linux-arch@vger.kernel.org>
      Acked-by: NIngo Molnar <mingo@elte.hu>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Signed-off-by: NAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: NLinus Torvalds <torvalds@linux-foundation.org>
      f5f7eac4
    • R
      Factor out #ifdefs from kernel/spinlock.c to LOCK_CONTENDED_FLAGS · e8c158bb
      Robin Holt 提交于
      SGI has observed that on large systems, interrupts are not serviced for a
      long period of time when waiting for a rwlock.  The following patch series
      re-enables irqs while waiting for the lock, resembling the code which is
      already there for spinlocks.
      
      I only made the ia64 version, because the patch adds some overhead to the
      fast path.  I assume there is currently no demand to have this for other
      architectures, because the systems are not so large.  Of course, the
      possibility to implement raw_{read|write}_lock_flags for any architecture
      is still there.
      
      This patch:
      
       The new macro LOCK_CONTENDED_FLAGS expands to the correct implementation
       depending on the config options, so that IRQs are re-enabled when
       possible, but remain disabled if CONFIG_LOCKDEP is set.
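
       Conceptually the macro behaves like this (a sketch of the idea, not
       necessarily the exact kernel definition):

       	#ifdef CONFIG_LOCKDEP
       	/* lockdep needs IRQs to stay off: take the ordinary slow path */
       	#define LOCK_CONTENDED_FLAGS(lock, try, contend, contend_fl, flags) \
       		LOCK_CONTENDED(lock, try, contend)
       	#else
       	/* otherwise use the flags-aware slow path, which may re-enable IRQs */
       	#define LOCK_CONTENDED_FLAGS(lock, try, contend, contend_fl, flags) \
       		contend_fl(lock, flags)
       	#endif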
      Signed-off-by: NPetr Tesarik <ptesarik@suse.cz>
      Signed-off-by: NRobin Holt <holt@sgi.com>
      Cc: <linux-arch@vger.kernel.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Signed-off-by: NAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: NLinus Torvalds <torvalds@linux-foundation.org>
      e8c158bb
    • C
      fs/ufs: return f_fsid for statfs(2) · 41d577aa
      Coly Li 提交于
      Make ufs return f_fsid info for statfs(2).
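
       The usual pattern synthesises f_fsid from the backing device number (a
       sketch consistent with the sibling statfs patches; my_fill_fsid() is a
       made-up helper):

       	#include <linux/fs.h>
       	#include <linux/kdev_t.h>
       	#include <linux/statfs.h>

       	static void my_fill_fsid(struct super_block *sb, struct kstatfs *buf)
       	{
       		u64 id = huge_encode_dev(sb->s_bdev->bd_dev);

       		buf->f_fsid.val[0] = (u32)id;
       		buf->f_fsid.val[1] = (u32)(id >> 32);
       	}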
      Signed-off-by: NColy Li <coly.li@suse.de>
      Cc: Evgeniy Dushistov <dushistov@mail.ru>
      Signed-off-by: NAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: NLinus Torvalds <torvalds@linux-foundation.org>
      41d577aa