1. 06 Apr 2018, 1 commit
  2. 04 Apr 2018, 7 commits
    • fscache: Attach the index key and aux data to the cookie · 402cb8dd
      David Howells committed
      Attach copies of the index key and auxiliary data to the fscache cookie so
      that:
      
       (1) The callbacks to the netfs for this stuff can be eliminated.  This
           can simplify things in the cache as the information is still
           available, even after the cache has relinquished the cookie.
      
       (2) Simplifies the locking requirements of accessing the information as we
           don't have to worry about the netfs object going away on us.
      
       (3) The cache can do lazy updating of the coherency information on disk.
           As long as the cache is flushed before reboot/poweroff, there's no
           need to update the coherency info on disk every time it changes.
      
       (4) Cookies can be hashed or put in a tree as the index key is easily
           available.  This allows:
      
           (a) Duplicate-cookie checks to be made at the top fscache layer
               rather than down in the bowels of the cache backend.

           (b) Caching to be added to a netfs object that has a cookie if the
               cache is brought online after the netfs object is allocated.
      
      A certain amount of space is made in the cookie for inline copies of the
      data, but if it won't fit there, extra memory will be allocated for it.
      
      The downside of this is that live cache operation requires more memory.
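      As a rough illustration of the "inline if it fits, otherwise allocate"
      scheme described above (the structure and helper below are invented for
      the sketch and are not the actual fscache definitions):

	struct example_cookie {
		u8	key_len;			/* length of index key */
		union {
			void	*key_ptr;		/* used when the key is too big */
			u8	inline_key[16];		/* small keys are copied here */
		};
	};

	static int example_set_key(struct example_cookie *cookie,
				   const void *key, size_t len)
	{
		void *buf;

		cookie->key_len = len;
		if (len <= sizeof(cookie->inline_key)) {
			memcpy(cookie->inline_key, key, len);
			return 0;
		}
		buf = kmemdup(key, len, GFP_KERNEL);	/* extra memory for big keys */
		if (!buf)
			return -ENOMEM;
		cookie->key_ptr = buf;
		return 0;
	}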
      Signed-off-by: David Howells <dhowells@redhat.com>
      Acked-by: Anna Schumaker <anna.schumaker@netapp.com>
      Tested-by: Steve Dickson <steved@redhat.com>
    • fscache: Add more tracepoints · 08c2e3d0
      David Howells committed
      Add more tracepoints to fscache, including:
      
       (*) fscache_page - Tracks netfs pages known to fscache.
      
       (*) fscache_check_page - Tracks the netfs querying whether a page is
           pending storage.
      
       (*) fscache_wake_cookie - Tracks cookies being woken up after a page
           completes/aborts storage in the cache.
      
       (*) fscache_op - Tracks operations being initialised.
      
       (*) fscache_wrote_page - Tracks return of the backend write_page op.
      
       (*) fscache_gang_lookup - Tracks lookup of pages to be stored in the write
           operation.
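      For reference, such tracepoints are declared with the kernel's
      TRACE_EVENT() machinery in a trace header; a minimal, purely
      illustrative sketch (not one of the actual fscache tracepoints):

	TRACE_EVENT(example_fscache_page,
		    TP_PROTO(unsigned int cookie_debug_id, pgoff_t index),
		    TP_ARGS(cookie_debug_id, index),
		    TP_STRUCT__entry(
			    __field(unsigned int,	cookie)
			    __field(pgoff_t,		index)
			    ),
		    TP_fast_assign(
			    __entry->cookie = cookie_debug_id;
			    __entry->index  = index;
			    ),
		    TP_printk("c=%08x ix=%lx",
			      __entry->cookie, (unsigned long)__entry->index)
		    );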
      Signed-off-by: David Howells <dhowells@redhat.com>
    • fscache: Add tracepoints · a18feb55
      David Howells committed
      Add some tracepoints to fscache:
      
       (*) fscache_cookie - Tracks a cookie's usage count.
      
       (*) fscache_netfs - Logs registration of a network filesystem, including
           the pointer to the cookie allocated.
      
       (*) fscache_acquire - Logs cookie acquisition.
      
       (*) fscache_relinquish - Logs cookie relinquishment.
      
       (*) fscache_enable - Logs enablement of a cookie.
      
       (*) fscache_disable - Logs disablement of a cookie.
      
       (*) fscache_osm - Tracks execution of states in the object state machine.
      
      and cachefiles:
      
       (*) cachefiles_ref - Tracks a cachefiles object's usage count.
      
       (*) cachefiles_lookup - Logs result of lookup_one_len().
      
       (*) cachefiles_mkdir - Logs result of vfs_mkdir().
      
       (*) cachefiles_create - Logs result of vfs_create().
      
       (*) cachefiles_unlink - Logs calls to vfs_unlink().
      
       (*) cachefiles_rename - Logs calls to vfs_rename().
      
       (*) cachefiles_mark_active - Logs an object becoming active.
      
       (*) cachefiles_wait_active - Logs a wait for an old object to be
           destroyed.
      
       (*) cachefiles_mark_inactive - Logs an object becoming inactive.
      
       (*) cachefiles_mark_buried - Logs the burial of an object.
      Signed-off-by: David Howells <dhowells@redhat.com>
    • fscache: Fix hanging wait on page discarded by writeback · 2c984257
      David Howells committed
      If the fscache asynchronous write operation elects to discard a page that's
      pending storage to the cache because the page would be over the store limit
      then it needs to wake the page as someone may be waiting on completion of
      the write.
      
      The problem is that the store limit may be updated by a different
      asynchronous operation - and so may miss the write - and that the store
      limit may not even get updated until later by the netfs.
      
      Fix the kernel hang by making fscache_write_op() mark as written any pages
      that are over the limit.
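      The shape of that fix, sketched with invented helper names (not the
      actual fscache code):

	/* inside the gang-lookup loop of the write operation */
	if (example_over_store_limit(object, page)) {
		/* The page can't be stored, but complete it anyway so that
		 * anyone waiting on the page's write bit is woken. */
		example_end_page_write(cookie, page);
		continue;
	}
	example_store_page(object, page);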
      Signed-off-by: David Howells <dhowells@redhat.com>
    • fscache: Detect multiple relinquishment of a cookie · d0fb31ec
      David Howells committed
      Report if an fscache cookie is relinquished multiple times by the netfs.
      Signed-off-by: David Howells <dhowells@redhat.com>
    • fscache: Pass the correct cancelled indications to fscache_op_complete() · b27ddd46
      David Howells committed
      The last parameter to fscache_op_complete() is a bool indicating whether or
      not the operation was cancelled.  A lot of the time the inverse value is
      given or no differentiation is made.  Fix this.
      Signed-off-by: David Howells <dhowells@redhat.com>
    • fscache, cachefiles: Fix checker warnings · bfa3837e
      David Howells committed
      Fix a couple of checker warnings in fscache and cachefiles:
      
       (1) fscache_n_op_requeue is never used, so get rid of it.
      
       (2) cachefiles_uncache_page() is passed in a lock that it releases, so
           this needs annotating.
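      For point (2) the usual fix is a sparse lock-context annotation; an
      illustrative (non-fscache) example of the idiom:

	/* Tell sparse that this function releases a lock taken by the caller. */
	static void example_uncache_page(struct example_object *object,
					 struct page *page)
		__releases(&object->lock)
	{
		/* ...drop the page from the object's tracking... */
		spin_unlock(&object->lock);
	}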
      Signed-off-by: David Howells <dhowells@redhat.com>
  3. 20 Mar 2018, 1 commit
  4. 16 Nov 2017, 1 commit
  5. 13 Nov 2017, 1 commit
    • Pass mode to wait_on_atomic_t() action funcs and provide default actions · 5e4def20
      David Howells committed
      Make wait_on_atomic_t() pass the TASK_* mode onto its action function as an
      extra argument and make it 'unsigned int' throughout.
      
      Also, consolidate a bunch of identical action functions into a default
      function that can do the appropriate thing for the mode.
      
      Also, change the argument name in the bit_wait*() function declarations to
      reflect the fact that it's the mode and not the bit number.
      
      [Peter Z gives this a grudging ACK, but thinks that the whole atomic_t wait
      should be done differently, though he's not immediately sure as to how]
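      An illustrative action function under the new calling convention (the
      name is invented; the point is that the TASK_* mode now arrives as the
      second argument, so one default helper can serve all waiters):

	static int example_atomic_t_wait(atomic_t *counter, unsigned int mode)
	{
		schedule();
		if (signal_pending_state(mode, current))
			return -EINTR;
		return 0;
	}

	/* e.g. wait_on_atomic_t(&object->n_active, example_atomic_t_wait,
	 *                       TASK_UNINTERRUPTIBLE); */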
      Signed-off-by: David Howells <dhowells@redhat.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      cc: Ingo Molnar <mingo@kernel.org>
  6. 02 Nov 2017, 1 commit
    • License cleanup: add SPDX GPL-2.0 license identifier to files with no license · b2441318
      Greg Kroah-Hartman committed
      Many source files in the tree are missing licensing information, which
      makes it harder for compliance tools to determine the correct license.
      
      By default all files without license information are under the default
      license of the kernel, which is GPL version 2.
      
      Update the files which contain no license information with the 'GPL-2.0'
      SPDX license identifier.  The SPDX identifier is a legally binding
      shorthand, which can be used instead of the full boiler plate text.
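      Concretely, that is a single comment line at the top of each file; as
      described further below, C source files and headers take different
      comment styles, and uapi headers carry the Linux-syscall-note variant,
      e.g.:

	/* in a .c source file: */
	// SPDX-License-Identifier: GPL-2.0

	/* in a uapi header: */
	/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */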
      
      This patch is based on work done by Thomas Gleixner and Kate Stewart and
      Philippe Ombredanne.
      
      How this work was done:
      
      Patches were generated and checked against linux-4.14-rc6 for a subset of
      the use cases:
        - file had no licensing information in it,
        - file was a */uapi/* one with no licensing information in it,
        - file was a */uapi/* one with existing licensing information.
      
      Further patches will be generated in subsequent months to fix up cases
      where non-standard license headers were used, and references to license
      had to be inferred by heuristics based on keywords.
      
      The analysis to determine which SPDX License Identifier to be applied to
      a file was done in a spreadsheet of side by side results from the
      output of two independent scanners (ScanCode & Windriver) producing SPDX
      tag:value files created by Philippe Ombredanne.  Philippe prepared the
      base worksheet, and did an initial spot review of a few 1000 files.
      
      The 4.13 kernel was the starting point of the analysis with 60,537 files
      assessed.  Kate Stewart did a file by file comparison of the scanner
      results in the spreadsheet to determine which SPDX license identifier(s)
      to be applied to the file. She confirmed any determination that was not
      immediately clear with lawyers working with the Linux Foundation.
      
      Criteria used to select files for SPDX license identifier tagging were:
       - Files considered eligible had to be source code files.
       - Make and config files were included as candidates if they contained >5
         lines of source
       - File already had some variant of a license header in it (even if <5
         lines).
      
      All documentation files were explicitly excluded.
      
      The following heuristics were used to determine which SPDX license
      identifiers to apply.
      
       - when both scanners couldn't find any license traces, file was
         considered to have no license information in it, and the top level
         COPYING file license applied.
      
         For non */uapi/* files that summary was:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|-------
         GPL-2.0                                              11139
      
         and resulted in the first patch in this series.
      
         If that file was a */uapi/* path one, it was "GPL-2.0 WITH
         Linux-syscall-note" otherwise it was "GPL-2.0".  Results of that was:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|-------
         GPL-2.0 WITH Linux-syscall-note                        930
      
         and resulted in the second patch in this series.
      
       - if a file had some form of licensing information in it, and was one
         of the */uapi/* ones, it was denoted with the Linux-syscall-note if
         any GPL family license was found in the file or had no licensing in
         it (per prior point).  Results summary:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|------
         GPL-2.0 WITH Linux-syscall-note                       270
         GPL-2.0+ WITH Linux-syscall-note                      169
         ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)    21
         ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)    17
         LGPL-2.1+ WITH Linux-syscall-note                      15
         GPL-1.0+ WITH Linux-syscall-note                       14
         ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)    5
         LGPL-2.0+ WITH Linux-syscall-note                       4
         LGPL-2.1 WITH Linux-syscall-note                        3
         ((GPL-2.0 WITH Linux-syscall-note) OR MIT)              3
         ((GPL-2.0 WITH Linux-syscall-note) AND MIT)             1
      
         and that resulted in the third patch in this series.
      
       - when the two scanners agreed on the detected license(s), that became
         the concluded license(s).
      
       - when there was disagreement between the two scanners (one detected a
         license but the other didn't, or they both detected different
         licenses) a manual inspection of the file occurred.
      
       - In most cases a manual inspection of the information in the file
         resulted in a clear resolution of the license that should apply (and
         which scanner probably needed to revisit its heuristics).
      
       - When it was not immediately clear, the license identifier was
         confirmed with lawyers working with the Linux Foundation.
      
       - If there was any question as to the appropriate license identifier,
         the file was flagged for further research and to be revisited later
         in time.
      
      In total, over 70 hours of logged manual review was done on the
      spreadsheet to determine the SPDX license identifiers to apply to the
      source files by Kate, Philippe, Thomas and, in some cases, confirmation
      by lawyers working with the Linux Foundation.
      
      Kate also obtained a third independent scan of the 4.13 code base from
      FOSSology, and compared selected files where the other two scanners
      disagreed against that SPDX file, to see if there were new insights.  The
      Windriver scanner is based on an older version of FOSSology in part, so
      they are related.
      
      Thomas did random spot checks in about 500 files from the spreadsheets
      for the uapi headers and agreed with the SPDX license identifier in the
      files he inspected. For the non-uapi files Thomas did random spot checks
      in about 15000 files.
      
      In the initial set of patches against 4.14-rc6, 3 files were found to have
      copy/paste license identifier errors, and have been fixed to reflect the
      correct identifier.
      
      Additionally Philippe spent 10 hours this week doing a detailed manual
      inspection and review of the 12,461 patched files from the initial patch
      version early this week with:
       - a full scancode scan run, collecting the matched texts, detected
         license ids and scores
       - reviewing anything where there was a license detected (about 500+
         files) to ensure that the applied SPDX license was correct
       - reviewing anything where there was no detection but the patch license
         was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
         SPDX license was correct
      
      This produced a worksheet with 20 files needing minor correction.  This
      worksheet was then exported into 3 different .csv files for the
      different types of files to be modified.
      
      These .csv files were then reviewed by Greg.  Thomas wrote a script to
      parse the csv files and add the proper SPDX tag to the file, in the
      format that the file expected.  This script was further refined by Greg
      based on the output to detect more types of files automatically and to
      distinguish between header and source .c files (which need different
      comment types).  Finally, Greg ran the script using the .csv files to
      generate the patches.
      Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
      Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  7. 13 Oct 2017, 1 commit
    • FS-Cache: fix dereference of NULL user_key_payload · d124b2c5
      Eric Biggers committed
      When the file /proc/fs/fscache/objects (available with
      CONFIG_FSCACHE_OBJECT_LIST=y) is opened, we request a user key with
      description "fscache:objlist", then access its payload.  However, a
      revoked key has a NULL payload, and we failed to check for this.
      request_key() *does* skip revoked keys, but there is still a window
      where the key can be revoked before we access its payload.
      
      Fix it by checking for a NULL payload, treating it like a key which was
      already revoked at the time it was requested.
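      The pattern of the fix, sketched (surrounding code is illustrative; the
      point is the NULL check before dereferencing the payload):

	const struct user_key_payload *payload;

	down_read(&key->sem);
	payload = user_key_payload_locked(key);
	if (!payload) {
		/* The key was revoked after request_key() returned. */
		up_read(&key->sem);
		return -ENOKEY;
	}
	/* ...payload->data and payload->datalen are safe to use here... */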
      
      Fixes: 4fbf4291 ("FS-Cache: Allow the current state of all objects to be dumped")
      Reviewed-by: James Morris <james.l.morris@oracle.com>
      Cc: <stable@vger.kernel.org>    [v2.6.32+]
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: David Howells <dhowells@redhat.com>
  8. 14 Sep 2017, 1 commit
  9. 07 Sep 2017, 2 commits
  10. 02 Mar 2017, 1 commit
    • KEYS: Differentiate uses of rcu_dereference_key() and user_key_payload() · 0837e49a
      David Howells committed
      rcu_dereference_key() and user_key_payload() are currently being used in
      two different, incompatible ways:
      
       (1) As a wrapper to rcu_dereference() - when only the RCU read lock is used
           to protect the key.
      
       (2) As a wrapper to rcu_dereference_protected() - when the key semaphore is
           used to protect the key and the key may be being modified.
      
      Fix this by splitting both of the key wrappers to produce:
      
       (1) RCU accessors for keys when caller has the key semaphore locked:
      
      	dereference_key_locked()
      	user_key_payload_locked()
      
       (2) RCU accessors for keys when caller holds the RCU read lock:
      
      	dereference_key_rcu()
      	user_key_payload_rcu()
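      Usage of the two flavours, sketched:

	/* Caller holds only the RCU read lock: */
	static unsigned int example_payload_len_rcu(struct key *key)
	{
		const struct user_key_payload *payload;
		unsigned int len;

		rcu_read_lock();
		payload = user_key_payload_rcu(key);
		len = payload ? payload->datalen : 0;
		rcu_read_unlock();
		return len;
	}

	/* Caller holds key->sem, so the payload can't change underneath us: */
	static unsigned int example_payload_len_locked(struct key *key)
	{
		const struct user_key_payload *payload;

		payload = user_key_payload_locked(key);
		return payload ? payload->datalen : 0;
	}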
      
      This should fix the following warning in the NFS idmapper:
      
        ===============================
        [ INFO: suspicious RCU usage. ]
        4.10.0 #1 Tainted: G        W
        -------------------------------
        ./include/keys/user-type.h:53 suspicious rcu_dereference_protected() usage!
        other info that might help us debug this:
        rcu_scheduler_active = 2, debug_locks = 0
        1 lock held by mount.nfs/5987:
          #0:  (rcu_read_lock){......}, at: [<d000000002527abc>] nfs_idmap_get_key+0x15c/0x420 [nfsv4]
        stack backtrace:
        CPU: 1 PID: 5987 Comm: mount.nfs Tainted: G        W       4.10.0 #1
        Call Trace:
          dump_stack+0xe8/0x154 (unreliable)
          lockdep_rcu_suspicious+0x140/0x190
          nfs_idmap_get_key+0x380/0x420 [nfsv4]
          nfs_map_name_to_uid+0x2a0/0x3b0 [nfsv4]
          decode_getfattr_attrs+0xfac/0x16b0 [nfsv4]
          decode_getfattr_generic.constprop.106+0xbc/0x150 [nfsv4]
          nfs4_xdr_dec_lookup_root+0xac/0xb0 [nfsv4]
          rpcauth_unwrap_resp+0xe8/0x140 [sunrpc]
          call_decode+0x29c/0x910 [sunrpc]
          __rpc_execute+0x140/0x8f0 [sunrpc]
          rpc_run_task+0x170/0x200 [sunrpc]
          nfs4_call_sync_sequence+0x68/0xa0 [nfsv4]
          _nfs4_lookup_root.isra.44+0xd0/0xf0 [nfsv4]
          nfs4_lookup_root+0xe0/0x350 [nfsv4]
          nfs4_lookup_root_sec+0x70/0xa0 [nfsv4]
          nfs4_find_root_sec+0xc4/0x100 [nfsv4]
          nfs4_proc_get_rootfh+0x5c/0xf0 [nfsv4]
          nfs4_get_rootfh+0x6c/0x190 [nfsv4]
          nfs4_server_common_setup+0xc4/0x260 [nfsv4]
          nfs4_create_server+0x278/0x3c0 [nfsv4]
          nfs4_remote_mount+0x50/0xb0 [nfsv4]
          mount_fs+0x74/0x210
          vfs_kern_mount+0x78/0x220
          nfs_do_root_mount+0xb0/0x140 [nfsv4]
          nfs4_try_mount+0x60/0x100 [nfsv4]
          nfs_fs_mount+0x5ec/0xda0 [nfs]
          mount_fs+0x74/0x210
          vfs_kern_mount+0x78/0x220
          do_mount+0x254/0xf70
          SyS_mount+0x94/0x100
          system_call+0x38/0xe0
      Reported-by: Jan Stancek <jstancek@redhat.com>
      Signed-off-by: David Howells <dhowells@redhat.com>
      Tested-by: Jan Stancek <jstancek@redhat.com>
      Signed-off-by: James Morris <james.l.morris@oracle.com>
  11. 01 Feb 2017, 3 commits
    • fscache: Fix dead object requeue · e26bfebd
      David Howells committed
      Under some circumstances, an fscache object can become queued such that
      fscache_object_work_func() can be called once the object is in the
      OBJECT_DEAD state.  This results in the kernel oopsing when it tries to
      invoke the handler for the state (which is hard coded to 0x2).
      
      The way this comes about is something like the following:
      
       (1) The object dispatcher is processing a work state for an object.  This
           is done in workqueue context.
      
       (2) An out-of-band event comes in that isn't masked, causing the object to
           be queued, say EV_KILL.
      
       (3) The object dispatcher finishes processing the current work state on
           that object and then sees there's another event to process, so,
           without returning to the workqueue core, it processes that event too.
           It then follows the chain of events that initiates until we reach
           OBJECT_DEAD without going through a wait state (such as
           WAIT_FOR_CLEARANCE).
      
           At this point, object->events may be 0, object->event_mask will be 0
           and oob_event_mask will be 0.
      
       (4) The object dispatcher returns to the workqueue processor, and in due
           course, this sees that the object's work item is still queued and
           invokes it again.
      
       (5) The current state is a work state (OBJECT_DEAD), so the dispatcher
           jumps to it - resulting in an OOPS.
      
      When I'm seeing this, the work state in (1) appears to have been either
      LOOK_UP_OBJECT or CREATE_OBJECT (object->oob_table is
      fscache_osm_lookup_oob).
      
      The window for (2) is very small:
      
       (A) object->event_mask is cleared whilst the event dispatch process is
           underway - though there's no memory barrier to force this to the top
           of the function.
      
           The window, therefore is from the time the object was selected by the
           workqueue processor and made requeueable to the time the mask was
           cleared.
      
       (B) fscache_raise_event() will only queue the object if it manages to set
           the event bit and the corresponding event_mask bit was set.
      
           The enqueuement is then deferred slightly whilst we get a ref on the
           object and get the per-CPU variable for workqueue congestion.  This
           slight deferral slightly increases the probability by allowing extra
           time for the workqueue to make the item requeueable.
      
      Handle this by giving the dead state a processor function and checking
      for the dead state address rather than seeing if the processor function is
      address 0x2.  The dead state processor function can then set a flag to
      indicate that it's occurred and give a warning if it occurs more than once
      per object.
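      The shape of the fix, sketched with invented names (the real change
      lives in the fscache object state machine):

	static const struct example_state *example_object_dead(struct example_object *object,
								int event)
	{
		if (!test_and_set_bit(EXAMPLE_OBJECT_RUN_AFTER_DEAD, &object->flags))
			return NULL;	/* first, expected entry into the dead state */

		WARN(true, "Example object redispatched after death (event %d)\n", event);
		return NULL;
	}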
      
      If this race occurs, an oops similar to the following is seen (note the RIP
      value):
      
      BUG: unable to handle kernel NULL pointer dereference at 0000000000000002
      IP: [<0000000000000002>] 0x1
      PGD 0
      Oops: 0010 [#1] SMP
      Modules linked in: ...
      CPU: 17 PID: 16077 Comm: kworker/u48:9 Not tainted 3.10.0-327.18.2.el7.x86_64 #1
      Hardware name: HP ProLiant DL380 Gen9/ProLiant DL380 Gen9, BIOS P89 12/27/2015
      Workqueue: fscache_object fscache_object_work_func [fscache]
      task: ffff880302b63980 ti: ffff880717544000 task.ti: ffff880717544000
      RIP: 0010:[<0000000000000002>]  [<0000000000000002>] 0x1
      RSP: 0018:ffff880717547df8  EFLAGS: 00010202
      RAX: ffffffffa0368640 RBX: ffff880edf7a4480 RCX: dead000000200200
      RDX: 0000000000000002 RSI: 00000000ffffffff RDI: ffff880edf7a4480
      RBP: ffff880717547e18 R08: 0000000000000000 R09: dfc40a25cb3a4510
      R10: dfc40a25cb3a4510 R11: 0000000000000400 R12: 0000000000000000
      R13: ffff880edf7a4510 R14: ffff8817f6153400 R15: 0000000000000600
      FS:  0000000000000000(0000) GS:ffff88181f420000(0000) knlGS:0000000000000000
      CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      CR2: 0000000000000002 CR3: 000000000194a000 CR4: 00000000001407e0
      DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
      Stack:
       ffffffffa0363695 ffff880edf7a4510 ffff88093f16f900 ffff8817faa4ec00
       ffff880717547e60 ffffffff8109d5db 00000000faa4ec18 0000000000000000
       ffff8817faa4ec18 ffff88093f16f930 ffff880302b63980 ffff88093f16f900
      Call Trace:
       [<ffffffffa0363695>] ? fscache_object_work_func+0xa5/0x200 [fscache]
       [<ffffffff8109d5db>] process_one_work+0x17b/0x470
       [<ffffffff8109e4ac>] worker_thread+0x21c/0x400
       [<ffffffff8109e290>] ? rescuer_thread+0x400/0x400
       [<ffffffff810a5acf>] kthread+0xcf/0xe0
       [<ffffffff810a5a00>] ? kthread_create_on_node+0x140/0x140
       [<ffffffff816460d8>] ret_from_fork+0x58/0x90
       [<ffffffff810a5a00>] ? kthread_create_on_node+0x140/0x140
      Signed-off-by: David Howells <dhowells@redhat.com>
      Acked-by: Jeremy McNicoll <jeremymc@redhat.com>
      Tested-by: Frank Sorenson <sorenson@redhat.com>
      Tested-by: Benjamin Coddington <bcodding@redhat.com>
      Reviewed-by: Benjamin Coddington <bcodding@redhat.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    • fscache: Clear outstanding writes when disabling a cookie · 6bdded59
      David Howells committed
      fscache_disable_cookie() needs to clear the outstanding writes on the
      cookie it's disabling because they cannot be completed afterwards.
      
      Without this, fscache_nfs_open_file() gets stuck because it disables the
      cookie when the file is opened for writing but can't uncache the pages till
      afterwards - otherwise there's a race between the open routine and anyone
      who already has it open R/O and is still reading from it.
      
      Looking in /proc/pid/stack of the offending process shows:
      
      [<ffffffffa0142883>] __fscache_wait_on_page_write+0x82/0x9b [fscache]
      [<ffffffffa014336e>] __fscache_uncache_all_inode_pages+0x91/0xe1 [fscache]
      [<ffffffffa01740fa>] nfs_fscache_open_file+0x59/0x9e [nfs]
      [<ffffffffa01ccf41>] nfs4_file_open+0x17f/0x1b8 [nfsv4]
      [<ffffffff8117350e>] do_dentry_open+0x16d/0x2b7
      [<ffffffff811743ac>] vfs_open+0x5c/0x65
      [<ffffffff81184185>] path_openat+0x785/0x8fb
      [<ffffffff81184343>] do_filp_open+0x48/0x9e
      [<ffffffff81174710>] do_sys_open+0x13b/0x1cb
      [<ffffffff811747b9>] SyS_open+0x19/0x1b
      [<ffffffff81001c44>] do_syscall_64+0x80/0x17a
      [<ffffffff8165c2da>] return_from_SYSCALL_64+0x0/0x7a
      [<ffffffffffffffff>] 0xffffffffffffffff
      Reported-by: Jianhong Yin <jiyin@redhat.com>
      Signed-off-by: David Howells <dhowells@redhat.com>
      Acked-by: Jeff Layton <jlayton@redhat.com>
      Acked-by: Steve Dickson <steved@redhat.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    • FS-Cache: Initialise stores_lock in netfs cookie · 62deb818
      David Howells committed
      Initialise the stores_lock in fscache netfs cookies.  Technically, it
      shouldn't be necessary, since the netfs cookie is an index and stores no
      data, but initialising it anyway adds insignificant overhead.
      Signed-off-by: David Howells <dhowells@redhat.com>
      Reviewed-by: Jeff Layton <jlayton@redhat.com>
      Acked-by: Steve Dickson <steved@redhat.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  12. 01 Jun 2016, 1 commit
  13. 30 May 2016, 1 commit
  14. 05 Apr 2016, 1 commit
    • mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros · 09cbfeaf
      Kirill A. Shutemov committed
      PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced a *long* time
      ago with the promise that one day it would be possible to implement the
      page cache with bigger chunks than PAGE_SIZE.
      
      This promise never materialized, and it is unlikely that it ever will.
      
      We have many places where PAGE_CACHE_SIZE is assumed to be equal to
      PAGE_SIZE.  And it's a constant source of confusion on whether the
      PAGE_CACHE_* or PAGE_* constant should be used in a particular case,
      especially on the border between fs and mm.
      
      Global switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause too much
      breakage to be doable.
      
      Let's stop pretending that pages in page cache are special.  They are
      not.
      
      The changes are pretty straight-forward:
      
       - <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
      
       - page_cache_get() -> get_page();
      
       - page_cache_release() -> put_page();
      
      This patch contains automated changes generated with coccinelle using
      the script below.  For some reason, coccinelle doesn't patch header files.
      I've called spatch for them manually.
      
      The only adjustment after coccinelle is a revert of the changes to the
      PAGE_CACHE_ALIGN definition: we are going to drop it later.
      
      There are a few places in the code that coccinelle didn't reach.  I'll
      fix them manually in a separate patch.  Comments and documentation will
      also be addressed in a separate patch.
      
      virtual patch
      
      @@
      expression E;
      @@
      - E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      expression E;
      @@
      - E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      @@
      - PAGE_CACHE_SHIFT
      + PAGE_SHIFT
      
      @@
      @@
      - PAGE_CACHE_SIZE
      + PAGE_SIZE
      
      @@
      @@
      - PAGE_CACHE_MASK
      + PAGE_MASK
      
      @@
      expression E;
      @@
      - PAGE_CACHE_ALIGN(E)
      + PAGE_ALIGN(E)
      
      @@
      expression E;
      @@
      - page_cache_get(E)
      + get_page(E)
      
      @@
      expression E;
      @@
      - page_cache_release(E)
      + put_page(E)
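      The effect on a typical call site, sketched on a hypothetical helper:

	/* Before: */
	static inline pgoff_t example_pos_to_index(loff_t pos)
	{
		return pos >> PAGE_CACHE_SHIFT;	/* refcounting via page_cache_get()/page_cache_release() */
	}

	/* After: */
	static inline pgoff_t example_pos_to_index(loff_t pos)
	{
		return pos >> PAGE_SHIFT;	/* refcounting via get_page()/put_page() */
	}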
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  15. 11 Nov 2015, 3 commits
  16. 07 Nov 2015, 1 commit
    • mm, page_alloc: distinguish between being unable to sleep, unwilling to sleep... · d0164adc
      Mel Gorman committed
      mm, page_alloc: distinguish between being unable to sleep, unwilling to sleep and avoiding waking kswapd
      
      __GFP_WAIT has been used to identify atomic context in callers that hold
      spinlocks or are in interrupts.  They are expected to be high priority and
      have access to one of two watermarks lower than "min", which can be referred
      to as the "atomic reserve".  __GFP_HIGH users get access to the first
      lower watermark and can be called the "high priority reserve".
      
      Over time, callers had a requirement to not block when fallback options
      were available.  Some have abused __GFP_WAIT leading to a situation where
      an optimistic allocation with a fallback option can access atomic
      reserves.
      
      This patch uses __GFP_ATOMIC to identify callers that are truly atomic,
      cannot sleep and have no alternative.  High priority users continue to use
      __GFP_HIGH.  __GFP_DIRECT_RECLAIM identifies callers that can sleep and
      are willing to enter direct reclaim.  __GFP_KSWAPD_RECLAIM identifies
      callers that want to wake kswapd for background reclaim.  __GFP_WAIT is
      redefined as a caller that is willing to enter direct reclaim and wake
      kswapd for background reclaim.
      
      This patch then converts a number of sites
      
      o __GFP_ATOMIC is used by callers that are high priority and have memory
        pools for those requests. GFP_ATOMIC uses this flag.
      
      o Callers that have a limited mempool to guarantee forward progress clear
        __GFP_DIRECT_RECLAIM but keep __GFP_KSWAPD_RECLAIM. bio allocations fall
        into this category where kswapd will still be woken but atomic reserves
        are not used as there is a one-entry mempool to guarantee progress.
      
      o Callers that are checking if they are non-blocking should use the
        helper gfpflags_allow_blocking() where possible. This is because
        checking for __GFP_WAIT as was done historically now can trigger false
        positives. Some exceptions like dm-crypt.c exist where the code intent
        is clearer if __GFP_DIRECT_RECLAIM is used instead of the helper due to
        flag manipulations.
      
      o Callers that built their own GFP flags instead of starting with GFP_KERNEL
        and friends now also need to specify __GFP_KSWAPD_RECLAIM.
      
      The first key hazard to watch out for is callers that removed __GFP_WAIT
      and were depending on access to atomic reserves for inconspicuous reasons.
      In some cases it may be appropriate for them to use __GFP_HIGH.
      
      The second key hazard is callers that assembled their own combination of
      GFP flags instead of starting with something like GFP_KERNEL.  They may
      now wish to specify __GFP_KSWAPD_RECLAIM.  It's almost certainly harmless
      if it's missed in most cases as other activity will wake kswapd.
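      Illustrative use of the new conventions (the allocation itself is a
      made-up example):

	static struct page *example_alloc(gfp_t gfp_mask)
	{
		/* Preferred over open-coding a __GFP_WAIT/__GFP_DIRECT_RECLAIM test: */
		if (gfpflags_allow_blocking(gfp_mask))
			might_sleep();

		/* Hand-built flag sets should now add __GFP_KSWAPD_RECLAIM
		 * explicitly if kswapd is still meant to be woken: */
		return alloc_pages(gfp_mask | __GFP_KSWAPD_RECLAIM, 0);
	}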
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Vitaly Wool <vitalywool@gmail.com>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  17. 21 Oct 2015, 1 commit
    • KEYS: Merge the type-specific data with the payload data · 146aa8b1
      David Howells committed
      Merge the type-specific data with the payload data into one four-word chunk
      as it seems pointless to keep them separate.
      
      Use user_key_payload() for accessing the payloads of overloaded
      user-defined keys.
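      Accessing such a key's payload then looks like this (sketch; the RCU
      read lock or the key semaphore must be held as appropriate):

	const struct user_key_payload *ukp = user_key_payload(key);

	if (ukp)
		pr_info("payload is %u bytes\n", (unsigned int)ukp->datalen);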
      Signed-off-by: David Howells <dhowells@redhat.com>
      cc: linux-cifs@vger.kernel.org
      cc: ecryptfs@vger.kernel.org
      cc: linux-ext4@vger.kernel.org
      cc: linux-f2fs-devel@lists.sourceforge.net
      cc: linux-nfs@vger.kernel.org
      cc: ceph-devel@vger.kernel.org
      cc: linux-ima-devel@lists.sourceforge.net
  18. 02 Apr 2015, 12 commits
    • FS-Cache: Retain the netfs context in the retrieval op earlier · 4a47132f
      David Howells committed
      Now that the retrieval operation may be disposed of by fscache_put_operation()
      before we actually set the context, the retrieval-specific cleanup operation
      can produce a NULL-pointer dereference when it tries to unconditionally clean
      up the netfs context.
      
      Given that it is expected that we'll get at least as far as the place where we
      currently set the context pointer and it is unlikely we'll go through the
      error handling paths prior to that point, retain the context right from the
      point that the retrieval op is allocated.
      
      Concomitant to this, we need to retain the cookie pointer in the retrieval op
      also so that we can call the netfs to release its context in the release
      method.
      
      In addition, we might now get into fscache_release_retrieval_op() with the op
      only initialised.  To this end, set the operation to DEAD only after the
      release method has been called and skip the n_pages test upon cleanup if the
      op is still in the INITIALISED state.
      
      Without these changes, the following oops might be seen:
      
      	BUG: unable to handle kernel NULL pointer dereference at 00000000000000b8
      	...
      	RIP: 0010:[<ffffffffa0089c98>] fscache_release_retrieval_op+0xae/0x100
      	...
      	Call Trace:
      	 [<ffffffffa0088560>] fscache_put_operation+0x117/0x2e0
      	 [<ffffffffa008b8f5>] __fscache_read_or_alloc_pages+0x351/0x3ac
      	 [<ffffffffa00b761f>] __nfs_readpages_from_fscache+0x59/0xbf [nfs]
      	 [<ffffffffa00b06c5>] nfs_readpages+0x10c/0x185 [nfs]
      	 [<ffffffff81124925>] ? alloc_pages_current+0x119/0x13e
      	 [<ffffffff810ee5fd>] ? __page_cache_alloc+0xfb/0x10a
      	 [<ffffffff810f87f8>] __do_page_cache_readahead+0x188/0x22c
      	 [<ffffffff810f8b3a>] ondemand_readahead+0x29e/0x2af
      	 [<ffffffff810f8c92>] page_cache_sync_readahead+0x38/0x3a
      	 [<ffffffff810ef337>] generic_file_read_iter+0x1a2/0x55a
      	 [<ffffffffa00a9dff>] ? nfs_revalidate_mapping+0xd6/0x288 [nfs]
      	 [<ffffffffa00a6a23>] nfs_file_read+0x49/0x70 [nfs]
      	 [<ffffffff811363be>] new_sync_read+0x78/0x9c
      	 [<ffffffff81137164>] __vfs_read+0x13/0x38
      	 [<ffffffff8113721e>] vfs_read+0x95/0x121
      	 [<ffffffff811372f6>] SyS_read+0x4c/0x8a
      	 [<ffffffff81557a52>] system_call_fastpath+0x12/0x17
      Signed-off-by: David Howells <dhowells@redhat.com>
      Reviewed-by: Steve Dickson <steved@redhat.com>
      Acked-by: Jeff Layton <jeff.layton@primarydata.com>
    • FS-Cache: The operation cancellation method needs calling in more places · d3b97ca4
      David Howells committed
      Any time an incomplete operation is cancelled, the operation cancellation
      function needs to be called to clean up.  This is currently being passed
      directly to some of the functions that might want to call it, but not all.
      
      Instead, pass the cancellation method pointer to fscache_operation_init()
      and have that cache it in the operation struct.  Further, plug in a dummy
      cancellation handler if the caller declines to set one as this allows us to
      call the function unconditionally (the extra overhead isn't worth bothering
      about as we don't expect to be calling this typically).
      
      The cancellation method must thence be called everywhere the CANCELLED state
      is set.  Note that we call it *before* setting the CANCELLED state such that
      the method can use the old state value to guide its operation.
      
      fscache_do_cancel_retrieval() needs moving higher up in the sources so that
      the init function can use it now.
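      The idea, sketched with invented names and layout (not the actual
      fscache structures):

	static void example_operation_init(struct example_op *op,
					   void (*processor)(struct example_op *),
					   void (*cancel)(struct example_op *),
					   void (*release)(struct example_op *))
	{
		op->processor	= processor;
		op->cancel	= cancel ?: example_dummy_cancel;	/* always callable */
		op->release	= release;
		op->state	= EXAMPLE_OP_ST_INITIALISED;
	}

	static void example_cancel_op(struct example_op *op)
	{
		op->cancel(op);			/* called before the state changes... */
		op->state = EXAMPLE_OP_ST_CANCELLED;	/* ...so it can see the old state */
	}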
      
      Without this, the following oops may be seen:
      
      	FS-Cache: Assertion failed
      	FS-Cache: 3 == 0 is false
      	------------[ cut here ]------------
      	kernel BUG at ../fs/fscache/page.c:261!
      	...
      	RIP: 0010:[<ffffffffa0089c1b>]  fscache_release_retrieval_op+0x77/0x100
      	 [<ffffffffa008853d>] fscache_put_operation+0x114/0x2da
      	 [<ffffffffa008b8c2>] __fscache_read_or_alloc_pages+0x358/0x3b3
      	 [<ffffffffa00b761f>] __nfs_readpages_from_fscache+0x59/0xbf [nfs]
      	 [<ffffffffa00b06c5>] nfs_readpages+0x10c/0x185 [nfs]
      	 [<ffffffff81124925>] ? alloc_pages_current+0x119/0x13e
      	 [<ffffffff810ee5fd>] ? __page_cache_alloc+0xfb/0x10a
      	 [<ffffffff810f87f8>] __do_page_cache_readahead+0x188/0x22c
      	 [<ffffffff810f8b3a>] ondemand_readahead+0x29e/0x2af
      	 [<ffffffff810f8c92>] page_cache_sync_readahead+0x38/0x3a
      	 [<ffffffff810ef337>] generic_file_read_iter+0x1a2/0x55a
      	 [<ffffffffa00a9dff>] ? nfs_revalidate_mapping+0xd6/0x288 [nfs]
      	 [<ffffffffa00a6a23>] nfs_file_read+0x49/0x70 [nfs]
      	 [<ffffffff811363be>] new_sync_read+0x78/0x9c
      	 [<ffffffff81137164>] __vfs_read+0x13/0x38
      	 [<ffffffff8113721e>] vfs_read+0x95/0x121
      	 [<ffffffff811372f6>] SyS_read+0x4c/0x8a
      	 [<ffffffff81557a52>] system_call_fastpath+0x12/0x17
      
      The assertion is showing that the remaining number of pages (n_pages) is not 0
      when the operation is being released.
      Signed-off-by: David Howells <dhowells@redhat.com>
      Reviewed-by: Steve Dickson <steved@redhat.com>
      Acked-by: Jeff Layton <jeff.layton@primarydata.com>
    • FS-Cache: Put an aborted initialised op so that it is accounted correctly · a39caadf
      David Howells committed
      Call fscache_put_operation() or a wrapper on any op that has gone through
      fscache_operation_init() so that the accounting shown in /proc is done
      correctly, specifically fscache_n_op_release.
      
      fscache_put_operation() therefore now allows an op in the INITIALISED state as
      well as in the CANCELLED and COMPLETE states.
      
      Note that this means that an operation can get put that doesn't have its
      ->object pointer filled in, so anything that depends on the object needs to be
      conditional in fscache_put_operation().
      Signed-off-by: David Howells <dhowells@redhat.com>
      Reviewed-by: Steve Dickson <steved@redhat.com>
      Acked-by: Jeff Layton <jeff.layton@primarydata.com>
    • FS-Cache: Fix cancellation of in-progress operation · 73c04a47
      David Howells committed
      Cancellation of an in-progress operation needs to update the relevant counters
      and start any operations that are pending waiting on this one.
      Signed-off-by: David Howells <dhowells@redhat.com>
      Reviewed-by: Steve Dickson <steved@redhat.com>
      Acked-by: Jeff Layton <jeff.layton@primarydata.com>
    • FS-Cache: Count the number of initialised operations · 03cdd0e4
      David Howells committed
      Count and display through /proc/fs/fscache/stats the number of initialised
      operations.
      Signed-off-by: David Howells <dhowells@redhat.com>
      Reviewed-by: Steve Dickson <steved@redhat.com>
      Acked-by: Jeff Layton <jeff.layton@primarydata.com>
    • FS-Cache: Out of line fscache_operation_init() · 1339ec98
      David Howells committed
      Out of line fscache_operation_init() so that it can access internal FS-Cache
      features, such as stats, in a later commit.
      Signed-off-by: David Howells <dhowells@redhat.com>
      Reviewed-by: Steve Dickson <steved@redhat.com>
      Acked-by: Jeff Layton <jeff.layton@primarydata.com>
    • FS-Cache: Permit fscache_cancel_op() to cancel in-progress operations too · 418b7eb9
      David Howells committed
      Currently, fscache_cancel_op() only cancels pending operations - attempts to
      cancel in-progress operations are ignored.  This leads to a problem in
      fscache_wait_for_operation_activation() whereby the wait is terminated, but
      the object has been killed.
      
      The check at the end of the function now triggers because it's no longer
      contingent on the cache having produced an I/O error since the commit that
      fixed the logic error in fscache_object_is_dead().
      
      The result of the check is that it tries to cancel the operation - but since
      the object may not be pending by this point, the cancellation request may be
      ignored - with the result that the object is just put by the caller and
      fscache_put_operation has an assertion failure because the operation isn't in
      either the COMPLETE or the CANCELLED states.
      
      To fix this, we permit in-progress ops to be cancelled under some
      circumstances.
      
      The bug results in an oops that looks something like this:
      
      	FS-Cache: fscache_wait_for_operation_activation() = -ENOBUFS [obj dead 3]
      	FS-Cache:
      	FS-Cache: Assertion failed
      	FS-Cache: 3 == 5 is false
      	------------[ cut here ]------------
      	kernel BUG at ../fs/fscache/operation.c:432!
      	...
      	RIP: 0010:[<ffffffffa0088574>] fscache_put_operation+0xf2/0x2cd
      	Call Trace:
      	 [<ffffffffa008b92a>] __fscache_read_or_alloc_pages+0x2ec/0x3b3
      	 [<ffffffffa00b761f>] __nfs_readpages_from_fscache+0x59/0xbf [nfs]
      	 [<ffffffffa00b06c5>] nfs_readpages+0x10c/0x185 [nfs]
      	 [<ffffffff81124925>] ? alloc_pages_current+0x119/0x13e
      	 [<ffffffff810ee5fd>] ? __page_cache_alloc+0xfb/0x10a
      	 [<ffffffff810f87f8>] __do_page_cache_readahead+0x188/0x22c
      	 [<ffffffff810f8b3a>] ondemand_readahead+0x29e/0x2af
      	 [<ffffffff810f8c92>] page_cache_sync_readahead+0x38/0x3a
      	 [<ffffffff810ef337>] generic_file_read_iter+0x1a2/0x55a
      	 [<ffffffffa00a9dff>] ? nfs_revalidate_mapping+0xd6/0x288 [nfs]
      	 [<ffffffffa00a6a23>] nfs_file_read+0x49/0x70 [nfs]
      	 [<ffffffff811363be>] new_sync_read+0x78/0x9c
      	 [<ffffffff81137164>] __vfs_read+0x13/0x38
      	 [<ffffffff8113721e>] vfs_read+0x95/0x121
      	 [<ffffffff811372f6>] SyS_read+0x4c/0x8a
      	 [<ffffffff81557a52>] system_call_fastpath+0x12/0x17
      Signed-off-by: David Howells <dhowells@redhat.com>
      Reviewed-by: Steve Dickson <steved@redhat.com>
      Acked-by: Jeff Layton <jeff.layton@primarydata.com>
    • FS-Cache: fscache_object_is_dead() has wrong logic, kill it · 87021526
      David Howells committed
      fscache_object_is_dead() returns true only if the object is marked dead and
      the cache got an I/O error.  This should be a logical OR instead.  Since two
      of the callers got split up into handling for separate subcases, expand the
      other callers and kill the function.  This is probably the right thing to do
      anyway since one of the subcases isn't about the object at all, but rather
      about the cache.
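      The gist of the logic error, sketched with illustrative names:

	static bool example_object_is_dead(struct example_object *object)
	{
		/* Wrong: this requires BOTH a dying object AND a cache I/O
		 * error, whereas the callers want either condition on its own
		 * - hence the helper is expanded at its call sites and
		 * removed. */
		return example_object_is_dying(object) &&
		       test_bit(EXAMPLE_CACHE_IOERROR, &object->cache->flags);
	}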
      Signed-off-by: David Howells <dhowells@redhat.com>
      Reviewed-by: Steve Dickson <steved@redhat.com>
      Acked-by: Jeff Layton <jeff.layton@primarydata.com>
    • FS-Cache: Synchronise object death state change vs operation submission · f09b443d
      David Howells committed
      When an object is being marked as no longer live, do this under the object
      spinlock to prevent a race with operation submission targeted on that object.
      
      The problem occurs due to the following pair of intertwined sequences when the
      cache tries to create an object that would take it over the hard available
      space limit:
      
       NETFS INTERFACE
       ===============
       (A) The netfs calls fscache_acquire_cookie().  object creation is deferred to
           the object state machine and the netfs is allowed to continue.
      
      	OBJECT STATE MACHINE KTHREAD
      	============================
      	(1) The object is looked up on disk by fscache_look_up_object()
      	    calling cachefiles_walk_to_object().  The latter finds that the
      	    object is not yet represented on disk and calls
      	    fscache_object_lookup_negative().
      
      	(2) fscache_object_lookup_negative() sets FSCACHE_COOKIE_NO_DATA_YET
      	    and clears FSCACHE_COOKIE_LOOKING_UP, thus allowing the netfs to
      	    start queuing read operations.
      
       (B) The netfs calls fscache_read_or_alloc_pages().  This calls
           fscache_wait_for_deferred_lookup() which sees FSCACHE_COOKIE_LOOKING_UP
           become clear, allowing the read to begin.
      
       (C) A read operation is set up and passed to fscache_submit_op() to deal
           with.
      
      	(3) cachefiles_walk_to_object() calls cachefiles_has_space(), which
      	    fails (or one of the file operations to create stuff fails).
      	    cachefiles returns an error to fscache.
      
      	(4) fscache_look_up_object() transits to the LOOKUP_FAILURE state,
      
      	(5) fscache_lookup_failure() sets FSCACHE_OBJECT_LOOKED_UP and
      	    FSCACHE_COOKIE_UNAVAILABLE and clears FSCACHE_COOKIE_LOOKING_UP
      	    then transits to the KILL_OBJECT state.
      
      	(6) fscache_kill_object() clears FSCACHE_OBJECT_IS_LIVE in an attempt
      	    to reject any further requests from the netfs.
      
      	(7) object->n_ops is examined and found to be 0.
      	    fscache_kill_object() transits to the DROP_OBJECT state.
      
       (D) fscache_submit_op() locks the object spinlock, sees if it can dispatch
           the op immediately by calling fscache_object_is_active() - which fails
           since FSCACHE_OBJECT_IS_AVAILABLE has not yet been set.
      
       (E) fscache_submit_op() then tests FSCACHE_OBJECT_LOOKED_UP - which is set.
           It then queues the object and increments object->n_ops.
      
      	(8) fscache_drop_object() releases the object and eventually
      	    fscache_put_object() calls cachefiles_put_object() which suffers
      	    an assertion failure here:
      
      		ASSERTCMP(object->fscache.n_ops, ==, 0);
      
      Locking the object spinlock in step (6) around the clearance of
      FSCACHE_OBJECT_IS_LIVE ensures that the decision trees in
      fscache_submit_op() and fscache_submit_exclusive_op() don't see the IS_LIVE
      flag being cleared mid-decision: either the op is queued before step (7) - in
      which case fscache_kill_object() will see n_ops>0 and will deal with the op -
      or the op will be rejected.
      
      This, combined with rejecting op submission if the target object is dying, fixes
      the problem.
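      The core of the fix, sketched (using the flag named above and the
      object spinlock):

	spin_lock(&object->lock);
	clear_bit(FSCACHE_OBJECT_IS_LIVE, &object->flags);
	spin_unlock(&object->lock);

      so that fscache_submit_op() and fscache_submit_exclusive_op(), which
      test the flag under the same lock, see either "live" (and queue the op,
      bumping n_ops) or "dead" (and reject it) - never a half-made decision.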
      
      The problem shows up as the following oops:
      
      CacheFiles: Assertion failed
      CacheFiles: 1 == 0 is false
      ------------[ cut here ]------------
      kernel BUG at ../fs/cachefiles/interface.c:339!
      ...
      RIP: 0010:[<ffffffffa014fd9c>]  [<ffffffffa014fd9c>] cachefiles_put_object+0x2a4/0x301 [cachefiles]
      ...
      Call Trace:
       [<ffffffffa008674b>] fscache_put_object+0x18/0x21 [fscache]
       [<ffffffffa00883e6>] fscache_object_work_func+0x3ba/0x3c9 [fscache]
       [<ffffffff81054dad>] process_one_work+0x226/0x441
       [<ffffffff81055d91>] worker_thread+0x273/0x36b
       [<ffffffff81055b1e>] ? rescuer_thread+0x2e1/0x2e1
       [<ffffffff81059b9d>] kthread+0x10e/0x116
       [<ffffffff81059a8f>] ? kthread_create_on_node+0x1bb/0x1bb
       [<ffffffff815579ac>] ret_from_fork+0x7c/0xb0
       [<ffffffff81059a8f>] ? kthread_create_on_node+0x1bb/0x1bb
      Signed-off-by: David Howells <dhowells@redhat.com>
      Reviewed-by: Steve Dickson <steved@redhat.com>
      Acked-by: Jeff Layton <jeff.layton@primarydata.com>
    • FS-Cache: Handle a new operation submitted against a killed object · 6515d1db
      David Howells committed
      Reject new operations that are being submitted against an object if that
      object has failed its lookup or creation states or has been killed by the
      cache backend for some other reason, such as having been culled.
      Signed-off-by: David Howells <dhowells@redhat.com>
      Reviewed-by: Steve Dickson <steved@redhat.com>
      Acked-by: Jeff Layton <jeff.layton@primarydata.com>
    • FS-Cache: When submitting an op, cancel it if the target object is dying · 30ceec62
      David Howells committed
      When submitting an operation, prefer to cancel the operation immediately
      rather than queuing it for later processing if the object is marked as dying
      (ie. the object state machine has reached the KILL_OBJECT state).
      
      Whilst we're at it, change the series of related test_bit() calls into a
      READ_ONCE() and bitwise-AND operators to reduce the number of load
      instructions (test_bit() has a volatile address).
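      Access-pattern sketch (the exact flag combination tested here is
      illustrative, not the real policy):

	static bool example_object_usable(struct fscache_object *object)
	{
		unsigned long flags = READ_ONCE(object->flags);	/* one plain load */

		return flags & (BIT(FSCACHE_OBJECT_IS_LIVE) |
				BIT(FSCACHE_OBJECT_IS_AVAILABLE));
	}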
      Signed-off-by: David Howells <dhowells@redhat.com>
      Reviewed-by: Steve Dickson <steved@redhat.com>
      Acked-by: Jeff Layton <jeff.layton@primarydata.com>
    • FS-Cache: Move fscache_report_unexpected_submission() to make it more available · 3c305984
      David Howells committed
      Move fscache_report_unexpected_submission() up within operation.c so that it
      can be called from fscache_submit_exclusive_op() too.
      Signed-off-by: David Howells <dhowells@redhat.com>
      Reviewed-by: Steve Dickson <steved@redhat.com>
      Acked-by: Jeff Layton <jeff.layton@primarydata.com>