- 04 October 2016, 1 commit
-
-
Committed by Al Viro
* splice_to_pipe() stops at pipe overflow and does *not* take pipe_lock
* ->splice_read() instances do the same
* vmsplice_to_pipe() and do_splice() (ultimate callers of splice_to_pipe()) arrange for waiting, looping, etc. themselves.

That should make pipe_lock the outermost one.

Unfortunately, existing rules for the amount passed by vmsplice_to_pipe() and do_splice() are quite ugly _and_ userland code can be easily broken by changing those. It's not even "no more than the maximal capacity of this pipe" - it's "once we'd fed pipe->nr_buffers pages into the pipe, leave instead of waiting". Considering how poorly these rules are documented, let's try "wait for some space to appear, unless given SPLICE_F_NONBLOCK, then push into pipe and if we run into overflow, we are done".

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
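As an illustration of that caller-side convention, a hedged sketch follows (reconstructed from the description above, not quoted from the patch; wait_for_space() is an assumed helper name, and the pipe fields are from <linux/pipe_fs_i.h> of that era):

  /* Sketch: wait until the pipe has room, honouring SPLICE_F_NONBLOCK. */
  static int wait_for_space(struct pipe_inode_info *pipe, unsigned flags)
  {
      for (;;) {
          if (unlikely(!pipe->readers)) {
              send_sig(SIGPIPE, current, 0);
              return -EPIPE;
          }
          if (pipe->nrbufs != pipe->buffers)
              return 0;                 /* room available */
          if (flags & SPLICE_F_NONBLOCK)
              return -EAGAIN;           /* caller asked not to sleep */
          if (signal_pending(current))
              return -ERESTARTSYS;
          pipe_wait(pipe);              /* sleep until a reader drains */
      }
  }

  /* Caller side (e.g. vmsplice_to_pipe): pipe_lock is the outermost lock. */
  pipe_lock(pipe);
  ret = wait_for_space(pipe, flags);
  if (!ret)
      ret = splice_to_pipe(pipe, &spd); /* stops at overflow */
  pipe_unlock(pipe);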
-
- 19 July 2016, 1 commit
-
-
Committed by Al Viro
Just use wait_event_killable{,_exclusive}().

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
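For illustration, the resulting wait might look like this sketch (fiq->waitq and request_pending() are assumptions taken from the fuse input-queue code discussed further down this log):

  /* Sketch: sleep until a request is pending or the connection dies;
   * only a fatal signal (e.g. SIGKILL) interrupts the wait. */
  static int fuse_wait_for_request(struct fuse_iqueue *fiq)
  {
      return wait_event_killable_exclusive(fiq->waitq,
                  !fiq->connected || request_pending(fiq));
  }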
-
- 11 June 2016, 1 commit
-
-
Committed by Linus Torvalds
We always mixed in the parent pointer into the dentry name hash, but we did it late at lookup time. It turns out that we can simplify that lookup-time action by salting the hash with the parent pointer early instead of late.

A few other users of our string hashes also wanted to mix in their own pointers into the hash, and those are updated to use the same mechanism. Hash users that don't have any particular initial salt can just use the NULL pointer as a no-salt.

Cc: Vegard Nossum <vegard.nossum@oracle.com>
Cc: George Spelvin <linux@sciencehorizons.net>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
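The resulting calling convention looks roughly like the sketch below (the salt-first signature is assumed from <linux/stringhash.h> after this change; parent, name and len are illustrative):

  #include <linux/stringhash.h>

  /* Salt the name hash with the parent dentry pointer up front, so
   * identical names under different directories hash differently. */
  unsigned int hash = full_name_hash(parent, name, len);

  /* Users with no natural salt simply pass NULL. */
  unsigned int plain = full_name_hash(NULL, name, len);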
-
- 05 April 2016, 1 commit
-
-
Committed by Kirill A. Shutemov
PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced a *long* time ago with the promise that one day it would be possible to implement the page cache with bigger chunks than PAGE_SIZE. This promise never materialized, and it is unlikely ever to.

We have many places where PAGE_CACHE_SIZE is assumed to be equal to PAGE_SIZE, and it's a constant source of confusion whether PAGE_CACHE_* or PAGE_* constants should be used in a particular case, especially on the border between fs and mm. Globally switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause too much breakage to be doable.

Let's stop pretending that pages in the page cache are special. They are not.

The changes are pretty straightforward:

 - <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
 - <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
 - PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
 - page_cache_get() -> get_page();
 - page_cache_release() -> put_page();

This patch contains automated changes generated with coccinelle using the script below. For some reason, coccinelle doesn't patch header files; I've called spatch for them manually. The only adjustment after coccinelle is a revert of the changes to the PAGE_CACHE_ALIGN definition: we are going to drop it later.

There are a few places in the code that coccinelle didn't reach. I'll fix them manually in a separate patch. Comments and documentation will also be addressed in a separate patch.

virtual patch

@@
expression E;
@@
- E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
+ E

@@
expression E;
@@
- E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
+ E

@@
@@
- PAGE_CACHE_SHIFT
+ PAGE_SHIFT

@@
@@
- PAGE_CACHE_SIZE
+ PAGE_SIZE

@@
@@
- PAGE_CACHE_MASK
+ PAGE_MASK

@@
expression E;
@@
- PAGE_CACHE_ALIGN(E)
+ PAGE_ALIGN(E)

@@
expression E;
@@
- page_cache_get(E)
+ get_page(E)

@@
expression E;
@@
- page_cache_release(E)
+ put_page(E)

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
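To make the mechanical rewrite concrete, here is a hypothetical before/after fragment of filesystem code under this conversion (variable names are illustrative):

  /* Before: page-cache units, already identical to page units. */
  pgoff_t index  = pos >> PAGE_CACHE_SHIFT;
  size_t  offset = pos & ~PAGE_CACHE_MASK;
  page_cache_get(page);            /* grab a reference */
  /* ... use the page ... */
  page_cache_release(page);        /* drop the reference */

  /* After: plain page units; the generated code is unchanged. */
  pgoff_t index  = pos >> PAGE_SHIFT;
  size_t  offset = pos & ~PAGE_MASK;
  get_page(page);
  /* ... use the page ... */
  put_page(page);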
-
- 17 August 2015, 1 commit
-
-
Committed by Jann Horn
fuse_dev_ioctl() performed fuse_get_dev() on a user-supplied fd, leading to a type confusion issue. Fix it by checking file->f_op.

Signed-off-by: Jann Horn <jann@thejh.net>
Acked-by: Miklos Szeredi <miklos@szeredi.hu>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
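The shape of the fix is roughly the following sketch (a hedged reconstruction, not the literal diff: only treat file->private_data as a struct fuse_dev once the file is known to be a fuse device; "old" stands for the file resolved from the user-supplied fd):

  /* Sketch: inside the FUSE_DEV_IOC_CLONE handling.  Without the f_op
   * check, private_data could point to an arbitrary type. */
  struct fuse_dev *fud = NULL;

  if (old->f_op == &fuse_dev_operations)   /* really /dev/fuse? */
      fud = fuse_get_dev(old);             /* reads old->private_data */

  if (!fud)
      return -EINVAL;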
-
- 01 July 2015, 35 commits
-
-
Committed by Miklos Szeredi
Make each fuse device clone refer to a separate processing queue. The only constraint on userspace code is that the request answer must be written to the same device clone as it was read off.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
-
Committed by Miklos Szeredi
Allow fuse device clones to be distinguished from one another. This patch just adds the infrastructure by associating a separate "struct fuse_dev" with each clone.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Committed by Miklos Szeredi
Allow an open fuse device to be "cloned". Userspace can create a clone by:

  newfd = open("/dev/fuse", O_RDWR);
  ioctl(newfd, FUSE_DEV_IOC_CLONE, &oldfd);

At this point newfd will refer to the same fuse connection as oldfd.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
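A self-contained userspace sketch of that sequence (hedged: the FUSE_DEV_IOC_CLONE fallback definition mirrors the uapi header; error handling kept minimal):

  #include <fcntl.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <unistd.h>

  #ifndef FUSE_DEV_IOC_CLONE
  #define FUSE_DEV_IOC_CLONE _IOR(229, 0, uint32_t)
  #endif

  /* Clone an open fuse device fd so another worker can read and
   * answer requests on the same connection. */
  int fuse_clone_fd(int oldfd)
  {
      int newfd = open("/dev/fuse", O_RDWR);

      if (newfd < 0) {
          perror("open /dev/fuse");
          return -1;
      }
      if (ioctl(newfd, FUSE_DEV_IOC_CLONE, &oldfd) < 0) {
          perror("FUSE_DEV_IOC_CLONE");
          close(newfd);
          return -1;
      }
      return newfd;   /* refers to the same connection as oldfd */
  }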
-
Committed by Miklos Szeredi
In fuse_abort_conn() when all requests are on private lists we no longer need fc->lock protection.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Committed by Miklos Szeredi
Remove fc->lock protection from processing queue members, now protected by fpq->lock.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Committed by Miklos Szeredi
No longer need to call request_end() with the connection lock held. We still protect the background counters and queue with fc->lock, so acquire it if necessary.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Committed by Miklos Szeredi
Now that we atomically test for having already done everything, we no longer need other protection.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Committed by Miklos Szeredi
When the connection is aborted it is possible that request_end() will be called twice. Use atomic test and set to do the actual ending only once.

test_and_set_bit() also provides the necessary barrier semantics so no explicit smp_wmb() is necessary.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
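A minimal sketch of the pattern, using the FR_FINISHED request flag from the flags conversion further down this log (names assumed, not the literal patch):

  static void request_end(struct fuse_conn *fc, struct fuse_req *req)
  {
      /* Atomically claim the right to end the request: exactly one
       * caller sees the bit clear; concurrent callers bail out.  The
       * atomic op also provides the required barrier semantics. */
      if (test_and_set_bit(FR_FINISHED, &req->flags))
          return;

      /* ... dequeue, update counters, wake waiters, drop reference ... */
  }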
-
Committed by Miklos Szeredi
When an unlocked request is aborted, it is moved from fpq->io to a private list. Then, after unlocking fpq->lock, the private list is processed and the requests are finished off.

To protect the private list, we need to mark the request with a flag, so if in the meantime the request is unlocked the list is not corrupted.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Committed by Miklos Szeredi
Add a fpq->lock for protecting members of struct fuse_pqueue and the FR_LOCKED request flag.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Committed by Miklos Szeredi
Rearrange fuse_abort_conn() so that processing queue accesses are grouped together.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Committed by Miklos Szeredi
- locked list_add() + list_del_init() cancel out
- common handling of the case when the request is ended here in the read phase

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Committed by Miklos Szeredi
Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
-
Committed by Miklos Szeredi
This will allow checking ->connected just with the processing queue lock.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Committed by Miklos Szeredi
This is just two fields: fc->io and fc->processing. This patch just rearranges the fields, no functional change.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Committed by Miklos Szeredi
wait_event_interruptible_exclusive_locked() will do everything request_wait() does, so replace it.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Committed by Miklos Szeredi
Remove fc->lock protection from input queue members, now protected by fiq->waitq.lock.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Committed by Miklos Szeredi
Interrupt is only queued after the request has been sent to userspace. This is done either in request_wait_answer() or in fuse_dev_do_read(), depending on which state the request is in at the time of the interrupt. If it's not yet sent, queuing the interrupt is postponed until the request is read. Otherwise (the request has already been read and is waiting for an answer) the interrupt is queued immediately.

We want to call queue_interrupt() without fc->lock protection, in which case there can be a race between the two functions:

- neither of them queues the interrupt (each thinking the other has already done it)
- both of them queue the interrupt

The first is prevented by adding memory barriers, the second by checking (under fiq->waitq.lock) whether the interrupt has already been queued.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
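A hedged sketch of the queueing side: under fiq->waitq.lock, an empty req->intr_entry tells whether the interrupt is already queued, which closes the double-queue half of the race (field names assumed from the fuse input-queue structures discussed below):

  static void queue_interrupt(struct fuse_iqueue *fiq, struct fuse_req *req)
  {
      spin_lock(&fiq->waitq.lock);
      if (list_empty(&req->intr_entry)) {          /* not queued yet */
          list_add_tail(&req->intr_entry, &fiq->interrupts);
          wake_up_locked(&fiq->waitq);
      }
      spin_unlock(&fiq->waitq.lock);
      kill_fasync(&fiq->fasync, SIGIO, POLL_IN);
  }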
-
Committed by Miklos Szeredi
Use fiq->waitq.lock for protecting members of struct fuse_iqueue and the FR_PENDING request flag, previously protected by fc->lock. Following patches will remove fc->lock protection from these members.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Committed by Miklos Szeredi
Different lists will need different locks.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Committed by Miklos Szeredi
Rearrange fuse_abort_conn() so that input queue accesses are grouped together.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Committed by Miklos Szeredi
This will allow checking ->connected just with the input queue lock.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Committed by Miklos Szeredi
The input queue contains normal requests (fc->pending), forgets (fc->forget_*) and interrupts (fc->interrupts). There's also fc->waitq and fc->fasync for waking up the readers of the fuse device when a request is available.

The fc->reqctr is also moved to the input queue (assigned to the request when the request is added to the input queue).

This patch just rearranges the fields, no functional change.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Committed by Miklos Szeredi
Use flags for representing the state in fuse_req. This is needed since req->list will be protected by different locks in different states, hence we'll want the state itself to be split into distinct bits, each protected with the relevant lock in that state.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
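Illustratively, the request state ends up as individual bits along these lines (a sketch; the exact flag set is assumed):

  /* Each bit can be guarded by a different lock (or by atomicity
   * alone), unlike a single monolithic state enum. */
  enum fuse_req_flag {
      FR_ISREPLY,     /* request expects a reply */
      FR_BACKGROUND,  /* request is on the background queue */
      FR_ABORTED,     /* connection aborted while request was active */
      FR_LOCKED,      /* data is being copied to/from the request */
      FR_PENDING,     /* request is not yet in userspace */
      FR_SENT,        /* request is in userspace, awaiting an answer */
      FR_FINISHED,    /* request is finished */
  };

  /* Manipulated with set_bit()/test_bit()/clear_bit() on req->flags. */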
-
Committed by Miklos Szeredi
FUSE_REQ_INIT is actually the same state as FUSE_REQ_PENDING, and FUSE_REQ_READING and FUSE_REQ_WRITING can be merged into a common FUSE_REQ_IO state.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Committed by Miklos Szeredi
Only hold fc->lock over sections of request_wait_answer() that actually need it. If wait_event_interruptible() returns zero, it means that the request finished. Need to add memory barriers, though, to make sure that all relevant data in the request is synchronized.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
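The lock-free completion check then needs an explicit pairing of barriers, roughly as in this sketch (using the FR_FINISHED bit for concreteness; not the literal diff):

  /* Waiter side, no fc->lock held. */
  err = wait_event_interruptible(req->waitq,
              test_bit(FR_FINISHED, &req->flags));
  /* Pairs with the barrier on the completion side, so the reply data
   * written before FR_FINISHED was set is visible here. */
  smp_rmb();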
-
Committed by Miklos Szeredi
Since it's a 64-bit counter, it's never going to wrap around. Remove the code dealing with that possibility.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Committed by Miklos Szeredi
Splice fc->pending and fc->processing lists into a common kill list while holding fc->lock. By the time we release fc->lock, pending and processing lists are empty and the io list contains only locked requests.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Committed by Miklos Szeredi
Fold end_io_requests() and end_queued_requests() into fuse_abort_conn().

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Committed by Miklos Szeredi
Reuse req->waitq.lock for protecting the FR_ABORTED and FR_LOCKED flags.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Committed by Miklos Szeredi
Finer-grained locking will mean there's no single lock to protect modification of bitfields in fuse_req. So move to using bitops. Can use the non-atomic variants for those which happen while the request definitely has only one reference.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
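For example, the convention might look like this (a sketch, not the literal patch):

  /* Shared paths: atomic bitops, safe against concurrent access. */
  set_bit(FR_SENT, &req->flags);
  clear_bit(FR_PENDING, &req->flags);

  /* Setup/teardown paths where we hold the only reference to the
   * request: the cheaper non-atomic variants suffice. */
  __set_bit(FR_ISREPLY, &req->flags);
  __clear_bit(FR_BACKGROUND, &req->flags);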
-
Committed by Miklos Szeredi
- don't end the request while req->locked is true
- make unlock_request() return an error if the connection was aborted

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Committed by Miklos Szeredi
fuse_abort_conn() does all the work done by fuse_dev_release() and more. "More" consists of:

  end_io_requests(fc);
  wake_up_all(&fc->waitq);
  kill_fasync(&fc->fasync, SIGIO, POLL_IN);

All of which should be no-ops (WARN_ONs added).

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Committed by Miklos Szeredi
And the same with fuse_request_send_nowait_locked().

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-
Committed by Miklos Szeredi
fc->conn_error is set once in the FUSE_INIT reply and never cleared. Check it in request allocation; there's no sense in doing all the preparation if sending will surely fail.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>
-