- 10 Feb 2007, 23 commits
-
-
Committed by David Chinner
The block reservation mechanism has been broken since the per-cpu superblock counters were introduced. Make the block reservation code work with the per-cpu counters by syncing the counters, snapshotting the amount of available space and then doing a modification of the counter state according to the result. Continue in a loop until we either have no space available or we reserve some space. SGI-PV: 956323 SGI-Modid: xfs-linux-melb:xfs-kern:27895a Signed-off-by: David Chinner <dgc@sgi.com> Signed-off-by: Christoph Hellwig <hch@infradead.org> Signed-off-by: Tim Shimmin <tes@sgi.com>
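A minimal user-space sketch of the retry loop described above: snapshot the summed counters, bail out if there is not enough space, otherwise try to commit the reservation and retry when a concurrent modification wins. All names are illustrative, and the compare-and-swap stands in for XFS's resync of the per-cpu counters under its own locking; this is not the actual XFS code.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static int64_t free_blocks = 400;  /* stand-in for the summed per-cpu counters */

    /* Reserve 'want' blocks: snapshot the space, give up if short,
     * otherwise commit; loop when a concurrent modification beat us. */
    static bool reserve_blocks(int64_t want)
    {
        for (;;) {
            int64_t avail = __atomic_load_n(&free_blocks, __ATOMIC_SEQ_CST);
            if (avail < want)
                return false;                         /* no space available */
            if (__atomic_compare_exchange_n(&free_blocks, &avail, avail - want,
                                            false, __ATOMIC_SEQ_CST,
                                            __ATOMIC_SEQ_CST))
                return true;                          /* reservation made */
            /* counters moved under us: take a fresh snapshot and retry */
        }
    }

    int main(void)
    {
        printf("reserve 250: %s\n", reserve_blocks(250) ? "ok" : "ENOSPC");
        printf("reserve 250: %s\n", reserve_blocks(250) ? "ok" : "ENOSPC");
        return 0;
    }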
-
Committed by David Chinner
The free block modification code has a 32-bit interface, limiting the size to which the filesystem can be grown even on 64-bit machines. On 32-bit machines, there are other 32-bit variables in transaction structures and interfaces that need to be expanded to allow this to work. SGI-PV: 959978 SGI-Modid: xfs-linux-melb:xfs-kern:27894a Signed-off-by: David Chinner <dgc@sgi.com> Signed-off-by: Christoph Hellwig <hch@infradead.org> Signed-off-by: Tim Shimmin <tes@sgi.com>
-
Committed by David Chinner
SGI-PV: 959388 SGI-Modid: xfs-linux-melb:xfs-kern:27805a Signed-off-by: David Chinner <dgc@sgi.com> Signed-off-by: Lachlan McIlroy <lachlan@sgi.com> Signed-off-by: Tim Shimmin <tes@sgi.com>
-
Committed by Barry Naujok
SGI-PV: 958747 SGI-Modid: xfs-linux-melb:xfs-kern:27792a Signed-off-by: Barry Naujok <bnaujok@sgi.com> Signed-off-by: Russell Cattelan <cattelan@thebarn.com> Signed-off-by: Tim Shimmin <tes@sgi.com>
-
Committed by Vlad Apostolov
SGI-PV: 959264 SGI-Modid: xfs-linux-melb:xfs-kern:27750a Signed-off-by: Vlad Apostolov <vapo@sgi.com> Signed-off-by: David Chatterton <chatz@sgi.com> Signed-off-by: Tim Shimmin <tes@sgi.com>
-
Committed by Lachlan McIlroy
SGI-PV: 959140 SGI-Modid: xfs-linux-melb:xfs-kern:27712a Signed-off-by: Lachlan McIlroy <lachlan@sgi.com> Signed-off-by: Eric Sandeen <sandeen@sandeen.net> Signed-off-by: Tim Shimmin <tes@sgi.com>
-
Committed by Lachlan McIlroy
functions, but they a) ignore the flags parameter completely, and b) are never called directly, only via the flag-less defines anyway. So, drop the #define indirection, and rename mraccessf to mraccess, etc. SGI-PV: 959138 SGI-Modid: xfs-linux-melb:xfs-kern:27711a Signed-off-by: Lachlan McIlroy <lachlan@sgi.com> Signed-off-by: Eric Sandeen <sandeen@sandeen.net> Signed-off-by: Tim Shimmin <tes@sgi.com>
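A compilable illustration of that cleanup, with mrlock_t reduced to a stand-in type; the real functions live in the XFS mrlock code and this is only the shape of the change:

    typedef struct { int dummy; } mrlock_t;   /* stand-in for the XFS type */

    /* Before: a flags variant that ignores its flags, reached only
     * through a flag-less define. */
    static void mraccessf(mrlock_t *mrp, int flags) { (void)mrp; (void)flags; }
    #define mraccess_old(mrp) mraccessf(mrp, 0)

    /* After: the define is gone and the function carries the real name. */
    static void mraccess(mrlock_t *mrp) { (void)mrp; }

    int main(void)
    {
        mrlock_t mr;
        mraccess_old(&mr);   /* old spelling via the #define indirection */
        mraccess(&mr);       /* new spelling, direct call */
        return 0;
    }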
-
Committed by Lachlan McIlroy
SGI-PV: 959137 SGI-Modid: xfs-linux-melb:xfs-kern:27710a Signed-off-by: Lachlan McIlroy <lachlan@sgi.com> Signed-off-by: Eric Sandeen <sandeen@sandeen.net> Signed-off-by: Tim Shimmin <tes@sgi.com>
-
Committed by Lachlan McIlroy
SGI-PV: 954580 SGI-Modid: xfs-linux-melb:xfs-kern:27702a Signed-off-by: Lachlan McIlroy <lachlan@sgi.com> Signed-off-by: Christoph Hellwig <hch@infradead.org> Signed-off-by: Tim Shimmin <tes@sgi.com>
-
Committed by Lachlan McIlroy
SGI-PV: 954580 SGI-Modid: xfs-linux-melb:xfs-kern:27701a Signed-off-by: Lachlan McIlroy <lachlan@sgi.com> Signed-off-by: Christoph Hellwig <hch@infradead.org> Signed-off-by: Tim Shimmin <tes@sgi.com>
-
Committed by David Chinner
SGI-PV: 952227 SGI-Modid: xfs-linux-melb:xfs-kern:27692a Signed-off-by: David Chinner <dgc@sgi.com> Signed-off-by: Lachlan McIlroy <lachlan@sgi.com> Signed-off-by: Tim Shimmin <tes@sgi.com>
-
Committed by David Chinner
The existing per-cpu superblock counter code uses the global superblock spin lock when we approach ENOSPC for global synchronisation. On larger machines than this code was originally tested on, this can still suffer catastrophic spinlock contention due to the increasing rebalance frequency near ENOSPC. By introducing a sleeping lock that is used to serialise balances and modifications near ENOSPC, we prevent contention from needlessly wasting the CPU time of potentially hundreds of CPUs. To reduce the number of balances occurring, we separate the need-rebalance case from the slow-allocate case. Now, a counter running dry will trigger a rebalance during which counters are disabled. Any thread that sees a disabled counter enters a different path where it waits on the new mutex. When it gets the new mutex, it checks if the counter is disabled. If the counter is disabled, then we _know_ that we have to use the global counter and lock, and it is safe to do so immediately. Otherwise, we drop the mutex and go back to trying the per-cpu counters, which we know were re-enabled. SGI-PV: 952227 SGI-Modid: xfs-linux-melb:xfs-kern:27612a Signed-off-by: David Chinner <dgc@sgi.com> Signed-off-by: Lachlan McIlroy <lachlan@sgi.com> Signed-off-by: Tim Shimmin <tes@sgi.com>
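The slow path described above can be modelled in user space with a pthread mutex; every name here is hypothetical, and the per-cpu fast path is reduced to a stub that fails whenever the counters are disabled:

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    static pthread_mutex_t rebalance_mutex = PTHREAD_MUTEX_INITIALIZER;
    static bool counter_disabled;        /* set while a rebalance runs */
    static long global_counter = 1000;   /* authoritative count near ENOSPC */

    static bool try_percpu_fast_path(long delta)
    {
        /* Stub for the lock-free per-cpu attempt: fail whenever the
         * counters are disabled so callers take the slow path. */
        if (counter_disabled)
            return false;
        global_counter += delta;         /* stand-in for a per-cpu update */
        return true;
    }

    static void modify_counter(long delta)
    {
        for (;;) {
            if (try_percpu_fast_path(delta))
                return;
            pthread_mutex_lock(&rebalance_mutex);   /* sleep, don't spin */
            if (counter_disabled) {
                /* Still disabled: we *know* the global counter is the
                 * authority, so modify it under the mutex. */
                global_counter += delta;
                pthread_mutex_unlock(&rebalance_mutex);
                return;
            }
            /* Re-enabled while we slept: retry the per-cpu fast path. */
            pthread_mutex_unlock(&rebalance_mutex);
        }
    }

    int main(void)
    {
        counter_disabled = true;
        modify_counter(-10);
        printf("counter = %ld\n", global_counter);
        return 0;
    }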
-
Committed by Eric Sandeen
Sandeen. SGI-PV: 958736 SGI-Modid: xfs-linux-melb:xfs-kern:27596a Signed-off-by: Eric Sandeen <sandeen@sandeen.net> Signed-off-by: Tim Shimmin <tes@sgi.com>
-
Committed by David Chinner
gcc-4.1 and more recent aggressively inline static functions, which increases XFS stack usage by ~15% in critical paths. Prevent this from occurring by adding noinline to the STATIC definition. Also uninline some functions that are too large to be inlined and were causing problems with CONFIG_FORCED_INLINING=y. Finally, clean up all the different users of inline, __inline and __inline__ and put them under one STATIC_INLINE macro. For debug kernels the STATIC_INLINE macro uninlines those functions. SGI-PV: 957159 SGI-Modid: xfs-linux-melb:xfs-kern:27585a Signed-off-by: David Chinner <dgc@sgi.com> Signed-off-by: David Chatterton <chatz@sgi.com> Signed-off-by: Tim Shimmin <tes@sgi.com>
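A sketch of the macro arrangement the commit describes, using the GCC attribute spelling; the actual XFS definitions live in the kernel headers and may differ in detail:

    #include <stdio.h>

    /* Stop gcc-4.1+ from inlining STATIC functions in critical paths. */
    #define STATIC static __attribute__((noinline))

    #ifdef DEBUG
    /* Debug kernels uninline these too, keeping stack frames visible. */
    # define STATIC_INLINE static __attribute__((noinline))
    #else
    # define STATIC_INLINE static inline
    #endif

    STATIC int twice(int x)         { return 2 * x; }
    STATIC_INLINE int thrice(int x) { return 3 * x; }

    int main(void)
    {
        printf("%d %d\n", twice(2), thrice(3));
        return 0;
    }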
-
Committed by David Chinner
The {test,set,clear}_bit() operations take a bit index for the bit to operate on. The XBT_* flags are defined as bit fields, which is incorrect, not to mention that the way the bit fields are enumerated is broken too. This was only working by chance. Fix the definitions of the flags and make the code using them use the {test,set,clear}_bit() operations correctly. SGI-PV: 958639 SGI-Modid: xfs-linux-melb:xfs-kern:27565a Signed-off-by: David Chinner <dgc@sgi.com> Signed-off-by: Tim Shimmin <tes@sgi.com>
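A standalone illustration of this bug class: the bitops family takes a bit index, not a mask, so mask-style flag values operate on the wrong bits. The XBT_* values below are illustrative, and the helpers merely model the kernel's {test,set}_bit():

    #include <stdio.h>

    /* Wrong: mask-style values; passing BAD_XBT_WAIT (2) to set_bit()
     * would actually operate on bit number 2, i.e. the value 4. */
    enum { BAD_XBT_FORCE_FLUSH = 1, BAD_XBT_WAIT = 2, BAD_XBT_DONE = 4 };

    /* Right: plain bit indices, as the bitops interface expects. */
    enum { XBT_FORCE_FLUSH = 0, XBT_FORCE_SLEEP = 1 };

    static void set_bit_(int nr, unsigned long *addr)        { *addr |= 1UL << nr; }
    static int  test_bit_(int nr, const unsigned long *addr) { return (*addr >> nr) & 1; }

    int main(void)
    {
        unsigned long flags = 0;
        set_bit_(XBT_FORCE_SLEEP, &flags);
        printf("sleep=%d flush=%d\n",
               test_bit_(XBT_FORCE_SLEEP, &flags),
               test_bit_(XBT_FORCE_FLUSH, &flags));
        return 0;
    }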
-
Committed by Lachlan McIlroy
The message buffer used by cmn_err() is only 256 bytes, and some CXFS messages were exceeding this length. Since we were using vsprintf() and not checking for buffer overruns, we were clobbering memory beyond the buffer. The size of the buffer has been increased to 1024 bytes so we can capture these larger messages, and we are now using vsnprintf() to prevent overrunning the buffer size. SGI-PV: 958599 SGI-Modid: xfs-linux-melb:xfs-kern:27561a Signed-off-by: Lachlan McIlroy <lachlan@sgi.com> Signed-off-by: Geoffrey Wehrman <gwehrman@sgi.com> Signed-off-by: Tim Shimmin <tes@sgi.com>
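A minimal user-space sketch of the fix: format into a fixed buffer with vsnprintf() so oversized messages are truncated instead of clobbering adjacent memory. The function and buffer names are illustrative:

    #include <stdarg.h>
    #include <stdio.h>

    static char message_buf[1024];   /* grown from 256 to 1024 bytes */

    static void cmn_err_sketch(const char *fmt, ...)
    {
        va_list ap;
        va_start(ap, fmt);
        /* vsnprintf() never writes more than sizeof(message_buf) bytes,
         * including the NUL terminator; vsprintf() had no such bound. */
        vsnprintf(message_buf, sizeof(message_buf), fmt, ap);
        va_end(ap);
        fputs(message_buf, stderr);
    }

    int main(void)
    {
        cmn_err_sketch("device %s: %d errors\n", "sda1", 3);
        return 0;
    }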
-
Committed by David Chinner
At the last stage of a freeze, we flush the buftarg synchronously over and over again until it succeeds twice without skipping any buffers. The delwri list flush skips pinned buffers but tries to flush all others. It removes the buffers from the delwri list, then tries to lock them one at a time as it traverses the list to issue the I/O. It holds them locked until we have issued all of the I/O, and then unlocks them once we've waited for it to complete. The problem is that during a freeze the filesystem may still be doing work - like flushing delalloc data buffers - in the background, and hence we can be trying to lock buffers that were on the delwri list at the same time. Hence we can get ABBA deadlocks between threads doing allocation and the buftarg flush (freeze) thread. Fix it by skipping locked (and pinned) buffers as we traverse the delwri buffer list. SGI-PV: 957195 SGI-Modid: xfs-linux-melb:xfs-kern:27535a Signed-off-by: David Chinner <dgc@sgi.com> Signed-off-by: Tim Shimmin <tes@sgi.com>
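The traversal change can be sketched with pthreads, using trylock as a stand-in for the buffer lock; skipping buffers that are pinned or already locked is what breaks the ABBA cycle. Types and names are illustrative, not the XFS buffer cache API:

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    struct buf {
        pthread_mutex_t lock;
        bool pinned;
        struct buf *next;
    };

    /* Issue I/O for every buffer we can get at; skip pinned buffers
     * (as before) and now also buffers someone else holds locked. */
    static void flush_delwri_list(struct buf *head)
    {
        for (struct buf *bp = head; bp; bp = bp->next) {
            if (bp->pinned)
                continue;                        /* always skipped */
            if (pthread_mutex_trylock(&bp->lock) != 0)
                continue;                        /* busy elsewhere: skip, no ABBA */
            printf("issuing I/O for buffer %p\n", (void *)bp);
            pthread_mutex_unlock(&bp->lock);     /* real code unlocks after I/O completes */
        }
    }

    int main(void)
    {
        static struct buf b = { PTHREAD_MUTEX_INITIALIZER, false, NULL };
        flush_delwri_list(&b);
        return 0;
    }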
-
Committed by David Chinner
The XFS quiet mount logic was inverted, making quiet mounts noisy and vice versa. Fix it. SGI-PV: 958469 SGI-Modid: xfs-linux-melb:xfs-kern:27520a Signed-off-by: David Chinner <dgc@sgi.com> Signed-off-by: Eric Sandeen <sandeen@sandeen.net> Signed-off-by: Tim Shimmin <tes@sgi.com>
-
Committed by Greg Ungerer
remap() the region we get from mmap() to mark the fact that we are using all of the available slack space. Any slack space is used to form a simple brk region, and potentially more stack space than requested at load time. Any searches of the vma chain may well fail looking for stack (and especially arg) addresses if the remapping is not done. The simplest example is /proc/<pid>/cmdline, since the args are pretty much always at the top of the data/bss/stack region. Signed-off-by: Greg Ungerer <gerg@uclinux.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Adrian Bunk
Fix a double free of "dfid" introduced by commit da977b2c and spotted by the Coverity checker. Signed-off-by: Adrian Bunk <bunk@stusta.de> Cc: Eric Van Hensbergen <ericvh@gmail.com> Cc: <stable@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Ken Chen
__unmap_hugepage_range() is buggy in that it does not preserve the dirty state of the huge_pte when unmapping a hugepage range. This causes data corruption when drop_caches is used by a sysadmin. For example, an application creates a hugetlb file, modifies pages, then unmaps it. While the hugetlb file is left alive, along comes a sysadmin doing "echo 3 > /proc/sys/vm/drop_caches". drop_pagecache_sb() will happily free all pages that aren't marked dirty if there is no active mapping. Later, when the application remaps the hugetlb file, all the data is gone, triggering a catastrophic failure in the application. Not only that, the internal resv_huge_pages count also gets all messed up. Fix it up by marking the page dirty appropriately. Signed-off-by: Ken Chen <kenchen@google.com> Cc: "Nish Aravamudan" <nish.aravamudan@gmail.com> Cc: Adam Litke <agl@us.ibm.com> Cc: David Gibson <david@gibson.dropbear.id.au> Cc: William Lee Irwin III <wli@holomorphy.com> Cc: <stable@kernel.org> Cc: Hugh Dickins <hugh@veritas.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
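A user-space model of the bug and the fix: if the per-mapping dirty bit is not propagated to the backing page at unmap time, a later cache drop discards modified data. All types and names here are illustrative, not the kernel's:

    #include <stdbool.h>
    #include <stdio.h>

    struct page    { bool dirty; };
    struct hugepte { struct page *page; bool dirty; };

    static void unmap_hugepage(struct hugepte *pte)
    {
        if (pte->dirty)
            pte->page->dirty = true;   /* the step the buggy code skipped */
        pte->page = NULL;
    }

    static void drop_caches(const struct page *pg)
    {
        printf(pg->dirty ? "page kept: dirty\n" : "page freed, data lost\n");
    }

    int main(void)
    {
        struct page pg = { false };
        struct hugepte pte = { &pg, true };   /* the app wrote through the mapping */
        unmap_hugepage(&pte);
        drop_caches(&pg);                     /* prints "page kept: dirty" */
        return 0;
    }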
-
Committed by Evgeniy Dushistov
This fixes a regression introduced around 2.6.16 by the patch ufs-directory-and-page-cache-from-blocks-to-pages.patch: in addition to the conversion from the block to the page cache mechanism, it added new checks of directory integrity, one of them being that a directory entry must not cross directory chunks. But some kinds of UFS - OpenStep UFS and Apple UFS (these look like the same filesystem) - have a different directory chunk size than the common UFSes (BSD and Solaris UFS). So this patch adds the ability to work with a variable directory chunk size, and sets it to the right size for ufstype=openstep. Tested on Darwin UFS. Signed-off-by: Evgeniy Dushistov <dushistov@mail.ru> Cc: <stable@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 08 Feb 2007, 17 commits
-
-
Committed by Joel Becker
Attributes in configfs are text files. As such, most handlers expect to be able to call functions like simple_strtoul() without checking the bounds of the buffer. Change the call to zero-terminate the buffer before calling the client's ->store() method. This does reduce the attribute size from PAGE_SIZE to PAGE_SIZE-1. Also, change get_zeroed_page() to alloc_page(), as we are handling the termination. Signed-off-by: Joel Becker <joel.becker@oracle.com> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
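A user-space sketch of that convention: cap the payload at PAGE_SIZE-1 and NUL-terminate before invoking the client's ->store() method, so string parsers stay in bounds. The names and the PAGE_SIZE value are illustrative:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define PAGE_SIZE 4096   /* illustrative */

    static long store_threshold(const char *buf)   /* a client ->store() */
    {
        return strtol(buf, NULL, 10);   /* safe only on a terminated buffer */
    }

    int main(void)
    {
        char *page = malloc(PAGE_SIZE);
        const char input[] = "42";
        size_t count = sizeof(input) - 1;

        if (!page)
            return 1;
        if (count > PAGE_SIZE - 1)   /* payload now capped at PAGE_SIZE-1 */
            count = PAGE_SIZE - 1;
        memcpy(page, input, count);
        page[count] = '\0';          /* terminate before calling ->store() */
        printf("stored %ld\n", store_threshold(page));
        free(page);
        return 0;
    }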
-
Committed by Philipp Reisner
As was already pointed out by Mathieu Avila on Thu, 07 Sep 2006 03:15:25 -0700, OCFS2 is expecting bio_add_page() to add pages to BIOs in an easily predictable manner. That is not true, especially for devices with their own merge_bvec_fn(). Therefore OCFS2's heartbeat code is very likely to fail on such devices. Move the bio_put() call into the bio's bi_end_io() function. This makes the whole idea of trying to predict the behaviour of bio_add_page() unnecessary. Removed compute_max_sectors() and o2hb_compute_request_limits(). Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
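A loose user-space model of that lifetime change: the last reference is dropped from the completion callback, so the submitter no longer needs to predict how bio_add_page() split the pages across bios. Everything here is illustrative, not the block-layer API:

    #include <stdio.h>

    struct bio_model {
        int refcount;
        void (*bi_end_io)(struct bio_model *);
    };

    static void bio_put_model(struct bio_model *b)
    {
        if (--b->refcount == 0)
            printf("bio freed\n");
    }

    static void o2hb_end_io_model(struct bio_model *b)   /* runs at I/O completion */
    {
        printf("I/O complete\n");
        bio_put_model(b);   /* the put now lives in the completion path */
    }

    int main(void)
    {
        struct bio_model b = { 1, o2hb_end_io_model };
        b.bi_end_io(&b);    /* the block layer would invoke this */
        return 0;
    }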
-
Committed by Zhen Wei
When there is a lot of multithreaded I/O usage, two threads can collide while sending out a message to the other nodes. This is due to the lack of locking between threads while sending out the messages. When a connected TCP send(), sendto(), or sendmsg() arrives in the Linux kernel, it eventually comes through tcp_sendmsg(). tcp_sendmsg() protects itself by acquiring a lock at invocation by calling lock_sock(). tcp_sendmsg() then loops over the buffers in the iovec, allocating associated sk_buff's and cache pages for use in the actual send. As it does so, it pushes the data out to TCP for actual transmission. However, if one of those allocations fails (because a large number of large sends is being processed, for example), it must wait for memory to become available. It does so by jumping to wait_for_sndbuf or wait_for_memory, both of which eventually cause a call to sk_stream_wait_memory(). sk_stream_wait_memory() contains a code path that calls sk_wait_event(). Finally, sk_wait_event() contains the call to release_sock(). The following patch adds a lock to the socket container in order to properly serialize outbound requests. From: Zhen Wei <zwei@novell.com> Acked-by: Jeff Mahoney <jeffm@suse.com> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
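A sketch of that serialization, assuming a per-socket-container mutex (the field name below is made up): with the lock held across the whole message, a second thread can no longer interleave its data while the first sender blocks waiting for send-buffer memory:

    #include <pthread.h>
    #include <stdio.h>

    struct sock_container {
        pthread_mutex_t sc_send_lock;   /* hypothetical field added by the fix */
        int fd;
    };

    static void send_message(struct sock_container *sc,
                             const char *hdr, const char *payload)
    {
        pthread_mutex_lock(&sc->sc_send_lock);
        /* Both parts go out back to back; without the lock another
         * thread could inject its own bytes between them whenever
         * tcp_sendmsg() sleeps waiting for send-buffer memory. */
        printf("fd %d <- %s%s\n", sc->fd, hdr, payload);
        pthread_mutex_unlock(&sc->sc_send_lock);
    }

    int main(void)
    {
        static struct sock_container sc = { PTHREAD_MUTEX_INITIALIZER, 3 };
        send_message(&sc, "[hdr]", "[payload]");
        return 0;
    }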
-
Committed by Randy Dunlap
OCFS2: drop 'depends on INET' since local mounts are now allowed. Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
-
Committed by Sunil Mushran
Currently the ocfs2 dlm has no timeout during dlm domain join. While this is not a problem in normal operation, it does become an issue if, say, the other node refuses to let this node join the domain because of a stuck recovery. This patch adds a 90-second timeout. Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
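A user-space sketch of a bounded join wait using pthread_cond_timedwait(); the dlm's actual wait mechanism differs, and all names here are illustrative:

    #include <pthread.h>
    #include <stdio.h>
    #include <time.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  resp = PTHREAD_COND_INITIALIZER;
    static int join_response;            /* set by the response handler */

    static int wait_for_join(int timeout_secs)
    {
        struct timespec deadline;
        clock_gettime(CLOCK_REALTIME, &deadline);
        deadline.tv_sec += timeout_secs;             /* 90s in the patch */

        pthread_mutex_lock(&lock);
        while (!join_response) {
            if (pthread_cond_timedwait(&resp, &lock, &deadline) != 0) {
                pthread_mutex_unlock(&lock);
                return -1;                           /* timed out: give up */
            }
        }
        pthread_mutex_unlock(&lock);
        return 0;
    }

    int main(void)
    {
        printf("join: %s\n", wait_for_join(1) ? "timed out" : "ok");
        return 0;
    }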
-
Committed by Sunil Mushran
These messages can easily be activated using the mlog infrastructure and don't need to be enabled by default. Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
-
Committed by Srinivas Eeda
There is a small window where a joining node may not see the node(s) that just died but are still part of the domain. To fix this, we must disallow join requests if the joining node has a different node map. A new field, node_map, is added to dlm_query_join_request to send the current node's node map along with the join request. On the receiving end, the nodes that are part of the cluster verify whether this new node sees all the nodes that are still part of the cluster. They disallow the join if the maps mismatch. Signed-off-by: Srinivas Eeda <srinivas.eeda@oracle.com> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
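A sketch of that membership check, with the node map reduced to a fixed-size byte array and all names illustrative: the joiner's view must match the domain's live map or the join is denied:

    #include <stdio.h>
    #include <string.h>

    #define NODE_MAP_BYTES 32   /* room for up to 256 nodes */

    struct query_join_request {
        unsigned char node_map[NODE_MAP_BYTES];   /* the new field: joiner's view */
    };

    static int allow_join(const struct query_join_request *req,
                          const unsigned char *live_map)
    {
        /* Deny the join when the joiner does not see every node still
         * part of the domain (e.g. one that died but is unrecovered). */
        return memcmp(req->node_map, live_map, NODE_MAP_BYTES) == 0;
    }

    int main(void)
    {
        struct query_join_request req = { { 0x07 } };   /* sees nodes 0-2 */
        unsigned char live[NODE_MAP_BYTES] = { 0x0f };  /* nodes 0-3 alive */
        printf("join %s\n", allow_join(&req, live) ? "allowed" : "denied");
        return 0;
    }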
-
Committed by Sunil Mushran
Even though the set refmap bit message is sent before the clear refmap message, currently there is no guarantee that the set message will be handled before the clear. This patch prevents the clear refmap from being processed while the node is sending assert master messages to other nodes. (The set refmap message is sent as a response to the assert master request.) Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
-
Committed by Sunil Mushran
This patch binds the o2net listener to the configured IP address instead of INADDR_ANY for security. Fixes oss.oracle.com bugzilla #814. Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
-
Committed by Kurt Hackel
This patch prevents the dlm from sending the clear refmap message before the set refmap. We use the newly created post-function handler routine to accomplish the task. Signed-off-by: Kurt Hackel <kurt.hackel@oracle.com> Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
-
Committed by Kurt Hackel
Currently o2net allows one handler function per message type. This patch adds the ability for another function to be called after the handler has returned the message to the other node. Handlers are now given the option of returning a context (in the form of a void **) which will be passed back into the post-message handler function. Signed-off-by: Kurt Hackel <kurt.hackel@oracle.com> Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
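A user-space sketch of the two-stage dispatch: the handler hands a context back through a void ** and the post handler runs only after the reply has gone out. The signatures are illustrative, not the actual o2net API:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    typedef int  (*msg_handler_t)(void *data, void **ret_data);
    typedef void (*post_handler_t)(int status, void *ret_data);

    static int handler(void *data, void **ret_data)
    {
        (void)data;
        *ret_data = strdup("deferred work");   /* context for the post step */
        return 0;                              /* status sent back to the sender */
    }

    static void post_handler(int status, void *ret_data)
    {
        printf("after reply (status %d): %s\n", status, (char *)ret_data);
        free(ret_data);
    }

    static void dispatch(msg_handler_t h, post_handler_t post, void *data)
    {
        void *ctx = NULL;
        int status = h(data, &ctx);
        /* ...the reply goes out to the remote node here... */
        if (post)
            post(status, ctx);   /* runs only after the reply is on the wire */
    }

    int main(void)
    {
        dispatch(handler, post_handler, NULL);
        return 0;
    }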
-
Committed by Kurt Hackel
The dlm encodes the node number and a sequence number in the lock cookie. It also stores the cookie in the lockres in big-endian format to avoid swapping 8 bytes on each lock request. The bug here was that the code assumed the cookie to be in CPU format when decoding it for printing the error message. This patch swaps the bytes before the print. Signed-off-by: Kurt Hackel <kurt.hackel@oracle.com> Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
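A sketch of the decode fix using glibc's <endian.h>; the cookie layout assumed here (node number in the top 8 bits, sequence in the rest) is for illustration only:

    #include <endian.h>     /* be64toh()/htobe64(), glibc */
    #include <inttypes.h>
    #include <stdio.h>

    /* Assumed layout: node number in the top 8 bits, sequence below. */
    static void print_cookie(uint64_t cookie_be)
    {
        uint64_t c = be64toh(cookie_be);   /* the swap the old code skipped */
        printf("node %u, seq %" PRIu64 "\n",
               (unsigned)(c >> 56), c & ((1ULL << 56) - 1));
    }

    int main(void)
    {
        print_cookie(htobe64(((uint64_t)5 << 56) | 1234));  /* node 5, seq 1234 */
        return 0;
    }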
-
Committed by Kurt Hackel
When the lockres is in the migrate or recovery state, all convert requests are denied with the appropriate error status, which is handled on the requester node. This patch silences the erroneous error message printed on the master node. Signed-off-by: Kurt Hackel <kurt.hackel@oracle.com> Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
-
Committed by Kurt Hackel
The dlm was not waking up threads waiting on the lockres wait queue for the lockres to no longer be in the DLM_LOCK_RES_IN_PROGRESS and DLM_LOCK_RES_MIGRATING states. Signed-off-by: Kurt Hackel <kurt.hackel@oracle.com> Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
-
Committed by Kurt Hackel
dlm_dispatch_work was not processing the queued-up tasks at the first sign of the node leaving the domain, leading not only to incomplete tasks but also to a mismatch in the dlm refcnt. Signed-off-by: Kurt Hackel <kurt.hackel@oracle.com> Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
-
Committed by Kurt Hackel
Signed-off-by: Kurt Hackel <kurt.hackel@oracle.com> Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
-
Committed by Kurt Hackel
This is to prevent the condition in which a previously queued-up assert master asserts after we start the migration. Now migration ensures the workqueue is flushed before proceeding with migrating the lock to another node. This condition is typically encountered during parallel umounts. Signed-off-by: Kurt Hackel <kurt.hackel@oracle.com> Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com> Signed-off-by: Mark Fasheh <mark.fasheh@oracle.com>
-