1. 12 Sep 2006, 2 commits
    • [GFS2] Use hlist for glock hash chains · b6397893
      Committed by Steven Whitehouse
      This results in smaller list heads, so that we can have more chains
      in the same amount of memory (twice as many). I've multiplied the
      size of the table by four though - this is because we are saving
      memory by no longer having one lock per chain. So we end up using
      about the same amount of memory for the hash table as we did before
      I started these changes, the difference being that we now have four
      times as many hash chains.
      
      The reason I say "about the same amount of memory" is that the
      actual amount now depends upon NR_CPUS and some of the config
      variables, so it's not exact, and in some cases we do use more
      memory. Eventually we might want to scale the hash table size
      according to the amount of physical RAM as measured at module load.
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
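      
      The idea can be illustrated with a minimal sketch (names and sizes
      below are placeholders, not the exact GFS2 declarations): an
      hlist_head is a single pointer, half the size of a two-pointer
      list_head, so the same memory holds twice as many chain heads.
      
      #include <linux/list.h>
      
      /* Illustrative sketch only -- not the real GFS2 code.  Halving
       * the per-chain head size pays for doubling the chain count; the
       * shift below then quadruples the old table size on top of that. */
      #define GL_HASH_SHIFT   13
      #define GL_HASH_SIZE    (1 << GL_HASH_SHIFT)
      #define GL_HASH_MASK    (GL_HASH_SIZE - 1)
      
      static struct hlist_head gl_hash_table[GL_HASH_SIZE];
      
      struct gfs2_glock {
              struct hlist_node gl_list;      /* linkage into one hash chain */
              /* ... */
      };
      
      static inline struct hlist_head *gl_bucket(unsigned int hash)
      {
              return &gl_hash_table[hash & GL_HASH_MASK];
      }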
    • [GFS2] Rewrite of examine_bucket() · 24264434
      Committed by Steven Whitehouse
      The existing implementation of this function in glock.c was not
      very efficient, as it relied on keeping a cursor element on the
      hash chain in question and moving it along. The new version
      improves on this by using the current element itself as the
      cursor. This is possible since we only look at the "next" element
      in the list after we've retaken the read_lock() following the
      call to the examiner function. We do eventually have to drop the
      ref count that we are then left with, and we cannot do that while
      holding the read_lock, so we do it the next time we drop the
      lock: either just before we examine another glock, or when the
      loop has terminated.
      
      The new implementation has several advantages: it uses only a
      read_lock() rather than a write_lock(), so it can run
      simultaneously with other code; and it doesn't need a "plug"
      element, which removes a test not only from this list iterator
      but from all the other glock list iterators too. This makes
      things both faster and smaller.
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
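      
      A sketch of the pattern (all names here are assumed from context,
      not the literal patch): hold a reference on the current glock so
      our chain position survives dropping the read lock, and release
      the previous reference only once the lock has been dropped again.
      
      #include <linux/list.h>
      #include <linux/spinlock.h>
      
      struct gfs2_sbd;
      struct gfs2_glock {
              struct hlist_node gl_list;
              struct gfs2_sbd *gl_sbd;
              /* ... refcount, state, etc. ... */
      };
      typedef void (*glock_examiner)(struct gfs2_glock *gl);
      
      extern struct hlist_head gl_hash_table[];
      rwlock_t *gl_lock_addr(unsigned int hash);
      void gfs2_glock_hold(struct gfs2_glock *gl);
      void gfs2_glock_put(struct gfs2_glock *gl);
      
      static void examine_bucket(glock_examiner examiner,
                                 struct gfs2_sbd *sdp, unsigned int hash)
      {
              struct gfs2_glock *gl, *prev = NULL;
              struct hlist_node *pos;
              rwlock_t *lk = gl_lock_addr(hash);
      
              read_lock(lk);
              hlist_for_each(pos, &gl_hash_table[hash]) {
                      gl = hlist_entry(pos, struct gfs2_glock, gl_list);
                      if (gl->gl_sbd != sdp)
                              continue;
                      gfs2_glock_hold(gl);    /* ref keeps gl on the chain */
                      read_unlock(lk);
                      if (prev)
                              gfs2_glock_put(prev);  /* lock not held here */
                      prev = gl;
                      examiner(gl);
                      read_lock(lk);          /* retaken before reading ->next */
              }
              read_unlock(lk);
              if (prev)
                      gfs2_glock_put(prev);
      }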
  2. 10 Sep 2006, 4 commits
  3. 09 Sep 2006, 2 commits
    • [DLM] confirm master for recovered waiting requests · fa9f0e49
      Committed by David Teigland
      Fixing the following scenario:
      - A request is on the waiters list waiting for a reply from a remote node.
      - The request is the first one on the resource, so first_lkid is set.
      - The remote node fails, causing recovery.
      - During recovery the requesting node becomes master.
      - The request is now processed locally instead of being a remote operation.
      - At this point we need to call confirm_master() on the resource since
        we're certain we're now the master node.  This will clear first_lkid.
      - We weren't calling confirm_master(), so first_lkid was not being
        cleared, causing subsequent requests on that resource to get stuck.
      Signed-off-by: David Teigland <teigland@redhat.com>
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
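      
      A hedged sketch of where the fix lands (confirm_master() and
      first_lkid are named in the commit message; is_master(),
      _request_lock(), and the function below are assumptions about the
      surrounding dlm/lock.c code, not the literal patch):
      
      struct dlm_rsb;
      struct dlm_lkb;
      int _request_lock(struct dlm_rsb *r, struct dlm_lkb *lkb);
      int is_master(struct dlm_rsb *r);
      void confirm_master(struct dlm_rsb *r, int error);
      
      static int recover_waiting_request(struct dlm_rsb *r, struct dlm_lkb *lkb)
      {
              /* recovery made us master: the request that was waiting
               * for a remote reply is retried as a local operation */
              int error = _request_lock(r, lkb);
      
              /* we are now certain we are the master node: confirm it,
               * clearing first_lkid so later requests on this resource
               * do not get stuck behind it */
              if (is_master(r))
                      confirm_master(r, error);
              return error;
      }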
    • [GFS2] Move rwlocks in glock.c into their own array · 37b2fa6a
      Committed by Steven Whitehouse
      This splits the rwlocks guarding the hash chains of the glock hash
      table into their own array. This will reduce memory usage in some
      cases due to better alignment, although the real reason for doing
      it is to allow the two tables to be different sizes in future
      (i.e. the locks will be sized proportionally with the max number
      of CPUs and the hash chains sized proportionally with the amount
      of physical memory).
      
      In order to allow this, the gl_bucket member of struct gfs2_glock has
      now become gl_hash, so we record the hash rather than a pointer to the
      bucket itself.
      Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
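      
      A minimal sketch of the resulting layout (sizes and names are
      placeholders): the chain heads and their rwlocks live in separate
      arrays so each can be sized independently later on.
      
      #include <linux/list.h>
      #include <linux/spinlock.h>
      
      #define GL_HASH_SIZE    (1 << 13)   /* could scale with physical RAM */
      #define GL_LOCK_SIZE    (1 << 5)    /* could scale with NR_CPUS */
      
      static struct hlist_head gl_hash_table[GL_HASH_SIZE];
      static rwlock_t gl_hash_locks[GL_LOCK_SIZE];  /* rwlock_init() each
                                                       at module init */
      
      struct gfs2_glock {
              struct hlist_node gl_list;
              unsigned int gl_hash;   /* replaces the old gl_bucket pointer */
              /* ... */
      };
      
      static inline rwlock_t *gl_lock_addr(unsigned int hash)
      {
              /* fewer locks than chains: neighbouring chains share one */
              return &gl_hash_locks[hash & (GL_LOCK_SIZE - 1)];
      }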
  4. 08 Sep 2006, 7 commits
  5. 07 Sep 2006, 4 commits
  6. 06 Sep 2006, 5 commits
  7. 05 Sep 2006, 13 commits
  8. 04 Sep 2006, 3 commits