1. 23 Aug 2012, 10 commits
    • xen/mmu: Release just the MFN list, not MFN list and part of pagetables. · 785f6231
      Konrad Rzeszutek Wilk committed
      We call memblock_reserve for [start of mfn list] -> [PMD-aligned end
      of mfn list] instead of [start of mfn list] -> [page-aligned end of
      mfn list].
      
      This has the disastrous effect that if at bootup the end of mfn_list is
      not PMD-aligned, we end up returning to memblock parts of the region
      past the mfn_list array - and those parts are the PTE tables - with
      the result that we see this at bootup:
      
      Write protecting the kernel read-only data: 10240k
      Freeing unused kernel memory: 1860k freed
      Freeing unused kernel memory: 200k freed
      (XEN) mm.c:2429:d0 Bad type (saw 1400000000000002 != exp 7000000000000000) for mfn 116a80 (pfn 14e26)
      ...
      (XEN) mm.c:908:d0 Error getting mfn 116a83 (pfn 14e2a) from L1 entry 8000000116a83067 for l1e_owner=0, pg_owner=0
      (XEN) mm.c:908:d0 Error getting mfn 4040 (pfn 5555555555555555) from L1 entry 0000000004040601 for l1e_owner=0, pg_owner=0
      .. and so on.
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
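      To make the off-by-alignment concrete, here is a small standalone
      sketch (plain userspace C; the end address is made up) of how far a
      PMD-aligned bound can overshoot the page-aligned one the fix uses:
      
        #include <stdio.h>
        
        #define PAGE_SIZE 4096ULL
        #define PMD_SIZE  (2ULL * 1024 * 1024)
        #define ALIGN_UP(x, a) (((x) + (a) - 1) & ~((a) - 1))
        
        int main(void)
        {
            /* Hypothetical, non-PMD-aligned end of the mfn_list array. */
            unsigned long long mfn_list_end = 0x1234567ULL;
        
            unsigned long long page_end = ALIGN_UP(mfn_list_end, PAGE_SIZE); /* correct */
            unsigned long long pmd_end  = ALIGN_UP(mfn_list_end, PMD_SIZE);  /* buggy   */
        
            /* Everything in [page_end, pmd_end) belongs to whatever follows
             * the array - here the PTE tables - and must not be handed back. */
            printf("overshoot: %llu KB\n", (pmd_end - page_end) / 1024);
            return 0;
        }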
    • xen/mmu: Remove from __ka space PMD entries for pagetables. · 3aca7fbc
      Konrad Rzeszutek Wilk committed
      Please first read the description in "xen/mmu: Copy and revector the
      P2M tree."
      
      At this stage, the __ka address space (which is what the old
      P2M tree was using) is partially disassembled. The cleanup_highmap
      has removed the PMD entries from 0-16MB and anything past _brk_end
      up to the max_pfn_mapped (which is the end of the ramdisk).
      
      The xen_remove_p2m_tree code and the code around it have ripped out
      the __ka mappings for the old P2M array.
      
      Here we continue on, doing the same for the area where the Xen
      page-tables were. It is safe to do, as the page-tables are addressed
      using __va.
      For good measure we delete anything that is within MODULES_VADDR
      and up to the end of the PMD.
      
      At this point the __ka only contains PMD entries for the start
      of the kernel up to __brk.
      
      [v1: Per Stefano's suggestion wrapped the MODULES_VADDR in debug]
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
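      What "removing the PMD entries" boils down to can be sketched like
      this (illustrative only: set_pmd(), __pmd() and pmd_index() are the
      real kernel primitives, the rest is simplified from what the patch
      does in arch/x86/xen/mmu.c):
      
        /* Sketch: unmap every 2MB PMD slot covering [vaddr, vaddr_end) in
         * the __ka region, e.g. from MODULES_VADDR to the end of the PMD. */
        static void clear_ka_pmds(pmd_t *pmd_page, unsigned long vaddr,
                                  unsigned long vaddr_end)
        {
                unsigned long addr;
        
                for (addr = vaddr; addr < vaddr_end; addr += PMD_SIZE)
                        set_pmd(pmd_page + pmd_index(addr), __pmd(0));
        }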
    • xen/mmu: Copy and revector the P2M tree. · 7f914062
      Konrad Rzeszutek Wilk committed
      Please first read the description in "xen/p2m: Add logic to revector a
      P2M tree to use __va leafs" patch.
      
      The 'xen_revector_p2m_tree()' function allocates a new P2M tree,
      copies the contents of the old one into it, and returns the new one.
      
      At this stage, the __ka address space (which is what the old
      P2M tree was using) is partially disassembled. The cleanup_highmap
      has removed the PMD entries from 0-16MB and anything past _brk_end
      up to the max_pfn_mapped (which is the end of the ramdisk).
      
      We have revectored the P2M tree (and the one for save/restore as well)
      to use shiny new __va addresses for the new MFNs. The xen_start_info
      has been taken care of already in 'xen_setup_kernel_pagetable()' and
      xen_start_info->shared_info in 'xen_setup_shared_info()', so
      we are free to roam and delete PMD entries - which is exactly what
      we are going to do. We rip out the __ka for the old P2M array.
      
      [v1: Fix smatch warnings]
      [v2: memset was doing 0 instead of 0xff]
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
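      Schematically, the copy-and-revector can be pictured as below (a
      sketch, not the kernel's exact code: the missing/identity special
      cases are skipped, and alloc_p2m_leaf() is a hypothetical stand-in
      for the allocation done inside xen_revector_p2m_tree()):
      
        /* Sketch: clone each P2M leaf out of the Xen-provided (__ka) array
         * into freshly allocated memory and repoint the mid-level entry at
         * the clone through its __va alias. */
        unsigned long ***p2m_top;   /* top page -> mid pages -> leaf pages */
        
        for (i = 0; i < P2M_TOP_PER_PAGE; i++) {
                for (j = 0; j < P2M_MID_PER_PAGE; j++) {
                        unsigned long *new_leaf = alloc_p2m_leaf(); /* hypothetical */
        
                        copy_page(new_leaf, p2m_top[i][j]);
                        p2m_top[i][j] = (unsigned long *)__va(__pa(new_leaf));
                }
        }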
    • xen/p2m: Add logic to revector a P2M tree to use __va leafs. · 357a3cfb
      Konrad Rzeszutek Wilk committed
      During bootup Xen supplies us with a P2M array. It sticks
      it right after the ramdisk, as can be seen with a 128GB PV guest:
      
      (certain parts removed for clarity):
      xc_dom_build_image: called
      xc_dom_alloc_segment:   kernel       : 0xffffffff81000000 -> 0xffffffff81e43000  (pfn 0x1000 + 0xe43 pages)
      xc_dom_pfn_to_ptr: domU mapping: pfn 0x1000+0xe43 at 0x7f097d8bf000
      xc_dom_alloc_segment:   ramdisk      : 0xffffffff81e43000 -> 0xffffffff925c7000  (pfn 0x1e43 + 0x10784 pages)
      xc_dom_pfn_to_ptr: domU mapping: pfn 0x1e43+0x10784 at 0x7f0952dd2000
      xc_dom_alloc_segment:   phys2mach    : 0xffffffff925c7000 -> 0xffffffffa25c7000  (pfn 0x125c7 + 0x10000 pages)
      xc_dom_pfn_to_ptr: domU mapping: pfn 0x125c7+0x10000 at 0x7f0942dd2000
      xc_dom_alloc_page   :   start info   : 0xffffffffa25c7000 (pfn 0x225c7)
      xc_dom_alloc_page   :   xenstore     : 0xffffffffa25c8000 (pfn 0x225c8)
      xc_dom_alloc_page   :   console      : 0xffffffffa25c9000 (pfn 0x225c9)
      nr_page_tables: 0x0000ffffffffffff/48: 0xffff000000000000 -> 0xffffffffffffffff, 1 table(s)
      nr_page_tables: 0x0000007fffffffff/39: 0xffffff8000000000 -> 0xffffffffffffffff, 1 table(s)
      nr_page_tables: 0x000000003fffffff/30: 0xffffffff80000000 -> 0xffffffffbfffffff, 1 table(s)
      nr_page_tables: 0x00000000001fffff/21: 0xffffffff80000000 -> 0xffffffffa27fffff, 276 table(s)
      xc_dom_alloc_segment:   page tables  : 0xffffffffa25ca000 -> 0xffffffffa26e1000  (pfn 0x225ca + 0x117 pages)
      xc_dom_pfn_to_ptr: domU mapping: pfn 0x225ca+0x117 at 0x7f097d7a8000
      xc_dom_alloc_page   :   boot stack   : 0xffffffffa26e1000 (pfn 0x226e1)
      xc_dom_build_image  : virt_alloc_end : 0xffffffffa26e2000
      xc_dom_build_image  : virt_pgtab_end : 0xffffffffa2800000
      
      So the physical memory and virtual (using __START_KERNEL_map
      addresses) layout looks like this:
      
        phys                             __ka
      /------------\                   /-------------------\
      | 0          | empty             | 0xffffffff80000000|
      | ..         |                   | ..                |
      | 16MB       | <= kernel starts  | 0xffffffff81000000|
      | ..         |                   |                   |
      | 30MB       | <= kernel ends => | 0xffffffff81e43000|
      | ..         |  & ramdisk starts | ..                |
      | 293MB      | <= ramdisk ends=> | 0xffffffff925c7000|
      | ..         |  & P2M starts     | ..                |
      | ..         |                   | ..                |
      | 549MB      | <= P2M ends    => | 0xffffffffa25c7000|
      | ..         | start_info        | 0xffffffffa25c7000|
      | ..         | xenstore          | 0xffffffffa25c8000|
      | ..         | console           | 0xffffffffa25c9000|
      | 549MB      | <= page tables => | 0xffffffffa25ca000|
      | ..         |                   |                   |
      | 550MB      | <= PGT end     => | 0xffffffffa26e1000|
      | ..         | boot stack        |                   |
      \------------/                   \-------------------/
      
      As can be seen, the ramdisk, P2M and pagetables take up a good chunk
      of __ka address space. That is a problem, since MODULES_VADDR starts
      at 0xffffffffa0000000 - and the P2M sits right in there! During
      bootup this results in the inability to load modules, with this
      error:
      
      ------------[ cut here ]------------
      WARNING: at /home/konrad/ssd/linux/mm/vmalloc.c:106 vmap_page_range_noflush+0x2d9/0x370()
      Call Trace:
       [<ffffffff810719fa>] warn_slowpath_common+0x7a/0xb0
       [<ffffffff81030279>] ? __raw_callee_save_xen_pmd_val+0x11/0x1e
       [<ffffffff81071a45>] warn_slowpath_null+0x15/0x20
       [<ffffffff81130b89>] vmap_page_range_noflush+0x2d9/0x370
       [<ffffffff81130c4d>] map_vm_area+0x2d/0x50
       [<ffffffff811326d0>] __vmalloc_node_range+0x160/0x250
       [<ffffffff810c5369>] ? module_alloc_update_bounds+0x19/0x80
       [<ffffffff810c6186>] ? load_module+0x66/0x19c0
       [<ffffffff8105cadc>] module_alloc+0x5c/0x60
       [<ffffffff810c5369>] ? module_alloc_update_bounds+0x19/0x80
       [<ffffffff810c5369>] module_alloc_update_bounds+0x19/0x80
       [<ffffffff810c70c3>] load_module+0xfa3/0x19c0
       [<ffffffff812491f6>] ? security_file_permission+0x86/0x90
       [<ffffffff810c7b3a>] sys_init_module+0x5a/0x220
       [<ffffffff815ce339>] system_call_fastpath+0x16/0x1b
      ---[ end trace fd8f7704fdea0291 ]---
      vmalloc: allocation failure, allocated 16384 of 20480 bytes
      modprobe: page allocation failure: order:0, mode:0xd2
      
      Since the __va and __ka are 1:1 up to MODULES_VADDR and
      cleanup_highmap rids __ka of the ramdisk mapping, what
      we want to do is similar - get rid of the P2M in the __ka
      address space. There are two ways of fixing this:
      
       1) All P2M lookups instead of using the __ka address would
          use the __va address. This means we can safely erase from
          __ka space the PMD pointers that point to the PFNs of the
          P2M array and be OK.
       2) Allocate a new array, copy the existing P2M into it,
          revector the P2M tree to use that, and return the old
          P2M to the memory allocator. This has the advantage that
          it sets the stage for using the XEN_ELF_NOTE_INIT_P2M
          feature. That feature allows us to set the exact virtual
          address space we want for the P2M - and allows us to
          boot as initial domain on large machines.
      
      So we pick option 2).
      
      This patch only lays the groundwork in the P2M code. The patch
      that modifies the MMU is called "xen/mmu: Copy and revector the P2M tree."
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
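      To see the collision concretely, this little standalone check (plain
      C; the numbers are lifted from the layout dump above, and
      MODULES_VADDR is the x86-64 constant named in the text) confirms the
      overlap:
      
        #include <stdio.h>
        
        int main(void)
        {
            /* Addresses taken from the layout dump above. */
            unsigned long long p2m_start     = 0xffffffff925c7000ULL;
            unsigned long long p2m_end       = 0xffffffffa25c7000ULL;
            unsigned long long modules_vaddr = 0xffffffffa0000000ULL;
        
            if (p2m_start < modules_vaddr && p2m_end > modules_vaddr)
                printf("P2M overlaps the module area by %llu MB\n",
                       (p2m_end - modules_vaddr) >> 20);
            return 0;
        }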
    • xen/mmu: Recycle the Xen provided L4, L3, and L2 pages · 488f046d
      Konrad Rzeszutek Wilk committed
      As we are not using them: we end up only using the L1 pagetables
      and grafting those onto our page-tables.
      
      [v1: Per Stefano's suggestion squashed two commits]
      [v2: Per Stefano's suggestion simplified loop]
      [v3: Fix smatch warnings]
      [v4: Add more comments]
      Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    • xen/mmu: For 64-bit do not call xen_map_identity_early · caaf9ecf
      Konrad Rzeszutek Wilk committed
      Because we do not need it. During startup Xen provides us with all
      of the initial memory mappings that we need to function.
      
      The initial mapping extends up to the bootstrap stack, which means
      we can reference via __ka everything up to item 4.f):
      
      (from xen/interface/xen.h):
      
       4. This is the order of bootstrap elements in the initial virtual region:
         a. relocated kernel image
         b. initial ram disk              [mod_start, mod_len]
         c. list of allocated page frames [mfn_list, nr_pages]
         d. start_info_t structure        [register ESI (x86)]
         e. bootstrap page tables         [pt_base, CR3 (x86)]
         f. bootstrap stack               [register ESP (x86)]
      
      (the initial ram disk may be omitted).
      
      [v1: More comments in git commit]
      Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
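      In start_info terms (pt_base and nr_pt_frames are the real fields
      from xen/interface/xen.h), the extent of that initial mapping can be
      sketched as below; treating the bootstrap stack as one extra frame
      after the page tables is an assumption read off the list above:
      
        /* Sketch: everything below this bound was mapped for us by Xen and
         * can be referenced via __ka without xen_map_identity_early(). */
        unsigned long mapped_end = xen_start_info->pt_base +
                (xen_start_info->nr_pt_frames + 1 /* bootstrap stack */) * PAGE_SIZE;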
    • xen/mmu: use copy_page instead of memcpy. · ae895ed7
      Konrad Rzeszutek Wilk committed
      After all, this is what it is there for.
      Acked-by: Jan Beulich <jbeulich@suse.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
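      The substitution has this shape (a representative before/after pair,
      assuming a page-sized copy like the ones in
      xen_setup_kernel_pagetable(); sizeof(pmd_t) * PTRS_PER_PMD is
      exactly one page):
      
        /* Before: open-coded copy of exactly one page of PMD entries. */
        memcpy(level2_ident_pgt, l2, sizeof(pmd_t) * PTRS_PER_PMD);
        
        /* After: the arch-optimized one-page helper says the same thing. */
        copy_page(level2_ident_pgt, l2);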
    • xen/mmu: Provide comments describing the _ka and _va aliasing issue · 4fac153a
      Konrad Rzeszutek Wilk committed
      Namely, that the level2_kernel_pgt (__ka virtual addresses) and
      level2_ident_pgt (__va virtual addresses) contain the same PMD
      entries, so if you modify a PTE in __ka, it will be reflected in
      __va (and vice versa).
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
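      The two aliases are easy to picture with 3.x-era x86-64 constants
      (a standalone sketch; both virtual addresses below resolve through
      the shared PMD entries to the same physical page):
      
        #include <stdio.h>
        
        #define PAGE_OFFSET      0xffff880000000000ULL  /* __va base */
        #define START_KERNEL_MAP 0xffffffff80000000ULL  /* __ka base */
        
        int main(void)
        {
            unsigned long long phys = 0x1000000ULL; /* 16MB, kernel start */
        
            /* A write through one alias is visible through the other. */
            printf("__va alias: %#llx\n", PAGE_OFFSET + phys);
            printf("__ka alias: %#llx\n", START_KERNEL_MAP + phys);
            return 0;
        }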
    • xen/mmu: The xen_setup_kernel_pagetable doesn't need to return anything. · 3699aad0
      Konrad Rzeszutek Wilk committed
      We don't need to return the new PGD - as we do not use it.
      Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
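      The resulting interface change is just this (signature as the
      function is named in the title; parameter list assumed from
      arch/x86/xen/mmu.c):
      
        /* Before: returned the new PGD, which no caller consumed. */
        pgd_t * __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn);
        
        /* After: */
        void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn);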
    • Revert "xen/x86: Workaround 64-bit hypervisor and 32-bit initial domain." and... · 51faaf2b
      Konrad Rzeszutek Wilk committed
      Revert "xen/x86: Workaround 64-bit hypervisor and 32-bit initial domain." and "xen/x86: Use memblock_reserve for sensitive areas."
      
      This reverts commit 806c312e and
      commit 59b29440.
      
      And also documents setup.c and why we want to do it that way, which
      is that we tried to make the memblock_reserve more selective so
      that it would be clear what region is reserved. Sadly we ran
      into the problem wherein on a 64-bit hypervisor with a 32-bit
      initial domain, the pt_base has the cr3 value, which is not
      necessarily where the pagetable starts! As Jan put it: "
      Actually, the adjustment turns out to be correct: The page
      tables for a 32-on-64 dom0 get allocated in the order "first L1",
      "first L2", "first L3", so the offset to the page table base is
      indeed 2. When reading xen/include/public/xen.h's comment
      very strictly, this is not a violation (since nothing is said there
      that the first thing in the page table space is pointed to by
      pt_base; I admit that this seems to be implied though, namely
      I do think it is implied that the page table space is the
      range [pt_base, pt_base + nr_pt_frames), whereas that
      range here indeed is [pt_base - 2, pt_base - 2 + nr_pt_frames),
      which - without a priori knowledge - the kernel would have
      difficulty figuring out)." - so let's just fall back to the
      easy way and reserve the whole region.
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
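      For reference, the adjustment Jan describes would have looked
      roughly like this (a sketch only - this is the fragile variant the
      commit walks away from in favor of reserving the whole region):
      
        /* 32-on-64 dom0: the first L1 and first L2 frames precede the frame
         * pt_base points at, so the page-table space really is
         * [pt_base - 2 frames, pt_base - 2 frames + nr_pt_frames). */
        unsigned long pt_start = xen_start_info->pt_base - 2 * PAGE_SIZE;
        
        memblock_reserve(__pa(pt_start),
                         xen_start_info->nr_pt_frames * PAGE_SIZE);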
  2. 22 Aug 2012, 3 commits
  3. 22 Jul 2012, 7 commits
  4. 21 Jul 2012, 4 commits
    • Merge branch 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus · d75e2c9a
      Linus Torvalds committed
      Pull late MIPS fixes from Ralf Baechle:
       "This fixes a number of lose ends in the MIPS code and various bug
        fixes.
      
        Aside from dropping some patches that should not be in this pull
        request, everything has sat in -next for quite a while and there are no known
        issues.
      
        The biggest patch in this patch set moves the allocation of an array
        that is aliased to a function (for runtime generated code) to
        assembler code.  This avoids an issue with certain toolchains when
        building for microMIPS."
      
      * 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus: (35 commits)
        MIPS: PCI: Move fixups from __init to __devinit.
        MIPS: Fix bug.h MIPS build regression
        MIPS: sync-r4k: remove redundant irq operation
        MIPS: smp: Warn on too early irq enable
        MIPS: call set_cpu_online() on cpu being brought up with irq disabled
        MIPS: call ->smp_finish() a little late
        MIPS: Yosemite: delay irq enable to ->smp_finish()
        MIPS: SMTC: delay irq enable to ->smp_finish()
        MIPS: BMIPS: delay irq enable to ->smp_finish()
        MIPS: Octeon: delay enable irq to ->smp_finish()
        MIPS: Oprofile: Fix build as a module.
        MIPS: BCM63XX: Fix BCM6368 IPSec clock bit
        MIPS: perf: Fix build error caused by unused counters_per_cpu_to_total()
        MIPS: Fix Magic SysRq L kernel crash.
        MIPS: BMIPS: Fix duplicate header inclusion.
        mips: mark const init data with __initconst instead of __initdata
        MIPS: cmpxchg.h: Add missing include
        MIPS: Malta may also be equipped with MIPS64 R2 processors.
        MIPS: Fix typo multipy -> multiply
        MIPS: Cavium: Fix duplicate ARCH_SPARSEMEM_ENABLE in kconfig.
        ...
    • Merge tag 'dm-3.5-fixes-2' of git://git.kernel.org/pub/scm/linux/kernel/git/agk/linux-dm · 93517374
      Linus Torvalds committed
      Pull device-mapper discard fixes from Alasdair G Kergon:
        - avoid a crash in dm-raid1 when discards coincide with mirror
          recovery;
        - avoid discarding shared data that's still needed in dm-thin;
        - don't guarantee that discarded blocks will be wiped in dm-raid1.
      
      * tag 'dm-3.5-fixes-2' of git://git.kernel.org/pub/scm/linux/kernel/git/agk/linux-dm:
        dm raid1: set discard_zeroes_data_unsupported
        dm thin: do not send discards to shared blocks
        dm raid1: fix crash with mirror recovery and discard
    • Merge branch 'for-linus' of git://git.open-osd.org/linux-open-osd · ce9f8d6b
      Linus Torvalds committed
      Pull pnfs/ore fixes from Boaz Harrosh:
       "These are catastrophic fixes to the pnfs objects-layout that were just
        discovered.  They are also destined for @stable.
      
        I have found these and worked on them at around RC1 time but
        unfortunately went to the hospital for kidney stones and had a very
        slow recovery.  I refrained from sending them as-is, before proper
        testing, and sure enough I found a bug just yesterday.
      
        So now they are all well tested, and have my sign-off.  Other than
        fixing the problem at hand, and assuming there are no bugs in the new
        code, there is low risk to any surrounding code.  And in any case they
        affect only those paths that are now broken.  That is RAID5 in the pnfs
        objects-layout code.  It does also affect exofs (which was not broken),
        but I have tested exofs and it is lower priority than objects-layout
        because no one is using exofs, while objects-layout has lots of users."
      
      * 'for-linus' of git://git.open-osd.org/linux-open-osd:
        pnfs-obj: Fix __r4w_get_page when offset is beyond i_size
        pnfs-obj: don't leak objio_state if ore_write/read fails
        ore: Unlock r4w pages in exact reverse order of locking
        ore: Remove support of partial IO request (NFS crash)
        ore: Fix NFS crash by supporting any unaligned RAID IO
    • Merge tag 'upstream-3.5-rc8' of git://git.infradead.org/linux-ubifs · 17934162
      Linus Torvalds committed
      Pull UBIFS free space fix-up bugfix from Artem Bityutskiy:
       "It's been reported already twice recently:
      
          http://lists.infradead.org/pipermail/linux-mtd/2012-May/041408.html
          http://lists.infradead.org/pipermail/linux-mtd/2012-June/042422.html
      
        and we finally have the fix.  I am quite confident the fix is correct
        because I could reproduce the problem with nandsim and verify the fix.
        It was also verified by Iwo (the reporter).
      
        I am also confident that it is OK to merge the fix this late, because
        this patch affects only the fixup functionality, which is not used by
        most users."
      
      * tag 'upstream-3.5-rc8' of git://git.infradead.org/linux-ubifs:
        UBIFS: fix a bug in empty space fix-up
  5. 20 Jul 2012, 10 commits
    • dm raid1: set discard_zeroes_data_unsupported · 7c8d3a42
      Mikulas Patocka committed
      We can't guarantee that REQ_DISCARD on dm-mirror zeroes the data even if
      the underlying disks support zero on discard.  So this patch sets
      ti->discard_zeroes_data_unsupported.
      
      For example, if the mirror is in the process of resynchronizing, it may
      happen that kcopyd reads a piece of data, then discard is sent on the
      same area and then kcopyd writes the piece of data to another leg.
      Consequently, the data is not zeroed.
      
      The flag was made available by commit 983c7db3
      (dm crypt: always disable discard_zeroes_data).
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Cc: stable@kernel.org
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
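      The patch is essentially the one-liner below in the mirror target's
      constructor (exact placement is an assumption, following how dm
      crypt set the same flag in the commit referenced above):
      
        /* dm-mirror cannot promise that discarded blocks read back as
         * zeroes, so tell the stack not to rely on it. */
        ti->discard_zeroes_data_unsupported = true;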
    • dm thin: do not send discards to shared blocks · 650d2a06
      Mikulas Patocka committed
      When process_discard receives a partial discard that doesn't cover a
      full block, it sends this discard down to that block. Unfortunately, the
      block can be shared and the discard would corrupt the other snapshots
      sharing this block.
      
      This patch detects block sharing and, rather than sending the
      discard down to a shared block, ends it with success.
      
      The above change means that if the device supports discard it can't be
      guaranteed that a discard request zeroes data. Therefore, we set
      ti->discard_zeroes_data_unsupported.
      
      Thin target discard support with this bug arrived in commit
      104655fd (dm thin: support discards).
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Cc: stable@kernel.org
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
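      A hedged sketch of the check described above - in the 3.5-era
      dm-thin code the block lookup result carries a 'shared' bit, and the
      names below follow the commit's own process_discard/remap_and_issue
      vocabulary:
      
        /* Partial discard (doesn't cover a whole block): only pass it down
         * when the block is exclusively ours; otherwise complete it without
         * touching data that other snapshots still share. */
        if (!lookup_result.shared)
                remap_and_issue(tc, bio, lookup_result.block);
        else
                bio_endio(bio, 0);      /* success, nothing discarded */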
    • dm raid1: fix crash with mirror recovery and discard · 751f188d
      Mikulas Patocka committed
      This patch fixes a crash when a discard request is sent during mirror
      recovery.
      
      Firstly, some background.  Generally, the following sequence happens during
      mirror synchronization:
      - function do_recovery is called
      - do_recovery calls dm_rh_recovery_prepare
      - dm_rh_recovery_prepare uses a semaphore to limit the number of
        simultaneously recovered regions (by default the semaphore value is 1,
        so only one region at a time is recovered)
      - dm_rh_recovery_prepare calls __rh_recovery_prepare,
        __rh_recovery_prepare asks the log driver for the next region to
        recover. Then, it sets the region state to DM_RH_RECOVERING. If there
        are no pending I/Os on this region, the region is added to
        quiesced_regions list. If there are pending I/Os, the region is not
        added to any list. It is added to the quiesced_regions list later (by
        dm_rh_dec function) when all I/Os finish.
      - when the region is on quiesced_regions list, there are no I/Os in
        flight on this region. The region is popped from the list in
        dm_rh_recovery_start function. Then, a kcopyd job is started in the
        recover function.
      - when the kcopyd job finishes, recovery_complete is called. It calls
        dm_rh_recovery_end. dm_rh_recovery_end adds the region to
        recovered_regions or failed_recovered_regions list (depending on
        whether the copy operation was successful or not).
      
      The above mechanism assumes that if the region is in DM_RH_RECOVERING
      state, no new I/Os are started on this region. When I/O is started,
      dm_rh_inc_pending is called, which increases reg->pending count. When
      I/O is finished, dm_rh_dec is called. It decreases reg->pending count.
      If the count is zero and the region was in DM_RH_RECOVERING state,
      dm_rh_dec adds it to the quiesced_regions list.
      
      Consequently, if we call dm_rh_inc_pending/dm_rh_dec while the region is
      in DM_RH_RECOVERING state, it could be added to quiesced_regions list
      multiple times or it could be added to this list when kcopyd is copying
      data (it is assumed that the region is not on any list while kcopyd does
      its jobs). This results in memory corruption and crash.
      
      There already exist bypasses for REQ_FLUSH requests: REQ_FLUSH requests
      do not belong to any region, so they are always added to the sync list
      in do_writes. dm_rh_inc_pending does not increase count for REQ_FLUSH
      requests. In mirror_end_io, dm_rh_dec is never called for REQ_FLUSH
      requests. These bypasses avoid the crash possibility described above.
      
      These bypasses were improperly implemented for REQ_DISCARD when
      the mirror target gained discard support in commit
      5fc2ffea (dm raid1: support discard).
      
      In do_writes, REQ_DISCARD requests are always added to the sync queue and
      immediately dispatched (even if the region is in DM_RH_RECOVERING).  However,
      dm_rh_inc and dm_rh_dec are called for REQ_DISCARD requests.  So this violates
      the rule that no I/Os are started on DM_RH_RECOVERING regions, and causes the
      list corruption described above.
      
      This patch changes it so that REQ_DISCARD requests follow the same path
      as REQ_FLUSH. This avoids the crash.
      
      Reference: https://bugzilla.redhat.com/837607
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Cc: stable@kernel.org
      Signed-off-by: Alasdair G Kergon <agk@redhat.com>
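      The shape of the fix, as a sketch (the real patch touches the
      do_writes, dm_rh_inc_pending and mirror_end_io paths the text names;
      the test below is the common idiom in that era's bio code):
      
        /* Let REQ_DISCARD take the same region-accounting bypass as
         * REQ_FLUSH, so no discard is ever counted against a region in
         * DM_RH_RECOVERING state. */
        if (bio->bi_rw & (REQ_FLUSH | REQ_DISCARD))
                continue;       /* skip the region state machine for this bio */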
    • pnfs-obj: Fix __r4w_get_page when offset is beyond i_size · c999ff68
      Boaz Harrosh committed
      It is very common for the end of the file to be unaligned on
      stripe size. But since we know it's beyond the file's end, the
      XOR should be performed with all zeros.
      
      The old code used to just read zeros out of the OSD devices, which is
      a great waste. But what scares me more about this situation is that
      we now have pages attached to the file's mapping that are beyond
      i_size. I don't like the kind of bugs this calls for.
      
      Fix both birds by returning a global zero_page if the offset is
      beyond i_size.
      
      TODO:
      	Change the API to ->__r4w_get_page() so a NULL can be
      	returned without being considered an error, since the XOR API
      	treats NULL entries as zero_pages.
      
      [Bug since 3.2. Should apply the same way to all Kernels since]
      CC: Stable Tree <stable@kernel.org>
      Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
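      A sketch of the resulting check in __r4w_get_page() (simplified;
      the inode and uptodate out-parameter names are assumptions following
      the callback convention the TODO describes):
      
        /* Past i_size there is nothing on disk to read, and the XOR engine
         * treats zero pages as neutral - so hand back the global zero page. */
        if (offset >= i_size_read(inode)) {
                *uptodate = true;
                return ZERO_PAGE(0);
        }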
    • pnfs-obj: don't leak objio_state if ore_write/read fails · 9909d45a
      Boaz Harrosh committed
      [Bug since 3.2 Kernel]
      CC: Stable Tree <stable@kernel.org>
      Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
    • ore: Unlock r4w pages in exact reverse order of locking · 537632e0
      Boaz Harrosh committed
      The read-4-write pages are locked in ascending address order, but
      were unlocked in whatever way was easiest to code. Fix that: locks
      should be released in the opposite order of locking, i.e. descending
      address order.
      
      I have not hit this deadlock. It was found by inspecting the debug
      print-outs. I suspect there is a higher-level lock at the caller
      that protects us, but fix it regardless.
      Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
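      The fix amounts to walking the page array backwards on release;
      roughly (a sketch, with a hypothetical pages[]/nr_pages pair):
      
        /* Pages were locked in ascending address order; unlock them in
         * exact reverse order to keep the lock ordering deadlock-free. */
        for (i = nr_pages - 1; i >= 0; i--)
                unlock_page(pages[i]);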
    • ore: Remove support of partial IO request (NFS crash) · 62b62ad8
      Boaz Harrosh committed
      Due to OOM situations the ore might fail to allocate all the resources
      needed for IO of the full request. If some progress was possible
      it would proceed with a partial/short request, for the sake of
      forward progress.
      
      Since this crashes NFS-core, and exofs is just fine without it, just
      remove this contraption and fail.
      
      TODO:
      	Support real forward progress with some reserved allocations
      	of resources, such as mem pools and/or bio_sets
      
      [Bug since 3.2 Kernel]
      CC: Stable Tree <stable@kernel.org>
      CC: Benny Halevy <bhalevy@tonian.com>
      Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
    • ore: Fix NFS crash by supporting any unaligned RAID IO · 9ff19309
      Boaz Harrosh committed
      In RAID_5/6 we used to not permit an IO whose end byte is not
      stripe_size aligned and which spans more than one stripe;
      i.e. the caller must check if, after submission, the actual
      transferred byte count is shorter, and would then need to resubmit
      a new IO with the remainder.
      
      Exofs supports this, and NFS was supposed to support this
      as well with its short-write mechanism. But late testing has
      exposed a CRASH when this is used with non-RPC layout-drivers.
      
      The change at the NFS side is deep and risky; in its place the fix
      at ORE to lift the limitation is actually clean and simple.
      So here it is below.
      
      The principle here is that in the case of IO unaligned at
      both ends, beginning and end, we will send two read requests:
      one like the old code, before the calculation of the first stripe,
      and also a new one, before the calculation of the last stripe.
      If either "boundary" is aligned, or the complete IO is within a
      single stripe, we do a single read like before.
      
      The code stays clean and simple by splitting the old _read_4_write
      into 3 even parts (see the sketch at the end of this entry):
      1. _read_4_write_first_stripe
      2. _read_4_write_last_stripe
      3. _read_4_write_execute
      
      We call 1+3 at the same place as before, 2+3 before the last
      stripe, and in the case of everything within a single stripe
      1+2+3 are performed additively.
      
      Why did I not think of it before? Well, I had a stroke of
      genius, because I have stared at this code for 2 years and did
      not find this simple solution till today. Not that I did not try.
      
      This solution is much better for NFS than the previous supposed
      solution, because there the short write was dealt with out-of-band
      after IO_done, which would cause a seeky IO pattern, whereas here
      we execute in order. In both solutions we do 2 separate reads; only
      here we do it within a single IO request. (And actually combine two
      writes into a single submission.)
      
      NFS/exofs code need not change, since the ORE API communicates the
      new shorter length on return; what will happen is simply that this
      case does not occur anymore.
      
      hurray!!
      
      [Stable this is an NFS bug since 3.2 Kernel should apply cleanly]
      CC: Stable Tree <stable@kernel.org>
      Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
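      The resulting control flow, using the three function names from the
      list above (a sketch; the guards in the comments paraphrase the
      description rather than quote the code):
      
        /* Unaligned start: read-modify-write data for the first stripe. */
        _read_4_write_first_stripe(ios);
        /* Unaligned end, and the IO spans more than one stripe: do the
         * same for the last stripe. */
        _read_4_write_last_stripe(ios);
        /* Submit whatever reads the two helpers queued up. */
        _read_4_write_execute(ios);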
    • UBIFS: fix a bug in empty space fix-up · c6727932
      Artem Bityutskiy committed
      UBIFS has a feature called "empty space fix-up", which is a quirk to
      work around limitations of dumb flasher programs - namely, those
      flashers that are unable to skip NAND pages full of 0xFFs while
      flashing, which leaves the empty space at the end of half-filled
      eraseblocks unusable for UBIFS. This feature is relatively new
      (introduced in v3.0).
      
      The fix-up routine (fixup_free_space()) is executed only once at the very first
      mount if the superblock has the 'space_fixup' flag set (can be done with -F
      option of mkfs.ubifs). It basically reads all the UBIFS data and metadata and
      writes it back to the same LEB. The routine assumes the image is pristine and
      does not have anything in the journal.
      
      There was a bug in 'fixup_free_space()' where it fixed up the log
      incorrectly. All but one LEB of the log of a pristine file-system
      are empty, and that one contains just a commit start node.
      'fixup_free_space()' simply unmapped this LEB, which wiped the
      commit start node. As a result, some users were unable to mount the
      file-system the next time, with the following symptom:
      
      UBIFS error (pid 1): replay_log_leb: first log node at LEB 3:0 is not CS node
      UBIFS error (pid 1): replay_log_leb: log error detected while replaying the log at LEB 3:0
      
      The root cause of this bug was that 'fixup_free_space()' wrongly
      assumed that the beginning of empty space in the log head
      (c->lhead_offs) was known on mount. However, that is not the case -
      it was always 0. UBIFS does not store it in the master node, and
      finds it out by scanning the log on every mount.
      
      The fix is simple - just pass commit start node size instead of 0 to
      'fixup_leb()'.
      Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@linux.intel.com>
      Cc: stable@vger.kernel.org [v3.0+]
      Reported-by: Iwo Mergler <Iwo.Mergler@netcommwireless.com>
      Tested-by: Iwo Mergler <Iwo.Mergler@netcommwireless.com>
      Reported-by: James Nute <newten82@gmail.com>
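      The fix itself is the one-liner sketched here (UBIFS_CS_NODE_SZ is
      the real constant for a commit start node; using c->lhead_lnum for
      the log head LEB and the exact call site are assumptions):
      
        /* The log head LEB contains a commit start node, so fix it up with
         * the CS node size as the used length instead of 0. */
        err = fixup_leb(c, c->lhead_lnum, UBIFS_CS_NODE_SZ);    /* was: ..., 0 */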
    • Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client · 85efc72a
      Linus Torvalds committed
      Pull last minute Ceph fixes from Sage Weil:
       "The important one fixes a bug in the socket failure handling behavior
        that was turned up in some recent failure injection testing.  The
        other two are minor bug fixes."
      
      * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client:
        rbd: endian bug in rbd_req_cb()
        rbd: Fix ceph_snap_context size calculation
        libceph: fix messenger retry
  6. 19 Jul 2012, 6 commits