1. 17 April 2008 (1 commit)
    • [POWERPC] Move phys_addr_t definition into asm/types.h · d04ceb3f
      Committed by Kumar Gala
      Moved phys_addr_t out of mmu-*.h and into asm/types.h so we can use it in
      places that before would have caused recursive includes.
      
      For example to use phys_addr_t in <asm/page.h> we would have included
      <asm/mmu.h> which would have possibly included <asm/mmu-hash64.h> which
      includes <asm/page.h>.  Wheeee recursive include.
      
      CONFIG_PHYS_64BIT is a bit counterintuitive in light of ppc64 systems
      and thus the config option is only used for ppc32 systems with >32-bit
      physical addresses (44x, 85xx, 745x, etc.).
      Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  2. 24 January 2008 (1 commit)
    • [POWERPC] Provide a way to protect 4k subpages when using 64k pages · fa28237c
      Committed by Paul Mackerras
      Using 64k pages on 64-bit PowerPC systems makes life difficult for
      emulators that are trying to emulate an ISA, such as x86, which uses a
      smaller page size, since the emulator can no longer use the MMU and
      the normal system calls for controlling page protections.  Of course,
      the emulator can emulate the MMU by checking and possibly remapping
      the address for each memory access in software, but that is pretty
      slow.
      
      This provides a facility for such programs to control the access
      permissions on individual 4k sub-pages of 64k pages.  The idea is
      that the emulator supplies an array of protection masks to apply to a
      specified range of virtual addresses.  These masks are applied at the
      level where hardware PTEs are inserted into the hardware page table
      based on the Linux PTEs, so the Linux PTEs are not affected.  Note
      that this new mechanism does not allow any access that would otherwise
      be prohibited; it can only prohibit accesses that would otherwise be
      allowed.  This new facility is only available on 64-bit PowerPC and
      only when the kernel is configured for 64k pages.
      
      The masks are supplied using a new subpage_prot system call, which
      takes a starting virtual address and length, and a pointer to an array
      of protection masks in memory.  The array has a 32-bit word per 64k
      page to be protected; each 32-bit word consists of 16 2-bit fields,
      for which 0 allows any access (that is otherwise allowed), 1 prevents
      write accesses, and 2 or 3 prevent any access.
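
      A minimal user-space sketch of calling the new system call follows.
      The syscall number and the assumption that the word's low-order 2-bit
      field covers the lowest-addressed 4k subpage are mine, not the
      commit's; check <asm/unistd.h> before relying on either.

          #include <stdint.h>
          #include <stdio.h>
          #include <unistd.h>
          #include <sys/syscall.h>

          #ifndef __NR_subpage_prot
          #define __NR_subpage_prot 310           /* assumed powerpc number */
          #endif

          int main(void)
          {
                  /* One 32-bit word per 64k page: 16 two-bit fields.
                   * Field 0 = 1 here, i.e. writes to that subpage fault. */
                  uint32_t map[1] = { 0x1 };
                  unsigned long addr = 0x10000000UL;  /* hypothetical 64k-aligned VA */

                  if (syscall(__NR_subpage_prot, addr, 0x10000UL, map) != 0)
                          perror("subpage_prot");
                  return 0;
          }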
      
      Implicit in this is that the regions of the address space that are
      protected are switched to use 4k hardware pages rather than 64k
      hardware pages (on machines with hardware 64k page support).  In fact
      the whole process is switched to use 4k hardware pages when the
      subpage_prot system call is used, but this could be improved in future
      to switch only the affected segments.
      
      The subpage protection bits are stored in a 3 level tree akin to the
      page table tree.  The top level of this tree is stored in a structure
      that is appended to the top level of the page table tree, i.e., the
      pgd array.  Since it will often only be 32-bit addresses (below 4GB)
      that are protected, the pointers to the first four bottom level pages
      are also stored in this structure (each bottom level page contains the
      protection bits for 1GB of address space), so the protection bits for
      addresses below 4GB can be accessed with one fewer load than those
      for higher addresses.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  3. 17 January 2008 (2 commits)
  4. 15 January 2008 (1 commit)
    • [POWERPC] Fix boot failure on POWER6 · dfbe0d3b
      Committed by Paul Mackerras
      Commit 473980a9 added a call to clear
      the SLB shadow buffer before registering it.  Unfortunately this means
      that we clear out the entries that slb_initialize has previously set in
      there.  On POWER6, the hypervisor uses the SLB shadow buffer when doing
      partition switches, and that means that after the next partition switch,
      each non-boot CPU has no SLB entries to map the kernel text and data,
      which causes it to crash.
      
      This fixes it by reverting most of 473980a9 and instead clearing the
      3rd entry explicitly in slb_initialize.  This fixes the problem that
      473980a9 was trying to solve, but without breaking POWER6.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  5. 11 January 2008 (1 commit)
  6. 11 December 2007 (1 commit)
  7. 12 October 2007 (1 commit)
    • [POWERPC] Use 1TB segments · 1189be65
      Committed by Paul Mackerras
      This makes the kernel use 1TB segments for all kernel mappings and for
      user addresses of 1TB and above, on machines which support them
      (currently POWER5+, POWER6 and PA6T).
      
      We detect that the machine supports 1TB segments by looking at the
      ibm,processor-segment-sizes property in the device tree.
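
      Schematically, the detection boils down to scanning that property for
      a 1TB entry; the assumption that each cell encodes a segment-size
      shift is mine, and the in-kernel parsing is more involved:

          /* Sketch: return 1 if any cell advertises a 2^40-byte segment. */
          static int segment_sizes_include_1tb(const u32 *prop, int ncells)
          {
                  int i;

                  for (i = 0; i < ncells; i++)
                          if (prop[i] == 40)      /* 2^40 bytes = 1TB */
                                  return 1;
                  return 0;
          }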
      
      We don't currently use 1TB segments for user addresses < 1T, since
      that would effectively prevent 32-bit processes from using huge pages
      unless we also had a way to revert to using 256MB segments.  That
      would be possible but would involve extra complications (such as
      keeping track of which segment size was used when HPTEs were inserted)
      and is not addressed here.
      
      Parts of this patch were originally written by Ben Herrenschmidt.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  8. 03 October 2007 (1 commit)
  9. 03 August 2007 (1 commit)
    • [POWERPC] Fixes for the SLB shadow buffer code · 67439b76
      Committed by Michael Neuling
      On a machine with hardware 64kB pages and a kernel configured for a
      64kB base page size, we need to change the vmalloc segment from 64kB
      pages to 4kB pages if some driver creates a non-cacheable mapping in
      the vmalloc area.  However, we never updated the SLB shadow buffer.
      This fixes it.  Thanks to paulus for finding this.
      
      Also added some write barriers to ensure the shadow buffer contents
      are always consistent.
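
      The ordering those barriers enforce is roughly the following (the type
      and helper names are hypothetical); the point is that the hypervisor
      must never see a valid ESID paired with stale companion data:

          static void shadow_entry_update(struct shadow_entry *e,   /* hypothetical */
                                          unsigned long esid, unsigned long vsid)
          {
                  e->esid = 0;            /* 1. invalidate the entry            */
                  smp_wmb();
                  e->vsid = vsid;         /* 2. update data while it is invalid */
                  smp_wmb();
                  e->esid = esid;         /* 3. publish the consistent entry    */
                  smp_wmb();
          }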
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  10. 25 June 2007 (1 commit)
  11. 14 June 2007 (1 commit)
  12. 10 May 2007 (1 commit)
  13. 09 May 2007 (1 commit)
    • [POWERPC] Introduce address space "slices" · d0f13e3c
      Committed by Benjamin Herrenschmidt
      The basic issue is to be able to do what hugetlbfs does but with
      different page sizes for some other special filesystems; more
      specifically, my need is:
      
       - Huge pages
      
       - SPE local store mappings using 64K pages on a 4K base page size
      kernel on Cell
      
       - Some special 4K segments in 64K-page kernels for mapping a dodgy
      type of powerpc-specific infiniband hardware that requires 4K MMU
      mappings for various reasons I won't explain here.
      
      The main issues are:
      
       - To maintain/keep track of the page size per "segment" (we can
      only have one page size per segment on powerpc; segments are 256MB
      divisions of the address space).
      
       - To make sure special mappings stay within their allotted
      "segments" (including MAP_FIXED crap)
      
       - To make sure everybody else doesn't mmap/brk/grow_stack into a
      "segment" that is used for a special mapping
      
      Some of the necessary mechanisms to handle that were present in the
      hugetlbfs code, but mostly in ways not suitable for anything else.
      
      The patch relies on some changes to the generic get_unmapped_area()
      that just got merged.  It still hijacks hugetlb callbacks here or
      there as the generic code hasn't been entirely cleaned up yet but
      that shouldn't be a problem.
      
      So what is a slice?  Well, I re-used the mechanism used formerly by our
      hugetlbfs implementation which divides the address space in
      "meta-segments" which I called "slices".  The division is done using
      256MB slices below 4G, and 1T slices above.  Thus the address space is
      divided currently into 16 "low" slices and 16 "high" slices.  (Special
      case: high slice 0 is the area between 4G and 1T).
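
      For orientation, the index arithmetic that layout implies, as a sketch
      (the kernel has its own macros for this):

          /* 16 low slices of 256MB below 4GB, 16 high slices of 1TB above. */
          static unsigned int addr_to_slice(unsigned long addr)
          {
                  if (addr < (1UL << 32))
                          return addr >> 28;  /* low slice 0..15              */
                  return addr >> 40;          /* high slice 0..15; everything */
                                              /* from 4GB to 1TB lands in 0   */
          }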
      
      Doing so simplifies significantly the tracking of segments and avoids
      having to keep track of all the 256MB segments in the address space.
      
      While I used the "concepts" of hugetlbfs, I mostly re-implemented
      everything in a more generic way and "ported" hugetlbfs to it.
      
      Slices can have an associated page size, which is encoded in the mmu
      context and used by the SLB miss handler to set the segment sizes.  The
      hash code currently doesn't care; it has a specific check for hugepages,
      though I might add a mechanism to provide per-slice hash mapping
      functions in the future.
      
      The slice code provides a pair of "generic" get_unmapped_area() (bottom-up
      and top-down) functions that should work with any slice size.  There is
      some trickiness here, so I would appreciate it if people could have a look at the
      implementation of these and let me know if I got something wrong.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  14. 27 April 2007 (1 commit)
    • [POWERPC] Prepare for splitting up mmu.h by MMU type · 8d2169e8
      Committed by David Gibson
      Currently asm-powerpc/mmu.h has definitions for the 64-bit hash based
      MMU.  If CONFIG_PPC64 is not set, it instead includes asm-ppc/mmu.h
      which contains a particularly horrible mess of #ifdefs giving the
      definitions for all the various 32-bit MMUs.
      
      It would be nice to have the low level definitions for each MMU type
      neatly in their own separate files.  It would also be good to wean
      arch/powerpc off dependence on the old asm-ppc/mmu.h.
      
      This patch makes a start on such a cleanup by moving the definitions
      for the 64-bit hash MMU to their own file, asm-powerpc/mmu_hash64.h.
      Definitions for the other MMUs still all come from asm-ppc/mmu.h,
      however each MMU type can now be moved over, one by one, to its own
      file, in the process cleaning it up and stripping it of cruft no
      longer necessary in arch/powerpc.
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  15. 24 April 2007 (1 commit)
    • [POWERPC] spufs: make spu page faults not block scheduling · 57dace23
      Committed by Arnd Bergmann
      Until now, we have always entered the spu page fault handler
      with a mutex for the spu context held. This has multiple
      bad side-effects:
      - it becomes impossible to suspend the context during
        page faults
      - if an spu program attempts to access its own mmio
        areas through DMA, we get an immediate livelock when
        the nopage function tries to acquire the same mutex
      
      This patch makes the page fault logic operate on a
      struct spu_context instead of a struct spu, and moves it
      from spu_base.c to a new file fault.c inside of spufs.
      
      We now also need to copy the dar and dsisr contents
      of the last fault into the saved context to have them accessible
      accessible in case we schedule out the context before
      activating the page fault handler.
      Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
  16. 07 February 2007 (1 commit)
  17. 16 October 2006 (1 commit)
  18. 28 June 2006 (1 commit)
  19. 15 June 2006 (1 commit)
    • powerpc: Use 64k pages without needing cache-inhibited large pages · bf72aeba
      Committed by Paul Mackerras
      Some POWER5+ machines can do 64k hardware pages for normal memory but
      not for cache-inhibited pages.  This patch lets us use 64k hardware
      pages for most user processes on such machines (assuming the kernel
      has been configured with CONFIG_PPC_64K_PAGES=y).  User processes
      start out using 64k pages and get switched to 4k pages if they use any
      non-cacheable mappings.
      
      With this, we use 64k pages for the vmalloc region and 4k pages for
      the imalloc region.  If anything creates a non-cacheable mapping in
      the vmalloc region, the vmalloc region will get switched to 4k pages.
      I don't know of any driver other than the DRM that would do this,
      though, and these machines don't have AGP.
      
      When a region gets switched from 64k pages to 4k pages, we do not have
      to clear out all the 64k HPTEs from the hash table immediately.  We
      use the _PAGE_COMBO bit in the Linux PTE to indicate whether the page
      was hashed in as a 64k page or a set of 4k pages.  If hash_page is
      trying to insert a 4k page for a Linux PTE and it sees that it has
      already been inserted as a 64k page, it first invalidates the 64k HPTE
      before inserting the 4k HPTE.  The hash invalidation routines also use
      the _PAGE_COMBO bit, to determine whether to look for a 64k HPTE or a
      set of 4k HPTEs to remove.  With those two changes, we can tolerate a
      mix of 4k and 64k HPTEs in the hash table, and they will all get
      removed when the address space is torn down.
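
      Schematically, the insertion path behaves like this; only _PAGE_COMBO
      comes from the text above, while _PAGE_HASHPTE (the "already hashed"
      flag) and the helpers are my additions:

          /* Inserting a 4k HPTE for a Linux PTE that may previously have
           * been hashed in as a single 64k HPTE. */
          if ((pte_val(pte) & _PAGE_HASHPTE) && !(pte_val(pte) & _PAGE_COMBO)) {
                  invalidate_64k_hpte(ea, &pte);      /* hypothetical helper */
                  pte = pte_mkcombo(pte);             /* hypothetical helper */
          }
          insert_4k_hpte(ea, &pte);                   /* hypothetical helper */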
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  20. 09 June 2006 (2 commits)
    • [PATCH] powerpc: Fix buglet with MMU hash management · c5cf0e30
      Committed by Benjamin Herrenschmidt
      Our MMU hash management code would not set the "C" bit (changed bit) in
      the hardware PTE when updating a RO PTE into a RW PTE. That would cause
      the hardware to possibly do a write back to the hash table to set it on
      the first store access, which in addition to being a performance issue,
      might also hit a bug when running with native hash management (non-HV)
      as our code is specifically optimized for the case where no write back
      happens.
      
      Thus there is a very small theoretical window where a hash PTE can become
      corrupted if that HPTE has just been upgraded to read-write, a store
      access happens on it, and that races with another processor evicting
      that same slot. Since eviction (caused by an almost full hash) is
      extremely rare, the bug is very unlikely to happen fortunately.
      
      This is fixed by allowing the update of the protection bits in the native
      hash handling to also set (but not clear) the "C" bit and, in order to
      also improve performance in the general case, by always setting that
      bit on newly inserted hash PTE so that writeback really never happens.
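
      In effect, the HPTE-building path gains something like the line below
      (schematic; HPTE_R_C is the kernel's name for the changed bit, and the
      surrounding variable is not real code):

          /* Pre-set the changed ("C") bit on every newly inserted hash PTE
           * so the hardware never needs a writeback to set it. */
          hpte_r |= HPTE_R_C;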
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • [PATCH] powerpc vdso updates · a5bba930
      Committed by Benjamin Herrenschmidt
      This patch cleans up some locking & error handling in the ppc vdso and
      moves the vdso base pointer from the thread struct to the mm context
      where it more logically belongs. It brings the powerpc implementation
      closer to Ingo's new x86 one and also adds an arch_vma_name() function
      allowing us to print [vdso] in /proc/<pid>/maps if Ingo's x86 vdso patch is
      also applied.
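
      The arch_vma_name() hook amounts to roughly this (sketch; the
      vdso_base field name follows from the base pointer moving into the mm
      context, but treat the details as an assumption):

          const char *arch_vma_name(struct vm_area_struct *vma)
          {
                  if (vma->vm_mm &&
                      vma->vm_start == vma->vm_mm->context.vdso_base)
                          return "[vdso]";
                  return NULL;
          }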
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  21. 22 March 2006 (1 commit)
  22. 24 February 2006 (1 commit)
  23. 09 January 2006 (3 commits)
    • [PATCH] powerpc: sanitize header files for user space includes · 88ced031
      Committed by Arnd Bergmann
      include/asm-ppc/ had #ifdef __KERNEL__ in all header files that
      are not meant for use by user space; include/asm-powerpc does
      not have this yet.
      
      This patch gets us a lot closer to that. There are a few cases
      where I was not sure, so I left them out. I have verified
      that no CONFIG_* symbols are used outside of __KERNEL__
      any more and that there are no obvious compile errors when
      including any of the headers in user space libraries.
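
      The guard pattern being added is the usual one, shown here on a
      hypothetical header:

          #ifndef _ASM_POWERPC_EXAMPLE_H          /* hypothetical header */
          #define _ASM_POWERPC_EXAMPLE_H

          /* definitions that are part of the user-visible ABI go here */

          #ifdef __KERNEL__
          /* kernel-only material (CONFIG_* usage, internal helpers) */
          #endif /* __KERNEL__ */

          #endif /* _ASM_POWERPC_EXAMPLE_H */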
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • [PATCH] powerpc: Fixups for kernel linked at 32 MB · 758438a7
      Committed by Michael Ellerman
      There's a few places where we need to fix things up for the kernel to work
      if it's linked at 32MB:
      
       - platforms/powermac/smp.c
         To start secondary cpus on pmac we patch the reset vector, which is fine.
         Except if we're above 32MB we don't have enough bits for an absolute branch,
         it needs to be relative.
       - kernel/head_64.S
          - A few branches in the cpu hold code need to load the full target address
            and do a bctr.
          - after_prom_start needs to load PHYSICAL_START as the dest address, not 0.
          - The exception prolog needs to load the low word of the target address,
            not just the low halfword.
          - Fixup handling of the initial stab address.
       - kernel/setup_64.c
         smp_release_cpus() needs to write 1 to the spinloop flag near 0, not 32 MB.
      Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • [PATCH] powerpc: Add OF address parsing code (#2) · d1405b86
      Committed by Benjamin Herrenschmidt
      Parsing addresses extracted from Open Firmware isn't a simple matter. We
      have various bits of code that try to do it in various place, including
      some heuristics in prom.c that pre-parse addresses at boot and fill
      device-nodes "addrs", but those are dodgy at best and I want to
      deprecate them. So this patch introduces a new set of routines that
      should be capable of parsing most types of addresses and translating
      them into CPU physical addresses. It currently works for things on PCI
      busses and ISA busses and should work on "standard" busses like the root
      bus or the MacIO bus that don't put funky flags in addresses. If you
      have other bus types that do use funky flags, you'll have to add new bus
      type translators, which is fairly easy.
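
      A hedged sketch of how a caller might use such routines; the names
      match the helpers that ended up in the tree (of_get_address,
      of_translate_address, OF_BAD_ADDR), but treat the exact signatures as
      an assumption:

          /* np: a struct device_node * obtained elsewhere.  Translate its
           * first "reg" entry into a CPU physical address. */
          u64 size;
          unsigned int flags;
          const u32 *reg = of_get_address(np, 0, &size, &flags);
          u64 phys = reg ? of_translate_address(np, reg) : OF_BAD_ADDR;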
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  24. 09 December 2005 (1 commit)
    • [PATCH] powerpc: Add missing icache flushes for hugepages · cbf52afd
      Committed by David Gibson
      On most powerpc CPUs, the dcache and icache are not coherent so
      between writing and executing a page, the caches must be flushed.
      Userspace programs assume pages given to them by the kernel are icache
      clean, so we must do this flush between the kernel clearing a page and
      it being mapped into userspace for execute.  We were not doing this
      for hugepages; this patch corrects the situation.
      
      We use the same lazy mechanism as we use for normal pages, delaying
      the flush until userspace actually attempts to execute from the page
      in question.
      
      Tested on G5.
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  25. 19 November 2005 (1 commit)
  26. 10 November 2005 (3 commits)
    • powerpc: Move some extern declarations from C code into headers · 49b09853
      Committed by Paul Mackerras
      This also makes klimit have the same type on 32-bit as on 64-bit,
      namely unsigned long, and defines and initializes it in one place.
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • [PATCH] powerpc: merge code values for identifying platforms · 799d6046
      Committed by Paul Mackerras
      This patch merges platform codes.  systemcfg->platform is no longer used;
      systemcfg use in general is deprecated as much as possible (and renamed
      _systemcfg before it gets completely moved elsewhere in a future patch),
      and _machine is now used on ppc64 as well as on ppc32.  Platform codes aren't gone
      yet but we are getting a step closer. A bunch of asm code in head[_64].S
      is also turned into C code.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • [PATCH] powerpc: Consolidate asm compatibility macros · 3ddfbcf1
      Committed by David Gibson
      This patch consolidates macros used to generate assembly for
      compatibility across different CPUs or configs.  A new header,
      asm-powerpc/asm-compat.h contains the main compatibility macros.  It
      uses some preprocessor magic to make the macros suitable both for use
      in .S files, and in inline asm in .c files.  Headers (bitops.h,
      uaccess.h, atomic.h, bug.h) which had their own such compatibility
      macros are changed to use asm-compat.h.
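
      The "preprocessor magic" is essentially a stringification wrapper, so
      the same macro body can appear as raw text in a .S file and as a
      string inside inline asm in C.  An illustrative reduction (the real
      header's macro set may differ in detail):

          #ifdef __ASSEMBLY__
          #define stringify_in_c(...)     __VA_ARGS__
          #else
          #define __stringify_in_c(...)   #__VA_ARGS__
          #define stringify_in_c(...)     __stringify_in_c(__VA_ARGS__) " "
          #endif

          /* e.g. a pointer-sized data directive usable from either context: */
          #ifdef CONFIG_PPC64
          #define PPC_LONG        stringify_in_c(.llong)
          #else
          #define PPC_LONG        stringify_in_c(.long)
          #endif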
      
      ppc_asm.h is now for use in .S files *only*, and a #error enforces
      that.  As such, we're a lot more careless about namespace pollution
      here than in asm-compat.h.
      
      While we're at it, this patch adds a call to the PPC405_ERR77 macro in
      futex.h which should have had it already, but didn't.
      
      Built and booted on pSeries, Maple and iSeries (ARCH=powerpc).  Built
      for 32-bit powermac (ARCH=powerpc) and Walnut (ARCH=ppc).
      Signed-off-by: David Gibson <dwg@au1.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  27. 07 November 2005 (1 commit)
    • [PATCH] ppc64: support 64k pages · 3c726f8d
      Committed by Benjamin Herrenschmidt
      Adds a new CONFIG_PPC_64K_PAGES which, when enabled, changes the kernel
      base page size to 64K.  The resulting kernel still boots on any
      hardware.  On current machines with 4K pages support only, the kernel
      will maintain 16 "subpages" for each 64K page transparently.
      
      Note that while real 64K-capable HW has been tested, the current patch
      will not enable it yet, as such hardware has not been released, and I'm
      still verifying with the firmware architects the proper way to get the
      information from the newer hypervisors.
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  28. 23 September 2005 (1 commit)
    • ppc64 iSeries: Update create_pte_mapping to replace iSeries_bolt_kernel() · 4c55130b
      Committed by Michael Ellerman
      early_setup() calls htab_initialize() which is similar, but not identical
      to iSeries_bolt_kernel().
      
      On iSeries the Hypervisor has already inserted some ptes for us, and we
      simply have to detect that and bolt them. iSeries_hpte_bolt_or_insert()
      implements that logic.
      
      For the case of a non-existing pte we just call iSeries_hpte_insert(). This
      appears to work, although it's not entirely equivalent to the old code in
      iSeries_make_pte() which panicked if we got a secondary slot. Not sure if
      that's important.
      
      Finally we call iSeries_hpte_bolt_or_insert() from create_pte_mapping(),
      which is called from htab_initialize() for each lmb region.
      Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
  29. 21 September 2005 (1 commit)
  30. 19 September 2005 (1 commit)
  31. 06 September 2005 (1 commit)
    • [PATCH] Invert sense of SLB class bit · 14b34661
      Committed by David Gibson
      Currently, we set the class bit in kernel SLB entries, and clear it on
      user SLB entries.  On POWER5, ERAT entries created in real mode have
      the class bit clear.  So to avoid flushing kernel ERAT entries on each
      context switch, this patch inverts our usage of the class bit, setting
      it on user SLB entries and clearing it on kernel SLB entries.
      
      Booted on POWER5 and G5.
      Signed-off-by: David Gibson <dwg@au1.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  32. 29 August 2005 (3 commits)
    • [PATCH] Dynamic hugepage addresses for ppc64 · c594adad
      Committed by David Gibson
      Paulus, I think this is now a reasonable candidate for the post-2.6.13
      queue.
      
      Relax address restrictions for hugepages on ppc64
      
      Presently, 64-bit applications on ppc64 may only use hugepages in the
      address region from 1-1.5T.  Furthermore, if hugepages are enabled in
      the kernel config, they may only use hugepages and never normal pages
      in this area.  This patch relaxes this restriction, allowing any
      address to be used with hugepages, but with a 1TB granularity.  That
      is if you map a hugepage anywhere in the region 1TB-2TB, that entire
      area will be reserved exclusively for hugepages for the remainder of
      the process's lifetime.  This works analogously to hugepages in 32-bit
      applications, where hugepages can be mapped anywhere, but with 256MB
      (mmu segment) granularity.
      
      This patch applies on top of the four level pagetable patch
      (http://patchwork.ozlabs.org/linuxppc64/patch?id=1936).
      Signed-off-by: David Gibson <dwg@au1.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • [PATCH] Change address of ppc64 initial segment table · c59c464a
      Committed by David Gibson
      On ppc64 machines with segment tables, CPU0's segment table is at a
      fixed address, currently 0x9000.  This patch moves it to the free
      space at 0x6000, just below the fwnmi data area.  This saves 8k of
      space in vmlinux and the runtime kernel image.
      Signed-off-by: David Gibson <dwg@au1.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
    • [PATCH] Four level pagetables for ppc64 · e28f7faf
      Committed by David Gibson
      Implement 4-level pagetables for ppc64
      
      This patch implements full four-level page tables for ppc64, thereby
      extending the usable user address range to 44 bits (16T).
      
      The patch uses a full page for the tables at the bottom and top level,
      and a quarter page for the intermediate levels.  It uses full 64-bit
      pointers at every level, thus also increasing the addressable range of
      physical memory.  This patch also tweaks the VSID allocation to allow
      matching range for user addresses (this halves the number of available
      contexts) and adds some #if and BUILD_BUG sanity checks.
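
      A back-of-the-envelope check of the 44-bit figure, assuming the 4K
      base page size and 8-byte table entries of the time (the names below
      are illustrative, not the kernel's):

          enum {
                  PAGE_OFFSET_BITS = 12,  /* 4K pages                         */
                  PTE_LEVEL_BITS   = 9,   /* full page of entries: 4096/8=512 */
                  PMD_LEVEL_BITS   = 7,   /* quarter page:         1024/8=128 */
                  PUD_LEVEL_BITS   = 7,   /* quarter page:         1024/8=128 */
                  PGD_LEVEL_BITS   = 9,   /* full page:            4096/8=512 */
          };
          /* 9 + 7 + 7 + 9 + 12 = 44 bits  =>  2^44 bytes = 16TB of user VA */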
      Signed-off-by: David Gibson <dwg@au1.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>