- 21 December 2008, 1 commit

Committed by Benjamin Herrenschmidt
We're soon running out of CPU features and I need to add some new ones for various MMU related bits, so this patch separates the MMU features from the CPU features. I moved over the 32-bit MMU related ones, added base features for MMU type families, but didn't move over any 64-bit only feature yet.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
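A minimal sketch of what such a split looks like in practice, with illustrative bit values and an assumed cur_cpu_spec layout rather than the exact definitions from the commit: MMU feature bits get their own mask and their own test helper, separate from cpu_has_feature().

```c
/* Illustrative sketch only; bit values and field layout are assumptions. */
#define MMU_FTR_HPTE_TABLE	0x00000001	/* classic hash-table MMU      */
#define MMU_FTR_TYPE_44x	0x00000002	/* 44x software-loaded TLB     */
#define MMU_FTR_TYPE_FSL_E	0x00000004	/* Freescale Book-E MMU family */

struct cpu_spec {
	unsigned long cpu_features;	/* CPU features stay where they were   */
	unsigned long mmu_features;	/* MMU features now have their own mask */
	/* ... */
};

extern struct cpu_spec *cur_cpu_spec;

static inline int mmu_has_feature(unsigned long feature)
{
	return (cur_cpu_spec->mmu_features & feature) != 0;
}
```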
-
- 04 August 2008, 1 commit

Committed by Stephen Rothwell
from include/asm-powerpc. This is the result of a mkdir arch/powerpc/include/asm and a git mv include/asm-powerpc/* arch/powerpc/include/asm, followed by a few documentation/comment fixups and a couple of places where <asm-powerpc/...> was being used explicitly. Of the latter, only one was outside the arch code, and it is a driver only built for powerpc.
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 20 August 2007, 1 commit

Committed by Josh Boyer
Add MMU definitions for 40x platforms. Also fixes two warnings in 40x_mmu.c.
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
-
- 03 July 2007, 2 commits

Committed by David Gibson
arch/powerpc still relies on asm-ppc/mmu.h for some 32-bit MMU types. This patch is another step towards fixing this. It takes the portions of asm-ppc/mmu.h related to 8xx embedded CPUs which are still relevant in arch/powerpc and puts them in a new asm-powerpc/mmu-8xx.h, included when appropriate from asm-powerpc/mmu.h.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
Committed by David Gibson
arch/powerpc still relies on asm-ppc/mmu.h for some 32-bit MMU types. This patch is another step towards fixing this. It takes the portions of asm-ppc/mmu.h related to Freescale Book-E which are still relevant in arch/powerpc and puts them in a new asm-powerpc/mmu-fsl-booke.h, included when appropriate from asm-powerpc/mmu.h.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
- 14 June 2007, 1 commit

Committed by David Gibson
arch/powerpc still relies on asm-ppc/mmu.h for most 32-bit MMU types. This is another step towards fixing this. It takes the portions of asm-ppc/mmu.h related to the "classic" 32-bit hash page table MMU which are still relevant in arch/powerpc and puts them in a new asm-powerpc/mmu-hash32.h, included when appropriate from asm-powerpc/mmu.h.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 02 May 2007, 1 commit

Committed by David Gibson
This patch takes the definitions for the PPC44x MMU (a software-loaded TLB) from asm-ppc/mmu.h, cleans them of things no longer necessary in arch/powerpc, and puts them in a new asm-powerpc/mmu_44x.h file. It also substantially simplifies arch/powerpc/mm/44x_mmu.c and makes a couple of small fixes necessary for the 44x MMU code to build and work properly in arch/powerpc.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 27 April 2007, 1 commit

Committed by David Gibson
Currently asm-powerpc/mmu.h has definitions for the 64-bit hash based MMU. If CONFIG_PPC64 is not set, it instead includes asm-ppc/mmu.h, which contains a particularly horrible mess of #ifdefs giving the definitions for all the various 32-bit MMUs. It would be nice to have the low level definitions for each MMU type neatly in their own separate files. It would also be good to wean arch/powerpc off dependence on the old asm-ppc/mmu.h. This patch makes a start on such a cleanup by moving the definitions for the 64-bit hash MMU to their own file, asm-powerpc/mmu_hash64.h. Definitions for the other MMUs still all come from asm-ppc/mmu.h, however each MMU type can now be moved over to its own file one by one, in the process cleaning them up and stripping them of cruft no longer necessary in arch/powerpc.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
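The dispatch this commit leaves behind in asm-powerpc/mmu.h is roughly the following; a simplified sketch rather than the verbatim header.

```c
/* Simplified sketch of asm-powerpc/mmu.h after this commit (not verbatim). */
#ifndef _ASM_POWERPC_MMU_H_
#define _ASM_POWERPC_MMU_H_

#ifdef CONFIG_PPC64
/* 64-bit hash table MMU definitions now live in their own header. */
#include <asm/mmu_hash64.h>
#else
/* 32-bit MMU types are still pulled in from the old asm-ppc header
 * until they are moved over one by one. */
#include <asm-ppc/mmu.h>
#endif

#endif /* _ASM_POWERPC_MMU_H_ */
```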
-
- 24 April 2007, 1 commit

Committed by Arnd Bergmann
Until now, we have always entered the spu page fault handler with a mutex for the spu context held. This has multiple bad side-effects:
- it becomes impossible to suspend the context during page faults
- if an spu program attempts to access its own mmio areas through DMA, we get an immediate livelock when the nopage function tries to acquire the same mutex
This patch makes the page fault logic operate on a struct spu_context instead of a struct spu, and moves it from spu_base.c to a new file fault.c inside of spufs. We now also need to copy the dar and dsisr contents of the last fault into the saved context to have them accessible in case we schedule out the context before activating the page fault handler.
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
-
- 07 February 2007, 1 commit

Committed by Ishizaki Kou
Adds htab routines for the Celleb platform.
Signed-off-by: Kou Ishizaki <kou.ishizaki@toshiba.co.jp>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 16 October 2006, 1 commit

Committed by Geoff Levand
Change the powerpc hpte_insert routines, now called through ppc_md, to static scope.
Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 28 June 2006, 1 commit

Committed by Michael Ellerman
Initialise the ppc_md htab callbacks earlier, in the probe routines. This allows us to call htab_finish_init() from htab_initialize(), and makes it private to hash_utils_64.c. Move htab_finish_init() and make_bl() above htab_initialize() to avoid forward declarations.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 15 June 2006, 1 commit

Committed by Paul Mackerras
Some POWER5+ machines can do 64k hardware pages for normal memory but not for cache-inhibited pages. This patch lets us use 64k hardware pages for most user processes on such machines (assuming the kernel has been configured with CONFIG_PPC_64K_PAGES=y). User processes start out using 64k pages and get switched to 4k pages if they use any non-cacheable mappings. With this, we use 64k pages for the vmalloc region and 4k pages for the imalloc region. If anything creates a non-cacheable mapping in the vmalloc region, the vmalloc region will get switched to 4k pages. I don't know of any driver other than the DRM that would do this, though, and these machines don't have AGP. When a region gets switched from 64k pages to 4k pages, we do not have to clear out all the 64k HPTEs from the hash table immediately. We use the _PAGE_COMBO bit in the Linux PTE to indicate whether the page was hashed in as a 64k page or a set of 4k pages. If hash_page is trying to insert a 4k page for a Linux PTE and it sees that it has already been inserted as a 64k page, it first invalidates the 64k HPTE before inserting the 4k HPTE. The hash invalidation routines also use the _PAGE_COMBO bit, to determine whether to look for a 64k HPTE or a set of 4k HPTEs to remove. With those two changes, we can tolerate a mix of 4k and 64k HPTEs in the hash table, and they will all get removed when the address space is torn down.
Signed-off-by: Paul Mackerras <paulus@samba.org>
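A hedged sketch of the lazy demotion step described above. The helper names below are hypothetical placeholders, not the actual arch/powerpc/mm hash functions, and the real hash_page() path also deals with locking, huge pages and secondary slots.

```c
/* Hypothetical sketch of the _PAGE_COMBO demotion check; not the real
 * hash_page() code. The two helpers are placeholders for this sketch. */
extern void invalidate_64k_hpte(unsigned long va);		/* placeholder */
extern int insert_4k_hpte(unsigned long va, unsigned long pteval);	/* placeholder */

static int hash_4k_pte(pte_t *ptep, unsigned long va)
{
	unsigned long pteval = pte_val(*ptep);

	if (!(pteval & _PAGE_COMBO)) {
		/* This PTE was previously hashed in as a single 64k HPTE:
		 * remove it before inserting the first of the 4k HPTEs. */
		invalidate_64k_hpte(va);
		pteval |= _PAGE_COMBO;	/* remember: now backed by 4k HPTEs */
		*ptep = __pte(pteval);
	}
	return insert_4k_hpte(va, pteval);
}
```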
-
- 09 June 2006, 2 commits

Committed by Benjamin Herrenschmidt
Our MMU hash management code would not set the "C" bit (changed bit) in the hardware PTE when updating a RO PTE into a RW PTE. That would cause the hardware to possibly do a write back to the hash table to set it on the first store access, which in addition to being a performance issue, might also hit a bug when running with native hash management (non-HV), as our code is specifically optimized for the case where no write back happens. Thus there is a very small theoretical window where a hash PTE can become corrupted if that HPTE has just been upgraded to read-write, a store access happens on it, and that races with another processor evicting that same slot. Since eviction (caused by an almost full hash) is extremely rare, the bug is fortunately very unlikely to happen. This is fixed by allowing the updating of the protection bits in the native hash handling to also set (but not clear) the "C" bit, and, in order to also improve performance in the general case, by always setting that bit on newly inserted hash PTEs so that write back really never happens.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
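Roughly what the protection-update path looks like with the fix, assuming the usual HPTE_R_PP/HPTE_R_C field macros; a sketch, not the exact native hash management code from the commit.

```c
/* Sketch of the fix: when updating the protection bits of a hash PTE,
 * also set the changed ("C") bit so hardware never has to write it back
 * itself. Simplified; not the exact native_hpte_updatepp() logic. */
static unsigned long hpte_updated_r(unsigned long old_r, unsigned long newpp)
{
	unsigned long r = old_r & ~HPTE_R_PP;	/* drop the old protection bits */

	r |= newpp & HPTE_R_PP;			/* install the new ones          */
	r |= HPTE_R_C;				/* set, never clear, the C bit   */
	return r;
}
```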
-
Committed by Benjamin Herrenschmidt
This patch cleans up some locking & error handling in the ppc vdso and moves the vdso base pointer from the thread struct to the mm context, where it more logically belongs. It brings the powerpc implementation closer to Ingo's new x86 one, and also adds an arch_vma_name() function allowing us to print [vdso] in /proc/<pid>/maps if Ingo's x86 vdso patch is also applied.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
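A minimal sketch of such an arch_vma_name() hook, assuming the vdso base now lives in mm->context (the field name below is an assumption, not quoted from the patch):

```c
#include <linux/mm.h>

/* Minimal sketch of an arch_vma_name() hook for the vdso; the
 * mm->context.vdso_base field name is an assumption. */
const char *arch_vma_name(struct vm_area_struct *vma)
{
	if (vma->vm_mm && vma->vm_start == vma->vm_mm->context.vdso_base)
		return "[vdso]";
	return NULL;
}
```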
-
- 22 March 2006, 1 commit

Committed by Michael Ellerman
In mm_init_ppc64() we calculate the location of the "IO hole", but then no one ever looks at the value, so don't bother. That's actually all mm_init_ppc64() does, so get rid of it too.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 24 February 2006, 1 commit

Committed by Michael Ellerman
For kexec we need to know the size of the MMU hash table. Currently we calculate the size once in the htab code, and then twice more in the kexec code, once using htab_hash_mask and once using ppc64_pft_size. On some machines the ppc64_pft_size calculation is broken because ppc64_pft_size is not set. So we need to fix the second calculation, but better still we should just calculate the size once and use it everywhere else. Tested on Power5 LPAR, Power4 non-LPAR and Power3.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 09 January 2006, 3 commits

Committed by Arnd Bergmann
include/asm-ppc/ had #ifdef __KERNEL__ in all header files that are not meant for use by user space; include/asm-powerpc does not have this yet. This patch gets us a lot closer there. There are a few cases where I was not sure, so I left them out. I have verified that no CONFIG_* symbols are used outside of __KERNEL__ any more and that there are no obvious compile errors when including any of the headers in user space libraries.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Michael Ellerman
There are a few places where we need to fix things up for the kernel to work if it's linked at 32MB:
- platforms/powermac/smp.c: to start secondary cpus on pmac we patch the reset vector, which is fine, except that above 32MB we don't have enough bits for an absolute branch, so it needs to be relative.
- kernel/head_64.s: a few branches in the cpu hold code need to load the full target address and do a bctr; after_prom_start needs to load PHYSICAL_START as the destination address, not 0; the exception prolog needs to load the low word of the target address, not just the low halfword; and the handling of the initial stab address needs fixing up.
- kernel/setup_64.c: smp_release_cpus() needs to write 1 to the spinloop flag near 0, not near 32MB.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Benjamin Herrenschmidt
Parsing addresses extracted from Open Firmware isn't a simple matter. We have various bits of code that try to do it in various places, including some heuristics in prom.c that pre-parse addresses at boot and fill the device nodes' "addrs", but those are dodgy at best and I want to deprecate them. So this patch introduces a new set of routines that should be capable of parsing most types of addresses and translating them into CPU physical addresses. It currently works for things on PCI busses and ISA busses and should work on "standard" busses like the root bus or the MacIO bus that don't put funky flags in addresses. If you have other bus types that do use funky flags, you'll have to add new bus type translators, which is fairly easy.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
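A hedged usage sketch of the kind of helpers this work leads to (of_get_address()/of_translate_address()); their exact prototypes have shifted across kernel versions, so treat the signatures below as assumptions.

```c
#include <linux/of_address.h>	/* modern home of these helpers */

/* Hedged usage sketch; exact prototypes vary across kernel versions. */
static u64 device_cpu_phys_addr(struct device_node *np)
{
	const __be32 *reg;
	u64 size;

	/* Fetch the first "reg" entry for this node... */
	reg = of_get_address(np, 0, &size, NULL);
	if (!reg)
		return OF_BAD_ADDR;

	/* ...and walk the parent "ranges" to obtain a CPU physical address. */
	return of_translate_address(np, reg);
}
```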
-
- 09 December 2005, 1 commit

Committed by David Gibson
On most powerpc CPUs, the dcache and icache are not coherent, so between writing and executing a page the caches must be flushed. Userspace programs assume pages given to them by the kernel are icache clean, so we must do this flush between the kernel clearing a page and it being mapped into userspace for execute. We were not doing this for hugepages; this patch corrects the situation. We use the same lazy mechanism as we use for normal pages, delaying the flush until userspace actually attempts to execute from the page in question. Tested on G5.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 19 November 2005, 1 commit

Committed by Paul Mackerras
For these, I have just done the lame-o merge where the file ends up looking like the pattern shown below, so nothing has changed, really, except that we reduce include/asm-ppc64 a bit more.
Signed-off-by: Paul Mackerras <paulus@samba.org>
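The merge pattern referred to above, reconstructed from the commit message (foo.h stands in for whichever header was merged):

```c
/* The "lame-o merge" pattern from the commit message; foo.h is a placeholder. */
#ifndef CONFIG_PPC64
#include <asm-ppc/foo.h>
#else
/* ... contents from include/asm-ppc64/foo.h ... */
#endif
```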
-
- 10 November 2005, 3 commits

Committed by Paul Mackerras
This also makes klimit have the same type on 32-bit as on 64-bit, namely unsigned long, and defines and initializes it in one place.
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by Paul Mackerras
This patch merges platform codes. systemcfg->platform is no longer used; systemcfg use in general is deprecated as much as possible (and renamed _systemcfg before it gets completely moved elsewhere in a future patch), and _machine is now used on ppc64 as well as ppc32. Platform codes aren't gone yet, but we are getting a step closer. A bunch of asm code in head[_64].S is also turned into C code.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by David Gibson
This patch consolidates macros used to generate assembly for compatibility across different CPUs or configs. A new header, asm-powerpc/asm-compat.h, contains the main compatibility macros. It uses some preprocessor magic to make the macros suitable both for use in .S files and in inline asm in .c files. Headers (bitops.h, uaccess.h, atomic.h, bug.h) which had their own such compatibility macros are changed to use asm-compat.h. ppc_asm.h is now for use in .S files *only*, and a #error enforces that. As such, we're a lot more careless about namespace pollution here than in asm-compat.h. While we're at it, this patch adds a call to the PPC405_ERR77 macro in futex.h which should have had it already, but didn't. Built and booted on pSeries, Maple and iSeries (ARCH=powerpc). Built for 32-bit powermac (ARCH=powerpc) and Walnut (ARCH=ppc).
Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
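The "preprocessor magic" boils down to emitting bare tokens when preprocessing a .S file and quoted strings when building inline asm in C. A simplified sketch of that dual-use trick, not the exact contents of asm-compat.h:

```c
/* Simplified sketch of the .S / inline-asm dual-use trick. */
#ifdef __ASSEMBLY__
/* In .S files the macros expand to the bare tokens the assembler wants. */
#define stringify_in_c(...)	__VA_ARGS__
#define ASM_CONST(x)		x
#else
/* In C the same macros expand to strings usable inside asm("..."),
 * and to properly typed constants. */
#define __stringify_in_c(...)	#__VA_ARGS__
#define stringify_in_c(...)	__stringify_in_c(__VA_ARGS__) " "
#define __ASM_CONST(x)		x##UL
#define ASM_CONST(x)		__ASM_CONST(x)
#endif
```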
-
- 07 November 2005, 1 commit

Committed by Benjamin Herrenschmidt
Adds a new CONFIG_PPC_64K_PAGES which, when enabled, changes the kernel base page size to 64K. The resulting kernel still boots on any hardware. On current machines with 4K pages support only, the kernel will maintain 16 "subpages" for each 64K page transparently. Note that while real 64K capable HW has been tested, the current patch will not enable it yet, as such hardware is not released yet and I'm still verifying with the firmware architects the proper way to get the information from the newer hypervisors.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
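The arithmetic behind the 16 subpages, written out as an illustration; apart from PAGE_SHIFT/PAGE_SIZE, the macro names below are assumptions for this sketch rather than the kernel's own.

```c
/* Illustrative arithmetic only; HW_PAGE_* and SUBPAGES_PER_PAGE are
 * assumed names for this sketch. */
#define PAGE_SHIFT		16			/* 64K Linux base pages */
#define PAGE_SIZE		(1UL << PAGE_SHIFT)
#define HW_PAGE_SHIFT		12			/* 4K hardware pages    */
#define HW_PAGE_SIZE		(1UL << HW_PAGE_SHIFT)

/* On 4K-only hardware, each 64K Linux page is transparently backed by
 * 2^(16-12) = 16 hardware "subpages". */
#define SUBPAGES_PER_PAGE	(1 << (PAGE_SHIFT - HW_PAGE_SHIFT))
```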
-
- 23 September 2005, 1 commit

Committed by Michael Ellerman
early_setup() calls htab_initialize(), which is similar, but not identical, to iSeries_bolt_kernel(). On iSeries the Hypervisor has already inserted some ptes for us, and we simply have to detect that and bolt them. iSeries_hpte_bolt_or_insert() implements that logic. For the case of a non-existing pte we just call iSeries_hpte_insert(). This appears to work, although it's not entirely equivalent to the old code in iSeries_make_pte(), which panicked if we got a secondary slot. Not sure if that's important. Finally, we call iSeries_hpte_bolt_or_insert() from create_pte_mapping(), which is called from htab_initialize() for each lmb region.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
-
- 21 September 2005, 1 commit

Committed by Olof Johansson
Replace some of the hard-coded constants with PAGE_SIZE/SHIFT/ORDER where appropriate. Likewise, in a couple of places it doesn't make sense to base some allocations on page size when all that's required is a constant 4K, etc.
Signed-off-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 19 September 2005, 1 commit

Committed by Kumar Gala
Merged ppc_asm.h between ppc32 & ppc64. The majority of the file is common between the two architectures, excluding how a single GPR is saved/restored and which GPRs are non-volatile. Additionally, moved the ASM_CONST macro used on ppc64 into ppc_asm.h.
Signed-off-by: Kumar Gala <kumar.gala@freescale.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 06 September 2005, 1 commit

Committed by David Gibson
Currently, we set the class bit in kernel SLB entries, and clear it on user SLB entries. On POWER5, ERAT entries created in real mode have the class bit clear. So to avoid flushing kernel ERAT entries on each context switch, this patch inverts our usage of the class bit, setting it on user SLB entries and clearing it on kernel SLB entries. Booted on POWER5 and G5.
Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
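In flag terms, the inversion described above amounts to moving the class bit from the kernel VSID flags to the user VSID flags; a sketch using the usual SLB_VSID_* per-bit names, with the grouping assumed rather than quoted from the patch:

```c
/* Sketch of the inverted class-bit usage; the grouping is an assumption.
 * SLB_VSID_KP/KS are the protection-key bits, SLB_VSID_C is the class bit. */
#define SLB_VSID_KERNEL		(SLB_VSID_KP)				/* class bit clear */
#define SLB_VSID_USER		(SLB_VSID_KP | SLB_VSID_KS | SLB_VSID_C)	/* class bit set   */
```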
-
- 29 August 2005, 3 commits

Committed by David Gibson
Paulus, I think this is now a reasonable candidate for the post-2.6.13 queue. Relax address restrictions for hugepages on ppc64: presently, 64-bit applications on ppc64 may only use hugepages in the address region from 1-1.5T. Furthermore, if hugepages are enabled in the kernel config, they may only use hugepages and never normal pages in this area. This patch relaxes this restriction, allowing any address to be used with hugepages, but with a 1TB granularity. That is, if you map a hugepage anywhere in the region 1TB-2TB, that entire area will be reserved exclusively for hugepages for the remainder of the process's lifetime. This works analogously to hugepages in 32-bit applications, where hugepages can be mapped anywhere, but with 256MB (mmu segment) granularity. This patch applies on top of the four level pagetable patch (http://patchwork.ozlabs.org/linuxppc64/patch?id=1936).
Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by David Gibson
On ppc64 machines with segment tables, CPU0's segment table is at a fixed address, currently 0x9000. This patch moves it to the free space at 0x6000, just below the fwnmi data area. This saves 8k of space in vmlinux and the runtime kernel image.
Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Committed by David Gibson
Implement 4-level pagetables for ppc64. This patch implements full four-level page tables for ppc64, thereby extending the usable user address range to 44 bits (16T). The patch uses a full page for the tables at the bottom and top level, and a quarter page for the intermediate levels. It uses full 64-bit pointers at every level, thus also increasing the addressable range of physical memory. This patch also tweaks the VSID allocation to allow a matching range for user addresses (this halves the number of available contexts) and adds some #if and BUILD_BUG sanity checks.
Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
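The 44-bit figure falls out of the index sizes. The values below follow the "full page at top and bottom, quarter page in between" description with 4K base pages and 8-byte table entries, but are written out here as an illustration rather than quoted from the patch.

```c
/* Illustrative index sizes: a full 4K page holds 2^9 eight-byte pointers,
 * a quarter page holds 2^7. */
#define PTE_INDEX_SIZE	9	/* bottom level: full page      */
#define PMD_INDEX_SIZE	7	/* intermediate: quarter page   */
#define PUD_INDEX_SIZE	7	/* intermediate: quarter page   */
#define PGD_INDEX_SIZE	9	/* top level: full page         */

/* 12-bit page offset + 9 + 7 + 7 + 9 index bits = 44 bits of user
 * address space, i.e. 16T. */
```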
-
- 28 July 2005, 2 commits

Committed by David Gibson
Presently the LparMap, one of the structures the kernel shares with the legacy iSeries hypervisor, has a fixed offset address in head.S. This patch changes this so the LparMap is a normally initialized structure, without a fixed address. This allows us to use macros to compute some of the values in the structure, which wasn't previously possible because the assembler always uses signed-%, which gets the wrong answers for the computations in question. Unfortunately, a gcc bug means that doing this requires another structure (hvReleaseData) to be initialized in asm instead of C, but on the whole the result is cleaner than before.
Signed-off-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Committed by David Gibson
PPC64 machines before Power4 need a segment table page allocated for each CPU. Currently these are allocated statically in a big array in head.S for all CPUs. The segment tables need to be in the first segment (so do_stab_bolted doesn't take a recursive fault on the stab itself), but other than that there are no constraints which require the stabs for the secondary CPUs to be statically allocated. This patch allocates segment tables dynamically during boot, using lmb_alloc() to ensure they are within the first 256M segment. This reduces the kernel image size by 192k... Tested on RS64 iSeries, POWER3 pSeries, and POWER5.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
- 14 July 2005, 1 commit

Committed by David Gibson
This patch removes the use of bitfield types from the ppc64 hash table manipulation code.
Signed-off-by: David Gibson <dwg@au1.ibm.com>
Acked-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
- 26 June 2005, 1 commit

Committed by R Sharada
Add code to clear the hash table and invalidate the TLB for native (SMP, non-LPAR) mode. Supports 16M and 4k pages.
Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: R Sharada <sharada@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
- 23 June 2005, 1 commit

Committed by Arnd Bergmann
This adds the basic support for running on BPA machines. So far, this is only the IBM workstation, and it will not run on others without a little more generalization. It should be possible to configure a kernel for any combination of CONFIG_PPC_BPA with any of the other multiplatform targets.
Signed-off-by: Arnd Bergmann <arndb@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 06 May 2005, 1 commit

Committed by David Gibson
This patch started as simply removing a few never-used macros from asm-ppc64/pgtable.h, then kind of grew. It now makes a bunch of cleanups to the ppc64 low-level header files (with corresponding changes to .c files where necessary), such as:
- abolishing never-used macros
- eliminating multiple #defines with the same purpose
- removing pointless macros (cases where just expanding the macro everywhere turns out clearer and more sensible)
- removing some cases where macros which could be defined in terms of each other weren't
- moving imalloc() related definitions from pgtable.h to their own header file (imalloc.h)
- re-arranging headers to group things more logically
- moving all VSID allocation related things to mmu.h, instead of being split between mmu.h and mmu_context.h
- removing some reserved space for flags from the PMD, since we're not using it
- fixing some bugs which broke the compile with STRICT_MM_TYPECHECKS
Signed-off-by: David Gibson <dwg@au1.ibm.com>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
- 17 April 2005, 1 commit

Committed by Linus Torvalds
Initial git repository build. I'm not bothering with the full history, even though we have it. We can create a separate "historical" git archive of that later if we want to, and in the meantime it's about 3.2GB when imported into git - space that would just make the early git days unnecessarily complicated, when we don't have a lot of good infrastructure for it. Let it rip!
-