1. 28 Aug 2013: 1 commit
  2. 27 Aug 2013: 7 commits
    • powerpc: Don't Oops when accessing /proc/powerpc/lparcfg without hypervisor · f5f6cbb6
      Benjamin Herrenschmidt committed
      /proc/powerpc/lparcfg is an ancient facility (though still actively used)
      which allows access to some information about the partition when
      running underneath a PAPR-compliant hypervisor.
      
      It makes no sense on non-pseries machines. However, currently it can not
      only be created on these machines if the kernel has pseries support, but
      accessing it on such a machine will also crash because it tries to make
      hypervisor calls.
      
      In fact, it should also not do HV calls on older pseries machines that
      didn't have a hypervisor either.
      
      Finally, it has the plumbing to be a module but is a "bool" Kconfig option.
      
      This fixes the whole lot by turning it into a machine_device_initcall
      that is only created on pseries, and adding the necessary hypervisor
      check before calling the H_GET_EM_PARMS hypercall.
      
      CC: <stable@vger.kernel.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
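      A minimal sketch of the two guards described above, assuming the usual
      pseries helpers (machine_device_initcall(), firmware_has_feature(),
      plpar_hcall()); the _sketch function names and the fops variable are
      illustrative, not the exact symbols in lparcfg.c:

        #include <linux/proc_fs.h>
        #include <linux/seq_file.h>
        #include <asm/firmware.h>   /* firmware_has_feature(), FW_FEATURE_LPAR */
        #include <asm/hvcall.h>     /* plpar_hcall(), H_GET_EM_PARMS, H_SUCCESS */
        #include <asm/machdep.h>    /* machine_device_initcall() */

        /* Only issue the hypercall when actually running under a hypervisor;
         * older pseries machines without one must not attempt it. */
        static void parse_em_data_sketch(struct seq_file *m)
        {
                unsigned long retbuf[PLPAR_HCALL_BUFSIZE];

                if (firmware_has_feature(FW_FEATURE_LPAR) &&
                    plpar_hcall(H_GET_EM_PARMS, retbuf) == H_SUCCESS)
                        seq_printf(m, "power_mode_data=%016lx\n", retbuf[0]);
        }

        /* seq_file boilerplate omitted for brevity. */
        static const struct file_operations lparcfg_fops_sketch;

        /* Register the /proc entry from a machine_device_initcall so it only
         * ever exists on pseries, even in a multiplatform kernel. */
        static int __init lparcfg_init_sketch(void)
        {
                proc_create("powerpc/lparcfg", 0400, NULL, &lparcfg_fops_sketch);
                return 0;
        }
        machine_device_initcall(pseries, lparcfg_init_sketch);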
    • powerpc/btext: Fix CONFIG_PPC_EARLY_DEBUG_BOOTX on ppc32 · ee372bc1
      Benjamin Herrenschmidt committed
      The "rmci" stuff only exists on 64-bit
      Signed-off-by: NBenjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc: Cleanup handling of the DSCR bit in the FSCR register · bc683a7e
      Michael Neuling committed
      As suggested by paulus, we can simplify the handling of the Data Stream
      Control Register (DSCR) bit in the Facility Status and Control Register
      (FSCR).
      
      Firstly, we simplify the asm by using a rldimi.
      
      Secondly, we now use the FSCR only to control the DSCR facility, rather
      than both the FSCR and HFSCR.  Users will see no functional change from
      this but will get a minor speedup as they will trap into the kernel only
      once (rather than twice) when they first touch the DSCR.  Also, this
      change removes a bunch of ugly FTR_SECTION code.
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
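      A minimal sketch of the FSCR-only approach described above, assuming the
      SPR defines from arch/powerpc/include/asm/reg.h (SPRN_FSCR, FSCR_DSCR,
      SPRN_DSCR); the function name and the thread_struct field access are
      illustrative, not the kernel's actual code:

        #include <linux/sched.h>
        #include <asm/reg.h>        /* mfspr()/mtspr(), SPRN_FSCR, FSCR_DSCR, SPRN_DSCR */

        /* On the first (and only) DSCR facility-unavailable trap, turn the
         * facility on in the FSCR and load the thread's DSCR value, so later
         * userspace accesses to the DSCR never trap into the kernel again. */
        static void enable_dscr_facility_sketch(struct task_struct *tsk)
        {
                mtspr(SPRN_FSCR, mfspr(SPRN_FSCR) | FSCR_DSCR);
                mtspr(SPRN_DSCR, tsk->thread.dscr);
        }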
    • powerpc: Skip emulating & leave interrupts off for kernel program checks · b3f6a459
      Michael Ellerman committed
      In the program check handler we handle some causes with interrupts off
      and others with interrupts on.
      
      We need to enable interrupts to handle the emulation cases, because they
      access userspace memory and might sleep.
      
      For faults in the kernel we don't want to do any emulation, and
      emulate_instruction() enforces that. do_mathemu() doesn't but probably
      should.
      
      The other disadvantage of enabling interrupts for kernel faults is that
      we may take another interrupt, and recurse. As seen below:
      
        --- Exception: e40 at c000000000004ee0 performance_monitor_relon_pSeries_1
        [link register   ] c00000000000f858 .arch_local_irq_restore+0x38/0x90
        [c000000fb185dc10] 0000000000000000 (unreliable)
        [c000000fb185dc80] c0000000007d8558 .program_check_exception+0x298/0x2d0
        [c000000fb185dd00] c000000000002f40 emulation_assist_common+0x140/0x180
        --- Exception: e40 at c000000000004ee0 performance_monitor_relon_pSeries_1
        [link register   ] c00000000000f858 .arch_local_irq_restore+0x38/0x90
        [c000000fb185dff0] 00000000008b9190 (unreliable)
        [c000000fb185e060] c0000000007d8558 .program_check_exception+0x298/0x2d0
      
      So avoid both problems by checking if the fault was in the kernel and
      skipping the enable of interrupts and the emulation. Go straight to
      delivering the SIGILL, which for kernel faults calls die() and so on,
      dropping us into the debugger, etc.
      Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
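      A minimal sketch of the kernel-fault short-circuit described above;
      user_mode() and _exception() are the usual powerpc trap helpers, while
      the function name, surrounding structure and emulation calls are only
      indicative:

        #include <linux/ptrace.h>
        #include <linux/signal.h>
        #include <linux/irqflags.h>

        void program_check_sketch(struct pt_regs *regs)
        {
                if (!user_mode(regs)) {
                        /* Kernel fault: no emulation, interrupts stay off.
                         * Deliver SIGILL straight away, which for kernel
                         * faults ends up in die(). */
                        _exception(SIGILL, regs, ILL_ILLOPC, regs->nip);
                        return;
                }

                /* User fault: emulation may touch user memory and sleep,
                 * so interrupts must be enabled first. */
                local_irq_enable();
                /* ... try emulate_instruction(), do_mathemu(), ... */
        }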
    • powerpc: Add more exception trampolines for hypervisor exceptions · d671ddd6
      Michael Ellerman committed
      This makes back traces and profiles easier to read.
      Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc: Fix location and rename exception trampolines · fa111f1f
      Michael Ellerman committed
      The symbols that name some of our exception trampolines are ahead of the
      location they name. In most cases this is OK because the code is tightly
      packed, but in some cases it means the symbol floats ahead of the
      correct location, eg:
      
        c000000000000ea0 <performance_monitor_pSeries_1>:
                ...
        c000000000000f00:       7d b2 43 a6     mtsprg  2,r13
      
      Fix them all by moving the symbol to after the directive that sets the location.
      
      While we're moving them anyway, rename them to lose the CamelCase and
      to make it clear that they are trampolines.
      Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    • powerpc: Never handle VSX alignment exceptions from kernel · 5c2e0823
      Anton Blanchard committed
      The VSX alignment handler needs to write out the existing VSX
      state to memory before operating on it (flush_vsx_to_thread()).
      If we take a VSX alignment exception in the kernel, bad things
      will happen. It looks like we could write the kernel state out
      to the user process, or we could handle the kernel exception
      using data from the user process (depending on whether MSR_VSX
      is set or not).
      
      Worse still, if the code to read or write the VSX state causes an
      alignment exception, we will recurse forever. I ended up with
      hundreds of megabytes of kernel stack to look through as a result.
      
      Floating point and SPE code have similar issues but already include
      a user check. Add the same check to emulate_vsx().
      
      With this patch any unaligned VSX loads and stores in the kernel
      will show up as a clear oops rather than silent corruption of
      kernel or userspace VSX state, or worse, corruption of a potentially
      unlimited amount of kernel memory.
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
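      A minimal sketch of the user check added to emulate_vsx(); the trimmed
      argument list and the return convention (0 meaning "not handled", so a
      kernel-mode fault becomes a clean oops rather than silent corruption)
      follow the description above and are assumptions here:

        #include <linux/ptrace.h>
        #include <linux/sched.h>
        #include <asm/switch_to.h>  /* flush_vsx_to_thread() */

        static int emulate_vsx_sketch(struct pt_regs *regs)
        {
                /* Never emulate VSX alignment faults coming from the kernel:
                 * the state we would flush or load belongs to userspace. */
                if (!user_mode(regs))
                        return 0;

                flush_vsx_to_thread(current);   /* safe: current is the faulting user task */
                /* ... emulate the unaligned VSX load/store ... */
                return 1;
        }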
  3. 24 Aug 2013: 1 commit
  4. 23 Aug 2013: 1 commit
  5. 21 Aug 2013: 3 commits
    • of: move of_get_cpu_node implementation to DT core library · 183912d3
      Sudeep KarkadaNagesha committed
      This patch moves the generalized implementation of of_get_cpu_node from
      PowerPC to the DT core library, thereby adding support for retrieving
      the cpu node for a given logical cpu index on any architecture.
      
      The CPU subsystem can now use this function to assign of_node in the
      cpu device while registering CPUs.
      
      It is recommended to use this helper function only in pre-SMP/early
      initialisation stages to retrieve CPU device node pointers in logical
      ordering. Once the cpu devices are registered, the node can easily be
      retrieved from the cpu device's of_node, which avoids unnecessary
      parsing and matching.
      
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Grant Likely <grant.likely@linaro.org>
      Acked-by: Rob Herring <rob.herring@calxeda.com>
      Signed-off-by: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
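      A usage sketch of the now-generic helper: of_get_cpu_node() takes the
      logical cpu index plus an optional pointer that reports which thread
      entry in the cpu node matched; everything else in the example
      (function name, pr_info format) is illustrative:

        #include <linux/of.h>

        static void report_cpu_node_sketch(int cpu)
        {
                unsigned int thread;
                struct device_node *np;

                /* Early/pre-SMP lookup of the DT node for a logical cpu. */
                np = of_get_cpu_node(cpu, &thread);
                if (!np)
                        return;

                pr_info("cpu%d -> %s (thread %u)\n", cpu, np->full_name, thread);
                of_node_put(np);        /* drop the reference taken by the lookup */
        }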
    • powerpc: refactor of_get_cpu_node to support other architectures · 819d5965
      Sudeep KarkadaNagesha committed
      Currently, different drivers that need to access the cpu device node
      parse the device tree themselves. Since the ordering in the DT need
      not match the logical cpu ordering, the parsing logic needs to consider
      that. However, this has resulted in lots of code duplication and in some
      cases even incorrect logic.
      
      It's better to consolidate them by adding support for getting the cpu
      device node for a given logical cpu index to the DT core library.
      However, the logical to physical index mapping can be architecture
      specific.
      
      PowerPC has its own implementation to get the cpu node for a given
      logical index.
      
      This patch refactors the current implementation of of_get_cpu_node, in
      preparation for moving the implementation to the DT core library.
      It separates out the logical to physical mapping so that a default
      matching of the physical id to the logical cpu index can be added
      when moved to common code. Architecture-specific code can override it.
      
      Cc: Rob Herring <rob.herring@calxeda.com>
      Cc: Grant Likely <grant.likely@linaro.org>
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
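      A sketch of the split described above: the DT core gets a weak default
      that simply compares the physical id against the logical index, and an
      architecture can override it with its real mapping. The hook name
      follows the eventual common-code helper, but treat the exact signature
      and the powerpc override shown here as assumptions:

        #include <linux/types.h>
        #include <linux/compiler.h>

        /* DT core side (e.g. drivers/of/): weak default, match the physical
         * id directly against the logical cpu index. */
        bool __weak arch_match_cpu_phys_id(int cpu, u64 phys_id)
        {
                return (u32)phys_id == cpu;
        }

        /* Architecture side (separate file, e.g. arch/powerpc/): override
         * with the real logical-to-hardware-id mapping. */
        bool arch_match_cpu_phys_id(int cpu, u64 phys_id)
        {
                return (u64)get_hard_smp_processor_id(cpu) == phys_id;
        }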
    • powerpc: Convert some mftb/mftbu into mfspr · beb2dc0a
      Scott Wood committed
      Some CPUs (such as e500v1/v2) don't implement mftb and will take a
      trap.  mfspr should work on everything that has a timebase, and is the
      preferred instruction according to ISA v2.06.
      
      Currently we get away with mftb on 85xx because the assembler converts
      it to mfspr due to -Wa,-me500.  However, that flag has other effects
      that are undesirable for certain targets (e.g.  lwsync is converted to
      sync), and is hostile to multiplatform kernels.  Thus we would like to
      stop setting it for all e500-family builds.
      
      mftb/mftbu instances which are in 85xx code or common code are
      converted.  Instances which will never run on 85xx are left alone.
      Signed-off-by: Scott Wood <scottwood@freescale.com>
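      A sketch of the preferred form on 32-bit: read the timebase through its
      SPR numbers (SPRN_TBRU/SPRN_TBRL from asm/reg.h) rather than the
      dedicated mftb/mftbu instructions, which trap on e500v1/v2; the helper
      name is illustrative:

        #include <linux/types.h>
        #include <asm/reg.h>        /* mfspr(), SPRN_TBRU, SPRN_TBRL */

        static inline u64 read_timebase_sketch(void)
        {
                u32 hi, lo, again;

                do {
                        hi    = mfspr(SPRN_TBRU);   /* upper 32 bits */
                        lo    = mfspr(SPRN_TBRL);   /* lower 32 bits */
                        again = mfspr(SPRN_TBRU);   /* re-read to detect a carry */
                } while (again != hi);

                return ((u64)hi << 32) | lo;
        }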
  6. 16 Aug 2013: 1 commit
  7. 14 Aug 2013: 26 commits