1. 15 Sep 2009, 2 commits
    • [IA64] kexec: Make INIT safe while transition to kdump/kexec kernel · 07a6a4ae
      Committed by Hidetoshi Seto
      
      Summary:
      
        Asserting INIT at the beginning of the kdump/kexec kernel results in
        unexpected behavior, because the INIT handler of the previous kernel
        is invoked on the new kernel.
      
      Description:
      
        In a panic situation we can receive an INIT during the kernel
        transition, i.e. from the beginning of the panic to the bootstrap of
        the kdump kernel.  Since registers are reinitialized when leaving the
        current kernel, the monarch/slave handlers of the current kernel,
        which run in virtual mode, can no longer be called safely.  (In fact
        the system hangs, as far as I confirmed.)
      
      How to Reproduce:
      
        Start kdump
          # echo c > /proc/sysrq-trigger
        Then assert INIT while the kdump kernel is booting, before the new
        INIT handler for the kdump kernel is registered.
      
      Expected (desirable) result:

        The kdump kernel boots without any problem and the crash dump is retrieved.
      
      Actual result:
      
        The INIT handler of the previous kernel is invoked on the kdump kernel
        => panic, hang, etc. (unexpected)
      
      Proposed fix:
      
        We could unregister these INIT handlers from SAL before jumping into
        the new kernel; however, INIT would then fall back to its default
        behavior, resulting in a warm boot by SAL (according to the SAL
        specification), and we could not retrieve the crash dump.

        Therefore this patch introduces a NOP INIT handler and registers it
        with SAL before leaving the current kernel, so that kdump can start
        safely: INITs are prevented from entering virtual mode and from
        causing a warm boot.
      
        On the other hand, a kexec that is not for kdump has the same problem
        with INIT during the kernel transition.  This patch handles that case
        differently: for kexec, unregistering the handlers is preferable to
        registering a NOP handler, since "no handlers registered" is the
        usual state at kernel entry.
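
        For illustration only, here is a minimal, hedged sketch of the idea
        (not the actual patch): it assumes the arch/ia64 SAL registration
        call ia64_sal_set_vectors() with the SAL_VECTOR_OS_INIT vector type;
        the stub symbol and helper name are hypothetical.

          /* Hedged sketch in C: before jumping into the new kernel, either
           * register a trivial NOP INIT handler (kdump case) or unregister
           * the handlers entirely (plain kexec case).  Symbol and helper
           * names below are made up for illustration. */
          #include <asm/sal.h>
          #include <asm/page.h>

          extern char kdump_nop_init_handler[];         /* hypothetical stub */
          extern unsigned long kdump_nop_init_handler_size;

          static void prepare_init_vectors_for_transition(int is_kdump)
          {
                  if (is_kdump) {
                          /* kdump: park INITs in a NOP handler so they never
                           * enter virtual mode and never cause a warm boot. */
                          ia64_sal_set_vectors(SAL_VECTOR_OS_INIT,
                                               __pa(kdump_nop_init_handler), 0,
                                               kdump_nop_init_handler_size,
                                               0, 0, 0);
                  } else {
                          /* plain kexec: unregister; "no handlers registered"
                           * is the normal state at kernel entry. */
                          ia64_sal_set_vectors(SAL_VECTOR_OS_INIT,
                                               0, 0, 0, 0, 0, 0);
                  }
          }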
      Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Cc: Haren Myneni <hbabu@us.ibm.com>
      Cc: kexec@lists.infradead.org
      Acked-by: Fenghua Yu <fenghua.yu@intel.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      07a6a4ae
    • [IA64] kdump: Mask MCA/INIT on frozen cpus · 4295ab34
      Committed by Hidetoshi Seto
      Summary:
      
        An INIT asserted on the kdump kernel invokes the INIT handler not
        only on the cpu that is running the kdump kernel, but also on the BSP
        of the panicked kernel, because the (badly) frozen BSP can be thawed
        by INIT.
      
      Description:
      
        kdump_cpu_freeze() is called on every cpu except the one that
        initiates the panic and/or kdump, to stop/offline those cpus (on
        ia64 this means we pass control of the cpus back to SAL, or put them
        in a spin loop).  Note that CPU0 (the BSP) always goes to the spin
        loop, so if the panic happened on an AP, there are at least two cpus
        (the AP and the BSP) which do not go back to SAL.

        On the spinning cpus, interrupts are disabled (rsm psr.i), but INIT
        can still interrupt them, because psr.mc, which masks INIT, is only
        set if kdump_cpu_freeze() is called from MCA/INIT context.
      
        Now assume that a panic happened on an AP, kdump was invoked, the new
        INIT handlers for the kdump kernel were registered, and then an INIT
        is asserted.  From SAL's point of view there are two online cpus, so
        the INIT is delivered to both of them.  This means that not only the
        AP (the cpu executing kdump) enters the newly registered INIT
        handler, but the BSP (the cpu still spinning in the panicked kernel)
        enters the same handler as well.  Of course the register state on the
        BSP is still the old one (for the panicked kernel), so running the
        handler with that wrong state is extremely unpredictable.  I believe
        this is not desirable behavior.
      
      How to Reproduce:
      
        Start kdump on one of the APs (e.g. cpu1)
          # taskset 0x2 echo c > /proc/sysrq-trigger
        Then assert INIT after the kdump kernel has booted and the new INIT
        handler for the kdump kernel has been registered.
      
      Expected results:
      
        An INIT handler is invoked only on the AP.
      
      Actual results:
      
        An INIT handler is invoked on the AP and BSP.
      
      Sample of results:
      
        I got the following console log by asserting INIT after the
        "root:/>" prompt appeared.  Two monarchs showed up for a single INIT,
        and one of them panicked in the end.  It also seems that the
        panicking one assumed there were 4 online cpus and that none of them
        rendezvoused:
      
          :
          [  0 %]dropping to initramfs shell
          exiting this shell will reboot your system
          root:/> Entered OS INIT handler. PSP=fff301a0 cpu=0 monarch=0
          ia64_init_handler: Promoting cpu 0 to monarch.
          Delaying for 5 seconds...
          All OS INIT slaves have reached rendezvous
          Processes interrupted by INIT - 0 (cpu 0 task 0xa000000100af0000)
          :
          <<snip>>
          :
          Entered OS INIT handler. PSP=fff301a0 cpu=0 monarch=1
          Delaying for 5 seconds...
          mlogbuf_finish: printing switched to urgent mode, MCA/INIT might be dodgy or fail.
          OS INIT slave did not rendezvous on cpu 1 2 3
          INIT swapper 0[0]: bugcheck! 0 [1]
          :
          <<snip>>
          :
          Kernel panic - not syncing: Attempted to kill the idle task!
      
      Proposed fix:
      
        To avoid this problem, this patch inserts ia64_set_psr_mc() to mask
        INIT on cpus that are going to be frozen.  This masking has no effect
        if kdump_cpu_freeze() is called from the INIT handler (i.e. when
        kdump_on_init == 1), because psr.mc is already set to 1 before
        entering OS_INIT.  I confirmed that the weird log shown above
        disappears after applying this patch.
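
        As a rough illustration (not the literal patch), the sketch below
        shows where such a masking call would sit in a freeze routine;
        kdump_cpu_freeze() and ia64_set_psr_mc() are the names used above,
        while the body and simplified signature are assumptions.

          /* Hedged sketch in C: set psr.mc before the cpu parks itself, so a
           * later INIT aimed at the kdump kernel cannot thaw this frozen cpu.
           * Only the ia64_set_psr_mc() call reflects the described fix; the
           * rest is an illustrative approximation. */
          #include <linux/irqflags.h>
          #include <asm/processor.h>

          extern void ia64_set_psr_mc(void);

          static void kdump_cpu_freeze(void *arg)   /* signature simplified */
          {
                  local_irq_disable();    /* rsm psr.i: block interrupts    */
                  ia64_set_psr_mc();      /* mask MCA/INIT on this cpu too  */

                  /* ... save this cpu's register state for the dump ... */

                  for (;;)                /* park; never return to the      */
                          cpu_relax();    /* panicked kernel                */
          }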
      Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Cc: Haren Myneni <hbabu@us.ibm.com>
      Cc: kexec@lists.infradead.org
      Acked-by: Fenghua Yu <fenghua.yu@intel.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      4295ab34
  2. 03 Sep 2009, 2 commits
  3. 12 Aug 2009, 7 commits
  4. 05 Aug 2009, 1 commit
  5. 28 Jul 2009, 1 commit
    • mm: Pass virtual address to [__]p{te,ud,md}_free_tlb() · 9e1b32ca
      Committed by Benjamin Herrenschmidt
      
      Upcoming patches to support the new 64-bit "BookE" powerpc architecture
      will need the virtual address corresponding to a PTE page when
      freeing it, due to the way the HW table walker works.
      
      Basically, the TLB can be loaded with "large" pages that cover the whole
      virtual space (well, sort-of, half of it actually) represented by a PTE
      page, and which contain an "indirect" bit indicating that this TLB entry
      RPN points to an array of PTEs from which the TLB can then create direct
      entries. Thus, in order to invalidate those when PTE pages are deleted,
      we need the virtual address to pass to tlbilx or tlbivax instructions.
      
      The old trick of sticking it somewhere in the PTE page's struct page sucks
      too much, the address is almost readily available at all call sites and
      almost everybody implements these as macros, so we may as well add the
      argument everywhere.  I added it to the pmd and pud variants for consistency.
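
      As a hedged sketch of the resulting interface (shapes are approximate,
      not the patch itself): the freeing wrappers gain a virtual-address
      argument that architectures may use or ignore; the
      tlb_flush_pgtable_and_free() name below is hypothetical.

        /* generic wrapper, roughly in the include/asm-generic/tlb.h style */
        #define pte_free_tlb(tlb, ptep, address)              \
                do {                                          \
                        (tlb)->need_flush = 1;                \
                        __pte_free_tlb(tlb, ptep, address);   \
                } while (0)

        /* an arch that does not need the address simply ignores it */
        #define __pte_free_tlb(tlb, ptep, address)    pte_free((tlb)->mm, (ptep))

        /* an arch with an indirect-PTE hardware walker can use the address
         * to invalidate the covering TLB entry, e.g. (hypothetical name):
         *
         *   #define __pte_free_tlb(tlb, ptep, address) \
         *           tlb_flush_pgtable_and_free((tlb), (ptep), (address))
         */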
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Acked-by: David Howells <dhowells@redhat.com> [MN10300 & FRV]
      Acked-by: Nick Piggin <npiggin@suse.de>
      Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com> [s390]
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      9e1b32ca
  6. 17 Jul 2009, 3 commits
  7. 13 Jul 2009, 1 commit
  8. 11 Jul 2009, 1 commit
  9. 01 Jul 2009, 4 commits
  10. 28 Jun 2009, 1 commit
  11. 22 Jun 2009, 1 commit
  12. 20 Jun 2009, 1 commit
  13. 19 Jun 2009, 2 commits
  14. 18 Jun 2009, 4 commits
  15. 17 Jun 2009, 6 commits
  16. 16 Jun 2009, 3 commits