1. 08 Apr, 2014 - 1 commit
  2. 15 Jan, 2014 - 1 commit
  3. 12 Sep, 2013 - 2 commits
    •
      vmcore: introduce remap_oldmem_pfn_range() · 9cb21813
      Authored by Michael Holzheu
      For zfcpdump we can't map the HSA storage because it is only available via
      a read interface.  Therefore, for the new vmcore mmap feature we have to
      introduce a new mechanism to create mappings on demand.
      
      This patch introduces a new architecture function remap_oldmem_pfn_range()
      that should be used to create mappings with remap_pfn_range() for oldmem
      areas that can be directly mapped.  For zfcpdump this is everything
      besides the HSA memory.  For the areas that are not mapped by
      remap_oldmem_pfn_range(), a new generic vmcore fault handler,
      mmap_vmcore_fault(), is called.
      
      This handler works as follows:
      
      * Get already available or new page from page cache (find_or_create_page)
      * Check if /proc/vmcore page is filled with data (PageUptodate)
      * If yes:
        Return that page
      * If no:
        Fill page using __vmcore_read(), set PageUptodate, and return page
      Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
      Acked-by: Vivek Goyal <vgoyal@redhat.com>
      Cc: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
      Cc: Jan Willeke <willeke@de.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      9cb21813
    •
      vmcore: introduce ELF header in new memory feature · be8a8d06
      Authored by Michael Holzheu
      For s390 we want to use /proc/vmcore for our SCSI stand-alone dump
      (zfcpdump).  We have support where the first HSA_SIZE bytes are saved into
      a hypervisor owned memory area (HSA) before the kdump kernel is booted.
      When the kdump kernel starts, it is restricted to use only HSA_SIZE bytes.
      
      The advantages of this mechanism are:
      
       * No crashkernel memory has to be defined in the old kernel.
       * Early boot problems (before kexec_load has been done) can be dumped.
       * Non-Linux systems can be dumped.
      
      We modify the s390 copy_oldmem_page() function to read from the HSA memory
      if memory below HSA_SIZE bytes is requested.
      
      Since we cannot use the kexec tool to load the kernel in this scenario,
      we have to build the ELF header in the 2nd (kdump/new) kernel.
      
      So with the following patch set we would like to introduce a new
      mechanism so that the ELF header for /proc/vmcore can be created in 2nd
      kernel memory.
      
      The following steps are done during zfcpdump execution:
      
      1.  Production system crashes
      2.  User boots a SCSI disk that has been prepared with the zfcpdump tool
      3.  Hypervisor saves CPU state of boot CPU and HSA_SIZE bytes of memory into HSA
      4.  Boot loader loads kernel into low memory area
      5.  Kernel boots and uses only HSA_SIZE bytes of memory
      6.  Kernel saves registers of non-boot CPUs
      7.  Kernel does memory detection for dump memory map
      8.  Kernel creates ELF header for /proc/vmcore
      9.  /proc/vmcore uses this header for initialization
      10. The zfcpdump user space reads /proc/vmcore to write dump to SCSI disk
          - copy_oldmem_page() copies from HSA for memory below HSA_SIZE
          - copy_oldmem_page() copies from real memory for memory above HSA_SIZE
      
      Currently for s390 we create the ELF core header in the 2nd kernel with a
      small trick.  We relocate the addresses in the ELF header in a way that
      for the /proc/vmcore code it seems to be in the 1st kernel (old) memory
      and the read_from_oldmem() returns the correct data.  This allows the
      /proc/vmcore code to use the ELF header in the 2nd kernel.
      
      This patch:
      
      Exchange the old mechanism with the new and much cleaner function call
      override feature that now officially allows creating the ELF core header
      in the 2nd kernel.
      
      To use the new feature, the following functions have to be defined
      by the architecture backend code to read from new memory:
      
       * elfcorehdr_alloc: Allocate ELF header
       * elfcorehdr_free: Free the memory of the ELF header
       * elfcorehdr_read: Read from ELF header
       * elfcorehdr_read_notes: Read from ELF notes
      Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
      Acked-by: Vivek Goyal <vgoyal@redhat.com>
      Cc: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
      Cc: Jan Willeke <willeke@de.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      be8a8d06
  4. 16 Mar, 2012 - 1 commit
    •
      device.h: audit and cleanup users in main include dir · 313162d0
      Authored by Paul Gortmaker
      The <linux/device.h> header includes a lot of stuff, and
      it in turn gets a lot of use just for the basic "struct device"
      which appears so often.
      
      Clean up the users as follows:
      
      1) For those headers only needing "struct device" as a pointer
      in function args, replace the include with exactly that.
      
      2) For headers not really using anything from device.h, simply
      delete the include altogether.
      
      3) For headers relying on getting device.h implicitly before
      being included themselves, now explicitly include device.h.
      
      4) For files in which doing #1 or #2 uncovers an implicit
      dependency on some other header, fix by explicitly adding
      the required header(s).
      
      Any C files that were implicitly relying on device.h to be
      present have already been dealt with in advance.
      
      Total removals from #1 and #2: 51.  Total additions coming
      from #3: 9.  Total other implicit dependencies from #4: 7.
      
      As of 3.3-rc1, there were 110, so a net removal of 42 gives
      about a 38% reduction in device.h presence in include/*
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      313162d0
  5. 13 Jan, 2012 - 1 commit
    •
      include/linux/crash_dump.h needs elf.h · 1f536b9e
      Authored by Fabio Estevam
      Building an ARM target we get the following warnings:
      
        CC      arch/arm/kernel/setup.o
        In file included from arch/arm/kernel/setup.c:39:
        arch/arm/include/asm/elf.h:102:1: warning: "vmcore_elf64_check_arch" redefined
        In file included from arch/arm/kernel/setup.c:24:
        include/linux/crash_dump.h:30:1: warning: this is the location of the previous definition
      
      Quoting Russell King:
      
      "linux/crash_dump.h makes no attempt to include asm/elf.h, but it depends
      on stuff in asm/elf.h to determine how stuff inside this file is defined
      at parse time.
      
      So, if asm/elf.h is included after linux/crash_dump.h or not at all, you
      get a different result from the situation where asm/elf.h is included
      before."
      
      So add elf.h header to crash_dump.h to avoid this problem.
      
      The original discussion about this can be found at:
      http://www.spinics.net/lists/arm-kernel/msg154113.html
      Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: <stable@vger.kernel.org>	[3.2.1]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1f536b9e
  6. 30 Oct, 2011 - 1 commit
  7. 27 May, 2011 - 1 commit
    •
      fs/proc/vmcore.c: add hook to read_from_oldmem() to check for non-ram pages · 997c136f
      Authored by Olaf Hering
      The balloon driver in a Xen guest frees guest pages and marks them as
      mmio.  When the kernel crashes and the crash kernel attempts to read the
      oldmem via /proc/vmcore, a read from ballooned pages will generate 100%
      load in dom0 because Xen asks qemu-dm for the page content.  Since the
      reads come in as 8-byte requests, each ballooned page is tried 512 times.
      
      With this change a hook can be registered which checks whether the given
      pfn is really ram.  The hook has to return a value > 0 for ram pages, a
      value < 0 on error (because the hypercall is not known) and 0 for non-ram
      pages.
      
      This will reduce the time to read /proc/vmcore.  Without this change a
      512M guest with 128M crashkernel region needs 200 seconds to read it, with
      this change it takes just 2 seconds.
      Signed-off-by: Olaf Hering <olaf@aepfle.de>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      997c136f
  8. 30 Nov, 2010 - 1 commit
  9. 13 Jul, 2009 - 1 commit
  10. 23 Oct, 2008 - 1 commit
  11. 20 Oct, 2008 - 2 commits
    •
      kdump: add is_vmcore_usable() and vmcore_unusable() · 85a0ee34
      Authored by Simon Horman
      The usage of elfcorehdr_addr has changed recently such that being set to
      ELFCORE_ADDR_MAX is used by is_kdump_kernel() to indicate if the code is
      executing in a kernel executed as a crash kernel.
      
      However, arch/ia64/kernel/setup.c:reserve_elfcorehdr will reset
      elfcorehdr_addr to ELFCORE_ADDR_MAX on error, which means any subsequent
      calls to is_kdump_kernel() will return 0, even though they should return
      1.
      
      Ok, at this point in time there are no subsequent calls, but I think it's
      fair to say that there is ample scope for error or at the very least
      confusion.
      
      This patch adds an extra state, ELFCORE_ADDR_ERR, which indicates that
      elfcorehdr_addr was passed on the command line, and thus execution is
      taking place in a crashdump kernel, but vmcore can't be used for some
      reason.  This is tested for using is_vmcore_usable() and set using
      vmcore_unusable().  A subsequent patch makes use of this new code.
      
      To summarise, the states that elfcorehdr_addr can now be in are as follows:
      
      ELFCORE_ADDR_MAX: not a crashdump kernel
      ELFCORE_ADDR_ERR: crashdump kernel but vmcore is unusable
      any other value:  crash dump kernel and vmcore is usable
      Signed-off-by: Simon Horman <horms@verge.net.au>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      85a0ee34
    •
      kdump: make elfcorehdr_addr independent of CONFIG_PROC_VMCORE · 57cac4d1
      Authored by Vivek Goyal
      o elfcorehdr_addr is used by not only the code under CONFIG_PROC_VMCORE
        but also by the code which is not inside CONFIG_PROC_VMCORE.  For
        example, is_kdump_kernel() is used by powerpc code to determine if
        kernel is booting after a panic then use previous kernel's TCE table.
        So even if CONFIG_PROC_VMCORE is not set in second kernel, one should be
        able to correctly determine that we are booting after a panic and setup
        calgary iommu accordingly.
      
      o So remove the assumption that elfcorehdr_addr is under
        CONFIG_PROC_VMCORE.
      
      o Move definition of elfcorehdr_addr to arch dependent crash files.
        (Unfortunately crash dump does not have an arch independent file
        otherwise that would have been the best place).
      
      o kexec.c is not the right place as one can have CRASH_DUMP enabled in
        second kernel without KEXEC being enabled.
      
      o I don't see sh setup code parsing the command line for
        elfcorehdr_addr.  I am wondering how the vmcore interface works on sh.
        Anyway, I am at least defining elfcorehdr_addr so that compilation is not
        broken on sh.
      Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
      Acked-by: "Eric W. Biederman" <ebiederm@xmission.com>
      Acked-by: Simon Horman <horms@verge.net.au>
      Acked-by: Paul Mundt <lethal@linux-sh.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      57cac4d1
  12. 26 Jul, 2008 - 2 commits
    •
      crashdump: fix undefined reference to `elfcorehdr_addr' · 36ac2617
      Authored by Ingo Molnar
      fix build bug introduced by 95b68dec "calgary iommu: use the first
      kernels TCE tables in kdump":
      
      arch/x86/kernel/built-in.o: In function `calgary_iommu_init':
      (.init.text+0x8399): undefined reference to `elfcorehdr_addr'
      arch/x86/kernel/built-in.o: In function `calgary_iommu_init':
      (.init.text+0x856c): undefined reference to `elfcorehdr_addr'
      arch/x86/kernel/built-in.o: In function `detect_calgary':
      (.init.text+0x8c68): undefined reference to `elfcorehdr_addr'
      arch/x86/kernel/built-in.o: In function `detect_calgary':
      (.init.text+0x8d0c): undefined reference to `elfcorehdr_addr'
      
      make elfcorehdr_addr a generally available symbol.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      36ac2617
    •
      calgary iommu: use the first kernels TCE tables in kdump · 95b68dec
      Authored by Chandru
      kdump kernel fails to boot with calgary iommu and aacraid driver on an x366
      box.  The ongoing dma's of aacraid from the first kernel continue to exist
      until the driver is loaded in the kdump kernel.  Calgary is initialized
      prior to aacraid and creation of new tce tables causes wrong dma's to
      occur.  Here we try to get the tce tables of the first kernel in kdump
      kernel and use them.  While in the kdump kernel we do not allocate new tce
      tables but instead read the base address register contents of calgary
      iommu and use the tables that the registers point to.  With these changes
      the kdump kernel and hence aacraid now boots normally.
      Signed-off-by: Chandru Siddalingappa <chandru@in.ibm.com>
      Acked-by: Muli Ben-Yehuda <muli@il.ibm.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      95b68dec
  13. 03 May, 2007 - 1 commit
  14. 29 Mar, 2006 - 1 commit
  15. 26 Jun, 2005 - 3 commits