1. 05 Oct 2015, 4 commits
    • mm: Check if section present during memory block (un)registering · 7568fb63
      Committed by Yinghai Lu
      Tony found that on his setup, a memory block size of 512M causes a
      crash during booting.
      
       BUG: unable to handle kernel paging request at ffffea0074000020
       IP: [<ffffffff81670527>] get_nid_for_pfn+0x17/0x40
       PGD 128ffcb067 PUD 128ffc9067 PMD 0
       Oops: 0000 [#1] SMP
       Modules linked in:
       CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.2.0-rc8 #1
      ...
       Call Trace:
        [<ffffffff81453b56>] ? register_mem_sect_under_node+0x66/0xe0
        [<ffffffff81453eeb>] register_one_node+0x17b/0x240
        [<ffffffff81b1f1ed>] ? pci_iommu_alloc+0x6e/0x6e
        [<ffffffff81b1f229>] topology_init+0x3c/0x95
        [<ffffffff8100213d>] do_one_initcall+0xcd/0x1f0
      
      The system has non-contiguous RAM addresses:
       BIOS-e820: [mem 0x0000001300000000-0x0000001cffffffff] usable
       BIOS-e820: [mem 0x0000001d70000000-0x0000001ec7ffefff] usable
       BIOS-e820: [mem 0x0000001f00000000-0x0000002bffffffff] usable
       BIOS-e820: [mem 0x0000002c18000000-0x0000002d6fffefff] usable
       BIOS-e820: [mem 0x0000002e00000000-0x00000039ffffffff] usable
      
      So some memory blocks start with sections that are not present.
      For example:
      memory block : [0x2c18000000, 0x2c20000000) 512M
      first three sections are not present.
      
      The current register_mem_sect_under_node() assumes the first section
      is present, but a memory block's section number range
      [start_section_nr, end_section_nr] can include sections that are not
      present.
      
      On arches that support vmemmap, we don't set up a memmap (struct page
      area) for sections that are not present.
      
      So skip the pfn ranges that belong to absent sections.
      
      Also fix unregister_mem_sect_under_nodes(), which assumed one section
      per memory block.
      Reported-by: Tony Luck <tony.luck@intel.com>
      Tested-by: Tony Luck <tony.luck@intel.com>
      Fixes: bdee237c ("x86: mm: Use 2GB memory block size on large memory x86-64 systems")
      Fixes: 982792c7 ("x86, mm: probe memory block size for generic x86 64bit")
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Cc: stable@vger.kernel.org #v3.15
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • CMA: fix CONFIG_CMA_SIZE_MBYTES overflow in 64bit · a785ce9c
      Committed by Tan Xiaojun
      On a 64-bit system, if you set CONFIG_CMA_SIZE_MBYTES >= 2048, the
      size computation overflows and size_bytes ends up as a huge bogus
      number.
      
      Set CONFIG_CMA_SIZE_MBYTES=2048 and you will get a message like the
      following during system boot:
      
      *********
      cma: Failed to reserve 17592186042368 MiB
      *********
      Signed-off-by: Tan Xiaojun <tanxiaojun@huawei.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • base/platform: assert that dev_pm_domain callbacks are called unconditionally · b8b2c7d8
      Committed by Uwe Kleine-König
      When a platform driver doesn't provide a .remove callback, the
      function platform_drv_remove() isn't called, so the
      dev_pm_domain_attach() call made at probe time isn't paired with a
      dev_pm_domain_detach() at remove time.
      
      To fix this (and similar issues if other callbacks are missing), hook
      up the bus-level platform_drv_* callbacks unconditionally and make
      them aware that the driver's own callbacks might be missing.
      Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    • base: soc: simplify ida usage · cfcf6a91
      Committed by Lee Duncan
      Simplify ida index allocation and removal by using the ida_simple_*
      helper functions.
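      The before/after shape of such a conversion looks roughly like this.
      This is a sketch of the pattern (kernel code, not runnable standalone;
      the retry label, soc_lock, and variable names are illustrative), with
      ida_simple_get()/ida_simple_remove() being the real helpers:

```c
/* Before: open-coded allocation with pre-get/retry and a private lock. */
retry:
	if (!ida_pre_get(&soc_ida, GFP_KERNEL))
		return -ENOMEM;
	spin_lock(&soc_lock);
	ret = ida_get_new_above(&soc_ida, 0, &id);
	spin_unlock(&soc_lock);
	if (ret == -EAGAIN)
		goto retry;

/* After: the helper handles locking and the EAGAIN retry internally. */
	id = ida_simple_get(&soc_ida, 0, 0, GFP_KERNEL);
	if (id < 0)
		return id;
	/* ... use id ... */
	ida_simple_remove(&soc_ida, id);
```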
      Signed-off-by: Lee Duncan <lduncan@suse.com>
      Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  2. 04 Oct 2015, 1 commit
  3. 17 Sep 2015, 1 commit
    • cpu/cacheinfo: Fix teardown path · 2110d70c
      Committed by Borislav Petkov
      Philip Müller reported a hang when booting a 32-bit 4.1 kernel on an
      AMD box. A fragment of the splat was enough to pinpoint the issue:
      
        task: f58e0000 ti: f58e8000 task.ti: f58e800
        EIP: 0060:[<c135a903>] EFLAGS: 00010206 CPU: 0
        EIP is at free_cache_attributes+0x83/0xd0
        EAX: 00000001 EBX: f589d46c ECX: 00000090 EDX: 360c2000
        ESI: 00000000 EDI: c1724a80 EBP: f58e9ec0 ESP: f58e9ea0
         DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068
        CR0: 8005003b CR2: 000000ac CR3: 01731000 CR4: 000006d0
      
      cache_shared_cpu_map_setup() did check the sibling CPUs' cacheinfo
      descriptors, while the respective teardown path,
      cache_shared_cpu_map_remove(), didn't. Fix that.
      
      From tglx's version: to be on the safe side, move the cacheinfo
      descriptor check to free_cache_attributes(), thus cleaning up the
      hotplug path a little and making this even more robust.
      Reported-and-tested-by: Philip Müller <philm@manjaro.org>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Sudeep Holla <sudeep.holla@arm.com>
      Cc: <stable@vger.kernel.org> # 4.1
      Cc: Andre Przywara <andre.przywara@arm.com>
      Cc: Guenter Roeck <linux@roeck-us.net>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: linux-kernel@vger.kernel.org
      Cc: manjaro-dev@manjaro.org
      Cc: Philip Müller <philm@manjaro.org>
      Link: https://lkml.kernel.org/r/55B47BB8.6080202@manjaro.org
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  4. 15 Sep 2015, 1 commit
  5. 10 Sep 2015, 1 commit
  6. 09 Sep 2015, 2 commits
  7. 05 Sep 2015, 2 commits
    • PM / Domains: Ensure subdomain is not in use before removing · 30e7a65b
      Committed by Jon Hunter
      The function pm_genpd_remove_subdomain() removes a subdomain from a
      generic PM domain; however, it does not check whether the subdomain
      has any slave domains or devices attached before doing so. Therefore,
      add a check that the subdomain has no slave domains and no devices
      attached before removing it.
      Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
      Acked-by: Kevin Hilman <khilman@linaro.org>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    • mm: check if section present during memory block registering · 04697858
      Committed by Yinghai Lu
      Tony Luck found that on his setup, a memory block size of 512M causes
      a crash during booting.
      
        BUG: unable to handle kernel paging request at ffffea0074000020
        IP: get_nid_for_pfn+0x17/0x40
        PGD 128ffcb067 PUD 128ffc9067 PMD 0
        Oops: 0000 [#1] SMP
        Modules linked in:
        CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.2.0-rc8 #1
        ...
        Call Trace:
           ? register_mem_sect_under_node+0x66/0xe0
           register_one_node+0x17b/0x240
           ? pci_iommu_alloc+0x6e/0x6e
           topology_init+0x3c/0x95
           do_one_initcall+0xcd/0x1f0
      
      The system has non-contiguous RAM addresses:
       BIOS-e820: [mem 0x0000001300000000-0x0000001cffffffff] usable
       BIOS-e820: [mem 0x0000001d70000000-0x0000001ec7ffefff] usable
       BIOS-e820: [mem 0x0000001f00000000-0x0000002bffffffff] usable
       BIOS-e820: [mem 0x0000002c18000000-0x0000002d6fffefff] usable
       BIOS-e820: [mem 0x0000002e00000000-0x00000039ffffffff] usable
      
      So some memory blocks start with sections that are not present.  For
      example:
      
          memory block : [0x2c18000000, 0x2c20000000) 512M
      
      first three sections are not present.
      
      The current register_mem_sect_under_node() assumes the first section
      is present, but a memory block's section number range
      [start_section_nr, end_section_nr] can include sections that are not
      present.
      
      On arches that support vmemmap, we don't set up a memmap (struct page
      area) for sections that are not present.
      
      So skip the pfn ranges that belong to absent sections.
      
      [akpm@linux-foundation.org: simplification]
      [rientjes@google.com: more simplification]
      Fixes: bdee237c ("x86: mm: Use 2GB memory block size on large memory x86-64 systems")
      Fixes: 982792c7 ("x86, mm: probe memory block size for generic x86 64bit")
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: David Rientjes <rientjes@google.com>
      Reported-by: Tony Luck <tony.luck@intel.com>
      Tested-by: Tony Luck <tony.luck@intel.com>
      Cc: Greg KH <greg@kroah.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Tested-by: David Rientjes <rientjes@google.com>
      Cc: <stable@vger.kernel.org>	[3.15+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  8. 03 Sep 2015, 1 commit
  9. 31 Aug 2015, 2 commits
  10. 30 Aug 2015, 4 commits
  11. 29 Aug 2015, 3 commits
  12. 28 Aug 2015, 4 commits
  13. 26 Aug 2015, 2 commits
  14. 22 Aug 2015, 3 commits
  15. 21 Aug 2015, 2 commits
  16. 15 Aug 2015, 1 commit
  17. 14 Aug 2015, 1 commit
  18. 12 Aug 2015, 3 commits
  19. 07 Aug 2015, 2 commits
    • regmap: Use different lockdep class for each regmap init call · 3cfe7a74
      Committed by Nicolas Boichat
      Lockdep validator complains about recursive locking and deadlock
      when two different regmap instances are called in a nested order.
      That happens anytime a regmap read/write call needs to access
      another regmap.
      
      This is because, for performance reasons, lockdep groups all locks
      initialized by the same mutex_init() in the same lock class.
      Therefore all regmap mutexes are in the same lock class, leading
      to lockdep "nested locking" warnings if a regmap accesses another
      regmap.
      
      In general, it is impossible to establish in advance the hierarchy
      of regmaps, so we make sure that each regmap init call initializes
      its own static lock_class_key. This is done by wrapping all
      regmap_init calls into macros.
      
      This also allows us to give meaningful names to the lock_class_key.
      For example, in rt5677 case, we have in /proc/lockdep_chains:
      irq_context: 0
      [ffffffc0018d2198] &dev->mutex
      [ffffffc0018d2198] &dev->mutex
      [ffffffc001bd7f60] rt5677:5104:(&rt5677_regmap)->_lock
      [ffffffc001bd7f58] rt5677:5096:(&rt5677_regmap_physical)->_lock
      [ffffffc001b95448] &(&base->lock)->rlock
      
      The above would have resulted in a lockdep recursive warning
      previously. This is not the case anymore as the lockdep validator
      now clearly identifies the 2 regmaps as separate.
      Signed-off-by: Nicolas Boichat <drinkcat@chromium.org>
      Signed-off-by: Mark Brown <broonie@kernel.org>
    • regmap: debugfs: Fix misuse of IS_ENABLED · 1635e888
      Committed by Axel Lin
      IS_ENABLED should only be used for CONFIG_* symbols.
      
      I have done a small test:
        #define REGMAP_ALLOW_WRITE_DEBUGFS
        IS_ENABLED(REGMAP_ALLOW_WRITE_DEBUGFS) returns 0.
      
        #define REGMAP_ALLOW_WRITE_DEBUGFS 0
        IS_ENABLED(REGMAP_ALLOW_WRITE_DEBUGFS) returns 0.
      
        #define REGMAP_ALLOW_WRITE_DEBUGFS 1
        IS_ENABLED(REGMAP_ALLOW_WRITE_DEBUGFS) returns 1.
      
        #define REGMAP_ALLOW_WRITE_DEBUGFS 2
        IS_ENABLED(REGMAP_ALLOW_WRITE_DEBUGFS) returns 0.
      
      So fix the misuse of IS_ENABLED(REGMAP_ALLOW_WRITE_DEBUGFS) and
      switch to #if defined(REGMAP_ALLOW_WRITE_DEBUGFS) instead.
      Signed-off-by: Axel Lin <axel.lin@ingics.com>
      Signed-off-by: Mark Brown <broonie@kernel.org>