1. 12 Oct 2015 (1 commit)
  2. 08 Aug 2015 (1 commit)
  3. 07 Aug 2015 (2 commits)
    • mm: check __PG_HWPOISON separately from PAGE_FLAGS_CHECK_AT_* · f4c18e6f
      Committed by Naoya Horiguchi
      The race condition addressed in commit add05cec ("mm: soft-offline:
      don't free target page in successful page migration") was not closed
      completely, because it can happen not only for soft-offline but also
      for hard-offline.  Consider that a slab page is about to be freed into
      the buddy pool and an uncorrected memory error hits the page just
      after it enters __free_one_page(): VM_BUG_ON_PAGE(page->flags &
      PAGE_FLAGS_CHECK_AT_PREP) is then triggered, even though it is not
      necessary, because the data on the affected page is never consumed.
      
      To solve this, this patch drops __PG_HWPOISON from the page flag checks
      at allocation/free time.  I think this is justified because the
      __PG_HWPOISON flag is defined to prevent the page from being reused, and
      setting it outside the page's alloc-free cycle is designed behavior (not a bug).
      
      In recent months I was annoyed by a BUG_ON triggered when a soft-offlined
      page remained on the lru cache list for a while; that is avoided by
      calling put_page() instead of putback_lru_page() in page migration's
      success path.  This means that this patch reverts a major change from
      commit add05cec, the new refcounting rule for soft-offlined pages, so
      the "reuse window" revives.  It will be closed by a subsequent patch.
      Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Dean Nelson <dnelson@redhat.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f4c18e6f
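
      For illustration, here is a minimal user-space sketch of the mask change
      described above, using hypothetical flag bits rather than the kernel's
      real page-flag layout: once the hwpoison bit is dropped from the at-free
      check mask, a page that picks up the poison flag while being freed no
      longer trips the check.

          #include <stdio.h>

          /* Hypothetical flag bits, for illustration only. */
          #define PG_LRU       (1UL << 0)
          #define PG_PRIVATE   (1UL << 1)
          #define PG_HWPOISON  (1UL << 2)

          /* Old behaviour: hwpoison is part of the flags checked at free time. */
          #define CHECK_AT_FREE_OLD  (PG_LRU | PG_PRIVATE | PG_HWPOISON)
          /* New behaviour: hwpoison is excluded, since setting it outside the
           * alloc-free cycle is designed behaviour, not a bug. */
          #define CHECK_AT_FREE_NEW  (CHECK_AT_FREE_OLD & ~PG_HWPOISON)

          int main(void)
          {
              /* A memory error hits the page right after it entered the free path. */
              unsigned long flags = PG_HWPOISON;

              printf("old check would BUG: %s\n",
                     (flags & CHECK_AT_FREE_OLD) ? "yes" : "no");
              printf("new check would BUG: %s\n",
                     (flags & CHECK_AT_FREE_NEW) ? "yes" : "no");
              return 0;
          }
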
    • fs, file table: reinit files_stat.max_files after deferred memory initialisation · 4248b0da
      Committed by Mel Gorman
      Dave Hansen reported the following:
      
      	My laptop has been behaving strangely with 4.2-rc2.  Once I log
      	in to my X session, I start getting all kinds of strange errors
      	from applications and see this in my dmesg:
      
              	VFS: file-max limit 8192 reached
      
      The problem is that file-max is calculated before memory is fully
      initialised, so it miscalculates how much memory the kernel is using.  This
      patch recalculates file-max after deferred memory initialisation.  Note
      that using the memory hotplug infrastructure would not have avoided this
      problem, as the value is not recalculated after memory hot-add.
      
      4.1:             files_stat.max_files = 6582781
      4.2-rc2:         files_stat.max_files = 8192
      4.2-rc2 patched: files_stat.max_files = 6562467
      
      There are small differences between 4.1 and the patched kernel, but not
      enough to matter.
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Reported-by: Dave Hansen <dave.hansen@intel.com>
      Cc: Nicolai Stange <nicstange@gmail.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Alex Ng <alexng@microsoft.com>
      Cc: Fengguang Wu <fengguang.wu@intel.com>
      Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4248b0da
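
      A toy user-space sketch of the kind of memory-based heuristic involved
      (the exact kernel formula differs; the divisor and the 8192 floor below
      are assumptions for illustration): computing the limit while only a small
      slice of RAM is accounted yields the tiny 8192 value, recomputing it
      after memory initialisation yields a sane one.

          #include <stdio.h>

          /* Toy heuristic, for illustration only: allow one open file per ~10 kB
           * of usable memory, with a floor of 8192. */
          static unsigned long max_files(unsigned long usable_kb)
          {
              unsigned long n = usable_kb / 10;
              return n < 8192 ? 8192 : n;
          }

          int main(void)
          {
              /* Before deferred meminit finishes, only a small slice of RAM is
               * accounted as usable; afterwards, (almost) all of it is. */
              printf("early estimate: %lu\n", max_files(64UL * 1024));         /* ~64 MB */
              printf("after meminit:  %lu\n", max_files(16UL * 1024 * 1024));  /* ~16 GB */
              return 0;
          }
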
  4. 28 Jul 2015 (1 commit)
    • cpufreq: Avoid attempts to create duplicate symbolic links · 559ed407
      Committed by Rafael J. Wysocki
      After commit 87549141 (cpufreq: Stop migrating sysfs files on
      hotplug) there is a problem with CPUs that share cpufreq policy
      objects with other CPUs and are initially offline.
      
      Say CPU1 shares a policy with CPU0, which is online and is registered
      first.  As part of the registration process, cpufreq_add_dev() is
      called for it.  It creates the policy object and a symbolic link
      to it from CPU1's sysfs directory.  If CPU1 is registered
      subsequently and it is offline at that time, cpufreq_add_dev() will
      attempt to create a symbolic link to the policy object for it, but
      that link is already present, so a warning is triggered.
      
      To avoid that warning, make cpufreq use an additional CPU mask
      containing related CPUs that are actually present for each policy
      object.  That mask is initialized when the policy object is populated
      after its creation (for the first online CPU using it) and it includes
      CPUs from the "policy CPUs" mask returned by the cpufreq driver's
      ->init() callback that are physically present at that time.  Symbolic
      links to the policy are created only for the CPUs in that mask.
      
      If cpufreq_add_dev() is invoked for an offline CPU, it checks the
      new mask and only creates the symlink if the CPU was not in it (the
      CPU is added to the mask at the same time).
      
      In turn, cpufreq_remove_dev() drops the given CPU from the new mask,
      removes its symlink to the policy object and returns, unless it is
      the CPU owning the policy object.  In that case, the policy object
      is moved to a new CPU's sysfs directory or deleted if the CPU being
      removed was the last user of the policy.
      
      While at it, notice that cpufreq_remove_dev() can't fail, because
      its return value is ignored, so make it ignore return values from
      __cpufreq_remove_dev_prepare() and __cpufreq_remove_dev_finish()
      and prevent these functions from aborting on errors returned by
      __cpufreq_governor().  Also drop the now unused sif argument from
      them.
      
      Fixes: 87549141 (cpufreq: Stop migrating sysfs files on hotplug)
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Reported-and-tested-by: Russell King <linux@arm.linux.org.uk>
      Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
      559ed407
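
      A minimal sketch of the bookkeeping described above, with a plain bitmask
      standing in for the kernel cpumask and a hypothetical
      create_policy_symlink() helper: the symlink is created only when the CPU
      is not already in the policy's mask of present related CPUs, which avoids
      the duplicate-link warning.

          #include <stdio.h>

          struct policy {
              unsigned long real_cpus;   /* related CPUs that are actually present */
          };

          /* Stand-in for creating the sysfs symlink; hypothetical helper. */
          static void create_policy_symlink(int cpu)
          {
              printf("created symlink for CPU%d\n", cpu);
          }

          /* Called for every CPU (online or not) that uses the policy. */
          static void add_dev(struct policy *p, int cpu)
          {
              if (p->real_cpus & (1UL << cpu))
                  return;                 /* link exists already: nothing to do */
              p->real_cpus |= 1UL << cpu;
              create_policy_symlink(cpu);
          }

          static void remove_dev(struct policy *p, int cpu)
          {
              p->real_cpus &= ~(1UL << cpu);
              printf("removed symlink for CPU%d\n", cpu);
          }

          int main(void)
          {
              struct policy p = { 0 };

              add_dev(&p, 0);   /* CPU0 comes online, registers the policy */
              add_dev(&p, 1);   /* CPU1 shares the policy; link created once */
              add_dev(&p, 1);   /* second registration of CPU1: no duplicate link */
              remove_dev(&p, 1);
              return 0;
          }
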
  5. 27 Jul 2015 (2 commits)
  6. 25 Jul 2015 (1 commit)
  7. 24 Jul 2015 (1 commit)
  8. 23 Jul 2015 (2 commits)
  9. 19 Jul 2015 (1 commit)
    • hid-sensor: Fix suspend/resume delay · 88cc7b4e
      Committed by Srinivas Pandruvada
      By default all the sensors are in the runtime-suspended state (lowest
      power state).  During the Linux suspend process, all runtime-suspended
      devices are resumed and then suspended again.  When we introduced
      runtime PM for HID sensors, this caused all sensors to power up and
      added a delay to the suspend time.  The opposite happens during the
      resume process.
      
      To fix this, we perform the sensor powerup only when the request is
      issued by the user (a raw or triggered read).  This way, when runtime
      resume calls for a powerup, it simply returns because this does not
      match the user-requested state.
      
      Note this is a regression fix as the increase in suspend / resume
      times can be substantial (report of 8 seconds on Len's laptop!)
      Signed-off-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
      Tested-by: Len Brown <len.brown@intel.com>
      Cc: <Stable@vger.kernel.org>
      Signed-off-by: Jonathan Cameron <jic23@kernel.org>
      88cc7b4e
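
      A hedged sketch of the gating idea (the struct and field names are
      hypothetical, not the driver's actual ones): runtime resume powers the
      sensor up only if a user-issued read is pending, so the blanket resume
      performed during system suspend/resume becomes a cheap no-op.

          #include <stdbool.h>
          #include <stdio.h>

          struct sensor_state {
              bool user_requested;   /* set by a raw or triggered read from userspace */
          };

          /* Hypothetical runtime-resume callback. */
          static int sensor_runtime_resume(struct sensor_state *st)
          {
              if (!st->user_requested) {
                  /* System-wide PM woke us, not the user: skip the power-up. */
                  return 0;
              }
              printf("powering up sensor\n");
              return 0;
          }

          int main(void)
          {
              struct sensor_state st = { .user_requested = false };

              sensor_runtime_resume(&st);        /* no-op during system resume */
              st.user_requested = true;
              sensor_runtime_resume(&st);        /* powers up for a real request */
              return 0;
          }
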
  10. 18 Jul 2015 (6 commits)
  11. 16 Jul 2015 (1 commit)
  12. 15 Jul 2015 (3 commits)
    • libata: add ATA_HORKAGE_MAX_SEC_1024 to revert back to previous max_sectors limit · af34d637
      Committed by David Milburn
      Since max_sectors is no longer limited to BLK_DEF_MAX_SECTORS (commit 34b48db6),
      data corruption may occur on an ST380013AS drive configured on an 82801JI
      (ICH10 Family) SATA controller. This patch allows the driver to limit
      max_sectors as before:
      
       # cat /sys/block/sdb/queue/max_sectors_kb
       512
      
      I was able to double the max_sectors_kb value up to 16384 on linux-4.2.0-rc2
      before seeing corruption, but it seems safer to use the previous limit. Without
      this patch max_sectors_kb will be 32767.
      
      tj: Minor comment update.
      Reported-by: Jeff Moyer <jmoyer@redhat.com>
      Signed-off-by: David Milburn <dmilburn@redhat.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: stable@vger.kernel.org # v3.19 and later
      Fixes: 34b48db6 ("block: remove artifical max_hw_sectors cap")
      af34d637
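
      A simplified sketch of a horkage-driven clamp (types and the flag's bit
      value are illustrative, not libata's real definitions): when the quirk is
      set for a device, max_sectors is capped at 1024 sectors, i.e. the 512 kB
      shown in max_sectors_kb above.

          #include <stdio.h>

          #define ATA_HORKAGE_MAX_SEC_1024  (1u << 0)   /* quirk flag (illustrative value) */

          struct ata_dev {
              unsigned int horkage;
              unsigned int max_sectors;
          };

          static void apply_quirks(struct ata_dev *dev)
          {
              if ((dev->horkage & ATA_HORKAGE_MAX_SEC_1024) && dev->max_sectors > 1024)
                  dev->max_sectors = 1024;   /* 1024 sectors * 512 B = 512 kB */
          }

          int main(void)
          {
              struct ata_dev dev = {
                  .horkage     = ATA_HORKAGE_MAX_SEC_1024,
                  .max_sectors = 65535,      /* what an unrestricted setup would allow */
              };

              apply_quirks(&dev);
              printf("max_sectors_kb = %u\n", dev.max_sectors / 2);
              return 0;
          }
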
    • libata: add ATA_HORKAGE_NOTRIM · 71d126fd
      Committed by Arne Fitzenreiter
      Some devices lose data on TRIM whether queued or not.  This patch adds
      a horkage to disable TRIM.
      
      tj: Collapsed unnecessary if() nesting.
      Signed-off-by: Arne Fitzenreiter <arne_f@ipfire.org>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: stable@vger.kernel.org
      71d126fd
    • efi: Handle memory error structures produced based on old versions of standard · 4c62360d
      Committed by Luck, Tony
      The memory error record structure includes as its first field a
      bitmask of which subsequent fields are valid. This allows new fields
      to be added to the structure while keeping compatibility with older
      software that parses these records. This mechanism was used between
      versions 2.2 and 2.3 to add four new fields, growing the size of the
      structure from 73 bytes to 80. But Linux just added all the new
      fields, so this test:
      	if (gdata->error_data_length >= sizeof(*mem_err))
      		cper_print_mem(newpfx, mem_err);
      	else
      		goto err_section_too_small;
      now makes Linux complain about old-format records being too short.
      
      Add a definition for the old format of the structure and use that
      for the minimum size check. Pass the actual size to cper_print_mem()
      so it can sanity check the validation_bits field to ensure that if
      a BIOS using the old format sets bits as if it were new, we won't
      access fields beyond the end of the structure.
      Signed-off-by: Tony Luck <tony.luck@intel.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      4c62360d
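
      A rough sketch of the size check described above, with made-up field
      layouts standing in for the real CPER structures: records at least as
      large as the old (v2.2) format are accepted, and the record length plus
      validation_bits are consulted before touching any field added in v2.3.

          #include <stdio.h>
          #include <stddef.h>
          #include <stdint.h>

          /* Illustrative layouts only; the real CPER memory section has more fields. */
          struct mem_err_old {              /* pre-v2.3 layout */
              uint64_t validation_bits;
              uint64_t physical_addr;
          };

          struct mem_err {                  /* v2.3 layout: old fields plus new ones */
              uint64_t validation_bits;
              uint64_t physical_addr;
              uint16_t mem_dev_handle;      /* one of the fields added in v2.3 */
          };

          #define VALID_MEM_DEV_HANDLE  (1ULL << 1)   /* hypothetical bit */

          static void print_mem_err(const struct mem_err *m, size_t len)
          {
              printf("addr: 0x%llx\n", (unsigned long long)m->physical_addr);
              /* Only trust v2.3 fields if the record is actually big enough. */
              if (len >= sizeof(*m) && (m->validation_bits & VALID_MEM_DEV_HANDLE))
                  printf("mem dev handle: %u\n", m->mem_dev_handle);
          }

          int main(void)
          {
              struct mem_err rec = { VALID_MEM_DEV_HANDLE, 0x1234000, 7 };

              /* Accept anything >= the old format, not only the new, larger one. */
              size_t len = sizeof(struct mem_err_old);
              if (len >= sizeof(struct mem_err_old))
                  print_mem_err(&rec, len);
              else
                  printf("section too small\n");
              return 0;
          }
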
  13. 13 Jul 2015 (3 commits)
  14. 10 Jul 2015 (5 commits)
    • KVM: count number of assigned devices · 5544eb9b
      Committed by Paolo Bonzini
      If there are no assigned devices, the guest PAT is not providing
      any useful information and can be overridden to writeback; VMX
      always does this because it has the "IPAT" bit in its extended
      page table entries, but SVM does not have anything similar.
      Hook into VFIO and legacy device assignment so that they
      provide this information to KVM.
      Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
      Tested-by: Joerg Roedel <jroedel@suse.de>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      5544eb9b
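
      A minimal sketch of the counting idea (names are hypothetical, not KVM's
      API): the assignment paths bump a per-VM counter on attach and drop it on
      detach, and the rest of the code checks the counter to decide whether
      guest PAT can simply be overridden to write-back.

          #include <stdbool.h>
          #include <stdio.h>

          struct vm {
              int assigned_device_count;
          };

          static void assign_device(struct vm *vm)   { vm->assigned_device_count++; }
          static void deassign_device(struct vm *vm) { vm->assigned_device_count--; }

          static bool has_assigned_devices(const struct vm *vm)
          {
              return vm->assigned_device_count != 0;
          }

          int main(void)
          {
              struct vm vm = { 0 };

              printf("force WB memtype: %s\n", has_assigned_devices(&vm) ? "no" : "yes");
              assign_device(&vm);       /* e.g. VFIO attaches a device */
              printf("force WB memtype: %s\n", has_assigned_devices(&vm) ? "no" : "yes");
              deassign_device(&vm);
              return 0;
          }
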
    • cdc_ncm: Add support for moving NDP to end of NCM frame · 4a0e3e98
      Committed by Enrico Mioso
      The NCM specs do not actually mandate a specific position in the frame for
      the NDP (Network Datagram Pointer).  However, some Huawei devices will
      ignore our aggregates if it is not placed after the datagrams it points
      to.  Add support for doing just this, in a per-device configurable way.
      While at it, update the NCM subdrivers, disabling this functionality in
      all of them except huawei_cdc_ncm, where it is enabled instead.
      We aren't making any distinction between different Huawei NCM devices,
      based on what the vendor driver does.  Standard NCM devices are left
      unaffected: if they are compliant, they should always be usable, and we
      still stay on the safe side.
      
      This change has been tested and working with a Huawei E3131 device (which
      works regardless of NDP position), a Huawei E3531 (also working both
      ways) and an E3372 (which mandates NDP to be after indexed datagrams).
      
      V1->V2:
      - corrected wrong NDP acronym definition
      - fixed possible NULL pointer dereference
      - patch cleanup
      V2->V3:
      - Properly account for the NDP size when writing new packets to SKB
      Signed-off-by: Enrico Mioso <mrkiko.rs@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      4a0e3e98
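
      A compact sketch of the layout choice (offsets and sizes are illustrative,
      not the real NCM framing code): with the quirk enabled the NDP is written
      after the datagram block it indexes, instead of immediately after the
      transfer header.

          #include <stdio.h>
          #include <stdbool.h>

          #define NTH_LEN       12    /* transfer header size, illustrative */
          #define NDP_LEN       16    /* datagram pointer size, illustrative */
          #define DATAGRAMS_LEN 1500  /* payload block, illustrative */

          static void layout(bool ndp_to_end)
          {
              unsigned int ndp_off, data_off;

              if (ndp_to_end) {
                  data_off = NTH_LEN;                  /* datagrams right after header */
                  ndp_off  = NTH_LEN + DATAGRAMS_LEN;  /* NDP follows the datagrams */
              } else {
                  ndp_off  = NTH_LEN;                  /* classic layout: NDP first */
                  data_off = NTH_LEN + NDP_LEN;
              }
              printf("NDP at %u, datagrams at %u\n", ndp_off, data_off);
          }

          int main(void)
          {
              layout(false);   /* standard NCM devices */
              layout(true);    /* quirky devices needing the NDP at the end */
              return 0;
          }
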
    • blkcg: fix blkcg_policy_data allocation bug · 06b285bd
      Committed by Tejun Heo
      e48453c3 ("block, cgroup: implement policy-specific per-blkcg
      data") updated per-blkcg policy data to be dynamically allocated.
      When a policy is registered, its policy data aren't created.  Instead,
      when the policy is activated on a queue, the policy data are allocated
      if there are blkg's (blkcg_gq's) which are attached to a given blkcg.
      This is buggy.  Consider the following scenario.
      
      1. A blkcg is created.  No blkg's attached yet.
      
      2. The policy is registered.  No policy data is allocated.
      
      3. The policy is activated on a queue.  As the above blkcg doesn't
         have any blkg's, it won't allocate the matching blkcg_policy_data.
      
      4. An IO is issued from the blkcg; a blkg is created, but the blkcg
         still doesn't have the matching policy data allocated.
      
      With cfq-iosched, this leads to an oops.
      
      It also doesn't free policy data on policy unregistration, assuming
      that freeing all policy data on blkcg destruction will take care of
      it; however, this is also incorrect.
      
      1. A blkcg has policy data.
      
      2. The policy gets unregistered but the policy data remains.
      
      3. Another policy gets registered on the same slot.
      
      4. Later, the new policy tries to allocate policy data on the previous
         blkcg but the slot is already occupied and gets skipped.  The
         policy ends up operating on the policy data of the previous policy.
      
      There's no reason to manage blkcg_policy_data lazily.  The reason we
      do lazy allocation of blkg's is that the number of all possible blkg's
      is the product of cgroups and block devices which can reach a
      surprising level.  blkcg_policy_data is constrained by the number of
      cgroups and shouldn't be a problem.
      
      This patch makes blkcg_policy_data be allocated for all existing
      blkcg's on policy registration and freed on unregistration, and removes
      blkcg_policy_data handling from the policy [de]activation paths.  This
      ensures that blkcg_policy_data is created and removed together with the
      policy it belongs to and fixes the problems described above.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Fixes: e48453c3 ("block, cgroup: implement policy-specific per-blkcg data")
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Cc: Arianna Avanzini <avanzini.arianna@gmail.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      06b285bd
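
      A sketch of the eager allocation scheme using plain arrays in place of
      the real blkcg data structures: on policy registration the per-blkcg
      policy data is allocated for every existing blkcg, and on unregistration
      it is freed again, so the [de]activation paths no longer have to deal
      with it.

          #include <stdio.h>
          #include <stdlib.h>

          #define MAX_BLKCGS 4
          #define MAX_POLS   2

          struct blkcg {
              void *pd[MAX_POLS];   /* per-policy data, indexed by policy slot */
          };

          /* Stand-in for the all_blkcgs list. */
          static struct blkcg *all_blkcgs[MAX_BLKCGS];
          static int nr_blkcgs;

          static int policy_register(int pol)
          {
              for (int i = 0; i < nr_blkcgs; i++) {
                  all_blkcgs[i]->pd[pol] = calloc(1, 64);   /* policy-specific data */
                  if (!all_blkcgs[i]->pd[pol])
                      return -1;
              }
              return 0;
          }

          static void policy_unregister(int pol)
          {
              for (int i = 0; i < nr_blkcgs; i++) {
                  free(all_blkcgs[i]->pd[pol]);
                  all_blkcgs[i]->pd[pol] = NULL;   /* slot is clean for the next policy */
              }
          }

          int main(void)
          {
              static struct blkcg root, child;

              all_blkcgs[nr_blkcgs++] = &root;
              all_blkcgs[nr_blkcgs++] = &child;   /* blkcg exists before the policy */

              policy_register(0);
              printf("child has pd[0]: %s\n", child.pd[0] ? "yes" : "no");
              policy_unregister(0);
              return 0;
          }
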
    • blkcg: implement all_blkcgs list · 7876f930
      Committed by Tejun Heo
      Add an all_blkcgs list that goes through blkcg->all_blkcgs_node and is
      protected by blkcg_pol_mutex.  This will be used to fix the
      blkcg_policy_data allocation bug.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Vivek Goyal <vgoyal@redhat.com>
      Cc: Arianna Avanzini <avanzini.arianna@gmail.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      7876f930
    • libceph: enable ceph in a non-default network namespace · 757856d2
      Committed by Ilya Dryomov
      Grab a reference on the network namespace of the 'rbd map' (in the case
      of rbd) or 'mount' (in the case of ceph) process and use it to open
      sockets, instead of always using init_net and bailing out if the network
      namespace is anything but init_net.  Be careful not to share struct
      ceph_client instances between different namespaces, and don't add any
      code in the !CONFIG_NET_NS case.
      
      This is based on a patch from Hong Zhiguo <zhiguohong@tencent.com>.
      Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
      Reviewed-by: Sage Weil <sage@redhat.com>
      757856d2
  15. 09 Jul 2015 (1 commit)
  16. 08 Jul 2015 (4 commits)
    • hotplug: Prevent alloc/free of irq descriptors during cpu up/down · a8994181
      Committed by Thomas Gleixner
      When a cpu goes up some architectures (e.g. x86) have to walk the irq
      space to set up the vector space for the cpu. While this needs extra
      protection at the architecture level we can avoid a few race
      conditions by preventing the concurrent allocation/free of irq
      descriptors and the associated data.
      
      When a cpu goes down it moves the interrupts which are targeted at
      this cpu away by reassigning the affinities. While this happens,
      interrupts can be allocated and freed, which opens a can of race
      conditions in the code which reassigns the affinities, because
      interrupt descriptors might be freed underneath it.
      
      Example:
      
      CPU1				CPU2
      cpu_up/down
       irq_desc = irq_to_desc(irq);
      				remove_from_radix_tree(desc);
       raw_spin_lock(&desc->lock);
      				free(desc);
      
      We could protect the irq descriptors with RCU, but that would require
      a full tree change of all accesses to interrupt descriptors. But
      fortunately these kinds of race conditions are rather limited to a few
      things like cpu hotplug. The normal setup/teardown is very well
      serialized. So the simpler and obvious solution is:
      
      Prevent allocation and freeing of interrupt descriptors across cpu
      hotplug.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: xiao jin <jin.xiao@intel.com>
      Cc: Joerg Roedel <jroedel@suse.de>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Yanmin Zhang <yanmin_zhang@linux.intel.com>
      Link: http://lkml.kernel.org/r/20150705171102.063519515@linutronix.de
      a8994181
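
      A simplified sketch of the serialization, with a plain mutex and array
      standing in for the sparse irq locking and the descriptor radix tree:
      descriptor freeing and the hotplug walk take the same lock, so a
      descriptor seen during cpu up/down cannot be freed underneath.

          #include <pthread.h>
          #include <stdio.h>
          #include <stdlib.h>

          #define NR_IRQS 4

          static pthread_mutex_t sparse_irq_lock = PTHREAD_MUTEX_INITIALIZER;
          static int *irq_desc[NR_IRQS];          /* toy stand-in for the descriptor tree */

          static void free_irq_desc(int irq)
          {
              pthread_mutex_lock(&sparse_irq_lock);
              free(irq_desc[irq]);
              irq_desc[irq] = NULL;
              pthread_mutex_unlock(&sparse_irq_lock);
          }

          static void cpu_up_down(void)
          {
              /* Hold the lock across the whole walk, so no descriptor can vanish. */
              pthread_mutex_lock(&sparse_irq_lock);
              for (int irq = 0; irq < NR_IRQS; irq++)
                  if (irq_desc[irq])
                      printf("migrating/initializing irq %d\n", irq);
              pthread_mutex_unlock(&sparse_irq_lock);
          }

          int main(void)
          {
              irq_desc[1] = malloc(sizeof(int));
              cpu_up_down();      /* safe even if another thread calls free_irq_desc() */
              free_irq_desc(1);
              return 0;
          }
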
    • mtd: nand: Fix NAND_USE_BOUNCE_BUFFER flag conflict · 5f867db6
      Committed by Scott Wood
      Commit 66507c7b ("mtd: nand: Add support to use nand_base
      poi databuf as bounce buffer") added a flag NAND_USE_BOUNCE_BUFFER
      using the same bit value as the existing NAND_BUSWIDTH_AUTO.
      
      Cc: Kamal Dasu <kdasu.kdev@gmail.com>
      Fixes: 66507c7b ("mtd: nand: Add support to use nand_base poi databuf as bounce buffer")
      Signed-off-by: Scott Wood <scottwood@freescale.com>
      Signed-off-by: Brian Norris <computersforpeace@gmail.com>
      5f867db6
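
      A tiny sketch of why overlapping bit values are a problem (the flag
      values are illustrative, not the real nand.h ones): if two option flags
      share a bit, setting one makes tests for the other succeed as well; the
      fix is simply to give the new flag its own bit.

          #include <stdio.h>

          #define NAND_BUSWIDTH_AUTO         (1u << 5)  /* illustrative values */
          #define NAND_USE_BOUNCE_BUFFER_BAD (1u << 5)  /* broken: same bit as above */
          #define NAND_USE_BOUNCE_BUFFER_OK  (1u << 6)  /* fixed: distinct bit */

          int main(void)
          {
              unsigned int options = NAND_USE_BOUNCE_BUFFER_BAD;

              /* With the conflicting value, a driver asking only for the bounce
               * buffer suddenly appears to request buswidth auto-detection too. */
              printf("buswidth-auto seen (broken): %s\n",
                     (options & NAND_BUSWIDTH_AUTO) ? "yes" : "no");

              options = NAND_USE_BOUNCE_BUFFER_OK;
              printf("buswidth-auto seen (fixed):  %s\n",
                     (options & NAND_BUSWIDTH_AUTO) ? "yes" : "no");
              return 0;
          }
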
    • tick/broadcast: Unbreak CONFIG_GENERIC_CLOCKEVENTS=n build · 37b64a42
      Committed by Thomas Gleixner
      Making tick_broadcast_oneshot_control() independent from
      CONFIG_GENERIC_CLOCKEVENTS_BROADCAST broke the build for
      CONFIG_GENERIC_CLOCKEVENTS=n because the function is not defined
      there.
      
      Provide a proper stub inline.
      
      Fixes: f32dd117 'tick/broadcast: Make idle check independent from mode and config'
      Reported-by: kbuild test robot <fengguang.wu@intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      37b64a42
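
      The general shape of such a stub, sketched as a self-contained example
      (the enum values and return code are placeholders, not the exact kernel
      declaration): when the broadcast code is compiled out, callers still get
      a definition that compiles and reports "not busy".

          #include <stdio.h>

          enum tick_broadcast_state { TICK_BROADCAST_EXIT, TICK_BROADCAST_ENTER };

          #ifdef CONFIG_GENERIC_CLOCKEVENTS
          /* The real implementation would be declared here (not part of this sketch). */
          int tick_broadcast_oneshot_control(enum tick_broadcast_state state);
          #else
          /* Stub for CONFIG_GENERIC_CLOCKEVENTS=n: nothing to do, never busy. */
          static inline int tick_broadcast_oneshot_control(enum tick_broadcast_state state)
          {
              (void)state;
              return 0;
          }
          #endif

          int main(void)
          {
              printf("enter idle: %d\n",
                     tick_broadcast_oneshot_control(TICK_BROADCAST_ENTER));
              return 0;
          }
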
    • tick/broadcast: Make idle check independent from mode and config · f32dd117
      Committed by Thomas Gleixner
      Currently the broadcast busy check, which prevents the idle code from
      going into deep idle, works only in one shot mode.
      
      If NOHZ and HIGHRES are off (config or command line) there is no
      sanity check at all, so under certain conditions cpus are allowed to
      go into deep idle, where the local timer stops, and are not woken up
      again because there is no broadcast timer installed or a hrtimer based
      broadcast device is not evaluated.
      
      Move tick_broadcast_oneshot_control() into the common code and provide
      proper subfunctions for the various config combinations.
      
      The common check in tick_broadcast_oneshot_control() is for the C3STOP
      misfeature flag of the local clock event device. If it's not set, idle
      can proceed. If set, further checks are necessary.
      
      Provide checks for the trivial cases:
      
       - If broadcast is disabled in the config, then return busy
      
       - If oneshot mode (NOHZ/HIGHRES) is disabled in the config, return
         busy if the broadcast device is hrtimer based.
      
       - If oneshot mode is enabled in the config, call the original
         tick_broadcast_oneshot_control() function.  That function needs
         extra checks which will be implemented in separate patches.
      
      [ Split out from a larger combo patch ]
      Reported-and-tested-by: Sudeep Holla <sudeep.holla@arm.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Suzuki Poulose <Suzuki.Poulose@arm.com>
      Cc: Lorenzo Pieralisi <Lorenzo.Pieralisi@arm.com>
      Cc: Catalin Marinas <Catalin.Marinas@arm.com>
      Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Preeti U Murthy <preeti@linux.vnet.ibm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Link: http://lkml.kernel.org/r/alpine.DEB.2.11.1507070929360.3916@nanos
      f32dd117
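
      A condensed sketch of the decision tree spelled out above, with boolean
      parameters standing in for the real config options and device state:
      only a local timer with the C3STOP misfeature needs further checks, and
      with oneshot mode compiled out an hrtimer-based broadcast device means
      "busy".

          #include <stdbool.h>
          #include <stdio.h>

          /* Returns true if deep idle must be refused (busy), false if it may proceed. */
          static bool broadcast_oneshot_busy(bool c3stop, bool have_broadcast,
                                             bool oneshot_configured, bool bc_is_hrtimer)
          {
              if (!c3stop)
                  return false;          /* local timer keeps ticking: idle is fine */
              if (!have_broadcast)
                  return true;           /* broadcast disabled in the config: busy */
              if (!oneshot_configured)
                  return bc_is_hrtimer;  /* NOHZ/HIGHRES off: hrtimer broadcast can't help */
              /* Oneshot mode available: the full tick_broadcast_oneshot_control()
               * checks would run here. */
              return false;
          }

          int main(void)
          {
              printf("no C3STOP:            %d\n",
                     broadcast_oneshot_busy(false, true, true, false));
              printf("no broadcast support: %d\n",
                     broadcast_oneshot_busy(true, false, false, false));
              printf("hrtimer bc, no NOHZ:  %d\n",
                     broadcast_oneshot_busy(true, true, false, true));
              return 0;
          }
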
  17. 07 Jul 2015 (2 commits)
    • ACPI / scan: Add support for ACPI _CLS device matching · 26095a01
      Committed by Suravee Suthikulpanit
      Device drivers typically use ACPI _HIDs/_CIDs listed in struct device_driver
      acpi_match_table to match devices. However, for generic drivers, we do not
      want to list _HID for all supported devices. Also, certain classes of devices
      do not have _CID (e.g. SATA, USB). Instead, we can leverage ACPI _CLS,
      which specifies PCI-defined class code (i.e. base-class, subclass and
      programming interface). This patch adds support for matching ACPI devices using
      the _CLS method.
      
      To support loadable modules, the current design uses _HID or _CID to match
      the device's modalias. With the new way of matching via _CLS, this requires
      a modification of the current ACPI modalias key to include _CLS. This patch
      appends the PCI-defined class code to the existing ACPI modalias as follows:
      
          acpi:<HID>:<CID1>:<CID2>:..:<CIDn>:<bbsspp>:
      E.g:
          # cat /sys/devices/platform/AMDI0600:00/modalias
          acpi:AMDI0600:010601:
      
      where bb is the base-class code, ss is the sub-class code, and pp is the
      programming interface code.
      
      Since there would not be _HID/_CID in the ACPI matching table of the driver,
      this patch adds a field to acpi_device_id to specify the matching _CLS.
      
          static const struct acpi_device_id ahci_acpi_match[] = {
              { ACPI_DEVICE_CLASS(PCI_CLASS_STORAGE_SATA_AHCI, 0xffffff) },
              {},
          };
      
      In this case, the corresponding entry in the modules.alias file would be:
      
          alias acpi*:010601:* ahci_platform
      Acked-by: Mika Westerberg <mika.westerberg@linux.intel.com>
      Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org>
      Signed-off-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      26095a01
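
      A small sketch of how the class-code portion of such a modalias could be
      formatted (illustrative only; the real string is built by the ACPI core):
      base class, subclass and programming interface each contribute two hex
      digits, giving the "010601" seen in the AHCI example.

          #include <stdio.h>

          int main(void)
          {
              unsigned int base = 0x01, sub = 0x06, prog_if = 0x01;  /* SATA AHCI */
              char modalias[64];

              snprintf(modalias, sizeof(modalias), "acpi:%s:%02x%02x%02x:",
                       "AMDI0600", base, sub, prog_if);
              printf("%s\n", modalias);   /* acpi:AMDI0600:010601: */
              return 0;
          }
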
    • ACPI / PNP: Reserve ACPI resources at the fs_initcall_sync stage · 0294112e
      Committed by Rafael J. Wysocki
      This effectively reverts the following three commits:
      
       7bc10388 ACPI / resources: free memory on error in add_region_before()
       0f1b414d ACPI / PNP: Avoid conflicting resource reservations
       b9a5e5e1 ACPI / init: Fix the ordering of acpi_reserve_resources()
      
      (commit b9a5e5e1 introduced regressions, some of which, but not
      all, were addressed by commit 0f1b414d, and commit 7bc10388
      was a fixup on top of the latter) and causes ACPI fixed hardware
      resources to be reserved at the fs_initcall_sync stage of system
      initialization.
      
      The story is as follows.  First, a boot regression was reported due
      to an apparent resource reservation ordering change after a commit
      that shouldn't lead to such changes.  Investigation led to the
      conclusion that the problem happened because acpi_reserve_resources()
      was executed at the device_initcall() stage of system initialization
      which wasn't strictly ordered with respect to driver initialization
      (and with respect to the initialization of the pcieport driver in
      particular), so a random change causing the device initcalls to be
      run in a different order might break things.
      
      The response to that was to attempt to run acpi_reserve_resources()
      as soon as we knew that ACPI would be in use (commit b9a5e5e1).
      However, that turned out to be too early, because it caused resource
      reservations made by the PNP system driver to fail on at least one
      system and that failure was addressed by commit 0f1b414d.
      
      That fix still turned out to be insufficient, though, because
      calling acpi_reserve_resources() before the fs_initcall stage of
      system initialization caused a boot regression to happen on the
      eCAFE EC-800-H20G/S netbook.  That meant that we could only call
      acpi_reserve_resources() at the fs_initcall initialization stage
      or later, but then we might just as well call it after the PNP
      initialization, in which case commit 0f1b414d wouldn't be
      necessary any more.
      
      For this reason, the changes made by commit 0f1b414d are reverted
      (along with a memory leak fixup on top of that commit), the changes
      made by commit b9a5e5e1 that went too far are reverted too and
      acpi_reserve_resources() is changed into fs_initcall_sync, which
      will cause it to be executed after the PNP subsystem initialization
      (which is an fs_initcall) and before device initcalls (including
      the pcieport driver initialization) which should avoid the initial
      issue.
      
      Link: https://bugzilla.kernel.org/show_bug.cgi?id=100581
      Link: http://marc.info/?t=143092384600002&r=1&w=2
      Link: https://bugzilla.kernel.org/show_bug.cgi?id=99831
      Link: http://marc.info/?t=143389402600001&r=1&w=2
      Fixes: b9a5e5e1 "ACPI / init: Fix the ordering of acpi_reserve_resources()"
      Reported-by: Roland Dreier <roland@purestorage.com>
      Cc: All applicable <stable@vger.kernel.org>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      0294112e
  18. 06 Jul 2015 (1 commit)
    • module: relocate module_init from init.h to module.h · 0fd972a7
      Committed by Paul Gortmaker
      Modular users will always be users of init functionality, but
      users of init functionality are not necessarily always modules.
      
      Hence any functionality like module_init and module_exit would
      be more at home in the module.h file.  And module.h should
      explicitly include init.h to make the dependency clear.
      
      We've already done all the legwork needed to ensure that this
      move does not cause any build regressions due to implicit
      header file include assumptions about where module_init lives.
      
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Acked-by: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      0fd972a7
  19. 05 Jul 2015 (2 commits)