- 23 Feb 2015, 3 commits
-
-
By Seth Forshee
d1c7e29e (HID: i2c-hid: prevent buffer overflow in early IRQ) changed hid_get_input() to read ihid->bufsize bytes, which can be more than wMaxInputLength. This is the case with the Dell XPS 13 9343, and it is causing events to be missed. In some cases the missed events are releases, which can cause the cursor to jump or freeze, among other problems. Limit the number of bytes read to min(wMaxInputLength, ihid->bufsize) to prevent such problems. Fixes: d1c7e29e "HID: i2c-hid: prevent buffer overflow in early IRQ" Signed-off-by: Seth Forshee <seth.forshee@canonical.com> Reviewed-by: Benjamin Tissoires <benjamin.tissoires@redhat.com> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
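A minimal sketch of the clamping described above (simplified, not the exact driver hunk; the field names ihid->bufsize, ihid->inbuf and hdesc.wMaxInputLength follow the commit text, the report parsing is elided):

    static void i2c_hid_get_input(struct i2c_hid *ihid)
    {
            /* never ask the device for more than wMaxInputLength ... */
            int size = le16_to_cpu(ihid->hdesc.wMaxInputLength);

            /* ... and never read more than our own buffer can hold */
            if (size > ihid->bufsize)
                    size = ihid->bufsize;

            if (i2c_master_recv(ihid->client, ihid->inbuf, size) != size)
                    return;
            /* ... report parsing elided ... */
    }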
-
By Frank Praznik
The per-controller spinlock needs to be properly initialized during device probe. [jkosina@suse.cz: massage changelog] [jkosina@suse.cz: drop hunk that has already been applied by previous patch] Signed-off-by: Frank Praznik <frank.praznik@oh.rr.com> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
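An illustrative sketch of the pattern (the struct and field names are stand-ins, not the exact patch): a per-device lock must be initialized in probe before any IRQ or report path can take it.

    static int sony_probe(struct hid_device *hdev, const struct hid_device_id *id)
    {
            struct sony_sc *sc;

            sc = devm_kzalloc(&hdev->dev, sizeof(*sc), GFP_KERNEL);
            if (!sc)
                    return -ENOMEM;

            spin_lock_init(&sc->lock);      /* initialize before the lock can ever be taken */
            hid_set_drvdata(hdev, sc);
            /* ... rest of probe ... */
            return 0;
    }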
-
By Jiri Kosina
The sony_dev_list_lock spinlock (which was introduced in d2d782fc ("HID: sony: Prevent duplicate controller connections")) is not being initialized properly. Fix that. Reported-by: Pavel Machek <pavel@ucw.cz> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
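For a file-scope lock like this one, the usual idiom is static initialization so the lock is valid before any probe or callback runs (a generic sketch, not the exact hunk from the patch):

    /* statically initialized: no runtime spin_lock_init() call needed */
    static DEFINE_SPINLOCK(sony_dev_list_lock);
    static LIST_HEAD(sony_device_list);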
-
- 19 Feb 2015, 1 commit
-
-
By Antonio Ospite
ida_destroy() must be called _after_ all the devices have been unregistered; otherwise, when calling "rmmod hid_sony" with devices still plugged in, the following warning would show up because of calls to ida_simple_remove() on a destroyed ID allocator: ------------[ cut here ]------------ WARNING: CPU: 0 PID: 5509 at lib/idr.c:1052 ida_simple_remove+0x26/0x50() ida_remove called for id=0 which is not allocated. Modules linked in: ... CPU: 0 PID: 5509 Comm: rmmod Not tainted 3.19.0-rc6-ao2 #35 Hardware name: System manufacturer System Product Name/M2N-MX SE, BIOS 0501 03/20/2008 0000000000000000 ffffffff8176320d ffffffff815b3a88 ffff880036f7fdd8 ffffffff8106ce01 0000000000000000 ffffffffa07658e0 0000000000000246 ffff88005077d8b8 ffff88005077d8d0 ffffffff8106ce7a ffffffff81763260 Call Trace: [<ffffffff815b3a88>] ? dump_stack+0x40/0x50 [<ffffffff8106ce01>] ? warn_slowpath_common+0x81/0xb0 [<ffffffff8106ce7a>] ? warn_slowpath_fmt+0x4a/0x50 [<ffffffff812ccb86>] ? ida_simple_remove+0x26/0x50 [<ffffffffa0762dc8>] ? sony_remove+0x58/0xe0 [hid_sony] [<ffffffffa00fff15>] ? hid_device_remove+0x65/0xd0 [hid] [<ffffffff8140425e>] ? __device_release_driver+0x7e/0x100 [<ffffffff81404c70>] ? driver_detach+0xa0/0xb0 [<ffffffff81403ee5>] ? bus_remove_driver+0x55/0xe0 [<ffffffffa01000ff>] ? hid_unregister_driver+0x2f/0xa0 [hid] [<ffffffff810e45bf>] ? SyS_delete_module+0x1bf/0x270 [<ffffffff81014089>] ? do_notify_resume+0x69/0xa0 [<ffffffff815b952d>] ? system_call_fastpath+0x16/0x1b ---[ end trace bc794b3d22c30ede ]--- Signed-off-by: NAntonio Ospite <ao2@ao2.it> Acked-by: NFrank Praznik <frank.praznik@oh.rr.com> Signed-off-by: NJiri Kosina <jkosina@suse.cz>
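A short sketch of the corrected module-exit ordering described above (the driver and allocator names follow the warning text; everything else is elided):

    static void __exit sony_exit(void)
    {
            /* unregister first: this runs sony_remove() for every bound device,
             * each of which calls ida_simple_remove() on a still-valid allocator */
            hid_unregister_driver(&sony_driver);

            /* only now is it safe to tear down the ID allocator */
            ida_destroy(&sony_device_id_allocator);
    }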
-
- 17 Feb 2015, 3 commits
-
-
By Srinivas Pandruvada
Commit 0ccf091d ("HID: sensor-hub: make dyn_callback_lock IRQ-safe") was supposed to change the locking in sensor_hub_get_callback() as well, but missed that function. Signed-off-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
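The change this implies, sketched with the lock name from the commit subject (pdata is a placeholder for the sensor hub's driver data; the lookup code is elided):

    unsigned long flags;

    /* use the IRQ-safe variants, since the lock is also taken from IRQ context */
    spin_lock_irqsave(&pdata->dyn_callback_lock, flags);
    /* ... walk the dynamic callback list ... */
    spin_unlock_irqrestore(&pdata->dyn_callback_lock, flags);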
-
By Darren Salt
Signed-off-by: Darren Salt <devspam@moreofthesa.me.uk> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
By Mika Westerberg
The Microsoft HID over I2C specification says two things regarding the interrupt: 1) The interrupt should be level sensitive 2) The device keeps the interrupt asserted as long as it has more reports available. We've seen that at least some Atmel and N-Trig panels keep the line low as long as they have something to send. The current version of the driver only detects the first edge but then fails to read the rest of the reports (as the line is still asserted). Make the driver follow the specification and configure the HID interrupt to be level sensitive. The Windows HID over I2C driver also seems to do the same. Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com> Acked-by: Benjamin Tissoires <benjamin.tissoires@redhat.com> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
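A hedged sketch of the resulting interrupt request (handler and variable names are placeholders; the key point is the level-triggered, active-low flag):

    ret = request_threaded_irq(client->irq, NULL, i2c_hid_irq,
                               IRQF_TRIGGER_LOW | IRQF_ONESHOT,
                               client->name, ihid);
    if (ret < 0)
            dev_warn(&client->dev, "Could not register for %s interrupt\n",
                     client->name);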
-
- 12 Feb 2015, 1 commit
-
-
By Ping Cheng
The 27QHD has the same x_min/y_min (WACOM_CINTIQ_OFFSET) as other Cintiqs. The ABS_MISC event is required for the PAD packet to work properly with xf86-input-wacom. Signed-off-by: Ping Cheng <pingc@wacom.com> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
- 11 Feb 2015, 2 commits
-
-
By Linus Torvalds
Commit 84683a7e ("sata_dwc_460ex: enable COMPILE_TEST for the driver") enabled this driver for non-ppc460-ex platforms, but it was then disabled for ARM and ARM64 by commit 2de5a9c0 ("sata_dwc_460ex: disable compilation on ARM and ARM64") because it's too noisy and broken. This disables it entirely, because it's too noisy on x86-64 too, and there's no point in disabling architectures one by one. At a minimum, the code isn't 64-bit clean, and even on 32-bit it is questionable whether it makes sense. Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Cc: Tejun Heo <tj@kernel.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Kirill A. Shutemov
One bit in ->vm_flags is unused now! Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: Dan Carpenter <dan.carpenter@oracle.com> Cc: Michal Hocko <mhocko@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 10 Feb 2015, 2 commits
-
-
By Rafael J. Wysocki
Merge branch 'pci/host-generic' of git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci into acpi-resources modified: drivers/of/of_pci.c This fixes a build failure after merging the 'acpi-resources' branch with the PCI tree caused by bad interactions between that branch and the only commit in 'pci/host-generic'. Also that commit contains a bug which can be fixed by removing one line of code, so do that too. Link: http://marc.info/?l=linux-kernel&m=142344882101429&w=2 Link: http://marc.info/?l=linux-next&m=142346304003932&w=2 Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
By George Spelvin
There was a bad typo in commit 43759d4f ("random: use an improved fast_mix() function") and I didn't notice because it "looked right", so I saw what I expected to see when I reviewed it. Only months later did I look and notice it's not the Threefish-inspired mix function that I had designed and optimized. Mea Culpa. Each input bit still has a chance to affect each output bit, and the fast pool is spilled *long* before it fills, so it's not a total disaster, but it's definitely not the intended great improvement. I'm still working on finding better rotation constants. These are good enough, but since it's unrolled twice, it's possible to get better mixing for free by using eight different constants rather than repeating the same four. Signed-off-by: NGeorge Spelvin <linux@horizon.com> Cc: Theodore Ts'o <tytso@mit.edu> Cc: stable@vger.kernel.org # v3.16+ Signed-off-by: NLinus Torvalds <torvalds@linux-foundation.org>
-
- 09 Feb 2015, 3 commits
-
-
By Hans de Goede
Backlight control through the native Intel interface does not work properly on the Samsung 510R, whereas using the acpi_video interface does work, so add a quirk for this. Link: https://bugzilla.redhat.com/show_bug.cgi?id=1186097 Cc: All applicable <stable@vger.kernel.org> Signed-off-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
By Andreas Ruprecht
In commit 5de21bb9 ("ACPI / PM: Drop CONFIG_PM_RUNTIME from the ACPI core"), all occurrences of CONFIG_PM_RUNTIME were replaced with CONFIG_PM. This created the following structure of #ifdef blocks in the code: [...] #ifdef CONFIG_PM #ifdef CONFIG_PM /* always on / undead */ #ifdef CONFIG_PM_SLEEP [...] #endif #endif [...] #endif This patch removes the inner "#ifdef CONFIG_PM" block as it will always be enabled when the outer block is enabled. This inconsistency was found using the undertaker-checkpatch tool. Signed-off-by: Andreas Ruprecht <rupran@einserver.de> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
By Andreas Ruprecht
In commit ceb6c9c8 ("USB / PM: Drop CONFIG_PM_RUNTIME from the USB core"), all occurrences of CONFIG_PM_RUNTIME in the USB core code were replaced by CONFIG_PM. This created the following structure of #ifdef blocks in drivers/usb/core/hub.c: [...] #ifdef CONFIG_PM #ifdef CONFIG_PM /* always on / undead */ #else /* dead */ #endif [...] This patch removes the unnecessary inner "#ifdef CONFIG_PM" as well as the corresponding dead #else block. This inconsistency was found using the undertaker-checkpatch tool. Signed-off-by: Andreas Ruprecht <rupran@einserver.de> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 07 Feb 2015, 1 commit
-
-
By Kristen Carlson Accardi
Allow users the option to disable the driver for any hardware which does not support HWP. Signed-off-by: Kristen Carlson Accardi <kristen@linux.intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 06 Feb 2015, 12 commits
-
-
By Ross Lagerwall
Commit 61a734d3 ("xen/manage: Always freeze/thaw processes when suspend/resuming") ensured that userspace processes were always frozen before suspending to reduce interaction issues when resuming devices. However, freeze_processes() does not freeze kernel threads. Freeze kernel threads as well to prevent deadlocks with the khubd thread when resuming devices. This is what native suspend and resume does. Example deadlock: [ 7279.648010] [<ffffffff81446bde>] ? xen_poll_irq_timeout+0x3e/0x50 [ 7279.648010] [<ffffffff81448d60>] xen_poll_irq+0x10/0x20 [ 7279.648010] [<ffffffff81011723>] xen_lock_spinning+0xb3/0x120 [ 7279.648010] [<ffffffff810115d1>] __raw_callee_save_xen_lock_spinning+0x11/0x20 [ 7279.648010] [<ffffffff815620b6>] ? usb_control_msg+0xe6/0x120 [ 7279.648010] [<ffffffff81747e50>] ? _raw_spin_lock_irq+0x50/0x60 [ 7279.648010] [<ffffffff8174522c>] wait_for_completion+0xac/0x160 [ 7279.648010] [<ffffffff8109c520>] ? try_to_wake_up+0x2c0/0x2c0 [ 7279.648010] [<ffffffff814b60f2>] dpm_wait+0x32/0x40 [ 7279.648010] [<ffffffff814b6eb0>] device_resume+0x90/0x210 [ 7279.648010] [<ffffffff814b7d71>] dpm_resume+0x121/0x250 [ 7279.648010] [<ffffffff8144c570>] ? xenbus_dev_request_and_reply+0xc0/0xc0 [ 7279.648010] [<ffffffff814b80d5>] dpm_resume_end+0x15/0x30 [ 7279.648010] [<ffffffff81449fba>] do_suspend+0x10a/0x200 [ 7279.648010] [<ffffffff8144a2f0>] ? xen_pre_suspend+0x20/0x20 [ 7279.648010] [<ffffffff8144a1d0>] shutdown_handler+0x120/0x150 [ 7279.648010] [<ffffffff8144c60f>] xenwatch_thread+0x9f/0x160 [ 7279.648010] [<ffffffff810ac510>] ? finish_wait+0x80/0x80 [ 7279.648010] [<ffffffff8108d189>] kthread+0xc9/0xe0 [ 7279.648010] [<ffffffff8108d0c0>] ? flush_kthread_worker+0x80/0x80 [ 7279.648010] [<ffffffff8175087c>] ret_from_fork+0x7c/0xb0 [ 7279.648010] [<ffffffff8108d0c0>] ? flush_kthread_worker+0x80/0x80 [ 7441.216287] INFO: task khubd:89 blocked for more than 120 seconds. [ 7441.219457] Tainted: G X 3.13.11-ckt12.kz #1 [ 7441.222176] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [ 7441.225827] khubd D ffff88003f433440 0 89 2 0x00000000 [ 7441.229258] ffff88003ceb9b98 0000000000000046 ffff88003ce83000 0000000000013440 [ 7441.232959] ffff88003ceb9fd8 0000000000013440 ffff88003cd13000 ffff88003ce83000 [ 7441.236658] 0000000000000286 ffff88003d3e0000 ffff88003ceb9bd0 00000001001aa01e [ 7441.240415] Call Trace: [ 7441.241614] [<ffffffff817442f9>] schedule+0x29/0x70 [ 7441.243930] [<ffffffff81743406>] schedule_timeout+0x166/0x2c0 [ 7441.246681] [<ffffffff81075b80>] ? call_timer_fn+0x110/0x110 [ 7441.249339] [<ffffffff8174357e>] schedule_timeout_uninterruptible+0x1e/0x20 [ 7441.252644] [<ffffffff81077710>] msleep+0x20/0x30 [ 7441.254812] [<ffffffff81555f00>] hub_port_reset+0xf0/0x580 [ 7441.257400] [<ffffffff81558465>] hub_port_init+0x75/0xb40 [ 7441.259981] [<ffffffff814bb3c9>] ? update_autosuspend+0x39/0x60 [ 7441.262817] [<ffffffff814bb4f0>] ? pm_runtime_set_autosuspend_delay+0x50/0xa0 [ 7441.266212] [<ffffffff8155a64a>] hub_thread+0x71a/0x1750 [ 7441.268728] [<ffffffff810ac510>] ? finish_wait+0x80/0x80 [ 7441.271272] [<ffffffff81559f30>] ? usb_port_resume+0x670/0x670 [ 7441.274067] [<ffffffff8108d189>] kthread+0xc9/0xe0 [ 7441.276305] [<ffffffff8108d0c0>] ? flush_kthread_worker+0x80/0x80 [ 7441.279131] [<ffffffff8175087c>] ret_from_fork+0x7c/0xb0 [ 7441.281659] [<ffffffff8108d0c0>] ? flush_kthread_worker+0x80/0x80 Signed-off-by: NRoss Lagerwall <ross.lagerwall@citrix.com> Cc: stable@vger.kernel.org Signed-off-by: NDavid Vrabel <david.vrabel@citrix.com>
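A minimal sketch of the ordering the commit describes, using the generic PM freezer APIs rather than the exact Xen hunk (the surrounding suspend machinery is elided):

    err = freeze_processes();               /* freeze user space first */
    if (err)
            return err;

    err = freeze_kernel_threads();          /* ... then kernel threads such as khubd */
    if (err) {
            thaw_processes();
            return err;
    }

    /* ... suspend devices, issue the suspend hypercall, resume devices ... */

    thaw_processes();                       /* thaws kernel threads and user space */
    return 0;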
-
By Lv Zheng
This patch enhances debugging by adding GPE reference count messages. No functional changes. Signed-off-by: Lv Zheng <lv.zheng@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
By Lv Zheng
This patch implementes the QR_EC flushing support. Grace periods are implemented from the detection of an SCI_EVT to the submission/completion of the QR_EC transaction. During this period, all EC command transactions are allowed to be submitted. Note that query periods and event periods are intentionally distiguished to allow further improvements. 1. Query period: from the detection of an SCI_EVT to the sumission of the QR_EC command. This period is used for storming prevention, as currently QR_EC is deferred to a work queue rather than directly issued from the IRQ context even there is no other transactions pending, so malicous SCI_EVT GPE can act like "level triggered" to trigger a GPE storm. We need to be prepared for this. And in the future, we may change it to be a part of the advance_transaction() where we will try QR_EC submission in appropriate positions to avoid such GPE storming. 2. Event period: from the detection of an SCI_EVT to the completion of the QR_EC command. We may extend it to the completion of _Qxx evaluation. This is actually a grace period for event flushing, but we only flush queries due to the reason stated in known issue 1. That's also why we use EC_FLAGS_EVENT_xxx. During this period, QR_EC transactions need to pass the flushable submission check. In this patch, the following flags are implemented: 1. EC_FLAGS_EVENT_ENABLED: this is derived from the old EC_FLAGS_QUERY_PENDING flag which can block SCI_EVT handlings. With this flag, the logics implemented by the original flag are extended: 1. Old logic: unless both of the flags are set, the event poller will not be scheduled, and 2. New logic: as soon as both of the flags are set, the evet poller will be scheduled. 2. EC_FLAGS_EVENT_DETECTED: this is also derived from the old EC_FLAGS_QUERY_PENDING flag which can block SCI_EVT detection. It thus can be used to indicate the storming prevention period for query submission. acpi_ec_submit_request()/acpi_ec_complete_request() are invoked to implement this period so that acpi_set_gpe() can be invoked under the "reference count > 0" condition. 3. EC_FLAGS_EVENT_PENDING: this is newly added to indicate the grace period for event flushing (query flushing for now). acpi_ec_submit_request()/acpi_ec_complete_request() are invoked to implement this period so that the flushing process can wait until the event handling (query transaction for now) to be completed. Link: https://bugzilla.kernel.org/show_bug.cgi?id=82611 Link: https://bugzilla.kernel.org/show_bug.cgi?id=77431Signed-off-by: NLv Zheng <lv.zheng@intel.com> Tested-by: NOrtwin Glück <odi@odi.ch> Signed-off-by: NRafael J. Wysocki <rafael.j.wysocki@intel.com>
-
By Lv Zheng
This patch refines EC command storm prevention support. Current command storming code is wrong, when the storming condition is detected, it only flags the condition without doing anything for the current command but performing storming prevention for the follow-up commands. So: 1. The first command which suffers from the storming still suffers from storming. 2. The follow-up commands which may not suffer from the storming are unconditionally forced into the storming prevention mode. Ideally, we should only enable storm prevention immediately after detection for the current command so that the next command can try the power/performance efficient interrupt mode again. This patch improves the command storm prevention by disabling GPE right after the detection and re-enabling it right before completing the command transaction using the GPE storming prevention APIs. This thus deploys the following GPE handling model: 1. acpi_enable_gpe()/acpi_disable_gpe() for reference count changes: This set of APIs are used for EC usage reference counting. 2. acpi_set_gpe(ACPI_GPE_ENABLE)/acpi_set_gpe(ACPI_GPE_DISABLE): This set of APIs are used for preventing GPE storm. They must be invoked when the reference count > 0. Note that as the storming prevention should always happen when there is an outstanding request, or GPE enabling value will be messed up by the races. This patch also adds BUG_ON() to enforces this rule to prevent future bugs. The msleep(1) used after completing a transaction is useless now as this sounds like a guard time only useful for platforms that need the EC_FLAGS_MSI quirks while we have fixed GPE race issues using the previous raw handler mode enabling. It is kept to avoid regressions. A seperate patch which deletes EC_FLAGS_MSI quirks should take care of deleting it. Signed-off-by: NLv Zheng <lv.zheng@intel.com> Signed-off-by: NRafael J. Wysocki <rafael.j.wysocki@intel.com>
-
By Lv Zheng
This patch implements the EC command flushing support. During the grace period indicated by EC_FLAGS_STARTED and EC_FLAGS_STOPPED, all submitted EC command transactions can be completed and new submissions are prevented before suspending so that the EC hardware can be ensured to be in the idle state when the system is resumed. There is a good indicator for flush support: All acpi_ec_submit_request() is invoked after checking driver state with acpi_ec_started() except the first one. This means all code paths can be flushed as fast as possible by discarding the requests occurred after the flush operation. The reference increased for such kind of code path is wrapped by acpi_ec_submit_flushable_request(). Signed-off-by: NLv Zheng <lv.zheng@intel.com> Tested-by: NOrtwin Glück <odi@odi.ch> Signed-off-by: NRafael J. Wysocki <rafael.j.wysocki@intel.com>
-
By Lv Zheng
By using the 2 flags, we can indicate an inter-mediate state where the current transactions should be completed while the new transactions should be dropped. The comparison of the old flag and the new flags: Old New about to set BLOCKED STOPPED set / STARTED set BLOCKED set STOPPED clear / STARTED clear BLOCKED clear STOPPED clear / STARTED set A new period can be indicated by the 2 flags. The new period is between the point where we are about to set BLOCKED and the point when the BLOCKED is set. The new flags facilitate us with acpi_ec_started() check to allow the EC transaction to be submitted during the new period. This period thus can be used as a grace period for the EC transaction flushing. The only functional change after applying this patch is: 1. The GPE enabling/disabling is protected by the EC specific lock. We can do this because of recent ACPICA GPE API enhancement. This is reasonable as the GPE disabling/enabling state should only be determined by the EC driver's state machine which is protected by the EC spinlock. Signed-off-by: NLv Zheng <lv.zheng@intel.com> Tested-by: NOrtwin Glück <odi@odi.ch> Signed-off-by: NRafael J. Wysocki <rafael.j.wysocki@intel.com>
-
By Ken Xue
This new feature interprets AMD-specific ACPI devices as platform devices, such as the I2C, UART and GPIO controllers found on AMD CZ and later chipsets. It is based on the Intel LPSS example. For now, it supports AMD I2C, UART and GPIO. Signed-off-by: Ken Xue <Ken.Xue@amd.com> Acked-by: Mika Westerberg <mika.westerberg@linux.intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
By Yann Droneaud
While commit 7e36ef82 ("IB/core: Temporarily disable ex_query_device uverb") is correct as it makes the extended QUERY_DEVICE uverb (which came as part of commit 5a77abf9 ("IB/core: Add support for extended query device caps") and commit 860f10a7 ("IB/core: Add flags for on demand paging support")) not available to userspace, it doesn't address the initial issue regarding ib_copy_to_udata() [1][2]. Additionally, further discussions around this new uverb seems to conclude it would require a different data structure than the one currently described in <rdma/ib_user_verbs.h> [3]. Both of these issues require a revert of the changes, so this patch partially reverts commit 8cdd312c ("IB/mlx5: Implement the ODP capability query verb") and commit 860f10a7 ("IB/core: Add flags for on demand paging support") and fully reverts commit 5a77abf9 ("IB/core: Add support for extended query device caps"). [1] "Re: [PATCH v3 06/17] IB/core: Add support for extended query device caps" http://mid.gmane.org/1418733236.2779.26.camel@opteya.com [2] "Re: [PATCH] IB/core: Temporarily disable ex_query_device uverb" http://mid.gmane.org/1423067503.3030.83.camel@opteya.com [3] "RE: [PATCH v1 1/5] IB/uverbs: ex_query_device: answer must not depend on request's comp_mask" http://mid.gmane.org/2807E5FD2F6FDA4886F6618EAC48510E0CC12C30@CRSMSX101.amr.corp.intel.com Cc: Eli Cohen <eli@mellanox.com> Cc: Haggai Eran <haggaie@mellanox.com> Cc: Ira Weiny <ira.weiny@intel.com> Cc: Jason Gunthorpe <jgunthorpe@obsidianresearch.com> Cc: Sagi Grimberg <sagig@mellanox.com> Cc: Shachar Raindel <raindel@mellanox.com> Signed-off-by: NYann Droneaud <ydroneaud@opteya.com> Signed-off-by: NRoland Dreier <roland@purestorage.com>
-
By Hanjun Guo
In acpi_table_parse(), the pointer of the table passed to handler() is checked before handler() is called, so remove all the duplicate NULL checks in the handler functions. CC: Tony Luck <tony.luck@intel.com> CC: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
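An illustration of the resulting handler shape (handler and init function names are hypothetical): because acpi_table_parse() only invokes the handler for a table it actually found, the handler can use the pointer directly.

    static int __init parse_example_table(struct acpi_table_header *table)
    {
            /* no "if (!table) return -EINVAL;" needed: table is non-NULL here */
            pr_info("%.4s revision %u\n", table->signature, table->revision);
            return 0;
    }

    static int __init example_init(void)
    {
            return acpi_table_parse(ACPI_SIG_MADT, parse_example_table);
    }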
-
By Nicholas Mc Guire
The return type of wait_for_completion_timeout is unsigned long, not int; this patch uses the return value of wait_for_completion_timeout in the condition directly rather than adding an additional appropriately typed variable. Signed-off-by: Nicholas Mc Guire <hofrat@osadl.org> Signed-off-by: Mark Brown <broonie@kernel.org>
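The pattern this commit and the next one converge on, sketched with placeholder names (the completion and the 100 ms timeout are illustrative assumptions): wait_for_completion_timeout() returns the remaining jiffies as an unsigned long, or 0 on timeout, so it can be tested directly.

    /* before: int ret = wait_for_completion_timeout(...); if (!ret) ... */
    if (!wait_for_completion_timeout(&dev->xfer_done,
                                     msecs_to_jiffies(100)))
            return -ETIMEDOUT;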
-
By Nicholas Mc Guire
The return type of wait_for_completion_timeout is unsigned long, not int; this patch uses the return value of wait_for_completion_timeout in the condition directly rather than assigning it to a variable of the incorrect type. Signed-off-by: Nicholas Mc Guire <hofrat@osadl.org> Signed-off-by: Mark Brown <broonie@kernel.org>
-
By Jaewon Kim
This patch adds a new regulator driver to support the max77843 MFD (Multi Function Device) chip's regulators. The Max77843 has two voltage regulators for USB safeout. Signed-off-by: Jaewon Kim <jaewon02.kim@samsung.com> Signed-off-by: Beomho Seo <beomho.seo@samsung.com> Signed-off-by: Mark Brown <broonie@kernel.org>
-
- 05 Feb 2015, 12 commits
-
-
By Jennifer Herbert
If Xenstore sends back an XS_ERROR for TRANSACTION_END, the driver BUGs because it cannot find the matching transaction in the list. For TRANSACTION_START, it leaks memory. Check the message as returned from xenbus_dev_request_and_reply(), and clean up for TRANSACTION_START or discard the error for TRANSACTION_END. Signed-off-by: Jennifer Herbert <Jennifer.Herbert@citrix.com> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
-
By Lv Zheng
The bug fixes around GPE races have been applied to the EC driver by the previous commits. This patch increases the revision to 3 to indicate the behavior differences between the old and the new drivers. The copyright/authorship notices are also updated. Signed-off-by: Lv Zheng <lv.zheng@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
By Lv Zheng
Timeout in the ec_poll() doesn't refer to the last register access time. It thus can win the competition against the acpi_ec_gpe_handler() if a transaction takes longer than 1ms but individual register accesses are less than 1ms. In some cases, it can make the following silicon bug easier to be triggered: GPE EN is not wired to the GPE trigger line, so when GPE STS is already set when 1 is written to GPE EN, no GPE can be triggered. This patch adds register access timestamp reference support for ec_poll() to reduce the number of ec_poll() invocations. Reported-by: NVenkat Raghavulu <venkat.raghavulu@intel.com> Signed-off-by: NLv Zheng <lv.zheng@intel.com> Signed-off-by: NRafael J. Wysocki <rafael.j.wysocki@intel.com>
-
By Lv Zheng
This patch switches EC driver into ACPI_GPE_DISPATCH_RAW_HANDLER mode where the GPE lock is not held for acpi_ec_gpe_handler() and the ACPICA internal GPE enabling/disabling/clearing operations are bypassed so that further improvements are possible with the GPE APIs. There are 2 strong reasons for deploying raw GPE handler mode in the EC driver: 1. Some hardware logics can control their interrupts via their own registers, so their interrupts can be disabled/enabled/acknowledged without using the super IRQ controller provided functions. While there is no mean (EC commands) for the EC driver to achieve this. 2. During suspending, the EC driver is still working for a while to complete the platform firmware provided functionailities using ec_poll() after all GPEs are disabled (see acpi_ec_block_transactions()), which means the EC driver will drive the EC GPE out of the GPE core's control. Without deploying the raw GPE handler mode, we can see many races between the EC driver and the GPE core due to the above restrictions: 1. There is a race condition due to ACPICA internal GPE disabling/clearing/enabling logics in acpi_ev_gpe_dispatch(): Orignally EC GPE is disabled (EN=0), cleared (STS=0) before invoking a GPE handler and re-enabled (EN=1) after invoking a GPE handler in acpi_ev_gpe_dispatch(). When re-enabling appears, GPE may be flagged (STS=1). ================================================================= (event pending A) ================================================================= acpi_ev_gpe_dispatch() ec_poll() EN=0 STS=0 acpi_ec_gpe_handler() ***************************************************************** (event handling A) Lock(EC) advance_transaction() EC_SC read ================================================================= (event pending B) ================================================================= EC_SC handled Unlock(EC) ***************************************************************** ***************************************************************** (event handling B) Lock(EC) advance_transaction() EC_SC read ================================================================= (event pending C) ================================================================= EC_SC handled Unlock(EC) ***************************************************************** EN=1 This race condition is the root cause of different issues on different silicon variations. A. Silicon variation A: On some platforms, GPE will be triggered due to "writing 1 to EN when STS=1". This is because both EN and STS lines are wired to the GPE trigger line. 1. Issue 1: We can see no-op acpi_ec_gpe_handler() invoked on such platforms. This is because: a. event pending B: An event can arrive after ACPICA's GPE clearing performed in acpi_ev_gpe_dispatch(), this event may fail to be detected by EC_SC read that is performed before its arrival; b. event handling B: The event can be handled in ec_poll() because EC lock is released after acpi_ec_gpe_handler() invocation; c. There is no code in ec_poll() to clear STS but the GPE can still be triggered by the EN=1 write performed in acpi_ev_finish_gpe(), this leads to a no-op EC GPE handler invocation; d. As no-op GPE handler invocations are counted by the EC driver to trigger the command storming conditions, the wrong no-op GPE handler invocations thus can easily trigger wrong command storming conditions. 
Note 1: If we removed GPE disabling/enabling code from acpi_ev_gpe_dispatch(), we could still see no-op GPE handlers triggered by the event arriving after the GPE clearing and before the GPE handling on both silicon variation A and B. This can only occur if the CPU is very slow (timing slice between STS=0 write and EC_SC read should be short enough before hardware sets another GPE indication). Thus this is very rare and is not what we need to fix. B. Silicon variation B: On other platforms, GPE may not be triggered due to "writing 1 to EN when STS=1". This is because only STS line is wired to the GPE trigger line. 2. Issue 2: We can see GPE loss on such platforms. This is because: a. event pending B vs. event handling A: An event can arrive after ACPICA's GPE handling performed in acpi_ev_gpe_dispatch(), or event pending C vs. event handling B: An event can arrive after Linux's GPE handling performed in ec_poll(), these events may fail to be detected by EC_SC read that is performed before their arrival; b. The GPE cannot be triggered by EN=1 write performed in acpi_ev_finish_gpe(); c. If no polling mechanism is implemented in the driver for the pending event (for example, SCI_EVT), this event is lost due to no GPE being triggered. Note 2: On most platforms, there might be another rule that GPE may not be triggered due to "writing 1 to STS when STS=1 and EN=1". Then on silicon variation B, an even worse case is if the issue 2 event loss happens, further events may never trigger GPE again on such platforms due to being blocked by the current STS=1. Unless someone clears STS, all events have to be polled. 2. There is a race condition due to lacking in GPE status checking in EC driver: Originally, GPE status is checked in ACPICA core but not checked in the GPE handler. Thus since the status checking and handling is not locked, it can be interrupted by another handling path. ================================================================= (event pending A) ================================================================= acpi_ev_gpe_detect() ec_poll() if (EN==1 && STS==1) ***************************************************************** (event handling A) Lock(EC) advance_transaction() EC_SC read EC_SC handled Unlock(EC) ***************************************************************** acpi_ev_gpe_dispatch() EN=0 STS=0 acpi_ec_gpe_handler() ***************************************************************** (event handling B) Lock(EC) advance_transaction() EC_SC read Unlock(EC) ***************************************************************** 3. Issue 3: We can see no-op acpi_ec_gpe_handler() invoked on both silicon variation A and B. This is because: a. event pending A: An event can arrive to trigger an EC GPE and ACPICA checks it and is about to invoke the EC GPE handler; b. event handling A: The event can be handled in ec_poll() because EC lock is not held after the GPE status checking; c. event handling B: Then when the EC GPE handler is invoked, it becomes a no-op GPE handler invocation. d. As no-op GPE handler invocations are counted by the EC driver to trigger the command storming conditions, the wrong no-op GPE handler invocations thus can easily trigger wrong command storming conditions. Note 3: This no-op GPE handler invocation is rare because the time between the IRQ arrival and the acpi_ec_gpe_handler() invocation is less than the timeout value waited in ec_poll(). So most of the no-op GPE handler invocations are caused by the reason described in issue 1. 3. 
There is a race condition due to ACPICA internal GPE clearing logic in acpi_enable_gpe(): During runtime, acpi_enable_gpe() can be invoked by the EC storming prevention code. When it is invoked, GPE may be flagged (STS=1). ================================================================= (event pending A) ================================================================= acpi_ev_gpe_dispatch() acpi_ec_transaction() EN=0 STS=0 acpi_ec_gpe_handler() ***************************************************************** (event handling A) Lock(EC) advance_transaction() EC_SC read EC_SC handled Unlock(EC) ***************************************************************** EN=1 ? Lock(EC) Unlock(EC) ================================================================= (event pending B) ================================================================= acpi_enable_gpe() STS=0 EN=1 4. Issue 4: We can see GPE loss on both silicon variation A and B platforms. This is because: a. event pending B: An event can arrive right before ACPICA's GPE clearing performed in acpi_enable_gpe(); b. If the GPE is cleared when GPE is disabled, then EN=1 write in acpi_enable_gpe() cannot trigger this GPE; c. If no polling mechanism is implemented in the driver for this event (for example, SCI_EVT), this event is lost due to no GPE being triggered. Note 4: Currently we don't have this issue, but after we switch the EC driver into ACPI_GPE_DISPATCH_RAW_HANDLER mode, we need to take care of handling this because the EN=1 write in acpi_ev_gpe_dispatch() will be abandoned. There might be more race issues for the current GPE handler usages. This is because the EC IRQ's enabling/disabling/checking/clearing/handling operations should be locked by a single lock that is under the EC driver's control to achieve the serialization. Which means we need to invoke GPE APIs with EC driver's lock held and all ACPICA internal GPE operations related to the GPE handler should be abandoned. Invoking GPE APIs inside of the EC driver lock and bypassing ACPICA internal GPE operations requires the ACPI_GPE_DISPATCH_RAW_HANDLER mode where the same lock used by the APIs are released prior than invoking the handlers. Otherwise, we can see dead locks due to circular locking dependencies (see Reference below). This patch then switches the EC driver into the ACPI_GPE_DISPATCH_RAW_HANDLER mode so that it can perform correct GPE operations using the GPE APIs: 1. Bypasses EN modifications performed in acpi_ev_gpe_dispatch() by using acpi_install_gpe_raw_handler() and invoking all GPE APIs with EC spin lock held. This can fix issue 1 as it makes a non frequent GPE enabling/disabling environment. 2. Bypasses STS clearing performed in acpi_enable_gpe() by replacing acpi_enable_gpe()/acpi_disable_gpe() with acpi_set_gpe(). This can fix issue 4. And this can also help to fix issue 1 as it makes a no sudden GPE clearing environment when GPE is frequently enabled/disabled. 3. Ensures STS acknowledged before handling by invoking acpi_clear_gpe() in advance_transaction(). This can finally fix issue 1 even in a frequent GPE enabling/disabling environment. And this can also finally fix issue 3 when issue 2 is fixed. Note 3: GPE clearing is edge triggered W1C, which means we can clear it right before handling it. Since all EC GPE indications are handled in advance_transaction() by previous commits, we can now move GPE clearing into it to implement the correct GPE clearing. 
Note 4: We can use acpi_set_gpe() which is not shared GPE safer instead of acpi_enable_gpe()/acpi_disable_gpe() because EC GPE is not shared by other hardware, which is mentioned in the ACPI specification 5.0, 12.6 Interrupt Model: "OSPM driver treats this as an edge event (the EC SCI cannot be shared)". So we can stop using shared GPE safer APIs acpi_enable_gpe()/acpi_disable_gpe() in the EC driver. Otherwise cleanups need to be made in acpi_ev_enable_gpe() to bypass the GPE clearing logic before keeping acpi_enable_gpe(). This patch also invokes advance_transaction() when GPE is re-enabled in the task context which: 1. Ensures EN=1 can trigger GPE by checking and handling EC status register right after EN=1 writes. This can fix issue 2. After applying this patch, without frequent GPE enablings considered: ================================================================= (event pending A) ================================================================= acpi_ec_gpe_handler() ec_poll() ***************************************************************** (event handling A) Lock(EC) advance_transaction() if STS==1 STS=0 EC_SC read ================================================================= (event pending B) ================================================================= EC_SC handled Unlock(EC) ***************************************************************** ***************************************************************** (event handling B) Lock(EC) advance_transaction() if STS==1 STS=0 EC_SC read ================================================================= (event pending C) ================================================================= EC_SC handled Unlock(EC) ***************************************************************** The event pending for issue 1 (event pending B) can arrive as a next GPE due to the previous IRQ context STS=0 write. And if it is handled by ec_poll() (event handling B), as it is also acknowledged by ec_poll(), the event pending for issue 2 (event pending C) can properly arrive as a next GPE after the task context STS=0 write. So no GPE will be lost and never triggered due to GPE clearing performed in the wrong position. And since all GPE handling is performed after a locked GPE status checking, we can hardly see no-op GPE handler invocations due to issue 1 and 3. We may still see no-op GPE handler invocations due to "Note 1", but as it is inevitable, it needn't be fixed. 
After applying this patch, with frequent GPE enablings considered: ================================================================= (event pending A) ================================================================= acpi_ec_gpe_handler() acpi_ec_transaction() ***************************************************************** (event handling A) Lock(EC) advance_transaction() if STS==1 STS=0 EC_SC read ================================================================= (event pending B) ================================================================= EC_SC handled Unlock(EC) ***************************************************************** ***************************************************************** (event handling B) Lock(EC) EN=1 if STS==1 advance_transaction() if STS==1 STS=0 EC_SC read ================================================================= (event pending C) ================================================================= EC_SC handled Unlock(EC) ***************************************************************** The event pending for issue 2 can be manually handled by advance_transaction(). And after the STS=0 write performed in the manual triggered advance_transaction(), GPE can always arrive. So no GPE will be lost due to frequent GPE disabling/enabling performed in the driver like issue 4. Note 5: It's ideally when EN=1 write occurred, an IRQ thread should be woken up to handle the GPE when the GPE was raised. But this requires the IRQ thread to contain the poller code for all EC GPE indications, while currently some of the indications are handled in the user tasks. It then is very hard for the code to determine whether a user task should be invoked or the poller work item should be scheduled. So we have to invoke advance_transaction() directly now and it leaves us such a restriction for the GPE re-enabling: it must be performed in the task context to avoid starving the GPEs. As a conclusion: we can see the EC GPE is always handled in serial after deploying the raw GPE handler mode: Lock(EC) if (STS==1) STS=0 EC_SC read EC_SC handled Unlock(EC) The EC driver specific lock is responsible to make the EC GPE handling processes serialized so that EC can handle its GPE from both IRQ and task contexts and the next IRQ can be ensured to arrive after this process. Note 6: We have many EC_FLAGS_MSI qurik users in the current driver. They all seem to be suffering from unexpected GPE triggering source lost. And they are false root caused to a timing issue. Since EC communication protocol has already flow control defined, timing shouldn't be the root cause, while this fix might be fixing the root cause of the old bugs. Link: https://lkml.org/lkml/2014/11/4/974 Link: https://lkml.org/lkml/2014/11/18/316 Link: https://www.spinics.net/lists/linux-acpi/msg54340.htmlSigned-off-by: NLv Zheng <lv.zheng@intel.com> Signed-off-by: NRafael J. Wysocki <rafael.j.wysocki@intel.com>
-
By Lv Zheng
ACPICA commit da9a83e1a845f2d7332bdbc0632466b2595e5424 For acpi_set_gpe()/acpi_enable_gpe(), our target is to purify them to be APIs that can be used for various GPE handling models, so we need them to be pure GPE enabling APIs. GPE enabling/disabling has 2 use cases: 1. Driver may permanently enable/disable GPEs according to the usage counts. 1. When upper layers (the users of the driver) submit requests to the driver, it means they care about the underlying hardware. GPE need to be enabled for the first request submission and disabled for the last request completion. 2. When the GPE is shared between 2+ silicon logics. GPE need to be enabled for either silicon logic's driver and disabled when all of the drivers are not started. For these cases, acpi_enable_gpe()/acpi_disable_gpe() should be used. When the usage count is increased from 0 to 1, the GPE is enabled and it is disabled when the usage count is decrased from 1 to 0. 2. Driver may temporarily disables the GPE to enter an GPE polling mode and wants to re-enable it later. 1. Prevent GPE storming: when a driver cannot fully solve the condition that triggered the GPE in the GPE context, in order not to trigger GPE storm, driver has to disable GPE to switch into the polling mode and re-enables it in the non interrupt context after the storming condition is cleared. 2. Meet throughput requirement: some IO drivers need to poll hardware again and again until nothing indicated instead of just handling once for one interruption, this need to be done in the polling mode or the IO flood may prevent the GPE handler from returning. 3. Meet realtime requirement: in order not to block CPU to handle higher realtime prioritized GPEs, lower priority GPEs can be handled in the polling mode. For these cases, acpi_set_gpe() should be used to switch to/from the polling mode. This patch adds unconditional GPE enabling support into acpi_set_gpe() so that this API can be used by the drivers to switch back from the GPE polling mode unconditionally. Originally this function includes GPE clearing logic in it. First, the GPE clearing is typically used in the GPE handling code to: 1. Acknowledge the GPE when we know there is an edge triggered GPE raised and is about to handle it, otherwise the unexpected clearing may lead to a GPE loss; 2. Issue actions after we have handled a level triggered GPE, otherwise the unexpected clearing may trigger unwanted OSPM actions to the hardware (for example, clocking in out-dated write FIFO data). Thus the GPE clearing is not suitable to be used in the GPE enabling APIs. Second, the combination of acknowledging and enabling may also not be expected by the hardware drivers. For GPE clearing, we have a seperate API acpi_clear_gpe(). There are cases drivers do want the 2 operations to be split. So splitting these 2 operations could facilitates drivers the maximum possibilities to achieve success. For a combined one, we already have acpi_finish_gpe() ready to be invoked. Given the fact that drivers should complete all outstanding requests before putting themselves into the sleep states, as this API is executed for outstanding requests, it should also have nothing to do with the "RUN"/"WAKE" distinguishing. That's why the acpi_set_gpe(ACPI_GPE_ENABLE) should not be implemented by acpi_hw_low_set_gpe(ACPI_GPE_CONDITIONAL_ENABLE). This patch thus converts acpi_set_gpe(ACPI_GPE_ENABLE) into acpi_hw_low_set_gpe(ACPI_GPE_ENABLE) to achieve a seperate GPE enabling API. 
Drivers then are encouraged to use this API when they need to switch to/from the GPE polling mode. Note that the acpi_set_gpe()/acpi_finish_gpe() should be first introduced to Linux using a divergence reduction patch before sending a linuxized version of this patch. Lv Zheng. Link: https://github.com/acpica/acpica/commit/da9a83e1Signed-off-by: NLv Zheng <lv.zheng@intel.com> Signed-off-by: NDavid E. Box <david.e.box@linux.intel.com> Signed-off-by: NBob Moore <robert.moore@intel.com> Signed-off-by: NRafael J. Wysocki <rafael.j.wysocki@intel.com>
-
By Lv Zheng
This can help to reduce source code differences between Linux and ACPICA upstream. Further driver cleanups also require these APIs to eliminate GPE storms. 1. acpi_set_gpe(): An API a driver should invoke when it wants to disable/enable the IRQ without honoring the reference count implemented in acpi_disable_gpe() and acpi_enable_gpe(). Note that this API should only be invoked inside an acpi_enable_gpe()/acpi_disable_gpe() pair. 2. acpi_finish_gpe(): Drivers that returned from the GPE handler with ACPI_REENABLE_GPE unset should use this API instead of invoking acpi_set_gpe()/acpi_enable_gpe() to re-enable the GPE. The GPE APIs can be invoked inside a GPE handler or in task context with a driver-provided lock held. This driver-provided lock is safe to be held in the GPE handler by the driver. Signed-off-by: Lv Zheng <lv.zheng@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
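A hedged sketch of how a driver might use the two APIs described above (the handler name, GPE number variable and enclosing functions are placeholders, not code from the patch):

    /* raw-handler style: do not ask ACPICA to re-enable the GPE for us */
    static u32 my_gpe_handler(acpi_handle gpe_device, u32 gpe_number, void *context)
    {
            /* ... handle the event ... */
            return ACPI_INTERRUPT_HANDLED;  /* ACPI_REENABLE_GPE deliberately not set */
    }

    /* later, from task context with the driver's own lock held: */
    static void my_reenable_gpe(u32 gpe_number)
    {
            acpi_finish_gpe(NULL, gpe_number);      /* re-enable after handling */
            /* or, to enter/leave a polling mode without touching refcounts:    */
            /* acpi_set_gpe(NULL, gpe_number, ACPI_GPE_DISABLE / ACPI_GPE_ENABLE); */
    }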
-
By Lv Zheng
ACPICA commit 199cad16530a45aea2bec98e528866e20c5927e1 Since whether the GPE should be disabled/enabled/cleared should only be determined by the GPE driver's state machine: 1. GPE should be disabled if the driver wants to switch to the GPE polling mode when a GPE storm condition is indicated and should be enabled if the driver wants to switch back to the GPE interrupt mode when all of the storm conditions are cleared. The conditions should be protected by the driver's specific lock. 2. GPE should be enabled if the driver has accepted more than one request and should be disabled if the driver has completed all of the requests. The request count should be protected by the driver's specific lock. 3. GPE should be cleared either when the driver is about to handle an edge triggered GPE or when the driver has completed to handle a level triggered GPE. The handling code should be protected by the driver's specific lock. Thus the GPE enabling/disabling/clearing operations are likely to be performed with the driver's specific lock held while we currently cannot do this. This is because: 1. We have the acpi_gbl_gpe_lock held before invoking the GPE driver's handler. Driver's specific lock is likely to be held inside of the handler, thus we can see some dead lock issues due to the reversed locking order or recursive locking. In order to solve such dead lock issues, we need to unlock the acpi_gbl_gpe_lock before invoking the handler. BZ 1100. 2. Since GPE disabling/enabling/clearing should be determined by the GPE driver's state machine, we shouldn't perform such operations inside of ACPICA for a GPE handler to mess up the driver's state machine. BZ 1101. Originally this patch includes a logic to flush GPE handlers, it is dropped due to the following reasons: 1. This is a different issue; 2. Linux OSL has fixed this by flushing SCI in acpi_os_wait_events_complete(). We will pick up this topic when the Linux OSL fix turns out to be not sufficient. Note that currently the internal operations and the acpi_gbl_gpe_lock are also used by ACPI_GPE_DISPATCH_METHOD and ACPI_GPE_DISPATCH_NOTIFY. In order not to introduce regressions, we add one ACPI_GPE_DISPATCH_RAW_HANDLER type to be distiguished from ACPI_GPE_DISPATCH_HANDLER. For which the acpi_gbl_gpe_lock is unlocked before invoking the GPE handler and the internal enabling/disabling operations are bypassed to allow drivers to perform them at a proper position using the GPE APIs and ACPI_GPE_DISPATCH_RAW_HANDLER users should invoke acpi_set_gpe() instead of acpi_enable_gpe()/acpi_disable_gpe() to bypass the internal GPE clearing code in acpi_enable_gpe(). Lv Zheng. Known issues: 1. Edge-triggered GPE lost for frequent enablings On some buggy silicon platforms, GPE enable line may not be directly wired to the GPE trigger line. In that case, when GPE enabling is frequently performed for edge-triggered GPEs, GPE status may stay set without being triggered. This patch may maginify this problem as it allows GPE enabling to be parallel performed during the process the GPEs are handled. This is an existing issue, because: 1. For task context: Current ACPI_GPE_DISPATCH_METHOD practices have proven that this isn't a real issue - we can re-enable edge-triggered GPE in a work queue where the GPE status bit might already be set. 2. For IRQ context: This can even happen when the GPE enabling occurs before returning from the GPE handler and after unlocking the GPE lock. Thus currently no code is included to protect this. 
Link: https://github.com/acpica/acpica/commit/199cad16Signed-off-by: NLv Zheng <lv.zheng@intel.com> Signed-off-by: NBob Moore <robert.moore@intel.com> Signed-off-by: NRafael J. Wysocki <rafael.j.wysocki@intel.com>
-
By David E. Box
ACPICA commit 8990e73ab2aa15d6a0068b860ab54feff25bee36 Link: https://github.com/acpica/acpica/commit/8990e73a Signed-off-by: David E. Box <david.e.box@linux.intel.com> Signed-off-by: Bob Moore <robert.moore@intel.com> Signed-off-by: Lv Zheng <lv.zheng@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
By David E. Box
ACPICA commit 490ec7f7839bf7ee5e8710a34d1d1a78d54a49b6 In function acpi_hw_low_set_gpe(), cast enable_mask to u8 before storing. The mask was read from a 32-bit register but is an 8-bit value. Fixes a Visual Studio compiler warning. Link: https://github.com/acpica/acpica/commit/490ec7f7 Signed-off-by: David E. Box <david.e.box@linux.intel.com> Signed-off-by: Bob Moore <robert.moore@intel.com> Signed-off-by: Lv Zheng <lv.zheng@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
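The gist of the change, as a generic sketch (variable names are placeholders, not the ACPICA internals):

    u32 value;              /* read from a 32-bit hardware register */
    u8 stored_mask;         /* but the mask being kept is only 8 bits wide */

    stored_mask = (u8)value;        /* explicit narrowing cast silences the MSVC warning */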
-
By Lv Zheng
ACPICA commit 7926d5ca9452c87f866938dcea8f12e1efb58f89 There is an issue in acpi_install_gpe_handler() and acpi_remove_gpe_handler(). The code to obtain the GPE dispatcher type from the Handler->original_flags is wrong: if (((Handler->original_flags & ACPI_GPE_DISPATCH_METHOD) || (Handler->original_flags & ACPI_GPE_DISPATCH_NOTIFY)) && ACPI_GPE_DISPATCH_NOTIFY is 0x03 and ACPI_GPE_DISPATCH_METHOD is 0x02, thus this statement is TRUE for the following dispatcher types: 0x01 (ACPI_GPE_DISPATCH_HANDLER): not expected 0x02 (ACPI_GPE_DISPATCH_METHOD): expected 0x03 (ACPI_GPE_DISPATCH_NOTIFY): expected There is no functional issue due to this because Handler->original_flags is only set in acpi_install_gpe_handler(), and an earlier checker has excluded the ACPI_GPE_DISPATCH_HANDLER: if ((gpe_event_info->Flags & ACPI_GPE_DISPATCH_MASK) == ACPI_GPE_DISPATCH_HANDLER) { Status = AE_ALREADY_EXISTS; goto free_and_exit; } ... Handler->original_flags = (u8) (gpe_event_info->Flags & (ACPI_GPE_XRUPT_TYPE_MASK | ACPI_GPE_DISPATCH_MASK)); We need to clean this up before modifying the GPE dispatcher type values. In order to prevent such issue from happening in the future, this patch introduces ACPI_GPE_DISPATCH_TYPE() macro to be used to obtain the GPE dispatcher types. Lv Zheng. Link: https://github.com/acpica/acpica/commit/7926d5caSigned-off-by: NLv Zheng <lv.zheng@intel.com> Signed-off-by: NDavid E. Box <david.e.box@linux.intel.com> Signed-off-by: NBob Moore <robert.moore@intel.com> Signed-off-by: NRafael J. Wysocki <rafael.j.wysocki@intel.com>
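A condensed illustration of the problem and the macro-based fix described above; the flag values and names follow the commit text, and the surrounding code is abbreviated:

    #define ACPI_GPE_DISPATCH_TYPE(flags)  ((u8)((flags) & ACPI_GPE_DISPATCH_MASK))

    /* old, incorrect test: with _METHOD == 0x02 and _NOTIFY == 0x03 these are bit
     * tests on a 2-bit *field*, so a _HANDLER entry (0x01) also matches _NOTIFY */
    if ((handler->original_flags & ACPI_GPE_DISPATCH_METHOD) ||
        (handler->original_flags & ACPI_GPE_DISPATCH_NOTIFY)) {
            /* ... */
    }

    /* corrected test: extract the whole field and compare for equality */
    if (ACPI_GPE_DISPATCH_TYPE(handler->original_flags) == ACPI_GPE_DISPATCH_METHOD ||
        ACPI_GPE_DISPATCH_TYPE(handler->original_flags) == ACPI_GPE_DISPATCH_NOTIFY) {
            /* ... restore the previous method/notify setup ... */
    }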
-
By Lv Zheng
ACPICA: Events: Cleanup to move acpi_gbl_global_event_handler invocation out of acpi_ev_gpe_dispatch() ACPICA commit 04f25acdd4f655ae33f83de789bb5f4b7790171c Following acpi_ev_fixed_event_detect(), which invokes acpi_gbl_global_event_handler itself instead of leaving that to acpi_ev_fixed_event_dispatch(), this patch moves the acpi_gbl_global_event_handler invocation from acpi_ev_gpe_dispatch() to acpi_ev_gpe_detect(). This makes further cleanups around acpi_ev_gpe_dispatch() simpler. Lv Zheng. Link: https://github.com/acpica/acpica/commit/04f25acd Signed-off-by: Lv Zheng <lv.zheng@intel.com> Signed-off-by: David E. Box <david.e.box@linux.intel.com> Signed-off-by: Bob Moore <robert.moore@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
By Lv Zheng
ACPICA commit b2b18bb38045404e253f10787b8a4ae6e94cdee6 This patch prevents acpi_remove_gpe_handler() from leaking the stale gpe_event_info->Dispatch.Handler to the caller, to avoid possible NULL pointer references. Lv Zheng. Link: https://github.com/acpica/acpica/commit/b2b18bb3 Signed-off-by: Lv Zheng <lv.zheng@intel.com> Signed-off-by: David E. Box <david.e.box@linux.intel.com> Signed-off-by: Bob Moore <robert.moore@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-