- 05 Nov 2020, 10 commits
-
-
By Akhil P Oommen
Implement the shutdown callback for the adreno gpu platform device to shut it down safely before a system reboot. This helps to avoid further transactions from the gpu after the smmu is moved to bypass mode. Signed-off-by: Akhil P Oommen <akhilpo@codeaurora.org> Signed-off-by: Rob Clark <robdclark@chromium.org>
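A rough sketch of the pattern (the callback and helper names here are illustrative, not necessarily the exact symbols used in the patch): the platform driver gains a .shutdown hook that quiesces the GPU before the SMMU is switched to bypass on reboot.

```c
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>

/* Illustrative .shutdown callback: stop further GPU activity before
 * the system goes down; pm_runtime_force_suspend() stands in for
 * whatever suspend path the driver actually uses. */
static void adreno_shutdown(struct platform_device *pdev)
{
	pm_runtime_force_suspend(&pdev->dev);
}

static struct platform_driver adreno_driver = {
	/* .probe / .remove / .driver as before ... */
	.shutdown = adreno_shutdown,	/* new: invoked on reboot/poweroff */
};
```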
-
By Kuogee Hsieh
Set the link rate by using the OPP set-rate API so that the CX level will be set accordingly based on the link rate. Changes in v2: -- remove dev from dp_ctrl_put() parameters -- add more information to the commit message Changes in v3: -- return when dev_pm_opp_set_clkname() fails Signed-off-by: Kuogee Hsieh <khsieh@codeaurora.org> Signed-off-by: Rob Clark <robdclark@chromium.org>
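A minimal sketch of that flow, assuming a device pointer and a clock named "ctrl_link" (both illustrative); dev_pm_opp_set_clkname(), dev_pm_opp_set_rate() and dev_pm_opp_put_clkname() are the standard OPP helpers:

```c
#include <linux/pm_opp.h>

/* Bind the link clock to the device's OPP table, then let the OPP
 * framework vote the matching CX level for a requested link rate. */
static int dp_ctrl_set_link_rate(struct device *dev, unsigned long rate_hz)
{
	struct opp_table *opp_table;
	int ret;

	opp_table = dev_pm_opp_set_clkname(dev, "ctrl_link");
	if (IS_ERR(opp_table))
		return PTR_ERR(opp_table);	/* v3: bail out on failure */

	ret = dev_pm_opp_set_rate(dev, rate_hz);
	if (ret)
		dev_pm_opp_put_clkname(opp_table);

	return ret;
}
```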
-
By Tian Tao
clk_prepare_enable() and clk_disable_unprepare() already check for a NULL clock parameter, so it is not necessary to add additional checks. Signed-off-by: Tian Tao <tiantao6@hisilicon.com> Reviewed-by: Jordan Crouse <jcrouse@codeaurora.org> Signed-off-by: Rob Clark <robdclark@chromium.org>
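In practice the cleanup removes guards like the ones sketched below (the wrapper names are made up for illustration); the clk framework treats a NULL clk as a no-op:

```c
#include <linux/clk.h>

static int enable_bus_clk(struct clk *clk)
{
	/* Was: if (clk) clk_prepare_enable(clk);
	 * The NULL check is redundant -- clk_prepare_enable(NULL)
	 * simply returns 0. */
	return clk_prepare_enable(clk);
}

static void disable_bus_clk(struct clk *clk)
{
	clk_disable_unprepare(clk);	/* safe for clk == NULL as well */
}
```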
-
By Dmitry Baryshkov
Implement the phy_disable() callback to disable DSI PHY lanes and blocks when the phy is not in use. Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> Fixes: ff73ff19 ("drm/msm/dsi: Populate the 10nm PHY funcs") Signed-off-by: Rob Clark <robdclark@chromium.org>
-
By Dmitry Baryshkov
Implement the phy_disable() callback to disable DSI PHY lanes and blocks when the phy is not in use. Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> Fixes: 1ef7c99d ("drm/msm/dsi: add support for 7nm DSI PHY/PLL") Signed-off-by: Rob Clark <robdclark@chromium.org>
-
By Dmitry Baryshkov
A PHY disable/enable cycle resets the PLL registers to their default values. Thus, in addition to restoring several registers, we also need to restore the VCO rate settings. Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> Fixes: c6659785 ("drm/msm/dsi/pll: call vco set rate explicitly") Signed-off-by: Rob Clark <robdclark@chromium.org>
-
By Dmitry Baryshkov
A PHY disable/enable cycle resets the PLL registers to their default values. Thus, in addition to restoring several registers, we also need to restore the VCO rate settings. Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> Fixes: 1ef7c99d ("drm/msm/dsi: add support for 7nm DSI PHY/PLL") Signed-off-by: Rob Clark <robdclark@chromium.org>
-
By Stephen Boyd
Printk messages need newlines. Add one here. Cc: Abhinav Kumar <abhinavk@codeaurora.org> Cc: Jeykumar Sankaran <jsanka@codeaurora.org> Fixes: 25fdd593 ("drm/msm: Add SDM845 DPU support") Signed-off-by: Stephen Boyd <swboyd@chromium.org> Reviewed-by: Abhinav Kumar <abhinavk@codeaurora.org> Signed-off-by: Rob Clark <robdclark@chromium.org>
-
By Tanmay Shah
The bandwidth code was being used as the test link rate. Fix this by converting the bandwidth code to the test link rate. Do not reset voltage and pre-emphasis level during the IRQ HPD attention interrupt. Also fix pre-emphasis parsing during the test link status process. Signed-off-by: Tanmay Shah <tanmay@codeaurora.org> Fixes: 8ede2ecc ("drm/msm/dp: Add DP compliance tests on Snapdragon Chipsets") Reviewed-by: Stephen Boyd <swboyd@chromium.org> Signed-off-by: Rob Clark <robdclark@chromium.org>
-
By Tian Tao
Fix warnings reported by make W=1: drivers/gpu/drm/msm/disp/dpu1/dpu_hw_interrupts.c:195: warning: cannot understand function prototype: 'const struct dpu_intr_reg dpu_intr_set[] = ' drivers/gpu/drm/msm/disp/dpu1/dpu_hw_interrupts.c:252: warning: cannot understand function prototype: 'const struct dpu_irq_type dpu_irq_map[] = ' Signed-off-by: Tian Tao <tiantao6@hisilicon.com> Signed-off-by: Rob Clark <robdclark@chromium.org>
-
- 02 Nov 2020, 7 commits
-
-
By Robin Murphy
DRM_MSM fails to build with DRM_MSM_DP=n; add the missing stub. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Reviewed-by: Rob Clark <robdclark@gmail.com> Fixes: 8ede2ecc ("drm/msm/dp: Add DP compliance tests on Snapdragon Chipsets") Signed-off-by: Rob Clark <robdclark@chromium.org>
-
By Viresh Kumar
dev_pm_opp_of_remove_table() doesn't report any error when it fails to find the OPP table with -ENODEV (i.e. OPP table not present for the device), so we can call dev_pm_opp_of_remove_table() unconditionally here. While at it, also create a label to put clkname. Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rob Clark <robdclark@chromium.org>
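A hedged sketch of the resulting init/teardown shape (the function names, clock name and label are illustrative): the removal helper is called unconditionally, and a dedicated label puts the clkname reference on the error path.

```c
#include <linux/pm_opp.h>

static int gpu_opp_init(struct device *dev, struct opp_table **table)
{
	int ret;

	*table = dev_pm_opp_set_clkname(dev, "core");
	if (IS_ERR(*table))
		return PTR_ERR(*table);

	ret = dev_pm_opp_of_add_table(dev);
	if (ret && ret != -ENODEV)
		goto put_clkname;	/* new label for the unwind */

	return 0;

put_clkname:
	dev_pm_opp_put_clkname(*table);
	return ret;
}

static void gpu_opp_cleanup(struct device *dev, struct opp_table *table)
{
	/* Unconditional: if no OPP table was found, -ENODEV is silent. */
	dev_pm_opp_of_remove_table(dev);
	dev_pm_opp_put_clkname(table);
}
```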
-
By Rob Clark
Use a SCHED_FIFO kthread_worker for async atomic commits. We have a hard deadline if we don't want to miss a frame. Signed-off-by: Rob Clark <robdclark@chromium.org>
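Roughly, such a worker is created and promoted like this (the worker name is illustrative); kthread_create_worker() and sched_set_fifo() are the generic kernel APIs:

```c
#include <linux/kthread.h>
#include <linux/sched.h>

/* Create a dedicated worker for async commits and run it at
 * real-time priority so the vblank deadline is not missed. */
static struct kthread_worker *create_commit_worker(void)
{
	struct kthread_worker *worker;

	worker = kthread_create_worker(0, "crtc_commit");
	if (IS_ERR(worker))
		return worker;

	sched_set_fifo(worker->task);
	return worker;
}
```

Work items are then queued with kthread_queue_work() as before.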
-
By Rob Clark
Add msm_kms_destroy() and add an error return from msm_kms_init(). Prep work for the next patch. Signed-off-by: Rob Clark <robdclark@chromium.org>
-
By Rob Clark
Signed-off-by: Rob Clark <robdclark@chromium.org>
-
By Rob Clark
lockdep dislikes seeing locks unwound in a non-nested fashion. Fixes: b3d91800 ("drm/msm: Fix race condition in msm driver with async layer updates") Signed-off-by: Rob Clark <robdclark@chromium.org> Reviewed-by: Abhinav Kumar <abhinavk@codeaurora.org>
-
By Krishna Manikandan
When there are back-to-back commits with async cursor updates, there is a case where the second commit can program the DPU hw blocks while the first hasn't finished flushing its configuration to HW. Synchronize the compositions such that the second commit waits until the first commit flushes the composition. This change also introduces a per-crtc commit lock, so that commits on different crtcs are not blocked by each other. Changes in v2: - Use an array of mutexes in kms to handle the commit lock per crtc. (Rob Clark) Changes in v3: - Add wrapper functions to handle lock and unlock of commit_lock for each crtc. (Rob Clark) Signed-off-by: Krishna Manikandan <mkrishn@codeaurora.org> Reviewed-by: Rob Clark <robdclark@gmail.com> Signed-off-by: Rob Clark <robdclark@chromium.org>
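A minimal sketch of the per-crtc lock wrappers mentioned in v3 (the struct layout and names are assumptions, not the driver's exact ones):

```c
#include <linux/mutex.h>

#define MAX_CRTC_COUNT 8	/* illustrative bound */

struct kms_commit_locks {
	struct mutex commit_lock[MAX_CRTC_COUNT];	/* one per crtc */
};

/* A second commit on the same crtc blocks here until the first has
 * flushed; commits on other crtcs take a different mutex and can
 * proceed in parallel. */
static void crtc_commit_lock(struct kms_commit_locks *kms, unsigned int idx)
{
	mutex_lock(&kms->commit_lock[idx]);
}

static void crtc_commit_unlock(struct kms_commit_locks *kms, unsigned int idx)
{
	mutex_unlock(&kms->commit_lock[idx]);
}
```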
-
- 26 Oct 2020, 2 commits
-
-
By Joe Perches
Use a more generic form for __section that requires quotes, to avoid complications with clang and gcc differences. Remove the quote operator # from the compiler_attributes.h __section macro. Convert all unquoted __section(foo) uses to quoted __section("foo"). Also convert __attribute__((section("foo"))) uses to __section("foo"), even if the __attribute__ has multiple list entry forms. Conversion done using the script at: https://lore.kernel.org/lkml/75393e5ddc272dc7403de74d645e6c6e0f4e70eb.camel@perches.com/2-convert_section.pl Signed-off-by: Joe Perches <joe@perches.com> Reviewed-by: Nick Desaulniers <ndesaulniers@gooogle.com> Reviewed-by: Miguel Ojeda <ojeda@kernel.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
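The conversion is mechanical; an illustrative before/after (section names chosen only for the example):

```c
/* Before (the old macro stringified its argument):
 *     static int boot_flag __section(.init.data);
 * After (the section name is now an ordinary string literal): */
static int boot_flag __section(".init.data");

/* Open-coded attribute users were converted the same way:
 *     static int tbl[4] __attribute__((section(".data.tbl")));   */
static int tbl[4] __section(".data.tbl");
```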
-
By Eric Biggers
Commit 453431a5 ("mm, treewide: rename kzfree() to kfree_sensitive()") renamed kzfree() to kfree_sensitive(), but it left a compatibility definition of kzfree() to avoid being too disruptive. Since then a few more instances of kzfree() have slipped in. Just get rid of them and remove the compatibility definition once and for all. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
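The remaining call sites are a one-for-one substitution, for example (the buffer name is illustrative):

```c
#include <linux/slab.h>

static void drop_key(u8 *key_material)
{
	/* Was: kzfree(key_material);
	 * kfree_sensitive() zeroes the allocation before freeing it,
	 * which is exactly the behaviour kzfree() provided. */
	kfree_sensitive(key_material);
}
```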
-
- 25 Oct 2020, 2 commits
-
-
By Hans de Goede
The intention of commit 21653a41 ("i2c: core: Call i2c_acpi_install_space_handler() before i2c_acpi_register_devices()") was only to move the acpi_install_address_space_handler() call to the point before where the ACPI-declared i2c-children of the adapter were instantiated by i2c_acpi_register_devices(). But i2c_acpi_install_space_handler() had a call to acpi_walk_dep_device_list() hidden (that is, I missed it) at the end of it, so as an unwanted side-effect acpi_walk_dep_device_list() was now also being called before i2c_acpi_register_devices(). Move the acpi_walk_dep_device_list() call to the end of i2c_acpi_register_devices(), so that it is once again called *after* the i2c_clients hanging off the adapter have been created. This fixes the Microsoft Surface Go 2 hanging at boot. Fixes: 21653a41 ("i2c: core: Call i2c_acpi_install_space_handler() before i2c_acpi_register_devices()") Link: https://bugzilla.kernel.org/show_bug.cgi?id=209627 Reported-by: Rainer Finke <rainer@finke.cc> Reported-by: Kieran Bingham <kieran.bingham@ideasonboard.com> Suggested-by: Maximilian Luz <luzmaximilian@gmail.com> Tested-by: Kieran Bingham <kieran.bingham@ideasonboard.com> Signed-off-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Wolfram Sang <wsa@kernel.org>
-
By George Spelvin
Non-cryptographic PRNGs may have great statistical properties, but are usually trivially predictable to someone who knows the algorithm, given a small sample of their output. An LFSR like prandom_u32() is particularly simple, even if the sample is widely scattered bits. It turns out the network stack uses prandom_u32() for some things, like random port numbers, which it would prefer were *not* trivially predictable. Predictability led to a practical DNS spoofing attack. Oops. This patch replaces the LFSR with a homebrew cryptographic PRNG based on the SipHash round function, which is in turn seeded with 128 bits of strong random key. (The authors of SipHash have *not* been consulted about this abuse of their algorithm.) Speed is prioritized over security; attacks are rare, while performance is always wanted. Replacing all callers of prandom_u32() is the quick fix. Whether to reinstate a weaker PRNG for uses which can tolerate it is an open question. Commit f227e3ec ("random32: update the net random state on interrupt and activity") was an earlier attempt at a solution. This patch replaces it. Reported-by: Amit Klein <aksecurity@gmail.com> Cc: Willy Tarreau <w@1wt.eu> Cc: Eric Dumazet <edumazet@google.com> Cc: "Jason A. Donenfeld" <Jason@zx2c4.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Kees Cook <keescook@chromium.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: tytso@mit.edu Cc: Florian Westphal <fw@strlen.de> Cc: Marc Plumb <lkml.mplumb@gmail.com> Fixes: f227e3ec ("random32: update the net random state on interrupt and activity") Signed-off-by: George Spelvin <lkml@sdf.org> Link: https://lore.kernel.org/netdev/20200808152628.GA27941@SDF.ORG/ [ willy: partial reversal of f227e3ec; moved SIPROUND definitions to prandom.h for later use; merged George's prandom_seed() proposal; inlined siprand_u32(); replaced the net_rand_state[] array with 4 members to fix a build issue; cosmetic cleanups to make checkpatch happy; fixed RANDOM32_SELFTEST build ] Signed-off-by: Willy Tarreau <w@1wt.eu>
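For orientation, here is a compact userspace sketch of the approach: the standard SipHash round permutes four state words seeded from strong randomness, and an output is squeezed out every two rounds. This illustrates the idea only; it is not the kernel's exact prandom code.

```c
#include <stdint.h>

#define ROTL64(x, b) (((x) << (b)) | ((x) >> (64 - (b))))

/* One SipHash round over the four 64-bit state words. */
#define SIPROUND(v0, v1, v2, v3) do {                                   \
	v0 += v1; v1 = ROTL64(v1, 13); v1 ^= v0; v0 = ROTL64(v0, 32);  \
	v2 += v3; v3 = ROTL64(v3, 16); v3 ^= v2;                        \
	v0 += v3; v3 = ROTL64(v3, 21); v3 ^= v0;                        \
	v2 += v1; v1 = ROTL64(v1, 17); v1 ^= v2; v2 = ROTL64(v2, 32);  \
} while (0)

struct sip_state { uint64_t v0, v1, v2, v3; };	/* seed with >= 128 bits of entropy */

static uint32_t sip_rand_u32(struct sip_state *s)
{
	uint64_t v0 = s->v0, v1 = s->v1, v2 = s->v2, v3 = s->v3;

	SIPROUND(v0, v1, v2, v3);
	SIPROUND(v0, v1, v2, v3);

	s->v0 = v0; s->v1 = v1; s->v2 = v2; s->v3 = v3;
	return (uint32_t)(v1 + v3);	/* mix two state words for the output */
}
```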
-
- 24 Oct 2020, 2 commits
-
-
By Helge Deller
I tested this driver on my HP PA-RISC C3000 workstation and it does work with the built-in TEAC CD-532E-B CD-ROM drive. So drop the TODO item and adjust the file header. Signed-off-by: Helge Deller <deller@gmx.de>
-
By Mauro Carvalho Chehab
Some functions have different names between their prototypes and the kernel-doc markup. Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
- 23 Oct 2020, 14 commits
-
-
By James Smart
We've had several complaints about the 10s reconnect delay (the default) when there was an error while there is connectivity to a subsystem. The max_reconnects and reconnect_delay are set in common code prior to calling the transport to create the controller. This change checks if the default reconnect delay is being used and, if so, adjusts it to a shorter period (2s) for the nvme-fc transport. It does so by calculating the controller loss tmo window, changing the value of the reconnect delay, and then recalculating the maximum number of reconnect attempts allowed. Signed-off-by: James Smart <james.smart@broadcom.com> Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Christoph Hellwig <hch@lst.de>
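The arithmetic amounts to preserving the admin-configured controller-loss window while shrinking the per-attempt delay; a sketch with illustrative field names (only applied when the delay still equals the default):

```c
#define NVME_FC_DEFAULT_RECONNECT_TMO	2	/* seconds */

struct reconnect_opts {			/* illustrative stand-in */
	unsigned int reconnect_delay;	/* seconds between attempts */
	int max_reconnects;		/* attempts before giving up */
};

static void fc_shorten_reconnect_delay(struct reconnect_opts *opts)
{
	/* Total controller-loss window the admin asked for, in seconds. */
	unsigned int ctrl_loss_tmo = opts->max_reconnects * opts->reconnect_delay;

	opts->reconnect_delay = NVME_FC_DEFAULT_RECONNECT_TMO;
	opts->max_reconnects = ctrl_loss_tmo / opts->reconnect_delay;
}
```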
-
By James Smart
On reconnect, the code currently does not freeze the controller before possibly updating the number of hw queues for the controller. Add the freeze before updating the number of hw queues. Note: the queues are already started and remain started through the reconnect. Signed-off-by: James Smart <james.smart@broadcom.com> Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Christoph Hellwig <hch@lst.de>
-
By James Smart
The loop that backs out of hw io queue creation continues through index 0, which corresponds to the admin queue. Fix the loop so it only proceeds through indexes 1..n, which correspond to the I/O queues. Signed-off-by: James Smart <james.smart@broadcom.com> Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Christoph Hellwig <hch@lst.de>
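In outline the fix looks like this (the controller type and teardown helper are placeholders): the unwind loop stops at index 1 so index 0, the admin queue, is never freed by the I/O-queue backout path.

```c
struct my_ctrl;						/* placeholder type */
static void free_hw_queue(struct my_ctrl *ctrl, int idx);	/* placeholder teardown */

static void backout_io_queues(struct my_ctrl *ctrl, int created)
{
	int i;

	/* was: for (i = created; i >= 0; i--) -- which also hit index 0 */
	for (i = created; i >= 1; i--)
		free_hw_queue(ctrl, i);
}
```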
-
By James Smart
Currently, an I/O timeout unconditionally invokes nvme_fc_error_recovery(), which checks for LIVE or CONNECTING state. If LIVE, the routine resets the controller, which initiates a reconnect - which is valid. If CONNECTING, err_work is scheduled. Err_work then calls the terminate_io routine, which also checks for CONNECTING and noops any further action on outstanding I/O. The result is that nothing happened to the timed-out I/O. As such, if the command was dropped on the wire, it will never time out / complete, and the connect process will hang. Change the behavior of the I/O timeout routine to unconditionally abort the I/O. I/O completion handling will note that an I/O failed due to an abort and will terminate the connection / association as needed. If the abort was unable to happen, continue with a call to nvme_fc_error_recovery(). To ensure something different happens in nvme_fc_error_recovery(), rework it so that it will abort all I/Os on the association to force a failure. As I/O aborts may now occur outside of delete_association, counting for completion must be wary and only count those aborted during delete_association when TERMIO is set on the controller. Signed-off-by: James Smart <james.smart@broadcom.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
-
By Juergen Gross
Unmasking an event channel can require a hypercall when fifo event channels are in use, so try to avoid that by checking whether the event channel was really masked. Suggested-by: Jan Beulich <jbeulich@suse.com> Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Link: https://lore.kernel.org/r/20201022094907.28560-5-jgross@suse.com Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
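The shape of the optimization, sketched with placeholder helpers (the real code uses the FIFO event-word accessors in events_fifo.c):

```c
#include <xen/events.h>

static bool port_is_masked(evtchn_port_t port);	/* placeholder: MASKED-bit test */
static void do_unmask(evtchn_port_t port);	/* placeholder: may hypercall */

static void fifo_unmask_sketch(evtchn_port_t port)
{
	/* If the channel was never masked, the unmask (and any
	 * EVTCHNOP_unmask hypercall it might need) can be skipped. */
	if (!port_is_masked(port))
		return;

	do_unmask(port);
}
```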
-
By Juergen Gross
xen_debug_interrupt() is specific to 2-level event handling, so don't register it when fifo event handling is active. Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Link: https://lore.kernel.org/r/20201022094907.28560-4-jgross@suse.com Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
-
By Juergen Gross
The struct irq_info of Xen's event handling is used outside of events_base.c by only two evtchn_ops functions. Those two functions can easily be switched to avoid that usage, which allows making struct irq_info and its related access functions private to events_base.c. Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Link: https://lore.kernel.org/r/20201022094907.28560-3-jgross@suse.com Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
-
By Juergen Gross
With the switch to the lateeoi model for interdomain event channels, some functions are no longer in use. Remove them. Suggested-by: Jan Beulich <jbeulich@suse.com> Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Jan Beulich <jbeulich@suse.com> Link: https://lore.kernel.org/r/20201022094907.28560-2-jgross@suse.com Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
-
By Helge Deller
When starting an HP machine with the HIL driver but without an HIL keyboard or HIL mouse attached, it may happen that data written to the HIL loop gets stuck (e.g. because the transaction queue is full). Usually one will then have to reboot the machine because all you see is an endless output of: Transaction add failed: transaction already queued? In the higher layers hp_sdc_enqueue_transaction() is called to queue up a HIL packet. This function returns an error code, and this patch adds the necessary checks for this return code and disables the HIL driver if further packets can't be sent. Tested on a HP 730 and a HP 715/64 machine. Signed-off-by: Helge Deller <deller@gmx.de> Cc: <stable@vger.kernel.org>
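The gist of the fix as a hedged sketch (the logging text and the disable helper are placeholders for what the driver actually does):

```c
#include <linux/hp_sdc.h>

static void hil_disable_driver(void);	/* placeholder cleanup */

/* hp_sdc_enqueue_transaction() returns 0 on success or a negative
 * errno; check it instead of assuming the packet was queued. */
static int hil_send_packet(hp_sdc_transaction *t)
{
	int ret = hp_sdc_enqueue_transaction(t);

	if (ret) {
		pr_err("HIL: could not queue transaction, disabling driver\n");
		hil_disable_driver();
	}
	return ret;
}
```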
-
By Tom Rix
A break following a return statement is pointless, so drop all of the breaks following return statements from this file. Signed-off-by: Tom Rix <trix@redhat.com> [ rjw: Subject and changelog edits ] Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
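The pattern being removed is the classic unreachable break after a return in a switch; a self-contained illustration:

```c
/* Illustrative only: the break after return can never execute. */
static int get_value(int cmd)
{
	switch (cmd) {
	case 0:
		return 42;
		break;		/* unreachable -- removed by the cleanup */
	default:
		return -1;
	}
}
```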
-
By Tom Rix
A break following a return statement is pointless, so drop it. Signed-off-by: Tom Rix <trix@redhat.com> [ rjw: Subject and changelog edits ] Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
By Ulf Hansson
All avs drivers have now been moved to their corresponding SoC-specific directories. Additionally, they no longer depend on the POWER_AVS Kconfig option. Therefore, simply drop the drivers/power/avs directory altogether. Cc: Nishanth Menon <nm@ti.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Reviewed-by: Nishanth Menon <nm@ti.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
By Keith Busch
The block layer provides special status codes when requests go beyond the zone resource limits. Use these codes instead of the generic IOERR for requests that exceed the max active or max open limits the null_blk device was configured with, so that applications know how these special conditions should be handled. Signed-off-by: Keith Busch <kbusch@kernel.org> Reviewed-by: Niklas Cassel <niklas.cassel@wdc.com> Cc: Damien Le Moal <damien.lemoal@wdc.com> Cc: Niklas Cassel <niklas.cassel@wdc.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
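A simplified sketch of the idea (the zone-specific status names are the block layer's; the surrounding checks are condensed into two flags for illustration):

```c
#include <linux/blk_types.h>

static blk_status_t zone_resource_status(bool exceeds_max_active,
					 bool exceeds_max_open)
{
	/* Was: both cases collapsed into BLK_STS_IOERR. */
	if (exceeds_max_active)
		return BLK_STS_ZONE_ACTIVE_RESOURCE;
	if (exceeds_max_open)
		return BLK_STS_ZONE_OPEN_RESOURCE;
	return BLK_STS_OK;
}
```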
-
By Ulf Hansson
The avs drivers are all SoC-specific drivers that don't share any code. Instead they are located in a common directory, mostly to keep similar functionality together. From a maintenance point of view, it makes more sense to collect SoC-specific drivers like these into the SoC-specific directories. Therefore, move the qcom-cpr driver to the qcom directory. Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Acked-by: Bjorn Andersson <bjorn.andersson@linaro.org> Acked-by: Niklas Cassel <nks@flawful.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 22 Oct 2020, 3 commits
-
-
By Chaitanya Kulkarni
By default, we set the passthru request allocation flag such that it returns the error in the following code path, and we fail the I/O when BLK_MQ_REQ_NOWAIT is used for request allocation: nvme_alloc_request() blk_mq_alloc_request() blk_mq_queue_enter() if (flag & BLK_MQ_REQ_NOWAIT) return -EBUSY; <-- return if busy. On some controllers, using BLK_MQ_REQ_NOWAIT ends up in an I/O error where the controller is perfectly healthy and not in a degraded state. Block layer request allocation does allow us to wait instead of immediately returning the error when the BLK_MQ_REQ_NOWAIT flag is not used. This has been shown to fix the I/O error problem reported under heavy random write workloads. Remove the BLK_MQ_REQ_NOWAIT parameter for passthru request allocation, which resolves this issue. Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com> Reviewed-by: Logan Gunthorpe <logang@deltatee.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
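The change boils down to the flags handed to the block layer allocator; a hedged sketch (the opcode and wrapper are illustrative, not the driver's exact call site):

```c
#include <linux/blk-mq.h>

static struct request *alloc_passthru_req(struct request_queue *q)
{
	/* was: blk_mq_alloc_request(q, REQ_OP_DRV_OUT, BLK_MQ_REQ_NOWAIT);
	 * Without BLK_MQ_REQ_NOWAIT the caller sleeps until a tag is
	 * free instead of getting -EBUSY under transient load. */
	return blk_mq_alloc_request(q, REQ_OP_DRV_OUT, 0);
}
```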
-
By Logan Gunthorpe
Clean up some confusing elements of nvmet_passthru_map_sg() by returning early if the request is greater than the maximum bio size. This allows us to drop the sg_cnt variable. This should not result in any functional change but makes the code clearer and more understandable. The original code allocated a truncated bio and would then return EINVAL when bio_add_pc_page() filled that bio. The new code just returns EINVAL early if this would happen. Fixes: c1fef73f ("nvmet: add passthru code to process commands") Signed-off-by: Logan Gunthorpe <logang@deltatee.com> Suggested-by: Douglas Gilbert <dgilbert@interlog.com> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Cc: Christoph Hellwig <hch@lst.de> Cc: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
-
By Logan Gunthorpe
nvmet_passthru_map_sg() only supports mapping a single bio, not a chain, so the effective maximum transfer should also be limited by BIO_MAX_PAGES (presently this works out to 1MB). For PCI passthru devices the max_sectors would typically be more limiting than BIO_MAX_PAGES, but this may not be true for all passthru devices. Fixes: c1fef73f ("nvmet: add passthru code to process commands") Suggested-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Logan Gunthorpe <logang@deltatee.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Sagi Grimberg <sagi@grimberg.me> Cc: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
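The cap being described is roughly the following (a sketch using the helpers available at the time; BIO_MAX_PAGES was later renamed BIO_MAX_VECS):

```c
#include <linux/bio.h>
#include <linux/blkdev.h>

/* A single non-chained bio carries at most BIO_MAX_PAGES pages, so
 * clamp the advertised transfer size to that as well as to the
 * passthru device's own max_hw_sectors (units: 512-byte sectors). */
static unsigned int passthru_max_sectors(struct request_queue *pt_queue)
{
	return min_t(unsigned int, queue_max_hw_sectors(pt_queue),
		     BIO_MAX_PAGES << (PAGE_SHIFT - SECTOR_SHIFT));
}
```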
-