- 02 September 2020, 1 commit
-
-
Submitted by Dave Jiang

to #27305291

commit b3ed2ce024c36054e51cca2eb31a1cdbe4a5f11e upstream.

Add command definitions for the security commands defined in Intel DSM specification v1.8 [1]. This includes "get security state", "set passphrase", "unlock unit", "freeze lock", "secure erase", "overwrite", "overwrite query", "master passphrase enable/disable", and "master erase". Since this adds several Intel definitions, move the relevant bits to their own header.

These commands mutate physical data, but that manipulation is not cache coherent. The requirement to flush and invalidate caches makes these commands unsuitable to be called from userspace, so extra logic is added to detect and block these commands from being submitted via the ioctl command submission path.

Lastly, the commands may contain sensitive key material that should not be dumped in a standard debug session. Update the nvdimm-command payload-dump facility to move security command payloads behind a default-off compile-time switch.

[1]: http://pmem.io/documents/NVDIMM_DSM_Interface-V1.8.pdf

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
[ Shile: fixed conflicts: This patch updated the file "drivers/acpi/nfit/intel.h". The header file is introduced by commit 0ead111 ("acpi, nfit: Collect shutdown status") upstream, which also updates the test files. So let's fetch this part to fix the conflict:
  - tools/testing/nvdimm/test/nfit.c
  - tools/testing/nvdimm/test/nfit_test.h ]
Signed-off-by: Shile Zhang <shile.zhang@linux.alibaba.com>
Reviewed-by: Yang Shi <yang.shi@linux.alibaba.com>
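As a rough sketch of what such command definitions might look like in a dedicated header, the security functions can be enumerated per DSM function index. The exact values below are illustrative assumptions and are not copied from drivers/acpi/nfit/intel.h:

    /* Illustrative sketch only: the function indices are assumptions,
     * not the authoritative values from the Intel header or spec. */
    enum nvdimm_intel_security_cmd {
        NVDIMM_INTEL_GET_SECURITY_STATE    = 0x12,
        NVDIMM_INTEL_SET_PASSPHRASE        = 0x13,
        NVDIMM_INTEL_DISABLE_PASSPHRASE    = 0x14,
        NVDIMM_INTEL_UNLOCK_UNIT           = 0x15,
        NVDIMM_INTEL_FREEZE_LOCK           = 0x16,
        NVDIMM_INTEL_SECURE_ERASE          = 0x17,
        NVDIMM_INTEL_OVERWRITE             = 0x18,
        NVDIMM_INTEL_QUERY_OVERWRITE       = 0x19,
        NVDIMM_INTEL_SET_MASTER_PASSPHRASE = 0x1a,
        NVDIMM_INTEL_MASTER_SECURE_ERASE   = 0x1b,
    };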
-
- 27 April 2019, 4 commits
-
-
Submitted by Dan Williams

commit 78153dd45e7e0596ba32b15d02bda08e1513111e upstream.

Gate ARS result consumption on whether the OS issued a start-ARS since the previous consumption. The BIOS may only clear its result buffers after a successful start-ARS.

Fixes: 0caeef63 ("libnvdimm: Add a poison list and export badblocks")
Cc: <stable@vger.kernel.org>
Reported-by: Krzysztof Rusocki <krzysztof.rusocki@intel.com>
Reported-by: Vishal Verma <vishal.l.verma@intel.com>
Reviewed-by: Toshi Kani <toshi.kani@hpe.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Dan Williams

commit 5479b2757f26fe9908fc341d105b2097fe820b6f upstream.

The ARS implementation uses exponential back-off on the poll interval to prevent high-frequency access to the DIMM / platform interface. Depending on when the ARS completes, the poll interval may lag the completion event by minutes. Allow root to reset the timeout each time it probes the status. A one-second timeout is still enforced, but root can otherwise control the poll interval.

Fixes: bc6ba808 ("nfit, address-range-scrub: rework and simplify ARS...")
Cc: <stable@vger.kernel.org>
Reported-by: Erwin Tsaur <erwin.tsaur@oracle.com>
Reviewed-by: Toshi Kani <toshi.kani@hpe.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
Submitted by Dan Williams

commit e34b8252a3d2893ca55c82dbfcdaa302fa03d400 upstream.

In preparation for introducing new flags to gate whether ARS results are stale, or to poll the completion state, convert the existing flags to an unsigned long with enumerated values. This conversion allows the flags to be atomically updated outside of ->init_mutex.

Reviewed-by: Toshi Kani <toshi.kani@hpe.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
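A minimal sketch of the idea (the flag names below are illustrative, not the driver's actual enum): once the flags live in an unsigned long, the standard atomic bitops can update them without holding ->init_mutex:

    #include <linux/bitops.h>

    /* Illustrative flag names; the driver's real enum may differ. */
    enum example_scrub_flags {
        ARS_BUSY,
        ARS_CANCEL,
        ARS_VALID,
        ARS_POLL,
    };

    struct example_desc {
        unsigned long scrub_flags;    /* replaces several bool/bit-fields */
    };

    static void mark_busy(struct example_desc *d)
    {
        /* atomic update: safe outside ->init_mutex */
        set_bit(ARS_BUSY, &d->scrub_flags);
    }

    static bool is_busy(struct example_desc *d)
    {
        return test_bit(ARS_BUSY, &d->scrub_flags);
    }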
-
Submitted by Dan Williams

commit 317a992ab9266b86b774b9f6b0f87eb4f59879a1 upstream.

The ars_start_flags property of 'struct acpi_nfit_desc' is no longer used since ARS_REQ_SHORT and ARS_REQ_LONG were added.

Reviewed-by: Toshi Kani <toshi.kani@hpe.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
-
- 14 November 2018, 1 commit
-
-
Submitted by Dan Williams

commit d3abaf43bab8d5b0a3c6b982100d9e2be96de4ad upstream.

The Address Range Scrub implementation tried to skip running scrubs against ranges that were already scrubbed by the BIOS. Unfortunately that support also resulted in early scrub completions, as evidenced by this debug output from nfit_test:

    nd_region region9: ARS: range 1 short complete
    nd_region region3: ARS: range 1 short complete
    nd_region region4: ARS: range 2 ARS start (0)
    nd_region region4: ARS: range 2 short complete

...i.e. completions without any indication that the scrub was started.

This state of affairs was hard to see in the code due to the proliferation of state bits and mistakenly trying to track done state per-range when the completion is a global property of the bus. So, kill the four ARS state bits (ARS_REQ, ARS_REQ_REDO, ARS_DONE, and ARS_SHORT) and replace them with just two request flags, ARS_REQ_SHORT and ARS_REQ_LONG. The implementation will still complete and reap the results of BIOS-initiated ARS, but it will not attempt to use that information to affect the completion status of scrubbing the ranges from a Linux perspective. Instead, try to synchronously run a short ARS per range at init time and schedule a long scrub in the background. If ARS is busy with an ARS request, schedule both a short and a long scrub for when ARS returns to idle. This logic also satisfies the intent of what ARS_REQ_REDO was trying to achieve.

The new rule is that the REQ flag stays set until the next successful ars_start() for that range. With the new policy that the REQ flags are not cleared until the next start, the implementation no longer loses requests, as can be seen from the following log:

    nd_region region3: ARS: range 1 ARS start short (0)
    nd_region region9: ARS: range 1 ARS start short (0)
    nd_region region3: ARS: range 1 complete
    nd_region region4: ARS: range 2 ARS start short (0)
    nd_region region9: ARS: range 1 complete
    nd_region region9: ARS: range 1 ARS start long (0)
    nd_region region4: ARS: range 2 complete
    nd_region region3: ARS: range 1 ARS start long (0)
    nd_region region9: ARS: range 1 complete
    nd_region region3: ARS: range 1 complete
    nd_region region4: ARS: range 2 ARS start long (0)
    nd_region region4: ARS: range 2 complete

...note that the nfit_test emulated driver provides two buses, which is why some of the range indices are duplicated. Notice that each range now successfully completes a short and a long scrub.

Cc: <stable@vger.kernel.org>
Fixes: 14c73f99 ("nfit, address-range-scrub: introduce nfit_spa->ars_state")
Fixes: cc3d3458 ("acpi/nfit: queue issuing of ars when an uc error...")
Reported-by: Jacek Zloch <jacek.zloch@intel.com>
Reported-by: Krzysztof Rusocki <krzysztof.rusocki@intel.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 28 July 2018, 1 commit
-
-
Submitted by Dave Jiang

When the ACPI UC error notifier gets called and the ARS_REQ bit is set with the passed-in flag, we can receive -EBUSY if the ARS_REQ bit is already set in nfit_spa->ars_state. When that happens, the ARS request is dropped. That can potentially cause us to miss unreported errors that the ongoing ARS request does not pick up. Add an ARS_REQ_REDO state that will request a short ARS upon ARS completion to grab any errors we missed.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Vishal Verma <vishal.l.verma@intel.com>
-
- 06 July 2018, 1 commit
-
-
Submitted by Dan Williams

The notification of scrub completion happens within the scrub workqueue. That can clearly race someone running scrub_show() and work_busy() before the workqueue has a chance to flush the recently completed work. Add a flag to reliably indicate the idle vs busy state. Without this change, applications using poll(2) to wait for scrub completion may falsely wake up, read ARS as being busy even though the thread is going idle, and then hang indefinitely.

Fixes: bc6ba808 ("nfit, address-range-scrub: rework and simplify ARS...")
Cc: <stable@vger.kernel.org>
Reported-by: Vishal Verma <vishal.l.verma@intel.com>
Tested-by: Vishal Verma <vishal.l.verma@intel.com>
Reported-by: Lukasz Dorau <lukasz.dorau@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 07 April 2018, 2 commits
-
-
Submitted by Dan Williams

ARS is an operation that can take 10s to 100s of seconds to find media errors that should rarely be present. If the platform crashes due to media errors in persistent memory, the expectation is that the BIOS will report those known errors in a 'short' ARS request. A 'short' ARS request asks platform firmware to return an ARS payload with all known errors, but without issuing a 'long' scrub.

At driver init a short request is issued to all PMEM ranges before registering regions. Then, in the background, a long ARS is scheduled for each region. The ARS implementation is simplified to centralize ARS completion work in the ars_complete() helper. The timeout is removed since there is no facility to cancel ARS, and this otherwise arranges for system init to never be blocked waiting for a 'long' ARS. The ars_state flags are used to coordinate ARS requests from driver init, ARS requests from userspace, and ARS requests in response to media error notifications.

Given that there is no notification of ARS completion, the implementation still needs to poll. It backs off exponentially to a maximum poll period of 30 minutes.

Suggested-by: Toshi Kani <toshi.kani@hpe.com>
Co-developed-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
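A sketch of the exponential back-off described above. The helper and field names are hypothetical; only the doubling-with-a-30-minute-cap behavior comes from the text:

    #include <linux/kernel.h>
    #include <linux/jiffies.h>
    #include <linux/workqueue.h>

    #define EXAMPLE_ARS_POLL_MAX    (30 * 60)    /* 30 minutes, in seconds */

    /* Hypothetical polling helper: double the interval each time the
     * status query reports "still in progress", capped at 30 minutes. */
    static void example_ars_schedule_next_poll(struct delayed_work *dwork,
                                               unsigned int *poll_secs)
    {
        *poll_secs = min_t(unsigned int, *poll_secs * 2, EXAMPLE_ARS_POLL_MAX);
        queue_delayed_work(system_wq, dwork, *poll_secs * HZ);
    }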
-
Submitted by Dan Williams

acpi_nfit_query_poison() is awkward in that it requires an nfit_spa argument in order to determine what max_ars value to use. Instead, probe for the minimum max_ars across all scrub-capable ranges in the system and drop the nfit_spa argument. This enables a larger rework / simplification of the ARS state machine whereby the status can be retrieved once and then iterated over all address ranges to reap completions.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 06 April 2018, 1 commit
-
-
Submitted by Dan Williams

In preparation for reworking the ARS implementation to better handle short vs long ARS runs, introduce nfit_spa->ars_state. For now this just replaces the nfit_spa->ars_required bit-field/flag, but going forward it will be used to track ARS completion and to make short vs long ARS requests.

Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 29 March 2018, 1 commit
-
-
Submitted by Dan Williams

Some BIOSen do not handle 0-byte transfer lengths for the _LSR and _LSW (label storage read/write) methods. This causes Linux to fall back to the deprecated _DSM path, or otherwise disable label support. Introduce acpi_nvdimm_has_method() to detect whether a method is available rather than calling the method, require _LSI and _LSR to be paired, and require read support before enabling write support.

Cc: <stable@vger.kernel.org>
Fixes: 4b27db7e ("acpi, nfit: add support for the _LS...")
Suggested-by: Erik Schmauss <erik.schmauss@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
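A minimal sketch of how such a probe can be done with ACPICA, assuming the helper simply checks whether the named method exists under the device handle rather than evaluating it (the real implementation may differ in detail):

    #include <linux/acpi.h>

    /* Sketch: report whether an ACPI method (e.g. "_LSR") exists for a
     * device, without evaluating it and tripping over buggy firmware. */
    static bool example_nvdimm_has_method(struct acpi_device *adev,
                                          const char *method)
    {
        acpi_handle handle;
        acpi_status status;

        status = acpi_get_handle(adev->handle, (acpi_string)method, &handle);
        return ACPI_SUCCESS(status);
    }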
-
- 02 February 2018, 1 commit
-
-
Submitted by Dave Jiang

In ACPI 6.2a the platform capability structure has been added to the NFIT tables. It provides software the ability to determine whether a system supports the auto flushing of CPU caches on power loss. If the capability is supported, we do not need to do dax_flush(). Plumb the path to set the property per region from the NFIT tables.

This patch depends on the ACPI NFIT 6.2a platform capabilities support code in include/acpi/actbl1.h.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
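A sketch of that plumbing under the assumption that the NFIT capability bit is simply translated into a region flag; the identifier names follow the upstream headers but should be treated as assumptions here:

    #include <linux/acpi.h>
    #include <linux/bitops.h>
    #include <linux/libnvdimm.h>

    /* Sketch: if the NFIT platform-capabilities structure advertises
     * CPU-cache flush-on-power-loss, mark the region so pmem/dax can
     * skip explicit cache flushing. */
    static void example_set_persist_cache(struct nd_region_desc *ndr_desc,
                                          u32 platform_cap)
    {
        if (platform_cap & ACPI_NFIT_CAPABILITY_CACHE_FLUSH)
            set_bit(ND_REGION_PERSIST_CACHE, &ndr_desc->flags);
    }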
-
- 16 November 2017, 1 commit
-
-
Submitted by Dan Williams

The NVDIMM_FAMILY_INTEL 'Enable Latch System Shutdown Status' command indicates to the platform that system software has acknowledged the most recent unsafe shutdown status.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 31 October 2017, 1 commit
-
-
Submitted by Dan Williams

Per v1.6 of the NVDIMM_FAMILY_INTEL command set [1], some of the new commands require rev-id 2. In addition to enabling ND_CMD_CALL for these new function numbers, add a lookup table for revision-ids by family and function number.

[1]: http://pmem.io/documents/NVDIMM_DSM_Interface-V1.6.pdf

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 30 October 2017, 1 commit
-
-
Submitted by Dan Williams

For vendor-specific commands that do not have a common kernel translation, hide them from nmemX/commands. For example, the following results from new enabling to probe for support of the new NVDIMM_FAMILY_INTEL DSMs specified in v1.6 of the command specification [1]:

    # cat /sys/bus/nd/devices/nmem0/commands
    smart smart_thresh flags get_size get_data set_data effect_size
    effect_log vendor cmd_call unknown unknown unknown unknown unknown
    unknown unknown unknown

[1]: https://pmem.io/documents/NVDIMM_DSM_Interface-V1.6.pdf

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 08 October 2017, 2 commits
-
-
Submitted by Dan Williams

ACPI 6.2 adds support for named methods to access the label storage area of an NVDIMM. We prefer these new methods if available, and otherwise fall back to the NVDIMM_FAMILY_INTEL _DSMs. The kernel ioctls, ND_IOCTL_{GET,SET}_CONFIG_{SIZE,DATA}, remain generic, and the driver translates the 'package' payloads into the NVDIMM_FAMILY_INTEL 'buffer' format to maintain compatibility with existing userspace and keep the output-buffer parsing code in the driver common. The output payloads are mostly compatible, save for the 'label area locked' status that moves from the 'config_size' (_LSI) command to the 'config_read' (_LSR) command status.

Cc: Jeff Moyer <jmoyer@redhat.com>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Submitted by Yasunori Goto

nfit_test needs to show which features are supported via ND_CMD_CALL on device/nfit/dsm_mask, but currently there is no way to express that. This patch enables it.

Signed-off-by: Yasunori Goto <y-goto@jp.fujitsu.com>
Reviewed-by: Vishal Verma <vishal.l.verma@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 03 July 2017, 1 commit
-
-
Submitted by Toshi Kani

ACPI 6.2 defines in section 9.20.7.2 that the OSPM may call a Start ARS with Flags Bit [1] set upon receiving the 0x81 notification:

    Upon receiving the notification, the OSPM may decide to issue a Start ARS
    with Flags Bit [1] set to prepare for the retrieval of existing records and
    issue the Query ARS Status function to retrieve the records.

Add support to call a Start ARS from acpi_nfit_uc_error_notify() with ND_ARS_RETURN_PREV_DATA set when HW_ERROR_SCRUB_ON is not set.

Link: http://www.uefi.org/sites/default/files/resources/ACPI_6_2.pdf
Signed-off-by: Toshi Kani <toshi.kani@hpe.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Cc: Vishal Verma <vishal.l.verma@intel.com>
Cc: Linda Knippers <linda.knippers@hpe.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
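A hedged sketch of that decision; the surrounding struct and helper are hypothetical stand-ins, and only ND_ARS_RETURN_PREV_DATA and HW_ERROR_SCRUB_ON are the flags named in the text:

    #include <linux/ndctl.h>    /* ND_ARS_RETURN_PREV_DATA */

    /* Hypothetical stand-ins for the driver's state and rescan helper. */
    enum { EXAMPLE_HW_ERROR_SCRUB_OFF, EXAMPLE_HW_ERROR_SCRUB_ON };

    struct example_desc {
        int scrub_mode;
    };

    static void example_ars_rescan(struct example_desc *d, unsigned long flags)
    {
        /* ...kick off a Start ARS with the given start flags... */
    }

    /* Sketch: on the 0x81 notification, request previously collected error
     * records (Flags Bit [1]) unless a full scrub on HW error is enabled. */
    static void example_uc_error_notify(struct example_desc *d)
    {
        unsigned long flags = 0;

        if (d->scrub_mode != EXAMPLE_HW_ERROR_SCRUB_ON)
            flags |= ND_ARS_RETURN_PREV_DATA;

        example_ars_rescan(d, flags);
    }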
-
- 16 June 2017, 1 commit
-
-
Submitted by Toshi Kani

ACPI 6.2 defines a new ACPI notification value for the NVDIMM Root Device in Table 5-169:

    0x81 Unconsumed Uncorrectable Memory Error Detected
    Used to pro-actively notify OSPM of uncorrectable memory errors detected
    (for example, a memory scrubbing engine that continuously scans the
    NVDIMMs' memory). This is an optional notification. Only locations that
    were mapped into SPA by the platform will generate a notification.

Add support for this notification value by initiating an ARS scan. This will find new error locations and add their badblocks information.

Link: http://www.uefi.org/sites/default/files/resources/ACPI_6_2.pdf
Signed-off-by: Toshi Kani <toshi.kani@hpe.com>
Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Cc: Vishal Verma <vishal.l.verma@intel.com>
Cc: Linda Knippers <linda.knippers@hpe.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 06 June 2017, 1 commit
-
-
Submitted by Andy Shevchenko

There are new types and helpers that are supposed to be used in new code. As a preparation to get rid of the legacy types and API functions, do the conversion here.

Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
-
- 19 April 2017, 1 commit
-
-
Submitted by Dan Williams

The workqueue may still be running when the devres callbacks start firing to deallocate an acpi_nfit_desc instance. Stop and flush the workqueue before letting any other devres de-allocations proceed.

Reported-by: Linda Knippers <linda.knippers@hpe.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 18 April 2017, 2 commits
-
-
Submitted by Dan Williams

The nvdimm probe flushing mechanism gives userspace a sync point where it knows all asynchronous driver probe sequences have completed. However, it need not wait for other asynchronous actions, like on-demand address-range-scrub. Track the init work separately from other work in the workqueue, and only flush the former.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Submitted by Dan Williams

Stop requiring that dimms be successfully mapped into a system-physical-address range. For provisioning and hardware-remediation purposes the kernel should account for failed devices in sysfs. If possible it should still allow management commands to be sent to the device.

Reported-by: Toshi Kani <toshi.kani@hpe.com>
Tested-by: Toshi Kani <toshi.kani@hpe.com>
Reported-by: Linda Knippers <linda.knippers@hpe.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 07 December 2016, 1 commit
-
-
Submitted by Dan Williams

A recent flurry of bug discoveries in the nfit driver's DSM marshalling routine has highlighted the fact that we do not have unit test coverage for this routine. Add a self-test of the acpi_nfit_ctl() routine before probing the "nfit_test.0" device. This mocks stimulus to acpi_nfit_ctl(), and if any of the tests fail, "nfit_test.0" will be unavailable, causing the rest of the tests to not run / fail. This unit test will also be a place to land reproductions of quirky BIOS behavior discovered in the field and ensure the kernel does not regress against implementations it has seen in practice.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 01 October 2016, 1 commit
-
-
Submitted by Vishal Verma

Starting a full Address Range Scrub (ARS) on hitting a memory error machine check exception may not always be desirable. Provide a way through sysfs to toggle the behavior between just adding the address (cache line) where the MCE happened to the poison list and doing a full scrub. The former (selective insertion of the address) is done unconditionally.

Cc: linux-acpi@vger.kernel.org
Cc: Linda Knippers <linda.knippers@hpe.com>
Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 02 September 2016, 1 commit
-
-
Submitted by Dan Williams

Trigger an nmemX/nfit/flags attribute to fire an event whenever a smart-threshold DSM is received.

Reviewed-by: Vishal Verma <vishal.l.verma@intel.com>
Acked-by: Rafael J. Wysocki <rafael@kernel.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 30 August 2016, 1 commit
-
-
Submitted by Dan Williams

Per ACPI 6.1 Section 9.20.3, NVDIMM devices, children of the ACPI0012 NVDIMM Root device, can receive health event notifications. Given that these devices are precluded from registering a notification handler via acpi_driver.acpi_device_ops (due to having no _HID), we use acpi_install_notify_handler() directly. The registered handler, acpi_nvdimm_notify(), triggers a poll(2) event on the nmemX/nfit/flags sysfs attribute when a health event notification is received.

Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Tested-by: Toshi Kani <toshi.kani@hpe.com>
Reviewed-by: Vishal Verma <vishal.l.verma@intel.com>
Acked-by: Rafael J. Wysocki <rafael@kernel.org>
Reviewed-by: Toshi Kani <toshi.kani@hpe.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 23 August 2016, 2 commits
-
-
Submitted by Dan Williams

We have had a couple of bugs in this implementation in the past, and before we add another ->notify() implementation for nvdimm devices, let's allow this routine to be exercised via nfit_test. Rewrite acpi_nfit_notify() in terms of a generic struct device and acpi_handle parameter, and then implement a mock acpi_evaluate_object() that returns a _FIT payload.

Cc: Vishal Verma <vishal.l.verma@intel.com>
Reviewed-by: Vishal Verma <vishal.l.verma@intel.com>
Acked-by: Rafael J. Wysocki <rafael@kernel.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Submitted by Vishal Verma

Commit 20985164 ("acpi: nfit: Add support for hot-add") added support for _FIT notifications, but it neglected to verify that the notification event code matches the one in the ACPI spec for "NFIT Update". Currently there is only one code in the spec, but once additional codes are added, older kernels (without this fix) will misbehave by assuming all event notifications are for an NFIT Update.

Fixes: 20985164 ("acpi: nfit: Add support for hot-add")
Cc: <stable@vger.kernel.org>
Cc: <linux-acpi@vger.kernel.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Reported-by: Linda Knippers <linda.knippers@hpe.com>
Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 24 July 2016, 3 commits
-
-
Submitted by Vishal Verma

When a latent (unknown to 'badblocks') error is encountered, it will trigger a machine check exception. On a system with machine check recovery, this will only SIGBUS the process(es) which had the bad page mapped (as opposed to a kernel panic on platforms without machine check recovery features). In the former case, we want to trigger a full rescan of that nvdimm bus. This will allow any additional, new errors to be captured in the block devices' badblocks lists, and offending operations on them can be trapped early, avoiding machine checks.

This is done by registering a callback function with the x86_mce_decoder_chain and calling the new ars_rescan functionality with the address in the MCE notification.

Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Tony Luck <tony.luck@intel.com>
Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
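A minimal sketch of hooking the MCE decode chain; the notifier body is a placeholder, while mce_register_decode_chain() and the notifier_block plumbing are the standard x86 kernel API:

    #include <linux/init.h>
    #include <linux/notifier.h>
    #include <asm/mce.h>

    /* Sketch: receive decoded machine-check records and, for valid memory
     * errors, hand mce->addr to an ARS-rescan path for the owning bus. */
    static int example_nfit_mce_notify(struct notifier_block *nb,
                                       unsigned long val, void *data)
    {
        struct mce *mce = data;

        if (!(mce->status & MCI_STATUS_VAL))
            return NOTIFY_DONE;

        /* ...check mce->addr against known SPA ranges and trigger the
         * bus-wide ars_rescan for the matching descriptor... */
        return NOTIFY_OK;
    }

    static struct notifier_block example_nfit_mce_dec = {
        .notifier_call = example_nfit_mce_notify,
    };

    static int __init example_mce_register(void)
    {
        mce_register_decode_chain(&example_nfit_mce_dec);
        return 0;
    }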
-
Submitted by Dan Williams

With the arrival of x86 machine-check support the nfit driver will add a (conditionally compiled) source file. Prepare for this by moving all nfit source to drivers/acpi/nfit/. This is pure code movement, no functional changes.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Submitted by Vishal Verma

Normally, an ARS (Address Range Scrub) only happens at boot/initialization time. There can, however, arise situations where a bus-wide rescan is needed. Notably, in the case of discovering a latent media error, we should do a full rescan to figure out what other sectors are bad, and thus potentially avoid triggering an MCE on them in the future. Also provide a sysfs trigger to start a bus-wide scrub.

Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 22 July 2016, 2 commits
-
-
Submitted by Dan Williams

Pass the nfit buffer as a parameter rather than hanging it off of acpi_desc.

Reviewed-by: "Lee, Chun-Yi" <jlee@suse.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Submitted by Dan Williams

acpi_evaluate_object() allocates memory. Free the buffer allocated during acpi_nfit_add(). In order for this memory to be freed, acpi_nfit_init() needs to be converted to duplicate the nfit contents in its internal allocation. Use zero-length arrays to minimize the thrash with the rest of the nfit driver implementation.

All of the add_<nfit-sub-table>() routines now validate a minimum table size and expect hotplugged tables to match the size of the original table to count as a duplicate. For variable-length tables, like 'idt' and 'flush', we calculate the dynamic size. Note that hotplug by definition cannot change the interleave, as it would cause data corruption of in-use namespaces.

Cc: Vishal Verma <vishal.l.verma@intel.com>
Reported-by: Xiao Guangrong <guangrong.xiao@intel.com>
Reported-by: Haozhong Zhang <haozhong.zhang@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
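A sketch of the duplication pattern described above: each tracked sub-table gets a wrapper with a trailing flexible array so the ACPI data can be copied into the driver's own allocation. The wrapper and helper names mirror the driver only loosely and are meant as an illustration:

    #include <linux/acpi.h>
    #include <linux/device.h>
    #include <linux/list.h>
    #include <linux/slab.h>
    #include <linux/string.h>

    /* Wrapper that owns a copy of the ACPI memory-map sub-table. */
    struct example_nfit_memdev {
        struct list_head list;
        struct acpi_nfit_memory_map memdev[];    /* duplicated table data */
    };

    static struct example_nfit_memdev *
    example_dup_memdev(struct device *dev, struct acpi_nfit_memory_map *src)
    {
        struct example_nfit_memdev *n;

        if (src->header.length < sizeof(*src))
            return NULL;    /* enforce a minimum table size */

        n = devm_kzalloc(dev, sizeof(*n) + sizeof(*src), GFP_KERNEL);
        if (!n)
            return NULL;

        memcpy(n->memdev, src, sizeof(*src));    /* driver now owns the data */
        INIT_LIST_HEAD(&n->list);
        return n;
    }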
-
- 13 July 2016, 1 commit
-
-
Submitted by Dan Williams

The __pmem address space was meant to annotate codepaths that touch persistent memory and need to coordinate a call to wmb_pmem(). Now that wmb_pmem() is gone, there is little need to keep this annotation.

Cc: Christoph Hellwig <hch@lst.de>
Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 12 July 2016, 3 commits
-
-
Submitted by Dan Williams

nvdimm_flush() is a replacement for the x86 'pcommit' instruction. It is an optional write-flushing mechanism that an nvdimm bus can provide for the pmem driver to consume. In the case of the NFIT nvdimm-bus provider, nvdimm_flush() is implemented as a series of flush-hint-address [1] writes to each dimm in the interleave set (region) that backs the namespace.

The nvdimm_has_flush() routine relies on platform firmware to describe the flushing capabilities of a platform. It uses the heuristic of whether an nvdimm bus provider provides flush address data to return a ternary result:

    1: flush addresses defined
    0: dimm topology described without flush addresses (assume ADR)
    -errno: no topology information, unable to determine flush mechanism

The pmem driver is expected to take the following actions on this ternary result:

    1: nvdimm_flush() in response to REQ_FUA / REQ_FLUSH and shutdown
    0: do not set WC or FUA on the queue, take no further action
    -errno: warn and then operate as if nvdimm_has_flush() returned '0'

The caveat of this heuristic is that it cannot distinguish the "dimm does not have flush address" case from the "platform firmware is broken and failed to describe a flush address" case. Given we are already explicitly trusting the NFIT, there's not much more we can do beyond blacklisting broken firmwares if they are ever encountered.

Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
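A hedged sketch of how a pmem-style consumer might act on the ternary result, using blk_queue_write_cache() for the queue policy; treat it as an approximation of the policy in the text, not the pmem driver verbatim:

    #include <linux/blkdev.h>
    #include <linux/libnvdimm.h>

    /* Sketch: translate nvdimm_has_flush()'s ternary result into queue
     * policy -- honor FLUSH/FUA via nvdimm_flush() only when flush hint
     * addresses are described, otherwise rely on ADR. */
    static void example_setup_flush(struct request_queue *q,
                                    struct nd_region *nd_region)
    {
        int has_flush = nvdimm_has_flush(nd_region);

        if (has_flush < 0)
            pr_warn("no flush topology described, assuming ADR\n");

        /* set WC/FUA only when the platform describes flush addresses */
        blk_queue_write_cache(q, has_flush > 0, has_flush > 0);
    }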
-
Submitted by Dan Williams

In preparation for triggering flushes of a DIMM's writes-posted-queue (WPQ) via the pmem driver, move the mapping of flush hint addresses to the region driver. Since this uses devm_nvdimm_memremap(), the flush addresses will remain mapped while any region to which the dimm belongs is active.

We need to communicate more information to the nvdimm core to facilitate this mapping, namely each dimm object now carries an array of flush hint address resources.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
Submitted by Dan Williams

Now that all shared mappings are handled by devm_nvdimm_memremap(), we no longer need nfit_spa_map(), nor do we need to trigger a callback to the bus provider at region disable time.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
-
- 30 June 2016, 1 commit
-
-
Submitted by Dan Williams

Per JEDEC Annex L Release 3, the SPD data is:

    Bits 9~5
        00 000 = Function Undefined
        00 001 = Byte addressable energy backed
        00 010 = Block addressed
        00 011 = Byte addressable, no energy backed
        All other codes reserved
    Bits 4~0
        0 0000 = Proprietary interface
        0 0001 = Standard interface 1
        All other codes reserved; see Definitions of Functions

...and per the ACPI 6.1 spec:

    byte0: Bits 4~0 (0 or 1)
    byte1: Bits 9~5 (1, 2, or 3)

...so a format interface code displayed as 0x301 should be stored in the nfit as (0x1, 0x3), little-endian.

Cc: Toshi Kani <toshi.kani@hpe.com>
Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Cc: Robert Moore <robert.moore@intel.com>
Cc: Robert Elliott <elliott@hpe.com>
Link: https://bugzilla.kernel.org/show_bug.cgi?id=121161
Fixes: 30ec5fd4 ("nfit: fix format interface code byte order per ACPI6.1")
Fixes: 5ad9a7fd ("acpi/nfit: Update nfit driver to comply with ACPI 6.1")
Reported-by: Kristin Jacque <kristin.jacque@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
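A small worked sketch of the byte order described above, assuming the two NFIT bytes are combined little-endian into the 16-bit code that tooling displays (e.g. 0x301):

    #include <stdint.h>
    #include <stdio.h>

    /* byte0 carries bits 4~0, byte1 carries bits 9~5; combined little-endian. */
    static uint16_t format_interface_code(uint8_t byte0, uint8_t byte1)
    {
        return (uint16_t)byte0 | ((uint16_t)byte1 << 8);
    }

    int main(void)
    {
        /* (0x1, 0x3) stored in the nfit -> displayed as 0x301 */
        printf("0x%x\n", format_interface_code(0x1, 0x3));
        return 0;
    }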
-