Commit 51bcc738, authored by Emmanuel Grumbach

Merge tag 'mac80211-next-for-davem-2016-02-26' into next2

Here's another round of updates for -next:
 * big A-MSDU RX performance improvement (avoid linearize of paged RX)
 * rfkill changes: cleanups, documentation, platform properties
 * basic PBSS support in cfg80211
 * MU-MIMO action frame processing support
 * BlockAck reordering & duplicate detection offload support
 * various cleanups & little fixes
@@ -2,28 +2,12 @@ rfkill - radio frequency (RF) connector kill switch support
For details on this subsystem look at Documentation/rfkill.txt.
What: /sys/class/rfkill/rfkill[0-9]+/state
Date: 09-Jul-2007
KernelVersion v2.6.22
Contact: linux-wireless@vger.kernel.org
Description: Current state of the transmitter.
		This file is deprecated and scheduled to be removed in 2014,
		because it's not possible to express the 'soft and hard block'
		state of the rfkill driver.
Values: A numeric value.
0: RFKILL_STATE_SOFT_BLOCKED
transmitter is turned off by software
1: RFKILL_STATE_UNBLOCKED
transmitter is (potentially) active
2: RFKILL_STATE_HARD_BLOCKED
transmitter is forced off by something outside of
the driver's control.
What:		/sys/class/rfkill/rfkill[0-9]+/claim
Date:		09-Jul-2007
KernelVersion	v2.6.22
Contact:	linux-wireless@vger.kernel.org
Description:	This file was deprecated because there no longer was a way to
		claim just control over a single rfkill instance.
		This file was scheduled to be removed in 2012, and was removed
		in 2016.
Values:		0: Kernel handles events
@@ -2,9 +2,8 @@ rfkill - radio frequency (RF) connector kill switch support
For details on this subsystem look at Documentation/rfkill.txt.
For the deprecated /sys/class/rfkill/*/claim knobs of this interface look in
Documentation/ABI/removed/sysfs-class-rfkill.
What:		/sys/class/rfkill
Date:		09-Jul-2007
@@ -42,6 +41,28 @@ Values:	A numeric value.
		1: true
What: /sys/class/rfkill/rfkill[0-9]+/state
Date: 09-Jul-2007
KernelVersion v2.6.22
Contact: linux-wireless@vger.kernel.org
Description: Current state of the transmitter.
This file was scheduled to be removed in 2014, but due to its
large number of users it will be sticking around for a bit
longer. Despite it being marked as stable, the newer "hard" and
"soft" interfaces should be preferred, since it is not possible
to express the 'soft and hard block' state of the rfkill driver
through this interface. There will likely be another attempt to
remove it in the future.
Values: A numeric value.
0: RFKILL_STATE_SOFT_BLOCKED
transmitter is turned off by software
1: RFKILL_STATE_UNBLOCKED
transmitter is (potentially) active
2: RFKILL_STATE_HARD_BLOCKED
transmitter is forced off by something outside of
the driver's control.
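The value-to-constant mapping above can be sketched in a few lines; this is an illustrative decoder for a value read from the deprecated "state" file, not part of any kernel or userspace API.

```python
# Illustrative mapping of the deprecated rfkill "state" sysfs values to the
# rfkill constants listed above.
RFKILL_STATES = {
    0: "RFKILL_STATE_SOFT_BLOCKED",  # transmitter turned off by software
    1: "RFKILL_STATE_UNBLOCKED",     # transmitter (potentially) active
    2: "RFKILL_STATE_HARD_BLOCKED",  # forced off outside the driver's control
}

def describe_state(raw):
    """Decode a value read from /sys/class/rfkill/rfkill<N>/state."""
    return RFKILL_STATES.get(int(raw), "unknown")

print(describe_state("1"))  # RFKILL_STATE_UNBLOCKED
```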
What:		/sys/class/rfkill/rfkill[0-9]+/hard
Date:		12-March-2010
KernelVersion	v2.6.34
......
@@ -24,7 +24,5 @@ net_prio.txt
  - Network priority cgroups details and usages.
pids.txt
  - Process number cgroups details and usages.
resource_counter.txt
- Resource Counter API.
unified-hierarchy.txt
  - Description of the new/next cgroup interface.
@@ -84,8 +84,7 @@ Throttling/Upper Limit policy
- Run dd to read a file and see if rate is throttled to 1MB/s or not.

	# dd iflag=direct if=/mnt/common/zerofile of=/dev/null bs=4K count=1024
	1024+0 records in
	1024+0 records out
	4194304 bytes (4.2 MB) copied, 4.0001 s, 1.0 MB/s
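The dd numbers above can be sanity-checked: 4 KiB blocks times 1024 reads is 4194304 bytes, and dividing by the elapsed time reproduces the ~1.0 MB/s throttle (dd reports decimal megabytes).

```python
# Check the throttled throughput reported by dd above.
bytes_copied = 4 * 1024 * 1024  # bs=4K * count=1024
seconds = 4.0001

mb_per_s = bytes_copied / seconds / 1_000_000  # dd uses decimal MB
print(round(mb_per_s, 1))  # 1.0
```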
@@ -374,82 +373,3 @@ One can experience an overall throughput drop if you have created multiple
groups and put applications in that group which are not driving enough
IO to keep disk busy. In that case set group_idle=0, and CFQ will not idle
on individual groups and throughput should improve.
Writeback
=========

Page cache is dirtied through buffered writes and shared mmaps and
written asynchronously to the backing filesystem by the writeback
mechanism. Writeback sits between the memory and IO domains and
regulates the proportion of dirty memory by balancing dirtying and
write IOs.

On traditional cgroup hierarchies, relationships between different
controllers cannot be established, making it impossible for writeback
to operate accounting for cgroup resource restrictions, and all
writeback IOs are attributed to the root cgroup.

If both the blkio and memory controllers are used on the v2 hierarchy
and the filesystem supports cgroup writeback, writeback operations
correctly follow the resource restrictions imposed by both memory and
blkio controllers.

Writeback examines both system-wide and per-cgroup dirty memory status
and enforces the more restrictive of the two. Also, writeback control
parameters which are absolute values - vm.dirty_bytes and
vm.dirty_background_bytes - are distributed across cgroups according
to their current writeback bandwidth.

There's a peculiarity stemming from the discrepancy in ownership
granularity between the memory controller and writeback. While the
memory controller tracks ownership per page, writeback operates on an
inode basis. cgroup writeback bridges the gap by tracking ownership by
inode but migrating ownership if too many foreign pages, pages which
don't match the current inode ownership, have been encountered while
writing back the inode.

This is a conscious design choice as writeback operations are
inherently tied to inodes, making strictly following page ownership
complicated and inefficient. The only use case which suffers from
this compromise is multiple cgroups concurrently dirtying disjoint
regions of the same inode, which is an unlikely use case and decided
to be unsupported. Note that as the memory controller assigns page
ownership on first use and doesn't update it until the page is
released, even if cgroup writeback strictly followed page ownership,
multiple cgroups dirtying overlapping areas wouldn't work as expected.
In general, write-sharing an inode across multiple cgroups is not well
supported.

Filesystem support for cgroup writeback
---------------------------------------

A filesystem can make writeback IOs cgroup-aware by updating
address_space_operations->writepage[s]() to annotate bio's using the
following two functions.

* wbc_init_bio(@wbc, @bio)
  Should be called for each bio carrying writeback data and associates
  the bio with the inode's owner cgroup. Can be called anytime
  between bio allocation and submission.

* wbc_account_io(@wbc, @page, @bytes)
  Should be called for each data segment being written out. While
  this function doesn't care exactly when it's called during the
  writeback session, it's the easiest and most natural to call it as
  data segments are added to a bio.

With writeback bio's annotated, cgroup support can be enabled per
super_block by setting MS_CGROUPWB in ->s_flags. This allows for
selective disabling of cgroup writeback support which is helpful when
certain filesystem features, e.g. journaled data mode, are
incompatible.

wbc_init_bio() binds the specified bio to its cgroup. Depending on
the configuration, the bio may be executed at a lower priority and if
the writeback session is holding shared resources, e.g. a journal
entry, may lead to priority inversion. There is no one easy solution
for the problem. Filesystems can try to work around specific problem
cases by skipping wbc_init_bio() or using bio_associate_blkcg()
directly.
This diff is collapsed.
This diff is collapsed.
Intel P-State driver
--------------------

This driver provides an interface to control the P-State selection for the
SandyBridge+ Intel processors.

The following document explains P-States:
http://events.linuxfoundation.org/sites/events/files/slides/LinuxConEurope_2015.pdf
As stated in the document, P-State doesn't exactly mean a frequency. However, for
the sake of the relationship with cpufreq, P-State and frequency are used
interchangeably.

Understanding the cpufreq core governors and policies is important before
discussing more details about the Intel P-State driver. Based on what callbacks
a cpufreq driver provides to the cpufreq core, it can support two types of
drivers:
- with target_index() callback: In this mode, the drivers using cpufreq core
simply provide the minimum and maximum frequency limits and an additional
interface target_index() to set the current frequency. The cpufreq subsystem
has a number of scaling governors ("performance", "powersave", "ondemand",
etc.). Depending on which governor is in use, the cpufreq core will request
transitions to a specific frequency using the target_index() callback.
- with setpolicy() callback: In this mode, drivers do not provide a
target_index() callback, so the cpufreq core can't request a transition to a
specific frequency. The driver provides minimum and maximum frequency limits
and callbacks to set a policy. The policy in cpufreq sysfs is referred to as
the "scaling governor". The cpufreq core can request the driver to operate in
either of the two policies: "performance" and "powersave". The driver decides
which frequency to use based on the above policy selection considering minimum
and maximum frequency limits.

The Intel P-State driver falls under the latter category, which implements the
setpolicy() callback. This driver decides what P-State to use based on the
requested policy from the cpufreq core. If the processor is capable of
selecting its next P-State internally, then the driver will offload this
responsibility to the processor (aka HWP: Hardware P-States). If not, the
driver implements algorithms to select the next P-State.

Since these policies are implemented in the driver, they are not the same as
the cpufreq scaling governors implementation, even if they have the same name in
the cpufreq sysfs (scaling_governors). For example the "performance" policy is
similar to cpufreq's "performance" governor, but "powersave" is completely
different from the cpufreq "powersave" governor. The strategy here is similar
to cpufreq "ondemand", where the requested P-State is related to the system load.
Sysfs Interface

In addition to the frequency-controlling interfaces provided by the cpufreq
core, the driver provides its own sysfs files to control the P-State selection.
These files have been added to /sys/devices/system/cpu/intel_pstate/.
Any changes made to these files are applicable to all CPUs (even in a
multi-package system).

max_perf_pct: Limits the maximum P-State that will be requested by
the driver. It states it as a percentage of the available performance. The
available (P-State) performance may be reduced by the no_turbo
setting described below.

min_perf_pct: Limits the minimum P-State that will be requested by
the driver. It states it as a percentage of the max (non-turbo)
performance level.

no_turbo: Limits the driver to selecting P-States below the turbo
frequency range.

turbo_pct: Displays the percentage of the total performance that
is supported by hardware that is in the turbo range. This number
is independent of whether turbo has been disabled or not.

num_pstates: Displays the number of P-States that are supported
by hardware. This number is independent of whether turbo has
been disabled or not.
For example, if a system has these parameters:
Max 1 core turbo ratio: 0x21 (Max 1 core ratio is the maximum P-State)
Max non turbo ratio: 0x17
Minimum ratio : 0x08 (Here the ratio is called max efficiency ratio)
Sysfs will show:
max_perf_pct:100, which corresponds to 1 core ratio
min_perf_pct:24, max_efficiency_ratio / max 1 Core ratio
no_turbo:0, turbo is not disabled
num_pstates:26 = (max 1 Core ratio - Max Efficiency Ratio + 1)
turbo_pct:39 = (max 1 core ratio - max non turbo ratio) / num_pstates
Refer to "Intel® 64 and IA-32 Architectures Software Developer’s Manual
Volume 3: System Programming Guide" to understand ratios.
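The example values above can be reproduced numerically. This is a sketch of the arithmetic only; the integer truncation mimics the driver's fixed-point math and is an assumption for illustration.

```python
# Worked check of the sysfs example above.
max_1core_ratio = 0x21       # 33: max 1 core turbo ratio (maximum P-State)
max_nonturbo_ratio = 0x17    # 23: max non turbo ratio
max_efficiency_ratio = 0x08  # 8: minimum ratio (max efficiency ratio)

num_pstates = max_1core_ratio - max_efficiency_ratio + 1
min_perf_pct = (max_efficiency_ratio * 100) // max_1core_ratio

# Share of the P-State range that lies in the turbo range:
non_turbo_states = max_nonturbo_ratio - max_efficiency_ratio + 1
turbo_pct = 100 - (non_turbo_states * 100) // num_pstates

print(num_pstates, min_perf_pct, turbo_pct)  # 26 24 39
```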
cpufreq sysfs for Intel P-State

Since this driver registers with cpufreq, cpufreq sysfs is also presented.
There are some important differences, which need to be considered.

scaling_cur_freq: This displays the real frequency which was used during
the last sample period instead of what is requested. Some other cpufreq
drivers, like acpi-cpufreq, display what is requested (some changes are on the
way to fix this for the acpi-cpufreq driver). The same is true for frequencies
displayed at /proc/cpuinfo.
scaling_governor: This displays the current active policy. Since each CPU has a
cpufreq sysfs, it is possible to set a scaling governor for each CPU. But this
is not possible with Intel P-States, as there is one common policy for all
CPUs. Here, the last requested policy will be applicable to all CPUs. It is
suggested that one use the cpupower utility to change the policy for all CPUs
at the same time.
scaling_setspeed: This attribute can never be used with Intel P-State.
scaling_max_freq/scaling_min_freq: This interface can be used similarly to
the max_perf_pct/min_perf_pct of Intel P-State sysfs. However since frequencies
are converted to nearest possible P-State, this is prone to rounding errors.
This method is not preferred to limit performance.
affected_cpus: Not used
related_cpus: Not used
For contemporary Intel processors, the frequency is controlled by the
processor itself and the P-State exposed to software is related to
performance levels. The idea that frequency can be set to a single
frequency is fictional for Intel Core processors. Even if the scaling
driver selects a single P-State, the actual frequency the processor
will run at is selected by the processor itself.

Tuning Intel P-State driver

When HWP mode is not used, debugfs files have also been added to allow the
tuning of the internal governor algorithm. These files are located at
/sys/kernel/debug/pstate_snb/. The algorithm uses a PID (Proportional
Integral Derivative) controller. The PID tunable parameters are:
      deadband
      d_gain_pct
@@ -63,3 +133,90 @@ the internal governor algorythm. These files are located at
      p_gain_pct
      sample_rate_ms
      setpoint
To adjust these parameters, some understanding of the driver implementation is
necessary. There are some tweaks described here, but be very careful. Adjusting
them requires expert-level understanding of the power and performance
relationship. These limits are only useful when the "powersave" policy is
active.

-To make the system more responsive to load changes, sample_rate_ms can
be adjusted (current default is 10ms).
-To make the system use higher performance, even if the load is lower, setpoint
can be adjusted to a lower number. This will also lead to faster ramp up time
to reach the maximum P-State.
If there are no derivative and integral coefficients, the next P-State will be
equal to:

    current P-State - ((setpoint - current cpu load) * p_gain_pct / 100)
For example, if the current PID parameters are (which are the defaults for core
processors like SandyBridge):

    deadband = 0
    d_gain_pct = 0
    i_gain_pct = 0
    p_gain_pct = 20
    sample_rate_ms = 10
    setpoint = 97

If the current P-State = 0x08 and the current load = 100, this will result in
the next P-State = 0x08 - ((97 - 100) * 0.2) = 8.6 (rounded to 9). Here the
P-State goes up by only 1. If during the next sample interval the load is
still 100, the P-State goes up by one again. This process continues as long as
the load is more than the setpoint, until the maximum P-State is reached.
For the same load at setpoint = 60, this will result in the next P-State
= 0x08 - ((60 - 100) * 0.2) = 16

So by changing the setpoint from 97 to 60, the next P-State increases from 9
to 16, which makes the processor execute at a higher P-State for the same CPU
load. If the load continues to be more than the setpoint during the next
sample intervals, the P-State will keep going up until the maximum P-State is
reached. But the ramp up time to reach the maximum P-State will be much faster
when the setpoint is 60 compared to 97.
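The proportional-only step above can be sketched as a small function. Deadband, integral and derivative terms are taken as zero per the defaults, and the rounding behaviour is an assumption for illustration, not the driver's exact fixed-point code.

```python
# Sketch of the proportional-only next-P-State calculation described above.
def next_pstate(current_pstate, load, setpoint=97, p_gain_pct=20):
    # error is positive when load is below the setpoint (scale down),
    # negative when load exceeds it (scale up)
    error = setpoint - load
    return round(current_pstate - error * p_gain_pct / 100)

print(next_pstate(0x08, 100))               # setpoint 97: 8.6 -> 9
print(next_pstate(0x08, 100, setpoint=60))  # setpoint 60: 16
```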
Debugging Intel P-State driver

Event tracing

To debug P-State transitions, the Linux event tracing interface can be used.
There are two specific events, which can be enabled (provided the kernel
configs related to event tracing are enabled).
# cd /sys/kernel/debug/tracing/
# echo 1 > events/power/pstate_sample/enable
# echo 1 > events/power/cpu_frequency/enable
# cat trace
gnome-terminal--4510 [001] ..s. 1177.680733: pstate_sample: core_busy=107
scaled=94 from=26 to=26 mperf=1143818 aperf=1230607 tsc=29838618
freq=2474476
cat-5235 [002] ..s. 1177.681723: cpu_frequency: state=2900000 cpu_id=2
Using ftrace
If function level tracing is required, the Linux ftrace interface can be used.
For example, if we want to check how often the function that sets a P-State is
called, we can set the ftrace filter to intel_pstate_set_pstate.
# cd /sys/kernel/debug/tracing/
# cat available_filter_functions | grep -i pstate
intel_pstate_set_pstate
intel_pstate_cpu_init
...
# echo intel_pstate_set_pstate > set_ftrace_filter
# echo function > current_tracer
# cat trace | head -15
# tracer: function
#
# entries-in-buffer/entries-written: 80/80 #P:4
#
# _-----=> irqs-off
# / _----=> need-resched
# | / _---=> hardirq/softirq
# || / _--=> preempt-depth
# ||| / delay
# TASK-PID CPU# |||| TIMESTAMP FUNCTION
# | | | |||| | |
Xorg-3129 [000] ..s. 2537.644844: intel_pstate_set_pstate <-intel_pstate_timer_func
gnome-terminal--4510 [002] ..s. 2537.649844: intel_pstate_set_pstate <-intel_pstate_timer_func
gnome-shell-3409 [001] ..s. 2537.650850: intel_pstate_set_pstate <-intel_pstate_timer_func
<idle>-0 [000] ..s. 2537.654843: intel_pstate_set_pstate <-intel_pstate_timer_func
@@ -159,8 +159,8 @@ to be strictly associated with a P-state.
2.2 cpuinfo_transition_latency:
-------------------------------

The cpuinfo_transition_latency field is CPUFREQ_ETERNAL. The PCC specification
does not include a field to expose this value currently.

2.3 cpuinfo_cur_freq:
---------------------
......
@@ -242,6 +242,23 @@ nodes to be present and contain the properties described below.
	Definition: Specifies the syscon node controlling the cpu core
		    power domains.
- dynamic-power-coefficient
Usage: optional
Value type: <prop-encoded-array>
Definition: A u32 value that represents the running time dynamic
power coefficient in units of mW/MHz/uVolt^2. The
coefficient can either be calculated from power
measurements or derived by analysis.
The dynamic power consumption of the CPU is
proportional to the square of the Voltage (V) and
the clock frequency (f). The coefficient is used to
calculate the dynamic power as below -
Pdyn = dynamic-power-coefficient * V^2 * f
where voltage is in uV, frequency is in MHz.
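Plugging numbers into the Pdyn formula above looks like this. The coefficient and operating point are made-up illustration values following the unit convention stated in the binding (voltage in uV, frequency in MHz), not figures from any real SoC.

```python
# Numeric sketch of Pdyn = dynamic-power-coefficient * V^2 * f.
coefficient = 2e-12      # mW/MHz/uV^2 (hypothetical value)
voltage_uv = 1_100_000   # 1.1 V expressed in uV
freq_mhz = 1000

pdyn_mw = coefficient * voltage_uv**2 * freq_mhz
print(pdyn_mw)  # ~2420 mW, i.e. about 2.4 W
```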
Example 1 (dual-cluster big.LITTLE system 32-bit):

cpus {
......
Binding for ST's CPUFreq driver
===============================
ST's CPUFreq driver attempts to read 'process' and 'version' attributes
from the SoC, then supplies the OPP framework with 'prop' and 'supported
hardware' information respectively. The framework is then able to read
the DT and operate in the usual way.
For more information about the expected DT format [See: ../opp/opp.txt].
Frequency Scaling only
----------------------
No vendor specific driver required for this.
Located in CPU's node:
- operating-points : [See: ../power/opp.txt]
Example [safe]
--------------
cpus {
cpu@0 {
/* kHz uV */
operating-points = <1500000 0
1200000 0
800000 0
500000 0>;
};
};
Dynamic Voltage and Frequency Scaling (DVFS)
--------------------------------------------
This requires the ST CPUFreq driver to supply 'process' and 'version' info.
Located in CPU's node:
- operating-points-v2 : [See ../power/opp.txt]
Example [unsafe]
----------------
cpus {
cpu@0 {
operating-points-v2 = <&cpu0_opp_table>;
};
};
cpu0_opp_table: opp_table {
compatible = "operating-points-v2";
/* ############################################################### */
/* # WARNING: Do not attempt to copy/replicate these nodes, # */
/* # they are only to be supplied by the bootloader !!! # */
/* ############################################################### */
opp0 {
/* Major Minor Substrate */
/* 2 all all */
opp-supported-hw = <0x00000004 0xffffffff 0xffffffff>;
opp-hz = /bits/ 64 <1500000000>;
clock-latency-ns = <10000000>;
opp-microvolt-pcode0 = <1200000>;
opp-microvolt-pcode1 = <1200000>;
opp-microvolt-pcode2 = <1200000>;
opp-microvolt-pcode3 = <1200000>;
opp-microvolt-pcode4 = <1170000>;
opp-microvolt-pcode5 = <1140000>;
opp-microvolt-pcode6 = <1100000>;
opp-microvolt-pcode7 = <1070000>;
};
opp1 {
/* Major Minor Substrate */
/* all all all */
opp-supported-hw = <0xffffffff 0xffffffff 0xffffffff>;
opp-hz = /bits/ 64 <1200000000>;
clock-latency-ns = <10000000>;
opp-microvolt-pcode0 = <1110000>;
opp-microvolt-pcode1 = <1150000>;
opp-microvolt-pcode2 = <1100000>;
opp-microvolt-pcode3 = <1080000>;
opp-microvolt-pcode4 = <1040000>;
opp-microvolt-pcode5 = <1020000>;
opp-microvolt-pcode6 = <980000>;
opp-microvolt-pcode7 = <930000>;
};
};
@@ -45,21 +45,10 @@ Devices supporting OPPs must set their "operating-points-v2" property with
phandle to an OPP table in their DT node. The OPP core will use this phandle to
find the operating points for the device.
Devices may want to choose OPP tables at runtime and so can provide a list of
phandles here. But only *one* of them should be chosen at runtime. This must be
accompanied by a corresponding "operating-points-names" property, to uniquely
identify the OPP tables.
If required, this can be extended for SoC vendor specific bindings. Such bindings
should be documented as Documentation/devicetree/bindings/power/<vendor>-opp.txt
and should have a compatible description like: "operating-points-v2-<vendor>".
Optional properties:
- operating-points-names: Names of OPP tables (required if multiple OPP
tables are present), to uniquely identify them. The same list must be present
for all the CPUs which are sharing clock/voltage rails and hence the OPP
tables.
* OPP Table Node

This describes the OPPs belonging to a device. This node can have the following
@@ -100,6 +89,14 @@ Optional properties:
  Entries for multiple regulators must be present in the same order as
  regulators are specified in device's DT node.
- opp-microvolt-<name>: Named opp-microvolt property. This works exactly like
  the above opp-microvolt property, but allows multiple voltage ranges to be
provided for the same OPP. At runtime, the platform can pick a <name> and
matching opp-microvolt-<name> property will be enabled for all OPPs. If the
platform doesn't pick a specific <name> or the <name> doesn't match with any
opp-microvolt-<name> properties, then opp-microvolt property shall be used, if
present.
- opp-microamp: The maximum current drawn by the device in microamperes
  considering system specific parameters (such as transients, process, aging,
  maximum operating temperature range etc.) as necessary. This may be used to
@@ -112,6 +109,9 @@ Optional properties:
  for few regulators, then this should be marked as zero for them. If it isn't
  required for any regulator, then this property need not be present.
- opp-microamp-<name>: Named opp-microamp property. Similar to
opp-microvolt-<name> property, but for microamp instead.
- clock-latency-ns: Specifies the maximum possible transition latency (in
  nanoseconds) for switching to this OPP from any other OPP.
@@ -123,6 +123,26 @@ Optional properties:
- opp-suspend: Marks the OPP to be used during device suspend. Only one OPP in
  the table should have this.
- opp-supported-hw: This enables us to select only a subset of OPPs from the
larger OPP table, based on what version of the hardware we are running on. We
still can't have multiple nodes with the same opp-hz value in OPP table.
It's a user-defined array containing a hierarchy of hardware version numbers
supported by the OPP. For example: for a platform with a hierarchy of three
levels of versions (A, B and C), this field should be like <X Y Z>, where X
corresponds to version hierarchy A, Y corresponds to version hierarchy B and Z
corresponds to version hierarchy C.

Each level of hierarchy is represented by a 32 bit value, so there can be
only 32 different supported versions per hierarchy, i.e. 1 bit per version. A
value of 0xFFFFFFFF will enable the OPP for all versions for that hierarchy
level, and a value of 0x00000000 will disable the OPP completely, so we
never want that to happen.

If 32 values aren't sufficient for a version hierarchy, then that version
hierarchy can be contained in multiple 32 bit values, i.e. <X Y Z1 Z2> in the
above example, where Z1 & Z2 refer to the version hierarchy Z.
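The per-level bitmask matching described above can be sketched as follows. The helper is purely illustrative (the kernel's actual matching lives in the OPP core); an OPP is usable when, for every hierarchy level, the running hardware's version bit is set in the OPP's mask.

```python
# Sketch of opp-supported-hw matching: one 32-bit mask per hierarchy level.
def opp_supported(opp_masks, platform_versions):
    # opp_masks: values from the opp-supported-hw property
    # platform_versions: the running hardware's 1-bit-per-version masks
    return all(mask & ver for mask, ver in zip(opp_masks, platform_versions))

# OPP limited to major version 2 (bit 2 set), any minor, any substrate:
masks = [0x00000004, 0xFFFFFFFF, 0xFFFFFFFF]
print(opp_supported(masks, [1 << 2, 1 << 0, 1 << 5]))  # True
print(opp_supported(masks, [1 << 3, 1 << 0, 1 << 5]))  # False
```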
- status: Marks the node enabled/disabled.

Example 1: Single cluster Dual-core ARM cortex A9, switch DVFS states together.

@@ -157,20 +177,20 @@ Example 1: Single cluster Dual-core ARM cortex A9, switch DVFS states together.
	compatible = "operating-points-v2";
	opp-shared;
	opp@1000000000 {
		opp-hz = /bits/ 64 <1000000000>;
		opp-microvolt = <970000 975000 985000>;
		opp-microamp = <70000>;
		clock-latency-ns = <300000>;
		opp-suspend;
	};
	opp@1100000000 {
		opp-hz = /bits/ 64 <1100000000>;
		opp-microvolt = <980000 1000000 1010000>;
		opp-microamp = <80000>;
		clock-latency-ns = <310000>;
	};
	opp@1200000000 {
		opp-hz = /bits/ 64 <1200000000>;
		opp-microvolt = <1025000>;
		clock-latency-ns = <290000>;
...@@ -236,20 +256,20 @@ independently. ...@@ -236,20 +256,20 @@ independently.
* independently. * independently.
*/ */
opp00 { opp@1000000000 {
opp-hz = /bits/ 64 <1000000000>; opp-hz = /bits/ 64 <1000000000>;
opp-microvolt = <970000 975000 985000>; opp-microvolt = <970000 975000 985000>;
opp-microamp = <70000>; opp-microamp = <70000>;
clock-latency-ns = <300000>; clock-latency-ns = <300000>;
opp-suspend; opp-suspend;
}; };
opp01 { opp@1100000000 {
opp-hz = /bits/ 64 <1100000000>; opp-hz = /bits/ 64 <1100000000>;
opp-microvolt = <980000 1000000 1010000>; opp-microvolt = <980000 1000000 1010000>;
opp-microamp = <80000>; opp-microamp = <80000>;
clock-latency-ns = <310000>; clock-latency-ns = <310000>;
}; };
opp02 { opp@1200000000 {
opp-hz = /bits/ 64 <1200000000>; opp-hz = /bits/ 64 <1200000000>;
opp-microvolt = <1025000>; opp-microvolt = <1025000>;
opp-microamp = <90000>; opp-microamp = <90000>;
...@@ -312,20 +332,20 @@ DVFS state together. ...@@ -312,20 +332,20 @@ DVFS state together.
compatible = "operating-points-v2"; compatible = "operating-points-v2";
opp-shared; opp-shared;
opp00 { opp@1000000000 {
opp-hz = /bits/ 64 <1000000000>; opp-hz = /bits/ 64 <1000000000>;
opp-microvolt = <970000 975000 985000>; opp-microvolt = <970000 975000 985000>;
opp-microamp = <70000>; opp-microamp = <70000>;
clock-latency-ns = <300000>; clock-latency-ns = <300000>;
opp-suspend; opp-suspend;
}; };
opp01 { opp@1100000000 {
opp-hz = /bits/ 64 <1100000000>; opp-hz = /bits/ 64 <1100000000>;
opp-microvolt = <980000 1000000 1010000>; opp-microvolt = <980000 1000000 1010000>;
opp-microamp = <80000>; opp-microamp = <80000>;
clock-latency-ns = <310000>; clock-latency-ns = <310000>;
}; };
opp02 { opp@1200000000 {
opp-hz = /bits/ 64 <1200000000>; opp-hz = /bits/ 64 <1200000000>;
opp-microvolt = <1025000>; opp-microvolt = <1025000>;
opp-microamp = <90000>; opp-microamp = <90000>;
...@@ -338,20 +358,20 @@ DVFS state together. ...@@ -338,20 +358,20 @@ DVFS state together.
compatible = "operating-points-v2"; compatible = "operating-points-v2";
opp-shared; opp-shared;
opp10 { opp@1300000000 {
opp-hz = /bits/ 64 <1300000000>; opp-hz = /bits/ 64 <1300000000>;
opp-microvolt = <1045000 1050000 1055000>; opp-microvolt = <1045000 1050000 1055000>;
opp-microamp = <95000>; opp-microamp = <95000>;
clock-latency-ns = <400000>; clock-latency-ns = <400000>;
opp-suspend; opp-suspend;
}; };
opp11 { opp@1400000000 {
opp-hz = /bits/ 64 <1400000000>; opp-hz = /bits/ 64 <1400000000>;
opp-microvolt = <1075000>; opp-microvolt = <1075000>;
opp-microamp = <100000>; opp-microamp = <100000>;
clock-latency-ns = <400000>; clock-latency-ns = <400000>;
}; };
opp12 { opp@1500000000 {
opp-hz = /bits/ 64 <1500000000>; opp-hz = /bits/ 64 <1500000000>;
opp-microvolt = <1010000 1100000 1110000>; opp-microvolt = <1010000 1100000 1110000>;
opp-microamp = <95000>; opp-microamp = <95000>;
...@@ -378,7 +398,7 @@ Example 4: Handling multiple regulators ...@@ -378,7 +398,7 @@ Example 4: Handling multiple regulators
compatible = "operating-points-v2"; compatible = "operating-points-v2";
opp-shared; opp-shared;
opp00 { opp@1000000000 {
opp-hz = /bits/ 64 <1000000000>; opp-hz = /bits/ 64 <1000000000>;
opp-microvolt = <970000>, /* Supply 0 */ opp-microvolt = <970000>, /* Supply 0 */
<960000>, /* Supply 1 */ <960000>, /* Supply 1 */
...@@ -391,7 +411,7 @@ Example 4: Handling multiple regulators ...@@ -391,7 +411,7 @@ Example 4: Handling multiple regulators
/* OR */ /* OR */
opp00 { opp@1000000000 {
opp-hz = /bits/ 64 <1000000000>; opp-hz = /bits/ 64 <1000000000>;
opp-microvolt = <970000 975000 985000>, /* Supply 0 */ opp-microvolt = <970000 975000 985000>, /* Supply 0 */
<960000 965000 975000>, /* Supply 1 */ <960000 965000 975000>, /* Supply 1 */
...@@ -404,7 +424,7 @@ Example 4: Handling multiple regulators ...@@ -404,7 +424,7 @@ Example 4: Handling multiple regulators
/* OR */ /* OR */
opp00 { opp@1000000000 {
opp-hz = /bits/ 64 <1000000000>; opp-hz = /bits/ 64 <1000000000>;
opp-microvolt = <970000 975000 985000>, /* Supply 0 */ opp-microvolt = <970000 975000 985000>, /* Supply 0 */
<960000 965000 975000>, /* Supply 1 */ <960000 965000 975000>, /* Supply 1 */
...@@ -417,7 +437,8 @@ Example 4: Handling multiple regulators ...@@ -417,7 +437,8 @@ Example 4: Handling multiple regulators
}; };
}; };
Example 5: Multiple OPP tables Example 5: opp-supported-hw
(example: three level hierarchy of versions: cuts, substrate and process)
/ { / {
cpus { cpus {
...@@ -426,40 +447,73 @@ Example 5: Multiple OPP tables ...@@ -426,40 +447,73 @@ Example 5: Multiple OPP tables
... ...
cpu-supply = <&cpu_supply> cpu-supply = <&cpu_supply>
operating-points-v2 = <&cpu0_opp_table_slow>, <&cpu0_opp_table_fast>; operating-points-v2 = <&cpu0_opp_table_slow>;
operating-points-names = "slow", "fast";
}; };
}; };
cpu0_opp_table_slow: opp_table_slow { opp_table {
compatible = "operating-points-v2"; compatible = "operating-points-v2";
status = "okay"; status = "okay";
opp-shared; opp-shared;
opp00 { opp@600000000 {
/*
* Supports all substrate and process versions for cut
* mask 0xF, i.e. only the first four cuts.
*/
opp-supported-hw = <0xF 0xFFFFFFFF 0xFFFFFFFF>;
opp-hz = /bits/ 64 <600000000>; opp-hz = /bits/ 64 <600000000>;
opp-microvolt = <900000 915000 925000>;
... ...
}; };
opp01 { opp@800000000 {
/*
* Supports:
* - cuts: only one, 6th cut (represented by 6th bit).
* - substrate: supports 16 different substrate versions
* - process: supports 9 different process versions
*/
opp-supported-hw = <0x20 0xff0000ff 0x0000f4f0>;
opp-hz = /bits/ 64 <800000000>; opp-hz = /bits/ 64 <800000000>;
opp-microvolt = <900000 915000 925000>;
... ...
}; };
}; };
};
Example 6: opp-microvolt-<name>, opp-microamp-<name>:
(example: device with two possible microvolt ranges: slow and fast)
cpu0_opp_table_fast: opp_table_fast { / {
cpus {
cpu@0 {
compatible = "arm,cortex-a7";
...
operating-points-v2 = <&cpu0_opp_table>;
};
};
cpu0_opp_table: opp_table0 {
compatible = "operating-points-v2"; compatible = "operating-points-v2";
status = "okay";
opp-shared; opp-shared;
opp10 { opp@1000000000 {
opp-hz = /bits/ 64 <1000000000>; opp-hz = /bits/ 64 <1000000000>;
... opp-microvolt-slow = <900000 915000 925000>;
opp-microvolt-fast = <970000 975000 985000>;
opp-microamp-slow = <70000>;
opp-microamp-fast = <71000>;
}; };
opp11 { opp@1200000000 {
opp-hz = /bits/ 64 <1100000000>; opp-hz = /bits/ 64 <1200000000>;
... opp-microvolt-slow = <900000 915000 925000>, /* Supply vcc0 */
<910000 925000 935000>; /* Supply vcc1 */
opp-microvolt-fast = <970000 975000 985000>, /* Supply vcc0 */
<960000 965000 975000>; /* Supply vcc1 */
opp-microamp = <70000>; /* Will be used for both slow/fast */
}; };
}; };
}; };
...@@ -28,6 +28,23 @@ radiotap headers and used to control injection: ...@@ -28,6 +28,23 @@ radiotap headers and used to control injection:
IEEE80211_RADIOTAP_F_TX_NOACK: frame should be sent without waiting for IEEE80211_RADIOTAP_F_TX_NOACK: frame should be sent without waiting for
an ACK even if it is a unicast frame an ACK even if it is a unicast frame
* IEEE80211_RADIOTAP_RATE
legacy rate for the transmission (only for devices without own rate control)
* IEEE80211_RADIOTAP_MCS
HT rate for the transmission (only for devices without own rate control).
The following flags are also parsed:
IEEE80211_TX_RC_SHORT_GI: use short guard interval
IEEE80211_TX_RC_40_MHZ_WIDTH: send in HT40 mode
* IEEE80211_RADIOTAP_DATA_RETRIES
number of retries when either IEEE80211_RADIOTAP_RATE or
IEEE80211_RADIOTAP_MCS was used
The injection code can also skip all other currently defined radiotap fields The injection code can also skip all other currently defined radiotap fields
facilitating replay of captured radiotap headers directly. facilitating replay of captured radiotap headers directly.
......
...@@ -999,7 +999,7 @@ from its probe routine to make runtime PM work for the device. ...@@ -999,7 +999,7 @@ from its probe routine to make runtime PM work for the device.
It is important to remember that the driver's runtime_suspend() callback It is important to remember that the driver's runtime_suspend() callback
may be executed right after the usage counter has been decremented, because may be executed right after the usage counter has been decremented, because
user space may already have cuased the pm_runtime_allow() helper function user space may already have caused the pm_runtime_allow() helper function
unblocking the runtime PM of the device to run via sysfs, so the driver must unblocking the runtime PM of the device to run via sysfs, so the driver must
be prepared to cope with that. be prepared to cope with that.
......
...@@ -371,6 +371,12 @@ drivers/base/power/runtime.c and include/linux/pm_runtime.h: ...@@ -371,6 +371,12 @@ drivers/base/power/runtime.c and include/linux/pm_runtime.h:
- increment the device's usage counter, run pm_runtime_resume(dev) and - increment the device's usage counter, run pm_runtime_resume(dev) and
return its result return its result
int pm_runtime_get_if_in_use(struct device *dev);
- return -EINVAL if 'power.disable_depth' is nonzero; otherwise, if the
runtime PM status is RPM_ACTIVE and the runtime PM usage counter is
nonzero, increment the counter and return 1; otherwise return 0 without
changing the counter
void pm_runtime_put_noidle(struct device *dev); void pm_runtime_put_noidle(struct device *dev);
- decrement the device's usage counter - decrement the device's usage counter
......
...@@ -83,6 +83,8 @@ rfkill drivers that control devices that can be hard-blocked unless they also ...@@ -83,6 +83,8 @@ rfkill drivers that control devices that can be hard-blocked unless they also
assign the poll_hw_block() callback (then the rfkill core will poll the assign the poll_hw_block() callback (then the rfkill core will poll the
device). Don't do this unless you cannot get the event in any other way. device). Don't do this unless you cannot get the event in any other way.
RFKill provides per-switch LED triggers, which can be used to drive LEDs
according to the switch state (LED_FULL when blocked, LED_OFF otherwise).
5. Userspace support 5. Userspace support
......
...@@ -8466,6 +8466,17 @@ F: fs/timerfd.c ...@@ -8466,6 +8466,17 @@ F: fs/timerfd.c
F: include/linux/timer* F: include/linux/timer*
F: kernel/time/*timer* F: kernel/time/*timer*
POWER MANAGEMENT CORE
M: "Rafael J. Wysocki" <rjw@rjwysocki.net>
L: linux-pm@vger.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
S: Supported
F: drivers/base/power/
F: include/linux/pm.h
F: include/linux/pm_*
F: include/linux/powercap.h
F: drivers/powercap/
POWER SUPPLY CLASS/SUBSYSTEM and DRIVERS POWER SUPPLY CLASS/SUBSYSTEM and DRIVERS
M: Sebastian Reichel <sre@kernel.org> M: Sebastian Reichel <sre@kernel.org>
M: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com> M: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com>
......
...@@ -64,73 +64,73 @@ ...@@ -64,73 +64,73 @@
compatible = "operating-points-v2"; compatible = "operating-points-v2";
opp-shared; opp-shared;
opp00 { opp@200000000 {
opp-hz = /bits/ 64 <200000000>; opp-hz = /bits/ 64 <200000000>;
opp-microvolt = <900000>; opp-microvolt = <900000>;
clock-latency-ns = <200000>; clock-latency-ns = <200000>;
}; };
opp01 { opp@300000000 {
opp-hz = /bits/ 64 <300000000>; opp-hz = /bits/ 64 <300000000>;
opp-microvolt = <900000>; opp-microvolt = <900000>;
clock-latency-ns = <200000>; clock-latency-ns = <200000>;
}; };
opp02 { opp@400000000 {
opp-hz = /bits/ 64 <400000000>; opp-hz = /bits/ 64 <400000000>;
opp-microvolt = <925000>; opp-microvolt = <925000>;
clock-latency-ns = <200000>; clock-latency-ns = <200000>;
}; };
opp03 { opp@500000000 {
opp-hz = /bits/ 64 <500000000>; opp-hz = /bits/ 64 <500000000>;
opp-microvolt = <950000>; opp-microvolt = <950000>;
clock-latency-ns = <200000>; clock-latency-ns = <200000>;
}; };
opp04 { opp@600000000 {
opp-hz = /bits/ 64 <600000000>; opp-hz = /bits/ 64 <600000000>;
opp-microvolt = <975000>; opp-microvolt = <975000>;
clock-latency-ns = <200000>; clock-latency-ns = <200000>;
}; };
opp05 { opp@700000000 {
opp-hz = /bits/ 64 <700000000>; opp-hz = /bits/ 64 <700000000>;
opp-microvolt = <987500>; opp-microvolt = <987500>;
clock-latency-ns = <200000>; clock-latency-ns = <200000>;
}; };
opp06 { opp@800000000 {
opp-hz = /bits/ 64 <800000000>; opp-hz = /bits/ 64 <800000000>;
opp-microvolt = <1000000>; opp-microvolt = <1000000>;
clock-latency-ns = <200000>; clock-latency-ns = <200000>;
opp-suspend; opp-suspend;
}; };
opp07 { opp@900000000 {
opp-hz = /bits/ 64 <900000000>; opp-hz = /bits/ 64 <900000000>;
opp-microvolt = <1037500>; opp-microvolt = <1037500>;
clock-latency-ns = <200000>; clock-latency-ns = <200000>;
}; };
opp08 { opp@1000000000 {
opp-hz = /bits/ 64 <1000000000>; opp-hz = /bits/ 64 <1000000000>;
opp-microvolt = <1087500>; opp-microvolt = <1087500>;
clock-latency-ns = <200000>; clock-latency-ns = <200000>;
}; };
opp09 { opp@1100000000 {
opp-hz = /bits/ 64 <1100000000>; opp-hz = /bits/ 64 <1100000000>;
opp-microvolt = <1137500>; opp-microvolt = <1137500>;
clock-latency-ns = <200000>; clock-latency-ns = <200000>;
}; };
opp10 { opp@1200000000 {
opp-hz = /bits/ 64 <1200000000>; opp-hz = /bits/ 64 <1200000000>;
opp-microvolt = <1187500>; opp-microvolt = <1187500>;
clock-latency-ns = <200000>; clock-latency-ns = <200000>;
}; };
opp11 { opp@1300000000 {
opp-hz = /bits/ 64 <1300000000>; opp-hz = /bits/ 64 <1300000000>;
opp-microvolt = <1250000>; opp-microvolt = <1250000>;
clock-latency-ns = <200000>; clock-latency-ns = <200000>;
}; };
opp12 { opp@1400000000 {
opp-hz = /bits/ 64 <1400000000>; opp-hz = /bits/ 64 <1400000000>;
opp-microvolt = <1287500>; opp-microvolt = <1287500>;
clock-latency-ns = <200000>; clock-latency-ns = <200000>;
}; };
opp13 { opp@1500000000 {
opp-hz = /bits/ 64 <1500000000>; opp-hz = /bits/ 64 <1500000000>;
opp-microvolt = <1350000>; opp-microvolt = <1350000>;
clock-latency-ns = <200000>; clock-latency-ns = <200000>;
......
...@@ -17,23 +17,25 @@ ...@@ -17,23 +17,25 @@
* *
*/ */
#include <linux/property.h>
#include <linux/gpio/machine.h> #include <linux/gpio/machine.h>
#include <linux/platform_device.h> #include <linux/platform_device.h>
#include <linux/rfkill-gpio.h>
#include "board.h" #include "board.h"
static struct rfkill_gpio_platform_data wifi_rfkill_platform_data = { static struct property_entry __initdata wifi_rfkill_prop[] = {
.name = "wifi_rfkill", PROPERTY_ENTRY_STRING("name", "wifi_rfkill"),
.type = RFKILL_TYPE_WLAN, PROPERTY_ENTRY_STRING("type", "wlan"),
{ },
};
static struct property_set __initdata wifi_rfkill_pset = {
.properties = wifi_rfkill_prop,
}; };
static struct platform_device wifi_rfkill_device = { static struct platform_device wifi_rfkill_device = {
.name = "rfkill_gpio", .name = "rfkill_gpio",
.id = -1, .id = -1,
.dev = {
.platform_data = &wifi_rfkill_platform_data,
},
}; };
static struct gpiod_lookup_table wifi_gpio_lookup = { static struct gpiod_lookup_table wifi_gpio_lookup = {
...@@ -47,6 +49,7 @@ static struct gpiod_lookup_table wifi_gpio_lookup = { ...@@ -47,6 +49,7 @@ static struct gpiod_lookup_table wifi_gpio_lookup = {
void __init tegra_paz00_wifikill_init(void) void __init tegra_paz00_wifikill_init(void)
{ {
platform_device_add_properties(&wifi_rfkill_device, &wifi_rfkill_pset);
gpiod_add_lookup_table(&wifi_gpio_lookup); gpiod_add_lookup_table(&wifi_gpio_lookup);
platform_device_register(&wifi_rfkill_device); platform_device_register(&wifi_rfkill_device);
} }
...@@ -97,13 +97,11 @@ ftrace_modify_code(unsigned long ip, unsigned char *old_code, ...@@ -97,13 +97,11 @@ ftrace_modify_code(unsigned long ip, unsigned char *old_code,
unsigned char replaced[MCOUNT_INSN_SIZE]; unsigned char replaced[MCOUNT_INSN_SIZE];
/* /*
* Note: Due to modules and __init, code can * Note:
* disappear and change, we need to protect against faulting * We are paranoid about modifying text, as if a bug was to happen, it
* as well as code changing. We do this by using the * could cause us to read or write to someplace that could cause harm.
* probe_kernel_* functions. * Carefully read and modify the code with probe_kernel_*(), and make
* * sure what we read is what we expected it to be before modifying it.
* No real locking needed, this code is run through
* kstop_machine, or before SMP starts.
*/ */
if (!do_check) if (!do_check)
......
...@@ -54,12 +54,11 @@ static int ftrace_modify_code(unsigned long pc, unsigned char *old_code, ...@@ -54,12 +54,11 @@ static int ftrace_modify_code(unsigned long pc, unsigned char *old_code,
unsigned char replaced[MCOUNT_INSN_SIZE]; unsigned char replaced[MCOUNT_INSN_SIZE];
/* /*
* Note: Due to modules and __init, code can * Note:
* disappear and change, we need to protect against faulting * We are paranoid about modifying text, as if a bug was to happen, it
* as well as code changing. * could cause us to read or write to someplace that could cause harm.
* * Carefully read and modify the code with probe_kernel_*(), and make
* No real locking needed, this code is run through * sure what we read is what we expected it to be before modifying it.
* kstop_machine.
*/ */
/* read the text we want to modify */ /* read the text we want to modify */
......
...@@ -212,13 +212,11 @@ static int ftrace_modify_code(unsigned long ip, unsigned char *old_code, ...@@ -212,13 +212,11 @@ static int ftrace_modify_code(unsigned long ip, unsigned char *old_code,
unsigned char replaced[MCOUNT_INSN_SIZE]; unsigned char replaced[MCOUNT_INSN_SIZE];
/* /*
* Note: Due to modules and __init, code can * Note:
* disappear and change, we need to protect against faulting * We are paranoid about modifying text, as if a bug was to happen, it
* as well as code changing. We do this by using the * could cause us to read or write to someplace that could cause harm.
* probe_kernel_* functions. * Carefully read and modify the code with probe_kernel_*(), and make
* * sure what we read is what we expected it to be before modifying it.
* No real locking needed, this code is run through
* kstop_machine, or before SMP starts.
*/ */
/* read the text we want to modify */ /* read the text we want to modify */
......
...@@ -534,9 +534,10 @@ config X86_INTEL_QUARK ...@@ -534,9 +534,10 @@ config X86_INTEL_QUARK
config X86_INTEL_LPSS config X86_INTEL_LPSS
bool "Intel Low Power Subsystem Support" bool "Intel Low Power Subsystem Support"
depends on ACPI depends on X86 && ACPI
select COMMON_CLK select COMMON_CLK
select PINCTRL select PINCTRL
select IOSF_MBI
---help--- ---help---
Select to build support for Intel Low Power Subsystem such as Select to build support for Intel Low Power Subsystem such as
found on Intel Lynxpoint PCH. Selecting this option enables found on Intel Lynxpoint PCH. Selecting this option enables
......
/* /*
* iosf_mbi.h: Intel OnChip System Fabric MailBox access support * Intel OnChip System Fabric MailBox access support
*/ */
#ifndef IOSF_MBI_SYMS_H #ifndef IOSF_MBI_SYMS_H
...@@ -16,6 +16,18 @@ ...@@ -16,6 +16,18 @@
#define MBI_MASK_LO 0x000000FF #define MBI_MASK_LO 0x000000FF
#define MBI_ENABLE 0xF0 #define MBI_ENABLE 0xF0
/* IOSF SB read/write opcodes */
#define MBI_MMIO_READ 0x00
#define MBI_MMIO_WRITE 0x01
#define MBI_CFG_READ 0x04
#define MBI_CFG_WRITE 0x05
#define MBI_CR_READ 0x06
#define MBI_CR_WRITE 0x07
#define MBI_REG_READ 0x10
#define MBI_REG_WRITE 0x11
#define MBI_ESRAM_READ 0x12
#define MBI_ESRAM_WRITE 0x13
/* Baytrail available units */ /* Baytrail available units */
#define BT_MBI_UNIT_AUNIT 0x00 #define BT_MBI_UNIT_AUNIT 0x00
#define BT_MBI_UNIT_SMC 0x01 #define BT_MBI_UNIT_SMC 0x01
...@@ -28,50 +40,13 @@ ...@@ -28,50 +40,13 @@
#define BT_MBI_UNIT_SATA 0xA3 #define BT_MBI_UNIT_SATA 0xA3
#define BT_MBI_UNIT_PCIE 0xA6 #define BT_MBI_UNIT_PCIE 0xA6
/* Baytrail read/write opcodes */
#define BT_MBI_AUNIT_READ 0x10
#define BT_MBI_AUNIT_WRITE 0x11
#define BT_MBI_SMC_READ 0x10
#define BT_MBI_SMC_WRITE 0x11
#define BT_MBI_CPU_READ 0x10
#define BT_MBI_CPU_WRITE 0x11
#define BT_MBI_BUNIT_READ 0x10
#define BT_MBI_BUNIT_WRITE 0x11
#define BT_MBI_PMC_READ 0x06
#define BT_MBI_PMC_WRITE 0x07
#define BT_MBI_GFX_READ 0x00
#define BT_MBI_GFX_WRITE 0x01
#define BT_MBI_SMIO_READ 0x06
#define BT_MBI_SMIO_WRITE 0x07
#define BT_MBI_USB_READ 0x06
#define BT_MBI_USB_WRITE 0x07
#define BT_MBI_SATA_READ 0x00
#define BT_MBI_SATA_WRITE 0x01
#define BT_MBI_PCIE_READ 0x00
#define BT_MBI_PCIE_WRITE 0x01
/* Quark available units */ /* Quark available units */
#define QRK_MBI_UNIT_HBA 0x00 #define QRK_MBI_UNIT_HBA 0x00
#define QRK_MBI_UNIT_HB 0x03 #define QRK_MBI_UNIT_HB 0x03
#define QRK_MBI_UNIT_RMU 0x04 #define QRK_MBI_UNIT_RMU 0x04
#define QRK_MBI_UNIT_MM 0x05 #define QRK_MBI_UNIT_MM 0x05
#define QRK_MBI_UNIT_MMESRAM 0x05
#define QRK_MBI_UNIT_SOC 0x31 #define QRK_MBI_UNIT_SOC 0x31
/* Quark read/write opcodes */
#define QRK_MBI_HBA_READ 0x10
#define QRK_MBI_HBA_WRITE 0x11
#define QRK_MBI_HB_READ 0x10
#define QRK_MBI_HB_WRITE 0x11
#define QRK_MBI_RMU_READ 0x10
#define QRK_MBI_RMU_WRITE 0x11
#define QRK_MBI_MM_READ 0x10
#define QRK_MBI_MM_WRITE 0x11
#define QRK_MBI_MMESRAM_READ 0x12
#define QRK_MBI_MMESRAM_WRITE 0x13
#define QRK_MBI_SOC_READ 0x06
#define QRK_MBI_SOC_WRITE 0x07
#if IS_ENABLED(CONFIG_IOSF_MBI) #if IS_ENABLED(CONFIG_IOSF_MBI)
bool iosf_mbi_available(void); bool iosf_mbi_available(void);
......
...@@ -105,14 +105,14 @@ ftrace_modify_code_direct(unsigned long ip, unsigned const char *old_code, ...@@ -105,14 +105,14 @@ ftrace_modify_code_direct(unsigned long ip, unsigned const char *old_code,
{ {
unsigned char replaced[MCOUNT_INSN_SIZE]; unsigned char replaced[MCOUNT_INSN_SIZE];
ftrace_expected = old_code;
/* /*
* Note: Due to modules and __init, code can * Note:
* disappear and change, we need to protect against faulting * We are paranoid about modifying text, as if a bug was to happen, it
* as well as code changing. We do this by using the * could cause us to read or write to someplace that could cause harm.
* probe_kernel_* functions. * Carefully read and modify the code with probe_kernel_*(), and make
* * sure what we read is what we expected it to be before modifying it.
* No real locking needed, this code is run through
* kstop_machine, or before SMP starts.
*/ */
/* read the text we want to modify */ /* read the text we want to modify */
...@@ -154,6 +154,8 @@ int ftrace_make_nop(struct module *mod, ...@@ -154,6 +154,8 @@ int ftrace_make_nop(struct module *mod,
if (addr == MCOUNT_ADDR) if (addr == MCOUNT_ADDR)
return ftrace_modify_code_direct(rec->ip, old, new); return ftrace_modify_code_direct(rec->ip, old, new);
ftrace_expected = NULL;
/* Normal cases use add_brk_on_nop */ /* Normal cases use add_brk_on_nop */
WARN_ONCE(1, "invalid use of ftrace_make_nop"); WARN_ONCE(1, "invalid use of ftrace_make_nop");
return -EINVAL; return -EINVAL;
...@@ -220,6 +222,7 @@ int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr, ...@@ -220,6 +222,7 @@ int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
unsigned long addr) unsigned long addr)
{ {
WARN_ON(1); WARN_ON(1);
ftrace_expected = NULL;
return -EINVAL; return -EINVAL;
} }
...@@ -314,6 +317,8 @@ static int add_break(unsigned long ip, const char *old) ...@@ -314,6 +317,8 @@ static int add_break(unsigned long ip, const char *old)
if (probe_kernel_read(replaced, (void *)ip, MCOUNT_INSN_SIZE)) if (probe_kernel_read(replaced, (void *)ip, MCOUNT_INSN_SIZE))
return -EFAULT; return -EFAULT;
ftrace_expected = old;
/* Make sure it is what we expect it to be */ /* Make sure it is what we expect it to be */
if (memcmp(replaced, old, MCOUNT_INSN_SIZE) != 0) if (memcmp(replaced, old, MCOUNT_INSN_SIZE) != 0)
return -EINVAL; return -EINVAL;
...@@ -413,6 +418,8 @@ static int remove_breakpoint(struct dyn_ftrace *rec) ...@@ -413,6 +418,8 @@ static int remove_breakpoint(struct dyn_ftrace *rec)
ftrace_addr = ftrace_get_addr_curr(rec); ftrace_addr = ftrace_get_addr_curr(rec);
nop = ftrace_call_replace(ip, ftrace_addr); nop = ftrace_call_replace(ip, ftrace_addr);
ftrace_expected = nop;
if (memcmp(&ins[1], &nop[1], MCOUNT_INSN_SIZE - 1) != 0) if (memcmp(&ins[1], &nop[1], MCOUNT_INSN_SIZE - 1) != 0)
return -EINVAL; return -EINVAL;
} }
......
...@@ -25,8 +25,6 @@ ...@@ -25,8 +25,6 @@
#include <asm/cpu_device_id.h> #include <asm/cpu_device_id.h>
#include <asm/iosf_mbi.h> #include <asm/iosf_mbi.h>
/* Side band Interface port */
#define PUNIT_PORT 0x04
/* Power gate status reg */ /* Power gate status reg */
#define PWRGT_STATUS 0x61 #define PWRGT_STATUS 0x61
/* Subsystem config/status Video processor */ /* Subsystem config/status Video processor */
...@@ -85,9 +83,8 @@ static int punit_dev_state_show(struct seq_file *seq_file, void *unused) ...@@ -85,9 +83,8 @@ static int punit_dev_state_show(struct seq_file *seq_file, void *unused)
seq_puts(seq_file, "\n\nPUNIT NORTH COMPLEX DEVICES :\n"); seq_puts(seq_file, "\n\nPUNIT NORTH COMPLEX DEVICES :\n");
while (punit_devp->name) { while (punit_devp->name) {
status = iosf_mbi_read(PUNIT_PORT, BT_MBI_PMC_READ, status = iosf_mbi_read(BT_MBI_UNIT_PMC, MBI_REG_READ,
punit_devp->reg, punit_devp->reg, &punit_pwr_status);
&punit_pwr_status);
if (status) { if (status) {
seq_printf(seq_file, "%9s : Read Failed\n", seq_printf(seq_file, "%9s : Read Failed\n",
punit_devp->name); punit_devp->name);
......
...@@ -111,23 +111,19 @@ static int imr_read(struct imr_device *idev, u32 imr_id, struct imr_regs *imr) ...@@ -111,23 +111,19 @@ static int imr_read(struct imr_device *idev, u32 imr_id, struct imr_regs *imr)
u32 reg = imr_id * IMR_NUM_REGS + idev->reg_base; u32 reg = imr_id * IMR_NUM_REGS + idev->reg_base;
int ret; int ret;
ret = iosf_mbi_read(QRK_MBI_UNIT_MM, QRK_MBI_MM_READ, ret = iosf_mbi_read(QRK_MBI_UNIT_MM, MBI_REG_READ, reg++, &imr->addr_lo);
reg++, &imr->addr_lo);
if (ret) if (ret)
return ret; return ret;
ret = iosf_mbi_read(QRK_MBI_UNIT_MM, QRK_MBI_MM_READ, ret = iosf_mbi_read(QRK_MBI_UNIT_MM, MBI_REG_READ, reg++, &imr->addr_hi);
reg++, &imr->addr_hi);
if (ret) if (ret)
return ret; return ret;
ret = iosf_mbi_read(QRK_MBI_UNIT_MM, QRK_MBI_MM_READ, ret = iosf_mbi_read(QRK_MBI_UNIT_MM, MBI_REG_READ, reg++, &imr->rmask);
reg++, &imr->rmask);
if (ret) if (ret)
return ret; return ret;
return iosf_mbi_read(QRK_MBI_UNIT_MM, QRK_MBI_MM_READ, return iosf_mbi_read(QRK_MBI_UNIT_MM, MBI_REG_READ, reg++, &imr->wmask);
reg++, &imr->wmask);
} }
/** /**
...@@ -151,31 +147,27 @@ static int imr_write(struct imr_device *idev, u32 imr_id, ...@@ -151,31 +147,27 @@ static int imr_write(struct imr_device *idev, u32 imr_id,
local_irq_save(flags); local_irq_save(flags);
ret = iosf_mbi_write(QRK_MBI_UNIT_MM, QRK_MBI_MM_WRITE, reg++, ret = iosf_mbi_write(QRK_MBI_UNIT_MM, MBI_REG_WRITE, reg++, imr->addr_lo);
imr->addr_lo);
if (ret) if (ret)
goto failed; goto failed;
ret = iosf_mbi_write(QRK_MBI_UNIT_MM, QRK_MBI_MM_WRITE, ret = iosf_mbi_write(QRK_MBI_UNIT_MM, MBI_REG_WRITE, reg++, imr->addr_hi);
reg++, imr->addr_hi);
if (ret) if (ret)
goto failed; goto failed;
ret = iosf_mbi_write(QRK_MBI_UNIT_MM, QRK_MBI_MM_WRITE, ret = iosf_mbi_write(QRK_MBI_UNIT_MM, MBI_REG_WRITE, reg++, imr->rmask);
reg++, imr->rmask);
if (ret) if (ret)
goto failed; goto failed;
ret = iosf_mbi_write(QRK_MBI_UNIT_MM, QRK_MBI_MM_WRITE, ret = iosf_mbi_write(QRK_MBI_UNIT_MM, MBI_REG_WRITE, reg++, imr->wmask);
reg++, imr->wmask);
if (ret) if (ret)
goto failed; goto failed;
/* Lock bit must be set separately to addr_lo address bits. */ /* Lock bit must be set separately to addr_lo address bits. */
if (lock) { if (lock) {
imr->addr_lo |= IMR_LOCK; imr->addr_lo |= IMR_LOCK;
ret = iosf_mbi_write(QRK_MBI_UNIT_MM, QRK_MBI_MM_WRITE, ret = iosf_mbi_write(QRK_MBI_UNIT_MM, MBI_REG_WRITE,
reg - IMR_NUM_REGS, imr->addr_lo); reg - IMR_NUM_REGS, imr->addr_lo);
if (ret) if (ret)
goto failed; goto failed;
} }
......
...@@ -58,14 +58,25 @@ config ACPI_CCA_REQUIRED ...@@ -58,14 +58,25 @@ config ACPI_CCA_REQUIRED
bool bool
config ACPI_DEBUGGER config ACPI_DEBUGGER
bool "AML debugger interface (EXPERIMENTAL)" bool "AML debugger interface"
select ACPI_DEBUG select ACPI_DEBUG
help help
Enable in-kernel debugging of AML facilities: statistics, internal Enable in-kernel debugging of AML facilities: statistics,
object dump, single step control method execution. internal object dump, single step control method execution.
This is still under development, currently enabling this only This is still under development, currently enabling this only
results in the compilation of the ACPICA debugger files. results in the compilation of the ACPICA debugger files.
if ACPI_DEBUGGER
config ACPI_DEBUGGER_USER
tristate "Userspace debugger accessibility"
depends on DEBUG_FS
help
Export /sys/kernel/debug/acpi/acpidbg for userspace utilities
to access the debugger functionalities.
endif
config ACPI_SLEEP config ACPI_SLEEP
bool bool
depends on SUSPEND || HIBERNATION depends on SUSPEND || HIBERNATION
......
...@@ -8,13 +8,13 @@ ccflags-$(CONFIG_ACPI_DEBUG) += -DACPI_DEBUG_OUTPUT ...@@ -8,13 +8,13 @@ ccflags-$(CONFIG_ACPI_DEBUG) += -DACPI_DEBUG_OUTPUT
# #
# ACPI Boot-Time Table Parsing # ACPI Boot-Time Table Parsing
# #
obj-y += tables.o obj-$(CONFIG_ACPI) += tables.o
obj-$(CONFIG_X86) += blacklist.o obj-$(CONFIG_X86) += blacklist.o
# #
# ACPI Core Subsystem (Interpreter) # ACPI Core Subsystem (Interpreter)
# #
obj-y += acpi.o \ obj-$(CONFIG_ACPI) += acpi.o \
acpica/ acpica/
# All the builtin files are in the "acpi." module_param namespace. # All the builtin files are in the "acpi." module_param namespace.
...@@ -66,10 +66,10 @@ obj-$(CONFIG_ACPI_FAN) += fan.o ...@@ -66,10 +66,10 @@ obj-$(CONFIG_ACPI_FAN) += fan.o
obj-$(CONFIG_ACPI_VIDEO) += video.o obj-$(CONFIG_ACPI_VIDEO) += video.o
obj-$(CONFIG_ACPI_PCI_SLOT) += pci_slot.o obj-$(CONFIG_ACPI_PCI_SLOT) += pci_slot.o
obj-$(CONFIG_ACPI_PROCESSOR) += processor.o obj-$(CONFIG_ACPI_PROCESSOR) += processor.o
obj-y += container.o obj-$(CONFIG_ACPI) += container.o
obj-$(CONFIG_ACPI_THERMAL) += thermal.o obj-$(CONFIG_ACPI_THERMAL) += thermal.o
obj-$(CONFIG_ACPI_NFIT) += nfit.o obj-$(CONFIG_ACPI_NFIT) += nfit.o
obj-y += acpi_memhotplug.o obj-$(CONFIG_ACPI) += acpi_memhotplug.o
obj-$(CONFIG_ACPI_HOTPLUG_IOAPIC) += ioapic.o obj-$(CONFIG_ACPI_HOTPLUG_IOAPIC) += ioapic.o
obj-$(CONFIG_ACPI_BATTERY) += battery.o obj-$(CONFIG_ACPI_BATTERY) += battery.o
obj-$(CONFIG_ACPI_SBS) += sbshc.o obj-$(CONFIG_ACPI_SBS) += sbshc.o
...@@ -79,6 +79,7 @@ obj-$(CONFIG_ACPI_EC_DEBUGFS) += ec_sys.o ...@@ -79,6 +79,7 @@ obj-$(CONFIG_ACPI_EC_DEBUGFS) += ec_sys.o
obj-$(CONFIG_ACPI_CUSTOM_METHOD)+= custom_method.o obj-$(CONFIG_ACPI_CUSTOM_METHOD)+= custom_method.o
obj-$(CONFIG_ACPI_BGRT) += bgrt.o obj-$(CONFIG_ACPI_BGRT) += bgrt.o
obj-$(CONFIG_ACPI_CPPC_LIB) += cppc_acpi.o obj-$(CONFIG_ACPI_CPPC_LIB) += cppc_acpi.o
obj-$(CONFIG_ACPI_DEBUGGER_USER) += acpi_dbg.o
# processor has its own "processor." module_param namespace # processor has its own "processor." module_param namespace
processor-y := processor_driver.o processor-y := processor_driver.o
......
@@ -51,7 +51,7 @@ struct apd_private_data {
 	const struct apd_device_desc *dev_desc;
 };
-#ifdef CONFIG_X86_AMD_PLATFORM_DEVICE
+#if defined(CONFIG_X86_AMD_PLATFORM_DEVICE) || defined(CONFIG_ARM64)
 #define APD_ADDR(desc)	((unsigned long)&desc)
 
 static int acpi_apd_setup(struct apd_private_data *pdata)
@@ -71,6 +71,7 @@ static int acpi_apd_setup(struct apd_private_data *pdata)
 	return 0;
 }
 
+#ifdef CONFIG_X86_AMD_PLATFORM_DEVICE
 static struct apd_device_desc cz_i2c_desc = {
 	.setup = acpi_apd_setup,
 	.fixed_clk_rate = 133000000,
@@ -80,6 +81,14 @@ static struct apd_device_desc cz_uart_desc = {
 	.setup = acpi_apd_setup,
 	.fixed_clk_rate = 48000000,
 };
+#endif
+
+#ifdef CONFIG_ARM64
+static struct apd_device_desc xgene_i2c_desc = {
+	.setup = acpi_apd_setup,
+	.fixed_clk_rate = 100000000,
+};
+#endif
 
 #else
 
@@ -132,9 +141,14 @@ static int acpi_apd_create_device(struct acpi_device *adev,
 
 static const struct acpi_device_id acpi_apd_device_ids[] = {
 	/* Generic apd devices */
+#ifdef CONFIG_X86_AMD_PLATFORM_DEVICE
 	{ "AMD0010", APD_ADDR(cz_i2c_desc) },
 	{ "AMD0020", APD_ADDR(cz_uart_desc) },
 	{ "AMD0030", },
+#endif
+#ifdef CONFIG_ARM64
+	{ "APMC0D0F", APD_ADDR(xgene_i2c_desc) },
+#endif
 	{ }
 };
......
/*
* ACPI AML interfacing support
*
* Copyright (C) 2015, Intel Corporation
* Authors: Lv Zheng <lv.zheng@intel.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
/* #define DEBUG */
#define pr_fmt(fmt) "ACPI : AML: " fmt
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/wait.h>
#include <linux/poll.h>
#include <linux/sched.h>
#include <linux/kthread.h>
#include <linux/proc_fs.h>
#include <linux/debugfs.h>
#include <linux/circ_buf.h>
#include <linux/acpi.h>
#include "internal.h"
#define ACPI_AML_BUF_ALIGN (sizeof (acpi_size))
#define ACPI_AML_BUF_SIZE PAGE_SIZE
#define circ_count(circ) \
(CIRC_CNT((circ)->head, (circ)->tail, ACPI_AML_BUF_SIZE))
#define circ_count_to_end(circ) \
(CIRC_CNT_TO_END((circ)->head, (circ)->tail, ACPI_AML_BUF_SIZE))
#define circ_space(circ) \
(CIRC_SPACE((circ)->head, (circ)->tail, ACPI_AML_BUF_SIZE))
#define circ_space_to_end(circ) \
(CIRC_SPACE_TO_END((circ)->head, (circ)->tail, ACPI_AML_BUF_SIZE))
#define ACPI_AML_OPENED 0x0001
#define ACPI_AML_CLOSED 0x0002
#define ACPI_AML_IN_USER 0x0004 /* user space is writing cmd */
#define ACPI_AML_IN_KERN 0x0008 /* kernel space is reading cmd */
#define ACPI_AML_OUT_USER 0x0010 /* user space is reading log */
#define ACPI_AML_OUT_KERN 0x0020 /* kernel space is writing log */
#define ACPI_AML_USER (ACPI_AML_IN_USER | ACPI_AML_OUT_USER)
#define ACPI_AML_KERN (ACPI_AML_IN_KERN | ACPI_AML_OUT_KERN)
#define ACPI_AML_BUSY (ACPI_AML_USER | ACPI_AML_KERN)
#define ACPI_AML_OPEN (ACPI_AML_OPENED | ACPI_AML_CLOSED)
struct acpi_aml_io {
wait_queue_head_t wait;
unsigned long flags;
unsigned long users;
struct mutex lock;
struct task_struct *thread;
char out_buf[ACPI_AML_BUF_SIZE] __aligned(ACPI_AML_BUF_ALIGN);
struct circ_buf out_crc;
char in_buf[ACPI_AML_BUF_SIZE] __aligned(ACPI_AML_BUF_ALIGN);
struct circ_buf in_crc;
acpi_osd_exec_callback function;
void *context;
unsigned long usages;
};
static struct acpi_aml_io acpi_aml_io;
static bool acpi_aml_initialized;
static struct file *acpi_aml_active_reader;
static struct dentry *acpi_aml_dentry;
static inline bool __acpi_aml_running(void)
{
return acpi_aml_io.thread ? true : false;
}
static inline bool __acpi_aml_access_ok(unsigned long flag)
{
/*
* When the debugger interface is in the opened state (OPENED &&
* !CLOSED), the debugger buffers may be accessed from either user
* space or kernel space.
* In addition, on the kernel side, only the debugger thread
* (matched by thread ID) is allowed to access them.
*/
if (!(acpi_aml_io.flags & ACPI_AML_OPENED) ||
(acpi_aml_io.flags & ACPI_AML_CLOSED) ||
!__acpi_aml_running())
return false;
if ((flag & ACPI_AML_KERN) &&
current != acpi_aml_io.thread)
return false;
return true;
}
static inline bool __acpi_aml_readable(struct circ_buf *circ, unsigned long flag)
{
/*
* Another read is not in progress and there is data in buffer
* available for read.
*/
if (!(acpi_aml_io.flags & flag) && circ_count(circ))
return true;
return false;
}
static inline bool __acpi_aml_writable(struct circ_buf *circ, unsigned long flag)
{
/*
* Another write is not in progress and there is buffer space
* available for write.
*/
if (!(acpi_aml_io.flags & flag) && circ_space(circ))
return true;
return false;
}
static inline bool __acpi_aml_busy(void)
{
if (acpi_aml_io.flags & ACPI_AML_BUSY)
return true;
return false;
}
static inline bool __acpi_aml_opened(void)
{
if (acpi_aml_io.flags & ACPI_AML_OPEN)
return true;
return false;
}
static inline bool __acpi_aml_used(void)
{
return acpi_aml_io.usages ? true : false;
}
static inline bool acpi_aml_running(void)
{
bool ret;
mutex_lock(&acpi_aml_io.lock);
ret = __acpi_aml_running();
mutex_unlock(&acpi_aml_io.lock);
return ret;
}
static bool acpi_aml_busy(void)
{
bool ret;
mutex_lock(&acpi_aml_io.lock);
ret = __acpi_aml_busy();
mutex_unlock(&acpi_aml_io.lock);
return ret;
}
static bool acpi_aml_used(void)
{
bool ret;
/*
* The usage count guards against races between the starting and
* stopping of the debugger thread.
*/
mutex_lock(&acpi_aml_io.lock);
ret = __acpi_aml_used();
mutex_unlock(&acpi_aml_io.lock);
return ret;
}
static bool acpi_aml_kern_readable(void)
{
bool ret;
mutex_lock(&acpi_aml_io.lock);
ret = !__acpi_aml_access_ok(ACPI_AML_IN_KERN) ||
__acpi_aml_readable(&acpi_aml_io.in_crc, ACPI_AML_IN_KERN);
mutex_unlock(&acpi_aml_io.lock);
return ret;
}
static bool acpi_aml_kern_writable(void)
{
bool ret;
mutex_lock(&acpi_aml_io.lock);
ret = !__acpi_aml_access_ok(ACPI_AML_OUT_KERN) ||
__acpi_aml_writable(&acpi_aml_io.out_crc, ACPI_AML_OUT_KERN);
mutex_unlock(&acpi_aml_io.lock);
return ret;
}
static bool acpi_aml_user_readable(void)
{
bool ret;
mutex_lock(&acpi_aml_io.lock);
ret = !__acpi_aml_access_ok(ACPI_AML_OUT_USER) ||
__acpi_aml_readable(&acpi_aml_io.out_crc, ACPI_AML_OUT_USER);
mutex_unlock(&acpi_aml_io.lock);
return ret;
}
static bool acpi_aml_user_writable(void)
{
bool ret;
mutex_lock(&acpi_aml_io.lock);
ret = !__acpi_aml_access_ok(ACPI_AML_IN_USER) ||
__acpi_aml_writable(&acpi_aml_io.in_crc, ACPI_AML_IN_USER);
mutex_unlock(&acpi_aml_io.lock);
return ret;
}
static int acpi_aml_lock_write(struct circ_buf *circ, unsigned long flag)
{
int ret = 0;
mutex_lock(&acpi_aml_io.lock);
if (!__acpi_aml_access_ok(flag)) {
ret = -EFAULT;
goto out;
}
if (!__acpi_aml_writable(circ, flag)) {
ret = -EAGAIN;
goto out;
}
acpi_aml_io.flags |= flag;
out:
mutex_unlock(&acpi_aml_io.lock);
return ret;
}
static int acpi_aml_lock_read(struct circ_buf *circ, unsigned long flag)
{
int ret = 0;
mutex_lock(&acpi_aml_io.lock);
if (!__acpi_aml_access_ok(flag)) {
ret = -EFAULT;
goto out;
}
if (!__acpi_aml_readable(circ, flag)) {
ret = -EAGAIN;
goto out;
}
acpi_aml_io.flags |= flag;
out:
mutex_unlock(&acpi_aml_io.lock);
return ret;
}
static void acpi_aml_unlock_fifo(unsigned long flag, bool wakeup)
{
mutex_lock(&acpi_aml_io.lock);
acpi_aml_io.flags &= ~flag;
if (wakeup)
wake_up_interruptible(&acpi_aml_io.wait);
mutex_unlock(&acpi_aml_io.lock);
}
static int acpi_aml_write_kern(const char *buf, int len)
{
int ret;
struct circ_buf *crc = &acpi_aml_io.out_crc;
int n;
char *p;
ret = acpi_aml_lock_write(crc, ACPI_AML_OUT_KERN);
if (IS_ERR_VALUE(ret))
return ret;
/* sync tail before inserting logs */
smp_mb();
p = &crc->buf[crc->head];
n = min(len, circ_space_to_end(crc));
memcpy(p, buf, n);
/* sync head after inserting logs */
smp_wmb();
crc->head = (crc->head + n) & (ACPI_AML_BUF_SIZE - 1);
acpi_aml_unlock_fifo(ACPI_AML_OUT_KERN, true);
return n;
}
static int acpi_aml_readb_kern(void)
{
int ret;
struct circ_buf *crc = &acpi_aml_io.in_crc;
char *p;
ret = acpi_aml_lock_read(crc, ACPI_AML_IN_KERN);
if (IS_ERR_VALUE(ret))
return ret;
/* sync head before removing cmds */
smp_rmb();
p = &crc->buf[crc->tail];
ret = (int)*p;
/* sync tail after removing cmds */
smp_mb();
crc->tail = (crc->tail + 1) & (ACPI_AML_BUF_SIZE - 1);
acpi_aml_unlock_fifo(ACPI_AML_IN_KERN, true);
return ret;
}
/*
* acpi_aml_write_log() - Capture debugger output
* @msg: the debugger output
*
* This function should be used to implement acpi_os_printf() to filter out
* the debugger output and store the output into the debugger interface
* buffer. Return the size of stored logs or errno.
*/
static ssize_t acpi_aml_write_log(const char *msg)
{
int ret = 0;
int count = 0, size = 0;
if (!acpi_aml_initialized)
return -ENODEV;
if (msg)
count = strlen(msg);
while (count > 0) {
again:
ret = acpi_aml_write_kern(msg + size, count);
if (ret == -EAGAIN) {
ret = wait_event_interruptible(acpi_aml_io.wait,
acpi_aml_kern_writable());
/*
* We need to retry when the condition
* becomes true.
*/
if (ret == 0)
goto again;
break;
}
if (IS_ERR_VALUE(ret))
break;
size += ret;
count -= ret;
}
return size > 0 ? size : ret;
}
/*
* acpi_aml_read_cmd() - Capture debugger input
* @msg: the debugger input
* @size: the size of the debugger input
*
* This function should be used to implement acpi_os_get_line() to capture
* the debugger input commands and store the input commands into the
* debugger interface buffer. Return the size of stored commands or errno.
*/
static ssize_t acpi_aml_read_cmd(char *msg, size_t count)
{
int ret = 0;
int size = 0;
/*
* Initialization is guaranteed by the fact that the debugger
* thread is running, unless a bug has been introduced.
*/
BUG_ON(!acpi_aml_initialized);
while (count > 0) {
again:
/*
* Check each input byte to find the end of the command.
*/
ret = acpi_aml_readb_kern();
if (ret == -EAGAIN) {
ret = wait_event_interruptible(acpi_aml_io.wait,
acpi_aml_kern_readable());
/*
* We need to retry when the condition becomes
* true.
*/
if (ret == 0)
goto again;
}
if (IS_ERR_VALUE(ret))
break;
*(msg + size) = (char)ret;
size++;
count--;
if (ret == '\n') {
/*
* acpi_os_get_line() requires a zero terminated command
* string.
*/
*(msg + size - 1) = '\0';
break;
}
}
return size > 0 ? size : ret;
}
static int acpi_aml_thread(void *unused)
{
acpi_osd_exec_callback function = NULL;
void *context;
mutex_lock(&acpi_aml_io.lock);
if (acpi_aml_io.function) {
acpi_aml_io.usages++;
function = acpi_aml_io.function;
context = acpi_aml_io.context;
}
mutex_unlock(&acpi_aml_io.lock);
if (function)
function(context);
mutex_lock(&acpi_aml_io.lock);
acpi_aml_io.usages--;
if (!__acpi_aml_used()) {
acpi_aml_io.thread = NULL;
wake_up(&acpi_aml_io.wait);
}
mutex_unlock(&acpi_aml_io.lock);
return 0;
}
/*
* acpi_aml_create_thread() - Create AML debugger thread
* @function: the debugger thread callback
* @context: the context to be passed to the debugger thread
*
* This function should be used to implement acpi_os_execute() which is
* used by the ACPICA debugger to create the debugger thread.
*/
static int acpi_aml_create_thread(acpi_osd_exec_callback function, void *context)
{
struct task_struct *t;
mutex_lock(&acpi_aml_io.lock);
acpi_aml_io.function = function;
acpi_aml_io.context = context;
mutex_unlock(&acpi_aml_io.lock);
t = kthread_create(acpi_aml_thread, NULL, "aml");
if (IS_ERR(t)) {
pr_err("Failed to create AML debugger thread.\n");
return PTR_ERR(t);
}
mutex_lock(&acpi_aml_io.lock);
acpi_aml_io.thread = t;
acpi_set_debugger_thread_id((acpi_thread_id)(unsigned long)t);
wake_up_process(t);
mutex_unlock(&acpi_aml_io.lock);
return 0;
}
static int acpi_aml_wait_command_ready(bool single_step,
char *buffer, size_t length)
{
acpi_status status;
if (single_step)
acpi_os_printf("\n%1c ", ACPI_DEBUGGER_EXECUTE_PROMPT);
else
acpi_os_printf("\n%1c ", ACPI_DEBUGGER_COMMAND_PROMPT);
status = acpi_os_get_line(buffer, length, NULL);
if (ACPI_FAILURE(status))
return -EINVAL;
return 0;
}
static int acpi_aml_notify_command_complete(void)
{
return 0;
}
static int acpi_aml_open(struct inode *inode, struct file *file)
{
int ret = 0;
acpi_status status;
mutex_lock(&acpi_aml_io.lock);
/*
* The debugger interface is being closed, no new user is allowed
* during this period.
*/
if (acpi_aml_io.flags & ACPI_AML_CLOSED) {
ret = -EBUSY;
goto err_lock;
}
if ((file->f_flags & O_ACCMODE) != O_WRONLY) {
/*
* Only one reader is allowed to initiate the debugger
* thread.
*/
if (acpi_aml_active_reader) {
ret = -EBUSY;
goto err_lock;
} else {
pr_debug("Opening debugger reader.\n");
acpi_aml_active_reader = file;
}
} else {
/*
* No writer is allowed unless the debugger thread is
* ready.
*/
if (!(acpi_aml_io.flags & ACPI_AML_OPENED)) {
ret = -ENODEV;
goto err_lock;
}
}
if (acpi_aml_active_reader == file) {
pr_debug("Opening debugger interface.\n");
mutex_unlock(&acpi_aml_io.lock);
pr_debug("Initializing debugger thread.\n");
status = acpi_initialize_debugger();
if (ACPI_FAILURE(status)) {
pr_err("Failed to initialize debugger.\n");
ret = -EINVAL;
goto err_exit;
}
pr_debug("Debugger thread initialized.\n");
mutex_lock(&acpi_aml_io.lock);
acpi_aml_io.flags |= ACPI_AML_OPENED;
acpi_aml_io.out_crc.head = acpi_aml_io.out_crc.tail = 0;
acpi_aml_io.in_crc.head = acpi_aml_io.in_crc.tail = 0;
pr_debug("Debugger interface opened.\n");
}
acpi_aml_io.users++;
err_lock:
if (IS_ERR_VALUE(ret)) {
if (acpi_aml_active_reader == file)
acpi_aml_active_reader = NULL;
}
mutex_unlock(&acpi_aml_io.lock);
err_exit:
return ret;
}
static int acpi_aml_release(struct inode *inode, struct file *file)
{
mutex_lock(&acpi_aml_io.lock);
acpi_aml_io.users--;
if (file == acpi_aml_active_reader) {
pr_debug("Closing debugger reader.\n");
acpi_aml_active_reader = NULL;
pr_debug("Closing debugger interface.\n");
acpi_aml_io.flags |= ACPI_AML_CLOSED;
/*
* Wake up all user space/kernel space blocked
* readers/writers.
*/
wake_up_interruptible(&acpi_aml_io.wait);
mutex_unlock(&acpi_aml_io.lock);
/*
* Wait for all user space/kernel space readers/writers to
* stop so that the ACPICA command loop of the debugger thread
* fails all of its command line reads after this point.
*/
wait_event(acpi_aml_io.wait, !acpi_aml_busy());
/*
* Then terminate the debugger thread if it has not already
* terminated.
*/
pr_debug("Terminating debugger thread.\n");
acpi_terminate_debugger();
wait_event(acpi_aml_io.wait, !acpi_aml_used());
pr_debug("Debugger thread terminated.\n");
mutex_lock(&acpi_aml_io.lock);
acpi_aml_io.flags &= ~ACPI_AML_OPENED;
}
if (acpi_aml_io.users == 0) {
pr_debug("Debugger interface closed.\n");
acpi_aml_io.flags &= ~ACPI_AML_CLOSED;
}
mutex_unlock(&acpi_aml_io.lock);
return 0;
}
static int acpi_aml_read_user(char __user *buf, int len)
{
int ret;
struct circ_buf *crc = &acpi_aml_io.out_crc;
int n;
char *p;
ret = acpi_aml_lock_read(crc, ACPI_AML_OUT_USER);
if (IS_ERR_VALUE(ret))
return ret;
/* sync head before removing logs */
smp_rmb();
p = &crc->buf[crc->tail];
n = min(len, circ_count_to_end(crc));
if (copy_to_user(buf, p, n)) {
ret = -EFAULT;
goto out;
}
/* sync tail after removing logs */
smp_mb();
crc->tail = (crc->tail + n) & (ACPI_AML_BUF_SIZE - 1);
ret = n;
out:
acpi_aml_unlock_fifo(ACPI_AML_OUT_USER, !IS_ERR_VALUE(ret));
return ret;
}
static ssize_t acpi_aml_read(struct file *file, char __user *buf,
size_t count, loff_t *ppos)
{
int ret = 0;
int size = 0;
if (!count)
return 0;
if (!access_ok(VERIFY_WRITE, buf, count))
return -EFAULT;
while (count > 0) {
again:
ret = acpi_aml_read_user(buf + size, count);
if (ret == -EAGAIN) {
if (file->f_flags & O_NONBLOCK)
break;
else {
ret = wait_event_interruptible(acpi_aml_io.wait,
acpi_aml_user_readable());
/*
* We need to retry when the condition
* becomes true.
*/
if (ret == 0)
goto again;
}
}
if (IS_ERR_VALUE(ret)) {
if (!acpi_aml_running())
ret = 0;
break;
}
if (ret) {
size += ret;
count -= ret;
*ppos += ret;
break;
}
}
return size > 0 ? size : ret;
}
static int acpi_aml_write_user(const char __user *buf, int len)
{
int ret;
struct circ_buf *crc = &acpi_aml_io.in_crc;
int n;
char *p;
ret = acpi_aml_lock_write(crc, ACPI_AML_IN_USER);
if (IS_ERR_VALUE(ret))
return ret;
/* sync tail before inserting cmds */
smp_mb();
p = &crc->buf[crc->head];
n = min(len, circ_space_to_end(crc));
if (copy_from_user(p, buf, n)) {
ret = -EFAULT;
goto out;
}
/* sync head after inserting cmds */
smp_wmb();
crc->head = (crc->head + n) & (ACPI_AML_BUF_SIZE - 1);
ret = n;
out:
acpi_aml_unlock_fifo(ACPI_AML_IN_USER, !IS_ERR_VALUE(ret));
return ret;
}
static ssize_t acpi_aml_write(struct file *file, const char __user *buf,
size_t count, loff_t *ppos)
{
int ret = 0;
int size = 0;
if (!count)
return 0;
if (!access_ok(VERIFY_READ, buf, count))
return -EFAULT;
while (count > 0) {
again:
ret = acpi_aml_write_user(buf + size, count);
if (ret == -EAGAIN) {
if (file->f_flags & O_NONBLOCK)
break;
else {
ret = wait_event_interruptible(acpi_aml_io.wait,
acpi_aml_user_writable());
/*
* We need to retry when the condition
* becomes true.
*/
if (ret == 0)
goto again;
}
}
if (IS_ERR_VALUE(ret)) {
if (!acpi_aml_running())
ret = 0;
break;
}
if (ret) {
size += ret;
count -= ret;
*ppos += ret;
}
}
return size > 0 ? size : ret;
}
static unsigned int acpi_aml_poll(struct file *file, poll_table *wait)
{
int masks = 0;
poll_wait(file, &acpi_aml_io.wait, wait);
if (acpi_aml_user_readable())
masks |= POLLIN | POLLRDNORM;
if (acpi_aml_user_writable())
masks |= POLLOUT | POLLWRNORM;
return masks;
}
static const struct file_operations acpi_aml_operations = {
.read = acpi_aml_read,
.write = acpi_aml_write,
.poll = acpi_aml_poll,
.open = acpi_aml_open,
.release = acpi_aml_release,
.llseek = generic_file_llseek,
};
static const struct acpi_debugger_ops acpi_aml_debugger = {
.create_thread = acpi_aml_create_thread,
.read_cmd = acpi_aml_read_cmd,
.write_log = acpi_aml_write_log,
.wait_command_ready = acpi_aml_wait_command_ready,
.notify_command_complete = acpi_aml_notify_command_complete,
};
int __init acpi_aml_init(void)
{
int ret = 0;
if (!acpi_debugfs_dir) {
ret = -ENOENT;
goto err_exit;
}
/* Initialize AML IO interface */
mutex_init(&acpi_aml_io.lock);
init_waitqueue_head(&acpi_aml_io.wait);
acpi_aml_io.out_crc.buf = acpi_aml_io.out_buf;
acpi_aml_io.in_crc.buf = acpi_aml_io.in_buf;
acpi_aml_dentry = debugfs_create_file("acpidbg",
S_IFREG | S_IRUGO | S_IWUSR,
acpi_debugfs_dir, NULL,
&acpi_aml_operations);
if (acpi_aml_dentry == NULL) {
ret = -ENODEV;
goto err_exit;
}
ret = acpi_register_debugger(THIS_MODULE, &acpi_aml_debugger);
if (ret)
goto err_fs;
acpi_aml_initialized = true;
err_fs:
if (ret) {
debugfs_remove(acpi_aml_dentry);
acpi_aml_dentry = NULL;
}
err_exit:
return ret;
}
void __exit acpi_aml_exit(void)
{
if (acpi_aml_initialized) {
acpi_unregister_debugger(&acpi_aml_debugger);
if (acpi_aml_dentry) {
debugfs_remove(acpi_aml_dentry);
acpi_aml_dentry = NULL;
}
acpi_aml_initialized = false;
}
}
module_init(acpi_aml_init);
module_exit(acpi_aml_exit);
MODULE_AUTHOR("Lv Zheng");
MODULE_DESCRIPTION("ACPI debugger userspace IO driver");
MODULE_LICENSE("GPL");
@@ -15,6 +15,7 @@
 #include <linux/clk-provider.h>
 #include <linux/err.h>
 #include <linux/io.h>
+#include <linux/mutex.h>
 #include <linux/platform_device.h>
 #include <linux/platform_data/clk-lpss.h>
 #include <linux/pm_runtime.h>
@@ -26,6 +27,10 @@ ACPI_MODULE_NAME("acpi_lpss");
 
 #ifdef CONFIG_X86_INTEL_LPSS
 
+#include <asm/cpu_device_id.h>
+#include <asm/iosf_mbi.h>
+#include <asm/pmc_atom.h>
+
 #define LPSS_ADDR(desc)	((unsigned long)&desc)
 
 #define LPSS_CLK_SIZE	0x04
@@ -71,7 +76,7 @@ struct lpss_device_desc {
 	void (*setup)(struct lpss_private_data *pdata);
 };
 
-static struct lpss_device_desc lpss_dma_desc = {
+static const struct lpss_device_desc lpss_dma_desc = {
 	.flags = LPSS_CLK,
 };
 
@@ -84,6 +89,23 @@ struct lpss_private_data {
 	u32 prv_reg_ctx[LPSS_PRV_REG_COUNT];
 };
 
+/* LPSS run time quirks */
+static unsigned int lpss_quirks;
+
+/*
+ * LPSS_QUIRK_ALWAYS_POWER_ON: override power state for LPSS DMA device.
+ *
+ * The LPSS DMA controller has neither a _PS0 nor a _PS3 method. Moreover,
+ * it can be powered off automatically whenever the last LPSS device goes
+ * down. With no power, any access to the DMA controller will hang the
+ * system. The behaviour is reproduced on some HP laptops based on Intel
+ * BayTrail as well as on the ASUS T100TA transformer.
+ *
+ * This quirk overrides the power state of the entire LPSS island to keep
+ * the DMA powered on whenever we have at least one other device in use.
+ */
+#define LPSS_QUIRK_ALWAYS_POWER_ON	BIT(0)
+
 /* UART Component Parameter Register */
 #define LPSS_UART_CPR		0xF4
 #define LPSS_UART_CPR_AFCE	BIT(4)
@@ -196,13 +218,21 @@ static const struct lpss_device_desc bsw_i2c_dev_desc = {
 	.setup = byt_i2c_setup,
 };
 
-static struct lpss_device_desc bsw_spi_dev_desc = {
+static const struct lpss_device_desc bsw_spi_dev_desc = {
 	.flags = LPSS_CLK | LPSS_CLK_GATE | LPSS_CLK_DIVIDER | LPSS_SAVE_CTX
 			| LPSS_NO_D3_DELAY,
 	.prv_offset = 0x400,
 	.setup = lpss_deassert_reset,
 };
 
+#define ICPU(model)	{ X86_VENDOR_INTEL, 6, model, X86_FEATURE_ANY, }
+
+static const struct x86_cpu_id lpss_cpu_ids[] = {
+	ICPU(0x37),	/* Valleyview, Bay Trail */
+	ICPU(0x4c),	/* Braswell, Cherry Trail */
+	{}
+};
+
 #else
 
 #define LPSS_ADDR(desc) (0UL)
@@ -574,6 +604,17 @@ static void acpi_lpss_restore_ctx(struct device *dev,
 {
 	unsigned int i;
 
+	for (i = 0; i < LPSS_PRV_REG_COUNT; i++) {
+		unsigned long offset = i * sizeof(u32);
+
+		__lpss_reg_write(pdata->prv_reg_ctx[i], pdata, offset);
+		dev_dbg(dev, "restoring 0x%08x to LPSS reg at offset 0x%02lx\n",
+			pdata->prv_reg_ctx[i], offset);
+	}
+}
+
+static void acpi_lpss_d3_to_d0_delay(struct lpss_private_data *pdata)
+{
 	/*
 	 * The following delay is needed or the subsequent write operations may
 	 * fail. The LPSS devices are actually PCI devices and the PCI spec
@@ -586,14 +627,34 @@ static void acpi_lpss_restore_ctx(struct device *dev,
 		delay = 0;
 
 	msleep(delay);
+}
 
-	for (i = 0; i < LPSS_PRV_REG_COUNT; i++) {
-		unsigned long offset = i * sizeof(u32);
+static int acpi_lpss_activate(struct device *dev)
+{
+	struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
+	int ret;
 
-		__lpss_reg_write(pdata->prv_reg_ctx[i], pdata, offset);
-		dev_dbg(dev, "restoring 0x%08x to LPSS reg at offset 0x%02lx\n",
-			pdata->prv_reg_ctx[i], offset);
-	}
+	ret = acpi_dev_runtime_resume(dev);
+	if (ret)
+		return ret;
+
+	acpi_lpss_d3_to_d0_delay(pdata);
+
+	/*
+	 * This is called only at the ->probe() stage, where a device is
+	 * either in a known state defined by the BIOS or, most likely,
+	 * powered off. Because of this we have to deassert the reset line
+	 * to be sure that ->probe() will recognize the device.
+	 */
+	if (pdata->dev_desc->flags & LPSS_SAVE_CTX)
+		lpss_deassert_reset(pdata);
+
+	return 0;
+}
+
+static void acpi_lpss_dismiss(struct device *dev)
+{
+	acpi_dev_runtime_suspend(dev);
 }
 
 #ifdef CONFIG_PM_SLEEP
@@ -621,6 +682,8 @@ static int acpi_lpss_resume_early(struct device *dev)
 	if (ret)
 		return ret;
 
+	acpi_lpss_d3_to_d0_delay(pdata);
+
 	if (pdata->dev_desc->flags & LPSS_SAVE_CTX)
 		acpi_lpss_restore_ctx(dev, pdata);
 
@@ -628,6 +691,89 @@ static int acpi_lpss_resume_early(struct device *dev)
 }
 #endif /* CONFIG_PM_SLEEP */
 
+/* IOSF SB for LPSS island */
+#define LPSS_IOSF_UNIT_LPIOEP		0xA0
+#define LPSS_IOSF_UNIT_LPIO1		0xAB
+#define LPSS_IOSF_UNIT_LPIO2		0xAC
+
+#define LPSS_IOSF_PMCSR			0x84
+#define LPSS_PMCSR_D0			0
+#define LPSS_PMCSR_D3hot		3
+#define LPSS_PMCSR_Dx_MASK		GENMASK(1, 0)
+
+#define LPSS_IOSF_GPIODEF0		0x154
+#define LPSS_GPIODEF0_DMA1_D3		BIT(2)
+#define LPSS_GPIODEF0_DMA2_D3		BIT(3)
+#define LPSS_GPIODEF0_DMA_D3_MASK	GENMASK(3, 2)
+
+static DEFINE_MUTEX(lpss_iosf_mutex);
+
+static void lpss_iosf_enter_d3_state(void)
+{
+	u32 value1 = 0;
+	u32 mask1 = LPSS_GPIODEF0_DMA_D3_MASK;
+	u32 value2 = LPSS_PMCSR_D3hot;
+	u32 mask2 = LPSS_PMCSR_Dx_MASK;
+	/*
+	 * The PMC provides information about the actual status of the
+	 * LPSS devices. Here we read the values related to the LPSS power
+	 * island, i.e. the LPSS devices, excluding both LPSS DMA
+	 * controllers, along with the SCC domain.
+	 */
+	u32 func_dis, d3_sts_0, pmc_status, pmc_mask = 0xfe000ffe;
+	int ret;
+
+	ret = pmc_atom_read(PMC_FUNC_DIS, &func_dis);
+	if (ret)
+		return;
+
+	mutex_lock(&lpss_iosf_mutex);
+
+	ret = pmc_atom_read(PMC_D3_STS_0, &d3_sts_0);
+	if (ret)
+		goto exit;
+
+	/*
+	 * Get the status of the entire LPSS power island on a per-device
+	 * basis. Shut down both LPSS DMA controllers if and only if all
+	 * other devices are already in D3hot.
+	 */
+	pmc_status = (~(d3_sts_0 | func_dis)) & pmc_mask;
+	if (pmc_status)
+		goto exit;
+
+	iosf_mbi_modify(LPSS_IOSF_UNIT_LPIO1, MBI_CFG_WRITE,
+			LPSS_IOSF_PMCSR, value2, mask2);
+	iosf_mbi_modify(LPSS_IOSF_UNIT_LPIO2, MBI_CFG_WRITE,
+			LPSS_IOSF_PMCSR, value2, mask2);
+	iosf_mbi_modify(LPSS_IOSF_UNIT_LPIOEP, MBI_CR_WRITE,
+			LPSS_IOSF_GPIODEF0, value1, mask1);
+exit:
+	mutex_unlock(&lpss_iosf_mutex);
+}
+
+static void lpss_iosf_exit_d3_state(void)
+{
+	u32 value1 = LPSS_GPIODEF0_DMA1_D3 | LPSS_GPIODEF0_DMA2_D3;
+	u32 mask1 = LPSS_GPIODEF0_DMA_D3_MASK;
+	u32 value2 = LPSS_PMCSR_D0;
+	u32 mask2 = LPSS_PMCSR_Dx_MASK;
+
+	mutex_lock(&lpss_iosf_mutex);
+
+	iosf_mbi_modify(LPSS_IOSF_UNIT_LPIOEP, MBI_CR_WRITE,
+			LPSS_IOSF_GPIODEF0, value1, mask1);
+	iosf_mbi_modify(LPSS_IOSF_UNIT_LPIO2, MBI_CFG_WRITE,
+			LPSS_IOSF_PMCSR, value2, mask2);
+	iosf_mbi_modify(LPSS_IOSF_UNIT_LPIO1, MBI_CFG_WRITE,
+			LPSS_IOSF_PMCSR, value2, mask2);
+
+	mutex_unlock(&lpss_iosf_mutex);
+}
+
 static int acpi_lpss_runtime_suspend(struct device *dev)
 {
 	struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
@@ -640,7 +786,17 @@ static int acpi_lpss_runtime_suspend(struct device *dev)
 	if (pdata->dev_desc->flags & LPSS_SAVE_CTX)
 		acpi_lpss_save_ctx(dev, pdata);
 
-	return acpi_dev_runtime_suspend(dev);
+	ret = acpi_dev_runtime_suspend(dev);
+
+	/*
+	 * This call must be last in the sequence, otherwise the PMC will
+	 * return the wrong status for devices that are about to be powered
+	 * off. See lpss_iosf_enter_d3_state() for further information.
+	 */
+	if (lpss_quirks & LPSS_QUIRK_ALWAYS_POWER_ON && iosf_mbi_available())
+		lpss_iosf_enter_d3_state();
+
+	return ret;
 }
 
 static int acpi_lpss_runtime_resume(struct device *dev)
@@ -648,10 +804,19 @@ static int acpi_lpss_runtime_resume(struct device *dev)
 	struct lpss_private_data *pdata = acpi_driver_data(ACPI_COMPANION(dev));
 	int ret;
 
+	/*
+	 * This call is kept first to stay symmetric with
+	 * acpi_lpss_runtime_suspend().
+	 */
+	if (lpss_quirks & LPSS_QUIRK_ALWAYS_POWER_ON && iosf_mbi_available())
+		lpss_iosf_exit_d3_state();
+
 	ret = acpi_dev_runtime_resume(dev);
 	if (ret)
 		return ret;
 
+	acpi_lpss_d3_to_d0_delay(pdata);
+
 	if (pdata->dev_desc->flags & LPSS_SAVE_CTX)
 		acpi_lpss_restore_ctx(dev, pdata);
 
@@ -660,6 +825,10 @@ static int acpi_lpss_runtime_resume(struct device *dev)
 #endif /* CONFIG_PM */
 
 static struct dev_pm_domain acpi_lpss_pm_domain = {
+#ifdef CONFIG_PM
+	.activate = acpi_lpss_activate,
+	.dismiss = acpi_lpss_dismiss,
+#endif
 	.ops = {
 #ifdef CONFIG_PM
 #ifdef CONFIG_PM_SLEEP
@@ -705,8 +874,14 @@ static int acpi_lpss_platform_notify(struct notifier_block *nb,
 	}
 
 	switch (action) {
-	case BUS_NOTIFY_ADD_DEVICE:
+	case BUS_NOTIFY_BIND_DRIVER:
 		pdev->dev.pm_domain = &acpi_lpss_pm_domain;
+		break;
+	case BUS_NOTIFY_DRIVER_NOT_BOUND:
+	case BUS_NOTIFY_UNBOUND_DRIVER:
+		pdev->dev.pm_domain = NULL;
+		break;
+	case BUS_NOTIFY_ADD_DEVICE:
 		if (pdata->dev_desc->flags & LPSS_LTR)
 			return sysfs_create_group(&pdev->dev.kobj,
 						  &lpss_attr_group);
@@ -714,7 +889,6 @@ static int acpi_lpss_platform_notify(struct notifier_block *nb,
 	case BUS_NOTIFY_DEL_DEVICE:
 		if (pdata->dev_desc->flags & LPSS_LTR)
 			sysfs_remove_group(&pdev->dev.kobj, &lpss_attr_group);
-		pdev->dev.pm_domain = NULL;
 		break;
 	default:
 		break;
@@ -754,10 +928,19 @@ static struct acpi_scan_handler lpss_handler = {
 
 void __init acpi_lpss_init(void)
 {
-	if (!lpt_clk_init()) {
-		bus_register_notifier(&platform_bus_type, &acpi_lpss_nb);
-		acpi_scan_add_handler(&lpss_handler);
-	}
+	const struct x86_cpu_id *id;
+	int ret;
+
+	ret = lpt_clk_init();
+	if (ret)
+		return;
+
+	id = x86_match_cpu(lpss_cpu_ids);
+	if (id)
+		lpss_quirks |= LPSS_QUIRK_ALWAYS_POWER_ON;
+
+	bus_register_notifier(&platform_bus_type, &acpi_lpss_nb);
+	acpi_scan_add_handler(&lpss_handler);
 }
 
 #else
......
@@ -367,7 +367,7 @@ static struct acpi_scan_handler acpi_pnp_handler = {
  */
 static int is_cmos_rtc_device(struct acpi_device *adev)
 {
-	struct acpi_device_id ids[] = {
+	static const struct acpi_device_id ids[] = {
 		{ "PNP0B00" },
 		{ "PNP0B01" },
 		{ "PNP0B02" },
......
@@ -77,14 +77,21 @@ module_param(allow_duplicates, bool, 0644);
 static int disable_backlight_sysfs_if = -1;
 module_param(disable_backlight_sysfs_if, int, 0444);
 
+#define REPORT_OUTPUT_KEY_EVENTS		0x01
+#define REPORT_BRIGHTNESS_KEY_EVENTS		0x02
+static int report_key_events = -1;
+module_param(report_key_events, int, 0644);
+MODULE_PARM_DESC(report_key_events,
+	"0: none, 1: output changes, 2: brightness changes, 3: all");
+
 static bool device_id_scheme = false;
 module_param(device_id_scheme, bool, 0444);
 
 static bool only_lcd = false;
 module_param(only_lcd, bool, 0444);
 
-static int register_count;
-static DEFINE_MUTEX(register_count_mutex);
+static DECLARE_COMPLETION(register_done);
+static DEFINE_MUTEX(register_done_mutex);
 static struct mutex video_list_lock;
 static struct list_head video_bus_head;
 static int acpi_video_bus_add(struct acpi_device *device);
@@ -412,6 +419,13 @@ static int video_enable_only_lcd(const struct dmi_system_id *d)
 	return 0;
 }
 
+static int video_set_report_key_events(const struct dmi_system_id *id)
+{
+	if (report_key_events == -1)
+		report_key_events = (uintptr_t)id->driver_data;
+
+	return 0;
+}
+
 static struct dmi_system_id video_dmi_table[] = {
 	/*
 	 * Broken _BQC workaround http://bugzilla.kernel.org/show_bug.cgi?id=13121
@@ -500,6 +514,24 @@ static struct dmi_system_id video_dmi_table[] = {
 		DMI_MATCH(DMI_PRODUCT_NAME, "ESPRIMO Mobile M9410"),
 		},
 	},
+	/*
+	 * Some machines report wrong key events on the acpi-bus, suppress
+	 * key event reporting on these. Note this is only intended to work
+	 * around events which are plain wrong. In some cases we get double
+	 * events, in this case acpi-video is considered the canonical source
+	 * and the events from the other source should be filtered. E.g.
+	 * by calling acpi_video_handles_brightness_key_presses() from the
+	 * vendor acpi/wmi driver or by using /lib/udev/hwdb.d/60-keyboard.hwdb
+	 */
+	{
+	 .callback = video_set_report_key_events,
+	 .driver_data = (void *)((uintptr_t)REPORT_OUTPUT_KEY_EVENTS),
+	 .ident = "Dell Vostro V131",
+	 .matches = {
+		DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+		DMI_MATCH(DMI_PRODUCT_NAME, "Vostro V131"),
+		},
+	},
 	{}
 };
@@ -1480,7 +1512,7 @@ static void acpi_video_bus_notify(struct acpi_device *device, u32 event)
 		/* Something vetoed the keypress. */
 		keycode = 0;
 
-	if (keycode) {
+	if (keycode && (report_key_events & REPORT_OUTPUT_KEY_EVENTS)) {
 		input_report_key(input, keycode, 1);
 		input_sync(input);
 		input_report_key(input, keycode, 0);
@@ -1544,7 +1576,7 @@ static void acpi_video_device_notify(acpi_handle handle, u32 event, void *data)
 	acpi_notifier_call_chain(device, event, 0);
 
-	if (keycode) {
+	if (keycode && (report_key_events & REPORT_BRIGHTNESS_KEY_EVENTS)) {
 		input_report_key(input, keycode, 1);
 		input_sync(input);
 		input_report_key(input, keycode, 0);
@@ -2017,8 +2049,8 @@ int acpi_video_register(void)
 {
 	int ret = 0;
 
-	mutex_lock(&register_count_mutex);
-	if (register_count) {
+	mutex_lock(&register_done_mutex);
+	if (completion_done(&register_done)) {
 		/*
 		 * if the function of acpi_video_register is already called,
 		 * don't register the acpi_vide_bus again and return no error.
@@ -2039,22 +2071,22 @@ int acpi_video_register(void)
 	 * When the acpi_video_bus is loaded successfully, increase
 	 * the counter reference.
 	 */
-	register_count = 1;
+	complete(&register_done);
 
 leave:
-	mutex_unlock(&register_count_mutex);
+	mutex_unlock(&register_done_mutex);
 	return ret;
 }
 EXPORT_SYMBOL(acpi_video_register);
 
 void acpi_video_unregister(void)
 {
-	mutex_lock(&register_count_mutex);
-	if (register_count) {
+	mutex_lock(&register_done_mutex);
+	if (completion_done(&register_done)) {
 		acpi_bus_unregister_driver(&acpi_video_bus);
-		register_count = 0;
+		reinit_completion(&register_done);
 	}
-	mutex_unlock(&register_count_mutex);
+	mutex_unlock(&register_done_mutex);
 }
 EXPORT_SYMBOL(acpi_video_unregister);
@@ -2062,15 +2094,29 @@ void acpi_video_unregister_backlight(void)
 {
 	struct acpi_video_bus *video;
 
-	mutex_lock(&register_count_mutex);
-	if (register_count) {
+	mutex_lock(&register_done_mutex);
+	if (completion_done(&register_done)) {
 		mutex_lock(&video_list_lock);
 		list_for_each_entry(video, &video_bus_head, entry)
 			acpi_video_bus_unregister_backlight(video);
 		mutex_unlock(&video_list_lock);
 	}
-	mutex_unlock(&register_count_mutex);
+	mutex_unlock(&register_done_mutex);
+}
+
+bool acpi_video_handles_brightness_key_presses(void)
+{
+	bool have_video_busses;
+
+	wait_for_completion(&register_done);
+
+	mutex_lock(&video_list_lock);
+	have_video_busses = !list_empty(&video_bus_head);
+	mutex_unlock(&video_list_lock);
+
+	return have_video_busses &&
+	       (report_key_events & REPORT_BRIGHTNESS_KEY_EVENTS);
 }
+EXPORT_SYMBOL(acpi_video_handles_brightness_key_presses);
 
 /*
  * This is kind of nasty. Hardware using Intel chipsets may require
......
@@ -50,6 +50,7 @@ acpi-y += \
 	exdump.o \
 	exfield.o \
 	exfldio.o \
+	exmisc.o \
 	exmutex.o \
 	exnames.o \
 	exoparg1.o \
@@ -57,7 +58,6 @@ acpi-y += \
 	exoparg3.o \
 	exoparg6.o \
 	exprep.o \
-	exmisc.o \
 	exregion.o \
 	exresnte.o \
 	exresolv.o \
@@ -66,6 +66,7 @@ acpi-y += \
 	exstoren.o \
 	exstorob.o \
 	exsystem.o \
+	extrace.o \
 	exutils.o
 
 acpi-y += \
@@ -196,7 +197,6 @@ acpi-$(ACPI_FUTURE_USAGE) += \
 	dbfileio.o \
 	dbtest.o \
 	utcache.o \
-	utfileio.o \
 	utprint.o \
 	uttrack.o \
 	utuuid.o
......
@@ -44,6 +44,8 @@
 #ifndef _ACAPPS
 #define _ACAPPS
 
+#include <stdio.h>
+
 /* Common info for tool signons */
 
 #define ACPICA_NAME                 "Intel ACPI Component Architecture"
@@ -85,11 +87,40 @@
 	acpi_os_printf (description);
 
 #define ACPI_OPTION(name, description) \
-	acpi_os_printf ("  %-18s%s\n", name, description);
+	acpi_os_printf ("  %-20s%s\n", name, description);
+
+/* Check for unexpected exceptions */
+
+#define ACPI_CHECK_STATUS(name, status, expected) \
+	if (status != expected) \
+	{ \
+		acpi_os_printf ("Unexpected %s from %s (%s-%d)\n", \
+			acpi_format_exception (status), #name, _acpi_module_name, __LINE__); \
+	}
+
+/* Check for unexpected non-AE_OK errors */
+
+#define ACPI_CHECK_OK(name, status)   ACPI_CHECK_STATUS (name, status, AE_OK);
 
 #define FILE_SUFFIX_DISASSEMBLY     "dsl"
 #define FILE_SUFFIX_BINARY_TABLE    ".dat"	/* Needs the dot */
 
+/* acfileio */
+
+acpi_status
+ac_get_all_tables_from_file(char *filename,
+			    u8 get_only_aml_tables,
+			    struct acpi_new_table_desc **return_list_head);
+
+u8 ac_is_file_binary(FILE * file);
+
+acpi_status ac_validate_table_header(FILE * file, long table_offset);
+
+/* Values for get_only_aml_tables */
+
+#define ACPI_GET_ONLY_AML_TABLES    TRUE
+#define ACPI_GET_ALL_TABLES         FALSE
+
 /*
  * getopt
  */
@@ -107,30 +138,6 @@ extern char *acpi_gbl_optarg;
  */
 u32 cm_get_file_size(ACPI_FILE file);
 
-#ifndef ACPI_DUMP_APP
-/*
- * adisasm
- */
-acpi_status
-ad_aml_disassemble(u8 out_to_file,
-		   char *filename, char *prefix, char **out_filename);
-
-void ad_print_statistics(void);
-
-acpi_status ad_find_dsdt(u8 **dsdt_ptr, u32 *dsdt_length);
-
-void ad_dump_tables(void);
-
-acpi_status ad_get_local_tables(void);
-
-acpi_status
-ad_parse_table(struct acpi_table_header *table,
-	       acpi_owner_id * owner_id, u8 load_table, u8 external);
-
-acpi_status ad_display_tables(char *filename, struct acpi_table_header *table);
-
-acpi_status ad_display_statistics(void);
-
 /*
  * adwalk
  */
@@ -168,6 +175,5 @@ char *ad_generate_filename(char *prefix, char *table_id);
 void
 ad_write_table(struct acpi_table_header *table,
 	       u32 length, char *table_name, char *oem_table_id);
-#endif
 
 #endif				/* _ACAPPS */
@@ -80,9 +80,15 @@ struct acpi_db_execute_walk {
 /*
  * dbxface - external debugger interfaces
  */
-acpi_status
-acpi_db_single_step(struct acpi_walk_state *walk_state,
-		    union acpi_parse_object *op, u32 op_type);
+ACPI_DBR_DEPENDENT_RETURN_OK(acpi_status
+			     acpi_db_single_step(struct acpi_walk_state
+						 *walk_state,
+						 union acpi_parse_object *op,
+						 u32 op_type))
+ACPI_DBR_DEPENDENT_RETURN_VOID(void
+			       acpi_db_signal_break_point(struct
+							  acpi_walk_state
+							  *walk_state))
 
 /*
  * dbcmds - debug commands and output routines
@@ -182,11 +188,15 @@ void acpi_db_display_method_info(union acpi_parse_object *op);
 
 void acpi_db_decode_and_display_object(char *target, char *output_type);
 
-void
-acpi_db_display_result_object(union acpi_operand_object *obj_desc,
-			      struct acpi_walk_state *walk_state);
+ACPI_DBR_DEPENDENT_RETURN_VOID(void
+			       acpi_db_display_result_object(union
+							     acpi_operand_object
+							     *obj_desc,
+							     struct
+							     acpi_walk_state
+							     *walk_state))
 
 acpi_status acpi_db_display_all_methods(char *display_count_arg);
 
 void acpi_db_display_arguments(void);
@@ -198,9 +208,13 @@ void acpi_db_display_calling_tree(void);
 
 void acpi_db_display_object_type(char *object_arg);
 
-void
-acpi_db_display_argument_object(union acpi_operand_object *obj_desc,
-				struct acpi_walk_state *walk_state);
+ACPI_DBR_DEPENDENT_RETURN_VOID(void
+			       acpi_db_display_argument_object(union
+							       acpi_operand_object
+							       *obj_desc,
+							       struct
+							       acpi_walk_state
+							       *walk_state))
 
 /*
  * dbexec - debugger control method execution
@@ -231,10 +245,7 @@ void acpi_db_open_debug_file(char *name);
 
 acpi_status acpi_db_load_acpi_table(char *filename);
 
-acpi_status
-acpi_db_get_table_from_file(char *filename,
-			    struct acpi_table_header **table,
-			    u8 must_be_aml_table);
+acpi_status acpi_db_load_tables(struct acpi_new_table_desc *list_head);
 
 /*
  * dbhistry - debugger HISTORY command
@@ -257,7 +268,7 @@ acpi_db_command_dispatch(char *input_buffer,
 
 void ACPI_SYSTEM_XFACE acpi_db_execute_thread(void *context);
 
-acpi_status acpi_db_user_commands(char prompt, union acpi_parse_object *op);
+acpi_status acpi_db_user_commands(void);
 
 char *acpi_db_get_next_token(char *string,
 			     char **next, acpi_object_type * return_type);
......
@@ -161,6 +161,11 @@ acpi_ev_delete_gpe_handlers(struct acpi_gpe_xrupt_info *gpe_xrupt_info,
 /*
  * evhandler - Address space handling
  */
+union acpi_operand_object *acpi_ev_find_region_handler(acpi_adr_space_type
+						       space_id,
+						       union acpi_operand_object
+						       *handler_obj);
+
 u8
 acpi_ev_has_default_handler(struct acpi_namespace_node *node,
 			    acpi_adr_space_type space_id);
@@ -193,9 +198,11 @@ void
 acpi_ev_detach_region(union acpi_operand_object *region_obj,
 		      u8 acpi_ns_is_locked);
 
-acpi_status
+void acpi_ev_associate_reg_method(union acpi_operand_object *region_obj);
+
+void
 acpi_ev_execute_reg_methods(struct acpi_namespace_node *node,
-			    acpi_adr_space_type space_id);
+			    acpi_adr_space_type space_id, u32 function);
 
 acpi_status
 acpi_ev_execute_reg_method(union acpi_operand_object *region_obj, u32 function);
......
@@ -145,6 +145,7 @@ ACPI_GLOBAL(acpi_cache_t *, acpi_gbl_operand_cache);
 ACPI_INIT_GLOBAL(u32, acpi_gbl_startup_flags, 0);
 ACPI_INIT_GLOBAL(u8, acpi_gbl_shutdown, TRUE);
+ACPI_INIT_GLOBAL(u8, acpi_gbl_early_initialization, TRUE);
 
 /* Global handlers */
 
@@ -164,7 +165,7 @@ ACPI_GLOBAL(u8, acpi_gbl_next_owner_id_offset);
 /* Initialization sequencing */
 
-ACPI_GLOBAL(u8, acpi_gbl_reg_methods_executed);
+ACPI_INIT_GLOBAL(u8, acpi_gbl_reg_methods_enabled, FALSE);
 
 /* Misc */
 
@@ -326,7 +327,6 @@ ACPI_GLOBAL(struct acpi_external_file *, acpi_gbl_external_file_list);
 #ifdef ACPI_DEBUGGER
 
 ACPI_INIT_GLOBAL(u8, acpi_gbl_abort_method, FALSE);
-ACPI_INIT_GLOBAL(u8, acpi_gbl_method_executing, FALSE);
 ACPI_INIT_GLOBAL(acpi_thread_id, acpi_gbl_db_thread_id, ACPI_INVALID_THREAD_ID);
 
 ACPI_GLOBAL(u8, acpi_gbl_db_opt_no_ini_methods);
@@ -345,7 +345,6 @@ ACPI_GLOBAL(acpi_object_type, acpi_gbl_db_arg_types[ACPI_DEBUGGER_MAX_ARGS]);
 
 /* These buffers should all be the same size */
 
-ACPI_GLOBAL(char, acpi_gbl_db_line_buf[ACPI_DB_LINE_BUFFER_SIZE]);
 ACPI_GLOBAL(char, acpi_gbl_db_parsed_buf[ACPI_DB_LINE_BUFFER_SIZE]);
 ACPI_GLOBAL(char, acpi_gbl_db_scope_buf[ACPI_DB_LINE_BUFFER_SIZE]);
 ACPI_GLOBAL(char, acpi_gbl_db_debug_filename[ACPI_DB_LINE_BUFFER_SIZE]);
@@ -360,9 +359,6 @@ ACPI_GLOBAL(u16, acpi_gbl_node_type_count_misc);
 ACPI_GLOBAL(u32, acpi_gbl_num_nodes);
 ACPI_GLOBAL(u32, acpi_gbl_num_objects);
 
-ACPI_GLOBAL(acpi_mutex, acpi_gbl_db_command_ready);
-ACPI_GLOBAL(acpi_mutex, acpi_gbl_db_command_complete);
-
 #endif				/* ACPI_DEBUGGER */
 
 /*****************************************************************************
......
@@ -219,6 +219,13 @@ struct acpi_table_list {
 #define ACPI_ROOT_ORIGIN_ALLOCATED      (1)
 #define ACPI_ROOT_ALLOW_RESIZE          (2)
 
+/* List to manage incoming ACPI tables */
+
+struct acpi_new_table_desc {
+	struct acpi_table_header *table;
+	struct acpi_new_table_desc *next;
+};
+
 /* Predefined table indexes */
 
 #define ACPI_INVALID_TABLE_INDEX        (0xFFFFFFFF)
@@ -388,7 +395,8 @@ union acpi_predefined_info {
 /* Return object auto-repair info */
 
-typedef acpi_status(*acpi_object_converter) (union acpi_operand_object
+typedef acpi_status(*acpi_object_converter) (struct acpi_namespace_node * scope,
+					     union acpi_operand_object
 					     *original_object,
 					     union acpi_operand_object
 					     **converted_object);
@@ -420,6 +428,7 @@ struct acpi_simple_repair_info {
 struct acpi_reg_walk_info {
 	acpi_adr_space_type space_id;
+	u32 function;
 	u32 reg_run_count;
 };
 
@@ -861,6 +870,7 @@ struct acpi_parse_state {
 #define ACPI_PARSEOP_CLOSING_PAREN      0x10
 #define ACPI_PARSEOP_COMPOUND           0x20
 #define ACPI_PARSEOP_ASSIGNMENT         0x40
+#define ACPI_PARSEOP_ELSEIF             0x80
 
 /*****************************************************************************
 *
......
@@ -400,17 +400,6 @@
 #define ACPI_HW_OPTIONAL_FUNCTION(addr) NULL
 #endif
 
-/*
- * Some code only gets executed when the debugger is built in.
- * Note that this is entirely independent of whether the
- * DEBUG_PRINT stuff (set by ACPI_DEBUG_OUTPUT) is on, or not.
- */
-#ifdef ACPI_DEBUGGER
-#define ACPI_DEBUGGER_EXEC(a)           a
-#else
-#define ACPI_DEBUGGER_EXEC(a)
-#endif
-
 /*
  * Macros used for ACPICA utilities only
  */
......
@@ -77,6 +77,7 @@
 /* Object is not a package element */
 
 #define ACPI_NOT_PACKAGE_ELEMENT        ACPI_UINT32_MAX
+#define ACPI_ALL_PACKAGE_ELEMENTS       (ACPI_UINT32_MAX-1)
 
 /* Always emit warning message, not dependent on node flags */
 
@@ -183,13 +184,20 @@ acpi_ns_convert_to_buffer(union acpi_operand_object *original_object,
 			  union acpi_operand_object **return_object);
 
 acpi_status
-acpi_ns_convert_to_unicode(union acpi_operand_object *original_object,
+acpi_ns_convert_to_unicode(struct acpi_namespace_node *scope,
+			   union acpi_operand_object *original_object,
 			   union acpi_operand_object **return_object);
 
 acpi_status
-acpi_ns_convert_to_resource(union acpi_operand_object *original_object,
+acpi_ns_convert_to_resource(struct acpi_namespace_node *scope,
+			    union acpi_operand_object *original_object,
 			    union acpi_operand_object **return_object);
 
+acpi_status
+acpi_ns_convert_to_reference(struct acpi_namespace_node *scope,
+			     union acpi_operand_object *original_object,
+			     union acpi_operand_object **return_object);
+
 /*
  * nsdump - Namespace dump/print utilities
 */
......
@@ -93,9 +93,10 @@
 #define AOPOBJ_AML_CONSTANT         0x01	/* Integer is an AML constant */
 #define AOPOBJ_STATIC_POINTER       0x02	/* Data is part of an ACPI table, don't delete */
 #define AOPOBJ_DATA_VALID           0x04	/* Object is initialized and data is valid */
-#define AOPOBJ_OBJECT_INITIALIZED   0x08	/* Region is initialized, _REG was run */
-#define AOPOBJ_SETUP_COMPLETE       0x10	/* Region setup is complete */
-#define AOPOBJ_INVALID              0x20	/* Host OS won't allow a Region address */
+#define AOPOBJ_OBJECT_INITIALIZED   0x08	/* Region is initialized */
+#define AOPOBJ_REG_CONNECTED        0x10	/* _REG was run */
+#define AOPOBJ_SETUP_COMPLETE       0x20	/* Region setup is complete */
+#define AOPOBJ_INVALID              0x40	/* Host OS won't allow a Region address */
 
 /******************************************************************************
 *
......
(This diff is collapsed.)
@@ -92,7 +92,13 @@ acpi_ps_get_next_simple_arg(struct acpi_parse_state *parser_state,
 
 acpi_status
 acpi_ps_get_next_namepath(struct acpi_walk_state *walk_state,
 			  struct acpi_parse_state *parser_state,
-			  union acpi_parse_object *arg, u8 method_call);
+			  union acpi_parse_object *arg,
+			  u8 possible_method_call);
+
+/* Values for u8 above */
+
+#define ACPI_NOT_METHOD_CALL            FALSE
+#define ACPI_POSSIBLE_METHOD_CALL       TRUE
 
 acpi_status
 acpi_ps_get_next_arg(struct acpi_walk_state *walk_state,
......
(This diff is collapsed.)
@@ -120,7 +120,7 @@
 #define AML_CREATE_WORD_FIELD_OP    (u16) 0x8b
 #define AML_CREATE_BYTE_FIELD_OP    (u16) 0x8c
 #define AML_CREATE_BIT_FIELD_OP     (u16) 0x8d
-#define AML_TYPE_OP                 (u16) 0x8e
+#define AML_OBJECT_TYPE_OP          (u16) 0x8e
 #define AML_CREATE_QWORD_FIELD_OP   (u16) 0x8f	/* ACPI 2.0 */
 #define AML_LAND_OP                 (u16) 0x90
 #define AML_LOR_OP                  (u16) 0x91
@@ -238,7 +238,8 @@
 #define ARGP_TERMLIST               0x0F
 #define ARGP_WORDDATA               0x10
 #define ARGP_QWORDDATA              0x11
-#define ARGP_SIMPLENAME             0x12
+#define ARGP_SIMPLENAME             0x12	/* name_string | local_term | arg_term */
+#define ARGP_NAME_OR_REF            0x13	/* For object_type only */
 
 /*
  * Resolved argument types for the AML Interpreter
......
(The remaining file diffs are collapsed.)